[ [ "The overtone level spacing of a black hole quasinormal frequencies: a\n fingerprint of a local $SL(2,\\mathbb{R})$ symmetry" ], [ "Abstract The imaginary part of the quasinormal frequencies spectrum for a static and spherically symmetric black hole is analytically known to be equally spaced, both for the highly damped and the weakly damped families of quasinormal modes.", "Some interesting attempts have been made in the last twenty years to understand in simple ways this level spacing for the only case of highly damped quasinormal frequencies.", "Here, we show that the overtone level spacing, for both the highly damped and weakly damped families of quasinormal modes, can simply be understood as a fingerprint of a hidden local $SL(2,\\mathbb{R})$ symmetry, near different regions of the black hole spacetime, i.e.", "the near-horizon and the near-photon sphere regions." ], [ "Introduction", "In the context of small perturbations of a black hole (BH) spacetime, the quasinormal modes (QNM), which are damped modes of perturbation which propagate towards spatial infinity, are playing a fundamental role both in our experimental and theoretical understanding of gravity and of BH in particular.", "The QNM are related to complex quasinormal frequencies (QNF) at which they occur.", "The real part of the QNF corresponds to the frequency of oscillation and the imaginary part to the damping rate of the QNM.", "Analytically, for example in the case of static and spherically symmetric (asymptotically flat) BH perturbed by a massless scalar field, two families of QNM are known the highly damped QNM with the associated QNF (at first order in $n$ ) $\\omega _n = \\frac{\\kappa }{2\\pi }\\ln \\left[1+2\\cos \\left(\\frac{\\pi (qD-2)}{2}\\right)\\right] - i\\kappa \\left(n+\\frac{1}{2}\\right)$ where $\\kappa $ is the surface gravity of the BH, $D=d-2$ for $d$ -dimensional spacetimes and $q$ is a purely real power law exponent that characterizes the BH metric near its singularity [1], [2], and the weakly damped QNM with the associated QNF (at first order in $\\ell $ and $n$ ) $\\omega _{\\ell ,n}=\\Omega _c\\left(\\ell +\\frac{1}{2}\\right)-i|\\Lambda _c|\\left(n+\\frac{1}{2}\\right)$ where $\\Omega _c$ is the angular velocity of massless particles on unstable circular orbits around the BH, and $|\\Lambda _c|$ is the Lyapunov exponent associated with these unstable circular orbits [3], [4].", "The imaginary part of the QNF, in both cases, is interesting.", "It contains the overtone number $n$ and is equally spaced.", "Some remarks about its amplitude will also be given in this paper.", "Then, it would be natural to ask whether the seemingly simple behavior of the imaginary part of the QNF in both cases may, or may not, hide a “universal” feature of BH spacetimes.", "For static and spherically symmetric spacetimes, such an imaginary part for the BH QNF, in the only case of the highly damped QNM, has been discussed in [5], where it has been obtained as poles of the scattering amplitude in the Born approximation, and in [6] where it has been extracted via a Coulomb-like phase shift.", "In this paper, we would like to show that such a behavior for the imaginary part of a BH QNF is in fact more “universal” and can simply be understood, in both the highly and weakly damped regimes, as a fingerprint of a hidden local $SL(2,\\mathbb {R})$ symmetry.", "We will focus on static and spherically symmetric BH endowed with a photon sphere (also known as “lightrings”, or “photon rings”).", "The strategy is made simple as 
long as one can show that certain particular regions of a BH spacetime admit a Rindler metric as an approximation.", "Indeed, on the one hand, the resonant scattering of a massless scalar field in a Rindler spacetime is shown to be deeply linked to the resonant scattering problem of a scalar field by an inverted harmonic oscillator (IHO) potential of the form $V(x)=-(\\alpha ^2/4) x^2$ , where $\\alpha $ is the potential curvature, or potential strength.", "On the other hand, the resonant scattering problem by an IHO potential can be algebraically solved using the $SL(2,\\mathbb {R})$ algebra for which the representations and the eigenvalues spectrum are discrete and indexed by a non-negative integer.", "The general claim of this paper follows: in any BH geometry, if there exist some regions where the BH metric can be locally approximated as a Rindler one, in some set of coordinates, then the underlying algebra is $SL(2,\\mathbb {R})$ , and the imaginary part of the BH QNF will be indexed by a non-negative integer, i.e.", "the overtone number, and it will be equally spaced.", "The amplitude of the overtone level spacing is shown to be related to the IHO potential curvature.", "The paper is organized as follows.", "In section , we give an $SL(2,\\mathbb {R})$ approach to the resonant scattering problem of a scalar field by an IHO potential.", "This allows us to highlight that the eigenvalues spectrum of the corresponding Hamiltonian is discrete, purely imaginary, and indexed by a non-negative integer.", "In section , we focus first on the motions of massless particles in Rindler spacetime, and then on the resonant scattering problem of a massless scalar field in such a spacetime.", "We show that massless dynamics in Rindler spacetime is deeply linked to the problem treated in section , and we apply the results we have obtained to the Rindler case.", "Finally, in section , we apply the $SL(2,\\mathbb {R})$ algebra to the case of static and spherically symmetric BH endowed with a photon sphere, and show that the imaginary part of the BH QNF in both the highly and weakly damped regimes is equally spaced.", "In sections and we discuss the amplitude of the overtone level spacing and give some conclusions about the results obtained in this paper." 
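As a purely illustrative check of the two spacings quoted above, the short Python sketch below evaluates the imaginary parts $-\\kappa (n+1/2)$ and $-|\\Lambda _c|(n+1/2)$ for the first few overtones of a Schwarzschild BH in units $G=c=1$, using the standard Schwarzschild values $\\kappa =1/(4M)$ and $\\Omega _c=|\\Lambda _c|=1/(3\\sqrt{3}M)$; the choice $M=1$ is ours and arbitrary.

    import math

    # Schwarzschild illustration in units G = c = 1 (M = 1 is an arbitrary choice)
    M = 1.0
    kappa = 1.0 / (4.0 * M)                      # surface gravity
    Lambda_c = 1.0 / (3.0 * math.sqrt(3.0) * M)  # Lyapunov exponent of the photon sphere

    for n in range(4):
        im_highly = -kappa * (n + 0.5)       # imaginary part, highly damped family
        im_weakly = -Lambda_c * (n + 0.5)    # imaginary part, weakly damped family
        print(n, round(im_highly, 4), round(im_weakly, 4))
    # consecutive overtones are separated by exactly kappa and |Lambda_c|, respectively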
], [ "An $SL(2,\\mathbb {R})$ approach to the resonant scattering by an inverted harmonic oscillator potential", "With a particular choice of “position” operator $X$ and conjugate “linear momentum” operator $P$ , acting both on a same Hilbert space, the Hamiltonian of a particle in an IHO potential can always be written as $H=P^2 - \\frac{\\alpha ^2}{4}X^2,$ where $\\alpha $ is the curvature of the IHO potential, and the operators, which satisfy $[X,P]=i\\mathbb {I}$ , are defined, as usual, as $P=-i\\frac{d}{dx}$ and $X=x$ , in an “$x$ -representation”.", "In a system of units such that $h=c=1$ , it should be noted that if one wants $H$ to have dimension $L^{-1}$ , $P$ and $X$ have to be defined here with dimensions $L^{-1/2}$ and $L^{1/2}$ respectively.", "The IHO potential curvature $\\alpha $ has dimension $L^{-1}$ .", "From the point of view of a scalar field theory, the Hamiltonian gives the following equation for its eigenstates $\\psi $ in the $x$ -representation $H \\psi (x)=\\left(-\\frac{d^2}{d x^2}-\\frac{\\alpha ^2}{4}x^2\\right)\\psi (x)=h \\psi (x).$ Here $h$ is the Hamiltonian eigenvalue associated with the eigenstate $\\psi $ .", "This equation is a Weber differential equation whose solutions are related to parabolic cylinder functions [7].", "They describe the scattering states of the scalar field by an IHO potential.", "Amongst the scattering states, the system presents a discrete set of eigenstates, i.e.", "the resonant modes, or QNM, which can be obtained, for example, by looking at the poles of the corresponding $S$ -matrix in the complex energy plane.", "An expression of $H$ in the form $(\\ref {Hamiltonian})$ (or $(\\ref {ScalarFieldEq})$ ) plays an “universal” role for the resonant scattering problem of a scalar field by the top of any potential barrier.", "Indeed, in such case, the QNM are associated with energies close to the top of the potential.", "For such energies, one can always approximate any potential barrier close to its maximum by a second order Taylor series expansion which locally gives an inversed parabola.", "With the right choice of $X$ and $P$ operators, the corresponding Hamiltonian $H$ can then always be reduced to the form $(\\ref {Hamiltonian})$ (or $(\\ref {ScalarFieldEq})$ ).", "This form of $H$ is not only “universal” for a problem of resonant scattering by a potential barrier, but it also permits to reveal a hidden (dynamical) conformal symmetry near the top of the potential through an $SO(2,1) \\sim SL(2,\\mathbb {R})$ algebra which allows to obtain the associated set of QNM from the algebra representations and, especially for the aim of this paper, to obtain the overtone level spacing in the imaginary part of the quasinormal frequencies.", "We briefly recall below the outline of one of the possible proofs that mainly lies on the introduction of “creation and annihilation operators” analogues in the factorization of $H$ , which allows to highlight the underlying $SL(2,\\mathbb {R})$ algebra (see [8] for more details).", "We have seen that the time-independent, one-dimensional Hamiltonian of a quantum particle scattered by an IHO potential reads in the $x$ -representation $H=-\\frac{d^2}{dx^2} - \\frac{\\alpha ^2}{4}x^2.$ As one usually proceeds in the context of the well-known harmonic oscillator problem, let us begin by factorizing $H$ .", "From the operators $P=-i\\frac{d}{dx}$ and $X=x$ (mentioned in the previous section), one can introduce “creation and annihilation operators” analogues $U_{\\pm }=\\pm \\frac{1}{\\sqrt{\\alpha 
}}P+\\frac{\\sqrt{\\alpha }}{2}X=\\mp i\\frac{1}{\\sqrt{\\alpha }}\\frac{d}{dx}+\\frac{\\sqrt{\\alpha }}{2}x,$ with $(U_{\\pm })^{\\dagger } = U_{\\pm } \\ne U_{\\mp }$ (with respect to the usual scalar product) and $[U_{-},U_{+}]=i\\mathbb {I}$ .", "Let us note that in our system of units, $U_{\\pm }$ are dimensionless operators.", "The Hamiltonian $H$ can then be written in terms of $U_{\\pm }$ as $H=P^2-\\frac{\\alpha ^2}{4}X^2=-\\frac{\\alpha }{2}\\left(U_{+}U_{-}+U_{-}U_{+}\\right).$ It is also possible to introduce the dilation operator $D$ and the operator $S$ , which is associated with special conformal transformations generator $K=X^2$ , as follows $&D&=\\frac{1}{2}\\left(PX+XP\\right)=\\frac{1}{2}\\left(U_{+}^2-U_{-}^2\\right)=-i\\left(x\\frac{d}{dx} + \\frac{1}{2}\\right),\\nonumber \\\\&S&=P^2+\\frac{\\alpha ^2}{4}X^2=\\frac{\\alpha }{2}\\left(U_{+}^2+U_{-}^2\\right)=-\\frac{d^2}{dx^2}+\\frac{\\alpha }{4}x^2.$ Up to dimensional factors, it is important to note that the expressions of $H$ and $D$ are switched whether one is looking at their expressions in terms of $X$ and $P$ operators, or in terms of $U_{\\pm }$ operators.", "In other words, for what concerns the IHO problem, $H$ can be seen as a dilation process or as a scattering process by an IHO potential, depending on the choice of “coordinates” ($x$ or $u_{\\pm }$ ) and the associated conjugate momenta ($p$ or $u_{\\mp }$ ).", "It is interesting to note that the operator $S$ remains unchanged in both “coordinates” systems, as a sum of squared operators, and that it would play the role of the Hamiltonian in the usual harmonic oscillator problem.", "It becomes then quite trivial to write the expressions of $H$ , $D$ and $S$ as differential operators in “$u_{\\pm }$ ”-representations, from their expressions in the $x$ -representation (see [9] and references therein, for a nice exhaustive analysis of the physics of the IHO).", "For example, $H$ in the $u_{\\pm }$ -representation reads simply $H = \\mp i \\alpha \\left(u_{\\pm }\\frac{d}{du_{\\pm }}+\\frac{1}{2}\\right).$ In order to simplify the underlying algebra, one can finally define the following three (dimensionless) operators $J_1&=&\\frac{1}{2}D=-\\frac{i}{2}\\left(x\\frac{d}{dx}+\\frac{1}{2}\\right),\\\\J_2&=&-\\frac{i}{2\\alpha }S=\\frac{i}{2\\alpha }\\left(\\frac{d^2}{dx^2}-\\frac{\\alpha ^2}{4}x^2\\right),\\\\J_3&=&\\frac{i}{2\\alpha }H=-\\frac{i}{2\\alpha }\\left(\\frac{d^2}{dx^2}+\\frac{\\alpha ^2}{4}x^2\\right).$ Those operators satisfy the commutation relations of an $SO(2,1)$ algebra $[J_1,J_2]=-iJ_3; \\quad [J_2,J_3]=iJ_1; \\quad [J_3,J_1]=iJ_2.$ Finally, from $J_1$ and $J_2$ , one can also introduce “ladder operators”.", "For example $J_{\\pm }=\\pm iJ_1-J_2=\\frac{i}{2}U_{\\pm }^2,$ which, together with $J_3$ , satisfy an $SL(2,\\mathbb {R})$ algebra $[J_{+},J_{-}]=-2J_3; \\quad [J_3,J_{\\pm }]=\\pm J_{\\pm }.$ It should be noted first that one could have obviously constructed $J_3$ and $J_{\\pm }$ , which satisfy commutation relations $(\\ref {LadderOp_CommutationRelation})$ , more directly from $H$ and $U_{\\pm }^2$ , i.e.", "without any references to the operators $D$ and $S$ .", "Finally, let us also note that that the ladder operators $J_{\\pm }$ are not self-adjoint conjugate to each other.", "In our example here, one has instead $\\left(J_\\pm \\right)^\\dagger =-J_\\pm $ , i.e.", "anti-self-adjoint operators.", "Once the algebra is found, the computation of the eigenstates and eigenvalues of $H$ follows.", "Indeed, as usually done in the harmonic 
oscillator case, one can begin by looking for the ground states of the ladder operators $J_\\pm $ .", "Then, by an application of $(J_\\pm )^n$ to the associated ground state (for every non-negative integer $n$ ), it is an easy task to construct the set of eigenstates which belong to the associated rigged Hilbert space, which allows, in a few words, states to be defined as distributions (see [10], [11] for more details about the rigged Hilbert space formalism).", "The interested reader may find a detailed construction of the set of eigenstates and eigenvalues of $(J_\\pm )^n$ and $H$ in [8], in the context of a hidden $SL(2,\\mathbb {R})$ symmetry on a BH photon sphere.", "Finally, one shows that the associated spectrum of $H$ is discrete, purely imaginary, and indexed by an integer.", "The eigenvalues $h_n^{(\\pm )}$ of $H$ , for every non-negative integer $n$ , can finally be written as $h_n^{(\\pm )} = \\mp i \\alpha \\left(n+\\frac{1}{2}\\right).$", "These discrete and purely imaginary eigenvalues are shown to be related to the frequencies of the resonant modes in the resonant scattering problem of a scalar field by an IHO potential [9], [8].", "The non-negative integer index $n$ will be the key to understanding the overtone level spacing of the BH quasinormal frequencies.", "It should be noted that the curvature $\\alpha $ of the IHO potential appears as the amplitude of the eigenvalues spacing of $H$ .", "In the following, we will first show that the resonant scattering problem of a massless scalar field in a Rindler spacetime can precisely be reduced to the study of a resonant scattering problem by an IHO potential with the corresponding Hamiltonian $H$ .", "From the above, this obviously allows its algebraic treatment through the $SL(2,\\mathbb {R})$ algebra with all the related results.", "By a natural extension, it should be noted that every region of a black hole spacetime that locally admits the Rindler metric as an approximation will be a place where a similar treatment can be applied.", "In particular, we will show in the following sections that both the horizon (which is the best-known case) and the photon sphere are such regions." 
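The algebra above is straightforward to verify explicitly. The sympy sketch below is an independent check, not part of the original derivation: it applies $J_1$, $J_2$ and $J_3$ as differential operators in the $x$-representation to a generic test function, confirms the $SO(2,1)$ commutation relations, and checks the dilation form of $H$ in the $u_+$-representation together with its eigenvalues $-i\\alpha (n+1/2)$ on the monomials $u^n$.

    import sympy as sp

    x, u, alpha, n = sp.symbols('x u alpha n', positive=True)
    f = sp.Function('f')(x)   # generic test function

    # J1, J2, J3 as differential operators in the x-representation
    J1 = lambda g: -sp.I/2 * (x*sp.diff(g, x) + g/2)
    J2 = lambda g:  sp.I/(2*alpha) * (sp.diff(g, x, 2) - alpha**2/4 * x**2 * g)
    J3 = lambda g: -sp.I/(2*alpha) * (sp.diff(g, x, 2) + alpha**2/4 * x**2 * g)

    comm = lambda A, B, g: sp.expand(A(B(g)) - B(A(g)))

    # SO(2,1) relations: [J1,J2] = -i J3, [J2,J3] = i J1, [J3,J1] = i J2
    print(sp.simplify(comm(J1, J2, f) + sp.I*J3(f)))   # -> 0
    print(sp.simplify(comm(J2, J3, f) - sp.I*J1(f)))   # -> 0
    print(sp.simplify(comm(J3, J1, f) - sp.I*J2(f)))   # -> 0

    # H in the u_+ representation is the dilation operator -i*alpha*(u d/du + 1/2):
    # monomials u**n are eigenstates with eigenvalue -i*alpha*(n + 1/2)
    H_u = -sp.I*alpha*(u*sp.diff(u**n, u) + u**n/2)
    print(sp.simplify(H_u / u**n))                     # -> -I*alpha*(n + 1/2)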
], [ " Massless particles in Rindler spacetime", "We consider the line element of a $(1+1)$ -dimensional Rindler spacetime, in Rindler's coordinates $(t,x)$ : $ds^2=a^2x^2 dt^2-dx^2.$ It is known to describe a line element seen by an uniformly accelerated observer (with a constant acceleration $a$ ), in coordinates $(t,x)$ .", "The study can be restricted, for example, to the right Rindler wedge, for which $t\\in ]-\\infty ,+\\infty [$ and $x\\in [0,+\\infty [$ .", "In the following, we will be focusing on the motions of massless particles in those coordinates.", "With this aim in mind, we introduce an affine parameter $\\lambda $ to describe the geodesics, and we decide to use a (quadratic) Lagrangian approach which is a minimalist but efficient way to highlight our point.", "Let us consider the following quadratic lagrangian: $\\mathcal {L}=\\frac{1}{2}\\frac{ds^2}{d\\lambda ^2}=\\frac{1}{2}(a^2x^2\\dot{t}^2-\\dot{x}^2)$ where $\\dot{t}=dt/d\\lambda $ and $\\dot{x}=dx/d\\lambda $ .", "In units such that $h=c=1$ , here $x$ and $t$ have dimension $L$ , $\\lambda $ has dimension $L^2$ , while $a$ has dimension $[x]^{-1}=L^{-1}$ .", "From Euler-Lagrange equations and spacetime symmetries, the only integral of motion (equivalently associated with the Killing vector $\\partial /\\partial t$ ) is here the “energy” $E$ defined by $E=\\frac{\\partial \\mathcal {L}}{\\partial \\dot{t}}=a^2x^2\\dot{t}.$ The “linear momentum” is defined as $p=\\frac{\\partial \\mathcal {L}}{\\partial \\dot{x}}=-\\dot{x}.$ Inserting $(\\ref {E})$ into the line element $(\\ref {RindlerMetric})$ for a massless particle, i.e.", "$ds^2=0$ , one finds $\\dot{x}^2 - \\frac{E^2}{a^2x^2}=0$ or $\\left(\\frac{dx}{dt}\\right)^2 - a^2x^2=0.$ This equation, whose solution grows exponentially with an associated Lyapunov exponent $\\Lambda =|a|$ (see [12] and references therein), describes the trajectory of a massless particle in Rindler's coordinates $(t,x)$ , i.e.", "seen by an uniformly accelerated observer.", "Equation $(\\ref {MasslessMotion2})$ clearly shows that this motion, in $(t,x)$ coordinates, is actually a motion in an IHO potential.", "This equation can then also be interpreted as describing an instability near the (hyperbolic) point $x=0$ (the top of the IHO potential barrier) in the phase space $(dx/dt,x)$ .", "In other words, the Rindler horizon corresponds to an unstable equilibrium point for massless particles, in the phase space $(dx/dt,x)$ of an accelerated observer.", "Let us note that for massless geodesics ($ds^2=0$ ), dividing $(\\ref {RindlerMetric})$ by $dt^2$ would have given directly $(\\ref {MasslessMotion2})$ .", "It should be noted that there exist a simple dispersion relation between the energy $E$ and the linear momentum $p$ of the particle, obtained by inserting $(\\ref {p})$ in $(\\ref {MasslessMotion1})$ $E=\\pm a x p.$ This dispersion relation is characteristic of a particle in a IHO potential.", "Indeed, with a change of variables of the form $\\tilde{x}=x-\\frac{2}{a}p \\quad \\textrm {and}\\qquad \\tilde{p}=\\frac{a}{2}x + p,$ which is equivalent to $(\\ref {CreationAnnihilationOperators})$ , one can rewrite $(\\ref {RindlerDispersionRelation})$ as $E = \\pm \\left(\\tilde{p}^2-\\frac{a^2}{4}\\tilde{x}^2\\right).$ where the IHO potential appears explicitly." 
], [ " Scattering of a massless scalar field in a Rindler spacetime", "The Klein Gordon equation for a massless scalar field $\\Phi (x^\\mu )$ in a spacetime with metric $g_{\\mu \\nu }$ reads $\\Box \\Phi = g^{\\mu \\nu } \\nabla _\\mu \\nabla _\\nu \\Phi = \\frac{1}{\\sqrt{-g}} \\partial _\\mu \\left(\\sqrt{-g} g^{\\mu \\nu }\\partial _\\nu \\Phi \\right)=0.$ In a Rindler spacetime with metric $(\\ref {RindlerMetric})$ , the Klein Gordon equation becomes $\\left[-\\partial _t^2 + a^2x\\partial _x\\left(x\\partial _x\\right)\\right]\\Phi (t,x)=0.$ Let us assume a stationary scalar field with harmonic time dependence of the form $\\Phi (t,x)=\\phi (x)e^{-i\\omega t}$ .", "The Klein-Gordon equation becomes $\\left[(-i\\omega )^2 - ax\\partial _x\\left(x\\partial _x\\right)\\right]\\Phi (t,x)=0,$ which can be easily factorized in the form $\\left[(-i\\omega )+ax\\partial _x\\right]\\left[(-i\\omega )-ax\\partial _x\\right]\\Phi (t,x)=0.$ It should be noted that a massless scalar plane wave, e.g.", "$\\Phi (t,x)=\\Phi _0 e^{-i(\\omega t \\mp kx)}$ , solution of each term of $(\\ref {KG_factorized})$ gives a dispersion relation $\\omega = \\pm a x k$ which is equivalent to the relation $(\\ref {RindlerDispersionRelation})$ .", "In order to establish explicitly the link between $(\\ref {KG_factorized})$ and $(\\ref {H_Diff_Upm})$ , we will have to introduce a “trick” that will highlight an important aspect of thermality comparing the Rindler and the IHO case.", "Let us begin to rewrite $(\\ref {KG_factorized})$ as $\\left[-i\\omega -\\frac{a}{2}+\\frac{a}{2}+ax\\partial _x\\right]\\left[-i\\omega +\\frac{a}{2}-\\frac{a}{2}-ax\\partial _x\\right]\\Phi (t,x)=0.$ Most of the solutions $\\Phi $ [13] can be reduce to a linear combination of $\\tilde{\\Phi }^{(\\pm )}$ , such as $\\left\\lbrace \\begin{array}{ll}\\left(ax\\partial _x + \\dfrac{a}{2}\\right)\\tilde{\\Phi }^{(+)}=\\left(i\\omega +\\dfrac{a}{2}\\right)\\tilde{\\Phi }^{(+)}\\\\-\\left(ax\\partial _x + \\dfrac{a}{2}\\right)\\tilde{\\Phi }^{(-)}=\\left(i\\omega -\\dfrac{a}{2}\\right)\\tilde{\\Phi }^{(-)}.\\end{array}\\right.$ In other words, $\\tilde{\\Phi }^{(\\pm )}$ both satify $\\pm ia\\left(x\\partial _x +\\frac{1}{2}\\right)\\tilde{\\Phi }^{(\\pm )} = - \\left(\\omega \\mp \\frac{ia}{2}\\right)\\tilde{\\Phi }^{(\\pm )},$ which reads $H \\tilde{\\Phi }^{(\\pm )} = \\left(\\omega \\mp \\frac{ia}{2}\\right)\\tilde{\\Phi }^{(\\pm )}$ where $H$ is the Hamiltonian of an IHO written in the form $(\\ref {H_Diff_Upm})$ , i.e.", "as a dilation operator.", "In other words, the equation of motion of a massless scalar field, of frequency $\\omega $ , seen by an uniformly accelerated observer (with acceleration $a$ ) in Rindler coordinates $(t,x)$ , can simply be expressed in term of an IHO Hamiltonian.", "It should be noted that the naive “trick” used in $(\\ref {KG_factorized_2})$ that leads to $(\\ref {H_Rindler_IHO})$ , could be understood as being related to the thermal behavior of the number of quantized particles in the Unruh effect.", "Indeed, on the one hand, the factorization of the massless Klein Gordon equation for a scalar field $\\Phi $ in a $(1+1)$ -dimensional Rindler spacetime with metric $(\\ref {RindlerMetric})$ gives a system of two massless scalar field $\\tilde{\\Phi }^{(\\pm )}$ equations, the same way the usual $(1+1)$ -dimensional massless scalar wave equation can be factorized into a system of two $(1+1)$ -dimensional uncoupled Weyl-like equations.", "On the other hand, it is well-known that, for a scalar field, the 
thermal behavior of the number of quantized particles seen by an accelerated observer is given by a Bose-Einstein factor $(e^{2\\pi \\omega /T}-1)^{-1}$ with Unruh temperature $T$ , whereas for a spin-$1/2$ field the thermal behavior is given by a Fermi-Dirac factor $(e^{2\\pi \\omega /T}+1)^{-1}$ .", "From this perspective, the “trick” in $(\\ref {KG_factorized_2})$ formally corresponds to the passage from one thermal behavior, which originates from equation $(\\ref {KleinGordon_Rindler})$ , to the other, which originates from equations $(\\ref {Weyl_like_eqs1})$ , with the change $\\omega \\rightarrow \\omega \\mp ia/2$ in the computation of the related Unruh effect [14].", "We have explicitly shown in this section that the scattering by an IHO potential is actually hidden in the scattering problem of a massless scalar field in Rindler spacetime, in coordinates $(t,x)$ .", "The resonant scattering in Rindler spacetime is then simply given by the $SL(2,\\mathbb {R})$ algebraic approach presented in section .", "Indeed, combining $(\\ref {HEigenvalues})$ and $(\\ref {H_Rindler_IHO})$ gives a set of purely imaginary frequencies, for every non-negative integer $n$ $\\omega _{n}^{(\\pm )}=\\mp i a n.$", "As anticipated previously, it should be noted that the discrete eigenvalues spectrum of the IHO Hamiltonian for the associated resonant modes can be understood as being at the origin of the regular level spacing in the QNF.", "This level spacing appears clearly here, in the case of resonant scattering in Rindler spacetime.", "Let us note that the amplitude of the level spacing is nothing but the IHO potential curvature, which is interpreted as a constant acceleration in the Rindler case.", "This will appear in a very similar manner in the QNF spectrum for BH, as long as the BH metric admits, close to certain regions of spacetime, the Rindler metric as an approximation." 
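For completeness, the frequencies in $(\\ref {Rindler_QNF})$ follow in one line from the two previous results, once the IHO curvature $\\alpha $ is identified with the acceleration $a$:

\\begin{equation*}
H\\tilde{\\Phi }^{(\\pm )}=\\left(\\omega \\mp \\frac{ia}{2}\\right)\\tilde{\\Phi }^{(\\pm )}
\\quad \\textrm {and}\\quad
h_n^{(\\pm )}=\\mp ia\\left(n+\\frac{1}{2}\\right)
\\;\\Longrightarrow \\;
\\omega _{n}^{(\\pm )}\\mp \\frac{ia}{2}=\\mp ia\\left(n+\\frac{1}{2}\\right)
\\;\\Longrightarrow \\;
\\omega _{n}^{(\\pm )}=\\mp ia\\,n.
\\end{equation*}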
], [ "The static and spherically symmetric black hole case", "In the Schwarzschild coordinates $(t,r,\\theta ,\\phi )$ , we consider a static spherically symmetric four-dimensional spacetime with metric $ds^2=f(r)dt^2-\\frac{dr^2}{f(r)}-r^2d\\sigma ^{2}.$ As usual, $d\\sigma ^{2}=d\\theta ^2 + \\sin ^2 \\theta d\\varphi ^2$ is the line element on the unit 2-sphere $S^{2}$ , with $\\theta \\in [0,\\pi ]$ and $\\varphi \\in [0,2\\pi ]$ .", "Moreover, let us consider the BH exterior with $r \\in ]r_h,+\\infty [$ , where $r=r_h$ is a simple root of $f(r)$ and defines the location of the BH event horizon.", "Let us assume also that the background geometry is asymptotically flat and the tortoise coordinate $r_\\ast = r_\\ast (r)$ , defined as $dr_\\ast /dr = 1/f(r)$ , is a bijection from $]r_h,+\\infty [$ to $] -\\infty , +\\infty [$ .", "Without loss of generality, we will consider motions on the equatorial plane $\\theta =\\pi /2$ .", "A free-falling massless particle moves along null geodesics according to $-f(r)\\dot{t}^2+\\frac{1}{f(r)}\\dot{r}^2+r^2\\dot{\\varphi }^2=0,$ where, as done earlier in this paper, $\\dot{t}=dt/d\\lambda $ , $\\dot{r}=dr/d\\lambda $ , $\\dot{\\varphi }=d\\varphi /d\\lambda $ and $\\lambda $ is an affine parameter which describes null geodesics.", "From symmetries of the BH spacetime, one can define integrals of motion, i.e.", "energy $E$ and angular momentum $L$ of the massless particle, associated respectively with the Killing vectors $\\partial /\\partial t$ and $\\partial /\\partial \\varphi $ : $f(r)\\dot{t}=E\\quad ; \\quad r^2\\dot{\\varphi }=L.$ The equation of motion is easily deduced from (REF ) and reads $\\dot{r}^2+V_{\\textrm {eff}}(r)=E^2,$ where the effective potential $V_{\\textrm {eff}}$ is defined as $V_{\\textrm {eff}}(r)=\\frac{L^2}{r^2}f(r).$ A photon sphere (or “lightrings”, or “photon rings”) located at $r=r_c$ corresponds to a local maximum of $V_{\\textrm {eff}}(r)$ at $r=r_c$ such as $\\left.\\frac{d}{dr}V_{\\textrm {eff}}(r)\\right|_{r_c}=0 &\\Leftrightarrow & \\frac{2}{r_c}f_c=f^{\\prime }_c, \\\\\\left.\\frac{d^2}{dr^2}V_{\\textrm {eff}}(r)\\right|_{r_c}<0 &\\Leftrightarrow & f^{\\prime \\prime }_c - \\frac{2}{r_c^2}f_c<0.$ The subscript “$c$ ” means, above and in the following, that the quantity considered is evaluated at $r=r_c$ , and the superscripts “ $^{\\prime }$ ” and “ $^{\\prime \\prime }$ ” respectively mean the first and second derivatives with respect to $r$ .", "A massless particle reaches the photon sphere when the turning point of its motion satisfies $E^2 = V_\\textrm {eff,c} $ , i.e.", "$L/E = r_c/\\sqrt{f_c}$ , where $r_c/\\sqrt{f_c}=b_c$ is the critical impact parameter for massless particles to reach tangentially the photon sphere, before circling the BH at $r=r_c$ .", "Moreover, at $r=r_c$ one also has $V^{\\prime \\prime }_{\\textrm {eff},c}=-2\\eta _c^2 \\frac{L^2}{r_c^4},$ where $\\eta _c = \\frac{1}{2}\\sqrt{4f_c-2r_c^2f^{\\prime \\prime }_c}.$ The study of instability associated with the circular orbits of massless particles on the photon sphere follows [3].", "Indeed, writing down a second order Taylor series expansion of $V_{\\textrm {eff}}$ near the photon sphere at $r=r_c$ gives $V_{\\textrm {eff}}(r) \\simeq V_{\\textrm {eff},c} + \\frac{1}{2}V^{\\prime \\prime }_{\\textrm {eff},c} (r-r_c)^2,$ such as (REF ) becomes $\\dot{r}^2+V_{\\textrm {eff},c} +\\frac{1}{2}V^{\\prime \\prime }_{\\textrm {eff},c} (r-r_c)^2 = E^2.$ If the turning point is close enough to $r=r_c$ , one can consider that $E^2 \\approx 
V_{\\textrm {eff},c}$ .", "The equation of motion then simplifies to $\\dot{r}^2 +\\frac{1}{2}V^{\\prime \\prime }_{\\textrm {eff},c} (r-r_c)^2 = 0,$ which can be written, in $(r,t)$ coordinates, as $\\left(\\frac{dr}{dt}\\right)^2 + \\frac{V^{\\prime \\prime }_{\\textrm {eff},c}}{2 \\dot{t}^2} (r-r_c)^2 = 0.$ Close to $r=r_c$ , one has $\\dot{t}=dt/d\\lambda \\approx E/f_c$ , and the equation of motion in the vicinity of the photon sphere simply reads $\\left(\\frac{dr}{dt}\\right)^2 - \\Lambda _c^2 (r-r_c)^2 = 0,$ where $|\\Lambda _c|=\\eta _c \\frac{\\sqrt{f_c}}{r_c}$ is the Lyapunov exponent associated with the unstable circular motion of a free-falling massless particle near the BH photon sphere." ], [ "The near horizon region: the highly damped QNM", "In this short section, we consider the well-known case of a purely radial motion in the line element $(\\ref {metric_BH})$ $ds^2=f(r)dt^2-\\frac{dr^2}{f(r)}.$ In the near-horizon region, one can approximate $f(r)$ by $f(r)=f^{\\prime }_h(r-r_h)$ at first order, with $f^{\\prime }_h=(df/dr)(r=r_h)$ .", "In a coordinate systems for which $d\\rho =\\frac{dr}{\\sqrt{f^{\\prime }_h(r-r_h)}}$ and introducing the surface gravity $\\kappa =(1/2)f^{\\prime }_h$ , the Rindler approximation of the metric $(\\ref {metric_BH_radial})$ in the near horizon region follows easily $ds^2=\\kappa ^2\\rho ^2dt^2-d\\rho ^2$ The form $(\\ref {RindlerMetric})$ is obviously recovered from which an equation similar to $(\\ref {MasslessMotion2})$ is derived.", "It should be noted that the equation of motion of a free-falling massless particle can also be obtained from $(\\ref {EqMotion})$ , with $L=0$ because $\\varphi $ is zero for purely radial motions towards the BH, which reads $\\dot{r}^2=E^2.$ With $(\\ref {EL})$ , one deduces $\\left(\\frac{dr}{dt}\\right)^2 - f(r)^2=0.$ In the near horizon region, $f(r)=f^{\\prime }_h(r-r_h)$ at first order in $r$ , and with the change of variable $(\\ref {rhoRindler})$ one obviously recovers $(\\ref {MasslessMotion2})$ in the form $\\left(\\frac{d\\rho }{dt}\\right)^2-\\kappa ^2\\rho ^2=0.$ All the results of section REF can now be applied to the near horizon case with the $SL(2,\\mathbb {R})$ algebra approach.", "In particular, the highly damped QNF are given by a relation like $(\\ref {Rindler_QNF})$ $\\omega _{n}^{(\\pm )} \\approx \\mp i \\kappa n,$ for which the imaginary part is equally spaced as expected.", "It should be noted that this approach does not give the real part of the QNF that appears in $(\\ref {HighlyDampedQNF})$ , which is related to the behavior of the BH metric near the singularity $r\\rightarrow 0$ .", "Of course, although almost everything has already been said in the literature about the Hawking-Unruh effect in the near horizon region, this $SL(2,\\mathbb {R})$ algebraic approach is worth mentioning, because it gives easily the equally spaced overtone level of the imaginary part of the highly damped QNF.", "Moreover, an interpretation that is less known is that the BH horizon appears also in this context as an unstable point, as for $(\\ref {MasslessMotion2})$ , here for purely radial null geodesics, in the phase space of an accelerated observer $(d\\rho /dt,\\rho )$ .", "In this context, the surface gravity plays a role of a Lyapunov exponent, i.e.", "an inverse characteristic time, that characterizes the (in)stability of the BH horizon for radial trajectories of massless particles seen from an accelerated observer $(t,\\rho )$ (see [12] and references therein for a very exhaustive study of 
this interpretation).", "In this section, we reproduce the outline of the proof in [8] according to which, in the ultrarelativistic limit, the near photon sphere limit of the metric $(\\ref {metric_BH})$ is also a Rindler metric.", "Then, from the above, all the results of section could also be easily applied.", "We start by considering the motion of a free-falling test particle of mass $m$ following the geodesic line element $(\\ref {metric_BH})$ .", "For a massive test particle to get very close to the photon sphere, we will need to consider the ultrarelativistic limit of its motion.", "Without loss of generality, let us focus again on the equatorial plane $\\theta =\\pi /2$ of the metric $(\\ref {metric_BH})$ $ds^2=-f(r)dt^2+\\frac{dr^2}{f(r)}+r^2d\\varphi ^{2}.$ In the massive case, the integrals of motion read $f(r)\\left(\\frac{dt}{d\\tau }\\right)=\\frac{E}{m}\\quad ; \\quad r^2\\left(\\frac{d\\varphi }{d\\tau }\\right)=\\frac{L}{m}$ where $\\tau $ is the masssive particle proper time.", "Using (REF ), the equation of motion for the test particle reads $m^2\\left(\\frac{dr}{d\\tau }\\right)^2 + U_\\textrm {eff}(r) = E^2,$ where $U_\\textrm {eff}(r)=f(r)\\left[\\frac{L^2}{r^2}+m^2\\right].$ The effective potential $U_\\textrm {eff}(r)$ has extrema located at $r=r_i$ such that $f^{\\prime }(r_i)-\\frac{2}{r_i^2}f(r_i)+\\frac{m^2}{L^2}f^{\\prime }(r_i)=0.$ We simplify the study by assuming that the particle angular momentum $L$ value is such that $U_\\textrm {eff}(r)$ admits a local maximum.", "Let us call $r_0(L)$ the location of this local maximum.", "In the limit $L \\gg 1$ , $(\\ref {r0})$ tends to $(\\ref {rc})$ , the local maximum coincides with the location of the photon sphere, i.e.", "$r_0(L)$ tends to $r_c$ and $U_\\textrm {eff}(r_0(L))$ tends to $V_\\textrm {eff,c}$ .", "In other words, a test particle (with energy $E$ and angular momentum $L$ ) coming from infinity, gets very close to the photon sphere if at least $&m^2& < E^2 \\approx U_\\textrm {eff}(r_0(L)),\\\\&L& \\gg 1 \\quad \\textrm {with} \\quad L/E \\quad \\textrm {finite} .$ The condition (REF ) implies that the turning point of the particle motion is located in the vicinity of the maximum of $U_\\textrm {eff}(r)$ .", "The second condition () implies that the location of this maximum tends to the location of the photon sphere.", "If the conditions (REF ) are both satisfied, then one has $E^2 \\approx U_\\textrm {eff}(r_0(L)) \\approx V_\\textrm {eff,c}$ , i.e.", "$L/E \\approx r_c/\\sqrt{f_c}$ , which corresponds to the limit of an ultrarelativistic test particle.", "Then, in the ultrarelativistic limit, the test particle has an impact parameter $b$ which tends, from above, to the critical impact parameter $b_c$ associated with massless particles motion, and the particle coming from infinity will get close to the photon sphere before moving away, back to infinity.", "Let us now focus on the near-photon sphere limit of $(\\ref {geodesics_equatorial_plane})$ to describe the motion of the test particle in the ultrarelativistic limit.", "We first use the constants of motions $(\\ref {EL_massive})$ for geodesic motions [15], to restrict ourselves to an equivalent effective geodesic line element in the $(t,r)$ -plane.", "From $(\\ref {EL_massive})$ , one can write $r^2 d\\varphi ^2=f(r)^2\\frac{L^2}{E^2 r^2}dt^2.$ This allows to transform $(\\ref {geodesics_equatorial_plane})$ into a line element, that would give the same radial equation of motion $(\\ref {EqMotionMassive})$ , $ds^2=f(r)\\left(-1+\\frac{V_\\textrm 
{eff}(r)}{E^2}\\right)dt^2+\\frac{dr^2}{f(r)},$ where $V_\\textrm {eff}(r)$ is still, from a pure formal point of view, defined by expression $(\\ref {EffPotential})$ but with $L$ being now the angular momentum of the massive test particle.", "It should be noted that the line elements (REF ) and (REF ) have the same magnitude for any geodesic motion, i.e.", "for any given $E$ and $L$ .", "Moreover, let us emphasize that (REF ) does not describe a purely radial motion in the BH background (in such case $\\varphi $ would have been constant, i.e.", "$L=0$ , and the photon sphere would have had no effect on the test particle motion), but rather describes the effective non-radial geodesic motion of a test particle in the $(t,r)$ -plane of a static and spherically symmetric BH background, taking into account explicitly the effect of the centrifugal potential barrier in its time component.", "Let us recall that we will not consider the case where the particle gets trapped into the BH, i.e.", "here one has $b \\gtrsim b_c$ .", "The near-photon sphere limit of $(\\ref {metric_BH2})$ requires the conditions $(\\ref {EL_conditions})$ to be satisfied, i.e.", "$E^2 \\approx V_\\textrm {eff,c}$ , and is obtained from the lowest order Taylor series expansion around $r=r_c$ which does not cancel the time component of $(\\ref {metric_BH2})$ .", "Using $(\\ref {Taylor_EffPotential})$ , the effective line element $(\\ref {metric_BH2})$ then becomes in this limit $ds^2 \\simeq -\\frac{V^{\\prime \\prime }_\\textrm {eff,c}}{2E^2}f_c(r-r_c)^2dt^2+\\frac{dr^2}{f_c}.$ Now, from $(\\ref {VeffSecond})$ and $(\\ref {Lyapunov})$ , and introducing the variable $\\rho = \\frac{r-r_c}{\\sqrt{f_c}} \\Leftrightarrow d\\rho = \\frac{dr}{\\sqrt{f_c}},$ one finally obtains a Rindler form of the effective line element near the photon sphere, which acts in this setting as an effective Rindler horizon $ds^2 \\simeq - \\Lambda _c^2 \\rho ^2 dt^2 + d\\rho ^2,$ where we have used $L^2/E^2 \\approx r_c^2/f_c$ because $E^2 \\approx V_\\textrm {eff,c}$ .", "The Lyapunov exponent $|\\Lambda _c|$ associated with the massless particle motions around the photon sphere plays the role of a constant proper acceleration in the near-photon sphere limit of the line element (REF ), describing the test particle effective geodesic motion in the $(t,r)$ -plane, a role analogue to the role played by the surface gravity in the near-horizon limit." 
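As a concrete check of this reduction, the sympy sketch below (an illustration; the Schwarzschild choice $f(r)=1-2M/r$ is ours) recovers $r_c=3M$, computes $|\\Lambda _c|=\\eta _c\\sqrt{f_c}/r_c=1/(3\\sqrt{3}M)$, and verifies that the magnitude of the $\\rho ^2dt^2$ coefficient obtained from the Taylor expansion of $V_\\textrm {eff}$ with $E^2=V_\\textrm {eff,c}$ is exactly $\\Lambda _c^2$.

    import sympy as sp

    r, M, L = sp.symbols('r M L', positive=True)
    f = 1 - 2*M/r                                  # Schwarzschild, chosen here as a test case

    # photon sphere condition: 2 f_c / r_c = f'_c
    rc = sp.solve(sp.Eq(2*f/r, sp.diff(f, r)), r)[0]
    fc = f.subs(r, rc)
    eta_c = sp.Rational(1, 2)*sp.sqrt(4*fc - 2*rc**2*sp.diff(f, r, 2).subs(r, rc))
    Lam_c = eta_c*sp.sqrt(fc)/rc                   # Lyapunov exponent |Lambda_c|

    Veff = (L**2/r**2)*f
    Vpp_c = sp.diff(Veff, r, 2).subs(r, rc)        # V''_eff at r_c (negative)
    E2 = Veff.subs(r, rc)                          # E^2 ~ V_eff,c (critical impact parameter)

    # with rho = (r - r_c)/sqrt(f_c), the dt^2 coefficient magnitude is |V''|/(2E^2) * f_c^2
    coeff = sp.simplify((-Vpp_c)*fc**2/(2*E2))
    print(rc, sp.simplify(Lam_c))                  # 3*M and sqrt(3)/(9*M), i.e. 1/(3*sqrt(3)*M)
    print(sp.simplify(coeff - Lam_c**2))           # 0: the near-photon-sphere metric is Rindler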
], [ "Massless scalar field in the BH metric", "In this section, we look at the Klein-Gordon equation for a massless scalar field $\\Phi $ in a spacetime metric $(\\ref {metric_BH})$ .", "After separation of variables, assuming a harmonic time dependence ($e^{-i\\omega t}$ ) for $\\Phi $ and the introduction of the radial partial wave functions $\\Phi _{\\ell \\omega }(r)$ with $\\ell =0,1,2,\\dots $ , the Klein-Gordon equation gives the well-known Regge-Wheeler equation $\\frac{d^2 \\Phi _{\\ell \\omega }}{d r_\\ast ^2} + \\left[\\omega ^2 - V_\\ell (r_\\ast )\\right] \\Phi _{\\ell \\omega } = 0.$ where $r_\\ast = r_\\ast (r)$ is the tortoise coordinate, introduced previously, and $V_{\\ell }(r)$ is the Regge-Wheeler potential defined as $V_{\\ell }(r)=f(r)\\left[\\frac{\\ell (\\ell +1)}{r^2}+\\frac{1}{r}f^{\\prime }(r)\\right].$ It should be noted that, for every $\\ell \\in \\mathbb {N}$ , $V_{\\ell }(r)$ admits a local maximum at $r=r_{0}(\\ell )$ which is close to the photon sphere located at $r=r_c$ .", "Moreover, in the limit $\\ell \\gg 1$ , $r_0(\\ell ) \\approx r_c$ and $V_{\\ell }(r) \\approx V_{\\textrm {eff}}(r)$ .", "Using the tortoise coordinate, we will denote $(r_\\ast )_{0,\\ell } = r_\\ast (r_0(\\ell ))$ the location of the maximum of $V_{\\ell }(r)$ .", "Following [16], let us consider $V_\\ell (r_\\ast )$ around the location of its local extremum at $(r_\\ast )_{0,\\ell }$ , i.e.", "around a location as close to the photon sphere as $\\ell \\gg 1$ .", "A second order Taylor series expansion gives $V_\\ell (r_\\ast ) \\approx V_0(\\ell ) + \\frac{1}{2}V^{(2)}_0(\\ell ) \\left(r_\\ast - (r_\\ast )_{0,\\ell } \\right)^2,$ where we have used the notation $V_0^{(2)}(\\ell )=\\left(\\frac{d^2 V_{\\ell }(r_\\ast )}{dr_\\ast ^2}\\right)_{(r_\\ast )_{0,\\ell }}.$ Introducing $x$ , $h(\\ell ,\\omega )$ and $\\alpha (\\ell )$ such that $x &=& \\left(r_\\ast - (r_\\ast )_{0,\\ell } \\right),\\nonumber \\\\h(\\ell ,\\omega ) &=&\\omega ^2-V_0(\\ell ),\\nonumber \\\\\\alpha (\\ell ) &=& \\sqrt{-2V_0^{(2)}(\\ell )},$ equation (REF ) reads $H\\tilde{\\Phi }_{\\ell \\omega }(x) = h(\\ell ,\\omega )\\tilde{\\Phi }_{\\ell \\omega }(x)$ where $\\tilde{\\Phi }_{\\ell \\omega }(x)=\\Phi _{\\ell \\omega }(r_\\ast (x))$ and $H=-\\frac{d^2}{dx^2} - \\frac{\\alpha (\\ell )^2}{4}x^2$ is the time-independent “Hamiltonian” (here with dimension $L^{-2}$ ) governing the massless scalar field dynamics in the near-photon sphere limit.", "In the coordinate $x$ , $H$ is in the form $(\\ref {Hamiltonian})$ , and then all the eigenvalue problem can be solved via algrebraic $SO(2,1)\\sim SL(2,\\mathbb {R})$ method.", "For the weakly damped BH QNF (i.e.", "$\\ell \\gg 1$ and $\\ell \\gg n$ ), it should be noted that one can make the substitution $\\ell (\\ell +1)=(\\ell +1/2)^2-1/4$ in $(\\ref {RWPotential})$ and look at $(\\ell +1/2)$ (instead of $\\ell $ ) as the quantity in which one will compute a Taylor series expansion of the Regge-Wheeler potential and its second derivative at $(r_{\\ast })_{0,\\ell }$ , in the limit $(\\ell +1/2)\\gg 1$ .", "At lowest order in $\\ell +1/2$ , one has $\\alpha (\\ell ) &\\approx & 2\\eta _c \\frac{f_c}{r_c^2} \\left(\\ell +\\frac{1}{2}\\right),\\\\V_{0}(\\ell ) &\\approx & \\frac{f_c}{r_c^2}\\left(\\ell +\\frac{1}{2}\\right)^2.$ Equation $(\\ref {HEigenvalues})$ then reads $\\omega ^2=V_{0}(\\ell ) \\pm i \\alpha (\\ell )\\left(n+\\frac{1}{2}\\right),$ which gives, in the weakly damped regime ($(\\ell +1/2)\\gg 1$ and $\\ell \\gg n$ ), a equally spaced imaginary part 
of the BH QNF $\\textrm {Im}\\left(\\omega _{\\ell n}^{(\\pm )}\\right)\\approx -\\frac{1}{2}\\frac{\\alpha (\\ell )}{\\sqrt{V_0(\\ell )}}\\left(n+\\frac{1}{2}\\right)=-|\\Lambda _c|\\left(n+\\frac{1}{2}\\right),$ where $|\\Lambda _c|=\\eta _c \\sqrt{f_c}/r_c$ is the Lyapunov exponent that characterizes the instability of null motions on the photon sphere." ], [ "Remarks on the amplitude of the overtone spacing", "A relevant fact is that the amplitude of the overtone level spacing is related to a Lyapunov exponent that characterizes the (in)stability of the top of the underlying IHO potential barrier.", "Rindler spacetime is an example of such behavior: the surface gravity in $(\\ref {RindlerSimple})$ is understood in this framework as a Lyapunov exponent, and reciprocally, the Lyapunov exponent in $(\\ref {Rindler_metric})$ can be interpreted as a locally constant acceleration for a certain family of observers.", "It should be noted that in both cases, the Lyapunov exponent characterizes motions of massless particles that seem to play a fundamental role in this description.", "Of course, as long as one has a Rindler approximation of a BH metric, the thermal aspects follow directly (see [12], [8] for more details).", "As a Lyapunov exponent is interpreted as the inverse of a characteristic time, this leads us to expect a fundamental link between “time-thermality-instability”.", "Another remark is about the real part of the QNF in the highly damped regimes.", "This approach does not give access to such a real part, because it has been shown to be related to the behavior of the BH metric near its singularity $r\\rightarrow 0$ , which is not considered here." ], [ "Conclusion", "In this paper, we propose a simple way to understand the equally spaced overtone level of BH QNF both in the highly and weakly damped regimes.", "To do so, we have shown that the resonant scattering problem in Rindler spacetime is deeply linked to the resonant scattering problem by an IHO potential, which in turn can be completely described through the $SL(2,\\mathbb {R})$ algebra.", "The $SL(2,\\mathbb {R})$ algebra allows us to show that the spectrum of the “Hamiltonian” is discrete, purely imaginary, and indexed by a non-negative integer.", "This integer turns out to be simply the overtone number of the QNF, and the corresponding spectrum is shown to be equally spaced.", "A general claim could follow: in any BH geometry, at least for static and spherically symmetric BH, if there exist some regions where the BH metric can be locally approximated as a Rindler spacetime, in some set of coordinates, then the underlying algebra should be $SL(2,\\mathbb {R})$ , and the imaginary part of the BH QNF, be it in the highly damped or weakly damped regimes, will be indexed by a non-negative integer, i.e.", "the overtone number, which will be equally spaced.", "A very interesting fact is that we explicitly obtained this result for both highly and weakly damped QNM families, by looking at different regions of the BH spacetime.", "This work can be easily extended to higher-dimensional spacetimes.", "A future look at the Kerr BH is of interest.", "The author would like to thank J.P. Provost and J.L.", "Jaramillo for stimulating discussions.", "The IMB receives support from the EIPHI Graduate School (contract ANR-17-EURE-0002)." ] ]
2212.05538
[ [ "Extending TrOCR for Text Localization-Free OCR of Full-Page Scanned\n Receipt Images" ], [ "Abstract Digitization of scanned receipts aims to extract text from receipt images and save it into structured documents.", "This is usually split into two sub-tasks: text localization and optical character recognition (OCR).", "Most existing OCR models only focus on the cropped text instance images, which require the bounding box information provided by a text region detection model.", "Introducing an additional detector to identify the text instance images in advance is inefficient, however instance-level OCR models have very low accuracy when processing the whole image for the document-level OCR, such as receipt images containing multiple text lines arranged in various layouts.", "To this end, we propose a localization-free document-level OCR model for transcribing all the characters in a receipt image into an ordered sequence end-to-end.", "Specifically, we finetune the pretrained Transformer-based instance-level model TrOCR with randomly cropped image chunks, and gradually increase the image chunk size to generalize the recognition ability from instance images to full-page images.", "In our experiments on the SROIE receipt OCR dataset, the model finetuned with our strategy achieved 64.4 F1-score and a 22.8% character error rates (CER) on the word-level and character-level metrics, respectively, which outperforms the baseline results with 48.5 F1-score and 50.6% CER.", "The best model, which splits the full image into 15 equally sized chunks, gives 87.8 F1-score and 4.98% CER with minimal additional pre or post-processing of the output.", "Moreover, the characters in the generated document-level sequences are arranged in the reading order, which is practical for real-world applications." 
], [ "Introduction", "Scanned receipt digitization aims at documenting text in receipts.", "This process was formally defined as Scanned Receipts OCR and Information Extraction (SROIE) task in the ICDAR 2019 competition [1], which provides a benchmark dataset, called SROIE, and splits the task into localization and recognition sub-tasks.", "Existing works focus on either the text detection [2], [3], [4] or character recognition [5], [6], [7], but there is no document-level OCR model for transcribing receipt images end-to-end without text localization.", "Introducing an additional detector to identify the text instance images in advance is inefficient, and it requires post-processing to combine the instance-level sequences to obtain a document-level transcription.", "Figure: Our proposed step-by-step finetuning strategy for adapting TrOCR to the document-level OCR.", "“æ” indicates the separator token for dividing characters in each text-line.Recently, a pretrained Transformer-based OCR model TrOCR [8] was proposed which achieved state-of-the-art performance on the SROIE dataset.", "However, TrOCR was trained using only text instance images, which makes its adaptation to document-level OCR challenging because of the great variation in input images, which include many more characters and more lines than the single line it was trained on.", "To explore the potential of TrOCR for end-to-end document-level OCR without text localization, we propose a step-by-step finetuning strategy as shown in Fig.REF .", "Specifically, we first randomly split the whole receipt image into image chunks whose size is closer to the original text instance images.", "These are then used to finetune the TrOCR model for what we call “chunk-level” OCR, and we gradually increase the chunk size to introduce more difficult chunks containing more lines and characters.", "Every time we use the model finetuned in the previous step to initialize the current model.", "Finally, we train using the entire, un-chunked, receipt images to achieve document-level OCR.", "The intuition behind this strategy is to progressively get the model to generalize its recognition ability to larger images.", "We define the order of characters in the chunk-level label as top-left to bottom-right, and propose a method to construct the reference label for each chunk automatically.", "We also include a text-line separator token in the constructed reference label, which aims to encode the line segmentation for the layout learning and also facilitate post-processing the model output into lines during inference.", "We conduct experiments on the SROIE dataset for the document-level OCR.", "We cannot directly compare our results with those in the literature, since they only focus on instance-level OCR.", "Thus we construct two baseline finetuning methods for comparison.", "The experimental results show that our method achieves better performance than the two baselines on both word-level and character-level metrics.", "The main contributions of our work are summarized as follows: We propose a method to construct chunk-level reference labels automatically using only annotated instance-level labels.", "This method can be easily applied to other OCR datasets.", "The generated characters are arranged in the reading order with a unique text-line separator token for post-processing, which is practical for real-world applications.", "We propose a step-by-step finetuning strategy to adapt TrOCR for document-level OCR, which can process the entire image and achieve 
competitive performance.", "Figure: Finetuning the TrOCR model for chunk-level OCR." ], [ "Related Works", "CNN-Based OCR models Based on our target, we mainly focus on end-to-end OCR models which include two modules for detection and recognition, respectively.", "Previous research treats these two problems independently [9], [10], [11] by combining a text detector with a recognition model.", "Since the interaction between the two modules allows them to complement each other and avoid error propagation, recent research [12], [13], [14], [15] jointly optimizes the two modules by sharing the intermediate results.", "However, all these models include a text detection module explicitly or implicitly, whereas we aim to perform document-level OCR without extracting intermediate text regions.", "Transformer-Based OCR models Recently, a Transformer-based model TrOCR [8] was proposed for the receipt OCR task.", "TrOCR incorporates a vision transformer and a language model in its encoder-decoder architecture, which was trained on large-scale printed and handwritten OCR data for robust text recognition.", "Several Transformer-based models were also proposed focusing on handwritten text OCR [16], [17] or scene text OCR [18], [19], [20].", "However, all these models are restricted to the transcription of the cropped text-line or instance images instead of the full images." ], [ "TrOCR Model Architecture", "We will first introduce TrOCR as the backbone model for our finetuning work.", "TrOCR is a Transformer-based OCR model which consists of a pretrained vision Transformer encoder BEiT [21] and a pretrained language model decoder RoBERTa [22] as shown in Fig.REF .", "To recognize characters in the cropped text instance images, the images are first resized into square boxes of size $384\\times 384$ pixels and then flattened into a sequence of 576 patches, which are then encoded by BEiT into high-level representations and decoded by RoBERTa into corresponding characters step-by-step.", "TrOCR was pretrained with 684M textlines in English, which ensures robust recognition ability for characters in various formats.", "However, TrOCR can only handle cropped text instance images, which leads to underperformance when finetuning the model directly using whole receipt images.", "Therefore, a better finetuning strategy to adapt the model recognition from cropped images to full images is required." 
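For reference, the standard instance-level usage of TrOCR through the Hugging Face transformers API is sketched below (the image path is a placeholder); the finetuning strategy proposed in this paper starts from exactly this checkpoint and only changes the images and labels it is trained on.

    from PIL import Image
    from transformers import TrOCRProcessor, VisionEncoderDecoderModel

    processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
    model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

    image = Image.open("cropped_text_instance.png").convert("RGB")                # placeholder path
    pixel_values = processor(images=image, return_tensors="pt").pixel_values      # 384x384 -> 576 patches
    generated_ids = model.generate(pixel_values, num_beams=5)
    text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(text)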
], [ "Chunk-Level OCR Finetuning", "To leverage TrOCR for whole image text recognition, we propose to finetune the model with image chunks extracted from the whole images for chunk-level OCR.", "As Fig.REF shows, our finetuning pipeline contains three modules: (i) randomly sample image chunks from the full receipt image; (ii) construct the label for the sampled chunks; and (iii) finetune the model with chunks and corresponding chunk labels.", "We will introduce the random sampling and the finetuning process in this section.", "Randomly sampling image chunks from whole images aims to obtain larger images for training the model.", "The reason for introducing randomness is that we hope to extract different chunks from each image across different epochs, which can increase data diversity and improve model generalization.", "Formally, to sample a chunk from a receipt image of width $W$ and height $H$ , we first set a hyper-parameter $L$ for the chunk numbers that we will split a receipt image into, then define the image chunk size whose width $w$ is always the same as the corresponding image width $W$ and height $h$ equal to $H/L$ .", "With the determined chunk size, we randomly select an image chunk starting point $s$ on the y-axis whose value ranges from 0 to $H-H/L$ , and crop the chunk between $s$ and $s+h$ on the y-axis.", "Last, we repeat the sampling $N$ times to extract multiple chunks from each receipt image.", "With the randomly cropped chunk images, we can obtain corresponding chunk-level labels with the method that will be described in the next section, and use the chunks and labels for chunk-level OCR finetuning.", "The chunk size determines the contents in the chunk which in turn affect the learning difficulty.", "Therefore we feed the model with smaller chunks whose resolution is similar to the text instance images at the early stage, and increase the chunk size gradually to finetune the model with progressively more difficult image chunks.", "Concretely, we start by setting $L$ with a large value which produces smaller chunks, then after the training is finished using the current $L$ , we start the next stage training with smaller $L$ , and every time we use the best model checkpoint from the previous training step to initialize the model in the current step.", "Finally, we increase the chunk size to the full image size (L=1) for document-level OCR.", "We use the notation Growing-Finetune for our proposed method.", "[t] [1] Input: overlapping threshold $\\theta $ , merging threshold $\\delta $ , chunk numbers to split L, chunk numbers to sample N Output: chunk set $X$ and chunk label set $Y$ Data: input image $\\textit {V} = \\textit {W} \\times \\textit {H}$ Init $X\\leftarrow \\lbrace \\rbrace $ , $Y\\leftarrow \\lbrace \\rbrace $ , $n\\leftarrow 1$ , $h \\leftarrow H/L$ $n$ = 1 to $N$ Randomly select a starting point $s \\in (0,H-H/L)$ Chunk $x$ between $s$ and $s+h$ Gather boxes $B=\\lbrace b_{i}\\rbrace _{i=1}^{I}$ and labels $R=\\lbrace r_{i}\\rbrace _{i=1}^{I}$ Sort boxes and labels by the y-axis values of anchors $b_{i}$ in B filtering boxes $overlap(b_{i},x) \\le \\theta $ $B \\leftarrow B-b_{i}$ Init merged boxes $B^{^{\\prime }} \\leftarrow \\lbrace \\rbrace $ and labels $T^{^{\\prime }} \\leftarrow \\lbrace \\rbrace $ $b_{i}$ in $B$ merging text-line labels $b_{i} \\notin B{^{\\prime }}$ Init text-line label set $t_{i} \\leftarrow \\lbrace r_{i}\\rbrace $ $b_{j}$ in $B-b_{i}$ $height\\_overlap(b_{i},b_{j}) \\ge \\delta $ $t_{i} \\leftarrow $ $t_{i} + r_{j}$ Sort 
$t_{i}$ based on x-axis values of anchors Concat labels in $t_{i}$ with whitespace $T^{^{\\prime }} \\leftarrow T^{^{\\prime }} + t_{i}$ $B^{^{\\prime }} \\leftarrow B^{^{\\prime }} + b_{i}$ $y \\leftarrow $ Concat labels in $T^{^{\\prime }}$ with the separator æ $X \\leftarrow X + x$ $Y \\leftarrow Y + y$ return $X,Y$ Chunk-Level Label Construction" ], [ "Chunk-Level Label Construction", "Chunks usually contain multiple text-lines and some characters are split vertically in half as shown in Fig.REF .", "Therefore, to construct labels for randomly cropped chunks, a definition for the character order in the label and which split characters should be included in the label is required.", "In this paper, we define the top-left to bottom-right reading order for characters in the chunk as the correct order, and set a overlapping threshold $\\theta $ to include characters with an overlapping rate larger than $\\theta $ to make sure no unrecognizable split characters are mistakenly included in the label.", "Based on these definitions, we construct the chunk-level label with the use of annotated texts and corresponding bounding boxes information as shown in Algorithm REF .", "We first gather the $I$ bounding boxes in the randomly cropped image chunk and filter out boxes with overlapping rate less than $\\theta $ , and sort boxes as well as labels based on the y-axis values of the left-upper anchors of boxes.", "To align boxes horizontally in the same line, we define a merging threshold $\\delta $ to merge boxes that overlap vertically over the threshold into text-line level labels, and sort the boxes in each group based on the x-axis values of the left-upper anchors from left to right.", "Lastly, we concatenate labels of boxes in each group with white space as text-line labels, which are concatenated with text-line label separator token “æ\".", "This character does not appear in any of the receipts and is used to encode the line segmentation for the receipt layout learning.", "To clarify, we only use the boxes for the label construction and no box is used during inference." 
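A minimal Python sketch of this construction is given below for concreteness; it follows Algorithm REF but is a simplified re-implementation (the function names, the box format (x_min, y_min, x_max, y_max), and the default thresholds are our own choices), not the authors' code.

    import random

    SEP = "æ"  # text-line separator token from the paper

    def overlap_ratio(box, y0, y1):
        """Vertical fraction of box = (x_min, y_min, x_max, y_max) lying inside the chunk [y0, y1)."""
        _, b_top, _, b_bot = box
        inter = max(0.0, min(b_bot, y1) - max(b_top, y0))
        return inter / max(b_bot - b_top, 1e-6)

    def sample_chunk_label(boxes, texts, H, L, theta=0.3, delta=0.5):
        """Randomly crop one chunk of height H/L and build its top-left -> bottom-right label."""
        h = H / L
        s = random.uniform(0.0, H - h)
        # keep boxes whose overlap with the chunk exceeds theta
        kept = [(b, t) for b, t in zip(boxes, texts) if overlap_ratio(b, s, s + h) > theta]
        kept.sort(key=lambda bt: bt[0][1])                  # top-to-bottom by y_min
        lines = []
        for b, t in kept:                                   # greedy text-line grouping
            for line in lines:
                ref = line[0][0]
                inter = max(0.0, min(ref[3], b[3]) - max(ref[1], b[1]))
                if inter / max(min(ref[3] - ref[1], b[3] - b[1]), 1e-6) >= delta:
                    line.append((b, t))
                    break
            else:
                lines.append([(b, t)])
        # left-to-right inside each line; lines joined by the separator token
        line_labels = [" ".join(t for b, t in sorted(line, key=lambda bt: bt[0][0]))
                       for line in lines]
        return (s, s + h), SEP.join(line_labels)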
], [ "Settings", "Dataset We use the SROIE (Scanned Receipts OCR and Information Extraction) dataset from the ICDAR 2019 competition for our experiments.", "The SROIE OCR task focuses on text recognition of cropped receipt images.", "There are 626 and 361 images in the training and testing set, respectively, which are annotated with ground truth bounding boxes and corresponding texts, and we keep the train and test data split the same as the TrOCR setting.", "We randomly sample 60 images from the training set to construct the validation set.", "For the chunk-level OCR setting, the training chunks are randomly sampled from each image in the training set, while the validation and testing chunks are sequentially cropped from each image to ensure the full image area is covered for the evaluation.", "All the labels are obtained by the chunk-level label construction method in Section 3.3.", "Hyper-parameters We use the trocr-base-printedhttps://huggingface.co/microsoft/trocr-base-printed model checkpoint finetuned with the original SROIE training data from Hugging Face [23] for our own finetuning experiments.", "The boxes filtering threshold $\\theta $ , text-line merging threshold $\\delta $ and sampled chunk number $N$ for training data are 0.3, 0.5, and 20, respectively, which are determined by results on the validation set.", "For the split chunk number $L$ , we first compute the distribution of text-line numbers of training images as shown in Fig.REF , then use the median number 30 as the initial value.", "By decreasing the value to 15, 7, 4, 2 and 1, we train the model with increasingly larger chunks at subsequent stages.", "We set the beam search size as 5 for the text generation.", "Evaluation Metrics We use two evaluation metrics adopted in OCR tasks to evaluate the performance: Word-Level precision, recall, F1 (Word-Level PRF) and Character Error Rate (CER).", "The Word-Level PRF focuses on correctly matched words in the hypothesis without considering the word order, while the CER focuses on the character-level substitutions, deletions, and insertions as: Figure: Distribution of the number of text lines in receipt images.$CER = (S + D + I) / (S+D+C)$ where $S$ , $D$ , $I$ , and $C$ are the number of substitutions, deletions, insertions, and correct characters, respectively." 
], [ "Baselines", "Direct-Finetune This is a straight-forward method that uses the whole receipt image for the document-level OCR directly.", "Concretely, we resize the whole image as input and split it into patches, and finetune the model with patches and document-level labels end-to-end.", "Resizing the whole image to $384\\times 384$ pixels will cause a large variation in the average resolution of each character for the recognition.", "Concatenate-Finetune This is a compromise strategy to keep the input resolution closer to the original setting, which splits the image equally into several chunks, and embeds each chunk into sequences that are concatenated in the temporal dimension to construct document-level inputs.", "Since the linearly increased input length brings higher computational cost, we restrict the split number to 4 and interpolate the position embeddings to adjust for longer inputs.", "Figure: Error analysis on the generated full-page receipt OCR texts.", "Characters in the red, blue, and green colors represent the substitution, insertion, and deletion errors, respectively.", "Black characters are the corresponding correct content.Figure: Generated Texts with Different Beam Search Size.", "The underlined texts are deleted contents with the default beam size." ], [ "Quantitative Analysis", "Performance Comparison We first compare the performance of our method and two baselines.", "As the results in Table REF show, our method outperforms the two baselines on all metrics, which indicates that the direct finetuning methods do not work well.", "Moreover, the worst performance with Concatenate-Finetune also highlights the increased computational complexity on the longer inputs.", "Compared with TrOCR, our results are worse since the evaluation for the longer and ordered document-level sequence is more strict, which also reveals the efficiency of our method to achieve good results without using the bounding box information for the localization.", "Table: Model performance comparison with different finetuning strategies for whole document OCR.Table: Model performance with different chunk numbers per image.", "Larger number L indicates smaller chunks on average.Chunk Size Analysis We then analyzed how the model performance changes according to the input chunk size.", "Since the chunk size is determined by the chunk number $L$ , we test the performance under different $L$ and the variation is shown in Table REF .", "We observed that performance improved with increased chunk size at first, since the larger chunk will introduce fewer split characters which reduces errors caused by the insertion and deletion, but the performance decreases when the size becomes larger, since images with more content and a longer target sequence significantly increases the learning difficulty, especially for the half-page and full-page settings which doubles the error rates compared with their previous settings.", "The optimal trade-off is achieved at $L=15$ , where each chunk contains roughly 2 text-lines, on average.", "It is worth noting that if the text detection in the baseline TrOCR misses 4% of ground truth regions, our best method results would be better than TrOCR.", "Ablation Study Lastly, we analyze the influence of two main factors in our strategy with the ablation study as shown in Table REF .", "By removing the separator token, we noticed the CER performance drop is minor.", "While by sampling chunks sequentially without randomness, we found the performance dropped significantly on all 
metrics, which indicates the importance of randomness in bringing more diverse data and larger data size among all epochs for the convergence.", "This may also be thought of as a form of data augmentation.", "Table: Ablation study for the factor influence analysis." ], [ "Qualitative Analysis", "To understand the quality of generated document-level texts, we conduct the error analysis on generated full-page OCR texts with $L=1$ as shown in Fig.REF .", "For the first example, our model generates a much longer sequence than instance-level TrOCR and achieved 0.13% error rate where only one character was mistakenly substituted.", "This indicates the model can generate high-quality texts for full-image receipts.", "On the other hand, for the second example, our model achieved a 14.82% error rate and made substitution errors on numbers and insertion or deletion errors on sequences.", "We hypothesize that the insertions are caused by memorization during model training, and the deletions are caused by the small beam size.", "We generate texts for the second example with the beam size 1 and 7, respectively, and the results in Fig.REF show that decoding with a larger beam size can successfully generate the missing contents which supports our hypothesis." ], [ "Conclusions and Future Work", "In this paper, we propose a step-by-step finetuning strategy and an automatic label construction method for adapting TrOCR to perform document-level receipt image OCR.", "The finetuned model can handle the full image input and transcribe all characters into long ordered sequences.", "Moreover, our method outperforms other straight-forward finetuning baselines, which indicates the efficiency of finetuning with image chunks of increasing size.", "We observed the trade-off between performance and chunk size, and learned the importance of random sampling from the ablation study.", "We expect this study can serve as a baseline for future studies that concentrate on document-level receipt OCR, using more recent Transformer models such as Donut [24]." ] ]
2212.05525
[ [ "Crossover from exciton polarons to trions in doped two-dimensional\n semiconductors at finite temperature" ], [ "Abstract We study systematically the role of temperature in the optical response of doped two-dimensional semiconductors.", "By making use of a finite-temperature Fermi-polaron theory, we reveal a crossover from a quantum-degenerate regime with well-defined polaron quasiparticles to an incoherent regime at high temperature or low doping where the lowest energy \"attractive\" polaron quasiparticle is destroyed, becoming subsumed into a broad trion-hole continuum.", "We demonstrate that the crossover is accompanied by significant qualitative changes in both absorption and photoluminescence.", "In particular, with increasing temperature (or decreasing doping), the emission profile of the attractive branch evolves from a symmetric Lorentzian to an asymmetric peak with an exponential tail involving trions and recoil electrons at finite momentum.", "We discuss the effect of temperature on the coupling to light for structures embedded into a microcavity, and we show that there can exist well-defined polariton quasiparticles even when the exciton-polaron quasiparticle has been destroyed, where the transition from weak to strong light-matter coupling can be explained in terms of the polaron linewidths and spectral weights." ], [ "Introduction", "The notion of a quantum impurity, where a particle is dressed by excitations of a quantum gas, was first introduced by Landau in 1933 [1] to describe the behavior of conduction electrons in a dielectric medium.", "The properties of this dressed impurity (or polaron), such as mobility or effective mass, are changed with respect to those of an isolated impurity, leading to strong modifications of, e.g., electrical and thermal transport.", "Despite nearly a century of work, the polaron problem continues to attract significant interest, with noticeable realizations in ultracold atomic gases [2], [3], [4], $^3$ He impurities in $^4$ He [5], and the Kondo effect generated by localized magnetic impurities in a metal [6], just to cite a few examples.", "Recently, the absorption and emission spectra of doped two-dimensional (2D) transition metal dicalchogenide (TMD) monolayers [7], [8], [9], [10], [11], [12] have been interpreted in terms of a Fermi-polaron model [13], [14], [15], [16], [17], [18].", "Here, the optically generated exciton is dressed by excitations of a 2D Fermi gas of charge carriers (electrons or holes) induced by either gating or natural doping of the TMD monolayer.", "This leads to the formation of “attractive” and “repulsive” Fermi polarons, where the exciton attracts or repels the surrounding charge carriers, respectively.", "Furthermore, when such a TMD monolayer is embedded in a microcavity [7], [19], [20], [9], [21], the strong coupling between light and matter can lead to the formation of Fermi polaron polaritons.", "The intense recent activity on this topic is motivated by several features of exciton-polaron quasiparticles.", "The dressing of polaritons by a Fermi gas can boost their pairwise interactions or non-linearities [22], [20], which can potentially be exploited to reach the polariton blockade regime [23].", "Another interesting aspect of this configuration is the possibility of manipulating dressed excitons or polaritons by external electric and magnetic fields [24], [25], [26], [27].", "Finally, exciton polarons can be used as a probe of strongly correlated states of electrons, such as Wigner crystals in a TMD 
monolayer [28], [29], quantum Hall physics in TMD monolayers [26], fractional quantum Hall states in proximal graphene layers [30], and correlated Mott states of electrons in a Moiré superlattice [31].", "The theoretical analysis of the Fermi polaron in 2D semiconductors has so far mainly focused on the zero-temperature limit While the results in Refs.", "[74], [15] have been derived at finite temperature, the former only considered infinite exciton mass, while the latter did not analyze the consequences of temperature on the polaron description..", "However, a natural question to ask is how the system changes with temperature, since this has been shown to strongly modify the nature of the Fermi polaron in cold-atom experiments [33].", "In particular, there is a competing picture to that of the Fermi polaron based on few-body complexes such as excitons and trions, which provides a description for the photoluminescence of the attractive branch in doped semiconductors at finite temperature [34], [35], [36].", "In this paper, we use a finite-temperature variational approach developed in the context of cold atoms [37] to reveal the important role that temperature plays in the exciton-polaron problem.", "In particular, we demonstrate that, when the temperature becomes large compared to the Fermi energy of charge carriers, the attractive Fermi polaron merges with a continuum of states comprised of a bound trion and unbound Fermi-sea-hole—the so-called trion-hole continuum.", "Here, the attractive branch ceases to be a well-defined quasiparticle and it crosses over into an incoherent regime dominated by the trion-hole continuum.", "We discuss the implications of this crossover for the existence of polariton quasiparticles when the semiconductor is strongly coupled to light in a microcavity.", "By contrast, we find that temperature does not qualitatively change the nature of the repulsive branch at typical doping levels, only its polaron properties.", "We find that the destruction of the attractive polaron quasiparticle has little effect on the energy and spectral weight, but it strongly modifies the linewidth and the overall shape of the attractive peak in both absorption and photoluminescence.", "In particular, we observe that the attractive branch evolves from a symmetric Lorentzian shape to a strongly asymmetric profile with an exponential tail below the trion energy.", "In the latter trion-dominated regime, our theory becomes perturbatively exact since it corresponds to the lowest order term in a quantum virial expansion [38].", "Note that, even though our model is formulated to describe doped 2D semiconductors, our results can be easily generalized to 2D atomic Fermi gases [39], [40], [41], and they can straightforwardly be extended to the three-dimensional case.", "The paper is organized as follows: In Sec.", "we introduce the model describing excitons and electrons in a doped TMD monolayer.", "In Sec.", "we describe the variational approach for impurities at finite temperature, we relate it to the $T$ -matrix approach, and we define absorption and photoluminescence.", "In Sec.", "we illustrate the results for optical absorption and photoluminescence, comparing the case of finite temperature against the zero temperature limit.", "We also carry out a perturbatively exact quantum virial expansion to relate the results of this work with those of the accompanying paper [38].", "In Sec.", "we extend the formalism and results to the case where the doped TMD monolayer is embedded into a 
microcavity to allow for strong light-matter coupling.", "Finally, in Sec.", "we conclude and present future perspectives of this work." ], [ "Model", "We consider the following model Hamiltonian describing a doped 2D semiconductor such as a TMD monolayer: $\hat{H} = \hat{H}_0 + \hat{H}_{0X} + \hat{H}_{int}$ , with $\hat{H}_0= \sum _{{\bf k}} \left(\epsilon _{{\bf k}} -\mu \right) \hat{c}^{\dag }_{{\bf k}}\hat{c}^{}_{{\bf k}}$ , $\hat{H}_{0X}= \sum _{{\bf k}} \epsilon _{X{\bf k}}\, \hat{x}^{\dag }_{{\bf k}}\hat{x}^{}_{{\bf k}}$ , $\hat{H}_{int}= -\frac{v}{\mathcal {A}}\sum _{{\bf k},{\bf k}^{\prime },{\bf q}}\hat{x}^{\dag }_{{\bf k}+{\bf q}}\hat{c}^{\dag }_{{\bf k}^{\prime }-{\bf q}} \hat{c}^{}_{{\bf k}^{\prime }}\hat{x}^{}_{{\bf k}}$ , where $\mathcal {A}$ is the system area (throughout this paper we set $\hbar = k_B= 1$ ).", "The Hamiltonian $\hat{H}_0$ describes the fermionic medium-only part, i.e., a Fermi gas of either electrons or holes, in terms of the fermionic operator $\hat{c}^{\dag }_{{\bf k}}$ that creates a charge carrier with mass $m$ and dispersion $\epsilon _{{\bf k}}= |{\bf k}|^2/2m \equiv k^2/2m$ .", "For concreteness, we consider the specific case where the charge carriers are electrons, but our results can be trivially extended to the case of hole doping.", "For simplicity, we treat electrons as non-interacting, an assumption which becomes exact in the high-temperature, low-doping limit [38].", "We furthermore ignore the spin/valley degree of freedom such that all the electrons are spin polarized but distinguishable from the electron within the exciton (see Ref.", "[42] for a theoretical analysis of the case where the charge carriers are indistinguishable).", "This minimal model is reasonable for describing the polaron and trion physics in TMD monolayers, such as MoSe$_2$ monolayers [7], [14], [18].", "The medium is described within the grand canonical ensemble, where the chemical potential $\mu $ is related to the excess electron density $n=N/\mathcal {A}$ by $\mu = T \ln \left(e^{\beta E_F} -1 \right)$ , $E_F = \frac{2\pi }{m}\, n$ , where $E_F$ is the Fermi energy and $\beta = T^{-1}$ the inverse temperature.", "While we use a grand canonical formalism for the medium, it is convenient to consider only a single excitonic impurity in the canonical formalism [37], as in $\hat{H}_{0X}$ .", "Formally, this corresponds to considering uncorrelated excitons, and this description is thus appropriate for a low density of excitons, i.e., the quantum impurity limit.", "Following previous works [7], [14], [43], we treat the exciton as a structureless boson, with corresponding creation operator $\hat{x}^{\dag }_{{\bf k}}$ , mass $m_X$ and free-particle dispersion $\epsilon _{X{\bf k}}=k^2/2m_X$ .", "This is because in TMD monolayers the optically generated exciton is tightly bound and its binding energy is larger than all other relevant energy scales of the problem.", "Note that, throughout this paper, energies are measured with respect to that of the exciton at rest.", "The $\hat{H}_{int}$ term () describes the short-range attractive interaction between electrons and excitons, which can be approximated as a contact interaction [44], [18] with coupling strength $v>0$ up to a high-momentum cutoff $\Lambda $ .", "The renormalization of such an interaction can be carried out by relating the bare parameters to the trion binding energy $\varepsilon _T$ : $\frac{1}{v} = \frac{1}{\mathcal {A}}\sum _{{\bf k}}^{\Lambda } \frac{1}{\varepsilon _T+\epsilon _{X{\bf k}} + \epsilon _{{\bf k}}} \; .$", "Note that all our results are independent of cutoff since we formally take $\Lambda \rightarrow \infty $ .", "Our formalism is applicable to a general quantum impurity in a 2D Fermi gas, including both ultracold atomic gases and doped semiconductors.", "To be concrete, in the following we will consider the experimentally relevant case of electron-doped MoSe$_2$ monolayers, where $\varepsilon _T\simeq 25$ meV
[7], [12] and $m_X/m=2.05$  [45].", "In this system one can readily achieve doping densities $E_F$ in the range of 0–40 meV [7]." ], [ "Finite-temperature ansatz", "At finite temperature, one can employ the variational approach for impurity dynamics developed in Ref.", "[37] in the context of the Fermi polaron problem in ultracold atomic gases.", "This variational approach has been successfully used to model dynamical probes such as Ramsey spectroscopy [46], [37] and Rabi oscillations [47], [48], as well as static thermodynamic properties such as the impurity contact [33], [49].", "For completeness, we review the approach here, formulating it for the 2D semiconductor problem.", "We consider the case of zero impurity momentum, relevant for evaluating absorption and emission.", "The generalization to finite impurity momentum is discussed in Appendix .", "The starting point of the variational approach [37] is the time-dependent impurity operator that approximates the exact operator in the Heisenberg picture, $\\hat{x}_{{\\bf 0}}^{}(t) = e^{i\\hat{H}t} \\hat{x}_{{\\bf 0}}^{} e^{-i\\hat{H}t}$ .", "We choose the form $\\hat{x}_{{\\bf 0}}^{}(t) \\simeq \\varphi _{0}^{}(t) \\hat{x}_{{\\bf 0}}^{} + \\frac{1}{\\mathcal {A}} \\sum _{{\\bf k},{\\bf q}} \\varphi _{{\\bf k}{\\bf q}}^{}(t) \\hat{c}_{{\\bf q}}^{\\dag } \\hat{c}_{{\\bf k}}^{} \\hat{x}_{{\\bf q}-{\\bf k}}^{}\\; ,$ which is written in terms of the time-dependent variational coefficients $\\varphi _{0}^{}(t)$ and $\\varphi _{{\\bf k}{\\bf q}}^{}(t)$ .", "The truncated form of this operator is similar to that of the Chevy ansatz [50] employed for the zero-temperature state, and describes an impurity dressed by a single excitation of the fermionic medium.", "The time-dependent exciton operator (REF ) does not coincide with the exact solution of the Heisenberg equation of motion, and thus we determine the variational coefficients by minimizing the error function $\\Delta (t) = \\langle \\hat{e}(t) \\hat{e}^{\\dag }(t) \\rangle _{\\beta } \\equiv [\\hat{\\rho }_0 \\hat{e}(t) \\hat{e}^{\\dag }(t)]\\; ,$ with respect to $\\varphi _{0}^{*}(t)$ and $\\varphi _{{\\bf k}{\\bf q}}^{*}(t)$ .", "Here, the trace is over medium-only states, $\\hat{e}(t) = i\\partial _t \\hat{x}_{{\\bf 0}}^{}(t) - [\\hat{x}_{{\\bf 0}}^{}(t),\\hat{H}]$ is an error operator, and $\\hat{\\rho }_0= e^{-\\beta \\hat{H}_{0}}/Z_0$ is the medium-only density matrix with $Z_0=\\text{Tr}[e^{-\\beta \\hat{H}_{0}}]$ the medium partition function in the grand canonical ensemble.", "By considering the stationary solutions, $\\varphi _{0}^{}(t)= \\varphi _{0}^{} e^{-i E t}$ and $\\varphi _{{\\bf k}{\\bf q}}^{}(t) = \\varphi _{{\\bf k}{\\bf q}}^{} e^{-i E t}$ , we obtain the following eigenvalue problem: E0 = -vA2k,q fq(1-fk) kq Ekq = EXkq kq - v0 -vAk'(1-fk') k'q .", "Here, $E_{X{\\bf k}{\\bf q}} = \\epsilon _{X{\\bf q}-{\\bf k}} + \\epsilon _{\\bf k}- \\epsilon _{\\bf q}$ and we have used the Fermi-Dirac distribution for the electron occupation, $\\langle \\hat{c}^{\\dag }_{{\\bf k}} \\hat{c}_{{\\bf k}}^{} \\rangle _{\\beta }=f_{\\bf k}= [e^{\\beta (\\epsilon _{{\\bf k}}-\\mu )}+1]^{-1}$ , and for the hole occupation, $\\langle \\hat{c}_{{\\bf k}}^{} \\hat{c}^{\\dag }_{{\\bf k}}\\rangle _{\\beta }=1-f_{\\bf k}$ .", "Note that we have dropped terms that vanish when $\\Lambda \\rightarrow \\infty $ ; for instance, since $v\\sim 1/\\ln \\Lambda \\rightarrow 0$ , terms like $\\frac{v}{\\mathcal {A}}\\sum _{{\\bf q}^{\\prime }} f_{{\\bf q}^{\\prime }} \\varphi _{{\\bf k}{\\bf q}^{\\prime 
}}^{}$ also go to zero as $\\Lambda \\rightarrow \\infty $ .", "The set of equations in () constitutes an eigenvalue problem which can be solved to give a set of eigenvalues $E^{(n)}$ and associated eigenvectors $\\varphi _{0}^{(n)}$ and $\\varphi _{{\\bf k}{\\bf q}}^{(n)}$ , with $n$ a discrete index.", "We require that the corresponding stationary operators $\\hat{x}_{{\\bf 0}}^{(n)} = \\varphi _{0}^{(n)} \\hat{x}_{{\\bf 0}}^{} + \\frac{1}{\\mathcal {A}} \\sum _{{\\bf k},{\\bf q}} \\varphi _{{\\bf k}{\\bf q}}^{(n)} \\hat{c}_{{\\bf q}}^{\\dag } \\hat{c}_{{\\bf k}}^{} \\hat{x}_{{\\bf q}-{\\bf k}}^{}\\; ,$ are orthonormal under a thermal average, $\\langle \\hat{x}_{{\\bf 0}}^{(n)} \\hat{x}_{{\\bf 0}}^{(m)\\dag } \\rangle _{\\beta } = \\delta _{n,m}$ , implying that $\\varphi _{0}^{(n)} \\varphi _{0}^{(m)*} + \\frac{1}{\\mathcal {A}^2}\\sum _{\\textbf {k},\\textbf {q}} f_{\\bf q}(1-f_{\\bf k}) \\varphi _{{\\bf k}{\\bf q}}^{(n)} \\varphi _{{\\bf k}{\\bf q}}^{(m)*} = \\delta _{n,m} \\; .$ The stationary operators thus form a complete basis within which we can expand the approximate impurity operator (REF ), giving $\\hat{x}_{{\\bf 0}}^{}(t) = \\sum _n \\varphi _{0}^{(n)*} \\hat{x}_{{\\bf 0}}^{(n)} e^{-i E^{(n)} t}\\; ,$ where $\\varphi _{0}^{(n)*} = \\langle \\hat{x}_{{\\bf 0}}^{} \\hat{x}_{{\\bf 0}}^{(n)\\dag }\\rangle _{\\beta }$ and where we have used the boundary condition $\\hat{x}_{{\\bf 0}}^{}(0) = \\hat{x}_{{\\bf 0}}^{}$ .", "The exciton retarded Green's function in the time domain, $\\mathcal {G}_{X} (t) = -i \\theta (t) \\langle [\\hat{x}_{{\\bf 0}}^{}(t), \\hat{x}_{{\\bf 0}}^{\\dag }] \\rangle _{\\beta }=-i \\theta (t) \\langle \\hat{x}_{{\\bf 0}}^{}(t) \\hat{x}_{{\\bf 0}}^{\\dag } \\rangle _{\\beta },$ can be evaluated approximately within the variational ansatz (REF ) by using Eq.", "(REF ).", "By taking the Fourier transform into the frequency domain we obtain: $G_{X} (\\omega +i0) = \\sum _n\\frac{|\\varphi _0^{(n)}|^2}{\\omega -E^{(n)}+i0} \\; ,$ where the small imaginary part originates from the Heaviside function $ \\theta (t)$ in the retarded Green's function $\\mathcal {G}_{X} (t)$ ." 
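As an illustration of how the spectral decomposition above is used in practice, the following minimal Python sketch assembles the exciton Green's function and the corresponding spectral weight from a set of eigenvalues $E^{(n)}$ and exciton amplitudes $\varphi_0^{(n)}$, with a small broadening eta standing in for the $+i0$ prescription. Obtaining $E^{(n)}$ and $\varphi_0^{(n)}$ by diagonalizing the truncated eigenvalue problem on a finite momentum grid is assumed and not shown here.

```python
import numpy as np

def exciton_greens_function(omega, E_n, phi0_n, eta=1e-3):
    """G_X(w + i*eta) = sum_n |phi0_n|^2 / (w - E_n + i*eta) over the variational eigenstates."""
    w = np.atleast_1d(omega).astype(float)[:, None]
    weights = np.abs(np.asarray(phi0_n)) ** 2
    return np.sum(weights / (w - np.asarray(E_n)[None, :] + 1j * eta), axis=1)

def spectral_function(omega, E_n, phi0_n, eta=1e-3):
    """A(w) = -(1/pi) Im G_X(w + i*eta)."""
    return -np.imag(exciton_greens_function(omega, E_n, phi0_n, eta)) / np.pi
```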
], [ "Exciton self-energy and $T$ matrix", "As discussed above, solving the eigenvalue problem () allows us to evaluate the exciton Green's function.", "It turns out that it is numerically convenient to instead consider the exciton self-energy $\\Sigma (\\omega )$ , which is related to the Green's function via $G_{X}(\\omega ) =\\frac{1}{\\omega - \\Sigma (\\omega )}\\;.$ The expression for the exciton self-energy can be derived by manipulating the eigenvalue problem () [37], following the same procedure valid at zero temperature [51] — for completeness we describe this in Appendix .", "In this way, the exciton self-energy reads $\\Sigma (\\omega )=\\frac{1}{\\mathcal {A}}\\sum _{{\\bf q}} f_{\\bf q}\\mathcal {T}({\\bf q}, \\omega +\\epsilon _{{\\bf q}})\\; ,$ where the inverse of the $T$ matrix is defined as $\\mathcal {T}^{-1}_{}({\\bf q}, \\omega ) =-\\frac{1}{v} - \\frac{1}{\\mathcal {A}} \\sum _{{\\bf k}} \\frac{1-f_{\\bf k}}{\\omega - \\epsilon _{{\\bf k}} - \\epsilon _{X{\\bf k}-{\\bf q}} + i0}\\; .$ The same expression (REF ) can also be derived by using a diagrammatic expansion within the ladder approximation [51], [37].", "Thus, our variational approach provides an additional theoretical foundation for the ladder diagrams.", "It is profitable to separate the vacuum contribution to the $T$ matrix, describing the electron-exciton scattering in the absence of a surrounding Fermi gas, from the many-body contribution.", "To this end, we note that the logarithmic divergence of the second term in (REF ) cancels with that of the inverse contact interaction constant $v^{-1}$  (REF ), allowing the vacuum contribution $\\mathcal {T}_0$ to be calculated analytically [3]: T-1(q, ) = T-10(q, )-mb(q, ) T-10(q, ) = mr2 (-T- q22mT+i0) mb(q, ) = -1A k fk- k - Xk-q + i0 , where $m_T=m+m_X$ is the trion mass and $m_r=mm_X/m_T$ is the exciton-electron reduced mass.", "Note that the vacuum $T$ matrix (REF ) has a pole at the bound state, i.e., the trion energy, $\\omega =-\\varepsilon _T+\\epsilon _{T{\\bf q}}$ , where $\\epsilon _{T{\\bf q}} = \\frac{q^2}{2m_T}$ is the trion kinetic energy (for a summary of the trion properties at finite momentum and finite doping, see Appendix ).", "The many-body contribution $\\Pi _{mb}({\\bf q}, \\omega )$ to the $T$ matrix has to be evaluated numerically—for details on the numerical procedure see Appendix ." ], [ "Absorption and photoluminescence", "The optical absorption coincides with the exciton spectral function (up to a frequency-independent prefactor): $A(\\omega )=-\\frac{1}{\\pi }\\Im G_X(\\omega +i0) \\; .$ Indeed, in the linear-response regime, the spectral function is equivalent to the transfer rate from an initial state $|n\\rangle $ containing no excitons (the impurity vacuum) to a final state $|\\nu \\rangle $ containing a single exciton.", "Here, the impurity vacuum and single-impurity states are eigenstates of the Hamiltonian, i.e., $\\hat{H} |n\\rangle =\\hat{H}_{0} |n\\rangle = E_n |n\\rangle $ and $\\hat{H} |\\nu \\rangle = E_\\nu |\\nu \\rangle $ .", "Using Fermi's golden rule, we have $A(\\omega ) = \\sum _{n,\\nu } \\langle n| \\hat{\\rho }_{0} |n \\rangle |\\langle \\nu | \\hat{x}_{{\\bf 0}}^\\dag |n \\rangle |^2 \\delta (E_{\\nu } - E_n - \\omega ) \\; ,$ where $\\hat{\\rho }_0$ is the medium-only partition function introduced above.", "By using the completeness of the $\\lbrace |\\nu \\rangle \\rbrace $ basis, one can easily show that Eq.", "(REF ) coincides with (REF ) — see Ref. 
[52].", "Using the definition (REF ), it is now straightforward to show that the exciton optical absorption satisfies the sum-rule: $\\int _{-\\infty }^{\\infty }d\\omega \\, A(\\omega )=1\\;.$ In order to evaluate the photoluminescence, we instead consider the opposite situation, i.e., an initial state $|\\nu \\rangle $ containing the medium and the exciton, and a final state $|n\\rangle $ after the exciton has recombined to emit a photon.", "Here, we have assumed that the exciton density is low enough such that each exciton can be treated individually.", "The transfer rate at thermal equilibrium is then given by $P(\\omega ) = \\sum _{n,\\nu } \\langle \\nu | \\hat{\\rho } |\\nu \\rangle |\\langle n| \\hat{x}_{{\\bf 0}}^{} |\\nu \\rangle |^2 \\delta (E_{\\nu } - E_n - \\omega ) \\; ,$ where $\\hat{\\rho } = e^{-\\beta \\hat{H}}/Z_{int}$ is the density matrix associated with the interacting exciton and medium system () and $Z_{int} = \\sum _\\nu *{e^{-\\beta \\hat{H}}}{\\nu }$ the associated partition function.", "It is straightforward to show that the photoluminescence satisfies the following sum rule $\\int _{-\\infty }^{\\infty }d\\omega \\,P(\\omega ) = \\text{Tr} [\\hat{\\rho } \\hat{x}_{{\\bf 0}}^{\\dag } \\hat{x}_{{\\bf 0}}^{}] \\; .$ Using the properties of the delta function, the absorption $A(\\omega )$ and photoluminscence $P(\\omega )$ can be related by a detailed balanced condition [52]: $P(\\omega ) = \\displaystyle \\frac{Z_0}{Z_{int}} e^{-\\beta \\omega } A(\\omega ) \\; .$ The thermodynamic, Boltzmann-type scaling between absorption and emission profiles (REF ) is also known as the Kubo-Martin-Schwinger relation [53], [54], the Kennard-Stepanov relation [55], [56], [57] or the van Roosbroeck-Shockley relation [58], depending on the context within which it has been studied, and it applies to a broad range of systems, including semiconductors [59], [60], [61].", "It relies on the assumption that the population of excited states, here excitons, has thermalized at a temperature $T$ before the emission and that they are otherwise uncorrelated.", "Note also that the thermalization temperature $T$ can be different from the system lattice (cryostat) temperature." 
], [ "Numerical implementation", "Even though one only has to evaluate two momentum integrals to obtain the exciton Green's function in Eq.", "(REF ), namely the integrals in Eqs.", "(REF ) and (REF ), some comments about the numerical procedure are necessary.", "For the optical absorption (REF ), the numerical convergence of the integrals is much improved by shifting the frequency to the complex plane, $\\omega \\mapsto \\omega + i\\eta $ .", "Apart from helping with convergence, this shift provides a simplified description of the exciton's intrinsic broadening due to effects beyond those included in the Hamiltonian such as recombination and disorder.", "In the following, we have used the typical value $\\eta =0.04\\varepsilon _T \\simeq 1$  meV.", "Including $\\eta $ implies that the exciton spectral function decays as a Lorentzian at low and high energies.", "However, one cannot evaluate the photoluminescence using this procedure.", "Indeed, by using the detailed balance condition (REF ), the photoluminescence diverges at infinitely low frequencies if one uses a finite value of $\\eta $ to evaluate the absorption, since the Boltzmann occupation increases more rapidly than the Lorentzian decay of absorption.", "This means that photoluminescence needs to be evaluated by first calculating absorption at $\\eta =0$ and then multiplying by the Boltzmann occupation.", "The effects of the exciton intrinsic decay time can be re-introduced at the end of the calculation by convolving the photoluminescence with a Lorentzian profile with broadening $2\\eta $ .", "This procedure is described in detail in Appendix .", "Figure: (a) Spectral function A(ω)A(\\omega ) at zero temperature as a function of doping and energy.", "Blue and purple dashed lines are respectively the attractive (AA) and repulsive (RR) polaron energies, while the dashed red lines are the boundaries of the trion-hole continuum E ± E_{\\pm } (see text).", "(b) Spectral function A(ω)A(\\omega ) at temperature T=50T=50 K≃0.17ε T \\simeq 0.17 \\varepsilon _T.", "Dashed lines are the zero-temperature energies as in panel (a) — for the trion-hole continuum we plot only the upper boundary E + E_+.", "Solid lines are the attractive (blue) and repulsive (purple) branch energies at finite temperature.", "(c,d) Doping dependence of the spectral weights ZZ and half linewidths Γ\\Gamma extracted from the spectral function (see Appendix  for details of how these are determined) at T=0T=0 (dashed) and T=50T=50 K≃0.17ε T \\simeq 0.17 \\varepsilon _T (solid).", "In panel (d), note that the constant value of Γ A \\Gamma _A at T=0T=0 coincides with the intrinsic broadening." ], [ "Results", "We now illustrate our results for both optical absorption and photoluminescence.", "In order to stress the differences between the zero and finite temperature cases, let us first briefly summarize what is known about polarons at zero temperature." 
], [ "Zero temperature", "Numerous theoretical works have so far analyzed the zero-temperature properties of an impurity interacting with a free Fermi gas, in the context of both semiconductor heterostructures [13], [7], [14], [15], [16], [17], [18] and ultracold atomic gases [2], [4].", "The main polaron properties at $T=0$ are summarized in Fig.", "REF .", "In panel (a) we plot the doping and energy dependence of the spectral function.", "One observes that the optical absorption is dominated by two polaron quasiparticle resonances: the attractive polaron branch at lower energy $E_{A}$ and the repulsive polaron branch at higher energy $E_R$ .", "The energies $E_{A,R}$ are evaluated from the positions of the absorption peaks in Fig.", "REF .", "Here, both polaron branches are quasiparticles in the sense that $E_{A,R}$ satisfy the condition [62] $E_{A,R} = \\Re \\Sigma (E_{A,R}) \\; ,$ which coincides with a pole of the exciton Green's function when the corresponding imaginary part of the self-energy $\\Im \\Sigma (E_{A,R})$ becomes small.", "Note that the repulsive branch eventually stops satisfying this condition when $E_F\\gtrsim 3\\varepsilon _T$  [63], and thus ceases to be a polaron quasiparticle.", "This large doping regime is not analyzed in this work.", "In the limit of vanishing doping, the attractive branch energy recovers the trion energy $E_A \\rightarrow -\\varepsilon _T$ , while the repulsive branch energy reduces to the exciton energy, $E_R \\rightarrow 0$ .", "(We remind the reader that we measure energies with respect to that of the exciton at rest).", "However, differently from the trion, whose energy blueshifts with doping due to its interactions with the surrounding electrons [see Ref.", "[64] and Eq.", "()], the attractive polaron branch redshifts with doping.", "At the same time, the repulsive polaron branch blueshifts with doping.", "Thus, at zero temperature, as soon as doping is finite, the attractive and repulsive polaron quasiparticles are no longer the trion and the exciton, respectively, even if they recover the corresponding energies at low doping.", "Furthermore, within the single particle-hole-excitation ansatz (REF ), the effective mass of the attractive polaron [65], [66] does not evolve into the trion mass at low doping.", "As far as the coupling to light of the polaron branches is concerned, at very low doping the repulsive branch retains all the spectral weight and the attractive branch is dark, as is also the case for the trion.", "However, when doping increases, there is a transfer of oscillator strength from the repulsive to the attractive branch.", "This is shown in Fig.", "REF (c), where we plot the doping dependence of the spectral weights $Z_{A,R}$ for each branch.", "At low doping $E_F \\ll \\varepsilon _T$ , $Z_A$ grows linearly with $E_F$ , which agrees with the doping dependence of the trion oscillator strength [43].", "There are also important differences between the attractive and repulsive polaron branches at zero temperature, even though they both satisfy Eq.", "(REF ) for values of doping up to $E_F \\sim \\varepsilon _T$ .", "The attractive polaron is always a sharp resonance, with a Lorentzian broadening coinciding with the intrinsic broadening $2\\Gamma _A=2\\eta $ — see Fig.", "REF (d).", "By solving the eigenvalue problem () at zero temperature, one finds that the attractive branch energy coincides with the lowest eigenvalue energy, $E_A=E^{(n=1)}$ , which is separated from the energy of the excited states $n>1$ by a gap.", "By 
contrast, the repulsive branch does not correspond to a specific eigenstate; rather it is composed of a continuum of eigenstates with closely spaced eigenvalues, resulting in a polaron quasiparticle with finite lifetime.", "As a consequence, its broadening $2\\Gamma _R$ grows monotonically with $E_F$ and only coincides with $2\\eta $ at small doping — see Fig.", "REF (d).", "In between the attractive and repulsive branches, one can observe in Fig.", "REF (a) a continuum of trion and Fermi-sea-hole states with a well-defined relative momentum ${\\bf q}$ in the variational function $\\varphi _{{\\bf k}{\\bf q}}^{}$ .", "The boundaries of this so-called trion-hole continuum can be evaluated from the energy of a trion (see Appendix ) plus a hole separately.", "If the hole has zero momentum ${\\bf q}={\\bf 0}$ , then, because of momentum conservation, the trion is also at zero momentum and the upper boundary of the trion-hole continuum is $E_{+} = E_T^{({\\bf 0},E_F)}=-\\varepsilon _T + \\frac{m_T}{m_X} E_F\\; .$ Conversely, the energy of the trion-hole lower bound is $E_{-} = E_T^{({\\bf k}_F,E_F)} -E_F\\; ,$ where both the hole and trion are at ${\\bf q}={\\bf k}_F=k_F \\hat{\\textbf {n}}$ , with $\\hat{\\textbf {n}}$ an arbitrary direction, and $E_T^{({\\bf k}_F,E_F)}$ is the trion energy [which can be evaluated from Eq.", "(REF )].", "Thus, at zero temperature and finite doping, the attractive branch is separated from the trion-hole continuum by an energy gap.", "Note that, while this is a consequence of considering a single excitation of the medium, our results are consistent with those of diagrammatic quantum Monte Carlo [67] which demonstrated that there is a region of anomalously low spectral weight (a “dark continuum”) above the narrow attractive polaron branch.", "Furthermore, calculations that incorporate Coulomb interactions between charges also obtain a suppression of spectral weight of this continuum [15].", "Figure: (a) Spectral function A(ω)A(\\omega ) at finite doping E F =0.4ε T E_F=0.4\\varepsilon _T as a function of temperature and energy.", "The solid lines are the energies of the attractive (AA, blue) and repulsive (RR, purple) branches, while the dashed (red) line is the T=0T=0 upper boundary of the trion-hole continuum E + E_{+} in Eq. ().", "Dot-dashed lines are the AA and RR energies at E F =0.04ε T E_F=0.04\\varepsilon _T.", "(b,c) Temperature dependence of spectral weight ZZ and half linewidth Γ\\Gamma evaluated from the exciton spectral function (see Appendix ) at E F =0.04ε T E_F=0.04\\varepsilon _T (dot-dashed) and E F =0.4ε T E_F=0.4\\varepsilon _T (solid).", "Note that, in panel (c), the constant value of Γ R \\Gamma _R at E F =0.04ε T E_F=0.04\\varepsilon _T is approximately given by the intrinsic broadening η\\eta ." 
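The branch energies, spectral weights, and half linewidths shown in the figures are extracted from the computed spectral function as detailed in the Appendix, which is not reproduced here. The simple estimators sketched below (peak position, half width at half maximum, and area under the peak within a frequency window) are our own stand-in for that procedure and are meant only to illustrate the idea.

```python
import numpy as np

def peak_parameters(omega, A, window):
    """Estimate (E, Z, Gamma) for the branch whose peak lies inside window = (w_min, w_max)."""
    mask = (omega >= window[0]) & (omega <= window[1])
    w, a = omega[mask], A[mask]
    i0 = int(np.argmax(a))
    E = w[i0]                                    # branch energy from the peak position
    half = a[i0] / 2.0
    left = w[:i0][a[:i0] <= half]
    right = w[i0:][a[i0:] <= half]
    w_lo = left[-1] if left.size else w[0]
    w_hi = right[0] if right.size else w[-1]
    Gamma = 0.5 * (w_hi - w_lo)                  # half width at half maximum
    Z = a.sum() * (w[1] - w[0])                  # spectral weight carried by the window
    return E, Z, Gamma
```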
], [ "Finite temperature", "We now discuss the effect of temperature on the optical response.", "Figure REF (b-d) allows a comparison between the doping-dependent properties of optical absorption at zero and finite temperature.", "Both the energies and spectral weights of attractive and repulsive branches have a very weak dependence on temperature in the regime $T\\lesssim \\varepsilon _T$ .", "The branch energies are slightly redshifted, while $Z_A$ ($Z_R$ ) is slightly smaller (larger) compared to the zero-temperature case.", "This small variation with temperature is also observed in Fig.", "REF (a,b) for a fixed doping.", "The most important difference at finite temperature is the behavior of the trion-hole continuum, which subsumes the attractive branch when $T\\gtrsim E_F$ , i.e., at sufficiently low doping or sufficiently high temperature.", "This can be clearly seen from Fig.", "REF (b), where we no longer observe a sharp lower bound of the trion-hole continuum at low doping because, at finite temperature, the unbound hole belonging to the trion-hole continuum can thermally occupy any momentum state.", "However, the upper bound of the trion-hole continuum is still clearly visible and approximately follows the zero-temperature expression $E_{+}$  (REF ).", "Similarly, for a fixed doping in Fig.", "REF (a), we observe that the trion-hole continuum is only well-separated from the attractive branch at low temperatures.", "Since the spectral weight of the trion-hole continuum is small, its merging with the attractive branch only slightly affects the attractive peak energy.", "However, the disappearance of the attractive polaron quasiparticle strongly modifies the attractive-branch linewidth $2\\Gamma _A$ .", "In particular, we observe in Fig.", "REF (d) that $\\Gamma _A$ has a striking non-monotonic dependence at low doping, while it decreases towards its zero-temperature value (corresponding to the intrinsic broadening $\\eta $ ) when $E_F$ increases.", "Likewise, increasing the temperature at fixed doping can substantially increase $\\Gamma _A$ from $\\eta $ , as shown in Fig.", "REF (c).", "As we will discuss in Sec.", "REF , this behavior signals a crossover from a coherent Fermi polaron regime to an incoherent trion-dominated regime, where there no longer exists a well-defined attractive quasiparticle that is separated from the trion-hole continuum.", "The repulsive branch, on the other hand, remains a polaron quasiparticle with a finite lifetime (broadening) for the dopings considered in this work ($E_F \\lesssim \\varepsilon _T$ ).", "In particular, we see that temperature does not change the nature of the repulsive branch in Figs.", "REF and REF , but it can lead to a faster increase of the half linewidth $\\Gamma _R$ with increasing doping [Fig.", "REF (d)].", "Figure: (a) Spectral function A(ω)A(\\omega ) and (b) Lorentzian convolved photoluminescence P ¯(ω)\\bar{P}(\\omega )for different dopings E F E_F and at a fixed temperature of T=50T=50 K≃0.17ε T \\simeq 0.17 \\varepsilon _T.", "The attractive branch photoluminescence peaks are rescaled to unity.Figure: (a) Spectral function A(ω)A(\\omega ) and (b) Lorentzian convolved photoluminescence P ¯(ω)\\bar{P}(\\omega )for different values of the temperature TT and at a fixed doping E F =0.1ε T E_F=0.1\\varepsilon _T.", "The attractive branch photoluminescence peaks are rescaled to unity.Figure: (a) Unconvolved photoluminescence P(ω)P(\\omega ) and (b) Lorentzian convolved photoluminescence P ¯(ω)\\bar{P}(\\omega ) for different 
dopings E F E_F, at a fixed temperature of T=50T=50 K≃0.17ε T \\simeq 0.17 \\varepsilon _T, and for a frequency range around the attractive branch only.", "The photoluminescence peaks are rescaled to unity.", "In panel (a) the vertical dashed lines are the upper boundary of the trion-hole continuum E + E_{+} at zero temperature (see text).We now discuss the shape of the optical response profiles and how they evolve with either doping or temperature.", "Figures REF and REF display both the absorption and the Lorentzian convolved photoluminescence (see Appendix ) at fixed temperature and fixed doping, respectively.", "The repulsive polaron quasiparticle shows up approximately as a Lorentzian symmetric profile in both absorption and photoluminescence spectra, with a full width at half maximum (FWHM) that increases with increasing $E_F$ .", "Note that the FWHM of the repulsive branch only has a weak dependence on temperature in Fig.", "REF since $E_F$ has been fixed to a low value such that the intrinsic width $2\\eta $ dominates [see Fig.", "REF (c)].", "By contrast, the shape of the attractive branch is strongly modified by temperature: in the low-temperature (high-doping) regime $E_F > T$ , it is described by a Lorentzian with FWHM $2\\eta $ , while for $E_F < T$ , it develops a strongly asymmetric shape with an exponential tail below the trion energy.", "This evolution in the asymmetry of the attractive branch once again indicates a crossover from a Fermi-polaron quasiparticle to a continuum of trion states.", "In Fig.", "REF we further analyze the shape of the attractive peak at low doping $E_F \\lesssim T$ , comparing the Lorentzian convolved photoluminescence $\\bar{P} (\\omega )$ with the “bare” photoluminescence $P(\\omega )$ , where we have removed the effects of any intrinsic exciton broadening.", "In panel (a), we observe a sharp onset of the photoluminescence which approximately coincides with the upper boundary of the trion-hole continuum at zero temperature, $E_+$ .", "Thus, according to Eq.", "(REF ), it blueshifts with increasing doping.", "As shown in Fig.", "REF (b), any intrinsic broadening $\\eta $ only smooths out the sharp onset, while it has little effect on the position of the peak.", "When $E_F$ increases, the sharp onset tends to disappear as the attractive peak redshifts and detaches from the trion-hole continuum.", "Our calculated profiles for the attractive branch are in excellent quantitative agreement with recent experiments in the high-temperature (low-doping) regime [12], as discussed in the accompanying paper [38].", "The exponential tail of the asymmetric attractive peak has previously been modelled within a trion picture for the case of photoluminescence [68], [69], [70], [71].", "There, the tail is ascribed to the kinetic energy of remaining electrons after the exciton within each trion has decayed into a photon.", "This description in terms of electron recoil can be formally derived from our theory in the limit of weak interactions, as we will discuss in Sec.", "REF ." 
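The Lorentzian-convolved photoluminescence shown above can be sketched as follows: the bare emission computed at eta = 0 is convolved with a Lorentzian of half width eta (full width 2*eta), as described in the numerical-implementation section. A uniform frequency grid is assumed for simplicity, and this is an illustrative stand-in rather than the paper's implementation.

```python
import numpy as np

def lorentzian_convolve(omega, P, eta):
    """Convolve P(w) with a normalized Lorentzian of half width eta (FWHM 2*eta)."""
    dw = omega[1] - omega[0]                       # uniform grid assumed
    shifts = omega - omega[len(omega) // 2]        # kernel centered on the grid
    kernel = (eta / np.pi) / (shifts**2 + eta**2)
    kernel /= kernel.sum() * dw                    # normalize on the truncated grid
    return np.convolve(P, kernel, mode="same") * dw
```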
], [ "Loss of the attractive polaron quasiparticle: polaron to trion-hole continuum crossover", "In this section, we use the pole condition in Eq.", "(REF ) to characterize the crossover from a well-defined polaron quasiparticle to a trion-hole continuum with increasing temperature (decreasing doping).", "In order to find the values of temperature and doping at which this crossover occurs, we plot in Fig.", "REF the local maximum value of the function $\\omega - \\Re \\Sigma (\\omega )$ for $\\omega < 0$ and identify the curve of doping versus fugacity $z=e^{\\beta \\mu } = e^{\\beta E_F} -1$ at which this maximum value is zero.", "For $E_F \\lesssim \\varepsilon _T$ , we find that this occurs roughly when $z \\sim 1$ and thus $E_F \\sim 0.7 T$ .", "On the left of this curve, we lose the attractive polaron quasiparticle, as the condition (REF ) cannot be satisfied, i.e., $E_A - \\Re \\Sigma (E_A)\\ne 0$ .", "On the right of this curve, instead, the system is in the polaron regime where (REF ) is satisfied.", "In order to further illustrate this crossover, we compare the results for the polaron energies, spectral weights and linewidths extracted from the spectral function with those obtained by treating the polaron as a well-defined quasiparticle [62].", "In the latter case, the polaron properties can be obtained directly from the expression of the impurity self-energy.", "Close to a quasiparticle resonance, the exciton Green's function can be approximated as $G_X(\\omega ) \\operatornamewithlimits{\\simeq }_{\\omega \\simeq E_{j}} \\frac{Z_{j}}{\\omega -E_{j} +i\\Gamma _{j}} \\; ,$ where $j=A,R$ is the two-branch index, and the quasiparticle energy $E_{j}$ is a solution of Eq.", "(REF ).", "The pole weight or residue $Z_{j}$ is $Z_{j} = \\left(1 - \\left.\\displaystyle \\frac{\\partial \\Re \\Sigma (\\omega )}{\\partial \\omega }\\right|_{E_{j}}\\right)^{-1} \\; ,$ and the polaron damping rate is $\\Gamma _{j} = - Z_{j}\\Im \\Sigma (E_{j})\\; .$ We compare the results for $E_j$ , $Z_j$ , and $\\Gamma _j$ obtained with both methods in Fig.", "REF .", "We observe that the positions of the poles coincide to high accuracy with those of the spectral function maxima — see panel (a).", "For the repulsive branch, both the spectral weight and the broadening from the quasiparticle theory are in good agreement with those evaluated from the spectral function, even when the linewidth is non-negligible.", "By contrast, for the attractive branch, the results depart from one another when we approach the (gray) region $E_F \\lesssim 0.7 T$ where there is no attractive quasiparticle, and the quasiparticle description breaks down since, according to Eq.", "(REF ), $1/Z_A\\rightarrow 0$ when $\\rm {max}[\\omega -\\Re \\Sigma (\\omega )]=0$ .", "In the following section, we analyze the system properties well inside the trion-hole continuum (gray) regime, where the attractive branch is no longer a polaron quasiparticle.", "Here, for temperatures $T \\gtrsim E_F$ , we can apply a systematic quantum virial expansion.", "This connects the results of this work with those obtained in the accompanying paper [38].", "Figure: Maximum value of ω-ℜΣ(ω)\\omega - \\Re \\Sigma (\\omega ) for ω<0\\omega < 0 as a function of the Fermi gas fugacity z=e βμ z=e^{\\beta \\mu } and doping.", "The black dashed line describes the values of zz and E F E_F at which this maximum is zero.", "On the left of this curve (blue area) the attractive branch is not a polaron quasiparticle (no QP), while on the right (red area) the attractive 
branch is a well defined polaron quasiparticle (QP).Figure: (a) Polaron energies E A,R E_{A,R}, (b) spectral weights Z A,R Z_{A,R}, and (c) half linewidths Γ A,R \\Gamma _{A,R}, evaluated from the exciton spectral function (solid lines) (as described in Appendix ) and from quasiparticle expressions (symbols).", "The attractive branch stops to be a quasiparticle resonance in the gray region at low doping when E F ≲0.7TE_F \\lesssim 0.7 T.In panel (c) we plot Γ A,R -η\\Gamma _{A,R}-\\eta is order to compare the numerical results with the analytical estimate of the repulsive branch broadening evaluated at η=0\\eta =0 at small doping (dashed line), derived within the virial expansion in Sec.", ".Temperature is fixed at T=50T=50 K≃0.17ε T \\simeq 0.17 \\varepsilon _T." ], [ "Virial expansion and connection to the trion wave function at high temperature or low doping", "As discussed, at high temperature or low doping such that the fugacity $z= e^{\\beta \\mu }\\lesssim 1$ , the attractive polaron quasiparticle disappears and the attractive branch only consists of a broad continuum.", "In this limit, one can formally apply a perturbatively exact quantum virial expansion in the fugacity [38].", "We will now briefly discuss how this expansion is related to the polaron theory, and how this allows us to demonstrate that the trion picture results of Refs.", "[68], [12] are contained within the polaron formalism.", "When $T \\gg E_F$ and $z \\simeq \\beta E_F\\ll 1$ , we can formally expand the Fermi occupation function which then coincides with the Boltzmann distribution $f_{\\bf k}\\simeq z e^{-\\beta \\epsilon _{{\\bf k}}}$ .", "Likewise, to leading order in $z$ we have ${\\mathcal {T}}\\simeq {\\mathcal {T}}_0$ — see Eq.", "(REF ).", "Within this expansion, the exciton self-energy in Eq.", "(REF ) becomes ()zAqe-q T0(q,+q).", "In other words, to leading order in the fugacity, the self-energy is determined by two-body interactions weighted by a Boltzmann distribution.", "Focusing first on the attractive branch, the dominant contribution to the self-energy arises from the pole of the $T$ matrix at the trion energy.", "We therefore expand the $T$ matrix for $\\omega \\simeq -\\varepsilon _T + \\epsilon _{T{\\bf q}}$ , with the result T0(q,+q) ZT+q-Tq+T+i0, where $Z_T\\equiv 2\\pi \\varepsilon _T/m_r$ .", "The $T$ matrix can be expressed in terms of the vacuum ($E_F=0$ ) trion wave function $\\tilde{\\eta }_{{\\bf q}_r}^{({\\bf 0})}$ for a contact electron-exciton interaction (see Appendix ).", "In our case, the relative momentum is ${\\bf q}_r={\\bf q}m_X/m_T$ , and therefore the kinetic energy of the relative motion is $\\epsilon _{r{\\bf q}_r}=\\epsilon _{\\bf q}-\\epsilon _{T{\\bf q}}$ .", "Using the expression for the vacuum trion wave function in the center of mass frame, $\\tilde{\\eta }_{{\\bf q}_r}^{({\\bf 0})}= \\displaystyle \\frac{\\sqrt{Z_T}}{\\varepsilon _T + \\epsilon _{r{\\bf q}_r}}$ (see Appendix ), we have T0(q,+q) |(T+rqr)qr(0)|2+rqr+T+i0.", "Here, the numerator is derived by manipulating the relation between the trion wave function and $Z_T$ , and is momentum independent.", "However, this is a special property of the contact electron-exciton interaction, and Eq.", "(REF ) in fact yields the correct generalization for an arbitrary electron-exciton interaction that leads to the formation of a trion.", "In other words, it can be applied for any realistic trion wave function.", "For details of the generalization to arbitrary interactions, see Appendix .", "Using the approximation for 
the vacuum $T$ matrix, Eq.", "(REF ), the self-energy in Eq.", "(REF ) becomes $\\Sigma _{A}(\\omega <-\\varepsilon _T) \\simeq \\frac{z}{\\mathcal {A}}\\sum _{\\bf q}e^{-\\beta \\epsilon _{\\bf q}} |\\tilde{\\eta }_{{\\bf q}_r}^{({\\bf 0})}|^2 \\\\\\times \\left[{\\mathcal {P}}\\frac{|\\epsilon _{r{\\bf q}_r}+\\varepsilon _T|^2}{\\omega +\\epsilon _{r{\\bf q}_r}+\\varepsilon _T}-i\\pi \\omega ^2 \\delta (\\omega +\\epsilon _{r{\\bf q}_r}+\\varepsilon _T)\\right]\\;,$ where ${\\mathcal {P}}$ denotes the principal value.", "This explicitly relates the self-energy calculated within the polaron theory to the trion wave function in the regime where the fugacity is small, and importantly it applies for any realistic trion wave function.", "A similar approach involving trion wave functions has been used to calculate absorption [34].", "Note that the real part diverges logarithmically when $\\omega +\\varepsilon _T\\rightarrow 0^-$ , and therefore it cannot in general be neglected close to the onset of the attractive branch.", "For the repulsive branch, we can again apply Eq.", "(REF ) to find the leading contribution at small fugacity.", "In the regime where $E_F\\ll \\varepsilon _T$ , the width of the repulsive branch is much smaller than the trion binding energy, and to lowest order in the fugacity, we can simply evaluate the repulsive polaron self-energy within the repulsive branch by taking $\\omega =0$ .", "We then use the fact that, to logarithmic accuracy, the logarithmic behavior at small momentum is generic for the vacuum $T$ matrix for any short-range interaction [72], [73] and thus we have R(0) zA2mrqe-q(T/rqr)+i z(m/mr) T2+2(eET)[(eET)-i] , where $\\gamma _{\\rm E}\\simeq 0.5772$ is the Euler-Mascheroni constant.", "Here, we have evaluated the integral by noting that the integral is dominated by $\\epsilon _{\\bf q}\\sim 1/\\beta $ for small $\\beta \\varepsilon _T$ , and the inclusion of $\\gamma _{\\rm E}$ originates from expanding the integral to the first subleading order in $\\beta \\varepsilon _T$ .", "The self-energies in Eqs.", "(REF ) and (REF ) can now be directly inserted into the Dyson equation (REF ) to yield the absorption in Eq.", "(REF ) or the photoluminescence in Eq.", "(REF ).", "To be explicit, within the virial expansion we have the exciton spectral function A() = -1Im [(--T)-A()+1-R(0)].", "This yields a broad continuum for the attractive branch, where the spectral weight vanishes at $\\omega +\\varepsilon _T\\rightarrow 0^-$ , and when we are far below this onset, we have $\\Sigma _{A}(\\omega )/\\omega \\rightarrow 0$ .", "Therefore we have an exponential tail modulated by the trion wave function: A()*- z  e(+T)mTmX|2mr|+T|(0)|2.", "The peak of the absorption is between the onset and the tail, and therefore it will not correspond to the vacuum trion energy.", "This can lead to an overestimate of the trion binding energy in experiments [12], as shown in the accompanying paper [38].", "From Eq.", "(REF ), we find that the repulsive branch is a Lorentzian of width $\\Gamma _{R}=\\pi (m/m_r) E_F/[\\pi ^2+\\ln ^2(e^{\\gamma _{\\rm E}}\\beta \\varepsilon _T)]$ .", "We have compared in Fig.", "REF (c) this expression against the numerical evaluation of the repulsive branch half linewidth $\\Gamma _R - \\eta $ (where we have removed the effect of the intrinsic exciton lifetime), finding excellent agreement at low doping, inside the region of validity of the virial expansion.", "For the photoluminescence, we find P() -Z0Zint1Im [(--T)e--A()+1-R(0)], where we have used 
the fact that the width of the repulsive branch is much smaller than the temperature, and thus the repulsive branch is very weakly modified by the Boltzmann factor." ], [ "Connection to the trion theory of electron recoil", "Finally, we discuss how our theory relates to the calculation of electron recoil in previous trion-picture calculations such as by Esser et al., Refs.", "[68], [69].", "As we shall demonstrate, that theory corresponds to a weak-interaction limit of the low doping/high temperature version of our polaron theory.", "To see this, we take the limit of a small self-energy in the Dyson equation in Eq.", "(REF ) as follows: GX()1+12() .", "Since the $1/\\omega $ terms only have a pole at $\\omega =0$ (i.e., at the bare exciton energy), the attractive branch is obtained from the imaginary part of the self-energy.", "The detailed balance equation (REF ) then implies that the corresponding PL from the attractive branch is PA()-1Z0Zint e-2ImA() Z0Zint zAqe-(+q)|qr(0)|2(+rqr+T) Z0Zint z  e(m+mTT)mX|2mr|+T|(0)|2(--T) , where we used Eq.", "(REF ).", "Up to frequency-independent prefactors, this precisely matches the result of Esser et al.", "Thus, the trion-picture PL is already contained within the polaron picture.", "However, as we have already argued, the trion picture calculation assumes that we are in a weakly interacting limit where the self-energy is much smaller than the frequency.", "This assumption manifestly breaks down close to the onset of the attractive branch, where the real part of the self-energy diverges, and hence the previous trion picture calculation fails to correctly describe the onset and shape of the attractive branch.", "On the other hand, it correctly identifies the exponential tail of the PL, which is dominated by the imaginary part of the self-energy.", "For the repulsive branch, the trion picture again uses the weak-interaction limit of the Dyson equation (REF ), this time neglecting even the second term on the right hand side.", "This results in PR()Z0Zint(), with a small correction that counteracts the oscillator strength transferred to the attractive branch [69].", "We see that the trion picture fails to describe the Lorentzian shape of the repulsive polaron, which arises from the exciton-electron scattering states as well as the full Dyson series.", "We now extend the formalism of Sec.", "to describe recent experiments where a doped TMD monolayer was embedded into a microcavity [7], [19], [20], [9], [21].", "In this case, the strong coupling between light and matter can lead to the formation of Fermi polaron-polaritons [74], [7].", "A particularly important question is whether there still exists strong coupling to light either because of temperature induced broadening effects or because the attractive branch does not correspond to a well-defined quasiparticle.", "To describe the light-matter coupled system, we add two terms describing cavity photons and the photon-exciton interaction to the Hamiltonian (): H = H0 + H0X+ H0C+ Hint+HXC H0C= k Cka†kak HXC=2k( x†kak + H.c. 
) .", "Photons are described by the bosonic creation operator $\\hat{a}^{\\dag }_{{\\bf k}}$ and, in a cavity, they acquire a quadratic dispersion $\\epsilon _{C{\\bf k}} = \\delta + {\\bf k}^2/2m_C$  [75], where $\\delta $ is the photon-exciton energy detuning.", "Typically the photon mass in a planar microcavity is $m_C = 10^{-5} m_X$  [75].", "The term $\\hat{H}_{XC}$  () describes an exciton recombining to emit a photon and vice versa, with a coupling strength given by the Rabi splitting $\\Omega $  [75].", "In monolayer TMDs, this is typically in the range of 1-2 $\\varepsilon _T$  [76], [77].", "In order to derive the photon Green's function and the optical absorption in the strong coupling regime, one can follow the same procedure employed in Sec. .", "The difference is that now we formulate a variational ansatz for the time-dependent photon operator, with an analogous form to (REF ): $\\hat{a}_{{\\bf 0}}^{}(t) \\simeq \\alpha _{0}^{}(t) \\hat{a}_{{\\bf 0}}^{} + \\frac{1}{\\mathcal {A}} \\sum _{{\\bf k},{\\bf q}} \\varphi _{{\\bf k}{\\bf q}}^{}(t) \\hat{c}_{{\\bf q}}^{\\dag } \\hat{c}_{{\\bf k}}^{} \\hat{x}_{{\\bf q}-{\\bf k}}^{}+ \\varphi _{0}^{}(t) \\hat{x}_{{\\bf 0}}^{} \\; .$ We neglect the dressing of the photon operator by a particle-hole excitation $\\frac{1}{\\mathcal {A}} \\sum _{{\\bf k},{\\bf q}} \\alpha _{{\\bf k}{\\bf q}}^{}(t) \\hat{c}_{{\\bf q}}^{\\dag } \\hat{c}_{{\\bf k}}^{} \\hat{a}_{{\\bf q}-{\\bf k}}^{}$ .", "This term involves photon recoil, and therefore implies energies far off resonance from the exciton and trion energies because of the extremely small mass of the photon.", "To obtain the eigenvalue problem for the light-matter coupled system, we introduce the error operator corresponding to the photon $\\hat{e}(t) = i\\partial _t \\hat{a}_{{\\bf 0}}^{}(t) - [\\hat{a}_{{\\bf 0}}^{}(t),\\hat{H}]$ and minimize the error function, Eq.", "(REF ), with respect to the variational coefficients $\\alpha _{0}^{*}(t)$ , $\\varphi _{{\\bf k}{\\bf q}}^{*}(t)$ , and $\\varphi _{0}^{*}(t)$ .", "Considering the stationary solutions, we find E0 = 2 0 -vA2k,q fq(1-fk) kq Ekq = EXkq kq - v0 -vAk'(1-fk') k'q E0 = 0 + 2 0 , where we have again neglected terms that vanish in the limit $\\Lambda \\rightarrow \\infty .$ These equations reduce to those for the exciton polaron, Eq.", "(), when we take $\\Omega =0$ and $\\alpha _0^{}=0$ .", "By following an analogous derivation to that in Sec.", ", one can easily demonstrate that the retarded photon Green's function in the frequency domain is given by: $G_{C} (\\omega ) = \\sum _n\\frac{|\\alpha _0^{(n)}|^2}{\\omega -E^{(n)}+i0} \\; .$ For the same reason that we can neglect the particle-hole dressing of the photon operator in the ansatz (REF ), the expressions for the exciton self-energy $\\Sigma (\\omega )$ in Eqs.", "(REF ) and (REF ) are not affected by the coupling to light.", "Therefore, we can derive the coupled exciton and photon Green's functions in the strong coupling regime by inverting the matrix [78] $\\mathbb {G}(\\omega ) = \\begin{pmatrix}\\omega -\\Sigma (\\omega ) & -\\Omega /2 \\\\-\\Omega /2 & \\omega -\\delta \\end{pmatrix}^{-1} \\; ,$ and evaluating the diagonal elements, giving: GX() =G11()=1- () - (/2)2- GC() =G22()=1--(/2)2-() .", "The expression () for the exciton Green's function in terms of the self-energy can also be obtained by following the same procedure as in Appendix , where we have first eliminated the photon amplitude by solving Eq.", "(), i.e., $\\alpha _0 = \\frac{\\Omega }{2} \\varphi _0^{} 
(E-\\delta )^{-1}$ .", "Finally, in the strong coupling regime, optical absorption coincides (up to a frequency-independent prefactor) with the photon spectral function [79]: $A_{C}(\\omega )=-\\frac{1}{\\pi }\\Im G_C(\\omega ) \\; ,$ which we will focus on in the following.", "We plot in Fig.", "REF the finite-temperature photon spectral function at normal incidence as a function of energy and detuning $\\delta $ .", "When $T \\ll E_F \\lesssim \\Omega $ , the strong coupling to light leads to three polariton branches, the lower (LP), middle (MP), and upper polariton (UP), as can be seen in Fig.", "REF (a,c).", "The existence of these polariton modes is connected to the attractive and repulsive exciton branches in the presence of doping.", "We can capture the behavior of the Fermi polaron-polaritons by employing a three coupled oscillator model, which yields the following simplified expression for the photon Green's function GC() = .", "(- H)-1 |11 H = -i C A/2 R/2 A/2 EA - iA 0 R/2 0 ER - iR .", "Here, we have explicitly included a cavity photon lifetime $1/\\eta _C$ and we have used the extracted parameters for the exciton-polaron branches in the absence of coupling to light, namely the energies $E_{A,R}$ , spectral weights $Z_{A,R} = (\\Omega _{A,R}/\\Omega )^2$ , and half linewidths $\\Gamma _{A,R}$ .", "Note that we cannot evaluate the polariton branch energies as the (complex) eigenvalues of the matrix $\\tilde{H}$  [80].", "Rather, we have to evaluate first the photon spectral function (REF ) and then determine the polariton energies from the photon spectral function peak positions.", "The comparison in Fig.", "REF between the LP, MP and UP energies obtained in this way and the full calculation demonstrates essentially perfect agreement.", "Remarkably, we find that a strong enough light-matter coupling can produce well-defined polariton quasiparticles (where $\\Re \\tilde{G}_{C}^{-1} (\\omega )=0$ ) for the lower and middle branches even when there is no attractive polaron quasiparticle and Eq.", "(REF ) is not satisfied.", "However, once the linewidths $2\\Gamma _{A,R}$ approach $\\Omega _{A,R}$ , the energy splitting between branches closes, indicating a loss of strong coupling.", "We observe this in the low-doping regime for LP and MP [Fig.", "REF (b)], and in the high-doping regime for MP and UP [Fig.", "REF (c,d)].", "Within the three coupled oscillator model, this transition from weak to strong light-matter coupling approximately occurs when $2\\Gamma _{A,R} \\gtrsim \\Omega _{A,R}$ , as expected, though there are small deviations when the attractive branch is no longer a well-defined polaron quasiparticle.", "To conclude, the effect of temperature on this transition can be easily accounted for by considering its effect on the quasiparticle linewidths and spectral weights (see Sec.", ")." 
], [ "Conclusions and perspectives", "We have studied the optical properties of a doped two-dimensional semiconductor at a finite temperature using a Fermi-polaron approach involving a single excitation of the fermionic medium.", "Our results reveal that the attractive branch can experience a smooth transition from a regime where it is a well-defined quasiparticle to a regime where is subsumed into a broad continuum of trion-hole scattering states.", "This crossover results in a strong change in the spectral lineshape and can be driven by either decreasing doping or increasing temperature, but it cannot occur at zero temperature.", "While the Fermi polaron theory is able to capture both limits, theories based on the trion wavefunction necessarily only apply in the limit where there is no a well-defined quasiparticle.", "In particular, we formally show that the trion theory corresponds to a weak-interaction limit of our finite-temperature Fermi polaron theory.", "In the regime of strong light-matter coupling, we have shown how the temperature can modify the properties of the Fermi polaron-polaritons.", "We demonstrate that the strong-to-weak coupling crossover observed at finite temperature for the attractive branch at low enough doping and the repulsive branch in the high doping regime can be explained in terms of the linewidths and spectral weights of the two branches.", "In future studies, it would be interesting to investigate how the quasiparticle transition of the attractive branch, driven by either temperature or doping, would affect the interactions between exciton impurities.", "In particular, common descriptions of polaron-polaron interactions use Landau's theory of dilute solutions [81], which assumes well-defined quasiparticles.", "Such interactions could, for instance, be measured using coherent multidimensional spectroscopy on gated 2D materials, similarly to recent experiments on intrinsically doped samples [82].", "We gratefully acknowledge useful discussions with Dmitry Efimkin.", "AT and FMM acknowledge financial support from the Ministerio de Ciencia e Innovación (MICINN), project No.", "AEI/10.13039/501100011033 (2DEnLight).", "FMM acknowledges financial support from the Proyecto Sinérgico CAM 2020 Y2020/TCS-6545 (NanoQuCo-CM).", "JL and MMP are supported through Australian Research Council Future Fellowships FT160100244 and FT200100619, respectively.", "JL, BM and MMP also acknowledge support from the Australian Research Council Centre of Excellence in Future Low-Energy Electronics Technologies (CE170100039)." 
], [ "Finite impurity momentum", "For completeness, we generalize the finite-temperature polaron formalism illustrated in Sec.", "to absorption and photoluminescence at finite momentum.", "This can in principle be measured in doped semiconductors using angle-resolved photoemission spectroscopy [83], [84].", "Similarly to Eq.", "(REF ), we approximate the exciton operator in the Heisenberg picture at finite momentum $\\hat{x}_{{\\bf Q}}^{}(t) = e^{i\\hat{H}t} \\hat{x}_{{\\bf Q}}^{} e^{-i\\hat{H}t}$ as $\\hat{x}_{{\\bf Q}}^{}(t) \\simeq \\varphi _{0}^{({\\bf Q})}(t) \\hat{x}_{{\\bf Q}}^{} + \\frac{1}{\\mathcal {A}} \\sum _{{\\bf k},{\\bf q}} \\varphi _{{\\bf k}{\\bf q}}^{({\\bf Q})}(t) \\hat{c}_{{\\bf q}}^{\\dag } \\hat{c}_{{\\bf k}}^{} \\hat{x}_{{\\bf Q}+{\\bf q}-{\\bf k}}^{}\\; .$ The derivation then follows similarly to the zero momentum case.", "We minimize the error function $\\Delta _{{\\bf Q}}(t) = \\langle \\hat{e}_{{\\bf Q}}^{}(t) \\hat{e}_{{\\bf Q}}^{\\dag }(t) \\rangle _{\\beta }$ , where $\\hat{e}_{{\\bf Q}}^{}(t) = i\\partial _t \\hat{x}_{{\\bf Q}}^{}(t) - [\\hat{x}_{{\\bf Q}}^{}(t),\\hat{H}]$ , obtaining the following eigenvalue problem: E(Q)0(Q) = XQ 0(Q) -vA2k,q fq(1-fk) kq(Q) E(Q)kq(Q) = EXQkq kq(Q) - v0(Q) -vAk'(1-fk') k'q(Q) , where $E_{X{\\bf Q}{\\bf k}{\\bf q}} = \\epsilon _{X{\\bf Q}+{\\bf q}-{\\bf k}} + \\epsilon _{\\bf k}- \\epsilon _{\\bf q}$ .", "The exciton Green's function can thus be written in terms of the eigenvalues $E^{({\\bf Q},n)}$ and eigenvectors $\\varphi _{0}^{({\\bf Q},n)}$ as $G_{X} ({\\bf Q}, \\omega ) = \\sum _n\\frac{|\\varphi _0^{({\\bf Q},n)}|^2}{\\omega -E^{({\\bf Q},n)}+i0} \\; .$ Equivalently, the exciton Green's function can be written in terms of the exciton self-energy at finite momentum GX(Q,) =1- XQ - (Q,) (Q,) =1Aq fqT(q+Q, +q) , where the inverse of the $T$ matrix is defined in Eq.", "(REF ).", "This follows the same procedure as in the case of ${\\bf Q}={\\bf 0}$ , which is described in Appendix .", "The absorption of a photon with momentum ${\\bf Q}$ is given by $A({\\bf Q},\\omega ) =-\\frac{1}{\\pi }\\Im G_X({\\bf Q},\\omega ) \\; .$ Absorption and photoluminescence can be connected using detailed balance conditions as derived in the main text, starting from the Fermi's golden rule definitions: A(Q,) = n, n| 0 |n || xQ†|n |2 (En - ) P(Q,) = n, | ||n| xQ ||2 (En - ) .", "Thus, the detailed balance condition is identical to that at zero momentum: $P({\\bf Q},\\omega ) = \\displaystyle \\frac{Z_0}{Z_{int}} e^{-\\beta \\omega } A({\\bf Q},\\omega ) \\; .$" ], [ "Exciton self-energy", "In this appendix, we demonstrate that the exciton self-energy can be derived starting from the eigenvalue equations () and that it coincides with the expression (REF ) that can be derived within a $T$ matrix formalism.", "Let us start from the eigenvalue problem () which can be rewritten in terms of the auxiliary function $\\chi _{{\\bf q}}^{}$ : q =vAk (1-fk) kq E 0 =-1Aq fqq E kq = EXkq kq-v0- q .", "Introducing Eq.", "() into Eq.", "() and solving for $\\chi _{\\bf q}$ we obtain $\\chi _{{\\bf q}}^{} = \\varphi _{0}^{} \\left[\\frac{1}{v}+\\frac{1}{\\mathcal {A}}\\sum _{{\\bf k}} \\frac{1-f_{\\bf k}}{E-E_{X{\\bf k}{\\bf q}}} \\right]^{-1} \\; ,$ where we have used that $(v/\\mathcal {A}) \\sum _{\\bf k}(1-f_{{\\bf k}})/(E-E_{X{\\bf k}{\\bf q}}) \\rightarrow -1$ in the limit $\\Lambda \\rightarrow \\infty $ .", "Substituting (REF ) into Eq.", "(), one thus finally obtains an implicit equation for the energy $E$ : $E = - \\frac{1}{\\mathcal {A}}\\sum 
_{{\\bf q}} f_{\\bf q}\\left[\\frac{1}{v} +\\frac{1}{\\mathcal {A}}\\sum _{{\\bf k}}\\frac{1-f_{\\bf k}}{E-E_{X{\\bf k}{\\bf q}}} \\right]^{-1} \\; .$ This expression coincides with the pole of the exciton Green's function (REF ) and therefore we can identify the self-energy term correcting the exciton energy because of the exciton-electron interaction as the term: $\\Sigma (E) \\equiv - \\frac{1}{\\mathcal {A}}\\sum _{{\\bf q}} f_{\\bf q}\\left[\\frac{1}{v} +\\frac{1}{\\mathcal {A}}\\sum _{{\\bf k}}\\frac{1-f_{\\bf k}}{E -E_{X{\\bf k}{\\bf q}}} \\right]^{-1} \\; .$ Recalling that $E_{X{\\bf k}{\\bf q}} = \\epsilon _{X{\\bf q}-{\\bf k}} + \\epsilon _{\\bf k}-\\epsilon _{\\bf q}$ , we obtain exactly the expression (REF ) in the main text.", "By a completely equivalent procedure, we can arrive at the self-energy in the case of finite exciton momentum, Eq.", "()." ], [ "Trion", "In this appendix, we summarize for completeness the known trion properties within our model Hamiltonian ().", "At zero temperature, a trion with momentum ${\\bf Q}$ in the presence of a Fermi sea $|FS\\rangle = \\prod _{|{\\bf q}|<k_F} c_{{\\bf q}}^\\dag |0\\rangle $ is described as: $|T_2^{({\\bf Q})}\\rangle = \\displaystyle \\frac{1}{\\sqrt{\\mathcal {A}}}\\sum _{|{\\bf k}|>k_F} \\eta _{{\\bf k}}^{({\\bf Q})} \\hat{x}^{\\dag }_{{\\bf Q}-{\\bf k}} \\hat{c}_{{\\bf k}}^\\dag | FS\\rangle \\; .$ The trion wave function $\\eta _{{\\bf k}}^{({\\bf Q})}$ satisfies the Schrödinger equation [64] $E_T^{({\\bf Q},E_F)} \\eta _{{\\bf k}}^{({\\bf Q})} = \\left(\\epsilon _{X{\\bf Q}-{\\bf k}} + \\epsilon _{{\\bf k}}\\right) \\eta _{{\\bf k}}^{({\\bf Q})} - \\frac{v}{\\mathcal {A}}\\sum _{|{\\bf k}^{\\prime }|>k_F}^{\\Lambda } \\eta _{{\\bf k}^{\\prime }}^{({\\bf Q})} \\; ,$ and the trion energy can be evaluated by solving the implicit equation: $\\displaystyle \\frac{1}{v} = \\displaystyle \\frac{1}{\\mathcal {A}}\\sum _{|{\\bf k}|>k_F}^{\\Lambda } \\displaystyle \\frac{1}{-E_T^{({\\bf Q},E_F)} + \\epsilon _{X{\\bf Q}-{\\bf k}} + \\epsilon _{\\bf k}} \\; .$ Note that some care has to be taken when comparing the trion energy with that of a bare exciton, since the Fermi sea in Eq.", "(REF ) involves one less electron.", "This difference has been taken into account in Eqs.", "(REF ) and (REF ) of the main text.", "We now discuss the solution of Eq.", "(REF ) in various limits.", "When $E_F=0$ , it is profitable to introduce the relative momentum in the center of mass frame ${\\bf q}_r$ , so that the electron momentum becomes ${\\bf k}= {\\bf q}_r + {\\bf Q}_c$ , with ${\\bf Q}_c=\\frac{m}{m_T} {\\bf Q}$ , where $m_T = m_X + m$ is the trion mass, and the exciton momentum becomes ${\\bf Q}-{\\bf k}= {\\bf Q}_X - {\\bf q}_r$ , with ${\\bf Q}_X= \\frac{m_X}{m_T} {\\bf Q}$ .", "Now one has that relative and center of mass kinetic energies factorize, $\\epsilon _{X{\\bf Q}-{\\bf k}} + \\epsilon _{{\\bf k}} = \\epsilon _{X{\\bf Q}_X-{\\bf q}_r} + \\epsilon _{{\\bf q}_r+{\\bf Q}_c} = \\epsilon _{r{\\bf q}_r} + \\epsilon _{T{\\bf Q}}$ , where $\\epsilon _{r{\\bf q}_r} = \\frac{q_r^2}{2m_r}$ , $m_r = m m_X/m_T$ is the trion reduced mass, and $\\epsilon _{T{\\bf Q}} = \\frac{Q^2}{2m_T}$ .", "Thus, the trion energy and wave function are given by ET(Q,EF=0) = -T + TQ qr(Q) = ZTT + rqr = qr(0) , where $Z_T = 2\\pi \\varepsilon _T/m_r$ and where $\\tilde{\\eta }_{{\\bf q}_r}^{({\\bf Q})} = \\eta _{{\\bf q}_r + {\\bf Q}_c}^{({\\bf Q})}$ .", "At finite doping and zero momentum ${\\bf Q}={\\bf 0}$ , Eqs.", "(REF ) and (REF ) can also be solved 
exactly to give $E_T^{({\bf 0},E_F)} = - \varepsilon _T+\frac{m_T}{m_X}E_F,\qquad \eta _{\bf k}^{({\bf 0})} = \frac{\sqrt{Z_T}}{\varepsilon _T - \frac{m_T}{m_X} E_F + \epsilon _{r{\bf k}}} .$", "At finite doping and finite momentum, Eq.", "(REF ) can be solved analytically [66] to find an implicit equation for $E_T^{({\bf Q},E_F)}$ .", "Here, one finds that the trion acquires a finite momentum when $E_F > \frac{m_X m_r}{m^2} \varepsilon _T$ .", "We can now analyze the trion coupling strength to light.", "At zero doping, $E_F=0$ , trions do not couple to light, and thus cannot be probed optically.", "This is because the matrix element between a trion state and a cavity photon plus a majority particle at zero momentum $|C+1\rangle = \hat{a}_{\bf 0}^\dag \hat{c}_{\bf 0}^\dag |0 \rangle $ of the light-matter interaction term $\sum _{{\bf k}} \hat{x}^{\dag }_{{\bf k}}\hat{a}^{}_{{\bf k}}$ is given by $\langle T_2^{({\bf 0})} | \sum _{{\bf k}} \hat{x}^{\dag }_{{\bf k}}\hat{a}^{}_{{\bf k}} |C+1\rangle = \displaystyle \frac{1}{\sqrt{\mathcal {A}}} \eta _{{\bf 0}}^{({\bf 0})}\; .$ Taking the squared amplitude of this matrix element, we see that the coupling to light of a single trion scales as $1/\mathcal {A}$ , which is vanishingly small.", "On the other hand, if we have $N$ electrons within $\mathcal {A}$ , then the coupling to light scales instead as $N/\mathcal {A}=n\sim E_F$ .", "Thus, even though the coupling per electron vanishes, the net effect of having an electronic medium is to create a continuum of states, with a total spectral weight that scales as $E_F$ , in agreement with Ref.", "[43]." ], [ "Numerical evaluation of the many-body $T$ matrix and the exciton self-energy", "In contrast to the vacuum $T$ matrix (REF ), the many-body correction in Eq.", "(REF ) cannot be evaluated analytically.", "The approach we illustrate in the following allows us to treat it in a simpler way and to evaluate it numerically, even when the intrinsic exciton linewidth $2\eta $ is set to zero.", "Let us start by re-writing (REF ) in an equivalent form: $\Pi _{mb}({\bf q},\omega ) = - \frac{1}{\mathcal {A}} \sum _{{\bf k}} \displaystyle \frac{f_{{\bf k}+ \frac{m}{m_T}{\bf q}}}{\omega - \epsilon _{{\bf k}+\frac{m}{m_T}{\bf q}} - \epsilon _{X{\bf k}-\frac{m_X}{m_T}{\bf q}} + i0^+} \\= - \int \frac{dkk}{2\pi } \frac{\int \frac{d\theta }{2\pi } f_{{\bf k}+ \frac{m}{m_T}{\bf q}}}{\omega - \epsilon _{\bf q}\frac{m}{m_T} - \epsilon _{\bf k}\frac{m_T}{m_X} + i0^+}\;.$ The $k$ -integral can then be evaluated numerically by applying the Sokhotski–Plemelj theorem $\int dx\,\frac{F(x)}{x + i0^+}=- i\pi F(0)+\mathcal {P}\!\int dx\, \frac{F(x)}{x} ,$ where $\mathcal {P}[\dots ]$ is the integral principal part.", "As far as the ${\bf q}$ -integral for evaluating the exciton self-energy (REF ) is concerned, it can be re-written in the following equivalent form by defining $y=q^2$ : $\Sigma (\omega ) =\int \frac{dy}{4\pi } \frac{f_{\sqrt{y}}}{ \mathcal {T}^{-1} (\sqrt{y} , \omega + \epsilon _{\sqrt{y}})} \; .$ This integral has a pole when $\Re \mathcal {T}^{-1} = 0 = \Im \mathcal {T}^{-1}$ .", "Using a model involving contact interactions in 2D means that such a pole always exists, and furthermore it is a simple pole [85].", "Thus, we define $y^*= y^*(\omega )$ to be the pole of the $T$ matrix.", "In this case, we can apply the Cauchy residue theorem to write: $\Sigma (\omega ) = - i\pi \; \text{sign}\left[\Im \mathcal {T}^{-1} (\sqrt{y^*}, \omega + \epsilon _{\sqrt{y^*}})\right] \frac{\Theta (y^*)}{4\pi } \mathcal {R}es \left[\frac{f_{\sqrt{y}}}{\Re \left[\mathcal 
{T}^{-1}(\\sqrt{y},\\omega + \\epsilon _{\\sqrt{y}})\\right]}\\right]_{y^*}\\\\+ \\mathcal {P} \\int \\frac{dy}{4\\pi } \\frac{f_{\\sqrt{y}}}{\\Re \\mathcal {T}^{-1}(\\sqrt{y},\\omega +\\epsilon _{\\sqrt{y}}) + i\\Im \\mathcal {T}^{-1} (\\sqrt{y},\\omega + \\epsilon _{\\sqrt{y}})} \\; ,$ where the principal part prescription is used in the vicinity of the pole at $y^*$ , and the residue can be evaluated as $\\mathcal {R}es \\left[\\frac{f_{\\sqrt{y}}}{\\Re \\left[\\mathcal {T}^{-1}(\\sqrt{y},\\omega + \\epsilon _{\\sqrt{y}})\\right]}\\right]_{y^*}\\\\= \\displaystyle \\frac{f_{\\sqrt{y^*}}}{\\displaystyle \\left| \\frac{\\partial \\Re \\mathcal {T}^{-1}(\\sqrt{y},\\omega +\\epsilon _{\\sqrt{y}}) }{\\partial y}\\right|_{y^*}}\\; .$ From the expression of the exciton self-energy, we can get the exciton Green's function (REF ), absorption (REF ), and photoluminesence (REF ).", "These results allow us to evaluate the absorption and photoluminescence without any intrinsic homogeneous broadening for the exciton.", "For absorption alone, the integration method illustrated here is unnecessary, as one can conveniently shift the frequency to the complex plane, $\\omega \\mapsto \\omega + i\\eta $ , where $\\eta $ is related to the exciton decay rate, giving well converged results.", "However, as explained in the main text, for the luminescence one has to adopt the numerical method illustrated here.", "Thus, in order to compare with experiments, we introduce the effect of lifetime broadening at the end by convolving the photoluminescence with a Lorentzian profile with broadening $\\eta $ : $\\bar{P}(\\omega ^{\\prime }) = \\int _{-\\infty }^{\\infty } d\\omega ^{\\prime } P(\\omega ^{\\prime }) \\displaystyle \\frac{1}{\\pi } \\displaystyle \\frac{\\eta }{(\\omega - \\omega ^{\\prime })^2 + \\eta ^2}\\; .$ Figure: Spectral function A(ω)A(\\omega ) at temperature T=50T=50 K≃0.17ε T \\simeq 0.17\\varepsilon _T for two different dopings: E F =0.04ε T E_F= 0.04\\varepsilon _T (a) and E F =0.8ε T E_F=0.8 \\varepsilon _T (b).", "From the spectral function profile we extract the attractive and repulsive polaron energy E A,R E_{A,R} as the peak position, thelinewidth 2Γ A,R 2\\Gamma _{A,R} as the peak FWHM, and the polaron spectral weight Z A,R,continuum Z_{A,R,continuum} as the area under the peak.While at low doping in panel (a) the trion-hole continuum is merged with the attractive branch (which, here, ceases to be a quasiparticle resonance) and Z continuum =0Z_{continuum} = 0, at larger doping in panel (b) the polaron resonances are separated by the trion-hole continuum with a finite spectral weight Z continuum ≠0Z_{continuum} \\ne 0." 
], [ "Extracting polaron properties at finite temperature", "In this appendix we illustrate how to extract the polaron energies, spectral weights, and half linewidths of Figs.", "REF and  REF from the spectral function profile $A(\\omega )$ .", "This is illustrated in Fig.", "REF .", "The polaron energy $E_{A,R}$ is evaluated as the spectral function peak position, the linewidth $2\\Gamma _{A,R}$ as the full width at half maximum (FWHM) and the spectral weight as the area under the peak.", "We have used the location of the spectral function minima as limits for the integrals evaluating the spectral weights: If there is only one minimum, this is the upper (lower) bound for evaluating $Z_A$ ($Z_R$ ), while if there are two minima, these are the limits for evaluating the trion-hole continuum spectral weight $Z_{continuum}$ .", "This criterion is the origin of the discontinuity of the residues in Figs.", "REF , REF , and REF .", "Note that, in the cases shown in Fig.", "REF , at low doping of panel (a) the trion-hole continuum does not appear because is merged with the attractive branch, which in this case, ceases to be a polaron quasiparticle resonance in the sense explained in Sec.", "REF .", "This can also be appreciated by the non-Lorentzian and asymmetric form of the spectral function around $E_A$ .", "By contrast, at larger doping in panel (b), the attractive branch is separated from the continuum, is a polaron quasiparticle in this case and recovers the Lorentzian symmetric shape." ], [ "Relationship between electron-exciton $T$ matrix and the trion wave function", "While Eq.", "(REF ) was derived for the special case of contact exciton-electron interactions, it is in fact general and can be applied for any realistic trion wave function.", "To see this, we note that the generalization of Eq.", "(REF ) to arbitrary two-body transition operator $\\hat{T}_0$ is ()zAqe-q T0(+q)Q,qr.", "We have defined the two-body state ${{\\bf Q},{\\bf q}_r}\\equiv \\hat{c}_{\\bf q}^\\dag x_{\\bf 0}^\\dag {vac}$ in terms of the relative momentum ${\\bf q}_r$ and the total momentum ${\\bf Q}$ which are related to the electron momentum via ${\\bf q}_r+{\\bf Q}m/m_T={\\bf q}$ and the exciton momentum via $-{\\bf q}_r+{\\bf Q}m_X/m_T={\\bf 0}$ .", "To evaluate the matrix element of the transition operator for an electron-exciton interaction that yields a trion bound state, we use the relationship between the Green's operator and the transition operator at energy $E$ : G(E) = G0(E)+G0(E)T(E)G0(E).", "Here, $\\hat{G}(E)=\\frac{1}{E-\\hat{H}+i0}$ and $\\hat{G}_0(E)=\\frac{1}{E-\\hat{H}_0-\\hat{H}_{0X}+i0}$ (which, for the two-body problem, should be evaluated in the canonical ensemble, effectively taking $\\mu =0$ ).", "Close to the trion resonance, we can neglect the first term, and we therefore have the spectral representation T0(E)Q,qr G0-1(E)G(E)G0-1(E)Q,qr =(E-q)2G(E)Q,qr (E-q)2|qr(0)|2E+T-TQ+i0 (-T+TQ-q)2|qr(0)|2E+T-TQ+i0, where in the third step we approximated the expectation value by inserting the trion state, and in the last step we used the pole condition.", "Taking $E=\\omega +\\epsilon _{\\bf q}$ and using ${\\bf Q}={\\bf q}$ we see that Eq.", "() reduces to Eq.", "(REF ) which demonstrates that it holds for arbitrary electron-exciton interactions that lead to trion formation.", "In the special case of contact interactions, the numerator of Eq.", "() reduces to $Z_T$ due to Eq.", "(), and therefore it reproduces the pole expansion in Eq.", "(REF ) as it should.", "Note that the fact that the $T$ 
matrix in this case is independent of the relative momentum is a special feature of contact interactions." ] ]
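As a consistency check of the pole expansion discussed above, the following sketch assumes the standard 2D contact-interaction form of the vacuum $T$ matrix, $\mathcal {T}_0(E)=\frac{2\pi /m_r}{\ln (\varepsilon _T/(-E))}$ for $E<0$ (the explicit expression of Eq. (REF ) is not reproduced in this excerpt, so this particular form is an assumption), and verifies numerically that near the trion pole it reduces to $Z_T/(E+\varepsilon _T)$ with $Z_T=2\pi \varepsilon _T/m_r$, as quoted in the text.

```python
import numpy as np

# Assumed 2D contact-interaction vacuum T matrix with a bound state at E = -eps_T:
#   T0(E) = (2*pi/m_r) / ln(eps_T / (-E)),  E < 0.
# Near the pole it should reduce to T0(E) ~ Z_T/(E + eps_T) with Z_T = 2*pi*eps_T/m_r.
m_r, eps_T = 0.5, 1.0                     # placeholder units
Z_T = 2*np.pi*eps_T/m_r

def T0(E):
    return (2*np.pi/m_r)/np.log(eps_T/(-E))

for x in [1e-2, 1e-3, 1e-4]:              # E = -eps_T + x, approaching the pole
    E = -eps_T + x
    print(x, T0(E)*(E + eps_T)/Z_T)       # ratio tends to 1 as x -> 0
```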
2212.05635
[ [ "Stability of the Catenoid for the Hyperbolic Vanishing Mean Curvature\n Equation Outside Symmetry" ], [ "Abstract We study the problem of stability of the catenoid, which is an asymptotically flat rotationally symmetric minimal surface in Euclidean space, viewed as a stationary solution to the hyperbolic vanishing mean curvature equation in Minkowski space.", "The latter is a quasilinear wave equation that constitutes the hyperbolic counterpart of the minimal surface equation in Euclidean space.", "Our main result is the nonlinear asymptotic stability, modulo suitable translation and boost (i.e., modulation), of the $n$-dimensional catenoid with respect to a codimension one set of initial data perturbations without any symmetry assumptions, for $n \\geq 5$.", "The modulation and the codimension one restriction on the data are necessary and optimal in view of the kernel and the unique simple eigenvalue, respectively, of the stability operator of the catenoid.", "In a broader context, this paper fits in the long tradition of studies of soliton stability problems.", "From this viewpoint, our aim here is to tackle some new issues that arise due to the quasilinear nature of the underlying hyperbolic equation.", "Ideas introduced in this paper include a new profile construction and modulation analysis to track the evolution of the translation and boost parameters of the stationary solution, a new scheme for proving integrated local energy decay for the perturbation in the quasilinear and modulation-theoretic context, and an adaptation of the vectorfield method in the presence of dynamic translations and boosts of the stationary solution." ], [ "Introduction", "Catenoids are among the simplest examples of a non-flat minimal hypersurface in Euclidean space.", "With respect to the Lorentzian generalization of the minimal hypersurface equation, which is a quasilinear wave equation that will be referred to as the hyperbolic vanishing mean curvature equation (HVMC equation) in this paper, these minimal hypersurfaces furnish examples of nontrivial asymptotically flat time-independent solutions.", "From this point of view, a fundamental question is that of the nonlinear asymptotic stability of catenoids as solutions to the HVMC equation – this will be the subject of the present paper.", "Our main result is the nonlinear asymptotic stability, modulo suitable translation and boost, of the $n$ -dimensional catenoid as a solution to the HVMC equation with respect to a “codimension-1” set of initial data perturbations without any symmetry assumptions, for $n \\ge 5$ (see Theorem REF below).", "The codimension-1 condition is necessary and sharp, in view of the fact that the linearized HVMC equation around the catenoid admits a one-parameter family of growing solutions corresponding to the negative eigenvalue of the stability operator (second variation of area).", "The necessity for an adjustment of the translation and boost parameters (i.e., modulation) stems from the kernel of the linearized equation arising from Lorentz invariance.", "Our result extends the pioneering work of Donninger–Krieger–Szeftel–Wong [12], which considers the same problem in radial symmetry for $n \\ge 2$ , to the non-symmetric context albeit for $n \\ge 5$ .", "Beyond the intrinsic interest in the asymptotic stability problem for the catenoid, our motivation for this work is to take on specific challenges for soliton stability problems brought on by the quasilinear nature of the corresponding hyperbolic evolution equations.", 
"We are hopeful for applications of our approach in the study of well-known topological solitons arising in quasilinear wave equations such as the Skyrmion for the Skyrme model." ], [ "Stability Problems for the Hyperbolic Vanishing Mean Curvature Equation", "We begin by giving a precise formulation of the hyperbolic vanishing mean curvature equation.", "Let $(\\mathbb {R}^{1+(n+1)},\\eta )$ be the $1+(n+1)$ dimensional Minkowski space with the standard metric $\\begin{split}\\eta =-\\mathrm {d}X^0\\otimes \\mathrm {d}X^0+\\mathrm {d}X^1\\otimes \\mathrm {d}X^1+\\dots + \\mathrm {d}X^{n+1}\\otimes \\mathrm {d}X^{n+1},\\end{split}$ and let $\\mathcal {M}$ be an $n+1$ dimensional connected orientable manifold without boundary.", "We consider embeddings $\\Phi :\\mathcal {M}\\rightarrow \\mathbb {R}^{1+{(n+1)}}$ such that the pull-back metric $\\Phi ^\\ast \\eta $ is Lorentzian (i.e., $\\Phi (\\mathcal {M})$ is timelike), and which satisfy $\\begin{split}\\Box _{\\Phi ^\\ast \\eta }\\Phi =0.\\end{split}$ The vector $\\Box _{\\Phi ^\\ast \\eta }\\Phi $ is the mean curvature vector of $\\Phi (\\mathcal {M})$ as a hypersurface in $\\mathbb {R}^{1+(n+1)}$ , and equation (REF ) is the requirement that this hypersurface have vanishing mean curvature (VMC).", "Embeddings satisfying these requirements are called (timelike) maximal and equation (REF ) is referred to as the hyperbolic vanishing mean curvature equation (HVMC equation).", "When there is no risk of confusion, by a slight abuse of notation, we will often identify $\\mathcal {M}$ with its image $\\Phi (\\mathcal {M})$ and simply refer to $\\mathcal {M}$ as a hypersurface of $\\mathbb {R}^{1+(n+1)}$ .", "The HVMC equation is the hyperbolic analogue of the elliptic minimal surface equation (or the parabolic mean curvature flow), and arises variationally as the Euler-Lagrange equations of the area functional $\\begin{split}\\mathcal {A}(\\Phi )=\\int _{\\mathcal {M}}\\sqrt{|\\det \\Phi ^\\ast \\eta |}.\\end{split}$ Maximal hypersurfaces are also called membranes when $n=2$ , and strings when $n=1$ .", "As (REF ) is a system of wave equations, it is natural to consider the associated Cauchy problem which can be described as follows.", "Given a coordinate patch $U\\subseteq \\mathcal {M}$ with coordinates $s=(s^0,\\dots ,s^n)$ , let $U_0:=\\lbrace s\\in U~\\vert ~ s^0=0\\rbrace \\subset U$ , and consider two functions $\\Phi _0,\\Phi _1\\colon U_0\\rightarrow \\mathbb {R}^{1+(n+1)}$ .", "We assume that $\\Phi _0$ is an embedding, that $\\Phi _0^\\ast \\eta $ is Riemannian, and that the metric $g_{\\mu \\nu }:={\\left\\lbrace \\begin{array}{ll}\\eta (\\partial _{\\mu }\\Phi _0,\\partial _\\nu \\Phi _0),\\quad &\\mu ,\\nu =1,\\dots ,n\\\\\\eta (\\Phi _1,\\partial _\\nu \\Phi _0),\\quad &\\mu =0,\\nu =1,\\dots n\\\\\\eta (\\partial _\\mu \\Phi _0,\\Phi _1),\\quad &\\mu =1,\\dots n, \\nu =0\\\\\\eta (\\Phi _1,\\Phi _1),\\quad &\\mu =\\nu =0\\end{array}\\right.", "}$ satisfies $\\sup _{U_0}\\det g<0$ .", "We then ask if there is a neighborhood $V\\subseteq U$ of $U_0$ such that there is a timelike embedding $\\Phi \\colon V\\rightarrow \\mathbb {R}^{1+(n+1)}$ satisfying (REF ), as well as $\\Phi \\vert _{U_0}=\\Phi _0$ and $\\partial _{0}\\Phi \\vert _{U_0}=\\Phi _1$ .", "Due to the diffeomorphism invariance of the problem, the solution cannot be unique.", "But, it is shown in [4] that this problem admits a solution $\\Phi $ and that any two solutions $\\Phi $ and $\\Psi $ are related by a diffeomorphism.", "In the present work we are interested in 
manifolds $\\mathcal {M}$ that can be written as direct products $\\mathcal {M}=\\mathbb {R}\\times M$ (in fact, we will soon restrict attention to the case where $M$ is a catenoid).", "In this case we use $(t,x)$ to denote points in $\\mathbb {R}\\times M$ .", "Given $\\Phi _0:M\\rightarrow \\lbrace 0\\rbrace \\times \\mathbb {R}^{n+1}\\subseteq \\mathbb {R}^{1+(n+1)}$ and a family of future directed timelike vectors $\\Phi _1:M\\rightarrow \\mathbb {R}^{1+(n+1)}$ , by finite speed of propagation and standard patching arguments, the result of [4] implies the existence of an interval $I\\ni 0$ , and a unique solution $\\Phi \\colon I\\times M\\rightarrow \\mathbb {R}^{1+(n+1)}$ to (REF ) such that $\\Phi (t,M)\\subseteq \\lbrace t\\rbrace \\times \\mathbb {R}^{n+1}$ , $\\Phi (0)=\\Phi _0$ , and $\\partial _t\\Phi (0)=\\Phi _1$ .", "Having a satisfactory local theory, one can consider the question of global (in time) dynamics of solutions to (REF ).", "For instance, in the context of the Cauchy problem formulated on $\\mathbb {R}\\times M$ , one can ask if the local solution extends from $I\\times M$ to all of $\\mathbb {R}\\times M$ , and if so, how it behaves as $t\\rightarrow \\pm \\infty $ .", "A special class of maximal hypersurfaces for which the global dynamics are easily described are the products of Riemannian VMC surfaces in $\\mathbb {R}^{n+1}$ with $\\mathbb {R}$ .", "More precisely, if $\\Phi _0\\colon M\\rightarrow \\mathbb {R}^{n+1}$ is a Riemannian embedding with vanishing mean curvature, then $\\Phi \\colon \\mathbb {R}\\times M\\rightarrow \\mathbb {R}^{1+(n+1)}$ given by $\\Phi (t,x)=(t,\\Phi _0(x))$ satisfies (REF ) with $\\Phi (0)=\\Phi _0$ and $\\partial _t\\Phi (0)=(1,0)$ .", "We refer to such product solutions as stationary solutions.", "A natural question regarding the long time dynamics of solutions of (REF ) is the stability of stationary solutions.", "The simplest case is when $\\Phi _0$ is a linear embedding of a hyperplane in $\\mathbb {R}^{n+1}$ .", "When $n\\ge 3$ , it was proved in [7] that small perturbations of a hyperplane solution lead to global solutions which decay back to a hyperplane.", "A similar result when $n=2$ was later proved in [24].", "Analytically, the results in [7], [24] amount to proving global existence and decay estimates for solutions to a system of quasilinear wave equations with small initial data on Minkowski space.", "From this point of view, hyperplanes can be thought of as the zero solution to (REF ).", "The first stability result for a non-flat stationary solution of (REF ) is contained in [11], [12] for the Lorentzian catenoid.", "The Riemannian catenoid is a VMC surface of revolution in $\\mathbb {R}^{n+1}$ (see Section REF for a more detailed description), and the Lorentzian catenoid is the corresponding stationary solution.", "The authors in [12] consider radial perturbations of the $(1+2)$ dimensional Lorentzian catenoid that satisfy an additional discrete symmetryThis is an important technical assumption, which avoids the resonances of the linearized operator in dimension two.. 
Their main result asserts that if the initial data belong to a codimension one subset in an appropriate topology, then the corresponding solution can be extended globally and converges to a Lorentzian catenoid as $t\rightarrow \infty $ .", "The codimension one restriction on the initial data is necessary and sharp (see the comments following Theorem REF ).", "A similar result for radial perturbations of the Lorentzian helicoid was subsequently obtained in [27].", "From a PDE point of view, [12], [27] establish the codimension one (asymptotic) stability of a time-independent solution to a quasilinear wave equation on Minkowski space under radial symmetry.", "In this work we prove the codimension one stability of the $(1+n)$ dimensional Lorentzian catenoid in dimensions $n\ge 5$ without any symmetry restrictions on the perturbations.", "In the previous paragraphs we mentioned the results on the HVMC equation which are most directly related to our work.", "A more complete account is given in Section REF below.", "Before providing a simplified statement of our main theorem, we describe the catenoid solution in more detail in the next subsection." ], [ "The Catenoid Solution", "We recall some basic geometric properties of the catenoid.", "In $\mathbb {R}^{n+1}$ let ${\underline{X}}=(X^{\prime },X^{n+1})$ where $X^{\prime }=(X^1,\dots ,X^n)$ denote the first $n$ coordinates and $X^{n+1}$ the last coordinate.", "Suppose $I$ is an interval in $\mathbb {R}$ (possibly all of $\mathbb {R}$ ), which we identify with the $X^{n+1}$ axis, and let $\begin{split}\mathfrak {f}: I \rightarrow [1,\infty )\end{split}$ be a given even function with $\mathfrak {f}(0)=1$ , $\mathfrak {f}^{\prime }(0)=0$ .", "(One could also consider $\mathfrak {f}(0)=\mathfrak {f}_0>0$ , corresponding to a different radius for the neck of the catenoid, but this can be reduced to the case considered here by a rescaling ${\underline{X}}\mapsto \lambda {\underline{X}}$ .)", "Consider the surface of revolution obtained by rotating the graph of $\begin{split}X^{n+1} \mapsto X^{\prime } =(\mathfrak {f}(X^{n+1}),0,\dots ,0)\end{split}$ about the $X^{n+1}$ -axis.", "It can be parameterized as $\begin{split}F:I\times \mathbb {S}^{n-1}\rightarrow \mathbb {R}^{n+1},\qquad F(\mathfrak {z},\omega )=(\mathfrak {f}(\mathfrak {z})\Theta (\omega ),\mathfrak {z}),\end{split}$ where $\Theta :\mathbb {S}^{n-1}\rightarrow \mathbb {R}^{n}$ is the standard embedding of the unit sphere.", "As a level set our surface is $\begin{split}\lbrace G({\underline{X}}):=|X^{\prime }|^2-\mathfrak {f}^2(X^{n+1})=0\rbrace ,\end{split}$ and the unit outward normal is $\begin{split}\mathfrak {n}= \frac{\nabla G}{|\nabla G|}= \frac{1}{(1+(\mathfrak {f}^{\prime }(\mathfrak {z}))^2)^{\frac{1}{2}}}(\Theta (\omega ),-\mathfrak {f}^{\prime }(\mathfrak {z})).\end{split}$ Below we identify $\partial _\mathfrak {z}$ and $\partial _{a}$ , $a\in \lbrace \omega ^1,\dots ,\omega ^{n-1}\rbrace $ , with their images in $\mathbb {R}^{n+1}$ under $dF$ , that is, $\begin{split}\partial _\mathfrak {z}= (\mathfrak {f}^{\prime }\Theta ,1),\qquad \partial _{a}= (\mathfrak {f}\partial _{a}\Theta ,0).\end{split}$ Differentiating (REF ) with respect to the ambient covariant derivative $\nabla $ we find that $\begin{split}\nabla _{\partial _\mathfrak {z}}\mathfrak {n}= -\frac{\mathfrak {f}^{\prime \prime }}{(1+(\mathfrak {f}^{\prime })^2)^{\frac{3}{2}}}\partial _\mathfrak {z},\qquad 
\\nabla _{\\partial _a}\\mathfrak {n}= \\frac{1}{\\mathfrak {f}(1+(\\mathfrak {f})^2)^{\\frac{1}{2}}}\\partial _a.\\end{split}$ From this we see that the second fundamental form $ {\\mathrm {I\\!I}} $ of the surface, as a matrix with components $\\langle \\nabla _{\\partial _j} {\\bf n},\\partial _i\\rangle $ , $i,j\\in \\lbrace \\mathfrak {z},\\omega ^1,\\dots ,\\omega ^{n-1}\\rbrace $ , is (here $\\mathring{{g}}$ denotes the round metric on $\\mathbb {S}^{n-1}$ ) $\\begin{split} {\\mathrm {I\\!I}} =\\begin{pmatrix} \\lambda _1(1+(\\mathfrak {f}^{\\prime })^2)&0\\\\0&{\\lambda }\\mathfrak {f}^2\\mathring{{g}} \\end{pmatrix},\\qquad \\lambda _1:=-\\frac{\\mathfrak {f}^{\\prime \\prime }}{(1+(\\mathfrak {f}^{\\prime })^2)^{\\frac{3}{2}}}, \\quad {\\lambda }:=\\frac{1}{\\mathfrak {f}(1+(\\mathfrak {f}^{\\prime })^2)^{\\frac{1}{2}}}.\\end{split}$ It follows that the principal curvatures of the surface are $\\lbrace \\lambda _1, \\dots ,\\lambda _n\\rbrace $ where $\\lambda _1$ is as above and $\\lambda _j={\\lambda }$ for $j\\ge 2$ .", "The mean curvature of the embedding is then $\\begin{split}{\\bf H}=\\frac{1}{n}\\Bigg (-\\frac{\\mathfrak {f}^{\\prime \\prime }}{(1+(\\mathfrak {f}^{\\prime })^2)^{\\frac{3}{2}}}+\\frac{n-1}{\\mathfrak {f}(1+(\\mathfrak {f}^{\\prime })^2)^{\\frac{1}{2}}}\\Bigg ).\\end{split}$ Therefore, for the surface to have zero mean curvature, $\\mathfrak {f}$ must satisfy the ODE $\\frac{\\mathfrak {f}^{\\prime \\prime }}{(1+(\\mathfrak {f}^{\\prime })^2)^{\\frac{3}{2}}}-\\frac{n-1}{\\mathfrak {f}(1+(\\mathfrak {f}^{\\prime })^2)^{\\frac{1}{2}}}=0,\\qquad \\mathfrak {f}(0)=1,\\quad \\mathfrak {f}^{\\prime }(0)=0.$ Definition 1.1 The Riemannian Catenoid is the surface of revolution ${\\underline{\\mathcal {C}}}$ in $\\mathbb {R}^{n+1}$ defined by (REF ), where $\\mathfrak {f}$ satisfies (REF ).", "The Lorentzian Catenoid is the surface $\\mathcal {C}:=\\mathbb {R}\\times {\\underline{\\mathcal {C}}}$ in the Minkowski space $\\mathbb {R}^{1+(n+1)}$ .", "There is a qualitative difference between the shape of the catenoid in dimension $n=2$ and dimensions $n\\ge 3$ .", "Indeed, (REF ) implies the following ODE for $\\mathfrak {z}$ : $\\begin{split}\\frac{\\mathrm {d}^2 \\mathfrak {z}}{\\mathrm {d}\\mathfrak {f}^2}+\\frac{n-1}{\\mathfrak {f}}\\frac{\\mathrm {d}\\mathfrak {z}}{\\mathrm {d}\\mathfrak {f}}+\\frac{n-1}{\\mathfrak {f}}\\big (\\frac{\\mathrm {d}\\mathfrak {z}}{\\mathrm {d}\\mathfrak {f}}\\big )^3=0.\\end{split}$ From this, one can derive that $I=\\mathbb {R}$ when $n=2$ and $I=(-S,S)$ when $n\\ge 3$ , where (see for instance [44] for more details on these calculations) $\\begin{split}S:=\\int _{1}^{\\infty }\\frac{\\mathrm {d}\\mathfrak {f}}{\\sqrt{\\mathfrak {f}^{2(n-1)}-1}}<\\infty .\\end{split}$ As we will see more explicitly below, one significance of this difference for the analysis is that in dimension $n=2$ , the zero modes for the linearized operator, which correspond to the symmetries of the ambient space, are not eigenfunctions (that is, they do not belong to $L^2$ ), but rather resonances.", "In this work we will consider only the high dimensional case $n\\ge 5$ , where the geometry of the catenoid approaches the flat geometry at a fast rate.", "To make these statements precise, we compute an expression for the induced metric on ${\\underline{\\mathcal {C}}}$ .", "Using polar coordinates $X^{\\prime }={\\underline{R}}\\underline{\\Omega }$ for the first $n$ coordinates in the ambient $\\mathbb {R}^{n+1}$ and denoting the $X^{n+1}$ coordinate by 
${\\underline{Z}}$ , the ambient Euclidean metric becomes $\\begin{split}\\mathrm {d}{\\underline{Z}}^2+ \\mathrm {d}{\\underline{R}}^2 + {\\underline{R}}^2\\mathrm {d}\\underline{\\Omega }^2,\\end{split}$ where $\\mathrm {d}\\underline{\\Omega }^2$ denotes the standard metric on $\\mathbb {S}^{n-1}$ .", "On ${\\underline{\\mathcal {C}}}\\cap \\lbrace {\\underline{Z}}>0\\rbrace $ we introduce radial coordinates $({\\underline{r}},{\\underline{\\theta }})\\in [1,\\infty )\\times \\mathbb {S}^{n-1}$ by $\\begin{split}({\\underline{r}},{\\underline{\\theta }})\\mapsto ({\\underline{Z}}= {\\underline{Z}}({\\underline{r}}), {\\underline{R}}= {\\underline{r}},\\underline{\\Omega }= \\Theta ({\\underline{\\theta }})),\\qquad \\frac{\\mathrm {d}{\\underline{Z}}}{\\mathrm {d}{\\underline{r}}} = \\frac{1}{\\sqrt{{\\underline{r}}^{2(n-1)}-1}}.\\end{split}$ The induced Riemannian metric on ${\\underline{\\mathcal {C}}}$ in these coordinates becomes $\\begin{split}\\Big (1+\\frac{1}{{\\underline{r}}^{2(n-1)}-1}\\Big )\\mathrm {d}{\\underline{r}}\\otimes \\mathrm {d}{\\underline{r}}+{\\underline{r}}^2 \\mathring{{g}}_{ab}\\mathrm {d}{\\underline{\\theta }}^a\\otimes \\mathrm {d}{\\underline{\\theta }}^b.\\end{split}$ Instead of the geometric radial coordinate function ${\\underline{r}}$ , it is more convenient to use $\\rho \\in (-\\infty ,\\infty )$ with $\\begin{split}{\\underline{r}}(\\rho )=\\langle \\rho \\rangle :=\\sqrt{1+\\rho ^2}.\\end{split}$ The coordinates $(\\rho ,\\omega )\\in \\mathbb {R}\\times \\mathbb {S}^{n-1}$ , where $\\omega ={\\underline{\\theta }}$ , now describe all of ${\\underline{\\mathcal {C}}}$ (not just half) with $Z=Z({\\underline{r}}(\\rho ))$ if $\\rho \\ge 0$ and $Z= - Z({\\underline{r}}(\\rho ))$ if $\\rho <0$ .", "The metric on ${\\underline{\\mathcal {C}}}$ in these coordinates becomes $\\begin{split}\\frac{\\rho ^2\\langle \\rho \\rangle ^{2(n-2)}}{\\langle \\rho \\rangle ^{2(n-1)}-1}\\mathrm {d}\\rho \\otimes \\mathrm {d}\\rho +\\langle \\rho \\rangle ^2\\mathring{{g}}_{ab} \\mathrm {d}\\omega ^a\\otimes \\mathrm {d}\\omega ^b.\\end{split}$ Using $t$ to denote the variable in $\\mathbb {R}$ in $\\mathcal {C}=\\mathbb {R}\\times {\\underline{\\mathcal {C}}}$ , the Lorentzian metric on $\\mathcal {C}$ becomes $\\begin{split}-\\mathrm {d}t\\otimes \\mathrm {d}t+\\frac{\\rho ^2\\langle \\rho \\rangle ^{2(n-2)}}{\\langle \\rho \\rangle ^{2(n-1)}-1}\\mathrm {d}\\rho \\otimes \\mathrm {d}\\rho +\\langle \\rho \\rangle ^2 \\mathring{{g}}_{ab} \\mathrm {d}\\omega ^a\\otimes \\mathrm {d}\\omega ^b.\\end{split}$ From the second variation of the area functional one can see that the stability, or linearized, operator for the catenoid as a minimal surface is ${\\underline{H}}:=\\Delta _{\\underline{\\mathcal {C}}}+ | {\\mathrm {I\\!I}} |^2$ (respectively, $\\Box _\\mathcal {C}+| {\\mathrm {I\\!I}} |^2$ in the Lorentzian case), where $\\Delta _{\\underline{\\mathcal {C}}}$ denotes the Laplacian on ${\\underline{\\mathcal {C}}}$ (respectively, $\\Box _\\mathcal {C}=-\\partial _t^2+\\Delta _{\\underline{\\mathcal {C}}}$ denotes the d'Alembertian on $\\mathcal {C}$ ).", "See for instance [44], [14].", "As mentioned earlier, it is shown in [14], [44] that ${\\underline{H}}$ admits a unique positive eigenvalue, indicating the instability of the catenoid as a minimal surface: $\\begin{split}{\\underline{H}}\\underline{\\varphi }_\\mu =\\mu ^2\\underline{\\varphi }_\\mu .\\end{split}$ Heuristically, this instability corresponds to the shrinking of the neck of the catenoid.", "See for 
instance [23], [12] for more discussion on this.", "On the other hand, since every translation of ${\\underline{\\mathcal {C}}}$ in the ambient $\\mathbb {R}^{n+1}$ is another minimal surface (another catenoid), by differentiating in the translation parameter one obtains $n+1$ zero modes of ${\\underline{H}}$ .", "Explicitly, in the $(\\rho ,\\omega )$ coordinates above, these are given by $\\begin{split}{\\underline{e}}_j=\\frac{\\Theta ^j(\\omega )}{\\langle \\rho \\rangle ^{n-1}},\\quad 1\\le j\\le n,\\qquad {\\underline{e}}_{n+1}=\\frac{\\sqrt{\\langle \\rho \\rangle ^{2(n-1)}-1}}{\\langle \\rho \\rangle ^{n-1}},\\end{split}$ corresponding to translations in the direction $\\frac{\\partial }{\\partial X^j}$ respectively.", "In general, the zero mode ${\\underline{e}}_{n+1}$ , corresponding to translation in the direction of the axis of symmetry, does not belong to $L^2$ for any $n\\ge 2$ .", "For the other directions, ${\\underline{e}}_j$ are in $L^2$ when $n\\ge 3$ , while they logarithmically fail to be in $L^2$ when $n=2$ .", "In the Lorentzian case, the Lorentz boosts of the ambient $\\mathbb {R}^{1+(n+1)}$ give the additional zero modes $t{\\underline{e}}_j$ of $\\Box _\\mathcal {C}+| {\\mathrm {I\\!I}} |^2$ , which, for $1\\le j\\le n$ and $n\\ge 3$ , are referred to as the generalized eigenfunctions of the linear operator.Ambient rotations about the axis of symmetry map ${\\underline{\\mathcal {C}}}$ to itself, so differentiation along the rotation parameter yields the trivial zero mode ${\\underline{e}}=0$ for ${\\underline{H}}$ .", "Similarly for translations along the time axis in the Lorentzian case.", "When $n\\ge 3$ , scaling changes the value of $S$ in (REF ) and differentiation in the scaling parameter yields a zero solution which is neither an eigenfunction nor a resonance.", "See Section REF for the discussion of eigenfunctions and generalized eigenfunctions in the context of the first order formulation.", "We end this subsection by giving a more explicit description of the boosted and translated catenoid, which will be needed for the statement of our theorem.", "For any $\\ell _0\\in \\mathbb {R}^n$ let $P_{\\ell _0}$ denote the orthogonal projections in the direction of $\\ell _0$ , and $P_{\\ell _0}^\\perp $ the orthogonal projection to the complement.", "Here and below, by a slight abuse of notation, we view $\\mathbb {R}^n$ as a subset of $\\mathbb {R}^{n+1}$ using the embedding $X^{\\prime }\\mapsto (X^{\\prime },0)^\\intercal $ .", "The corresponding ambient Lorentz boost $\\Lambda _{\\ell _0}$ is defined by $\\begin{split}\\Lambda _{\\ell _0}=\\begin{pmatrix} \\gamma _0&-\\gamma _0\\ell _0^\\intercal \\\\-\\gamma _0 \\ell _0&\\gamma _0 P_{\\ell _0}+P_{\\ell _0}^\\perp \\end{pmatrix},\\quad \\gamma _0=\\frac{1}{\\sqrt{1-|\\ell _0|^2}}.\\end{split}$ The Lorentzian catenoid boosted by $\\ell _0\\in \\mathbb {R}^n$ and translated by $a_0\\in \\mathbb {R}^n$ is the following HVMC submanifold of $\\mathbb {R}^{1+(n+1)}$ , $\\begin{split}\\mathcal {C}_{a_0,\\ell _0}:=\\lbrace \\Lambda _{-\\ell _0} X~\\vert ~ X\\in \\mathcal {C}\\rbrace +\\begin{pmatrix} 0\\\\a_0 \\end{pmatrix}.\\end{split}$" ], [ "First Statement of the Main Result", "We are now ready to give a first formulation of the main result of this paper.", "For a manifold $\\mathcal {X}$ , we will use the notation $\\mathcal {T}_p\\mathcal {X}$ to denote the tangent space at $p\\in \\mathcal {X}$ .", "We also use the parameterization $F\\colon I\\times \\mathbb {S}^{n-1}\\rightarrow \\mathbb {R}^{n+1}$ 
introduced in (REF ).", "Theorem 1.2 Let $\Phi _0\colon I\times \mathbb {S}^{n-1} \rightarrow \lbrace 0\rbrace \times \mathbb {R}^{n+1}$ , $n\ge 5$ , be an embedding and $\Phi _1\colon I\times \mathbb {S}^{n-1}\rightarrow \mathbb {R}^{1+(n+1)}$ be a family of future directed timelike vectors such that $\Phi _0=F$ and $\Phi _1=(1,0)$ outside of a compact set.", "Suppose $\Phi _0$ and $\Phi _1$ belong to an appropriate codimension-1 subset in a suitable topology, and are sufficiently close to $F$ and $(1,0)$ , respectively, in this topology.", "Then there is a unique complete timelike VMC hypersurface $\mathcal {M}$ in $\mathbb {R}^{1+(n+1)}$ such that $\mathcal {M}\cap \lbrace X^0=0\rbrace =\Phi _0({\underline{\mathcal {C}}})$ and $\mathcal {T}_{\Phi _0(p)}\mathcal {M}$ is spanned by $\mathrm {d}_p \Phi _0(\mathcal {T}_p{\underline{\mathcal {C}}})$ and $\Phi _1(p)$ for any $p\in I \times \mathbb {S}^{n-1}$ .", "Moreover, there exist $a_0,\ell _0\in \mathbb {R}^n$ such that the ambient Euclidean distance between $\mathcal {M}\cap \lbrace X^0=t\rbrace $ and $\mathcal {C}_{a_0,\ell _0}\cap \lbrace X^0=t\rbrace $ tends to zero as $t\rightarrow \infty $ .", "A more precise version of the theorem will be stated as Theorem REF below.", "We pause to make a few comments.", "1.", "In view of the growing mode of the stability operator (see Section REF ), the codimension-1 restriction on the data in Theorem REF is optimal.", "See for instance [23].", "However, we do not pursue the question of uniqueness or regularity of the codimension-1 set in the initial data topology.", "See Item (3) in Section REF for more on this point.", "2.", "The fact that the solution approaches a boosted and translated catenoid is related to the presence of a non-trivial kernel and generalized kernel for the linearized operator.", "As we saw in Section REF , the kernel and generalized kernel are generated by the translation and boost symmetries.", "Therefore, to obtain decay for the perturbative part of the solution we need to choose the translation and boost parameters dynamically (modulation) to stay away from the kernel and generalized kernel.", "To give a more precise description of how we track these parameters, we need to decompose the solution into a profile and a perturbation, and set up a first order formulation of the problem.", "These aspects are summarized in Sections REF and REF .", "In Remark REF in Section REF we give more precise references for how the parameters are tracked.", "Translation and boost symmetries are common features of quasilinear soliton stability problems that arise from Lorentz invariant theories.", "Developing a robust modulation approach for translation invariant quasilinear wave equations is one of the main achievements of this work.", "In this direction, our novel profile construction plays a central role.", "We hope that our methods will find applications in other quasilinear soliton stability problems.", "3.", "The assumption $\Phi _0=F$ and $\Phi _1=(1,0)$ outside a compact set can be replaced by sufficient decay at spatial infinity.", "Indeed, outside an ambient cone ${{C}}$ with vertex at $(-R,0)$ , with $R$ sufficiently large, the problem reduces to a quasilinear wave equation on Minkowski space.", "By finite speed of propagation, this problem can be analyzed separately in this region, for instance using the vectorfield $(t-r)^{p} \partial _{t}$ .", "This will lead to suitably decaying and small data on the 
cone ${{C}}$ which can be taken as the starting point of the analysis in this paper.", "4.", "We consider dimensions $n\\ge 5$ in this work, because this range is a more accessible analytic setting to approach some of the structural challenges in quasilinear soliton stability problems.", "Specifically, this restriction has the following two advantages: (i) The faster spatial decay rate of the difference between the catenoid and flat metrics (faster decay of the tail of the soliton) amounts to weaker interactions between the profile and the radiation.", "(ii) The faster time decay of waves in higher dimensions allows us to directly obtain twice integrability of the time derivatives of the boost and translation parameters.", "This strong decay enters in proving integrated local energy decay for the perturbative part of the solution.", "See Section REF .", "The case $n=2$ (outside of radial symmetry) poses the additional challenge that the zero modes corresponding to translations and boosts are no longer eigenfunctions, but rather resonances (see Section REF ).", "We expect that the general scheme in this paper is applicable to dimensions $n=3,4$ and hope to address these cases in future work.", "5.", "The minimal surface equation is widely studied in Riemannian geometry and calculus of variations.", "In particular, the spectral properties of the stability operator for the catenoid are well-understood.", "See for instance [14], [44].", "This makes our problem a natural starting point for the study of asymptotic stability of solitons in quasilinear wave equations." ], [ "Overall Scheme and Main Difficulties", "In Sections REF –REF below, we will describe the main ideas for the proof of Theorem REF .", "Before we do so, let us begin with an executive summary of the overall scheme, as well as a discussion of the main difficulties in the proof." 
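Returning briefly to the catenoid profile of Section REF, the following minimal Python sketch (with illustrative step sizes, and not part of the original argument) integrates the profile ODE $\mathfrak {f}^{\prime \prime }=(n-1)(1+(\mathfrak {f}^{\prime })^2)/\mathfrak {f}$ with $\mathfrak {f}(0)=1$, $\mathfrak {f}^{\prime }(0)=0$, checks the first integral $\mathfrak {f}^{\prime }=\sqrt{\mathfrak {f}^{2(n-1)}-1}$ (equivalent to the relation $\mathrm {d}\mathfrak {z}/\mathrm {d}\mathfrak {f}=1/\sqrt{\mathfrak {f}^{2(n-1)}-1}$ used above), and evaluates the finite half-height $S$ for $n=5$, illustrating the qualitative difference between $n=2$ and $n\ge 3$.

```python
import numpy as np

# Integrate the catenoid profile ODE f'' = (n-1)(1+f'^2)/f, f(0)=1, f'(0)=0,
# until f reaches f_stop, and check the first integral f' = sqrt(f^{2(n-1)} - 1).
n, f_stop, h = 5, 4.0, 1e-5

f, fp, z = 1.0, 0.0, 0.0
while f < f_stop:
    # midpoint (RK2) step for the first-order system (f, f')
    fm = f + 0.5*h*fp
    pm = fp + 0.5*h*(n - 1)*(1 + fp**2)/f
    f, fp, z = f + h*pm, fp + h*(n - 1)*(1 + pm**2)/fm, z + h

print("f' at f =", f, ":", fp, " vs sqrt(f^{2(n-1)}-1) =", np.sqrt(f**(2*(n - 1)) - 1))

# Half-height S = int_1^infty df/sqrt(f^{2(n-1)}-1), computed with f = 1/cos(t)
# so that both endpoints of the integral are tame; S is finite for n >= 3.
t = np.linspace(1e-6, np.pi/2 - 1e-6, 200000)
integrand = (np.sin(t)/np.cos(t)**2)/np.sqrt((1.0/np.cos(t))**(2*(n - 1)) - 1.0)
S = np.trapz(integrand, t)
print("half-height S(n=5) ≈", S)
```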
], [ "Overall scheme", "The overall scheme of our proof of Theorem REF is as follows: Decomposition of solution.", "The basic idea is to make the (formal) decomposition $ \\hbox{(Solution)} = \\underbrace{\\mathcal {Q}}_{\\hbox{profile}} \\hbox{``}+\\hbox{''} \\underbrace{\\psi }_{\\hbox{perturbation}}$ This decomposition will be made precise in Section REF .", "A key goal is to show that the perturbation $\\psi $ decays to zero as $t \\rightarrow \\infty $ in a suitable sense (see Item (4) and Section REF ).", "In the absence of any obstructions, the profile $\\mathcal {Q}$ would be the object that we wish to prove the asymptotic stability of – the catenoid in our case.", "However, as discussed earlier, the linearized HVMC equation around the catenoid, $(-\\partial _{t}^{2} + \\underline{H}) \\psi = 0$ , admits non-decaying solutions, namely (i) a 1-dimensional family of exponentially growing solutions, which arises from the simple negative eigenvalue of $\\underline{H}$ , and (ii) a $2n$ -dimensional family of solutions growing at most linearly in $t$ , which arises from the $n$ -dimensional kernel of $\\underline{H}$ generated by translational symmetries.", "To avoid these obstructions, we employ the ideas of modulation and shooting, which we turn to now.", "Modulation.", "To ensure transversality to the $2n$ -dimensional family of non-decaying solutions in (ii), we impose $2n$ orthogonality conditions on $\\psi $ at each time.", "To compensate for such a restriction, we allow the profile $\\mathcal {Q}$ to depend on $2n$ time-dependent parameters.", "Since the family in (ii) arises from translational symmetries, it is natural to introduce an $n$ -dimensional position vector $\\xi (\\sigma ) = (\\xi ^{1}, \\ldots , \\xi ^{n})(\\sigma )$ , an $n$ -dimensional velocity (or boost) vector $\\ell (\\sigma ) = (\\ell ^{1}, \\ldots , \\ell ^{n})(\\sigma )$ , a foliation $\\sigma $ (whose leaves will represent an appropriate notion of time for our problem) and an approximate solution (or profile) $\\mathcal {Q}= \\mathcal {Q}(\\xi (\\cdot ), \\ell (\\cdot ))$ to HVMC that represents “a moving catenoid at position $\\xi (\\sigma )$ with velocity $\\ell (\\sigma )$ at each time $\\sigma $ ”.", "Appropriate choices of the profile $\\mathcal {Q}$ , the foliation $\\sigma $ and the $2n$ orthogonality conditions would lead, upon combination with the HVMC equation, to $2n$ equations that dictate the evolution of $(\\xi (\\sigma ), \\ell (\\sigma ))$ in terms of $\\psi $ .", "Shooting argument.", "To avoid the exponential growth stemming from obstruction (i), we further decompose the perturbation $\\psi $ as follows: $\\psi = a_{+}(\\sigma ) Z_{+} + a_{-}(\\sigma ) Z_{-} + \\phi ,$ where $Z_{+}$ and $Z_{-}$ are uniformly bounded functions, $a_{+}(\\sigma )$ and $a_{-}(\\sigma )$ obey ODEs in $\\sigma $ with growing and damping linear parts, respectively, and $\\phi $ obeys $2n+2$ orthogonality conditions so as to be transversal (to a sufficient extent) to all possible linear obstructions to decay at each timeStrictly speaking, the term $a_{-}(\\sigma ) Z_{-}$ decays forward-in-time, so the reader may wonder why we also took it out of $\\phi $ by imposing $2n+2$ orthogonality conditions (as opposed to $2n+1$ , the dimension of the space of forward-in-time non-decaying solutions to $(-\\partial _{t}^{2} + \\underline{H}) \\psi = 0$ ).", "The reason is that we want $\\phi $ to exhibit only dispersive behaviors like $\\mathbb {P}_{c} \\psi $ in the simple example $(-\\partial _{t}^{2} + \\underline{H}) 
\\psi = 0$ ..", "If $\\mathcal {Q}$ were simply the Lorentzian catenoid, then we may choose $a_{+} Z_{+} + a_{-} Z_{-}$ and $\\phi $ to be the $L^{2}$ -orthogonal projections of $\\psi $ to the negative eigenvalue and the absolutely continuous spectrum of $\\underline{H}$ , respectively.", "By analyzing the ODE for $a_{-}$ , the modulation equations for $(\\xi , \\ell )$ , and the wave equation for $\\phi $ (see (4) below), we will show that $\\dot{\\xi } - \\ell $ , $\\dot{\\ell }$ and $\\psi $ decay as long as the unstable mode $a_{+}(\\sigma )$ satisfies the so-called trapping assumption, which roughly says that $a_{+}(\\sigma )$ decays in time.", "We then employ a topological shooting argument to select a family of initial data – which is codimension 1 in the sense described below in Theorem REF and (REF ) – such that $a_{+}$ indeed continues to satisfy the trapping assumption for all times $\\sigma \\ge 0$ .", "Integrated local energy decay and vectorfield method.", "Finally, we study the quasilinear wave equation satisfied by $\\phi $ , which also satisfies $2n+2$ orthogonality conditions.", "Under suitable bootstrap assumptions (to handle nonlinear terms) and the trapping assumption for $a_{+}$ (see (3)), we prove the pointwise decay of $\\phi $ via the following steps: $&\\hbox{(transversality to linear obstructions)} \\\\&\\Rightarrow \\hbox{(uniform boundedness of energy and integrated local energy decay (ILED))} \\\\& \\Rightarrow \\hbox{(pointwise decay)}$ Here, integrated local energy decay estimates (ILED; also known as Morawetz estimates) refer to, roughly speaking, bounds on integrals of the energy density on spacetime cylinders for finite energy solutions.", "They are a weak form of dispersive decay.", "These have the advantage of being $L^{2}$ -based and hence being amenable to a wide range of techniques, such as the vectorfield method, Fourier transform, spectral theory etc.", "A powerful philosophy, that has recently arisen in works [10], [47], [32], [37] related to the problem of black hole stability, is to view integrated local energy decay as a key intermediate step for obtaining stronger pointwise decay (see also [46], [31] for papers in the related context of global Strichartz estimates).", "Specifically, in our proof we adapt the $r^{p}$ -method of Dafermos–Rodnianski [10], extended by Schlue [41] and Moschidis [33].", "See Section REF for further discussions." 
], [ "Main difficulties", "We face several significant challenges in implementing the above scheme for our problem.", "A summary of the main difficulties is as follows: (Quasilinearity) First and foremost, the hyperbolic vanishing mean curvature equation is quasilinear.", "While the stability property of the linearized equation $(-\\partial _{t}^{2} + \\underline{H}) \\phi = f$ around the Lorentzian catenoid $\\mathcal {C}$ is well-understood, upgrading it to the nonlinear asymptotic stability result Theorem REF involves a number of serious difficulties; specifically, see (Proof of integrated local energy decay) below.", "Furthermore, since the highest order term is nonlinear, at various places in the proof we need to be careful to avoid any derivative losses.", "(Gauge choice) Another basic point about HVMC is that it is an equation for a geometric object, namely a hypersurface in $\\mathbb {R}^{1+(n+1)}$ .", "Hence, we need to fix a way of describing the hypersurface by a function to perform any analysis – this is the well-known problem of gauge choice.", "(Profile and foliation construction) In order for the above scheme to work, it is crucial for the profile $\\mathcal {Q}$ , representing a moving catenoid at position $\\xi (\\sigma )$ with velocity $\\ell (\\sigma )$ at each time $\\sigma $ , to solve HVMC up to an adequately small error.", "Unfortunately, the obvious construction based on the standard $t = X^{0}$ -foliation would lead to an inappropriately large error.", "The key issue is the inaccuracy of the construction in the far-away region (i.e., as $\\rho \\rightarrow \\pm \\infty $ ), which is fatal due to the slow spatial decay of the catenoid (i.e., mere polynomial decay towards the flat hyperplane as $\\rho \\rightarrow \\pm \\infty $ ).", "As we will see, we are led to consider a different foliation $\\sigma $ consisting of moving asymptotically null leaves; see Section REF .", "(Proof of integrated local energy decay) Existing methods [28], [30], combined with the detailed knowledge of the spectral properties of the stability operator $\\underline{H}$ for $\\underline{\\mathcal {C}}$ [14], [44], establish integrated local energy decay for the linearized equation $(-\\partial _{t}^{2} + \\underline{H}) \\phi = f$ around the Lorentzian catenoid $\\mathcal {C}$ when $\\phi = \\mathbb {P}_{c} \\phi $ ($L^{2}$ -projection to the absolutely continuous spectrum).", "See Section REF .", "Transferring this estimate to the solution $\\phi $ to the quasilinear wave equation satisfying our orthogonality conditions, however, is met with several difficulties, such as (i) quasilinearity, (ii) existence of a trapped null geodesic (traveling around the collar $\\lbrace \\rho = 0\\rbrace $ in case of $\\mathcal {C}$ ), (iii) existence of zero and negative eigenvalues of $\\underline{H}$ (what we referred to as linear obstructions to decay) and (iv) nonstationarity of the profile $\\mathcal {Q}$ .", "(Modulation theory and vectorfield method) Standard modulation theory [43], [48] is based on the standard $t = X^{0}$ -foliation, whose leaves are flat spacelike hypersurfaces; the method needs to be adapted to the foliation $\\sigma $ used in our profile construction.", "Similarly, the standard $r^{p}$ -method utilizes a foliation consisting of non-moving asymptotically null leaves [10], [41], [33], which needs to be adapted to our foliation $\\sigma $ of moving asymptotically null leaves.", "The presence of linear obstructions to decay (i.e., zero and negative eigenvalues for 
$\\underline{H}$ ) also needs to be incorporated.", "In Sections REF –REF , we describe the main ideas in this paper for resolving the above difficulties." ], [ "Profile, Foliation and Gauge Construction", "Here we describe our profile $\\mathcal {Q}$ and foliation $\\tau $ , as well as the gauge we use to express our solution as a scalar function $\\psi $ on $\\mathcal {Q}$ ; these constructions make the basic decomposition (REF ) precise.", "Since this decomposition is needed for the discussion of other parts of our proof, we shall give the full construction here.", "Let $\\xi = \\xi (\\sigma )$ and $\\ell = \\ell (\\sigma )$ be two functions on an interval $I \\subseteq \\mathbb {R}$ with values in $\\mathbb {R}^n$ and $|\\ell |,|{\\dot{\\xi }}|<1$ .", "By a slight abuse of notation, we will always view $\\xi $ and $\\ell $ as vectors both in $\\mathbb {R}^n$ and in $\\mathbb {R}^{n+1}$ , using the embedding of $\\mathbb {R}^{n}$ in $\\mathbb {R}^{n+1}$ given by $X^{\\prime }\\mapsto (X^{\\prime },0)^\\intercal $ .", "Denoting the orthogonal projection in the direction of $\\ell $ by $P_\\ell $ and the projection on the orthogonal complement by $P_\\ell ^\\perp $ , the linear boost in the ambient $\\mathbb {R}^{1+(n+1)}$ in the direction of $\\ell $ is $\\begin{split}\\Lambda _\\ell =\\begin{pmatrix} \\gamma &-\\gamma \\ell ^\\intercal \\\\-\\gamma \\ell &A_\\ell \\end{pmatrix},\\qquad A_\\ell = \\gamma P_\\ell +P_\\ell ^\\perp ,\\quad \\gamma =\\frac{1}{\\sqrt{1-|\\ell |^2}},\\end{split}$ and the inverses of $\\Lambda _\\ell $ and $A_\\ell $ are $\\begin{split}\\Lambda _\\ell ^{-1}=\\Lambda _{-\\ell },\\qquad A_\\ell ^{-1}=\\gamma ^{-1}P_\\ell +P_\\ell ^\\perp .\\end{split}$ Recall that ${\\underline{\\mathcal {C}}}$ denotes the Riemannian catenoid in $\\mathbb {R}^{n+1}$ , and $\\mathcal {C}=\\mathbb {R}\\times {\\underline{\\mathcal {C}}}$ the product Lorentzian catenoid in $\\mathbb {R}^{1+(n+1)}$ .", "Given two functions $\\xi (\\cdot )$ and $\\ell (\\cdot )$ as above, let $\\begin{split}\\mathcal {C}_{\\sigma }:=\\lbrace \\Lambda _{-\\ell (\\sigma )} X~\\vert ~ X\\in \\mathcal {C}\\rbrace +\\begin{pmatrix} 0\\\\\\xi (\\sigma )-\\sigma \\ell (\\sigma ) \\end{pmatrix}.\\end{split}$ Note that if ${\\dot{\\ell }}\\equiv 0$ and $\\xi (\\sigma )\\equiv \\sigma \\ell +a_0$ for a constant vector $a_0\\in \\mathbb {R}^n\\subseteq \\mathbb {R}^{n+1}$ , then $\\mathcal {C}_\\sigma $ is the Lorentzian catenoid obtained by boosting $\\mathcal {C}$ by $\\Lambda _{-\\ell }$ and then translating the result by $a_0$ .", "We will assume that $|\\ell |,|{\\dot{\\xi }}|<1$ and that $|\\ell (0)|,|\\xi (0)|$ , and $|{\\dot{\\ell }}|$ are sufficiently smallThe smallness assumptions are not essential and are a consequence of how we have set things up.", "The smallness requirement on $|{\\dot{\\ell }}|$ is to guarantee that the curve $\\xi -\\gamma R$ is timelike.", "The smallness conditions on $|\\ell (0)|$ and $|\\xi (0)|$ are so that ${\\tilde{}}_0$ contains ${{C}}_{-R}$ .", "If we remove these assumptions we simply need to take $R$ larger and replace $R-2$ by $R-C$ for a larger constant $C$ in the definition of ${{C}}_{-R}$ .", "In our applications the smallness of the initial data and the bootstrap assumptions imply all the smallness conditions required here.. 
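As a quick sanity check of the boost algebra above (purely illustrative and not used in the proofs), the block formula for $\\Lambda _\\ell $ can be verified numerically; the sketch below, with an arbitrary illustrative dimension and velocity, confirms that $\\Lambda _\\ell ^{-1}=\\Lambda _{-\\ell }$ and that $\\Lambda _\\ell $ preserves the Minkowski metric on $\\mathbb {R}^{1+(n+1)}$ .

```python
import numpy as np

def boost(ell):
    """Ambient boost Lambda_ell on R^{1+(n+1)} for a velocity ell in R^n,
    embedded as (ell, 0) in R^{n+1}, following the displayed block formula."""
    l = np.append(np.asarray(ell, dtype=float), 0.0)   # embed R^n -> R^{n+1}
    gamma = 1.0 / np.sqrt(1.0 - l @ l)
    P = np.outer(l, l) / (l @ l) if l @ l > 0 else np.zeros((l.size, l.size))
    A = gamma * P + (np.eye(l.size) - P)               # A_ell = gamma*P_ell + P_ell^perp
    Lam = np.zeros((l.size + 1, l.size + 1))
    Lam[0, 0] = gamma
    Lam[0, 1:] = -gamma * l
    Lam[1:, 0] = -gamma * l
    Lam[1:, 1:] = A
    return Lam

n = 5                                       # illustrative dimension (n >= 5 in the paper)
ell = 0.1 * np.ones(n) / np.sqrt(n)         # illustrative small velocity, |ell| = 0.1
Lam, Lam_inv = boost(ell), boost(-ell)
eta = np.diag([-1.0] + [1.0] * (n + 1))     # Minkowski metric on R^{1+(n+1)}

print(np.allclose(Lam @ Lam_inv, np.eye(n + 2)))   # Lambda_ell^{-1} = Lambda_{-ell}
print(np.allclose(Lam.T @ eta @ Lam, eta))         # Lambda_ell preserves eta
```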
We will first define a foliation of the interior of the ambient cone $\\begin{split}{{C}}_{-R} =\\lbrace X\\in \\mathbb {R}^{1+(n+1)}~\\vert ~ X^0+R-2=|{\\underline{X}}|\\rbrace \\end{split}$ as $\\cup _\\sigma {}_\\sigma $ , and then define the profile of our solution on the leaf ${}_\\sigma $ to be $\\mathcal {C}_\\sigma \\cap {}_\\sigma $ .", "Note that we will restrict attention to compactly supported perturbationsThis simplifying assumption is not essential for the proof.", "See the third comment following the statement of Theorem REF .", "of $\\mathcal {C}$ (in a suitable sense to be described below), so by finite speed of propagation we already know the form of our solution in the exterior of ${{C}}_{-R}$ .", "The leaves ${}_\\sigma $ will be chosen to be asymptotically null, more precisely hyperboloidal, away from the moving center $\\xi (\\sigma )$ .", "To define this foliation, we first fix reference hyperboloids defined as (here $\\gamma $ is evaluated at $\\sigma $ ) $\\begin{split}\\mathcal {H}_\\sigma = \\lbrace y=(y^0,y^{\\prime },y^{n+1})~\\vert ~y^0-\\gamma ^{-1}\\sigma +R=\\sqrt{|y^{\\prime }|^2+1}\\rbrace .\\end{split}$ In general, we will denote the restriction to $\\lbrace X^{n+1}=S\\rbrace $ by an underline, so (a similar construction can be carried out with respect $\\lbrace X^{n+1}=-S\\rbrace $ ) $\\begin{split}{\\underline{\\mathcal {H}}}_\\sigma = \\mathcal {H}_\\sigma \\cap \\lbrace y^{n+1}=S\\rbrace .\\end{split}$ The boosted and translated hyperboloids are denoted by ${\\tilde{}}_\\sigma $ , that is (here $\\ell $ and $\\xi $ are evaluated at $\\sigma $ ), $\\begin{split}{\\tilde{}}_\\sigma =\\big (\\Lambda _{-\\ell }\\mathcal {H}_\\sigma +(0,\\xi -\\sigma \\ell )^\\intercal \\big )=\\lbrace X~\\vert ~X^0-\\sigma +\\gamma R=\\sqrt{|X^{\\prime }-\\xi +\\gamma R\\ell |^2+1}\\rbrace .\\end{split}$ The restriction of ${\\tilde{}}_\\sigma $ to $\\lbrace X^{n+1}=S\\rbrace $ is denoted by (here $x=(x^0,x^{\\prime })\\in \\mathbb {R}\\times \\mathbb {R}^{n}$ denote the rectangular coordinates on $\\lbrace X^{n+1}=S\\rbrace $ ) $\\begin{split}{\\underline{{\\tilde{}}}}_\\sigma :={\\tilde{}}_\\sigma \\cap \\lbrace X^{n+1}=S\\rbrace =\\lbrace x~\\vert ~x^0-\\sigma +\\gamma R=\\sqrt{|x^{\\prime }-\\xi +\\gamma R\\ell |^2+1}\\rbrace .\\end{split}$ Remark 1.3 Let ${\\underline{{{C}}}}_{-R}=\\lbrace (x^0,x^{\\prime },S)~\\vert ~x^0+R-2=|x^{\\prime }|\\rbrace $ .", "The fact that $\\eta =\\xi -\\gamma R$ is a timelike curve (that is, $|{\\dot{\\eta }}|<1$ ) implies that $\\cup _{\\sigma \\ge 0}{\\underline{{\\tilde{}}}}_\\sigma $ gives a foliation of the region ${{R}}:=\\lbrace (x^0,x^{\\prime })\\in \\lbrace X^{n+1}=S\\rbrace ~\\vert ~x^0+\\gamma (0)R\\ge \\sqrt{|x^{\\prime }-\\eta (0)|^2+1}\\rbrace $ , which contains ${\\underline{{{C}}}}_{-R}$ because we have assumed that $|\\xi (0)|$ and $|\\ell (0)|$ are small (${{R}}$ is also contained in a slightly larger cone ${\\underline{{{C}}}}_{-(R+3)}$ ).", "Indeed, if $(x^0,x^{\\prime })\\in {{R}}$ belongs to ${\\underline{{\\tilde{}}}}_{\\sigma _2}\\cap {\\underline{{\\tilde{}}}}_{\\sigma _1}$ , $\\sigma _2>\\sigma _1$ , then $\\begin{split}\\sigma _2-\\sigma _1&=\\sqrt{|x^{\\prime }-\\eta (\\sigma _1)|^2+1}-\\sqrt{|x^{\\prime }-\\eta (\\sigma _2)|^2+1}\\le \\big ||x^{\\prime }-\\eta (\\sigma _1)|-|x^{\\prime }-\\eta (\\sigma _2)|\\big |\\le |\\eta (\\sigma _2)-\\eta (\\sigma _1)|\\\\&<\\sigma _2-\\sigma _1,\\end{split}$ which is impossible.", "Here to pass to the last inequality we have used $|{\\dot{\\eta }}|<1$ .", "It 
follows that the map $U:(\\sigma ,y)=(\\sigma ,y^0,y^{\\prime })\\mapsto \\Lambda _{-\\ell }y+(0,\\xi -\\sigma \\ell )$ from $\\cup _{\\sigma \\ge 0}{\\underline{\\mathcal {H}}}_\\sigma $ is a diffeomorphism to its image.", "To see that it covers all of ${{R}}$ , given $x=(x^0,x^{\\prime })\\in {{R}}$ choose $\\sigma _0>x^0+R+5$ and note that $x$ lies between ${\\underline{{\\tilde{}}}}_0=U({\\underline{\\mathcal {H}}}_0)$ and ${\\underline{{\\tilde{}}}}_{\\sigma _0}=U({\\underline{\\mathcal {H}}}_\\sigma )$ , and since $U$ is a diffeomorphism onto its image, $x$ must lie on ${\\underline{{\\tilde{}}}}_\\sigma $ for some $\\sigma \\in [0,\\sigma _0)$ .", "It follows from this that ${\\tilde{}}_\\sigma $ foliate a region containing ${{C}}_{-R}$ .", "Fix $\\mathfrak {m}:\\mathbb {R}\\times \\mathbb {R}\\rightarrow \\mathbb {R}$ to be a smoothed out version of the minimum function such that for some small $\\delta _1>0$ $\\begin{split}\\mathfrak {m}(x,y)= {\\mathrm {min}}(x,y)\\qquad \\mathrm {when~}|x-y|>\\delta _1.\\end{split}$ Define $\\sigma ,\\sigma _{\\mathrm {temp}}: \\lbrace -S\\le X^{n+1}\\le S\\rbrace \\rightarrow \\mathbb {R}$ by $\\begin{split}&\\sigma _{\\mathrm {temp}}(X)=\\sigma ^{\\prime }\\qquad \\mathrm {if~}X\\in {\\tilde{}}_{\\sigma ^{\\prime }},\\\\&\\sigma (X)= \\mathfrak {m}(X^0,\\sigma _{\\mathrm {temp}}(X)).\\end{split}$ Finally let $\\begin{split}{}_{\\sigma ^{\\prime }}=\\lbrace X~\\vert ~\\sigma (X)=\\sigma ^{\\prime }\\rbrace ,\\qquad {\\underline{{}}}_{\\sigma ^{\\prime }} = {}_{\\sigma ^{\\prime }}\\cap \\lbrace X^{n+1}=S\\rbrace .\\end{split}$ The transition from ${\\tilde{}}_\\sigma $ to ${}_\\sigma $ corresponds to considering an asymptotically null (more precisely, hyperboloidal) foliation only starting at a large radius from the center $\\xi (\\sigma )$ .", "Note that $\\mathfrak {m}$ can be chosen so that $\\cup _\\sigma {}_\\sigma $ gives a smooth foliation of a region containing ${{C}}_{-R}$ .", "Indeed, as we have seen, the level sets of $\\sigma _{\\mathrm {temp}}$ and $X^0$ provide such foliations, and $d\\sigma _{\\mathrm {temp}}$ and $dX^0$ are full rank.", "Since $\\begin{split}d\\sigma = (\\partial _{x}\\mathfrak {m}) dX^0+ (\\partial _y\\mathfrak {m}) d\\sigma _{\\mathrm {temp}},\\end{split}$ we can choose $\\mathfrak {m}$ so that the $d\\sigma $ is also full rank.", "Going forward, we will assume that $\\mathfrak {m}$ is chosen as such.", "The hyperboloidal (where $X^0\\ge \\sigma _{\\mathrm {temp}}+\\delta _1$ ) and flat (where $\\sigma _{\\mathrm {temp}}\\ge X^0+\\delta _1$ ) parts of these surfaces are denoted by ${}_\\sigma ^{\\mathrm {hyp}}, {\\underline{{}}}_\\sigma ^{\\mathrm {hyp}}$ and ${}_\\sigma ^{\\mathrm {flat}}, {\\underline{{}}}_\\sigma ^{\\mathrm {flat}}$ respectively.", "We have thus constructed the foliation $\\sigma $ adapted to $\\xi , \\ell $ .", "We define our profile as the hypersurface $\\begin{split}\\mathcal {Q}:= \\cup _{\\sigma } \\Sigma _{\\sigma }, \\quad \\hbox{ where } \\Sigma _\\sigma :=\\mathcal {C}_\\sigma \\cap {}_\\sigma .\\end{split}$ Next, we fix a gauge, that is, describe a parameterization (or embedding) of the VMC hypersurface $\\mathcal {M}$ and a way to measure its deviation from the the profile $\\mathcal {Q}= \\cup _\\sigma \\Sigma _\\sigma $ .", "We will do this by fixing a vector $N:\\cup _{\\sigma }\\Sigma _\\sigma \\rightarrow \\mathbb {R}^{1+(n+1)},$ and defining the perturbation $\\psi :\\cup _{\\sigma }\\Sigma _\\sigma \\rightarrow \\mathbb {R}$ by the requirement that (later we will further 
decompose $\\psi $ as in Item (3) of Section REF ; see also Section REF ) $\\begin{split}p+\\psi (p)N(p)\\in \\mathcal {M},\\qquad \\forall ~ p\\in \\cup _{\\sigma }\\Sigma _\\sigma .\\end{split}$ Under suitable smallness assumptions on the perturbation, this condition determines $\\psi $ uniquely (see Lemma REF ).", "If ${\\dot{\\ell }}\\equiv 0$ and ${\\dot{\\xi }}\\equiv \\ell $ , from the second variation of the area we would expect that the relevant perturbations are those which are in the direction of the (spacetime) normal to $\\Sigma _\\sigma $ .", "In general, since $\\xi $ and $\\ell $ do not necessarily obey ${\\dot{\\ell }}=0$ and ${\\dot{\\xi }}=\\ell $ , there will be additional errors.", "Nevertheless, the most natural geometric choice for $N$ still seems to be the normal to $\\cup _{\\sigma }\\Sigma _\\sigma $ , or perhaps $\\Lambda _{-\\ell }n$ , where $n$ is the normal to the straight Lorentzian catenoid $\\mathcal {C}$ .", "However, for reasons that will be discussed below we choose to work with a less geometric $N$ which is defined as follows.", "First, if $\\Sigma _\\sigma \\ni p=\\Lambda _{-\\ell }q+(0,\\xi (\\sigma )-\\sigma \\ell (\\sigma ))^\\intercal $ for some $q$ in $\\mathcal {C}$ , let $n_\\wp (p)=\\Lambda _{-\\ell (\\sigma )}n(q)$ , where $n(q)$ is the normal to $\\mathcal {C}$ at $q$ .", "Let $\\begin{split}{\\tilde{N}}=\\chi {\\tilde{N}}_{\\mathrm {int}}+(1-\\chi )\\frac{\\partial }{\\partial X^{n+1}}\\end{split}$ where ${\\tilde{N}}_{\\mathrm {int}}$ is the normal to $\\Sigma _{\\sigma }$ viewed as a subspace of ${}_{\\sigma }$ , and where $\\chi $ is some cutoff function which is equal to one in $\\mathcal {C}_\\sigma \\cap {}_\\sigma ^{\\mathrm {flat}}$ and equal to zero in $\\mathcal {C}_\\sigma \\cap {}_\\sigma ^{\\mathrm {hyp}}$ .", "We then define $N$ to be parallel to ${\\tilde{N}}$ and such that $\\eta (n_\\wp ,N)=1$ : $\\begin{split}N\\parallel {\\tilde{N}},\\qquad \\eta (n_\\wp ,N)=1.\\end{split}$ A few remarks are in order about this gauge choice.", "[leftmargin=*] 1.", "In the exterior hyperboloidal region, $N$ is parallel to $\\frac{\\partial }{\\partial X^{n+1}}$ .", "This choice is motivated by the fact that in this region the catenoid looks almost like a hyperplane, so we are in fact parameterizing the VMC hypersurface $\\mathcal {M}$ as a graph over a hyperplane.", "The advantage is that this simplifies the derivation of the equations and the form of the nonlinear terms.", "As will become clear in the course of the proof of our main theorem, the precise structure of the nonlinearity is important only in this exterior region where we will be able to treat the difference between a hyperplane and a catenoid perturbatively.", "We will come back to the normalization of the length of $N$ .", "2.", "In the interior our choice of $N$ is crude, but since $\\ell $ and $\\xi -\\sigma \\ell $ are small, it is still close to the geometric normal $n_{\\wp }$ .", "Our choice in this region is consistent with our general philosophy that besides some spectral information on the linearized operator and appropriate modulation equations for the parameters (which will be a consequence of our first order formulation and orthogonality conditions), precise structures are not so important in the interior region.", "3.", "Finally, the reason for the length normalization of $N$ is that we want the linear part of the equation satisfied by $\\psi $ to be (except for errors coming from ${\\dot{\\ell }}$ and ${\\dot{\\xi }}-\\ell $ not vanishing) $\\begin{split}\\Box 
\\psi +| {\\mathrm {I\\!I}} |^2\\psi ,\\end{split}$ where $\\Box $ and $ {\\mathrm {I\\!I}} $ denote, respectively, the wave operator and second fundamental form of $\\mathcal {C}_\\sigma $ , in the case where ${\\dot{\\ell }}\\equiv 0$ and ${\\dot{\\xi }}\\equiv \\ell $ .", "This is important because $\\Box +| {\\mathrm {I\\!I}} |^2$ is precisely the operator $-\\partial _{t}^{2} + \\underline{H}$ conjugated by the Lorentz transform with parameter $\\ell $ (after a suitable translation).", "To summarize, our profile is defined as $\\mathcal {Q}= \\cup _\\sigma \\Sigma _\\sigma $ with $\\Sigma _\\sigma =\\mathcal {C}_\\sigma \\cap {}_\\sigma $ , and $\\mathcal {C}_\\sigma $ and ${}_\\sigma $ as defined in (REF ), (REF ), (REF ), (REF ), and the perturbation is described by a scalar function $\\psi :\\cup _\\sigma \\Sigma _\\sigma \\rightarrow \\mathbb {R}$ defined by (REF ), (REF ), (REF ).", "We will denote the hyperboloidal and flat parts of the profile by $\\mathcal {C}_{{\\mathrm {hyp}}}:=\\lbrace X^0\\ge \\sigma _{\\mathrm {temp}}(X)+\\delta _1\\rbrace $ and $\\mathcal {C}_{{\\mathrm {flat}}}:=\\lbrace X^0\\le \\sigma _{\\mathrm {temp}}(X)-\\delta _1\\rbrace $ respectively.", "Remark 1.4 The following simplified picture is helpful when thinking about the foliation and the definition of the profile.", "Imagine the scenario where we want to decompose a solution $u$ of a semilinear equation $\\Box u = F(u)$ in terms of a soliton $Q$ and a remainder $\\psi $ .", "Suppose the equation is translation and Lorentz invariant, and let $Q_{\\xi ,\\ell }$ denote the translated and boosted soliton.", "We foliate the domain by leaves which are flat up to a radius of order $R$ about $\\xi (\\tau )$ , and then become asymptotically null and approach the cone through $(-R,0)^\\intercal $ translated by $\\xi (\\tau )$ and boosted by $\\ell (\\tau )$ , as in the following schematic figure.", "[Figure: leaves ${}_0$ and ${}_\\tau $ of the foliation over the initial slice $x^0=0$ , flat near the moving center curve $\\xi $ and asymptotically null at large radius.]", "Our profile construction corresponds to decomposing the solution as $u=Q_{\\xi (\\tau ),\\ell (\\tau )}+\\psi $ on the leaf ${}_\\tau $ ."
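To make the construction of the time function $\\sigma =\\mathfrak {m}(X^0,\\sigma _{\\mathrm {temp}})$ concrete, the following minimal sketch implements one admissible smoothed minimum $\\mathfrak {m}$ (the particular formula is an illustrative assumption, not the choice made in the body of the paper): it agrees with $\\mathrm {min}(x,y)$ whenever $|x-y|>\\delta _1$ and interpolates smoothly across the transition region, which is where the leaves ${}_\\sigma $ change from flat to hyperboloidal.

```python
import numpy as np

def smooth_step(t):
    """C^infinity step function: 0 for t <= 0, 1 for t >= 1 (standard bump construction)."""
    t = np.clip(np.asarray(t, dtype=float), 0.0, 1.0)
    def f(s):
        out = np.zeros_like(s)
        pos = s > 0
        out[pos] = np.exp(-1.0 / s[pos])
        return out
    a, b = f(t), f(1.0 - t)
    return a / (a + b)

def smoothed_min(x, y, delta1):
    """One possible m(x, y): equals min(x, y) whenever |x - y| > delta1."""
    s = x - y
    # phi = 1 for s <= -delta1 and phi = 0 for s >= delta1, smooth in between
    phi = 1.0 - smooth_step((s + delta1) / (2.0 * delta1))
    return y + s * phi

# Toy time function sigma = m(X^0, sigma_temp): it equals X^0 where
# sigma_temp >= X^0 + delta1 (flat part) and equals sigma_temp where
# X^0 >= sigma_temp + delta1 (hyperboloidal part).
delta1 = 0.1
X0 = np.linspace(-1.0, 1.0, 2001)
sigma_temp = np.zeros_like(X0)
sigma = smoothed_min(X0, sigma_temp, delta1)

far = np.abs(X0 - sigma_temp) > delta1
print(np.allclose(sigma[far], np.minimum(X0, sigma_temp)[far]))  # True: agrees with min
print(np.max(np.abs(np.diff(sigma, 2))))  # uniformly small: the corner of min is smoothed out
```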
], [ "First-Order Formulation, Modulation Equations and Selection of a Codimension One Set of Initial Data", "The role of the first-order formulation is to derive the evolution equations for the modulation parameters $\\xi $ and $\\ell $ .", "The modulation parameters are fixed by imposing the matching number of “orthogonality conditions” on the perturbation.", "The orthogonality conditions also guarantee that the perturbation stays away from the kernel of the linearized operator.", "Our approach is based on that of [43], which is in turn motivated by [48].", "The first order formulation is closely related to a Hamiltonian formulation of the original Euler-Lagrange equations.", "To arrive at an adequate first order formulation we need to fix a time function.", "In our case we have already discussed the foliation and we simply take the time function to be $\\sigma $ in (REF ).", "This is a degenerate choice because the level sets of $\\sigma $ are asymptotically null.", "To deal with this, we make use of the observation that the orthogonality conditions may be localized to a large compact set (see for instance [15]), and we impose conditions that involve the perturbation only on the flat part of $\\Sigma _{\\sigma }$ .", "An implication of localizing the orthogonality conditions is that the perturbation enters linearly in the parameter ODEs.", "Since the derivatives of parameters also enter linearly in the equation for the perturbation (see Section REF ), some care is needed to avoid circularity in the estimates.", "The key here is that the linear contributions of the perturbation stem from the localization of the eigenfunctions to the complement of the large compact set.", "Hence, the spatial decay of the eigenfunctions furnishes extra smallness.", "Two more technical issues deserve further explanation.", "In view of the gauge invariance of the problem, the choice of momentum variable for the first order formulation is not obvious.", "The proper choice must be such that the orthogonality conditions result in non-degenerate first order ODEs for $\\ell $ and $\\xi $ .", "We motivate our choice in Section REF .", "The derivation of the equations in first order form is rather technical and occupies most of Section .", "Additionally, due to the quasilinear nature of the HMVC equation, sometimes more derivatives of the modulation parameters arise than we can a priori control in our bootstrap.", "In principle, it may be possible to use the hyperbolic structure of the equation to solve for the highest order time derivatives in terms of spatial derivatives of the perturbation, and to use integration by parts to avoid the loss of regularity (see for instance [12]).", "However, this approach would have to carefully exploit the structure of the equation, which becomes especially difficult in view of the complex form of the equations in the first order formulation.", "Instead, we modify the orthogonality conditions to obtain smoothing of the modulation parameters.", "This is a robust approach that does not rely on the algebraic structures of the equations.", "The details are carried out in Section REF .", "Conceptually, we exploit the freedom that while the final values of the parameters are determined by the initial conditions for the HVMC equation, their trajectories are not.", "Technically, this is achieved by choosing the orthogonality conditions so that $\\xi $ and $\\ell $ satisfy ODEs of the forms ${\\dot{\\ell }}=S\\mathcal {F}_{\\ell }$ and ${\\dot{\\xi }}-\\ell =S\\mathcal {F}_\\xi $ , where 
$\\mathcal {F}_\\ell $ and $\\mathcal {F}_\\xi $ depend on the perturbation and its derivatives, and $S$ is a smoothing operator in the time variable.", "We choose the integral kernel $k(\\sigma , \\sigma ^{\\prime })$ of $S$ to be compactly supported in the range $\\sigma ^{\\prime } < \\sigma $ to preserve the causality of the smoothed-out modulation equations (i.e., $\\xi (\\sigma ), \\ell (\\sigma )$ are independent of the solution at future times $\\sigma ^{\\prime } > \\sigma $ ).", "Finally we say a few words about the shooting argument discussed in Item (3) in Section REF .", "The decomposition $\\psi =a_{+}Z_{+}+a_{-}Z_{-}+\\phi $ is derived in Section REF .", "The ODEs satisfied by $a_{\\pm }$ are given in equation (REF ), and again involve a smoothing operator in the time variable.", "The trapping assumption is stated in equation (REF ).", "Note that this is at the level of the derivative of $a_{+}$ .", "Finally, the standard topological argument (see for instance [29]) is described in Step 2a of the proof of Theorem REF in Section REF .", "For more background on the construction of center-stable manifolds we refer to [40], [34]." ], [ "Uniform Boundedness of Energy and Integrated Local Energy Decay", "We now discuss the ideas behind our proof of the uniform boundedness and integrated local energy decay estimates for $\\phi $ .", "In the case of the linearized equation $(-\\partial _{t}^{2} + \\underline{H}) \\psi = f$ around the Lorentzian catenoid, both bounds follow from existing methods [46], [31], [28], [30]; see Section REF below.", "The challenge is to extend these estimates to $\\phi $ in our decomposition of the solution.", "Here, $\\phi $ solves the equation $\\mathcal {P}\\phi = f,$ where $\\mathcal {P}$ is the linearized HVMC operator around $\\mathcal {Q}$ modulo terms involving $\\dot{\\ell }$ and $\\dot{\\xi }-\\ell $ , which are regarded as nonlinearities.", "The right-hand side $f$ consists of the profile error (i.e., failure of $\\mathcal {Q}$ to solve HVMC; this includes terms linear in $\\dot{\\ell }$ and $\\dot{\\xi }-\\ell $ ) and quasilinear nonlinearity.", "We refer to (REF ), (REF ) (interior) and (REF ) (exterior) for the precise expressions.", "We work under a trapping assumption for $a_{+}$ and suitable bootstrap assumptions on $\\xi $ , $\\ell $ , $a_{-}$ and $\\phi $ ; see Section .", "The proof of uniform boundedness of energy for $\\phi $ follows by using the global almost stationary vectorfield ${\\bf T}$ (see Section REF for the definition) as a vectorfield multiplier, and using the orthogonality conditions to obtain coercivity of the spatial part of the operator $\\mathcal {P}$ .", "To control higher ${\\bf T}$ -derivatives, we use ${\\bf T}$ as a commuting vectorfield.", "Higher spatial derivatives are then controlled by using the equation and elliptic regularity estimates.", "We refer to Proposition REF for the precise statements and proofs.", "The proof of integrated local energy decay for $\\phi $ is significantly more difficult due to the reasons discussed in Section REF , including quasilinearity (i.e., occurrence of nonlinear second-order terms), trapping (i.e., existence of an unstable trapped null geodesic along $\\lbrace \\rho = 0\\rbrace $ ), eigenvalues of the stability operator (i.e., zero and negative eigenvalues of $\\underline{H}$ ), and nonstationarity of $\\mathcal {Q}$ .", "We seek to divide-and-conquer these difficulties.", "Our first main tool is a vectorfield multiplier argument that resembles the proof for the base case $-\\partial
_{t}^{2} + \\underline{H}$ on the Lorentzian catenoid $\\mathcal {C}$ in Section REF , but adapted to our profile $\\mathcal {Q}$ .", "This argument gives the desired integrated local energy decay estimate with an additional lower order term $\\Vert \\phi \\Vert _{L^{2}(K)}$ on the right-hand side, where $K$ is a spacetime cylinder around the trajectory $\\xi $ .", "For details, see the proof of Lemma REF below.", "In particular, this argument takes care of issues (i) (quasilinearity) and (ii) (trapping), which are “high time frequency” issues.", "To handle the remaining issues we introduce our next key tool, the time smoothing operator $P_{\\le N}$ , where $N^{-1}$ is the smoothing scale (equivalently, $N$ is the time frequency localization scale).", "Our initial observation is that $\\Vert \\phi - P_{\\le N} \\phi \\Vert _{L^{2}(K)}$ is small compared to the left-hand side of integrated local energy decay if $N$ is sufficiently large (see Lemma REF below), so we only need to control $P_{\\le N} \\phi $ .", "We have thus reduced the problem to the consideration of only “low time frequencies”!", "The key benefit of $P_{\\le N}$ is that, by elliptic regularity (for the part of $\\mathcal {P}$ that does not involve ${\\bf T}$ ), any potentially dangerous second order term $\\partial ^{2} P_{\\le N} \\phi $ may be bounded in terms of $P_{\\le N} \\phi $ and $\\mathcal {P}(P_{\\le N} \\phi )$ .", "Hence, the equation $\\mathcal {P}P_{\\le N} \\phi = P_{\\le N} f + [\\mathcal {P}, P_{\\le N}] \\phi $ may be thought of as $\\mathcal {P}_{0} P_{\\le N} \\phi = \\hbox{(perturbative error)}$ , where $\\mathcal {P}_{0}$ on the left-hand side is the operator obtained by conjugating $(-\\partial _{t}^{2} + \\underline{H})$ with the Lorentz transformation with parameter $\\underline{\\ell }$ .", "In the context of the bootstrap argument, $\\underline{\\ell }$ would be the final velocity parameter.", "This summarizes how issue (iv) (nonstationarity of $\\mathcal {Q}$ ) shows up and gets resolved in our proof.", "It remains to establish an integrated local energy decay estimate for $P_{\\le N} \\phi $ , which would in particular control $\\Vert P_{\\le N} \\phi \\Vert _{L^{2}(K)}$ .", "As discussed earlier, to obtain such a bound from the properties of $-\\partial _{t}^{2} + \\underline{H}$ , we need $P_{\\le N} \\phi $ to satisfy $2n+2$ orthogonality conditions on suitable time slices (in this case, they are boosts of $\\lbrace X^{0} = const \\rbrace $ by $\\underline{\\ell }$ ).", "This is issue (iii) (eigenvalues of the stability operator).", "Our idea is to use a suitable multiplier argument to transfer our orthogonality conditions on $\\Sigma _{\\sigma }$ to the needed ones; see the proof of Proposition REF .", "We remark that, at this point, we need doubly integrable decay rates of $\\dot{\\xi } - \\ell $ and $\\dot{\\ell }$ to control the error.", "This procedure also requires the right-hand side of the equation to be localized to the flat portion of $\\Sigma _{\\sigma }$ .", "For this reason, we enact (in fact, twice for technical purposes) the so-called near-far decomposition in our proof; see (REF ), (REF ), (REF ) and (REF ).", "We end this part with a remark on the time smoothing operator $P_{\\le N}$ .", "We define this operator as a smooth cutoff in time frequencies, where the Fourier transform is defined in suitable coordinates.", "However, unlike the Fourier transform in time, whose definition usually requires taking the Laplace transform first and then considering its analytic 
continuation, the time smoothing operator is easier to make sense of as an integral operator on physical space." ], [ "Vectorfield Method for Moving Profile", "The final part of the scheme from Section REF is proving pointwise decay for the perturbation.", "For this purpose we use the $r^p$ -vectorfield method introduced in [10].", "This method combines an ILED estimate in a bounded region with vectorfield estimates outside a compact set to obtain pointwise decay.", "We refer to [33] for a review of the history of the vectorfield method.", "The $r^p$ method applies to wave equations on asymptotically flat spacetimes.", "In its simplest form in [10] it yields the (interior) pointwise decay rate $t^{-1}$ on $\\mathbb {R}^{1+n}$ .", "In [41] the method was extended to give the decay rate $t^{-\\frac{3}{2}+\\delta }$ .", "A further extension was obtained in [33] giving the rate $t^{-\\frac{n}{2}}$ .", "We refer to [10], [33] for a general review of the method, and to [33] for an explanation of the scheme for the improved $t^{-\\frac{n}{2}}$ decay.", "In this work we adapt the method from [33].", "Our setup differs from the one in [33] in a few important respects which we now describe.", "The first new aspect is that our foliation is centered at the trajectory $\\xi $ (see Section REF ).", "To deal with this, we introduce a null frame that is adapted to the dynamically constructed foliation.", "We then define our weighted multiplier and commutator vectorfields with respect to this null frame, with spatial weights that are measured from the moving center $\\xi $ .", "The remarkable fact is that the wave equation written in the moving null frame has the right structure for the application of the $r^p$ -vectorfield method.", "In particular, because in general ${\\dot{\\ell }}\\ne 0$ and ${\\dot{\\xi }}\\ne \\ell $ , there will be new error terms with time decay in the wave equation itself, and in the multiplier and commutator identities.", "The important point is that these errors do not grow at spatial infinity, so they can be estimated in our bootstrap argument.", "Related to this issue, is the failure of the profile to be an exact solution of the HVMC equation.", "This implies that the radiation satisfies a wave equation with a source term with time decay.", "One of the main advantages of our foliation, and the adapted definition of the commutators, is that there is no spatial growth when the commutators fall on the source term.", "Another difference of our setup with that of [33] is that our linearized operator has an order zero potential.", "Moreover, the elliptic part of the operator has a nontrivial kernel.", "These differences become relevant when using the improved decay of higher time derivatives of the perturbation to get improved decay for arbitrary derivatives.", "In [33] this is achieved by viewing the wave equation as an elliptic equation with the time derivatives as source terms, and applying global elliptic estimates.", "In our context we need a separate argument to deal with the zero order potential and the kernel.", "These arguments are presented in Lemma REF and Corollary REF .", "The orthogonality conditions from the first order formulation are used here to guarantee that the projection of the perturbation on the kernel has sufficient decay.", "Our modified scheme in dimension $n=5$ gives the pointwise decay $t^{-\\frac{9}{4}+\\frac{\\kappa }{2}}$ , $\\kappa >0$ arbitrary, for the perturbation.", "This is different from the rate $t^{-\\frac{5}{2}}$ in [33], and we now explain 
the reason for the discrepancy.", "Let $\\phi $ denote the perturbation.", "An intermediate step in deriving pointwise decay is proving that $\\Vert \\partial ^3\\phi \\Vert _{L^2}\\lesssim t^{-3}$ , when at least one of the derivatives is with respect to time.", "Global elliptic estimates are then applied in [33] to conclude that $\\Vert \\partial ^3\\phi \\Vert _{L^2}\\lesssim t^{-3}$ for arbitrary derivatives.", "A similar argument gives $\\Vert \\partial ^2\\phi \\Vert _{L^2}\\lesssim t^{-2}$ .", "The $t^{-\\frac{5}{2}}$ pointwise decay in [33] then follows from the Gagliardo-Nirenberg estimate $\\Vert \\phi \\Vert _{L^\\infty }\\lesssim \\Vert \\partial ^2\\phi \\Vert _{L^2}^{\\frac{1}{2}}\\Vert \\partial ^3\\phi \\Vert _{L^2}^{\\frac{1}{2}}$ .", "In our case, the equation for $\\phi $ contains a source term that depends linearly on the parameter derivatives, which we denote by ${\\dot{\\wp }}$ .", "Spatial derivatives do not improve the time decay of this term, so we cannot hope to improve the decay of $\\Vert \\partial ^k\\phi \\Vert _{L^2}$ beyond the decay of ${\\dot{\\wp }}$ .", "On the other hand, the ODEs for the parameters can be used to bound $|{\\dot{\\wp }}|$ by a small multiple of $\\Vert \\langle r\\rangle ^{-c_1}\\phi \\Vert _{L^2}$ , $c_1<\\frac{7}{2}$ (here $r$ denotes the distance to the center $\\xi $ ).", "Using the elliptic estimates discussed in the previous paragraph (see Lemma REF ) we can estimate $\\Vert \\langle r\\rangle ^{-c_2}\\phi \\Vert _{L^2}$ , $c_2<\\frac{5}{2}$ , by $|{\\dot{\\wp }}|$ , where the restriction on $c_2$ comes from the order zero potential in the linear operator.", "Taking $c_1=c_2=\\frac{5}{2}-\\kappa $ we get the estimate $\\Vert \\langle r\\rangle ^{-\\frac{5}{2}+\\kappa }\\phi \\Vert _{L^2}\\lesssim t^{-\\frac{5}{2}+\\delta }$ .", "This sharp estimate can then be used to obtain the non-sharp estimate $\\Vert \\partial ^3\\phi \\Vert _{L^2}\\lesssim t^{-\\frac{5}{2}+\\kappa }$ .", "Combined with Gagliardo-Nirenberg we obtain the pointwise decay $t^{-\\frac{9}{4}+\\frac{\\kappa }{2}}$ .", "Note that if we used elliptic estimates with fractional derivatives (instead of weights) and a fractional Gagliardo-Nirenberg estimate, we could hope to obtain the decay rate $t^{-\\frac{5}{2}+\\kappa }$ .", "Since the rate $t^{-\\frac{9}{4}+\\frac{\\kappa }{2}}$ is already sufficient to close our bootstrap, we did not further complicate the argument by introducing fractional derivatives."
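To spell out the exponent count in the preceding paragraph: combining $\\Vert \\partial ^2\\phi \\Vert _{L^2}\\lesssim t^{-2}$ and $\\Vert \\partial ^3\\phi \\Vert _{L^2}\\lesssim t^{-\\frac{5}{2}+\\kappa }$ with the Gagliardo-Nirenberg estimate gives $\\begin{split}\\Vert \\phi \\Vert _{L^\\infty }\\lesssim \\Vert \\partial ^2\\phi \\Vert _{L^2}^{\\frac{1}{2}}\\Vert \\partial ^3\\phi \\Vert _{L^2}^{\\frac{1}{2}}\\lesssim \\big (t^{-2}\\big )^{\\frac{1}{2}}\\big (t^{-\\frac{5}{2}+\\kappa }\\big )^{\\frac{1}{2}}=t^{-\\frac{9}{4}+\\frac{\\kappa }{2}},\\end{split}$ in agreement with the rate recorded above.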
], [ "Second Statement of the Main Result", "To state our main result more precisely, we first describe the initial data.", "Consider two functions $\\begin{split}\\psi _0,\\psi _1\\in C^\\infty _0({\\underline{\\mathcal {C}}}),\\qquad {\\mathrm {supp}}~ \\psi _0,\\psi _1\\subseteq {\\underline{\\mathcal {C}}}\\cap \\lbrace |{\\underline{X}}|<R/2\\rbrace .\\end{split}$ Using the notation introduced in Sections REF and REF , see (REF ) and (REF ), consider $\\begin{split}\\Phi _0[\\psi _0]= F+(\\psi _0\\circ F) N\\circ F,\\qquad \\Phi _1[\\psi _1]=(1,0)+(\\psi _1\\circ F)N\\circ F.\\end{split}$ We also let ${\\tilde{\\varphi }}_\\mu :=\\chi \\underline{\\varphi }_\\mu $ where the cutoff function $\\chi $ is equal to one on ${\\underline{\\mathcal {C}}}\\cap \\lbrace |{\\underline{X}}|<R/3\\rbrace $ and supported on ${\\underline{\\mathcal {C}}}\\cap \\lbrace |{\\underline{X}}|<R/2\\rbrace $ .", "As discussed earlier, our stability theorem holds under a codimension one condition on the initial data.", "This condition is given by the vanishing of a certain functional on $(\\psi _0,\\psi _1)$ .", "But, as the exact form of the vanishing condition is a bit complicated to state at this point, we defer this until Section , and simply refer to (REF ) in the statement of our main theorem.", "Theorem 1.5 Let $n\\ge 5$ , and consider $\\Phi _0$ , $\\Phi _1$ as in (REF ), (REF ), and assume that $(\\psi _0,\\psi _1)$ satisfy (REF ).", "If $\\epsilon \\ge 0$ is sufficiently small, then there exist $b_0\\in \\mathbb {R}$ with $|b_0|\\lesssim 1$ and $\\Phi :\\mathbb {R}\\times I \\times \\mathbb {S}^{n-1}\\rightarrow \\mathbb {R}^{1+(n+1)}$ satisfying (REF ), such that $\\Phi \\vert _{\\lbrace t=0\\rbrace }=\\Phi _0[\\epsilon (\\psi _0+b_0{\\tilde{\\varphi }}_\\mu )]$ and $\\partial _t\\Phi \\vert _{\\lbrace t=0\\rbrace }=\\Phi _1[\\epsilon (\\psi _1-\\mu b_0{\\tilde{\\varphi }}_\\mu )]$ .", "Moreover, there exist $\\ell ,\\xi :\\mathbb {R}\\rightarrow \\mathbb {R}^{n}$ satisfying $|\\ell |,|{\\dot{\\xi }}|\\lesssim \\epsilon $ and $\\begin{split}|{\\dot{\\ell }}(\\sigma )|, |{\\dot{\\xi }}(\\sigma )-\\ell (\\sigma )|\\rightarrow 0\\qquad \\mathrm {as}~\\sigma \\rightarrow \\infty ,\\end{split}$ such that the image of $\\Phi $ can be parameterized as $\\begin{split}\\cup _\\sigma \\Sigma _\\sigma \\ni p\\mapsto p+\\psi (p)N(p),\\end{split}$ with $\\Vert \\psi \\Vert _{L^\\infty (\\Sigma _\\sigma )}\\rightarrow 0$ as $\\sigma \\rightarrow \\infty $ .", "More precisely, there exists a positive $\\kappa \\ll 1$ such that $\\begin{split}|{\\dot{\\ell }}(\\sigma )|,|{\\dot{\\xi }}(\\sigma )-\\ell (\\sigma )|\\lesssim \\epsilon \\sigma ^{-\\frac{5}{2}+\\kappa },\\qquad \\Vert \\psi \\Vert _{L^\\infty (\\Sigma _\\sigma )}\\lesssim \\epsilon \\sigma ^{-\\frac{9}{4}+\\kappa },\\qquad \\mathrm {as}~\\sigma \\rightarrow \\infty .\\end{split}$ More precise decay estimates on $\\psi $ and the parameters can be found in Propositions REF and REF .", "We now make a few remarks about Theorem REF .", "Remark 1.6 It follows from the decay rate of ${\\dot{\\ell }}$ and ${\\dot{\\xi }}-\\ell $ that there exist ${\\underline{a}},{\\underline{\\ell }}\\in \\mathbb {R}^n$ such that $\\ell (t)\\rightarrow {\\underline{\\ell }}$ and $\\xi (t)\\rightarrow {\\underline{a}}+ {\\underline{\\ell }}t$ as $t\\rightarrow \\infty $ .", "In this sense our theorem implies that the solution approaches a fixed, boosted and translated Lorentzian catenoid.", "The differential equations governing the evolution of the parameters are derived in 
Section REF .", "Remark 1.7 As mentioned earlier the codimension one condition of the data in Theorem REF is optimal.", "However, in this work, we do not pursue the question of uniqueness and continuous dependence of $b_0$ on the initial data $\\psi _0$ and $\\psi _1$ .", "As a result, we cannot infer that the set of initial data, considered in Theorem REF form a codimension one submanifold in any topology." ], [ "Further Discussions", "Further discussion of related works and subjects are in order." ], [ "Other prior works on the hyperbolic vanishing mean curvature equation", "Beyond the previously mentioned result [4] on local existence for the HVMC equation for sufficiently smooth initial data, we point out the low regularity local well-posedness results [13], [2].", "We refer to [49] for the study of local well-posedness in relation to the action principle formulation.", "The global nonlinear stability of hyperplanes under the HVMC evolution was considered in [7], [24], [42], [51].", "Under symmetric perturbations the nonlinear stability of the Lorentzian catenoid was studied in [23], [12] and that of the Lorentzian helicoid in [27].", "Simple planar travelling wave solutions to the HVMC equation were proven to be globally nonlinearly stable in [1].", "Singularity formation has been analyzed in [35], [19], [52], [6].", "For a discussion of the relevance of the HVMC equation in physics, we refer the reader to [5], [4], [17].", "The Lorentzian constant positive mean-curvature flow has been considered in [50]." ], [ "Comparison with black hole stability", "The present paper concerns nonlinear asymptotic stability of a stationary solution to a multi-dimensional quasilinear wave equation without any symmetry assumptions.", "Despite obvious differences in the inherent complexities of the underlying PDEs, our main result may be formally compared with the recent colossal works [9], [21], [20] on the nonlinear asymptotic stability of Kerr and Schwarzschild black holes as stationary solutions to the vacuum Einstein equation, which is a $(3+1)$ -dimensional quasilinear wave equation, without any symmetry assumptions.", "Our problem and the black hole stability problem share some important features.", "The nontrivial kernel of the linearized operator around the stationary solution necessitates modulation of some parameters and a suitable choice of gauge (i.e., a way to represent the solution among many equivalent descriptions).", "In the case of the Schwarzschild black hole, a codimension condition on the initial data naturally appears as in our problem [9], [21].", "At the level of proofs, this paper and the above works share the same basic strategy for proving the pointwise decay of the perturbation, namely, to first prove an integrated local energy decay (or Morawetz) estimate and the uniform boundedness of energy, then to establish pointwise decay by some version of the vectorfield method.", "Indeed, this powerful strategy was mostly developed in works with the black hole stability problem in mind – see [10], and also [47], [32], [37].", "Needless to say, our problem is simpler compared to the black hole stability problem in a number of aspects, such as the spatial dimension, the gauge choice (compare our choice described in Section REF with [9], [21], [20]), and the analysis of the linearized problem (compare the discussion in Section REF with [16], [8], [3], [18]).", "Nevertheless, in this paper we satisfactorily resolve a key issue that is shared by many soliton stability problems, but not 
with the black hole stability problem – this is the issue of modulation of the translation and boost parameters.", "In our problem, as well as in many soliton stability problems, the stationary solution (the catenoid or the soliton) is defined on a natural ambient spacetime, and it is of interest to track the evolution of the translation and boost parameters in relation to the ambient spacetime.", "In contrast, in general relativity there is no notion of an ambient spacetime, and the analogous issue is subsumed in the choice of the gauge in the black hole stability problem.", "As discussed earlier, this issue is resolved in our work by a new construction of a dynamic profile representing a “moving catenoid,” the use of localized orthogonality conditions that enables us to utilize a suitable first-order formulation of the equation to derive the evolution equations for the parameters, a robust scheme for establishing integrated local energy decay for perturbations of the dynamic profile from the case of the stationary solution, as well as an adaptation of the $r^{p}$ -method for the dynamic profile.", "In view of the pervasiveness of the same issue in soliton stability problems, we are hopeful that our ideas might be useful elsewhere as well." ], [ "Soliton stability problem for semilinear dispersive equations", "There is a vast literature on the problem of stability of solitons for semilinear dispersive equations; for those who are interested, we recommend the excellent survey articles of Kowalczyk–Martel–Muñoz [22] and Tao [45] as a good starting point.", "In relation to this rich and beautiful subject, our aim in this paper is to specifically tackle those challenges that arise from the quasilinearity of the equation.", "Our aim, in turn, is motivated by the conjectured asymptotic stability of some well-known topological solitons solving quasilinear wave equations, such as the Skyrmion for the Skyrme model [26]." 
], [ "Outline of the Paper", "The remainder of this paper is organized as follows.", "Section  contains the notation and some preliminary results.", "In Section  we derive a first order formulation of our problem in terms of a vector unknown $\\vec{\\psi }$ , for a given set of parameters $\\xi $ and $\\ell $ .", "We also state the corresponding orthogonality conditions and carry out a further decomposition of $\\vec{\\psi }=\\vec{\\phi }+a_{+}{\\vec{Z}}_{+}+a_{-}{\\vec{Z}}_{-}$ , by separating the contribution of growing mode of the linearized operator.", "For this decomposition and our choice of orthogonality conditions, we then derive the modulation equations satisfied by $\\xi $ , $\\ell $ , and $a_{\\pm }$ .", "In Section , we give a more detailed description of the foliation, various coordinates, and vectorfields, again for a given choice of parameters $\\xi $ and $\\ell $ .", "We also derive expressions for the relevant operators in terms of the described coordinates and vectorfields.", "The bootstrap assumptions are stated in Section , where, in Propositions REF and REF we also give more precise decay estimates than the ones given in Theorem REF .", "The proof of Propositions REF and REF will occupy the remaining sections of the paper, and in Section  we further show that Theorem REF follows from the bootstrap propositions.", "Sections  contains the proof of Proposition REF which closes the bootstrap assumptions for all parameters except the growing mode $a_{+}$ .", "For the latter, a separate shooting argument is needed, which is carried out in the proof of Theorem REF in the last part of Section .", "The proof of Proposition REF is contained in Sections  and .", "Section  contains a general local energy decay at the linear level.", "In view of the calculations in Section  and the bootstrap assumptions in Section , the assumptions on the linear operator in this estimate are satisfied for us.", "In Section  we use the linear result of Section  to prove nonlinear energy and local energy decay estimates.", "Using these, we also prove $r^p$ weighted energy estimates, which in turn are used to prove decay estimates and Proposition REF ." ], [ "Notation and Conventions", "Here we collect some of the notation and conventions that are used repeatedly in this work.", "This is meant as a reference for the reader, and some of the precise definitions will appear only later in the paper.", "Some of the notation and conventions which are used more locally in various parts of the paper do not appear in this list." 
], [ "${\\underline{\\mathcal {C}}}$ denotes the Riemannian catenoid with its standard embedding in $\\mathbb {R}^{n+1}$ , and $\\mathcal {C}= \\mathbb {R}\\times \\underline{\\mathcal {C}}$ the product Lorentzian catenoid in $\\mathbb {R}^{1+(n+1)}$ .", "The boost and translationTo be precise, to leading order $\\xi \\approx t\\ell +a$ where $a$ is a fixed translation parameter.", "parameters are denoted by $\\ell $ and $\\xi $ respectively, where $|\\ell |,|{\\dot{\\xi }}|<1$ .", "In our applications we will always have $|\\ell |,|{\\dot{\\xi }}|\\ll 1$ .", "Here, and below, the dot over a parameter denotes the time derivative.", "We will also sometimes use a prime $^{\\prime }$ to denote the derivative of a function of a single variable (such as time).", "Given $\\ell $ and $\\xi $ as above, the boosted catenoid $\\mathcal {C}_\\sigma $ and ${}_\\sigma $ are defined as in (REF ) and (REF ), and the profile is $\\cup _\\sigma \\Sigma _\\sigma $ , $\\Sigma _\\sigma =\\mathcal {C}_\\sigma \\cap \\Sigma _\\sigma $ .", "The almost normal vector to the profile is denoted by $N:\\cup _\\sigma \\Sigma _\\sigma \\rightarrow \\mathbb {R}^{1+(n+1)}$ , and the perturbation, defined in (REF ), by $\\psi :\\cup _\\sigma \\Sigma _\\sigma \\rightarrow \\mathbb {R}$ .", "In the first order formulation, $\\vec{\\psi }=(\\psi ,{\\dot{\\psi }})$ denotesWhen there is no risk of confusion, we identify row and column vectors in this work.", "So, for instance, we use both $(\\psi ,{\\dot{\\psi }})$ and $(\\psi ,{\\dot{\\psi }})^\\intercal $ for $\\vec{\\psi }$ .", "the vector form of the perturbation, where ${\\dot{\\psi }}$ is the momentum variable and roughly corresponds to the time derivative of $\\psi $ .", "Corresponding to the negative eigenvalue of the linearized operator $-\\Delta _{\\underline{\\mathcal {C}}}-| {\\mathrm {I\\!I}} |^2$ (see Section REF ) there are two projection coefficients in the first order formulation: $a_{+}$ denotes the unstable (growing mode) coefficient corresponding to the eigenfunction, and $a_{-}$ the stable (decaying mode) coefficient.", "The remainder, after subtracting the contribution of the corresponding eigenfunction from $\\vec{\\psi }$ , is denoted by $\\vec{\\phi }$ at the vector level (in the first order formulation) and by $\\phi $ at the scalar level (see Section REF ).", "We will denote the flat and hyperboloidal parts of the profile by $\\mathcal {C}_{{\\mathrm {flat}}}:=\\lbrace X^0\\ge \\sigma _{\\mathrm {temp}}(X)+\\delta _1\\rbrace $ and $\\mathcal {C}_{{\\mathrm {hyp}}}:=\\lbrace X^0\\le \\sigma _{\\mathrm {temp}}(X)-\\delta _1\\rbrace $ respectively.", "We often refer to the region inside a large compact set contained in $\\mathcal {C}_{\\mathrm {flat}}$ as the interior, and the complement of this region as the exterior." 
], [ "We will use ${\\dot{\\wp }}$ to denote the parameter derivatives ${\\dot{\\ell }}$ and ${\\dot{\\xi }}-\\ell $ .", "When used as a vector, ${\\dot{\\wp }}=({\\dot{\\ell }},{\\dot{\\xi }}-\\ell )^\\intercal $ in that order.", "When used schematically, for instance in estimates or to denote dependence on parameter derivatives, the order will not be important, so for example $O({\\dot{\\wp }})$ denotes terms that are bounded by $|{\\dot{\\ell }}|$ or $|{\\dot{\\xi }}-\\ell |$ .", "The distinction will be clear from the context.", "More generally, ${\\dot{\\wp }}^{(k)}$ denotes a total of $k$ derivatives of the parameters, so for instance ${\\dot{\\wp }}^{(2)}$ could be any of $\\ddot{\\ell }$ , $|{\\dot{\\xi }}-\\ell |^2$ , ${\\ddot{\\xi }}-{\\dot{\\ell }}$ , etc.", "$\\dot{\\wp }^{(\\le k)}$ denotes a total of up to $k$ , but at least one, parameter derivatives.", "$\\wp ^{(\\le k)}$ denotes a total of up to $k$ parameter derivatives, but possibly also an undifferentiated $\\ell $ .", "We sometimes also use the notation $\\wp $ for $\\ell $ .", "Note that $\\xi $ itself cannot be written as $\\wp $ ($\\xi $ is expected to grow linearly in time), but ${\\dot{\\xi }}$ can be written as ${\\dot{\\xi }}={\\dot{\\xi }}-\\ell +\\ell $ , which is a sum of terms of the form ${\\dot{\\wp }}$ and $\\wp $ .", "A similar notation is used for ${\\dot{a}}_{\\pm }^{(k)}$ , $a_{\\pm }^{(\\le k)}$ , etc.", "Note that here even the undifferentiated $a_{\\pm }$ are expected to have time decay (for the growing mode $a_{+}$ only after appropriately modifying the initial data; see Theorem REF )." ], [ "$\\epsilon $ is the smallness parameter for the size of the initial perturbation.", "$\\kappa $ is a small positive absolute constant which arises in the decay rates in the bootstrap argument; see Section .", "In our bootstrap argument, the energy of the perturbation enters to linear order in estimating the parameter derivatives, and the parameter derivatives enter linearly in the energy estimates.", "What breaks the circularity is that the linear appearance of the energy of the perturbation in the estimates for the parameter derivatives is always accompanied by a small (but not decaying) constant.", "This small constant is denoted by $\\delta _\\wp $ in the bootstrap assumptions of Section .", "The final time of the bootstrap interval is denoted by $\\tau _f$ .", "There are also a few large radii that appear in our arguments.", "$R$ is a large constant such that the initial data are supported in ${\\underline{\\mathcal {C}}}\\cap \\lbrace |{\\underline{X}}|<R/2\\rbrace $ ; see (REF ).", "Also, the transition region from the flat to hyperboloidal parts of the foliation happens in the region $|{\\underline{X}}|\\simeq R$ ; see Section REF .", "The constant $R_1\\gg 1$ is such that $R_1\\ll R$ and such that the support of the test functionsThese are the truncated eigenfunctions of the linearized operator in the first order formulation.", "${\\vec{Z}}_i$ , $i\\in \\lbrace \\pm ,1,\\dots ,2n\\rbrace $ , in Section  is contained in $|\\rho |\\le R_1$ (see REF ).", "We will use ${R_1}$ as an absolute constant and gain smallness in inverse powers of ${R_1}$ .", "The size of the data, $\\epsilon $ , is considered small relative to any inverse power of ${R_1}$ .", "In particular, since in view of the bootstrap assumptions in Section  we have $|\\ell |\\lesssim \\epsilon $ , quantities such as $({R_1})^m|\\ell |$ are considered small, for any power $m>0$ .", "The smallness of the constant $\\delta _\\wp $ above is in 
terms of $\\ell $ and inverse powers of ${R_1}$ (the reason the energy of the perturbation enters linearly in the equation for the parameter derivatives is that the eigenfunctions are truncated at scale ${R_1}$ , so one should expect the error to get smaller for larger ${R_1}$ )." ], [ "We will mainly work with two sets of coordinates: $(t,\\rho ,\\omega )$ in the interior (flat) part of the foliation and $(\\tau ,r,\\theta )$ in the exterior (hyperboloidal) part.", "The precise definitions are given in Section  and REF for the interior, and in Section REF for the exterior.", "In addition to these, in a few occasions we will use the global non-geometric coordinates $(,,)$ , see Section REF , and the global geometric coordinates $({\\tilde{}},{\\tilde{}},{\\tilde{}})$ , see Section REF .", "${{\\bf T}}$ denotes the global almost stationary vectorfield, which in terms of the global non-geometric coordinates introduced in Section REF is given by $\\partial _{}$ .", "In general $\\partial $ denotes arbitrary derivatives that have size of order one, and $\\partial _\\Sigma $ the subset of these derivatives that are tangential to the leaves of the foliations.", "In the exterior region, ${\\tilde{\\partial }}_\\Sigma $ denotes derivatives which can be written as a linear combination of $\\partial _\\Sigma $ and $r^{-1}{{\\bf T}}$ , with coefficients of size of order one.", "In general we denote the number of derivatives by a superscript.", "For instance $\\partial _\\Sigma ^{\\le k}$ means up to $k$ tangential derivatives.", "There are also a few commutator and multiplier vectorfields which are used in the exterior in Section  in the context of proving decay estimates for the perturbation.", "The precise definitions are given in Section REF , but we give a brief description here: $L$ and ${\\underline{L}}$ are the outgoing and incoming almost null vectorfields.", "$\\Omega $ is the rotation vectorfield.", "$T$ , which is comparable and almost colinear with ${{\\bf T}}$ , is defined by $T=\\frac{1}{2}(L+{\\underline{L}})$ .", "In the exterior region where these vectorfields are defined we use $\\tilde{r}L$ , $\\Omega $ , and $T$ as commutators, and use $X^k$ (when $k=1$ we simply write $X$ ) to denote am arbitrary string of $k$ such vectorfields.", "Here ${\\tilde{r}}$ is a geometric radial variable introduced in Section REF ." ], [ "In general we use $\\mathrm {d}V$ to denote the induced volume form from the ambient space $\\mathbb {R}^{1+(n+1)}$ .", "If there is any risk of confusion we use a subscript to denote the subset on which the volume form is induced (for instance $\\mathrm {d}V_S$ for the subset $S$ ).", "When working in a fixed set of coordinates we sometimes write out the volume form explicitly.", "In the exterior region, it is sometimes more convenient to use the coordinate volume form for the Minkowski metric rather than the geometric induced one.", "It will be clear from the bootstrap assumptions that these two volume forms are comparable and therefore various norms defined with respect to them are equivalent.", "The volume form on the standard unit sphere will be denoted by $\\mathrm {d}\\theta $ or $\\mathrm {d}S$ interchangeably (or $\\mathrm {d}\\omega $ , $\\mathrm {d}$ , etc, depending on the coordinate system we are using)." 
], [ "We use the notation $\\chi $ for smooth cutoff functions defined on $\\cup _\\sigma \\Sigma _\\sigma $ and taking values in $[0,1]$ .", "We may denote the set on which $\\chi $ is equal to one by a subscript.", "For instance $\\chi _S$ is equal to one on $S$ and equal to zero outside of a neighborhood of $S$ (we will make the support more precise when needed).", "For a positive number $c$ , $\\chi _{\\ge c}$ denotes a cutoff which is one in the region $\\lbrace ||\\ge c\\rbrace $ and equal to zero outside of $\\lbrace ||\\ge \\frac{c}{2}\\rbrace $ .", "Here $$ is the radial coordinate from the global non-geometric coordinates in Section REF .", "We also define $\\chi _{<c}:=1-\\chi _{\\ge c}$ ." ], [ "The main result of this work is valid for dimensions $n\\ge 5$ .", "For concreteness we have set $n=5$ in many places (for instance for the decay rates in the bootstrap assumptions) and kept the notation $n$ in other places (for instance in some multiplier identities) where we thought this would add to the clarity of exposition.", "The reader can set $n=5$ everywhere, and the modifications for higher dimensional cases are minimal." ], [ "For the standard Riemannian catenoid as described in Section REF , and with the notation used there, the normal vector is given by $\\begin{split}\\nu \\equiv \\nu (z,\\omega )=(\\frac{\\Theta (\\omega )}{\\langle z\\rangle ^{n-1}},\\sqrt{1-\\langle z\\rangle ^{2-2n}}).\\end{split}$ As mentioned in Section REF , the first $n$ components, $\\nu ^i=\\frac{\\Theta ^i}{\\langle z\\rangle ^{n-1}}$ , $i=1,\\dots ,n$ , appear as eigenfunctions of the main linearized operator H. It is useful to keep in mind that these have decay $\\langle z\\rangle ^{1-n}$ and satisfy (here $\\sqrt{|h|}$ denotes the volume form associated to the metric (REF )) $\\begin{split}\\int \\nu ^i\\nu ^j \\sqrt{|h|}\\mathrm {d}\\omega \\mathrm {d}z = C\\delta ^{ij},\\end{split}$ where $\\delta ^{ij}$ is the Kronecker delta and $C$ is a constant of order one.", "We also remark that since the metric is asymptotically flat, the eigenfunction $\\underline{\\varphi }_\\mu $ from Section REF is exponentially decaying." ], [ "Many of the estimates and identities in this work are derived only near one of the asymptotic ends of the solution.", "In all cases, the other asymptotic end can be treated in exactly the same way, possibly with a change of overall sign.", "This remark applies in particular to many of the vectorfied identities and estimates, for instance in sections  and ." ], [ "As discussed in the introduction, the stability (or linearized) operator for the Riemannian catenoid is $-\\Delta _{\\underline{\\mathcal {C}}}-| {\\mathrm {I\\!I}} |^2$ , where $ {\\mathrm {I\\!I}} $ denotes the second fundamental form.", "We will sometimes use the notation $V=| {\\mathrm {I\\!I}} |^2$ when working with this linearized operator.", "When proving more general linear estimates (such as local energy decay) we still use $V$ for the potential, and impose conditions on the linearized operator which are satisfied by $-\\Delta _{\\underline{\\mathcal {C}}}-| {\\mathrm {I\\!I}} |^2$ .", "This distinction between the different uses of $V$ will be clear from the context." 
], [ "Outside a large compact set, we can parameterize each asymptotic end of the solution as a graph over a hyperplane (for instance the hyperplanes $\\lbrace X^{n+1}=\\pm S\\rbrace $ ).", "The function giving this parameterization for the Riemannian catenoid is denoted by $Q$ .", "We use $Q_\\wp $ to denote the corresponding function when taking into account the boost and translation parameters, although when there is no risk of confusion we drop $\\wp $ from the notation and simply write $Q$ .", "See Section ." ], [ "The notation $f=O(g)$ is used as usual to mean $|f|\\le C|g|$ for some constant $C$ .", "The notation $f=o_{\\alpha }(g)$ is also used in the usual way to mean that $|f|/|g|$ goes to zero as the parameter $\\alpha $ approaches a limiting value which will be clear from the context (usually zero or infinity).", "We will also use the notation $\\mathcal {O}$ whose meaning we now explain.", "In order to prove decay estimates for $\\phi $ we will commute the equation it satisfies with ${{\\bf T}}$ (see Subsection REF for the notation).", "To obtain the desired decay in time, it is important that $k$ applications of ${{\\bf T}}$ improve the decay of $\\phi $ by $^{-k}$ for $k\\le 2$ (the upper bound 2 comes from setting $n=5$ ).", "Similarly, we will need improved decay estimates on the time derivatives of the parameters up to two commutations of ${{\\bf T}}$ .", "These improved decay rates are reflected in the bootstrap assumptions in Section  and the estimates in Section  (see for instance Proposition REF ).", "For this, it is important that the various error terms that appear in our estimates have improved time decay up to two orders of differentiation in ${{\\bf T}}$ .", "In this process, we also need to commute the equation satisfied by $\\phi $ with the weighted derivatives ${\\tilde{r}}L$ and $\\Omega $ (see Subsection REF ), which have size of order ${\\tilde{r}}$ , near the asymptotically flat ends.", "Again, it is important that certain error terms have faster ${\\tilde{r}}$ decay in the exterior region with every application of $L$ and $\\frac{1}{{\\tilde{r}}}\\Omega $ , up to the order of commutation.", "To capture these improved decay properties we use the notation $\\mathcal {O}$ .", "That is, an error term of the form $\\mathcal {O}(f)$ is still bounded by $|f|$ after any number of differentiations by ${\\tilde{r}}L$ or $\\Omega $ in the exterior, and by $\\sum _{j\\le k}|{\\dot{\\wp }}^{(j)}{{\\bf T}}^{k-j}f|$ after $k$ differentiations by ${{\\bf T}}^k$ globally.", "For instance, an error that is denoted by $\\mathcal {O}({\\dot{\\wp }})$ will still be bounded by $\\mathcal {O}({\\dot{\\wp }})$ after applications of ${\\tilde{r}}L$ and $\\Omega $ in the exterior, and by $\\mathcal {O}({\\dot{\\wp }}^{(k+1)})$ after $k$ applications of ${{\\bf T}}$ globally.", "Also note that more than two differentiations in ${{\\bf T}}$ does not change the decay rate, so for instance a term of the form $\\mathcal {O}({\\dot{\\wp }})$ is still bounded by $\\mathcal {O}({\\dot{\\wp }}^{(3)})$ after $k$ applications of ${{\\bf T}}$ , $k\\ge 3$ .", "That we can bound higher derivatives of the parameters by their lower derivatives is a consequence of the parameter smoothing, which is carried out in Sections REF and REF to prevent loss of regularity (see also Section  for the corresponding estimates).", "Even though we start using this notation already in Section , the corresponding properties of these error terms follow only after the bootstrap estimates are stated in Section .", "It 
is worth mentioning that the error terms in sections  and  are always estimated after integrating against a compactly supported function.", "Therefore, the spatial decay of these terms is not relevant, and is not specified when using the $\mathcal {O}$ notation there." ], [ "Local Existence", "As mentioned in the introduction, a systematic treatment of local existence for the HVMC equation is contained in [5], [4].", "For our purposes it is convenient to also have a formulation with respect to an almost null foliation of the ambient space.", "The results of [5], [4] can be adapted to this setting using the arguments of [25] (see also [38]) which proves local existence for a class of nonlinear wave equations with characteristic initial data.", "Without reproducing the details of these arguments, we record the desired corollary of these works for our future reference.", "Before doing so, we need to introduce some more notation.", "Recall the definition of the profile $\cup _\sigma (\mathcal {C}_\sigma \cap {}_\sigma )$ from Section REF .", "Given fixed $\ell _0, \xi _0\in \mathbb {R}^n$ , let $\mathring{\mathcal {C}}_\sigma (\ell _0,\xi _0)$ and $\mathring{{}}_\sigma (\ell _0,\xi _0)$ denote the submanifolds of $\mathbb {R}^{1+(n+1)}$ corresponding to the choices $\ell \equiv \ell _0$ and $\xi (\tau )\equiv \xi _0+\ell _0\tau $ .", "The corresponding choice of transversal vector $N$ is denoted by ${\mathring{N}}_{\ell _0,\xi _0}$ .", "Then, for each $\tau >0$ let $\begin{split}\mathcal {D}_0^\tau (\ell _0,\xi _0):=\cup _{\sigma \in [0,\tau )} \mathring{\Sigma }_\sigma (\ell _0,\xi _0):=\cup _{\sigma \in [0,\tau )} (\mathring{\mathcal {C}}_\sigma (\ell _0,\xi _0)\cap \mathring{{}}_\sigma (\ell _0,\xi _0)).\end{split}$ We equip each leaf $\mathring{\Sigma }_\sigma (\ell _0,\xi _0)=\mathring{\mathcal {C}}_\sigma (\ell _0,\xi _0)\cap \mathring{{}}_\sigma (\ell _0,\xi _0)$ with the (Riemannian) metric induced from the ambient space, and denote the tangential derivatives of size one by ${\mathring{\partial }}_\Sigma $ .", "The restriction of $\frac{\partial }{\partial X^0}+\ell _0$ to $\mathcal {D}_0^\tau (\ell _0,\xi _0)$ is denoted by $T$ (note that $\frac{\partial }{\partial X^0}+\ell _0$ is tangent to $\mathcal {D}_0^\tau (\ell _0,\xi _0)$ ).", "We use $\rho $ to denote the distance along $\mathring{\Sigma }_\sigma (\ell _0,\xi _0)$ to ${\underline{X}}=\xi _0+\sigma \ell _0$ , with respect to the induced metric.", "Proposition 2.1 Let $n\ge 5$ , and consider $\mathring{\Phi }_0(p)=p+\mathring{\epsilon }\mathring{\psi }_0 {\mathring{N}}_{\ell _0,\xi _0}$ , $\mathring{\Phi }_1= (1,\ell _0)+\mathring{\epsilon }\mathring{\psi }_1 {\mathring{N}}_{\ell _0,\xi _0}$ , where $\mathring{\psi }_j$ are smooth functions on $\mathring{\Sigma }_0(\ell _0,\xi _0)$ with $\Vert {\mathring{\partial }}_\Sigma ^k\mathring{\psi }_0\Vert _{L^2(\Sigma _0(\ell _0,\xi _0))}$ and $\Vert \langle \rho \rangle ^{-1}{\mathring{\partial }}_\Sigma ^{k-1}\mathring{\psi }_1\Vert _{L^2(\Sigma _0(\ell _0,\xi _0))}$ finite for $k\le M$ , $M$ sufficiently large.", "If $|\ell _0|$ and $\mathring{\epsilon }>0$ are sufficiently small, then there exists $\tau \gtrsim 1$ and a unique smooth function $\mathring{\psi }:\mathcal {D}_0^{\tau }(\ell _0,\xi _0)\rightarrow \mathbb {R}$ , such that $\mathring{\Phi }:\mathcal {D}_0^{\tau }(\ell _0,\xi _0)\rightarrow
\mathbb {R}^{1+(n+1)}$ defined by $\begin{split}\mathring{\Phi }(p)= p+\mathring{\psi }(p){\mathring{N}}_{\ell _0,\xi _0}(p)\end{split}$ satisfies (REF ), and $\mathring{\psi }(0)=\mathring{\epsilon }\mathring{\psi }_0$ , $T\mathring{\psi }(0)=\mathring{\epsilon }\mathring{\psi }_1$ .", "We also want to be able to parameterize the solution given by Proposition REF as in Section REF for other choices of $\ell (\tau )$ and $\xi (\tau )$ , with $|\ell |,|{\dot{\xi }}|\ll 1$ .", "This is possible according to the following normal neighborhood lemma.", "Lemma 2.2 Consider $\ell $ and $\xi $ with $|\ell |$ , $|{\dot{\xi }}|$ , and $|\xi (0)-\xi _0|$ sufficiently small, and the solution $\mathcal {M}$ from Proposition REF parameterized by (REF ) for $\tau \in [0,\tau _0]$ .", "Then condition (REF ) for $\sigma \in [0,\tau _0]$ determines $\psi $ uniquely and $p\mapsto p+\psi (p) N(p)$ as defined in (REF ) gives another parameterization of $\mathcal {M}$ .", "To see that $\psi $ is uniquely determined, we need to show that for each $p\in \cup \Sigma _\sigma $ , the line $p+sN(p)$ intersects $\mathcal {M}$ only once.", "Let $P$ be a point of intersection.", "By construction, there is a unique point ${\mathring{p}}\in \cup \mathring{\Sigma }_\sigma (\ell _0,\xi _0)$ such that $P$ is on the line through ${\mathring{p}}$ in the direction of ${\mathring{N}}_{(\ell _0,\xi _0)}({\mathring{p}})$ .", "Moreover, since ${\mathring{N}}_{(\ell _0,\xi _0)}$ is almost normal to $\mathcal {M}$ , this line satisfies $\mathrm {dist}(P+s{\mathring{N}}_{(\ell _0,\xi _0)}({\mathring{p}}),\mathcal {M})\gtrsim |s|,$ where $\mathrm {dist}(A,\mathcal {M})$ denotes the Euclidean distance from the point $A$ to $\mathcal {M}$ .", "But then, since $|N(p)-{\mathring{N}}_{(\ell _0,\xi _0)}({\mathring{p}})|\ll 1$ (which follows from the smallness of $\ell $ and $\ell _0$ ), $\begin{split}\mathrm {dist}(P+sN(p),\mathcal {M})\ge \mathrm {dist}(P+s{\mathring{N}}_{(\ell _0,\xi _0)}({\mathring{p}}),\mathcal {M})-|s||N(p)-{\mathring{N}}_{(\ell _0,\xi _0)}({\mathring{p}})|\gtrsim |s|,\end{split}$ which shows that the line $P+sN(p)$ does not intersect $\mathcal {M}$ again.", "To see that we have a parameterization, suppose $p+\psi (p)N(p) = q+\psi (q)N(q)$ for some $p,q\in \cup _\sigma \Sigma _\sigma $ .", "Then by derivative bounds on $\psi $ and $N$ , $\begin{split}|p-q|\le |\psi (p)N(p)-\psi (q)N(q)|\ll |p-q|,\end{split}$ which can happen only if $p=q$ ."
], [ "Local Energy Decay for the Product Lorentzian Catenoid", "The notation used in this section is independent of the rest of the paper.", "We prove a local energy decay (LED) estimate for $\\Box +V$ , where $\\Box $ denotes the wave operator of the product Lorentzian catenoid with metric $\\begin{split}g= -\\mathrm {d}t\\otimes \\mathrm {d}t+\\frac{\\rho ^2\\langle \\rho \\rangle ^{2(n-2)}}{\\langle \\rho \\rangle ^{2(n-1)}-1}\\mathrm {d}\\rho \\otimes \\mathrm {d}\\rho +\\langle \\rho \\rangle ^2 \\mathrm {d}\\omega \\otimes \\mathrm {d}\\omega ,\\end{split}$ and $V$ is a smooth, time independent, potential satisfying $|V|\\lesssim \\langle \\rho \\rangle ^{-6}$ .", "This abstract estimate will be used during the proof of LED in Section  and the choice of multipliers here motivate the ones made there.", "To start, let $\\psi $ satisfy $\\begin{split}(\\Box +V)\\psi =g.\\end{split}$ In coordinates, we have $\\begin{split}\\Box :=-\\partial _t^2+\\Delta = -\\partial _t^2+g^{\\rho \\rho }\\partial _\\rho ^2+\\frac{\\partial _\\rho (|g|g^{\\rho \\rho })}{|g|}\\partial _\\rho +\\langle \\rho \\rangle ^{-2}{\\mathring{{\\Delta }}},\\end{split}$ where ${\\mathring{{\\Delta }}}$ denotes the Laplacian on the round sphere of radius one.", "We will also use the notation ${{\\Delta }}=\\rho ^2\\Delta $ , and similarly for ${\\mathring{{\\nabla }}}$ and ${{\\nabla }}$ .", "We use $L^p_x$ to denote the $L^p$ norm on constant $t$ hypersurfaces with respect to the volume form induced by $g$ .", "We will also use the notation $L^p_tL^q_x[t_1,t_2]$ to indicate that the $L^p_t$ norm is calculated over the time interval $[t_1,t_2]$ .", "In this section we use the notation $\\begin{split}&\\Vert \\phi \\Vert _{LE[t_1,t_2]}^2:=\\Vert \\rho \\langle \\rho \\rangle ^{-\\frac{1}{2}(3+\\alpha )}\\partial _t\\phi \\Vert _{L^2_{t,x}[t_1,t_2]}^2+\\Vert \\rho \\langle \\rho \\rangle ^{-\\frac{1}{2}(5+\\alpha )}\\phi \\Vert _{L^2_{t,x}[t_1,t_2]}^2+\\Vert \\rho \\langle \\rho \\rangle ^{-\\frac{3}{2}}{{\\nabla }}\\phi \\Vert _{L^2_{t,x}[t_1,t_2]}^2\\\\&\\phantom{\\Vert \\phi \\Vert _{LE[t_1,t_2]}^2:=}+\\Vert \\langle \\rho \\rangle ^{-\\frac{1+\\alpha }{2}}\\partial _\\rho \\phi \\Vert _{L^2_{t,x}[t_1,t_2]}^2,\\\\&\\Vert f\\Vert _{LE^\\ast [t_1,t_2]}^2:=\\Vert \\langle \\rho \\rangle ^{\\frac{1+\\alpha }{2}}f\\Vert _{L^2_{t,x}[t_1,t_2]}^2.\\end{split}$ Note the degeneracy at $\\rho =0$ for the non-radial derivatives in the definition of the $LE$ norm.", "This degeneracy appears due to the presence of the trapped sphere at $\\rho =0$ .", "The energy norm is defined as $\\begin{split}\\Vert \\phi \\Vert _E^2=\\Vert \\partial _t\\phi \\Vert _{L^2_x}^2+\\Vert \\partial _x\\phi \\Vert _{L^2_x}^2,\\end{split}$ where by definition $\\Vert \\partial _x\\phi \\Vert _{L^2_x}^2=\\int (g^{-1})^{ij}(\\partial _i\\phi )(\\partial _j\\phi )\\sqrt{|g|}\\mathrm {d}\\omega \\mathrm {d}\\rho $ (here and in the remainder of this section $i,j$ run over $1,\\dots ,n$ ).", "The $L^2_x$ pairing with respect to $\\sqrt{|g|}$ will be denoted by $\\langle \\cdot ,\\cdot \\rangle $ .", "We also use the following notation to denote the $L^2_{t,x}$ norm over the region $\\lbrace |\\rho |\\le 1\\rbrace $ : $\\begin{split}\\Vert \\phi \\Vert _{L^2_{t,x}(\\lbrace |\\rho |\\le 1\\rbrace )}.\\end{split}$ We assume that $\\Delta +V$ has spectrum consisting of the absolutely continuous part $(-\\infty ,0]$ and possibly a finite number of eigenvalues at zero and in $(0,\\infty )$ .", "In particular, there are no threshold resonancesBy a threshold 
resonance, we mean a function $\\varphi $ belonging to the spatial part of $LE$ but not to $L^2_x$ , such that $(\\Delta +V)\\varphi =0$ .", "Threshold resonances do not exist in our applications, because the strong spatial decay of $V$ and the difference between the coefficients of $\\Delta $ and the Euclidean Laplacian imply that a threshold resonance $\\varphi $ must decay at the same rate as the Newtonian potential.", "In dimensions $n\\ge 5$ this implies that $\\varphi \\in L^2_x$ ..", "The eigenfunctions are assumed to satisfy the decay rate $\\langle \\rho \\rangle ^{-n+1}$ or faster.", "We use $\\mathbb {P}_c$ to denote the projection onto the continuous spectrum of $\\Delta +V$ (with respect to the volume form induced by $g$ ).", "These conditions are easily verified when $V=| {\\mathrm {I\\!I}} |^2$ is the squared norm of the second fundamental form of the standard embedding of ${\\underline{\\mathcal {C}}}$ in $\\mathbb {R}^{n+1}$ ; see Section REF and [14], [44].", "Proposition 2.3 Suppose $\\psi $ satisfies (REF ) on a time interval $[t_1,t_2]$ .", "Then for any small constant $\\delta >0$ the following three estimates hold $\\begin{split}&\\sup _{[t_1,t_2]}\\Vert \\mathbb {P}_c\\psi (t)\\Vert _E\\lesssim \\Vert \\mathbb {P}_c\\psi (t_1)\\Vert _{E}+\\Vert \\mathbb {P}_cg\\Vert _{L^1_tL^2_x[t_1,t_2]},\\\\&\\sup _{[t_1,t_2]}\\Vert \\mathbb {P}_c\\psi (t)\\Vert _E\\lesssim \\Vert \\mathbb {P}_c\\psi (t_1)\\Vert _{E}+C_\\delta \\Vert \\mathbb {P}_cg\\Vert _{LE^\\ast [t_1,t_2]}+\\delta (\\Vert \\mathbb {P}_c\\psi \\Vert _{LE[t_1,t_2]}+\\Vert \\partial _t\\mathbb {P}_c\\psi \\Vert _{L^2_{t,x}(\\lbrace \\rho \\le 1\\rbrace )}),\\\\&\\sup _{[t_1,t_2]}\\Vert \\mathbb {P}_c\\psi (t)\\Vert _E\\lesssim \\Vert \\mathbb {P}_c\\psi (t_1)\\Vert _{E}+C_\\delta \\Vert \\partial _t\\mathbb {P}_cg\\Vert _{LE^\\ast ([t_1,t_2])}+\\Vert \\mathbb {P}_cg\\Vert _{L^\\infty L^2[t_1,t_2]}+\\delta \\Vert \\mathbb {P}_c\\psi \\Vert _{LE[t_1,t_2]},\\\\&\\Vert \\mathbb {P}_c\\psi \\Vert _{LE[t_1,t_2]}\\lesssim \\sup _{[t_1,t_2]}\\Vert \\mathbb {P}_c\\psi (t)\\Vert _E+\\Vert \\mathbb {P}_cg\\Vert _{LE^\\ast [t_1,t_2]+L^1_tL^2_x[t_1,t_2]}.\\end{split}$ Remark 2.4 Combining the first and third estimates in the proposition we get $\\begin{split}\\sup _{[t_1,t_2]}\\Vert \\mathbb {P}_c\\psi (t)\\Vert _E+\\Vert \\mathbb {P}_c\\psi \\Vert _{LE[t_1,t_2]}\\lesssim \\Vert \\mathbb {P}_c\\psi (t_1)\\Vert _{E}+\\Vert \\mathbb {P}_cg\\Vert _{L^1_tL^2_x[t_1,t_2]}.\\end{split}$ This is sufficient for most applications, but in a few instances we need to estimate $\\mathbb {P}_cg$ in the $LE^\\ast $ norm.", "For this we need to use the second and third energy estimates in the statement of the proposition.", "Note that the last term on the right-hand side of the second estimate cannot be absorbed by the $LE$ norm due to the degeneracy of the $LE$ norm at $\\rho =0$ .", "Let $\\phi =\\mathbb {P}_c\\psi $ and $f=\\mathbb {P}_cg$ .", "The first two estimates are standard energy estimates which follow from multiplying the equation by $\\partial _t\\phi $ .", "The fact that $\\phi $ is orthogonal to the eigenfunctions of $\\Delta +V$ guarantees that the flux on $\\lbrace t=\\mathrm {constnat}\\rbrace $ bounds the energy.", "For the third estimate, the integral of $f\\partial _t\\phi $ is treated by integration by parts in $t$ , in the region $\\lbrace |\\rho |\\le 1\\rbrace $ (where the $LE$ norm is degenrate) and observing that, using Hardy and Poincaré type inequalities (see for instance the proof of Lemma REF in Section ), 
$\\begin{split}\\int _{t_1}^{t_2}\\int _{\\lbrace |\\rho |\\le 1\\rbrace }(\\partial _tf)\\phi \\,\\mathrm {d} x\\,\\mathrm {d} t\\le \\delta \\Vert \\phi \\Vert _{LE[t_1,t_2]}^2+C_\\delta \\Vert \\partial _tf\\Vert _{LE^\\ast [t_1,t_2]}^2.\\end{split}$ For the last estimate in the statement of the proposition we start by proving the weaker estimate $\\begin{split}\\Vert \\phi \\Vert _{LE[t_1,t_2]}\\lesssim \\sup _{[t_1,t_2]}\\Vert \\phi (t)\\Vert _E+\\Vert f\\Vert _{LE^\\ast [t_1,t_2]+L^1_tL^2_x[t_1,t_2]}+\\Vert \\phi \\Vert _{L^2_{t,x}(\\mathcal {B})},\\end{split}$ where $\\mathcal {B}:=[t_1,t_2]\\times B$ and $B$ denotes a large compact spatial region.", "With $\\partial _\\rho ^\\ast $ denoting the formal adjoint of $\\partial _\\rho $ for the pairing $\\langle \\cdot ,\\cdot \\rangle $ , let $\\begin{split}Q:=\\beta \\partial _\\rho -\\partial _\\rho ^\\ast \\beta ,\\qquad \\partial _\\rho ^\\ast =\\partial _\\rho -(\\partial _\\rho \\log |g|),\\end{split}$ where $\\beta \\equiv \\beta (\\rho )$ is to be chosen.", "The main positive commutator identity is $\\begin{split}\\langle (\\Box +V) \\phi ,Q\\phi \\rangle = -\\partial _t\\langle \\partial _t\\phi ,Q\\phi \\rangle +\\overbrace{ \\langle \\Delta \\phi ,Q\\phi \\rangle }^{-\\frac{1}{2}\\langle [Q,\\Delta ]\\phi ,\\phi \\rangle }-\\overbrace{\\langle [Q,V]\\phi ,\\phi \\rangle }^{2\\langle (\\beta \\partial _\\rho V)\\phi ,\\phi \\rangle }.\\end{split}$ Note that $Q \\phi = (\\beta \\partial _\\rho - \\partial _\\rho ^\\ast \\beta ) \\phi = 2 \\beta (\\partial _\\rho \\phi ) - (\\partial _\\rho ^\\ast \\beta ) \\phi .$ Then we have $ \\begin{aligned}\\langle - \\Delta \\phi , Q \\phi \\rangle &= \\langle \\partial _\\rho ^\\ast g^{\\rho \\rho } \\partial _\\rho \\phi , Q \\phi \\rangle - \\langle \\langle \\rho \\rangle ^{-2} {\\mathring{{\\Delta }}}\\phi , Q \\phi \\rangle \\\\&= \\langle \\partial _\\rho ^\\ast g^{\\rho \\rho } \\partial _\\rho \\phi , 2 \\beta (\\partial _\\rho \\phi ) - (\\partial _\\rho ^\\ast \\beta ) \\phi \\rangle - \\langle \\langle \\rho \\rangle ^{-2} {\\mathring{{\\Delta }}}\\phi , 2 \\beta (\\partial _\\rho \\phi ) - (\\partial _\\rho ^\\ast \\beta ) \\phi \\rangle .\\end{aligned}$ For the first term on the right-hand side we integrate by parts repeatedly and rearrange to find that $\\langle \\partial _\\rho ^\\ast g^{\\rho \\rho } \\partial _\\rho \\phi , 2 \\beta (\\partial _\\rho \\phi ) - (\\partial _\\rho ^\\ast \\beta ) \\phi \\rangle &= \\langle g^{\\rho \\rho } \\partial _\\rho \\phi , (2 (\\partial _\\rho \\beta ) - (\\partial _\\rho ^\\ast \\beta )) \\partial _\\rho \\phi \\rangle + \\langle g^{\\rho \\rho } \\partial _\\rho \\phi , 2 \\beta (\\partial _\\rho ^2 \\phi ) \\rangle \\\\&\\quad - \\langle g^{\\rho \\rho } \\partial _\\rho \\phi , (\\partial _\\rho \\partial _\\rho ^\\ast \\beta ) \\phi \\rangle \\\\&= \\langle \\partial _\\rho \\phi , ( 2 g^{\\rho \\rho } (\\partial _\\rho \\beta ) - g^{\\rho \\rho } (\\partial _\\rho ^\\ast \\beta ) + \\partial _\\rho ^\\ast ( g^{\\rho \\rho } \\beta ) ) \\partial _\\rho \\phi \\rangle \\\\&\\quad - \\frac{1}{2} \\langle g^{\\rho \\rho } (\\partial _\\rho \\partial _\\rho ^\\ast \\beta ), \\partial _\\rho (\\phi ^2) \\rangle \\\\&= \\langle ( 2 g^{\\rho \\rho } (\\partial _\\rho \\beta ) - (\\partial _\\rho g^{\\rho \\rho }) \\beta ) \\partial _\\rho \\phi , \\partial _\\rho \\phi \\rangle - \\frac{1}{2} \\langle (\\partial _\\rho ^\\ast g^{\\rho \\rho } \\partial _\\rho \\partial _\\rho ^\\ast \\beta ) \\phi , \\phi \\rangle \\\\&= \\langle (2 
g^{\\rho \\rho } (\\partial _\\rho \\beta ) - (\\partial _\\rho g^{\\rho \\rho }) \\beta ) \\partial _\\rho \\phi , \\partial _\\rho \\phi \\rangle + \\frac{1}{2} \\langle (\\Delta \\partial _\\rho ^\\ast \\beta ) \\phi , \\phi \\rangle .$ For the second term on the right-hand side of (REF ) we obtain that $- \\langle \\langle \\rho \\rangle ^{-2} {\\mathring{{\\Delta }}}\\phi , 2 \\beta (\\partial _\\rho \\phi ) - (\\partial _\\rho ^\\ast \\beta ) \\phi \\rangle &= \\langle \\langle \\rho \\rangle ^{-2} {\\mathring{{\\nabla }}}\\phi , 2 \\beta {\\mathring{{\\nabla }}}\\partial _\\rho \\phi \\rangle - \\langle \\langle \\rho \\rangle ^{-2} (\\partial _\\rho ^\\ast \\beta ) {\\mathring{{\\nabla }}}\\phi , {\\mathring{{\\nabla }}}\\phi \\rangle \\\\&= \\langle \\partial _\\rho ^\\ast ( \\langle \\rho \\rangle ^{-2} \\beta ) {\\mathring{{\\nabla }}}\\phi , {\\mathring{{\\nabla }}}\\phi \\rangle - \\langle \\langle \\rho \\rangle ^{-2} (\\partial _\\rho ^\\ast \\beta ) {\\mathring{{\\nabla }}}\\phi , {\\mathring{{\\nabla }}}\\phi \\rangle \\\\&= 2 \\langle {\\textstyle \\frac{\\rho }{\\langle \\rho \\rangle ^4} } \\beta {\\mathring{{\\nabla }}}\\phi , {\\mathring{{\\nabla }}}\\phi \\rangle \\\\&= 2 \\langle {\\textstyle \\frac{\\rho }{\\langle \\rho \\rangle ^2} } \\beta {{\\nabla }}\\phi , {{\\nabla }}\\phi \\rangle .$ Putting the above identities together, we arrive at $\\langle - \\Delta \\phi , Q \\phi \\rangle = \\langle (2 g^{\\rho \\rho } (\\partial _\\rho \\beta ) - (\\partial _\\rho g^{\\rho \\rho }) \\beta ) \\partial _\\rho \\phi , \\partial _\\rho \\phi \\rangle + 2 \\langle {\\textstyle \\frac{\\rho }{\\langle \\rho \\rangle ^2} } \\beta {{\\nabla }}\\phi , {{\\nabla }}\\phi \\rangle + \\frac{1}{2} \\langle (\\Delta \\partial _\\rho ^\\ast \\beta ) \\phi , \\phi \\rangle .$ In view of (REF ), for (REF ) we can get control of the spatial part of the $LE$ norm in (REF ) with the choice $\\begin{split}\\beta :=\\chi \\rho + K(1-\\chi ) \\beta _E,\\end{split}$ where $K$ is a large constant, $\\beta _E$ is the Euclidean choice for LED (which is an odd function), and $\\chi $ is a radial cut-off to the region $\\rho \\in [-R,R]$ , for some large R, which decays to zero monotonically outside of $[-R,R]$ and is zero on $[-2R,2R]$ .", "The point is that in this way $\\rho \\beta $ is positive and if $K$ is sufficiently large $\\begin{split}\\partial _\\rho \\beta = -(\\partial _\\rho \\chi )(K \\beta _E-\\rho )\\end{split}$ is also positive in the gluing region.", "Note that for the angular derivative we get a degeneracy of order two at $\\rho =0$ .", "In order to control $\\partial _t \\phi $ in the LED estimate we also use the multiplier identity $\\begin{split}\\langle \\Box \\phi ,\\gamma \\phi \\rangle =\\langle \\gamma \\partial _t\\phi ,\\partial _t\\phi \\rangle -\\langle \\gamma \\nabla _g\\phi ,\\nabla _g\\phi \\rangle -\\partial _t\\langle \\gamma \\partial _t\\phi ,\\phi \\rangle +\\frac{1}{2}\\langle (\\Delta \\gamma )\\phi ,\\phi \\rangle .\\end{split}$ For (REF ), the choice $\\gamma (\\rho )=\\frac{\\rho ^2}{\\langle \\rho \\rangle ^{3+\\epsilon }}$ gives $\\begin{split}\\langle \\frac{\\rho ^2}{\\langle \\rho \\rangle ^{3+\\epsilon }}\\partial _t\\phi ,\\partial _t\\phi \\rangle =&\\langle f,\\frac{\\rho ^2}{\\langle \\rho \\rangle ^{3+\\epsilon }}\\phi \\rangle +\\langle \\frac{\\rho ^2}{\\langle \\rho \\rangle ^{3+\\epsilon }}\\nabla _g\\phi ,\\nabla _g\\phi \\rangle -\\frac{1}{2}\\langle (\\Delta \\frac{\\rho ^2}{\\langle \\rho \\rangle ^{3+\\epsilon }})\\phi ,\\phi 
\\rangle +\\partial _t\\langle \\frac{\\rho ^2}{\\langle \\rho \\rangle ^{3+\\epsilon }}\\partial _t\\phi ,\\phi \\rangle .\\end{split}$ Adding this identity to a suitable multiple of (REF ), we arrive at (REF ).", "The $L^2_{t,x}$ error in the bounded region $\\mathcal {B}$ in (REF ) can now be removed by a standard contradiction argument using the absence of threshold resonances and embedded eigenvalues.", "See for instance [28] and references therein for an implementation of this argument.", "Here we provide a brief outline with references to this article.", "By introducing a suitable damping with parameter $\\varepsilon $ and applying the Fourier transform in the time variable, with time frequency variable $\\tau $ , we can deduce from what we have proved above that for any function $v$ on ${\\underline{\\mathcal {C}}}$ (see [28], Steps 1–3) $\\begin{split}\\Vert v\\Vert _{LE_x}\\lesssim \\Vert (\\Delta +V-\\tau -i\\varepsilon )v\\Vert _{LE^\\ast _x}+\\Vert v\\Vert _{L^2_x(B)},\\end{split}$ uniformly in $\\varepsilon >0$ and $\\tau \\in \\mathbb {R}$ .", "Here $\\Vert \\cdot \\Vert _{LE_x}$ and $\\Vert \\cdot \\Vert _{LE_x^\\ast }$ denote the spatial parts of the local energy and dual local energy norms.", "In fact, by applying the spectral projection $\\mathbb {P}_c$ , we can replace $v$ by $\\mathbb {P}_cv$ everywhere in this estimate.", "For this we use the $O(\\langle \\rho \\rangle ^{-n+1})$ decay of the eigenfunctions of $\\Delta +V$ to observe that $\\mathbb {P}_c$ is bounded in the $LE^\\ast _x$ norm (which is just the weighted space $L^2(\\langle \\rho \\rangle ^{1+\\alpha }\\sqrt{|g|}\\mathrm {d}\\omega \\mathrm {d}\\rho )$ ) in dimensions $n\\ge 5$ .", "By similar considerations, the estimate we want to prove is equivalent to having the following bound uniformly in $\\varepsilon >0$ and $\\tau \\in \\mathbb {R}$ : $\\begin{split}\\Vert \\mathbb {P}_c v\\Vert _{LE_x}\\lesssim \\Vert (\\Delta +V-\\tau -i\\epsilon )\\mathbb {P}_cv\\Vert _{LE^\\ast _x}.\\end{split}$ For large $|\\tau |$ , estimate (REF ) follows from (REF ) (with $v$ replaced by $\\mathbb {P}_cv$ ) using elliptic estimates exactly as in Step 4 in [28].", "For $|\\tau |\\lesssim 1$ , assuming (REF ) fails, we can find sequences $\\varepsilon _n\\rightarrow 0$ , $\\tau _n\\rightarrow \\tau $ , and $v_n\\in LE_x$ such that $\\mathbb {P}_cv_n=v_n$ , $\\begin{split}\\Vert (\\Delta +V-\\tau _n-i\\epsilon _n)v_n\\Vert _{LE_x^\\ast }\\rightarrow 0,\\qquad \\Vert v_n\\Vert _{L^2_x(B)}=1,\\end{split}$ and $v_n\\rightharpoonup v$ in $LE_x$ and $v_n\\rightarrow v$ in $L^2_{x,loc}$ for some $v\\in LE_x$ with $\\mathbb {P}_c v=v$ .", "It follows that $(\\Delta +V-\\tau )v=0$ and $\\Vert v\\Vert _{L^2_x(\\mathcal {B})}=1$ .", "To derive a contradiction from this we consider three separate cases $\\tau <0$ , $\\tau >0$ , and $\\tau =0$ .", "The first two cases give a contradiction exactly as in [28] Steps 6 and Steps 8–10, respectively.", "In the case $\\tau =0$ , as in [28] Step 7, we see that $v$ must be a threshold resonance, which we assume does not exist, or an eigenfunction with zero eigenvalue, which is ruled out by the fact that $v=\\mathbb {P}_cv$ , where we recall that we were allowed to replace $v$ by $\\mathbb {P}_cv$ in (REF ) because $\\mathbb {P}_c$ is bounded in $LE_x^\\ast $ ." 
], [ "Interior", "In this section we first define the momentum variable ${\\dot{\\psi }}$ and we derive, in vector form, the first-order equation for $\\vec{\\psi }$ .", "Then we introduce suitable orthogonality conditions for the modulation parameters $\\ell (t)$ and $\\xi (t)$ , and derive the equations satisfied by $\\dot{{\\wp }}=({\\dot{\\ell }},{\\dot{\\xi }}-\\ell )^\\intercal $ .", "Finally, we enact a further decomposition of the perturbation $\\vec{\\psi }$ to take into account the unstable mode of the linearized operator.", "Throughout this section, all error estimates are to be understood to hold under the bootstrap assumptions (REF )–().", "Moreover, we refer to Subsection REF for the definition of the $\\mathcal {O}$ notation." ], [ "Setup", "Our parametrization for the profile in the flat region $\\mathcal {C}_{\\mathrm {flat}} := \\lbrace \\sigma _{\\text{temp}}(X)\\ge X^0+ \\delta _1\\rbrace $ is $ \\Psi _\\wp (t, \\rho , \\omega ) = \\bigl ( t, \\xi + \\gamma ^{-1} P_\\ell F(\\rho , \\omega ) + P_\\ell ^\\perp F(\\rho , \\omega ) \\bigr ),$ where $\\xi = \\xi (t)$ and $\\ell = \\ell (t)$ are our time-dependent modulation parameters.", "We denote the normal to $\\Sigma _t$ , in the case where the parameters are treated as fixed, by $n_\\wp := \\Lambda _{-\\ell } \\begin{pmatrix} 0 \\\\ \\nu \\end{pmatrix} = \\begin{pmatrix} \\gamma \\ell \\cdot \\nu \\\\ A_{-\\ell } \\nu \\end{pmatrix},$ where $\\nu $ is the geometric normal to the Riemannian catenoid $\\underline{\\mathcal {C}}$ .", "Then $\\widetilde{N}_{int} = (0, A_{-\\ell } \\nu )$ is the normal to $\\Sigma _t$ viewed as a subspace of ${}_t$ .", "In the interior we define $N$ to be parallel to ${\\widetilde{N}}_{int}$ and such that $\\eta (n_\\wp , N) = 1$ , that is, $N := \\begin{pmatrix} 0 \\\\ |A_{-\\ell } \\nu |^{-2} A_{-\\ell } \\nu \\end{pmatrix}.$ Moreover, we write $W := N - n_\\wp .$ We introduce the scalar perturbation $\\psi $ in the interior via the decomposition $\\Phi = \\Psi _\\wp + \\psi N.$ Next, we introduce the metric components $g_{\\mu \\nu } := \\eta \\bigl ( \\partial _\\mu \\Phi , \\partial _\\nu \\Phi \\bigr ), \\quad \\quad k_{\\mu \\nu } := \\eta \\bigl ( \\partial _\\mu \\Psi _\\wp , \\partial _\\nu \\Psi _\\wp \\bigr ), \\quad \\quad 0 \\le \\mu , \\nu \\le n,$ and the Lagrangian density $\\mathcal {L}:= \\sqrt{|g|} = \\sqrt{-\\det (g)}.$ We also introduce $\\Psi _{\\mu ; \\wp } &:= \\partial _\\mu \\Psi _\\wp \\Big |_{\\begin{array}{c}\\dot{\\ell } = 0 \\\\\\dot{\\xi } = \\ell \\end{array}}, \\quad \\quad 0 \\le \\mu \\le n, \\\\h_{\\mu \\nu } &:= \\eta \\bigl ( \\partial _\\mu \\Psi _\\wp , \\partial _\\nu \\Psi _\\wp \\bigr ) \\Big |_{\\begin{array}{c}\\dot{\\ell } = 0 \\\\ \\dot{\\xi } = \\ell \\end{array}} = \\eta \\bigl ( \\Psi _{\\mu ; \\wp }, \\Psi _{\\nu ; \\wp } \\bigr ), \\quad \\quad 0 \\le \\mu , \\nu \\le n.$ Then we have $\\Psi _{0;\\wp } = \\begin{pmatrix} 1 \\\\ \\ell \\end{pmatrix}, \\quad \\quad \\Psi _{j;\\wp } = \\begin{pmatrix} 0 \\\\ \\gamma ^{-1} P_\\ell \\partial _jF + P_\\ell ^\\perp \\partial _jF \\end{pmatrix}, \\quad \\quad 1 \\le j \\le n.$ In what follows, we still view $\\ell $ as time-dependent in the expressions for $\\Psi _{\\mu ; \\wp }$ and $h_{\\mu \\nu }$ .", "Remark 3.1 A parametrization of the Lorentzian catenoid boosted by a fixed $\\ell \\in \\mathbb {R}^{n+1}$ , $|\\ell | < 1$ , and translated by $a\\in \\mathbb {R}^{n+1}$ (with $\\xi $ corresponding to $a+t\\ell $ ) is given by $\\Gamma _{a,\\ell }(t, \\rho , \\omega ) := \\bigl ( t, a+t\\ell + 
\\gamma ^{-1} P_\\ell F(\\rho , \\omega ) + P_\\ell ^\\perp F(\\rho , \\omega ) \\bigr ).$ Then $\\Gamma _{a,\\ell }$ satisfies the HVMC equation $\\frac{1}{\\sqrt{|\\kappa |}} \\partial _\\mu \\bigl ( \\sqrt{|\\kappa |} \\kappa ^{\\mu \\nu } \\partial _\\nu \\Gamma _\\ell \\bigr ) = 0$ with $\\kappa _{\\mu \\nu } = \\eta ( \\partial _\\mu \\Gamma _{a,\\ell }, \\partial _\\nu \\Gamma _{a,\\ell } )$ .", "Since the metric coefficients $\\kappa _{\\mu \\nu }$ are time-independent and since by direct computation $\\partial _t\\partial _\\nu \\Gamma _{a,\\ell } = 0$ for fixed $\\ell $ , we in fact have $ \\partial _j\\bigl ( \\sqrt{|\\kappa |} \\kappa ^{j \\nu } \\partial _\\nu \\Gamma _{a,\\ell } \\bigr ) = 0.$ Formally, the expressions for $h_{\\mu \\nu }$ and $\\Psi _{\\nu ; \\wp }$ are the same as those for $\\kappa _{\\mu \\nu }$ and $\\partial _\\nu \\Gamma _{a,\\ell }$ , only that the parameter $\\ell $ is considered time-dependent in $h_{\\mu \\nu }$ and $\\Psi _{\\nu ; \\wp }$ , while $\\ell $ is considered time-independent in $\\kappa _{\\mu \\nu }$ and $\\partial _\\nu \\Gamma _{a,\\ell }$ .", "Since in the preceding equation (REF ) only spatial derivatives act from outside, we can conclude that $\\begin{aligned}\\partial _j\\bigl ( \\sqrt{|h|} h^{j\\nu } \\Psi _{\\nu ;\\wp } \\bigr ) = 0.\\end{aligned}$ Remark 3.2 We point out that the coefficients $(h^{-1})^{0j} = \\mathcal {O}(\\ell )$ , $1 \\le j \\le n$ , have extra smallness.", "This follows from observing that $h_{00}= -1 + |\\ell |^2$ , $h_{0j} = \\mathcal {O}(|\\ell |)$ , and $h_{ij} = \\partial _i F \\cdot \\partial _j F + \\mathcal {O}(|\\ell |^2)$ ." ], [ "Definition of the momentum variable", "The HVMC equation $\\Box _g \\Phi = 0$ for $\\Phi = \\Psi _\\wp + \\psi N$ gives rise to a second-order quasilinear wave equation for the scalar $\\psi $ .", "In order to be able to formulate first-order modulation equations later on, our first goal is to arrive at a suitable formulation of an associated system of linearized first-order equations for $\\vec{\\psi } := \\begin{pmatrix} \\psi \\\\ \\dot{\\psi } \\end{pmatrix}$ for a suitably defined momentum variable $\\dot{\\psi }$ .", "We begin by motivating our definition of $\\dot{\\psi }$ .", "A good starting point is to examine the Euler-Lagrange equation for $\\psi $ given by (here and below the notation $\\frac{\\delta }{\\delta \\psi }$ , etc, simply mean the partial derivative of $\\mathcal {L}$ with respect to the corresponding variable, and not the functional derivative) $ \\partial _t \\biggl ( \\frac{\\delta \\mathcal {L}}{\\delta (\\partial _t \\psi )} \\biggr ) + \\partial _j \\biggl ( \\frac{\\delta \\mathcal {L}}{\\delta (\\partial _j \\psi )} \\biggr ) = \\frac{\\delta \\mathcal {L}}{\\delta \\psi }.$ It suggests that the quantity $\\dot{\\psi }$ should be part of $\\frac{\\delta \\mathcal {L}}{\\delta (\\partial _t \\psi )} = \\frac{1}{2} \\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\frac{\\delta g_{\\mu \\nu }}{\\delta (\\partial _t \\psi )}.$ Since $g_{\\mu \\nu } = \\eta (\\partial _\\mu \\Phi , \\partial _\\nu \\Phi )$ and $\\partial _\\mu \\Phi = \\partial _\\mu \\Psi _\\wp + (\\partial _\\mu \\psi ) N + \\psi \\partial _\\mu N$ , we have $\\frac{\\delta g_{\\mu \\nu }}{\\delta (\\partial _t \\psi )} = \\delta _{\\nu 0} \\eta (\\partial _\\mu \\Phi , N) + \\delta _{\\mu 0} \\eta (N, \\partial _\\nu \\Phi ),$ and therefore $\\frac{\\delta \\mathcal {L}}{\\delta (\\partial _t \\psi )} = \\eta ( \\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi , N).$ Next, we 
determine the precise expression for $\\frac{\\delta \\mathcal {L}}{\\delta (\\partial _t \\psi )}$ up to quadratic (and higher-order) terms in the perturbation $\\psi $ and its derivatives $\\partial _\\mu \\psi $ .", "To this end we first record the following expansions $ \\begin{aligned}(g^{-1})^{\\mu \\nu } &= (k^{-1})^{\\mu \\nu } - (k^{-1})^{\\mu \\alpha } \\eta ( \\partial _\\alpha \\Psi _\\wp , (\\partial _\\beta \\psi ) N + \\psi \\partial _\\beta N ) (k^{-1})^{\\beta \\nu } \\\\&\\quad \\quad - (k^{-1})^{\\mu \\alpha } \\eta ( (\\partial _\\alpha \\psi ) N + \\psi (\\partial _\\alpha N), \\partial _\\beta \\Psi _\\wp ) \\bigr ) (k^{-1})^{\\beta \\nu } + \\mathcal {O}\\bigl ( (\\psi , \\partial _\\mu \\psi )^2 \\bigr )\\end{aligned}$ as well as $ \\begin{aligned}\\sqrt{|g|} &= \\sqrt{|k|} + \\sqrt{|k|} (k^{-1})^{\\alpha \\beta } \\eta ( \\partial _\\alpha \\Psi _\\wp , (\\partial _\\beta \\psi ) N + \\psi \\partial _\\beta N ) + \\mathcal {O}\\bigl ( (\\psi , \\partial _\\mu \\psi )^2 \\bigr ),\\end{aligned}$ Expanding up to terms that are at least quadratic in the perturbation $\\psi $ and its derivatives $\\partial _\\mu \\psi $ , we have $ \\begin{aligned}\\frac{\\delta \\mathcal {L}}{\\delta (\\partial _t \\psi )} &= \\eta ( \\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi , N) \\\\&= \\sqrt{|k|} (k^{-1})^{0 \\nu } \\eta ( \\partial _\\nu \\Psi _\\wp , N) + \\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\psi =0} \\partial _\\mu \\psi , N \\biggr ) \\\\&\\quad + \\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta \\psi } \\bigg |_{\\psi =0} \\psi , N \\biggr ) + \\mathcal {E}_1,\\end{aligned}$ where the remainder term $\\mathcal {E}_1$ satisfies $\\mathcal {E}_1 = \\mathcal {O}\\bigl ( (\\psi , \\partial _\\mu \\psi )^2 \\bigr )$ .", "By direct computation, using the expansions (REF ) and (REF ), we obtain $ \\begin{aligned}\\frac{\\delta (\\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\partial _\\nu \\Phi )}{\\delta \\psi } \\bigg |_{\\psi =0} &= \\sqrt{|k|} (k^{-1})^{\\mu \\nu } \\partial _\\nu N + \\sqrt{|k|} (k^{-1})^{\\mu \\nu } (k^{-1})^{\\alpha \\beta } \\eta (\\partial _\\alpha \\Psi _\\wp , \\partial _\\beta N) \\partial _\\nu \\Psi _\\wp \\\\&\\quad - \\sqrt{|k|} (k^{-1})^{\\mu \\alpha } (k^{-1})^{\\beta \\nu } \\eta (\\partial _\\alpha \\Psi _\\wp , \\partial _\\beta N) \\partial _\\nu \\Psi _\\wp \\\\&\\quad - \\sqrt{|k|} (k^{-1})^{\\mu \\alpha } (k^{-1})^{\\beta \\nu } \\eta (\\partial _\\alpha N, \\partial _\\beta \\Psi _\\wp ) \\partial _\\nu \\Psi _\\wp \\end{aligned}$ and $ \\begin{aligned}\\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0\\nu } \\partial _\\nu \\Phi )}{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\psi =0} &= \\sqrt{|k|} (k^{-1})^{0\\mu } N + \\sqrt{|k|} (k^{-1})^{\\alpha \\mu } (k^{-1})^{0\\nu } \\eta (\\partial _\\alpha \\Psi _\\wp , N) \\partial _\\nu \\Psi _\\wp \\\\&\\quad - \\sqrt{|k|} (k^{-1})^{0\\alpha } (k^{-1})^{\\mu \\nu } \\eta (\\partial _\\alpha \\Psi _\\wp , N) \\partial _\\nu \\Psi _\\wp \\\\&\\quad - \\sqrt{|k|} (k^{-1})^{0\\mu } (k^{-1})^{\\beta \\nu } \\eta (N, \\partial _\\beta \\Psi _\\wp ) \\partial _\\nu \\Psi _\\wp .\\end{aligned}$ For later use we also record that $ \\begin{aligned}\\frac{\\delta (\\sqrt{|g|} (g^{-1})^{j\\nu } \\partial _\\nu \\Phi )}{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\psi =0} &= \\sqrt{|k|} (k^{-1})^{j\\mu } N + \\sqrt{|k|} (k^{-1})^{\\alpha \\mu } (k^{-1})^{j\\nu } \\eta (\\partial 
_\\alpha \\Psi _\\wp , N) \\partial _\\nu \\Psi _\\wp \\\\&\\quad - \\sqrt{|k|} (k^{-1})^{j\\alpha } (k^{-1})^{\\mu \\nu } \\eta (\\partial _\\alpha \\Psi _\\wp , N) \\partial _\\nu \\Psi _\\wp \\\\&\\quad - \\sqrt{|k|} (k^{-1})^{j\\mu } (k^{-1})^{\\beta \\nu } \\eta (N, \\partial _\\beta \\Psi _\\wp ) \\partial _\\nu \\Psi _\\wp .\\end{aligned}$ We proceed with a further examination of the terms on the right-hand side of (REF ).", "Using that $\\eta (\\partial _j\\Psi _\\wp , N) = 0$ , we can rewrite the first term on the RHS of (REF ) as $\\begin{aligned}\\sqrt{|k|} (k^{-1})^{0 \\nu } \\eta ( \\partial _\\nu \\Psi _\\wp , N) &= \\sqrt{|k|} (k^{-1})^{00} \\eta ( \\partial _t \\Psi _\\wp , N) \\\\&= \\sqrt{|h|} (h^{-1})^{00} \\eta \\bigl ( (1,\\ell ), N \\bigr ) \\\\&\\quad + \\sqrt{|h|} (h^{-1})^{00} \\eta \\bigl ( \\partial _t\\Psi _\\wp - (1,\\ell ), N \\bigr ) \\\\&\\quad + \\eta \\Bigl ( \\bigl ( \\sqrt{|k|} (k^{-1})^{00} - \\sqrt{|h|} (h^{-1})^{00} \\bigr ) \\partial _t \\Psi _\\wp , N \\Bigr ) \\\\&= \\sqrt{|h|} (h^{-1})^{00} \\eta \\bigl ( (1,\\ell ), N \\bigr ) \\\\&\\quad + \\sqrt{|h|} (h^{-1})^{00} \\eta \\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ) + \\mathcal {E}_2,\\end{aligned}$ where the remainder term satisfies $\\begin{aligned}\\mathcal {E}_2 := \\eta \\Bigl ( \\bigl ( \\sqrt{|k|} (k^{-1})^{00} - \\sqrt{|h|} (h^{-1})^{00} \\bigr ) \\partial _t \\Psi _\\wp , N \\Bigr ) = \\mathcal {O}\\bigl ({\\dot{\\wp }}^2, {\\dot{\\wp }}\\ell \\bigr ).\\end{aligned}$ Using (REF ) we obtain that the second term on the right-hand side of (REF ) is explicitly given by $ \\begin{aligned}&\\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\psi =0} \\partial _\\mu \\psi , N \\biggr ) \\\\&\\quad = \\sqrt{|k|} (k^{-1})^{0\\mu } (\\partial _\\mu \\psi ) \\eta \\bigl (N, N - (k^{-1})^{\\alpha \\beta } \\eta ( N, \\partial _\\alpha \\Psi _\\wp ) \\partial _\\beta \\Psi _\\wp \\bigr ).\\end{aligned}$ Now recall that if $\\Psi _\\wp $ were a genuine maximal embedding, then $\\lbrace n_\\wp , \\partial _\\nu \\Psi _\\wp \\rbrace $ would form a basis of the ambient space with $(k^{-1})^{\\alpha \\beta } \\eta ( N, \\partial _\\alpha \\Psi _\\wp ) \\partial _\\beta \\Psi _\\wp $ denoting the tangential part of $N$ .", "Since $\\eta (N, n_\\wp ) = 1$ by construction, the right-hand side of the preceding identity (REF ) would then just read $\\sqrt{|k|} (k^{-1})^{0\\mu } \\partial _\\mu \\psi $ .", "To quantify the difference, we write $\\begin{aligned}(k^{-1})^{\\alpha \\beta } \\eta ( N, \\partial _\\alpha \\Psi _\\wp ) \\partial _\\beta \\Psi _\\wp = (h^{-1})^{\\alpha \\beta } \\eta ( N, \\Psi _{\\alpha ; \\wp } ) \\Psi _{\\beta ; \\wp } + \\mathcal {E}_{3,1} + \\mathcal {E}_{3,2}\\end{aligned}$ with remainder terms $\\mathcal {E}_{3,1} &:= \\bigl ( (k^{-1})^{\\alpha \\beta } - (h^{-1})^{\\alpha \\beta } \\bigr ) \\eta ( N, \\partial _\\alpha \\Psi _\\wp ) \\partial _\\beta \\Psi _\\wp , \\\\\\mathcal {E}_{3,2} &:= (h^{-1})^{\\alpha \\beta } \\eta \\bigl ( N, \\partial _\\alpha \\Psi _\\wp - \\Psi _{\\alpha ;\\wp } \\bigr ) \\partial _\\beta \\Psi _\\wp + (h^{-1})^{\\alpha \\beta } \\eta ( N, \\Psi _{\\alpha ;\\wp } ) \\bigl ( \\partial _\\beta \\Psi _\\wp - \\Psi _{\\beta ; \\wp } \\bigr )$ of the form $\\mathcal {O}({\\dot{\\wp }})$ .", "Correspondingly, we have $\\begin{aligned}\\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} 
(g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\psi =0} \\partial _\\mu \\psi , N \\biggr ) &= \\sqrt{|k|} (k^{-1})^{0\\mu } \\partial _\\mu \\psi + \\mathcal {E}_3\\end{aligned}$ with remainder term $\\begin{aligned}\\mathcal {E}_3 := \\sqrt{|k|} (k^{-1})^{0\\mu } (\\partial _\\mu \\psi ) \\eta \\bigl (N, \\mathcal {E}_{3,1} + \\mathcal {E}_{3,2} \\bigr ) = \\mathcal {O}\\bigl ( (\\partial \\psi ) {\\dot{\\wp }}\\bigr ).\\end{aligned}$ In order to obtain a more favorable structure of the linearized equation for $\\vec{\\psi }$ , it is preferable to remove the linear terms involving $\\psi $ from the above candidate for $\\dot{\\psi }$ , that is, in (REF ).", "But in order to make sure no second order derivatives of the parameters appear when we calculate $\\partial _t{\\dot{\\psi }}$ , we replace $k$ by $h$ when subtracting off the linear contributions of $\\psi $ in (REF ).", "More specifically, we also only subtract off those expressions where we think of $\\ell $ as being time-independent (so for instance $\\partial _t N$ should be thought of as zero and we use $\\Psi _{\\nu ; \\wp }$ instead of $\\partial _\\nu \\Psi _\\wp $ ).", "We introduce the following more succinct notation for these terms $ \\begin{aligned}B \\psi &:= \\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0 \\\\ k = h\\end{array}} \\psi , N \\biggr ) \\\\&= \\sqrt{|h|} (h^{-1})^{0j} \\eta ( \\partial _j N, N ) \\psi \\\\&\\quad - \\sqrt{|h|} (h^{-1})^{0 \\alpha } (h^{-1})^{j \\nu } \\eta ( \\Psi _{\\alpha ; \\wp }, \\partial _j N ) \\eta ( \\Psi _{\\nu ; \\wp }, N ) \\psi \\\\&\\quad - \\sqrt{|h|} (h^{-1})^{0 j} (h^{-1})^{\\beta \\nu } \\eta ( \\partial _j N, \\Psi _{\\beta ; \\wp } ) \\eta ( \\Psi _{\\nu ; \\wp }, N ) \\psi \\\\&\\quad + \\sqrt{|h|} (h^{-1})^{\\alpha j} (h^{-1})^{0\\nu } \\eta ( \\Psi _{\\alpha ; \\wp }, \\partial _j N ) \\eta ( \\Psi _{\\nu ; \\wp }, N) \\psi .\\end{aligned}$ We denote the resulting difference by $\\mathcal {E}_4 := \\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0\\end{array}} \\psi , N \\biggr ) - \\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0 \\\\ k = h\\end{array}} \\psi , N \\biggr ),$ which satisfies $\\mathcal {E}_4 = \\mathcal {O}( {\\dot{\\wp }}\\psi )$ .", "Remark 3.3 The notation $\\frac{\\delta (\\cdot )}{\\delta \\psi } \\big |_{\\begin{array}{c}\\psi = 0 \\\\ k = h\\end{array}}$ shall indicate that we compute $\\frac{\\delta }{\\delta \\psi }$ , while the $t$ -dependence of $\\xi $ and $\\ell $ is frozen, i.e., we replace $\\dot{\\xi }$ by $\\ell $ as well as $\\dot{\\ell }$ by 0 and we use $\\Psi _{\\nu ; \\wp }$ instead of $\\partial _\\nu \\Psi _\\wp $ This means that $\\partial _t (B \\psi )$ will involve at most one $t$ -derivative of $\\xi $ or $\\ell $ .", "Moreover, in those terms $\\dot{\\xi }$ or $\\dot{\\ell }$ are always multiplied by $\\psi $ .", "We arrive at the following definition $\\begin{aligned}\\dot{\\psi } := \\eta \\bigl ( \\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi , N \\bigr ) - \\eta \\bigl ( \\sqrt{|h|} (h^{-1})^{00} (1, \\ell ), N \\bigr ) - B \\psi .\\end{aligned}$" ], [ "Relation between $\\dot{\\psi }$ and {{formula:86dd8952-7f08-4b01-8e2b-4a84d5f0af48}}", "Next, we record the relation between $\\dot{\\psi }$ and 
$\\partial _t\\psi $ .", "From the preceding we obtain $\\begin{aligned}\\dot{\\psi } = \\sqrt{|h|} (h^{-1})^{00} \\eta \\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ) + \\sqrt{|k|} (k^{-1})^{0\\mu } \\partial _\\mu \\psi + \\mathcal {E}_1 + \\ldots + \\mathcal {E}_4.\\end{aligned}$ Solving for $\\partial _t\\psi $ yields $\\begin{aligned}\\partial _t\\psi &= \\frac{1}{\\sqrt{|k|} (k^{-1})^{00}} \\dot{\\psi } - \\frac{\\sqrt{|k|} (k^{-1})^{0j}}{\\sqrt{|k|} (k^{-1})^{00}} \\partial _j\\psi \\\\&\\quad - \\frac{\\sqrt{|h|} (h^{-1})^{00}}{\\sqrt{|k|} (k^{-1})^{00}} \\eta \\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ) - \\frac{1}{\\sqrt{|k|} (k^{-1})^{00}} \\bigl ( \\mathcal {E}_1 + \\ldots + \\mathcal {E}_4 \\bigr ).\\end{aligned}$ Upon rewriting the prefactors in terms of $h$ , which accrues further errors, we arrive at the relation $ \\begin{aligned}\\partial _t\\psi &= \\frac{1}{\\sqrt{|h|} (h^{-1})^{00}} \\dot{\\psi } - \\frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \\partial _j\\psi - \\eta \\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ) + f,\\end{aligned}$ where $\\begin{aligned}f &:= - \\frac{1}{\\sqrt{|k|} (k^{-1})^{00}} \\bigl ( \\mathcal {E}_1 + \\ldots + \\mathcal {E}_4 \\bigr ) + \\biggl ( \\frac{1}{\\sqrt{|k|} (k^{-1})^{00}} - \\frac{1}{\\sqrt{|h|} (h^{-1})^{00}} \\biggr ) \\dot{\\psi } \\\\&\\quad - \\biggl ( \\frac{\\sqrt{|k|} (k^{-1})^{0j}}{\\sqrt{|k|} (k^{-1})^{00}} - \\frac{\\sqrt{|h|} (h^{-1})^{0j}}{\\sqrt{|h|} (h^{-1})^{00}} \\biggr ) \\partial _j\\psi \\\\&\\quad + \\biggl (- \\frac{\\sqrt{|h|} (h^{-1})^{00}}{\\sqrt{|k|} (k^{-1})^{00}} + 1\\biggr ) \\eta \\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ).\\end{aligned}$ Remark 3.4 Note that the term $f$ still contains $\\partial _t \\psi $ terms, but those come with additional smallness.", "Correspondingly, under suitable smallness assumptions we can use the implicit function theorem to solve for $\\partial _t\\psi $ , as we will do further below." 
], [ "Computation of $\\partial _t\\dot{\\psi }$", "Next, we compute the time derivative of $\\dot{\\psi }$ , $ \\begin{aligned}\\partial _t\\dot{\\psi } &= \\eta \\Bigl ( \\partial _t\\bigl ( \\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi \\bigr ), N \\Bigr ) - \\eta \\Bigl ( \\partial _t\\bigl ( \\sqrt{|h|} (h^{-1})^{00} (1, \\ell ) \\bigr ), N \\Bigr ) - \\partial _t\\bigl ( B \\psi \\bigr ) \\\\&\\quad \\quad + \\eta \\Bigl ( \\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi - \\sqrt{|h|} (h^{-1})^{00} (1, \\ell ), \\partial _tN \\Bigr ).\\end{aligned}$ We rewrite the first term on the right-hand side of (REF ) as $ \\begin{aligned}\\eta \\Bigl ( \\partial _t\\bigl ( \\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi \\bigr ), N \\Bigr ) &= \\eta \\Bigl ( \\partial _\\mu \\bigl ( \\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\partial _\\nu \\Phi \\bigr ), N \\Bigr ) - \\eta \\Bigl ( \\partial _j\\bigl ( \\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi \\bigr ), N \\Bigr ) \\\\&= - \\eta \\Bigl ( \\partial _j\\bigl ( \\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi \\bigr ) - \\partial _j\\bigl ( \\sqrt{|k|} (k^{-1})^{j \\nu } \\partial _\\nu \\Psi _\\wp \\bigr ) , N \\Bigr ) \\\\&\\quad - \\eta \\Bigl ( \\partial _j\\bigl ( \\sqrt{|k|} (k^{-1})^{j \\nu } \\partial _\\nu \\Psi _\\wp \\bigr ) - \\partial _j\\bigl ( \\sqrt{|h|} (h^{-1})^{j \\nu } \\Psi _{\\nu ; \\wp } \\bigr ), N \\Bigr ),\\end{aligned}$ where we used the HVMC equation $\\partial _\\mu \\bigl ( \\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\partial _\\nu \\Phi \\bigr ) = 0$ and that $\\partial _j\\bigl ( \\sqrt{|h|} (h^{-1})^{j \\nu } \\Psi _{\\nu ; \\wp } \\bigr ) = 0$ , see Remark REF .", "Then to leading order the first term on the right-hand side of (REF ) is given by $\\begin{aligned}&- \\eta \\Bigl ( \\partial _j\\bigl ( \\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi \\bigr ) - \\partial _j\\bigl ( \\sqrt{|k|} (k^{-1})^{j \\nu } \\partial _\\nu \\Psi _\\wp \\bigr ) , N \\Bigr ) \\\\&= - \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\partial _\\mu \\psi \\Bigr ), N \\biggr ) - \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\psi \\Bigr ), N \\biggr ) + \\mathcal {E}_5 + \\mathcal {E}_6\\end{aligned}$ with remainder terms $\\mathcal {E}_5 &:= \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\partial _\\mu \\psi \\Bigr ), N \\biggr ) + \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\psi \\Bigr ), N \\biggr ) \\\\&\\quad - \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\psi = 0} \\partial _\\mu \\psi \\Bigr ), N \\biggr ) - \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\psi = 0} \\psi \\Bigr ), N \\biggr )$ and $\\mathcal {E}_6 &:= \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial 
_\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\psi = 0} \\partial _\\mu \\psi \\Bigr ), N \\biggr ) + \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\psi = 0} \\psi \\Bigr ), N \\biggr ) \\\\&\\quad - \\eta \\Bigl ( \\partial _j\\bigl ( \\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi \\bigr ) - \\partial _j\\bigl ( \\sqrt{|k|} (k^{-1})^{j \\nu } \\partial _\\nu \\Psi _\\wp \\bigr ) , N \\Bigr ).$ We have $\\mathcal {E}_5 = \\mathcal {O}\\bigl ( {\\dot{\\wp }}(\\partial \\psi ), {\\dot{\\wp }}(\\partial ^2 \\psi ) \\bigr ) \\quad \\text{and} \\quad \\mathcal {E}_6 = \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi , \\partial ^2 \\psi )^2 \\bigr ).$ The second term on the right-hand side of (REF ) is an error term with $\\mathcal {E}_7 := - \\eta \\Bigl ( \\partial _j\\bigl ( \\sqrt{|k|} (k^{-1})^{j \\nu } \\partial _\\nu \\Psi _\\wp \\bigr ) - \\partial _j\\bigl ( \\sqrt{|h|} (h^{-1})^{j \\nu } \\Psi _{\\nu ; \\wp } \\bigr ), N \\Bigr ) = \\mathcal {O}\\bigl ( {\\dot{\\wp }}^2, {\\dot{\\wp }}\\ell \\bigr ).$ To see that $\\mathcal {E}_7$ is a quadratic error, we use that $\\sqrt{|k|}-\\sqrt{|h|} = \\mathcal {O}(|\\ell |^2)$ and $(k^{-1})^{ij}-(h^{-1})^{ij} = \\mathcal {O}(|\\ell |^2)$ .", "The latter observations follow from Remark REF and a Taylor expansion.", "For the second term on the right-hand side of (REF ) we have $\\begin{aligned}- \\eta \\Bigl ( \\partial _t\\bigl ( \\sqrt{|h|} (h^{-1})^{00} (1, \\ell ) \\bigr ), N \\Bigr ) &= - \\eta \\Bigl ( \\sqrt{|h|} (h^{-1})^{00} (0, \\dot{\\ell }), N \\Bigr ) + \\mathcal {E}_8\\end{aligned}$ with remainder term $\\begin{aligned}\\mathcal {E}_8 := - \\partial _t\\Bigl ( \\sqrt{|h|} (h^{-1})^{00} \\Bigr ) \\eta \\bigl ( (1,\\ell ), N \\bigr ) = \\mathcal {O}\\bigl ( {\\dot{\\wp }}\\ell \\bigr ),\\end{aligned}$ and for the third term on the right-hand side of (REF ), we compute $\\begin{aligned}-\\partial _t(B \\psi ) &= -\\eta \\biggl ( \\partial _t\\Bigl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0 \\\\ k = h\\end{array}} \\psi \\Bigr ), N \\biggr ) + \\mathcal {E}_9\\end{aligned}$ with $\\begin{aligned}\\mathcal {E}_9 := - \\eta \\biggl ( \\frac{\\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0 \\\\ k = h\\end{array}} \\psi , \\partial _tN \\biggr ) = \\mathcal {O}\\bigl ( \\psi \\dot{\\ell } \\bigr ).\\end{aligned}$ Finally, the fourth term on the right-hand side of (REF ) is again an error term of the form $\\begin{aligned}\\mathcal {E}_{10} := \\eta \\Bigl ( \\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi - \\sqrt{|h|} (h^{-1})^{00} (1, \\ell ), \\partial _tN \\Bigr ) = \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi ) {\\dot{\\wp }}\\bigr ) + \\mathcal {O}\\bigl ( {\\dot{\\wp }}^2 \\bigr ).\\end{aligned}$ Combining the preceding expressions, we find that $ \\begin{aligned}\\partial _t\\dot{\\psi } &= - \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\partial _\\mu \\psi \\Bigr ), N \\biggr ) - \\eta \\biggl ( \\partial _\\mu \\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\psi \\Bigr ), N \\biggr ) 
\\\\&\\quad - \\eta \\Bigl ( \\sqrt{|h|} (h^{-1})^{00} (0, \\dot{\\ell }), N \\Bigr ) + \\mathcal {E}_6 + \\ldots + \\mathcal {E}_{10} \\\\&= - \\partial _j\\biggl ( \\eta \\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\partial _\\mu \\psi , N \\Bigr ) \\biggr ) + \\eta \\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\partial _\\mu \\psi , \\partial _jN \\Bigr ) \\\\&\\quad - \\eta \\biggl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\partial _\\mu \\psi , N \\biggr ) - \\eta \\biggl ( \\partial _\\mu \\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\Bigr ) \\psi , N \\biggr ) \\\\&\\quad - \\eta \\Bigl ( \\sqrt{|h|} (h^{-1})^{00} (0, \\dot{\\ell }), N \\Bigr ) + \\mathcal {E}_6 + \\ldots + \\mathcal {E}_{10}.\\end{aligned}$ From (REF ) we obtain for the first term on the right-hand side of (REF ) that $&- \\partial _j\\biggl ( \\eta \\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta (\\partial _\\mu \\psi )} \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\partial _\\mu \\psi , N \\Bigr ) \\biggr ) \\\\&= - \\partial _j\\Bigl ( \\sqrt{|h|} (h^{-1})^{j\\mu } \\bigl ( \\eta (N, N) - (h^{-1})^{\\beta \\nu } \\eta (N_\\ell , \\Psi _{\\beta ; \\wp }) \\eta (\\Psi _{\\nu ; \\wp }, N) \\bigr ) \\partial _\\mu \\psi \\Bigr ) \\\\&= - \\partial _j\\Bigl ( \\sqrt{|h|} (h^{-1})^{j\\mu } \\partial _\\mu \\psi \\Bigr ),$ where we used that $\\begin{aligned}\\eta (N, N) - (h^{-1})^{\\beta \\nu } \\eta (N, \\Psi _{\\beta ; \\wp }) \\eta (\\Psi _{\\nu ; \\wp }, N) = \\eta (N, n) = 1.\\end{aligned}$ The latter identity follows from the fact that $\\lbrace \\Psi _{\\mu ;\\wp }, n_\\wp \\rbrace $ forms a basis for the ambient space.", "Using (REF ) and (REF ), it follows that the second and third terms on the right-hand side of (REF ) exactly cancel each other out.", "To evaluate the fourth term on the right-hand side of (REF ) we first need the following identity.", "Lemma 3.5 Let $ {\\mathrm {I\\!I}} $ be the second fundamental form of the embedding $\\Psi _\\wp |_{\\begin{array}{c}\\dot{\\ell }=0, \\dot{\\xi }=\\ell \\end{array}}$ .", "Then we have $\\frac{1}{\\sqrt{|h|}} \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\Bigr ), N \\biggr ) = | {\\mathrm {I\\!I}} |^2.$ First, we observe that $\\begin{aligned}\\frac{1}{\\sqrt{|h|}} \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\Bigr ), N \\biggr ) &= \\eta \\biggl ( \\frac{\\delta \\bigl ( \\Box _g \\Phi \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} , N \\biggr ).\\end{aligned}$ We now split the evaluation of the right-hand side into several steps.", "In what follows, we write $N = n_\\wp + W$ , where $W$ denotes the tangential part of $N$ .", "In what follows, $\\nabla $ denotes the covariant 
derivative with respect to the embedding $\\Phi $ .", "Step 1: Computation of the part of $\\Box _g \\Phi $ that is linear in $\\psi $ .", "We begin by expanding $\\begin{aligned}\\Box _g \\Phi &= (g^{-1})^{\\mu \\nu } \\nabla _\\mu \\nabla _\\nu \\bigl ( \\Psi _\\wp + \\psi N \\bigr ) \\\\&= \\Box _h \\Psi _\\wp + (\\dot{g}^{-1})^{\\mu \\nu } \\nabla _\\mu \\nabla _\\nu \\Psi _\\wp - (h^{-1})^{\\mu \\nu } \\dot{\\Gamma }^\\lambda _{\\mu \\nu } \\partial _\\lambda \\Psi _\\wp + (h^{-1})^{\\mu \\nu } \\nabla _\\mu \\nabla _\\nu \\bigl ( \\psi N \\bigr ) + \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi )^2 \\bigr ) \\\\&= \\Box _{h} \\Psi _\\wp + (\\dot{g}^{-1})^{\\mu \\nu } \\nabla _\\mu \\nabla _\\nu \\Psi _\\wp - (h^{-1})^{\\mu \\nu } \\dot{\\Gamma }^\\lambda _{\\mu \\nu } \\partial _\\lambda \\Psi _\\wp \\\\&\\quad + (h^{-1})^{\\mu \\nu } \\partial _\\mu \\partial _\\nu \\bigl ( \\psi N \\bigr ) - (h^{-1})^{\\mu \\nu } \\Gamma _{\\mu \\nu }^\\lambda \\partial _\\lambda \\bigl ( \\psi N \\bigr ) + \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi )^2 \\bigr ),\\end{aligned}$ where $\\begin{aligned}(\\dot{g}^{-1})^{\\mu \\nu } &= - (h^{-1})^{\\mu \\alpha } \\eta ( \\partial _\\alpha \\Psi _\\wp , \\partial _\\beta N ) (h^{-1})^{\\beta \\nu } \\psi - (h^{-1})^{\\mu \\alpha } \\eta ( \\partial _\\alpha N, \\partial _\\beta \\Psi _\\wp ) (h^{-1})^{\\beta \\nu } \\psi + \\mathcal {O}( \\partial \\psi ) + \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi )^2 \\bigr )\\end{aligned}$ and $\\begin{aligned}\\dot{\\Gamma }_{\\mu \\nu }^\\lambda &= (h^{-1})^{\\lambda k} \\bigl ( \\eta ( \\partial _\\mu \\partial _\\nu \\Psi _\\wp , \\partial _k N ) + \\eta ( \\partial _k \\Psi _\\wp , \\partial _\\mu \\partial _\\nu N ) \\bigr ) \\psi \\\\&\\quad - (h^{-1})^{\\lambda \\rho } \\Gamma _{\\mu \\nu }^\\sigma \\bigl ( \\eta ( \\partial _\\rho \\Psi _\\wp , \\partial _\\sigma N ) + \\eta ( \\partial _\\sigma \\Psi _\\wp , \\partial _\\rho N ) \\bigr ) \\psi + \\mathcal {O}( \\partial \\psi ) + \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi )^2 \\bigr ).\\end{aligned}$ Observe that $\\begin{aligned}- (h^{-1})^{\\mu \\nu } \\dot{\\Gamma }_{\\mu \\nu }^\\lambda &= - (h^{-1})^{\\mu \\nu } (h^{-1})^{\\lambda k} \\eta ( \\partial _\\mu \\partial _\\nu \\Psi _\\wp , \\partial _k N ) \\psi + (h^{-1})^{\\mu \\nu } (h^{-1})^{\\lambda \\rho } \\Gamma _{\\mu \\nu }^\\sigma \\eta ( \\partial _\\sigma \\Psi _\\wp , \\partial _\\rho N ) \\\\&\\quad - (h^{-1})^{\\mu \\nu } (h^{-1})^{\\lambda k} \\eta ( \\partial _k \\Psi _\\wp , \\partial _\\mu \\partial _\\nu N ) \\psi + (h^{-1})^{\\mu \\nu } (h^{-1})^{\\lambda \\rho } \\Gamma ^{\\sigma }_{\\mu \\nu } \\eta ( \\partial _\\rho \\Psi _\\wp , \\partial _\\sigma N ) \\\\&\\quad + \\mathcal {O}( \\partial \\psi ) + \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi )^2 \\bigr ) \\\\&= - \\eta ( (h^{-1})^{\\mu \\nu } \\partial _\\mu \\partial _\\nu \\Psi _\\wp , \\nabla ^\\lambda N ) \\psi + \\eta ( (h^{-1})^{\\mu \\nu } \\Gamma _{\\mu \\nu }^\\sigma \\partial _\\sigma \\Psi _\\wp , \\nabla ^\\lambda N ) \\psi \\\\&\\quad - \\eta ( \\nabla ^\\lambda \\Psi _\\wp , (h^{-1})^{\\mu \\nu } \\partial _\\mu \\partial _\\nu N ) \\psi + \\eta ( \\nabla ^\\lambda \\Psi _\\wp , (h^{-1})^{\\mu \\nu } \\Gamma _{\\mu \\nu }^\\sigma \\partial _\\sigma N ) \\psi \\\\&\\quad + \\mathcal {O}( \\partial \\psi ) + \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi )^2 \\bigr ) \\\\&= - \\eta ( \\Box _{h} \\Psi _\\wp , \\nabla ^\\lambda N ) \\psi - \\eta ( \\nabla ^\\lambda \\Psi _\\wp , \\Box _h N ) + 
\\mathcal {O}( \\partial \\psi ) + \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi )^2 \\bigr ).\\end{aligned}$ In what follows, we use the notation $\\begin{aligned}\\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp &:= \\partial _\\mu \\partial _\\nu \\Psi _\\wp - (h^{-1})^{\\mu \\nu } \\tilde{\\Gamma }_{\\mu \\nu }^\\lambda \\partial _\\lambda \\Psi _\\wp , \\\\\\tilde{\\Gamma }_{\\mu \\nu }^\\lambda &:= \\frac{1}{2} (h^{-1})^{\\lambda \\kappa } \\bigl ( \\partial _\\mu h_{\\kappa \\nu } + \\partial _\\nu h_{\\mu \\kappa } - \\partial _\\kappa h_{\\mu \\nu } \\bigr ), \\\\\\tilde{\\nabla }^\\mu \\Psi _\\wp &:= (h^{-1}){}^{\\mu \\nu } \\partial _\\nu \\Psi _\\wp , \\\\[ \\tilde{\\nabla }_\\mu , \\tilde{\\nabla }_\\nu ] \\tilde{\\nabla }_\\lambda \\Psi _\\wp &= \\tilde{R}_{\\mu \\nu \\lambda \\sigma } \\tilde{\\nabla }^\\sigma \\Psi _\\wp , \\\\\\tilde{R}_{\\mu \\nu } &= (h^{-1})^{\\lambda \\sigma } \\tilde{R}_{\\mu \\lambda \\nu \\sigma }.\\end{aligned}$ Correspondingly, we obtain $\\begin{aligned}\\frac{\\delta \\bigl ( \\Box _g \\Phi \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} &= \\Box _h N + \\frac{ \\delta \\bigl ( (\\dot{g}^{-1})^{\\mu \\nu } \\bigr ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp - \\frac{\\delta \\bigl ( (h^{-1})^{\\mu \\nu } \\dot{\\Gamma }^\\lambda _{\\mu \\nu } \\partial _\\lambda \\Psi _\\wp \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\\\&= \\Box _{h} N - \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu N ) (\\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp ) - \\eta ( \\tilde{\\nabla }^\\nu \\Psi _\\wp , \\tilde{\\nabla }^\\mu N ) (\\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp ) \\\\&\\quad - \\eta ( \\underbrace{\\Box _{h} \\Psi _\\wp }_{=0}, \\partial ^\\lambda N ) \\partial _\\lambda \\Psi _\\wp - \\eta ( \\partial ^\\lambda \\Psi _\\wp , \\Box N ) \\partial _\\lambda \\Psi _\\wp \\\\&= \\Box _h N - \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu N ) (\\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp ) - \\eta ( \\tilde{\\nabla }^\\nu \\Psi _\\wp , \\tilde{\\nabla }^\\mu N ) (\\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp ) \\\\&\\quad - \\eta ( \\tilde{\\nabla }^\\lambda \\Psi _\\wp , \\Box _h N ) \\partial _\\lambda \\Psi _\\wp .\\end{aligned}$ Step 2: Computation of $\\eta ( \\Box _h W, n_\\wp )$ and $\\eta ( \\Box _h N, n_\\wp )$ .", "Since $W$ is tangential, we may write $\\begin{aligned}W = \\eta ( W, \\partial _\\mu \\Psi _\\wp ) \\tilde{\\nabla }^\\mu \\Psi _\\wp .\\end{aligned}$ In what follows we will use that for all $\\mu , \\nu , \\sigma $ $ \\begin{aligned}\\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , \\partial _\\sigma \\Psi _\\wp ) &= 0.\\end{aligned}$ To see this we expand $\\begin{aligned}\\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , \\partial _\\sigma \\Psi _\\wp ) &= \\eta ( \\partial _\\mu \\partial _\\nu \\Psi _\\wp , \\partial _\\sigma \\Psi _\\wp ) - \\tilde{\\Gamma }^\\lambda _{\\mu \\nu } h_{\\lambda \\sigma } \\\\&= \\eta ( \\partial _\\mu \\partial _\\nu \\Psi _\\wp , \\partial _\\sigma \\Psi _\\wp ) - \\frac{1}{2} (h^{-1})^{\\lambda \\kappa } \\bigl ( \\partial _\\mu h_{\\kappa \\nu } + \\partial _\\nu h_{\\mu \\kappa } - \\partial _\\kappa h_{\\mu \\nu } \\bigr ) h_{\\lambda \\sigma } \\\\&= \\eta ( \\partial _\\mu \\partial 
_\\nu \\Psi _\\wp , \\partial _\\sigma \\Psi _\\wp ) - \\frac{1}{2} \\bigl ( \\partial _\\mu h_{\\sigma \\nu } + \\partial _\\nu h_{\\mu \\sigma } - \\partial _\\sigma h_{\\mu \\nu } \\bigr ) \\\\&= \\eta ( \\partial _\\mu \\partial _\\nu \\Psi _\\wp , \\partial _\\sigma \\Psi _\\wp ) - \\eta ( \\partial _\\mu \\partial _\\nu \\Psi _\\wp , \\partial _\\sigma \\Psi _\\wp ) \\\\&= 0.\\end{aligned}$ Using (REF ), we obtain by direct computation $\\begin{aligned}\\Box _h W &= \\eta ( W, \\partial _\\mu \\Psi _\\wp ) \\tilde{\\nabla }^\\mu \\underbrace{\\Box _h \\Psi _\\wp }_{= \\, 0} + \\eta ( W, \\partial _\\mu \\Psi _\\wp ) {\\tilde{R}}^{\\mu \\lambda } \\partial _\\lambda \\Psi _\\wp + \\bigl ( \\Box _h \\eta ( W, \\partial _\\mu \\Psi _\\wp ) \\bigr ) \\tilde{\\nabla }^\\mu \\Psi _\\wp \\\\&\\quad + 2 \\eta ( \\partial _\\nu W, \\partial _\\mu \\Psi _\\wp ) \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp + 2 \\underbrace{\\eta ( W, \\tilde{\\nabla }_\\nu \\tilde{\\nabla }_\\mu \\Psi _\\wp )}_{= \\, 0} \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp .\\end{aligned}$ Note that $\\eta ( W, \\tilde{\\nabla }_\\nu \\tilde{\\nabla }_\\mu \\Psi _\\wp ) = 0$ follows from (REF ) since $W$ is tangential.", "Testing against $n_\\wp $ and inserting the relation $W = N - n_\\wp $ , we arrive at the identity $\\begin{aligned}\\eta ( \\Box _h W, n_\\wp ) &= 2 \\eta ( \\partial _\\nu W, \\partial _\\mu \\Psi _\\wp ) \\eta ( \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp , n_\\wp ) \\\\&= 2 \\eta ( \\partial _\\nu N, \\partial _\\mu \\Psi _\\wp ) \\eta ( \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp , n_\\wp ) - 2 \\eta ( \\partial _\\nu n_\\wp , \\partial _\\mu \\Psi _\\wp ) \\eta ( \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp , n_\\wp ) \\\\&= 2 \\eta ( \\partial _\\nu N, \\partial _\\mu \\Psi _\\wp ) \\eta ( \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp , n_\\wp ) + 2 \\eta ( \\partial _\\nu n_\\wp , \\partial _\\mu \\Psi _\\wp ) \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu n_\\wp ).\\end{aligned}$ Moreover, using that $\\Box _h n_\\wp = - | {\\mathrm {I\\!I}} |^2 n_\\wp $ by [39], we obtain $\\begin{aligned}\\eta ( \\Box _h N, n_\\wp ) &= \\eta ( \\Box _h n_\\wp , n_\\wp ) + \\eta ( \\Box _h W, n_\\wp ) \\\\&= - | {\\mathrm {I\\!I}} |^2 + 2 \\eta ( \\partial _\\nu N, \\partial _\\mu \\Psi _\\wp ) \\eta ( \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp , n_\\wp ) + 2 \\eta ( \\partial _\\nu n_\\wp , \\partial _\\mu \\Psi _\\wp ) \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu n_\\wp ).\\end{aligned}$ Step 3: Final computation.", "We decompose into $\\begin{aligned}\\eta \\biggl ( \\frac{\\delta \\bigl ( \\Box _g \\Phi \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}}, N \\biggr ) = \\eta \\biggl ( \\frac{\\delta \\bigl ( \\Box _g \\Phi \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}}, n_\\wp \\biggr ) + \\eta \\biggl ( \\frac{\\delta \\bigl ( \\Box _g \\Phi \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}}, W \\biggr ).\\end{aligned}$ Then on the one hand we have $\\begin{aligned}\\eta \\biggl ( \\frac{\\delta \\bigl ( \\Box _g \\Phi \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}}, n_\\wp \\biggr ) &= \\eta ( \\Box N, n_\\wp ) - \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu N ) \\eta ( \\tilde{\\nabla }_\\mu 
\\tilde{\\nabla }_\\nu \\Psi _\\wp , n_\\wp ) \\\\&\\quad - \\eta ( \\tilde{\\nabla }^\\nu \\Psi _\\wp , \\tilde{\\nabla }^\\mu N ) \\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , n_\\wp ) - \\eta ( \\tilde{\\nabla }^\\lambda \\Psi _\\wp , \\Box N ) \\underbrace{\\eta ( \\partial _\\lambda \\Psi _\\wp , n_\\wp )}_{= \\, 0} \\\\&= - | {\\mathrm {I\\!I}} |^2 + 2 \\eta ( \\partial _\\mu \\Psi _\\wp , \\partial _\\nu N ) \\eta ( \\tilde{\\nabla }^\\nu \\tilde{\\nabla }^\\mu \\Psi _\\wp , n_\\wp ) + 2 \\eta ( \\partial _\\mu \\Psi _\\wp , \\partial _\\nu n_\\wp ) \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu n_\\wp ) \\\\&\\quad - \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu N ) \\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , n_\\wp ) - \\eta ( \\tilde{\\nabla }^\\nu \\Psi _\\wp , \\tilde{\\nabla }^\\mu N ) \\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , n_\\wp ) \\\\&= - | {\\mathrm {I\\!I}} |^2 + 2 \\eta ( \\partial _\\mu \\Psi _\\wp , \\partial _\\nu n_\\wp ) \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu n_\\wp ) \\\\&= | {\\mathrm {I\\!I}} |^2.\\end{aligned}$ On the other hand, using that $\\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , \\nabla _\\lambda \\Psi _\\wp ) = 0$ , we find $\\begin{aligned}\\eta \\biggl ( \\frac{\\delta \\bigl ( \\Box _g \\Phi \\bigr )}{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}}, W \\biggr ) &= \\eta ( \\Box _h N, W ) - \\eta ( \\tilde{\\nabla }^\\mu \\Psi _\\wp , \\tilde{\\nabla }^\\nu N ) \\underbrace{\\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , W )}_{= \\, 0} \\\\&\\quad - \\eta ( \\tilde{\\nabla }^\\nu \\Psi _\\wp , \\tilde{\\nabla }^\\mu N ) \\underbrace{\\eta ( \\tilde{\\nabla }_\\mu \\tilde{\\nabla }_\\nu \\Psi _\\wp , W )}_{= \\, 0} - \\underbrace{\\eta ( \\tilde{\\nabla }^\\lambda \\Psi _\\wp , \\Box _h N ) \\eta ( \\partial _\\lambda \\Psi _\\wp , W )}_{= \\, \\eta ( \\Box _h N, W )} \\\\&= 0.\\end{aligned}$ This finishes the proof.", "Using the preceding lemma, we find that the fourth term on the right-hand side of (REF ) simplifies to $\\begin{aligned}&- \\eta \\biggl ( \\partial _\\mu \\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{\\mu \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\Bigr ) \\psi , N \\biggr ) \\\\&= - \\eta \\biggl ( \\partial _j\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{j \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\Bigr ) \\psi , N \\biggr ) - \\eta \\biggl ( \\partial _t\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\Bigr ) \\psi , N \\biggr ) \\\\&= - \\sqrt{|h|} | {\\mathrm {I\\!I}} |^2 \\psi + \\mathcal {E}_{11}\\end{aligned}$ with remainder term $\\begin{aligned}\\mathcal {E}_{11} := - \\eta \\biggl ( \\partial _t\\Bigl ( \\frac{ \\delta (\\sqrt{|g|} (g^{-1})^{0 \\nu } \\partial _\\nu \\Phi ) }{\\delta \\psi } \\bigg |_{\\begin{array}{c}\\psi = 0, \\\\ k = h\\end{array}} \\Bigr ) \\psi , N \\biggr ) = \\mathcal {O}\\bigl ( {\\dot{\\wp }}\\psi \\bigr ).\\end{aligned}$ We arrive at the equation $\\begin{aligned}\\partial _t\\dot{\\psi } &= - \\partial _j\\bigl ( \\sqrt{|h|} (h^{-1})^{j\\mu } \\partial _\\mu \\psi \\bigr ) - \\sqrt{|h|} | {\\mathrm {I\\!I}} |^2 \\psi - \\eta 
\\bigl ( \\sqrt{|h|} (h^{-1})^{00} (0, \\dot{\\ell }), N \\bigr ) + \\mathcal {E}_6 + \\ldots + \\mathcal {E}_{11}.\\end{aligned}$ Finally, inserting for $\\partial _t\\psi $ on the right-hand side the relation (REF ) between $\\dot{\\psi }$ and $\\partial _t\\psi $ , we obtain $ \\begin{aligned}\\partial _t\\dot{\\psi } &= - \\sqrt{|h|} L \\psi - \\partial _j\\biggl ( \\frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \\dot{\\psi } \\biggr ) - \\eta \\bigl ( \\sqrt{|h|} (h^{-1})^{00} (0, \\dot{\\ell }), N \\bigr ) + \\dot{f},\\end{aligned}$ where we introduce the linear operator $\\begin{aligned}L := \\frac{1}{\\sqrt{|h|}} \\partial _j\\bigl ( \\sqrt{|h|} (\\underline{h}^{-1})^{jk} \\partial _k\\bigr ) + | {\\mathrm {I\\!I}} |^2\\end{aligned}$ with $\\begin{aligned}(\\underline{h}^{-1})^{jk} := (h^{-1})^{jk} - \\frac{(h^{-1})^{j0} (h^{-1})^{0k}}{(h^{-1})^{00}},\\end{aligned}$ and where $\\begin{aligned}\\dot{f} &:= \\partial _j\\biggl ( \\sqrt{|h|} (h^{-1})^{j0} \\eta \\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ) \\biggr ) - \\partial _j\\bigl ( \\sqrt{|h|} (h^{-1})^{j0} f \\bigr ) + \\mathcal {E}_6 + \\ldots + \\mathcal {E}_{11}.\\end{aligned}$ Note that the first term in the preceding definition of $\\dot{f}$ is of the form $\\mathcal {O}(\\ell {\\dot{\\wp }})$ since $(h^{-1})^{j0} = \\mathcal {O}(\\ell )$ by Remark REF , which justifies its treatment as a quadratic error.", "Introducing the matrix operator $ \\begin{aligned}M := \\begin{pmatrix}- \\frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \\partial _j& \\frac{1}{\\sqrt{|h|} (h^{-1})^{00}} \\\\-\\sqrt{|h|} L & - \\partial _j\\Bigl ( \\frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \\Bigr )\\end{pmatrix}\\end{aligned}$ and setting $\\begin{aligned}\\vec{K} := \\begin{pmatrix}K \\\\ \\dot{K}\\end{pmatrix}= \\begin{pmatrix}- \\eta \\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ) \\\\ - \\eta \\bigl ( \\sqrt{|h|} (h^{-1})^{00} (0, \\dot{\\ell }), N \\bigr )\\end{pmatrix},\\qquad \\vec{f} := \\begin{pmatrix}f \\\\ \\dot{f}\\end{pmatrix},\\end{aligned}$ we obtain from (REF ) and (REF ) the following first-order formulation of the HVMC equation for $\\vec{\\psi }$ $ \\begin{aligned}(\\partial _t- M) \\vec{\\psi } = \\vec{K} + \\vec{f}.\\end{aligned}$ We end this subsection by computing the second order equation for $\\psi $ coming from (REF ).", "Upon rearranging, we obtain from (REF ) that $ \\dot{\\psi } = \\sqrt{|h|}(h^{-1})^{0\\nu } \\partial _\\nu \\psi - \\sqrt{|h|}(h^{-1})^{00} (K+f),$ and $\\begin{split}\\partial _t \\dot{\\psi } + \\partial _j\\bigl ( \\sqrt{|h|}(h^{-1})^{ij} \\partial _i\\psi \\bigr ) - \\partial _j \\biggl ( \\sqrt{|h|} \\frac{(h^{-1})^{0i}(h^{-1})^{0j}}{(h^{-1})^{00}} \\partial _i\\psi \\biggr ) + \\partial _j \\biggl ( \\frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \\dot{\\psi } \\biggr ) + \\sqrt{|h|} | {\\mathrm {I\\!I}} |^2 \\psi = \\dot{K} + \\dot{f}.\\end{split}$ Substituting (REF ) for $\\dot{\\psi }$ in the preceding identity, we arrive at $\\begin{split}\\partial _\\mu \\bigl ( \\sqrt{|h|}(h^{-1})^{\\mu \\nu } \\partial _\\nu \\psi \\bigr ) + \\sqrt{|h|} | {\\mathrm {I\\!I}} |^2 \\psi = (\\dot{K} + \\dot{f}) + \\partial _\\nu \\bigl (\\sqrt{|h|}(h^{-1})^{0\\nu } (K+f) \\bigr ).\\end{split}$ We conclude that if $\\vec{\\psi }$ satisfies (REF ), then $\\psi $ satisfies the wave equation $ \\begin{split}\\frac{1}{\\sqrt{|h|}} \\partial _\\mu \\bigl ( \\sqrt{|h|} (h^{-1})^{\\mu \\nu 
} \\partial _\\nu \\psi \\bigr ) + | {\\mathrm {I\\!I}} |^2 \\psi = \\frac{1}{\\sqrt{|h|}} (\\dot{K} + \\dot{f}) + \\frac{1}{\\sqrt{|h|}} \\partial _\\nu \\bigl ( \\sqrt{|h|}(h^{-1})^{0\\nu } (K+f) \\bigr ).\\end{split}$" ], [ "Eigenfunctions", "In this subsection we determine the eigenfunctions and generalized eigenfunctions of the matrix operator $M$ defined in (REF ) when the parameter $\\ell (t)$ is time-independent, i.e., $\\ell (t) = \\ell $ for some fixed $\\ell \\in \\mathbb {R}^{n+1}$ with $|\\ell | < 1$ , and $\\xi (t)=a+t\\ell $ for some fixed $ a\\in \\mathbb {R}^{n+1}$ .", "We will denote the particular choice of $\\ell $ for which we want to compute the eigenfunctions by $\\ell _0$ , and use the notation $M_{\\ell _0}$ for the corresponding operator.", "To find the eigenfunctions and generalized eigenfunctions of $M_{\\ell _0}$ , we consider the following maximal embeddings $\\begin{split}\\Psi _{\\xi _0,\\ell _0} &:= \\bigl ( t, \\xi _0 + \\gamma _0^{-1}P_{\\ell _0}F+P_{\\ell _0}^\\perp F \\bigr ), \\qquad \\xi _0 = a_0+t\\ell _0, \\\\\\Psi _{\\xi ,\\ell } &:= \\bigl ( t, \\xi + \\gamma ^{-1} P_\\ell F+P_\\ell ^\\perp F \\bigr ), \\qquad \\quad \\, \\xi = a+t\\ell ,\\end{split}$ for fixed $(a_0, \\ell _0)$ and $(a, \\ell )$ .", "For each $(a,\\ell )$ we write $\\Psi _{\\xi ,\\ell } = \\Psi _{\\xi _0,\\ell _0} + \\psi _{\\xi ,\\ell }N_{\\ell _0},$ where $N_{\\ell _0}$ is the normal to $\\text{im}(\\Psi _{\\xi _0,\\ell _0}(t)) \\cap {}_t$ viewed as a subspace of ${}_t$ .", "Then we have $\\psi _{\\xi _0,\\ell _0} \\equiv 0$ .", "The metric $\\Psi _{\\xi _0,\\ell _0}^\\ast \\eta $ is denoted by $h$ and the metric $\\Psi _{\\xi ,\\ell }^\\ast \\eta $ by $g$ (note that in this subsection there is no difference between $h$ and what would be $k$ ).", "In view of (REF ), we define $\\begin{split}B_0 &:= \\sqrt{|h|} (h^{-1})^{0 j}\\eta (\\partial _j N_{\\ell _0},N_{\\ell _0}) \\\\&\\qquad - \\sqrt{|h|} (h^{-1})^{0 \\kappa } (h^{-1})^{\\nu j}\\eta (\\partial _\\kappa \\Psi _{\\xi _0,\\ell _0},\\partial _j N_{\\ell _0})\\eta (\\partial _\\nu \\Psi _{\\xi _0,\\ell _0},N_{\\ell _0}) \\\\&\\qquad - \\sqrt{|h|} (h^{-1})^{0 j} (h^{-1})^{\\nu \\lambda }\\eta (\\partial _\\lambda \\Psi _{\\xi _0,\\ell _0},\\partial _j N_{\\ell _0})\\eta (\\partial _\\nu \\Psi _{\\xi _0,\\ell _0},N_{\\ell _0}) \\\\&\\qquad + \\sqrt{|h|} (h^{-1})^{0 \\nu } (h^{-1})^{\\kappa j}\\eta (\\partial _\\kappa \\Psi _{\\xi _0,\\ell _0},\\partial _j N_{\\ell _0})\\eta (\\partial _\\nu \\Psi _{\\xi _0,\\ell _0},N_{\\ell _0})\\end{split}$ and let $\\begin{split}{\\dot{\\psi }}_{\\xi ,\\ell } := \\eta \\bigl ( \\sqrt{|g|} (g^{-1})^{0\\nu }\\partial _\\nu \\Psi _{\\xi ,\\ell }, N_{\\ell _0} \\bigr ) - \\eta \\bigl ( \\sqrt{|h|} (h^{-1})^{00} (1,\\ell _0),N_{\\ell _0}\\bigr ) - B_0 \\psi _{\\xi ,\\ell }.\\end{split}$ Then ${\\dot{\\psi }}_{\\xi _0,\\ell _0} = 0$ and $\\vec{\\psi }_{\\xi ,\\ell } = (\\psi _{\\xi ,\\ell }, {\\dot{\\psi }}_{\\xi ,\\ell })$ satisfies $(\\partial _t-M_{\\ell _0}) \\vec{\\psi }_{\\xi ,\\ell } = \\vec{\\mathcal {F}},$ where $\\vec{\\mathcal {F}}$ depends at least quadratically on $\\vec{\\psi }_{\\xi ,\\ell }$ .", "In particular $\\begin{split}\\frac{\\delta \\vec{\\mathcal {F}}}{\\delta a} \\Big |_{(a,\\ell )=(a_0,\\ell _0)} = \\frac{\\delta \\vec{\\mathcal {F}}}{\\delta \\ell }\\Big |_{(a,\\ell )=(a_0,\\ell _0)}=0.\\end{split}$ It follows that $\\frac{\\delta \\vec{\\psi }_{\\xi ,\\ell }}{\\delta a^i}\\vert _{(a,\\ell )=(a_0,\\ell _0)}$ and $\\frac{\\delta \\vec{\\psi }_{\\xi ,\\ell }}{\\delta 
\\ell ^i}\\vert _{(a,\\ell )=(a_0,\\ell _0)}$ for $1 \\le i \\le n$ are solutions of $(\\partial _t-M_{\\ell _0}) \\vec{\\varphi }= 0.$ To compute these parameter derivatives more easily, we also observe that in view of (REF ) ${\\dot{\\psi }}_{\\xi ,\\ell } = \\sqrt{|h|} h^{0\\nu } \\partial _\\nu \\psi _{\\xi ,\\ell } + {\\tilde{f}},$ where ${\\tilde{f}}$ depends quadratically on $\\psi _{\\xi ,\\ell }$ , whence $\\frac{\\delta {\\tilde{f}}}{\\delta a^i}\\vert _{(a,\\ell )=(a_0,\\ell _0)}=\\frac{\\delta {\\tilde{f}}}{\\delta \\ell ^i}\\vert _{(a,\\ell )=(a_0,\\ell _0)}=0$ .", "Moreover, we note that $\\frac{\\delta }{\\delta a^i}$ is the same as $\\frac{\\delta }{\\delta \\xi ^i}$ and $\\begin{split}\\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\xi ^i} \\Bigr |_{\\ell =\\ell _0} = \\eta \\Bigl (\\frac{\\delta \\Psi _{\\xi ,\\ell }}{\\delta \\xi ^i} \\Bigr |_{\\ell =\\ell _0},|N_{\\ell _0}|^{-2}N_{\\ell _0}\\Bigr ), \\qquad \\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} = \\eta \\Bigl (\\frac{\\delta \\Psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0},|N_{\\ell _0}|^{-2}N_{\\ell _0}\\Bigr ).\\end{split}$ With these observations we compute $\\begin{split}\\vec{\\varphi }_i = \\begin{pmatrix} \\varphi _i \\\\ {\\dot{\\varphi }}_i \\end{pmatrix} := \\begin{pmatrix} \\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\xi ^i} \\bigl |_{\\ell =\\ell _0} \\\\ \\frac{\\delta {\\dot{\\psi }}_{\\xi ,\\ell }}{\\delta \\xi ^i} \\bigl |_{\\ell =\\ell _0} \\end{pmatrix} = \\begin{pmatrix} \\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\xi ^i} \\bigl |_{\\ell =\\ell _0} \\\\ \\sqrt{|h|} (h^{-1})^{0\\nu } \\bigl |_{\\ell =\\ell _0} \\partial _\\nu \\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\xi ^i} \\bigl |_{\\ell =\\ell _0} \\end{pmatrix}, \\quad 1 \\le i \\le n,\\end{split}$ to be $\\begin{aligned}\\varphi _i = |\\ell _0|^{-2} (\\gamma _0 - 1) (\\ell _0 \\cdot \\nu ) \\ell _0^i + \\nu ^i, \\qquad {\\dot{\\varphi }}_i = \\sqrt{|h|} (h^{-1})^{0j} \\bigl |_{\\ell =\\ell _0} \\partial _j \\varphi _i\\end{aligned}$ with $\\gamma _0 = (1-|\\ell _0|^2)^{-\\frac{1}{2}}.$ Since $\\vec{\\varphi }_i$ is independent of $t$ , we conclude that $M_{\\ell _0} \\vec{\\varphi }_i=0, \\qquad 1 \\le i \\le n.$ Similarly, for the derivatives with respect to $\\ell ^i$ we have (below we have used the fact that the first $n$ components of $\\nu $ and $F$ are proportional, to simplify a bit) $\\begin{split}\\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} = \\eta \\Bigl (\\frac{\\delta \\Psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0}, |N_{\\ell _0}|^{-2}N_{\\ell _0}\\Bigr )&= -\\gamma _0(\\ell _0 \\cdot F) \\nu ^i - |\\ell _0|^{-2} \\gamma _0 (\\gamma _0-1) (\\ell _0 \\cdot F)(\\ell _0 \\cdot \\nu ) \\ell _0^i + t\\varphi _i.\\end{split}$ We set $\\begin{split}\\varphi _{n+i} := -\\gamma _0 (\\ell _0 \\cdot F)\\nu ^i-|\\ell _0|^{-2} \\gamma _0 (\\gamma _0-1)(\\ell _0 \\cdot F)(\\ell _0 \\cdot \\nu ) \\ell _0^i, \\quad 1 \\le i \\le n,\\end{split}$ and thus have $\\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} = \\varphi _{n+i} + t \\varphi _i, \\quad 1 \\le i \\le n.$ It follows that $\\begin{split}\\frac{\\delta {\\dot{\\psi }}_{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} &= \\sqrt{|h|} (h^{-1})^{0\\nu } \\partial _\\nu \\Bigl ( \\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr ) \\Bigr |_{\\ell =\\ell _0} \\\\&= \\sqrt{|h|}(h^{-1})^{0j} \\bigr |_{\\ell =\\ell 
_0} \\partial _j \\varphi _{n+i} + \\sqrt{|h|}(h^{-1})^{00} \\bigr |_{\\ell =\\ell _0} \\varphi _i + t \\sqrt{|h|} (h^{-1})^{0j} \\bigr |_{\\ell =\\ell _0} \\partial _j \\varphi _{i} \\\\&= \\sqrt{|h|}(h^{-1})^{0j} \\bigr |_{\\ell =\\ell _0} \\partial _j \\varphi _{n+i} + \\sqrt{|h|}(h^{-1})^{00} \\bigr |_{\\ell =\\ell _0} \\varphi _i + t {\\dot{\\varphi }}_i.\\end{split}$ Hence, upon defining ${\\dot{\\varphi }}_{n+i} := \\sqrt{|h|}(h^{-1})^{0j} \\bigr |_{\\ell =\\ell _0} \\partial _j \\varphi _{n+i} + \\sqrt{|h|}(h^{-1})^{00} \\bigr |_{\\ell =\\ell _0} \\varphi _i, \\quad 1 \\le i \\le n,$ as well as $\\vec{\\varphi }_{n+i} = \\begin{pmatrix} \\varphi _{n+i} \\\\ {\\dot{\\varphi }}_{n+i} \\end{pmatrix}, \\quad 1 \\le i \\le n,$ we have $\\begin{pmatrix} \\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} \\\\ \\frac{\\delta {\\dot{\\psi }}_{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} \\end{pmatrix} = \\vec{\\varphi }_{n+i} + t \\vec{\\varphi }_i, \\quad 1 \\le i \\le n.$ Now recall that we have shown for time-independent $\\ell = \\ell _0$ , $\\begin{split}(\\partial _t - M_{\\ell _0}) \\begin{pmatrix} \\frac{\\delta \\psi _{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} \\\\ \\frac{\\delta {\\dot{\\psi }}_{\\xi ,\\ell }}{\\delta \\ell ^i} \\Bigr |_{\\ell =\\ell _0} \\end{pmatrix} = 0, \\quad 1 \\le i \\le n.\\end{split}$ Since $\\vec{\\varphi }_i$ satisfies $(\\partial _t-M_{\\ell _0}) \\vec{\\varphi }_i = M_{\\ell _0} \\vec{\\varphi }_i=0$ and since $\\vec{\\varphi }_{n+i}$ satisfies $\\partial _t\\vec{\\varphi }_{n+i}=0$ , we conclude $M_{\\ell _0} \\vec{\\varphi }_{n+i}=\\vec{\\varphi }_i, \\quad 1 \\le i \\le n.$" ], [ "Modulation equations", "In this section we revert to the notation used before Section REF .", "In particular, $\\ell = \\ell (t)$ and $\\xi =\\xi (t)$ are assumed to be time-dependent again, and $\\xi (t)$ is no longer assumed to be of the form $a+t\\ell $ .", "We begin with the definition of the symplectic form.", "To this end we introduce the operator $\\begin{aligned}J = \\begin{pmatrix} 0 & 1 \\\\ -1 & 0 \\end{pmatrix}.\\end{aligned}$ Note that $J^\\ast = - J$ , in the sense that $(J{\\vec{u}})\\cdot {\\vec{v}}=-{\\vec{u}}\\cdot (J{\\vec{v}})$ .", "We define the symplectic form $\\Omega $ as $\\begin{aligned}\\Omega (\\vec{u}, \\vec{v}) := \\langle \\vec{u}, J \\vec{v} \\rangle , \\quad \\vec{u} = \\begin{pmatrix} u_1 \\\\ u_2 \\end{pmatrix}, \\quad \\vec{v} = \\begin{pmatrix} v_1 \\\\ v_2 \\end{pmatrix},\\end{aligned}$ where $\\langle \\vec{u}, \\vec{v} \\rangle = \\int \\bigl ( u_1 v_1 + u_2 v_2 \\bigr ) \\, \\mathrm {d}\\omega \\, \\mathrm {d}\\rho .$ We emphasize that the reason why we are using $\\mathrm {d}\\omega \\,\\mathrm {d}\\rho $ for the volume form is that in our applications, $\\sqrt{| h|}$ is incorporated in the definition of $\\vec{\\psi }$ .", "Recall the definition of the matrix operator $\\begin{aligned}M := \\begin{pmatrix}- \\frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \\partial _j& \\frac{1}{\\sqrt{|h|} (h^{-1})^{00}} \\\\-\\sqrt{|h|} L & - \\partial _j\\Bigl ( \\frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \\Bigr )\\end{pmatrix}.\\end{aligned}$ Its adjoint with respect to the inner product $\\langle \\vec{u}, \\vec{v} \\rangle $ is given by $\\begin{aligned}M^\\ast := \\begin{pmatrix}\\partial _j\\Bigl ( \\frac{(h^{-1})^{j0}}{(h^{-1})^{00}} \\Bigr ) & -\\sqrt{|h|} L \\\\\\frac{1}{\\sqrt{|h|} (h^{-1})^{00}} & \\frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \\partial _j\\end{pmatrix}.\\end{aligned}$ Then we have 
$\\begin{aligned}J M + M^\\ast J = 0.\\end{aligned}$ In particular, it follows that $\\Omega ({\\vec{u}}, M {\\vec{v}}) = - \\Omega (M{\\vec{u}}, {\\vec{v}}).$ Motivated by the discussion in Subsection REF , we define for arbitrary $\\ell \\in \\mathbb {R}^n$ , $|\\ell | < 1$ , $\\begin{aligned}\\vec{\\varphi }_i := \\begin{pmatrix} \\varphi _i \\\\ {\\dot{\\varphi }}_i \\end{pmatrix}, \\quad \\vec{\\varphi }_{n+i} := \\begin{pmatrix} \\varphi _{n+i} \\\\ {\\dot{\\varphi }}_{n+i} \\end{pmatrix}, \\quad 1 \\le i \\le n,\\end{aligned}$ where $\\begin{aligned}\\varphi _i &:= |\\ell |^{-2}(\\gamma -1)(\\ell \\cdot \\nu )\\ell ^i+\\nu ^i, \\\\{\\dot{\\varphi }}_i &:= \\sqrt{|h|}(h^{-1})^{0j}\\partial _j\\varphi _i, \\\\\\varphi _{n+i} &:= -\\gamma (\\ell \\cdot F)\\nu ^i-|\\ell |^{-2}\\gamma (\\gamma -1)(\\ell \\cdot F)(\\ell \\cdot \\nu )\\ell ^i, \\\\{\\dot{\\varphi }}_{n+i} &:= \\sqrt{|h|}(h^{-1})^{0j}\\partial _j\\varphi _{n+i}+\\sqrt{|h|}(h^{-1})^{00}\\varphi _i.\\end{aligned}$ Recall that here we no longer assume that $\\frac{\\mathrm {d}\\xi }{\\mathrm {d}t}=\\ell $ and $\\frac{\\mathrm {d}\\ell }{\\mathrm {d}t}=0$ .", "From Subsection REF we still obtain $M \\vec{\\varphi }_i = 0, \\quad M \\vec{\\varphi }_{n+i} = \\vec{\\varphi }_i, \\quad 1 \\le i \\le n,$ but $\\vec{\\varphi }_i$ and $\\vec{\\varphi }_{n+i}$ are no longer elements of the kernel, respectively generalized kernel, of $(\\partial _t - M)$ .", "Next, we introduce truncated versions of the generalized eigenfunctions given by ${\\vec{Z}}_i := \\chi \\vec{\\varphi }_i, \\quad {\\vec{Z}}_{n+i} := \\chi \\vec{\\varphi }_{n+i}, \\quad i = 1, \\ldots , n,$ where the smooth cut-off function $\\chi \\in C_c^\\infty (\\mathbb {R})$ satisfies $\\chi (\\rho ) = 1$ for $|\\rho | \\le {R_1}$ and $\\chi (\\rho ) = 0$ for $|\\rho | \\ge 2{R_1}$ .", "Then, using (REF ), we find for $i=1, \\ldots , n$ that $\\begin{aligned}\\partial _t \\bigl ( \\Omega ( \\vec{\\psi }, {\\vec{Z}}_i ) \\bigr ) &= \\Omega ({\\vec{K}}, {\\vec{Z}}_i) + \\Omega ({\\vec{f}}, {\\vec{Z}}_i) + \\Omega (\\vec{\\psi }, (\\partial _t-M) {\\vec{Z}}_i), \\\\\\partial _t \\bigl ( \\Omega ( \\vec{\\psi }, {\\vec{Z}}_{n+i} ) \\bigr ) &= \\Omega ({\\vec{K}}, {\\vec{Z}}_{n+i}) + \\Omega ({\\vec{f}}, {\\vec{Z}}_{n+i}) + \\Omega (\\psi , (\\partial _t-M){\\vec{Z}}_{n+i}).\\end{aligned}$ We determine the leading order behavior of $\\Omega ( \\vec{\\psi }, {\\vec{Z}}_i )$ and $\\Omega ( \\vec{\\psi }, {\\vec{Z}}_{n+i} )$ .", "To this end we first observe that to leading order $\\begin{aligned}K &= -(\\dot{\\xi }-\\ell ) \\cdot \\bigl ( \\nu + \\mathcal {O}(|\\ell |^2) \\nu \\bigr ) + \\dot{\\ell } \\cdot \\mathcal {O}(|\\ell |), \\\\\\dot{K} &= -\\sqrt{|h|} (h^{-1})^{00} \\dot{\\ell } \\cdot \\bigl ( \\nu + \\mathcal {O}(|\\ell |^2) \\nu \\bigr ),\\end{aligned}$ and $\\begin{aligned}\\varphi _i &= \\nu ^i + \\mathcal {O}(|\\ell |^2), \\\\\\dot{\\varphi }_i &= \\sqrt{|h|} (h^{-1})^{0j} \\partial _j \\varphi _i = \\mathcal {O}(|\\ell |), \\\\\\varphi _{n+i} &= \\mathcal {O}(|\\ell |), \\\\\\dot{\\varphi }_{n+i} &= \\sqrt{|h|} (h^{-1})^{00} \\nu ^i + \\mathcal {O}(|\\ell |).\\end{aligned}$ We obtain corresponding leading order expressions for ${\\vec{Z}}_i$ and ${\\vec{Z}}_{n+i}$ .", "Thus, we find that to leading order $\\begin{aligned}\\Omega ({\\vec{K}}, {\\vec{Z}}_i) &= \\sum _j \\dot{\\ell }_j ( d_{ij} + r_{ij} ) + \\sum _j (\\dot{\\xi }_j - \\ell _j) b_{ij}, \\\\\\Omega ({\\vec{K}}, {\\vec{Z}}_{n+i}) &= \\sum _j \\dot{\\ell }_j {\\tilde{b}}_{ij} - \\sum _j (\\dot{\\xi }_j - 
\\ell _j) ( d_{ij} + {\\tilde{r}}_{ij})\\end{aligned}$ with (see Section REF ) $ d_{ij} = \\int \\chi \\nu ^i \\nu ^j \\sqrt{|h|} (h^{-1})^{00} \\, \\mathrm {d}\\rho \\, \\mathrm {d}\\omega \\, \\simeq \\, \\left\\lbrace \\begin{aligned} 1, \\quad i = j, \\\\o_{{R_1},\\ell }(1), \\quad i \\ne j \\end{aligned} \\right.$ and $r_{ij} = b_{ij} = {\\tilde{r}}_{ij} = {\\tilde{b}}_{ij} = \\mathcal {O}(|\\ell |).$ Below we denote by $D$ the $n \\times n$ matrix with entries $d_{ij}$ defined in (REF ).", "Clearly, $D$ is invertible for small $|\\ell |$ and sufficiently large ${R_1}$ .", "Parts of the quantities $\\Omega ({\\vec{f}}, \\vec{Z}_i)$ and $\\Omega ({\\vec{f}}, \\vec{Z}_{n+i})$ are (at least) linear in ${\\dot{\\wp }}$ with coefficients that are $\\mathcal {O}({\\dot{\\wp }}, \\ell , \\psi , \\partial \\psi )$ .", "Note that not the entire quantities $\\Omega ({\\vec{f}}, \\vec{Z}_i)$ and $\\Omega ({\\vec{f}}, \\vec{Z}_{n+i})$ contain a factor of ${\\dot{\\wp }}$ , for instance $\\mathcal {E}_1$ does not.", "Finally, we have $\\begin{aligned}\\Omega (\\vec{\\psi }, (\\partial _t - M) {\\vec{Z}}_i) &= \\Omega (\\vec{\\psi }, \\chi \\dot{\\ell } \\cdot \\nabla _\\ell \\vec{Z}_i) - \\Omega (\\vec{\\psi }, M(\\chi \\vec{Z}_i)), \\\\\\Omega (\\vec{\\psi }, (\\partial _t - M) {\\vec{Z}}_{n+i}) &= \\Omega (\\vec{\\psi }, \\chi \\dot{\\ell } \\cdot \\nabla _\\ell \\vec{Z}_{n+i}) - \\Omega (\\vec{\\psi }, M(\\chi \\vec{Z}_{n+i})).\\end{aligned}$ Putting the preceding observations together, we have found that schematically $ \\begin{aligned}\\partial _t {\\vec{\\Omega }}+ {\\vec{N}}= \\begin{pmatrix} D + R & R \\\\ R & D + R \\end{pmatrix} {\\dot{\\wp }}+ {\\vec{H}},\\end{aligned}$ where (recall the notation from Section REF ) $\\begin{aligned}{\\vec{\\Omega }}&= \\bigl ( \\Omega (\\vec{\\psi }, {\\vec{Z}}_1), \\ldots , \\Omega (\\vec{\\psi }, {\\vec{Z}}_{2n}) \\bigr ), \\\\{\\vec{N}}&= \\bigl ( \\Omega (\\vec{\\psi }, M {\\vec{Z}}_1), \\ldots , \\Omega (\\vec{\\psi }, M {\\vec{Z}}_{2n}) \\bigr ), \\\\R &= \\mathcal {O}(\\psi , \\partial \\psi , \\partial _\\Sigma \\partial \\psi , {\\dot{\\wp }}, \\ell ), \\\\{\\vec{H}}&= \\mathcal {O}\\bigl ( (\\psi , \\partial \\psi , \\partial _\\Sigma \\partial \\psi )^2 \\bigr ).\\end{aligned}$ Note that here we have separated ${\\vec{N}}$ from ${\\vec{H}}$ to emphasize that ${\\vec{N}}$ , which contains the linear contribution in $\\vec{\\psi }$ , is the principal source term for $\\partial _t{\\vec{\\Omega }}$ .", "In what follows, we would like to view (REF ) as an equation entirely in terms of $\\vec{\\psi }= (\\psi , {\\dot{\\psi }})$ , ${\\dot{\\wp }}$ , and $\\ell $ .", "However, at this point the right-hand side of (REF ) still involves $\\partial _t \\psi $ .", "To remedy this, we use the relation (REF ) between $\\partial _t \\psi $ and ${\\dot{\\psi }}$ to replace any $\\partial _t \\psi $ terms on the right-hand side of (REF ).", "However, some care has to be taken here because the quadratic error term $f$ in (REF ) still contains terms involving $\\partial _t \\psi $ , but they are of the form $\\mathcal {O}(\\psi , \\partial \\psi , \\ell , {\\dot{\\wp }}) \\partial _t \\psi $ .", "We can therefore use the implicit function theorem to infer from the relation (REF ) that for sufficiently small $\\psi $ , $\\partial \\psi $ , $\\partial _\\Sigma \\partial \\psi $ , $\\ell $ , and ${\\dot{\\wp }}$ , we can write $ \\partial _t \\psi = \\frac{1}{\\sqrt{|h|} (h^{-1})^{00}} \\dot{\\psi } - \\frac{(h^{-1})^{0j}}{(h^{-1})^{00}} \\partial _j\\psi - \\eta 
\\Bigl ( \\bigl ( \\dot{\\ell } \\cdot \\nabla _\\ell + (\\dot{\\xi }-\\ell ) \\cdot \\nabla _\\xi \\bigr ) \\Psi _\\wp , N \\Bigr ) + \\mathcal {O}\\bigl ( (\\partial ^{\\le 2}_\\Sigma \\vec{\\psi }, \\ell , {\\dot{\\wp }})^2 \\bigr ).$ Remark 3.6 The smallness required for this application of the implicit function theorem will follow from the assumptions on the initial data and our bootstrap assumptions in Section , and will be assumed in the remainder of this section.", "Inserting the relation (REF ) into the right-hand side of (REF ) and rearranging, we obtain $ \\begin{aligned}\\partial _t {\\vec{\\Omega }}+ {\\vec{N}}= \\begin{pmatrix} D + {\\widetilde{R}}& {\\widetilde{R}}\\\\ {\\widetilde{R}}& D + {\\widetilde{R}}\\end{pmatrix} {\\dot{\\wp }}+ \\vec{{\\widetilde{H}}} =: {\\vec{F}}(\\partial ^{\\le 2}_\\Sigma \\vec{\\psi }, \\ell , {\\dot{\\wp }}),\\end{aligned}$ where ${\\widetilde{R}}$ and ${\\widetilde{H}}$ are smooth functions of the form $\\begin{aligned}{\\widetilde{R}}&= \\mathcal {O}(\\partial ^{\\le 2}_\\Sigma \\vec{\\psi }, \\ell , {\\dot{\\wp }}), \\\\\\vec{{\\widetilde{H}}} &= \\mathcal {O}\\bigl ( (\\partial ^{\\le 2}_\\Sigma \\vec{\\psi })^2, \\ell \\vec{\\psi }, \\ell \\partial _\\Sigma \\vec{\\psi }\\bigr ).\\end{aligned}$ As a consequence of the implicit function theorem, there exists a smooth function ${\\vec{G}}$ such that for small $x$ , $y$ , $w$ , and $q$ , we have $q = {\\vec{F}}(x,y,w)$ if and only if $w = {\\vec{G}}(x,y,q)$ .", "Moreover, in view of the structure of ${\\vec{F}}$ defined in (REF ), ${\\vec{G}}$ must then also satisfy (uniformly for all sufficiently small $x$ , $y$ , and $q$ ) the estimate $ |{\\vec{G}}(x,y,q)| \\lesssim |x| + |q|.$ Thus, assuming $\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }$ , $\\ell $ , and ${\\dot{\\wp }}$ are sufficiently small (in a pointwise sense) as remarked above, (REF ) holds if and only if we have $ \\begin{aligned}{\\dot{\\wp }}= {\\vec{G}}(\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , \\partial _t {\\vec{\\Omega }}+ {\\vec{N}}).\\end{aligned}$ Note that in this equation for ${\\dot{\\wp }}$ second-order derivatives of $\\vec{\\psi }$ appear in the source term.", "This has the potential danger that after differentiating in $t$ (that is, for after eventually commuting the equation with the maximum number of $\\partial _t$ derivatives), there is a loss of regularity.", "To avoid this issue, one can try to replace (REF ) by a smoothed out version of it.", "To achieve this, we could try to impose orthogonality conditions for ${\\vec{\\Omega }}$ that lead to a differential equation of the form $\\begin{aligned}{\\dot{\\wp }}= {\\vec{G}}( S\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , S {\\vec{N}}),\\end{aligned}$ where $S$ is a smoothing operator (in time) that will be defined shortly.", "While this seems feasible in principle, technically, it seems simpler (see (REF ) below) to allow more flexibility in the final differential equation for ${\\dot{\\wp }}$ , and to let it be of the form $\\begin{aligned}{\\dot{\\wp }}= {\\vec{G}}( S\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , S{\\vec{N}}- \\beta \\vec{\\omega }),\\end{aligned}$ where $\\beta > 0$ and $\\beta \\vec{\\omega }$ is no larger than ${\\vec{N}}$ .", "Comparing with (REF ), this is equivalent to $\\begin{aligned}\\partial _t {\\vec{\\Omega }}+ {\\vec{N}}= {\\vec{F}}(\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , {\\vec{G}}( S\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , S{\\vec{N}}- \\beta \\vec{\\omega }) ).\\end{aligned}$ Using 
that ${\\vec{F}}(0,\\ell , {\\vec{G}}(0,\\ell , q)) = q$ , this is also equivalent to $\\begin{aligned}\\partial _t {\\vec{\\Omega }}= (S-I) {\\vec{N}}- \\beta \\vec{\\omega }- {\\vec{F}}_\\omega ,\\end{aligned}$ where $\\begin{aligned}{\\vec{F}}_\\omega (\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , S{\\vec{N}}-\\beta \\vec{\\omega }) &:= {\\vec{F}}(0,\\ell , {\\vec{G}}(0,\\ell , S{\\vec{N}}-\\beta \\vec{\\omega })) - {\\vec{F}}(\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , {\\vec{G}}(S\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , S{\\vec{N}}-\\beta \\vec{\\omega }))\\end{aligned}$ has additional smallness of order $\\mathcal {O}( \\partial _\\Sigma ^{\\le 2} \\vec{\\psi } )$ .", "Indeed, by (REF ) it follows from the preceding that $\\begin{aligned}|{\\vec{F}}_\\omega (\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , S{\\vec{N}}-\\beta \\vec{\\omega })| \\lesssim |\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }| \\bigl ( |\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }| + |S{\\vec{N}}-\\beta \\vec{\\omega }| \\bigr ).\\end{aligned}$ We now seek to impose a decomposition $ \\begin{aligned}{\\vec{\\Omega }}= {\\vec{}}+ \\vec{\\omega }\\end{aligned}$ such that $ \\begin{aligned}\\partial _t{\\vec{}}&= (S-I)({\\vec{N}}+{\\vec{F}}_\\omega ), \\\\\\partial _t\\vec{\\omega }&= -\\beta \\vec{\\omega }-S{\\vec{F}}_\\omega .\\end{aligned}$ The motivation for this decomposition can be explained as follows.", "We view ${\\vec{N}}$ as the main source term for $\\partial _t{\\vec{\\Omega }}$ .", "As we will see momentarily, the smoothing operator $S$ can be chosen to be almost local in time, so that for instance $S \\psi $ and $\\psi $ will satisfy comparable decay estimates in time.", "We will also see that with this choice, we can write $S-I=\\partial _t{\\tilde{S}}$ where ${\\tilde{S}}$ is another almost local (but not smoothing) operator, so comparing with (REF ) we see that ${\\vec{}}$ is of the same order as ${\\vec{N}}$ .", "This captures the main contribution to ${\\vec{\\Omega }}$ .", "For the remainder $\\vec{\\omega }$ we no longer have the structure $S-I$ , but (REF ) was conceived so that the equation in (REF ) satisfied by $\\vec{\\omega }$ comes with the damping term $-\\beta \\vec{\\omega }$ , which can be used to estimate $\\vec{\\omega }$ in terms of $S{\\vec{F}}_\\omega $ .", "The details of this argument are presented in Section .", "To achieve (REF ) and (REF ), we will invoke the implicit function theorem on suitable Banach spaces of time-dependent curves on a time interval $J = [0, \\tau _0]$ .", "We first introduce the definition of the smoothing operator (in time) $S$ .", "Let $k \\in C_c^\\infty (\\mathbb {R})$ be a smooth bump function supported in the interval $[0,1]$ such that $\\int _0^1 k(s) \\, \\mathrm {d}s = 1$ .", "For a given (locally integrable) function $h(t)$ defined for $t \\ge -1$ , we set $(S h)(t) := \\int _\\mathbb {R}\\chi _{[-1,\\infty )}(s) h(s) k(t-s) \\, \\mathrm {d}s, \\quad t \\ge -1.$ Then $(S h)(t)$ is a smooth function for all $t \\ge 0$ .", "We also define an associated operator $\\widetilde{S}$ with the property that $(S-I) h = \\frac{\\mathrm {d}}{\\mathrm {d}t} (\\widetilde{S} h)$ .", "To this end we set $\\tilde{k}(r) := 0$ for $r < 0$ and $\\tilde{k}(r) := -\\int _r^\\infty k(s) \\, \\mathrm {d}s$ for $r \\ge 0$ .", "Note that $\\tilde{k}(r)$ is also supported in the interval $[0,1]$ .", "Then the operator $(\\widetilde{S} h)(t) := \\int _\\mathbb {R}\\chi _{[-1,\\infty )}(s) h(s) \\tilde{k}(t-s) \\, \\mathrm {d}s, \\quad t \\ge 
-1,$ has the desired properties.", "Let $\\Phi $ be a sufficiently regular (for instance $C^5$ ) solution to HVMC.", "Given $C^1$ curves $\\xi (t)$ and $\\ell (t)$ defined on some time interval $J = [0,\\tau _0]$ in the domain of definition of $\\Phi $ , we denote by $\\Psi _{\\wp }$ the associated profile in the flat region, defined as in (REF ).", "We let $\\psi _{\\xi ,\\ell } := \\eta ( \\Phi - \\Psi _\\wp , n_\\wp ),$ and correspondingly define $\\dot{\\psi }_{\\xi ,\\ell }$ as in (REF ).", "First, we trivially extend $\\xi (t)$ , $\\ell (t)$ , and $\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }_{\\xi ,\\ell }$ to times $-1 \\le t \\le 0$ .", "Then we define $\\vec{\\omega }(t)$ to be the solution of $\\left\\lbrace \\begin{aligned}\\vec{\\omega }(t) &= - \\int _0^t e^{-\\beta (t-s)} S {\\vec{F}}_\\omega (\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }_{\\xi ,\\ell }, \\ell , \\vec{\\omega })(s) \\, \\mathrm {d}s, &\\quad t \\ge 0, \\\\\\vec{\\omega }(t) &= 0, \\quad &t < 0.\\end{aligned} \\right.$ Observe that $\\vec{\\omega }$ has additional smallness in view of the definition of ${\\vec{F}}_\\omega $ .", "Next, in view of (REF ), we define ${\\vec{}}(t) := {\\widetilde{S}}\\bigl ( {\\vec{N}}+{\\vec{F}}_\\omega \\bigr )(t).$ Finally, we defineTo eventually apply the implicit function theorem to $\\Upsilon $ we need to specify its domain of definition.", "This can be done for instance as follows.", "Let $X=C^5_c(\\lbrace |\\rho |< {\\tilde{R}}_1\\rbrace ;\\mathbb {R}^{1+(n+1}))$ denote the space of 5 times continuously differentiable functions (of $(\\rho ,\\omega )$ ) supported in $\\lbrace |\\rho |\\le {\\tilde{R}}_1\\rbrace $ for some large ${\\tilde{R}}_1$ .", "We let $\\begin{split}\\mathcal {X}=C([-1,\\tau _0];X)\\cap C^1([-1,\\tau _0];X),\\quad \\mathcal {Y}=C([-1,\\tau _0];\\mathbb {R})\\cap C^1([-1,\\tau _0];\\mathbb {R}).\\end{split}$ Then we can view $\\Upsilon $ as a function $\\begin{split}\\Upsilon : \\mathcal {X}\\times \\mathcal {Y}^{2n}\\rightarrow \\mathcal {Y}^{2n},\\end{split}$ where $\\mathcal {Y}^{2n}$ represents the $2n$ components of $\\ell $ and $\\xi $ .", "Note that by construction, if ${\\tilde{R}}_1$ is chosen such that $\\chi $ in the definition of ${\\vec{Z}}_i$ is supported on $\\lbrace |\\rho |\\le {\\tilde{R}}_1\\rbrace $ , then $\\Upsilon (\\chi \\Psi _{0,0},0,0)=0$ .", "$\\vec{\\Upsilon }(\\Phi , \\xi , \\ell ) := {\\vec{\\Omega }}(\\vec{\\psi }_{\\xi ,\\ell })(t) - {\\vec{}}(t) - \\vec{\\omega }(t).$ Observe that by definition $\\vec{\\Upsilon }(\\Psi _{0,0},0,0) = 0,$ where $\\Psi _{0,0}(t,\\rho ,\\omega ) = (t, F(\\rho ,\\omega ))$ is the parametrization of the standard Lorentzian catenoid.", "We now want to show that the Fréchet derivative $D_{\\xi , \\ell } \\vec{\\Upsilon }(\\Psi _{0,0},0,0)$ is invertible.", "Then the existence of the decomposition (REF ) satisfying (REF ) follows from the implicit function theorem under our bootstrap assumptions.", "To this end, we observe that by the preceding definitions and in view of the computations in Subsection REF , we have for $1 \\le i, j \\le n$ that $\\begin{aligned}\\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, {\\vec{Z}}_i) \\bigr )}{\\delta \\xi _j} \\bigg |_{(\\xi , \\ell )=(0,0)} &= \\Omega (\\vec{\\varphi }_j, {\\vec{Z}}_i) \\big |_{\\ell =0} = 0, \\\\\\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, {\\vec{Z}}_i) \\bigr )}{\\delta \\ell _j} \\bigg |_{(\\xi , \\ell )=(0,0)} &= \\Omega (\\vec{\\varphi }_{n+j}, {\\vec{Z}}_i) \\big |_{\\ell =0} = - \\int \\chi \\nu ^j \\nu 
^i \\sqrt{|h|}\\big |_{\\ell =0} \\, \\mathrm {d}\\omega \\, \\mathrm {d}\\rho , \\\\\\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, {\\vec{Z}}_{n+i}) \\bigr )}{\\delta \\xi _j} \\bigg |_{(\\xi , \\ell )=(0,0)} &= \\Omega (\\vec{\\varphi }_j, {\\vec{Z}}_{n+i}) \\big |_{\\ell =0} = \\int \\chi \\nu ^j \\nu ^i \\sqrt{|h|}\\big |_{\\ell =0} \\, \\mathrm {d}\\omega \\, \\mathrm {d}\\rho , \\\\\\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, {\\vec{Z}}_{n+i}) \\bigr )}{\\delta \\ell _j} \\bigg |_{(\\xi , \\ell )=(0,0)} &= \\Omega (\\vec{\\varphi }_{n+j}, {\\vec{Z}}_{n+i}) \\big |_{\\ell =0} = 0.\\end{aligned}$ This determines the contributions of ${\\vec{\\Omega }}(\\vec{\\psi }_{\\xi ,\\ell })(t)$ to the Fréchet derivative $D_{\\xi , \\ell } \\vec{\\Upsilon }(\\Psi _{0,0},0,0)$ .", "Since ${\\vec{F}}_\\omega $ has additional smallness, $\\vec{\\omega }$ does not contribute to $D_{\\xi , \\ell } \\vec{\\Upsilon }(\\Psi _{0,0},0,0)$ , and we only need to examine the part $(\\widetilde{S} {\\vec{N}})(t)$ in ${\\vec{}}(t)$ more carefully.", "Since $M \\vec{\\varphi }_i = 0$ for $1 \\le i \\le n$ , and ${\\vec{Z}}_i = \\chi \\vec{\\varphi }_i$ , we have that $M{\\vec{Z}}_i$ is supported in $\\lbrace |\\rho | \\simeq {R_1}\\rbrace $ and is of size $\\mathcal {O}({R_1}^{-n+1},\\ell )$ .", "Thus, we find $\\begin{aligned}\\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, M {\\vec{Z}}_i) \\bigr )}{\\delta \\xi _j} \\bigg |_{(\\xi , \\ell )=(0,0)} = \\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, M {\\vec{Z}}_i) \\bigr )}{\\delta \\ell _j} \\bigg |_{(\\xi , \\ell )=(0,0)} = o_{R_1}(1).\\end{aligned}$ Since $M \\vec{\\varphi }_{n+i} = \\vec{\\varphi }_i$ , $1 \\le i \\le n$ , we have $M {\\vec{Z}}_{n+i} = {\\vec{Z}}_i + \\text{error}$ up to an error term that is supported in $\\lbrace |\\rho | \\simeq {R_1}\\rbrace $ and is of size $\\mathcal {O}({R_1}^{-2}, \\ell )$ .", "Thus, $\\begin{aligned}\\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, M {\\vec{Z}}_{n+i}) \\bigr )}{\\delta \\xi _j} \\bigg |_{(\\ell ,\\xi )=(0,0)} &= o_{R_1}(1), \\\\\\frac{\\delta \\bigl ( \\Omega (\\vec{\\psi }_{\\xi ,\\ell }, M {\\vec{Z}}_{n+i}) \\bigr )}{\\delta \\ell _j} \\bigg |_{(\\ell ,\\xi )=(0,0)} &= \\Omega (\\vec{\\varphi }_{n+j}, {\\vec{Z}}_i) \\big |_{\\ell =0} + o_{R_1}(1) = - \\int \\chi \\nu ^j \\nu ^i \\sqrt{|h|}\\big |_{\\ell =0} \\, \\mathrm {d}\\omega \\, \\mathrm {d}\\rho + o_{R_1}(1).\\end{aligned}$ It follows that the Fréchet derivative $D_{\\xi , \\ell } \\vec{\\Upsilon }(\\Psi _{0,0},0,0)$ is a map of the form $ \\begin{aligned}C \\begin{bmatrix} 0 & Id \\\\ Id & Id \\end{bmatrix} + o_{R_1}(1),\\end{aligned}$ where $C$ is a constant of order one as in Section REF .", "Clearly, the map (REF ) is invertible as a linear map of Banach spaces for sufficiently large ${R_1}$ .", "Thus, in view of (REF ) the Fréchet derivative $D_{\\xi , \\ell } \\vec{\\Upsilon }(\\Psi _{0,0},0,0)$ is invertible for sufficiently large ${R_1}$ ." 
], [ "Controlling the unstable mode", "Finally, we need to take into account the exponential instablity caused by the positive eigenvalue of the linearized operator $L$ .", "At this point we assume that the modulation parameters $\\ell (t)$ and $\\xi (t)$ have been determined in terms of $\\vec{\\psi }$ , so we treat these as given and enact a further decomposition of the perturbation $\\vec{\\psi }$ .", "Starting from the first-order equation (REF ) and inserting the relation (REF ) between $\\partial _t \\psi $ and $\\dot{\\psi }$ (furnished by the implicit function theorem), we obtain a first-order evolution equation for the perturbation $\\vec{\\psi }$ of the form $ (\\partial _t - M) \\vec{\\psi } = \\vec{F}_1\\bigl (\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , {\\dot{\\wp }}\\bigr ).$ Recall from Section  that the linearized operator ${\\underline{H}}:= \\Delta _{\\underline{\\mathcal {C}}}+ | {\\mathrm {I\\!I}} |^2$ of the Riemannian catenoid has a positive eigenvalue $\\mu ^2 > 0$ with associated (exponentially decaying) eigenfunction $\\underline{\\varphi }_\\mu $ .", "For the operator $M$ we introduce the time-independent “almost eigenfunctions” $\\vec{Z}_\\pm := c_\\pm \\bigl ( \\chi \\underline{\\varphi }_\\mu , \\mp \\mu \\sqrt{|h|}\\big |_{\\ell =0} \\chi \\underline{\\varphi }_\\mu \\bigr ),$ where $\\chi \\in C_c^\\infty (\\mathbb {R})$ is the previously introduced smooth cut-off to $|\\rho | \\lesssim {R_1}$ and where $c_\\pm $ are normalization constants such that $\\Omega (\\vec{Z}_+, \\vec{Z}_-) = - \\Omega (\\vec{Z}_-, \\vec{Z}_+) = 1.$ Then we have $M Z_\\pm = \\pm \\mu Z_\\pm + \\mathcal {E}_\\pm ,$ where the errors $\\mathcal {E}_\\pm $ consist of terms that are supported around $|\\rho | \\simeq {R_1}$ or that have additional smallness in terms of the parameter $\\ell $ .", "We now enact a decomposition of $\\vec{\\psi }$ into $ \\vec{\\psi } = \\vec{\\phi } + a_+ \\vec{Z}_+ + a_- \\vec{Z}_-,$ where the time-dependent parameters $a_+(t)$ and $a_-(t)$ will be defined shortly by imposing suitable orthogonality conditions.", "Inserting (REF ) into (REF ), we find $\\begin{split}(a_{+}^{\\prime }-\\mu a_{+}){\\vec{Z}}_{+}+(a_{-}+\\mu a_{-}){\\vec{Z}}_{-} &= a_{+}(M{\\vec{Z}}_{+}-\\mu {\\vec{Z}}_{+})+a_{-}(M{\\vec{Z}}_{-}+\\mu {\\vec{Z}}_{-}) \\\\&\\quad \\quad -(\\partial _t-M)\\vec{\\phi }+ {\\vec{F}}_1(\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , \\dot{{\\wp }}).\\end{split}$ Note that the dependence of ${\\vec{F}}_1(\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , \\dot{{\\wp }})$ on $\\vec{\\psi }$ comes with additional smallness.", "Thus, if we replace $\\vec{\\psi }$ in the definition of ${\\vec{F}}_1(\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , \\dot{{\\wp }})$ in terms of $\\vec{\\phi }$ and $a_{\\pm }$ , the terms involving $a_{\\pm }$ come with extra smallness.", "Also note that at this point we have already determined the parameters $\\ell (t)$ and $\\xi (t)$ , so we treat these as given.", "Now taking the $\\Omega $ inner product of (REF ) with ${\\vec{Z}}_{-}$ , multiplying by $e^{-\\mu t}$ , and recalling that $\\begin{split}\\Omega (M\\vec{\\phi },{\\vec{Z}}_{-})=-\\Omega (\\vec{\\phi },M{\\vec{Z}}_{-})=\\mu \\Omega (\\vec{\\phi },{\\vec{Z}}_{-})-\\Omega (\\vec{\\phi },M{\\vec{Z}}_{-}+\\mu {\\vec{Z}}_{-}),\\end{split}$ we get $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}t}\\big (e^{-\\mu t}a_{+}\\big )=-\\frac{\\mathrm {d}}{\\mathrm {d}t}\\big (e^{-\\mu t}\\Omega (\\vec{\\phi },{\\vec{Z}}_{-})\\big )-e^{-\\mu 
t}F_{+},\\end{split}$ where $\\begin{split}-F_{+} &:= \\Omega ({\\vec{F}}_1,{\\vec{Z}}_{-})-\\Omega (\\vec{\\phi },M{\\vec{Z}}_{-}+\\mu {\\vec{Z}}_{-})+a_{+}\\Omega (M{\\vec{Z}}_{+}-\\mu {\\vec{Z}}_{+},{\\vec{Z}}_{+})\\\\&\\quad +a_{-}\\Omega (M{\\vec{Z}}_{-}+\\mu {\\vec{Z}}_{-},{\\vec{Z}}_{-}).\\end{split}$ Similarly, taking the $\\Omega $ inner product of (REF ) with ${\\vec{Z}}_{+}$ and multiplying by $e^{\\mu t}$ , we get $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}t}\\big (e^{\\mu t}a_{-}\\big )=\\frac{\\mathrm {d}}{\\mathrm {d}t}\\big (e^{\\mu t}\\Omega (\\vec{\\phi },{\\vec{Z}}_{+})\\big )+e^{\\mu t}F_{-},\\end{split}$ where $\\begin{split}F_{-}=\\Omega ({\\vec{F}}_1,{\\vec{Z}}_{+})-\\Omega (\\vec{\\phi },M{\\vec{Z}}_{+}-\\mu {\\vec{Z}}_{+})+a_{+}\\Omega (M{\\vec{Z}}_{+}-\\mu {\\vec{Z}}_{+},{\\vec{Z}}_{+})+a_{-}\\Omega (M{\\vec{Z}}_{-}+\\mu {\\vec{Z}}_{-},{\\vec{Z}}_{+}).\\end{split}$ Motivated by (REF ) and (REF ), we require the orthogonality conditions $ \\begin{split}\\Omega (\\vec{\\phi },{\\vec{Z}}_{-}) = e^{\\mu t}{\\tilde{S}}(e^{-\\mu t}F_{+}), \\qquad \\Omega (\\vec{\\phi },{\\vec{Z}}_{+}) = e^{-\\mu t}{\\tilde{S}}(e^{\\mu t}F_{-}).\\end{split}$ In view of (REF ) and (REF ) and recalling that $\\partial _t {\\tilde{S}}= S-Id$ , the orthogonality conditions (REF ) lead to the following equations for $a_{\\pm }$ $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}t}\\big (e^{-\\mu t}a_{+}\\big )=-S(e^{-\\mu t}F_{+}),\\qquad \\frac{\\mathrm {d}}{\\mathrm {d}t}\\big (e^{\\mu t}a_{-}\\big )=S(e^{\\mu t}F_{-}).\\end{split}$ Finally note that derivatives commute nicely with (REF ) in the sense that (using the product rule and the fact that $[\\frac{\\mathrm {d}}{\\mathrm {d}t},{\\tilde{S}}]=0$ ) $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}t}\\Omega (\\vec{\\phi },{\\vec{Z}}_{-})=e^{\\mu t}{\\tilde{S}}(e^{-\\mu t}F_{+}^{\\prime }),\\qquad \\frac{\\mathrm {d}}{\\mathrm {d}t}\\Omega (\\vec{\\phi },{\\vec{Z}}_{+})=e^{-\\mu t}{\\tilde{S}}(e^{\\mu t}F_{-}^{\\prime }),\\end{split}$ and similarly for higher derivatives.", "To conclude, we briefly explain how to obtain the decomposition (REF ) satisfying the orthogonality conditions (REF ).", "Given $\\vec{\\psi }$ and $a_\\pm (t)$ on some time interval $J = [0,\\tau _0]$ , we first trivially extend these to times $-1 \\le t \\le 0$ .", "Then we consider $\\Upsilon _\\mu (\\vec{\\psi }, a_+, a_-) := \\begin{pmatrix} \\Omega \\bigl ( \\vec{\\psi } - a_+ \\vec{Z}_+ - a_- \\vec{Z}_-, \\vec{Z}_- \\bigr ) - e^{\\mu t}{\\tilde{S}}(e^{-\\mu t}F_{+}) \\\\ \\Omega \\bigl ( \\vec{\\psi } - a_+ \\vec{Z}_+ - a_- \\vec{Z}_-, \\vec{Z}_+ \\bigr ) - e^{-\\mu t}{\\tilde{S}}(e^{\\mu t}F_{-}) \\end{pmatrix}.$ Note that the orthogonality conditions (REF ) are equivalent to $\\Upsilon _\\mu (\\vec{\\psi }, a_+, a_-) = (0,0)$ .", "We have $\\Upsilon _\\mu (0,0,0) = 0$ and we compute the Fréchet derivative $D_{a_+, a_-}\\Upsilon _\\mu (0,0,0) = \\begin{pmatrix} -1 & 0 \\\\ 0 & 1 \\end{pmatrix} + o_{{R_1},\\ell }(1).$ Hence, for given $\\vec{\\psi }$ satisfying suitable bootstrap assumptions, the existence of the decomposition (REF ) obeying the orthogonality conditions (REF ) follows (for sufficiently large ${R_1}$ ) from the implicit function theorem." 
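, "For orientation, and ignoring for the moment that $F_{+}$ itself depends on $\\vec{\\phi }$ and $a_{\\pm }$ , the first of the two equations above for $a_{\\pm }$ integrates to $\\begin{split}e^{-\\mu t}a_{+}(t)=a_{+}(0)-\\int _0^t \\bigl (S(e^{-\\mu \\cdot }F_{+})\\bigr )(s)\\, \\mathrm {d}s,\\end{split}$ so, when the integral converges, $a_{+}$ can remain bounded only if $a_{+}(0)=\\int _0^\\infty \\bigl (S(e^{-\\mu \\cdot }F_{+})\\bigr )(s)\\, \\mathrm {d}s$ ; this heuristic codimension one restriction on the data is what the trapping condition and the shooting argument in the proof of the main theorem below make precise."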
], [ "Coordinates, vectorfields, and a more precise description of the profile", "Given $\\ell $ ,$\\xi $ , and $a_{\\pm }$ , we give a more detailed description of the foliation and the profile.", "Moreover, we obtain various expressions for the linear operator acting on $\\phi $ .", "Our starting point is to derive a parameterization of the profile." ], [ "Parameterization of the Profile", "We give separate parameterizations of the profile $\\cup _\\sigma \\Sigma _\\sigma $ in the interior and exterior regions.", "In the interior, our parameterization is the same as in the first order formulation in Section .", "That is, we parameterize the profile as $\\begin{split}(t,\\xi +\\gamma ^{-1}P_\\ell F(\\rho ,\\omega )+P_\\ell ^\\perp F(\\rho ,\\omega )),\\end{split}$ where $\\ell $ , $\\xi $ , and $\\gamma $ are functions of $t$ .", "According to the definition of the profile in Section REF , and with the notation used there, this parameterization is valid in the flat region $\\mathcal {C}_{{\\mathrm {flat}}}=\\lbrace X^0\\ge \\sigma _{\\mathrm {temp}}(X)+\\delta _1\\rbrace $ , where as usual $X=(X^0,\\dots X^{n+1})$ are the rectangular coordinates in the ambient $\\mathbb {R}^{1+(n+1)}$ .", "In the exterior we eventually want to parameterize the VMC surface as a graph over a hyperplane, so we start by parameterizing the profile itself as a graph.", "For this, let the function $x^0(\\sigma ,x^{\\prime })$ be defined by the requirement that $(x^0(\\sigma ,x^{\\prime }),x^{\\prime })\\in {\\underline{{}}}_\\sigma $ .", "Note that in the hyperboloidal region $\\mathcal {C}_{{\\mathrm {hyp}}}=\\lbrace X^0\\le \\sigma _{\\mathrm {temp}}(X)-\\delta _1\\rbrace $ $\\begin{split}x^0(\\sigma ,x^{\\prime })=\\sigma -\\gamma R+\\sqrt{|x^{\\prime }-\\xi +\\gamma R|^2+1},\\end{split}$ while in the flat region $\\lbrace X^0\\ge \\sigma _{\\mathrm {temp}}(\\sigma )+\\delta _1\\rbrace $ $\\begin{split}x^0(\\sigma ,x^{\\prime })=\\sigma .\\end{split}$ The expression for $x^0$ in the intermediate region $\\lbrace |X^0-\\sigma _{{\\mathrm {temp}}}(X)|<\\delta _1\\rbrace $ is not explicit and depends on the choice of the smoothed out minimum function $\\mathfrak {m}$ in Section REF .", "With $x^0$ determined, we define the function $\\mathcal {Q}(x^0,x^{\\prime })$ by the requirement that $(x^0(\\sigma ,x^{\\prime }),x^{\\prime },\\mathcal {Q}(x^{\\prime }(\\sigma ,x^{\\prime }),x^{\\prime }))\\in \\Sigma _\\sigma $ .", "The map $\\begin{split}(\\sigma ,x^{\\prime })\\mapsto (x^0(\\sigma ,x^{\\prime }),x^{\\prime },\\mathcal {Q}(x^{\\prime }(\\sigma ,x^{\\prime }),x^{\\prime }))\\end{split}$ is then a parameterization of the profile in the exterior region $\\lbrace |x^{\\prime }|\\gg 1\\rbrace $ (more precisely, this parameterization is valid in a neighborhood of the support of $1-\\chi $ in the definition (REF ) of $N$ ).", "We now want to derive more explicit expressions for $\\mathcal {Q}$ in the flat and hyperboloidal parts of ${}_\\sigma $ .", "First, for reasons that will become clear momentarily, we let $\\begin{split}\\eta = \\xi -\\gamma R\\ell ,\\end{split}$ and define the non-geometric polar coordinates $(\\tau ,r,\\theta )$ by $\\begin{split}\\tau = \\sigma -\\gamma (\\sigma )R\\qquad {\\ \\ \\text{and} \\ \\ }\\qquad x^{\\prime }=\\eta (\\tau )+r\\Theta (\\theta ).\\end{split}$ Here $\\Theta $ denotes the standard parameterization of $\\mathbb {S}^{n-1}\\subseteq \\mathbb {R}^n$ , and here and in what follows, by a slight abuse of notation, we simply write $\\eta (\\tau )$ for $\\eta (\\sigma (\\tau 
))$ (and similarly for other parameters $\\ell $ , $\\xi $ , etc.).", "Differentiation with respect to $\\tau $ is denoted by a prime, and differentiation with respect to $\\sigma $ by a dot, so for instance $\\begin{split}\\eta ^{\\prime } = \\frac{\\mathrm {d}}{\\mathrm {d}\\tau }\\eta (\\tau )= \\frac{\\mathrm {d}\\sigma }{\\mathrm {d}\\tau }\\frac{\\mathrm {d}}{\\mathrm {d}\\sigma }\\eta (\\sigma )\\vert _{\\sigma =\\sigma (\\tau )},\\qquad {\\dot{\\eta }}= \\frac{\\mathrm {d}}{\\mathrm {d}\\sigma }\\eta (\\sigma ).\\end{split}$ Note that $\\begin{split}\\frac{\\mathrm {d}\\sigma }{\\mathrm {d}\\tau }=1-\\gamma ^{\\prime } R\\simeq 1.\\end{split}$ In the hyperboloidal region $\\mathcal {C}_{\\mathrm {hyp}}$ we have ${\\left\\lbrace \\begin{array}{ll}x^0=\\tau + \\langle r\\rangle \\\\x^{\\prime }=\\eta (\\tau )+r\\Theta (\\theta )\\end{array}\\right.},$ while in the flat region $\\mathcal {C}_{\\mathrm {flat}}$ ${\\left\\lbrace \\begin{array}{ll}x^0=\\tau + \\gamma R\\\\x^{\\prime }=\\eta (\\tau )+r\\Theta (\\theta )\\end{array}\\right.}.$ In general, our parameterization of the profile in polar coordinates becomes $\\begin{split}(x^0(\\tau ,r),\\eta (\\tau )+r\\Theta (\\theta ),\\mathcal {Q}(x^0(\\tau ,r),\\eta (\\tau )+r\\Theta (\\theta ))).\\end{split}$ Now to motivate our definition of the polar coordinates, we investigate the form of $\\mathcal {Q}$ in the hyperboloidal region more closely.", "Note that if $(x^0,x^{\\prime })\\in \\Sigma _\\sigma $ is given by (recall the definition of ${\\tilde{}}_\\sigma $ from Section REF ) $\\begin{split}\\begin{pmatrix} x^0\\\\ x^{\\prime } \\end{pmatrix}=\\Lambda _{-\\ell }\\begin{pmatrix} y^0\\\\y^{\\prime } \\end{pmatrix}+\\begin{pmatrix} 0\\\\\\xi -\\sigma \\ell \\end{pmatrix},\\end{split}$ then using (REF ), $\\begin{split}&y^0=\\gamma ^{-1}(\\sigma -\\gamma R) +\\gamma (\\sqrt{1+|x^{\\prime }-(\\xi -\\sigma R\\ell )|^2}-(x^{\\prime }-(\\xi -\\sigma R\\ell ))\\cdot \\ell ),\\\\&y^{\\prime }= A_\\ell (x^{\\prime }-(\\xi -\\sigma R\\ell ))-\\gamma \\sqrt{1+|x^{\\prime }-(\\xi -\\sigma R\\ell )|^2} \\ell ,\\end{split}$ and the profile parameterization becomes $\\begin{split}(x^0,x^{\\prime },Q(A_\\ell (x^{\\prime }-\\eta (\\sigma ))-\\gamma \\ell \\sqrt{1+|x^{\\prime }-\\eta (\\sigma )|^2})).\\end{split}$ Here $Q(y)=Q(|y|)$ is given by the parameterization of the Riemannian catenoid and satisfies the ODE (after identifying it with a function of a single variable) $\\begin{split}Q^{\\prime \\prime }({\\tilde{r}})+\\frac{n-1}{{\\tilde{r}}}Q^{\\prime }({\\tilde{r}})-(1+(Q^{\\prime }({\\tilde{r}}))^2)^{-1}(Q^{\\prime }({\\tilde{r}}))^2Q^{\\prime \\prime }({\\tilde{r}})=0.\\end{split}$ Therefore, in our polar coordinates these expressions take the simple forms $\\begin{split}&y^0=\\gamma ^{-1}\\tau +\\gamma \\langle r\\rangle (1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\ell ),\\\\&y^{\\prime }=A_\\ell (r\\Theta -\\langle r\\rangle \\ell )=rA_\\ell \\Theta -\\gamma \\langle r\\rangle \\ell ,\\end{split}$ and $\\begin{split}(\\tau +\\langle r\\rangle ,\\eta (\\tau )+r\\Theta ,Q(rA_\\ell \\Theta -\\gamma \\langle r\\rangle \\ell )).\\end{split}$ For future use, we also record the following coordinate change formulas in the hyperboloidal region: $\\begin{split}&\\partial _\\tau = \\partial _{x^0}+\\eta ^{\\prime },\\\\&\\partial _r = \\frac{r}{\\langle r\\rangle }\\partial _{x^0}+\\Theta ,\\\\&\\partial _a= r\\Theta _a, ~a=1,\\dots ,n-1,\\end{split}$ and $\\begin{split}&\\partial _{x^0}=\\frac{1}{1-\\frac{r}{\\langle r\\rangle }\\Theta 
\\cdot \\eta ^{\\prime }}\\partial _\\tau -\\frac{\\eta ^{\\prime }\\cdot \\Theta }{1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime }}\\partial _r-\\frac{(\\mathring{{g}}^{-1})^{ab}\\Theta _b\\cdot \\eta ^{\\prime }}{r(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })}\\partial _a,\\\\&\\partial _{x^j}=-\\frac{r\\Theta ^j}{\\langle r\\rangle (1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })}\\partial _\\tau +\\frac{\\Theta ^j}{1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime }}\\partial _r+\\Big (\\frac{(\\mathring{{g}}^{-1})^{ab}\\Theta _b^j}{r}+\\frac{(\\mathring{{g}}^{-1})^{ab}\\Theta _b\\cdot \\eta ^{\\prime } \\Theta ^j}{\\langle r\\rangle (1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })}\\Big )\\partial _a.\\end{split}$ Next, we want to define the rotation and outgoing and incoming null vectorfields $\\Omega $ , $L$ , ${\\underline{L}}$ and the geometric radial function ${\\tilde{r}}$ , corresponding to the Minkowski metric, at a point on ${\\underline{{}}}_\\sigma $ .", "These will play an important role in the analysis in the exterior region, and will appear in our bootstrap assumptions in Section  below.", "Let $(x^0,x^{\\prime })=(\\tau +\\langle r\\rangle ,r\\Theta +\\eta )$ be a point on the hyperboloidal part of ${\\underline{{}}}_\\sigma $ .", "With $\\tau =\\tau (\\sigma )$ , let $(y^0,y^{\\prime })$ be related to $(x^0,x^{\\prime })$ by (REF ).", "Note that by construction $\\begin{split}\\begin{pmatrix} x^0\\\\x^{\\prime } \\end{pmatrix}= \\Big (\\Lambda _{-\\ell }\\begin{pmatrix} y^0\\\\y^{\\prime } \\end{pmatrix} +\\begin{pmatrix} 0\\\\-\\sigma \\ell (\\sigma )+\\xi (\\sigma ) \\end{pmatrix}\\Big )\\Big {\\vert }_{\\sigma =\\sigma (\\tau )}.\\end{split}$ We define our $L$ , ${\\underline{L}}$ , $\\Omega $ , and $T$ as the push forward by $\\Lambda _{-\\ell }$ of the corresponding vectorfields in the $y$ coordinates.", "That is, $\\begin{split}& T=\\Lambda _{-\\ell }\\partial _{y^0},\\\\&L= \\Lambda _{-\\ell }(\\partial _{y^0}+\\frac{y^i}{|y^{\\prime }|}\\partial _{y^i}),\\\\&{\\underline{L}}= \\Lambda _{-\\ell }(\\partial _{y^0}-\\frac{y^i}{|y^{\\prime }|}\\partial _{y^i}),\\\\&\\Omega _{jk}=\\Lambda _{-\\ell }(y^j\\partial _{y^k}-y^k\\partial _{y^j}).\\end{split}$ Using (REF ) we can find the following coordinate representations for these vectorfields: $\\begin{split}&T=\\mathcal {O}(1)\\partial _\\tau +\\mathcal {O}({\\dot{\\wp }})\\partial _r+\\mathcal {O}({\\dot{\\wp }}r^{-1})\\partial _a,\\\\&L=\\mathcal {O}(r^{-2})\\partial _\\tau +\\mathcal {O}(1)\\partial _r+\\mathcal {O}(r^{-3})\\partial _a,\\\\&{\\underline{L}}= \\mathcal {O}(1)\\partial _\\tau +\\mathcal {O}(1)\\partial _r+\\mathcal {O}({\\dot{\\wp }}r^{-1}+r^{-3})\\partial _a,\\\\&\\Omega _{jk}=\\mathcal {O}(\\wp r)\\partial _r+\\mathcal {O}(1) \\partial _a.\\end{split}$ Using the notation ${{\\partial }}^a=(\\mathring{{g}}^{-1})^{ab}\\partial _b$ , for $T$ and $L$ also record the following more precise expressions $\\begin{split}T&= \\gamma \\Big (1+\\frac{r\\Theta \\cdot (\\eta ^{\\prime }-\\ell )}{\\langle r\\rangle (1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })}\\Big )\\partial _\\tau -\\frac{\\gamma (\\eta ^{\\prime }-\\ell )\\cdot \\Theta }{1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime }}\\partial _r\\\\&\\quad -\\frac{\\gamma }{r(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })}\\big ((\\Theta \\cdot \\ell ){{\\partial }}^a\\Theta \\cdot (\\eta ^{\\prime }-\\ell 
)-({{\\partial }}^a\\Theta \\cdot \\ell )\\Theta \\cdot (\\eta ^{\\prime }-\\ell )-{{\\partial }}^a\\Theta \\cdot (\\eta ^{\\prime }-\\ell )\\big )\\partial _a,\\\\\\end{split}$ and $\\begin{split}L&=(\\frac{1}{2}\\gamma ^{-1}(1-\\Theta \\cdot \\ell )^{-1}(1-\\Theta \\cdot \\eta ^{\\prime })^{-1}r^{-2}+\\mathcal {O}(r^{-4}))\\partial _\\tau \\\\&\\quad +(\\gamma ^{-1}(1-\\Theta \\cdot \\ell )^{-1}+\\mathcal {O}(r^{-2}))\\partial _r+\\mathcal {O}(r^{-3})\\partial _a.\\end{split}$ By inverting these relations we can also express the the coordinate derivatives in the $(\\tau ,r,\\theta )$ coordinates in terms of $L$ , ${\\underline{L}}$ , and $\\Omega $ : $\\begin{split}&\\partial _\\tau =\\mathcal {O}(1)L+\\mathcal {O}(1){\\underline{L}},\\\\&\\partial _r= \\mathcal {O}(1)L+\\mathcal {O}(r^{-2}){\\underline{L}}+\\mathcal {O}( r^{-3})\\Omega ,\\\\&\\partial _\\theta =\\mathcal {O}(\\wp r)L+\\mathcal {O}(1)\\Omega +\\mathcal {O}(\\wp r^{-1}){\\underline{L}}.\\end{split}$ Similarly, the geometric radial function is defined in terms of the $y$ variables as ${\\tilde{r}}=|y^{\\prime }|$ which in radial coordinates reads $\\begin{split}{\\tilde{r}}=|rA_\\ell \\Theta -\\gamma \\langle r\\rangle \\ell |=\\gamma (1-\\Theta \\cdot \\ell )r+\\mathcal {O}(r^{-1}).\\end{split}$ In particular ${\\tilde{r}}\\simeq r$ ." ], [ "Parameterization of the VMC Surface and Derivation of the Equations", "Recall from Section REF , that to derive a parameterization of the VMC surface we first introduced an almost-normal vectorfield $\\begin{split}N:\\cup _\\sigma \\Sigma _\\sigma \\rightarrow \\mathbb {R}^{1+(n+1)},\\end{split}$ and then defined $\\begin{split}\\psi :\\cup _{\\sigma }\\Sigma _\\sigma \\rightarrow \\mathbb {R}\\end{split}$ by the requirement that $p+\\psi (p)N(p)\\in \\mathcal {M}$ for all $p\\in \\cup _{\\sigma }\\Sigma _\\sigma $ (see also Lemma REF ).", "Then in Section  we further decomposed $\\psi $ as $\\begin{split}\\psi =\\phi +a_\\mu Z_\\mu ,\\qquad a_\\mu :=a_{+}+a_{-}.\\end{split}$ In view of the compact support of $Z_\\mu $ , the functions $\\phi $ and $\\psi $ agree outside a compact region of $\\mathcal {C}_{{\\mathrm {flat}}}$ .", "In the hyperboloidal region $\\mathcal {C}_{{\\mathrm {hyp}}}$ we will sometimes work with the following renormalized version of $\\phi $ : $\\begin{split}\\varphi :=\\langle \\phi N, \\partial _{X^{n+1}}\\rangle .\\end{split}$ The advantage of $\\varphi $ is that it satisfies a simpler equation, while the advantage of $\\phi $ is that the linear part of the equation satisfied by it has the form which is familiar from the second variation of the area functional.", "The exact relation between $\\phi $ and $\\varphi $ can be calculated as follows.", "In the region $\\mathcal {C}_{{\\mathrm {hyp}}}$ the normal $n_\\wp $ is given by (here the indices run over any set of coordinates, such as $(\\tau ,r,\\theta )$ or $(x^0,x^{\\prime })$ , and $Q_\\nu $ denotes $\\partial _\\nu Q$ with $\\ell $ treated as fixed as in Section ) $\\begin{split}n_\\wp = (1+(m^{-1})^{\\mu \\nu } Q_\\mu Q_\\nu )^{-\\frac{1}{2}}((-m^{-1})^{\\mu \\nu }Q_\\mu \\partial _\\nu ,1).\\end{split}$ It follows from the normalization $\\eta (N,n_\\wp )=1$ that in this region $\\begin{split}N= (1+(m^{-1})^{\\mu \\nu } Q_\\mu Q_\\nu )^{\\frac{1}{2}}\\frac{\\partial }{\\partial X^{n+1}},\\end{split}$ and hence $\\begin{split}\\varphi = s\\phi ,\\qquad s:=(1+(m^{-1})^{\\mu \\nu } Q_\\mu Q_\\nu )^{\\frac{1}{2}}.\\end{split}$ Since $(m^{-1})^{\\mu \\nu } Q_\\mu Q_\\nu =O(r^{-2(n- 1)})$ , the relation 
(REF ) implies that the various energy norms for $\\phi $ and $\\varphi $ are equivalent.", "In the remainder of this section we give more explicit expressions for $N$ using the parameterizations introduced in the previous section, and derive the equations satisfied by $\\psi $ , $\\phi $ , and $\\varphi $ in the respective coordinates.", "Finally, we introduce a set of global coordinates and discuss the structure of the linearized operator in these various coordinate systems." ], [ "Interior Non-Geometric Coordinates", "Here $N$ is defined to lie on the $\\lbrace X^0=\\mathrm {constant}\\rbrace $ hypersurfaces of the ambient space, and the equations satisfied by $\\psi $ and $\\phi $ can be read off from the first order formulation.", "In particular, according to (REF ), in the coordinate system $(t,\\rho ,\\omega )$ introduced in Section  and in the region where $\\chi \\equiv 1$ in (REF ), the linear part of the equation satisfied by $\\phi $ can be written as $\\begin{split}\\mathcal {P}\\phi =\\frac{1}{\\sqrt{|h|}}\\partial _\\mu (\\sqrt{|h|}(h^{-1})^{\\mu \\nu }\\partial _\\nu \\phi )+V\\phi +a^{\\mu \\nu }\\partial ^2_{\\mu \\nu }\\phi +b^\\mu \\partial _\\mu \\phi +c\\phi ,\\end{split}$ where $a$ is symmetric and $a,b,c=\\mathcal {O}({\\dot{\\wp }}^{\\le 2})$ , and $V=| {\\mathrm {I\\!I}} |^2$ .", "Besides the coordinates introduced above, in the flat part of the foliation we will often use the coordinates $({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})$ defined by (note that this is a valid change of variables in the flat region where $\\rho $ is bounded) $\\begin{split}{\\tilde{t}}=\\gamma ^{-1}(t)t-\\ell (t)\\cdot F(\\rho ,\\omega ),\\quad {\\tilde{\\rho }}=\\rho ,\\quad {\\tilde{\\omega }}=\\omega .\\end{split}$ The corresponding coordinate derivatives are related as follows: $\\begin{split}&\\partial _t=(1+)\\partial _{\\tilde{t}},\\quad \\partial _\\rho =\\ell \\cdot F_\\rho \\partial _{\\tilde{t}}+\\partial _{\\tilde{\\rho }},\\quad \\partial _\\omega = \\ell \\cdot F_\\omega \\partial _{\\tilde{t}}+\\partial _{\\tilde{\\omega }},\\\\&\\partial _{\\tilde{t}}=(1+)^{-1}\\partial _t,\\quad \\partial _{\\tilde{\\rho }}=-(1+)^{-1}\\ell \\cdot F_\\rho \\partial _t+\\partial _\\rho ,\\quad \\partial _{\\tilde{\\omega }}=-(1+)^{-1}\\ell \\cdot F_\\omega \\partial _t+\\partial _\\omega ,\\end{split}$ where $\\begin{split}:=t\\frac{\\mathrm {d}}{\\mathrm {d}t}\\gamma ^{-1}-{\\dot{\\ell }}\\cdot F.\\end{split}$ Note that by these relations $\\det \\frac{\\partial ({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})}{\\partial (t,\\rho ,\\omega )}=1+$ .", "In the calculations in the flat region we will often use both the $(t,\\rho ,\\omega )$ and the $({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})$ coordinates.", "To emphasize which coordinate system is being used in each calculation, we will use a tilde to indicate that calculations are being done in the $({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})$ coordinates.", "For instance, we write $h_{\\mu \\nu }$ for the components of $h$ in the $(t,\\rho ,\\omega )$ coordinates and ${\\tilde{h}}_{\\mu \\nu }$ for its components in the $({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})$ coordinates, and similarly for $|h|$ and $|{\\tilde{h}}|$ ."
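, "Although it is not needed for the arguments below, the catenoid profile ODE recorded in the parameterization of the profile above is easy to explore numerically; in the following minimal sketch the normalization of the neck at ${\\tilde{r}}=1$ , the choice of the positive branch of $Q^{\\prime }$ , and the value $n=5$ are assumptions made only for illustration, under which the slope is $Q^{\\prime }({\\tilde{r}})=({\\tilde{r}}^{2(n-1)}-1)^{-1/2}$ and the profile flattens out towards spatial infinity.
```python
# Minimal numerical sketch of the catenoid profile ODE
#   Q'' + (n-1)/r * Q' - (Q')^2 Q'' / (1 + (Q')^2) = 0,
# rewritten as the first-order system Q' = p, p' = -(n-1)/r * p * (1 + p^2).
# The neck normalization r = 1 and the positive branch of Q' are for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

n = 5                                   # spatial dimension fixed later in the paper
r0, r1 = 1.01, 50.0
p_exact = lambda r: (r ** (2 * (n - 1)) - 1.0) ** (-0.5)

def rhs(r, y):
    Q, p = y
    return [p, -(n - 1) / r * p * (1.0 + p ** 2)]

sol = solve_ivp(rhs, (r0, r1), [0.0, p_exact(r0)], rtol=1e-10, atol=1e-12, dense_output=True)

rs = np.linspace(2.0, r1, 200)
print("max |Q'_num - Q'_exact| on [2,50]:", np.abs(sol.sol(rs)[1] - p_exact(rs)).max())
print("Q(50) - Q(2) =", sol.sol(r1)[0] - sol.sol(2.0)[0], "(the ends flatten out)")
```"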
], [ "Exterior Non-Geometric Coordinates", "Since $\\cup _{\\sigma }\\Sigma _\\sigma $ can be parameterized by $\\cup _{\\sigma }{\\underline{{}}}_\\sigma $ , by a slight abuse of notation we will often view $N$ , $\\nu $ , $\\phi $ , and $\\varphi $ as functions on $\\cup _{\\sigma }{\\underline{{}}}_\\sigma $ in the exterior region, and use coordinates, such as $(x^0,x^{\\prime })$ or $(\\tau ,r,\\theta )$ , as their arguments.", "In the graph formulation, the requirement that $\\mathcal {M}$ be a VMC surface is equivalent to the following PDEs for $u= Q_\\wp +\\varphi $ : $\\begin{split}\\nabla _\\mu \\Big (\\frac{\\nabla ^\\mu u}{\\sqrt{1+\\nabla ^\\alpha u \\nabla _\\alpha u}}\\Big )=\\frac{1}{\\sqrt{|m|}}\\partial _\\mu \\Big (\\frac{\\sqrt{|m|}(m^{-1})^{\\mu \\nu }\\partial _\\nu u}{\\sqrt{1+(m^{-1})^{\\alpha \\beta }\\partial _\\alpha u\\partial _\\beta u}}\\Big )=0.\\end{split}$ Here $m$ denotes the Minkowski metric $\\begin{split}m=-\\mathrm {d}x^0\\otimes \\mathrm {d}x^0+\\sum _{i=1}^n \\mathrm {d}x^i\\otimes \\mathrm {d}x^i,\\end{split}$ and $\\nabla $ the corresponding covariant derivative.", "The equation for $v$ can be expanded as $\\begin{split}\\Box _m u -(1+\\nabla ^\\alpha u \\nabla _\\alpha u )^{-1}(\\nabla ^\\nu u )(\\nabla ^\\mu u) \\nabla _\\nu \\nabla _\\mu u=0.\\end{split}$ Plugging in the decomposition for $u$ we arrive at the following equation for $\\varphi $ and $\\phi $ (see (REF )): $\\begin{split}s\\mathcal {P}\\phi =\\mathcal {P}_{\\mathrm {graph}}\\varphi =\\sum _{i=0}^3\\mathcal {F}_i,\\qquad \\mathcal {P}:=\\mathcal {P}_{\\mathrm {graph}}+s^{-1}[\\mathcal {P}_{\\mathrm {graph}},s].\\end{split}$ Here $\\mathcal {F}_i$ denotes inhomogeneous terms of order $i$ in $\\varphi $ .", "A more explicit expression for $\\mathcal {P}$ is derived in (REF ) below.", "One advantage of working with $\\varphi $ rather than $\\phi $ is that $\\mathcal {F}_i$ on the right-hand side are easier to compute.", "The source term $\\mathcal {F}_0$ ,which is independent of $\\varphi $ (but depends on the derivatives of the parameters), is calculated in Lemma REF below.", "The linear operator $\\mathcal {P}$ is given by (here $Q\\equiv Q_{\\wp }$ ) $\\begin{split}\\mathcal {P}_{\\mathrm {graph}}&=\\Box _m-(1+\\nabla ^\\alpha Q \\nabla _\\alpha Q)^{-1}(\\nabla ^\\mu Q)(\\nabla ^\\nu Q)\\nabla _{\\mu }\\nabla _\\nu -2(1+\\nabla ^\\alpha Q \\nabla _\\alpha Q)^{-1}(\\nabla ^{\\mu }\\nabla ^\\nu Q)(\\nabla _\\mu Q)\\nabla _\\nu \\\\&\\quad +2(1+\\nabla ^\\alpha Q\\nabla _\\alpha Q)^{-2}(\\nabla ^\\nu Q)(\\nabla ^\\mu Q)(\\nabla ^\\lambda Q)(\\nabla _\\lambda \\nabla _\\mu Q)\\nabla _\\nu .\\end{split}$ The quadratic and cubic terms are (here $Q\\equiv Q_{\\wp }$ ) $\\begin{split}\\mathcal {F}_2&=-\\frac{2\\nabla ^\\mu Q\\nabla ^\\nu \\varphi \\nabla ^2_{\\mu \\nu }\\varphi }{1+\\nabla ^\\alpha u \\nabla _\\alpha u}-\\frac{\\nabla ^2_{\\mu \\nu }Q\\nabla ^\\mu \\varphi \\nabla ^\\nu \\varphi }{1+\\nabla _\\alpha u \\nabla ^\\alpha u}+\\frac{\\nabla ^\\mu Q \\nabla ^\\nu Q \\nabla ^2_{\\mu \\nu }Q \\nabla ^\\beta \\varphi \\nabla _\\beta \\varphi }{(1+\\nabla ^\\alpha u \\nabla _\\alpha u)(1+\\nabla ^\\alpha Q \\nabla _\\alpha Q)}\\\\&\\quad +\\frac{2\\nabla ^\\beta Q\\nabla _\\beta \\varphi (\\nabla ^\\mu Q\\nabla ^\\nu Q \\nabla ^2_{\\mu \\nu } \\varphi +2\\nabla ^2_{\\mu \\nu }Q\\nabla ^\\mu Q \\nabla ^\\nu \\varphi )}{(1+\\nabla ^\\alpha u \\nabla _\\alpha u)(1+\\nabla ^\\alpha Q\\nabla _\\alpha Q)}-\\frac{4(\\nabla ^\\mu Q\\nabla ^\\nu Q\\nabla ^2_{\\mu \\nu }Q)(\\nabla ^\\beta 
Q\\nabla _\\beta \\varphi )^2}{(1+\\nabla ^\\alpha u\\nabla _\\alpha u)(1+\\nabla ^\\alpha Q\\nabla _\\alpha Q)^2},\\end{split}$ and $\\begin{split}\\mathcal {F}_3&=-\\frac{\\nabla ^\\mu \\varphi \\nabla ^\\nu \\varphi \\nabla ^2_{\\mu \\nu }\\varphi }{1+\\nabla ^\\alpha u \\nabla _\\alpha u}-\\frac{\\nabla ^\\beta \\varphi \\nabla _\\beta \\varphi (\\nabla ^\\mu Q\\nabla ^\\nu Q \\nabla ^2_{\\mu \\nu } \\varphi +2\\nabla ^2_{\\mu \\nu }Q\\nabla ^\\mu Q \\nabla ^\\nu \\varphi )}{(1+\\nabla ^\\alpha u \\nabla _\\alpha u)(1+\\nabla ^\\alpha Q\\nabla _\\alpha Q)}\\\\&\\quad +\\frac{2(\\nabla ^\\mu Q \\nabla ^\\nu Q\\nabla ^2_{\\mu \\nu }Q)(\\nabla ^\\beta Q \\nabla _\\beta \\varphi )(\\nabla ^\\gamma \\varphi \\nabla _\\gamma \\varphi )}{(1+\\nabla ^\\alpha u \\nabla _\\alpha u)(1+\\nabla ^\\alpha Q\\nabla _\\alpha Q)^2}.\\end{split}$ In view of (REF ), the linearized operator $\\mathcal {P}$ has the expansion $\\begin{split}\\mathcal {P}_{\\mathrm {graph}}\\psi &=\\Box _m\\psi +{\\mathrm {Err}}_{\\mathcal {P}}(\\psi ),\\end{split}$ where for some symmetric bounded tensor $p_2$ and a bounded tensor $p_1$ , $\\begin{split}{\\mathrm {Err}}_{\\mathcal {P}}\\psi &= \\langle r\\rangle ^{-4}p_2^{\\mu \\nu }\\partial ^2_{\\mu \\nu }\\psi +\\langle r\\rangle ^{-5}p_1^\\mu \\partial _\\mu \\psi .\\end{split}$ Due to the asymptotic flatness of the metric of the catenoid, the Minkowski wave operator $\\Box _m$ will play a major role in the exterior analysis.", "For this reason, it is convenient to derive expressions for $m$ , $m^{-1}$ , and $\\Box _m$ in the $(\\tau ,r,\\theta )$ coordinates.", "Starting with $m$ , we have $m&=-{\\tilde{\\gamma }}^{-2}\\mathrm {d}\\tau \\otimes \\mathrm {d}\\tau -(1-\\langle r\\rangle ^{-2}+\\Theta \\cdot \\eta ^{\\prime })(\\mathrm {d}\\tau \\otimes \\mathrm {d}r +\\mathrm {d}r\\otimes \\mathrm {d}\\tau )+r\\Theta _a\\cdot \\eta ^{\\prime }(\\mathrm {d}\\tau \\otimes \\mathrm {d}\\theta ^a+\\mathrm {d}\\theta ^a\\otimes \\mathrm {d}\\tau )\\nonumber \\\\&\\quad +\\langle r\\rangle ^{-2} \\mathrm {d}r\\otimes \\mathrm {d}r + r^2\\mathring{{g}}_{ab}\\mathrm {d}\\theta ^a\\otimes \\mathrm {d}\\theta ^b,$ where we have used the notation ${\\tilde{\\gamma }}=(1-|\\eta ^{\\prime }|^2)^{-\\frac{1}{2}}$ .", "In matrix form this is $\\begin{split}m= m_0+\\langle r\\rangle ^{-2}m_1=\\begin{pmatrix} -{\\tilde{\\gamma }}^{-2}&-1+\\Theta \\cdot \\eta ^{\\prime }&r\\Theta _\\theta \\cdot \\eta ^{\\prime }\\\\-1+\\Theta \\cdot \\eta ^{\\prime }&0&0\\\\r\\Theta _\\theta \\cdot \\eta ^{\\prime }&0&r^2\\mathring{{g}} \\end{pmatrix}+\\langle r\\rangle ^{-2}\\begin{pmatrix} 0&1&0\\\\1&1&0\\\\0&0&0 \\end{pmatrix}.\\end{split}$ It follows that (here $|m|=|\\det m|$ )Here we have used the fact that $|\\Theta _\\theta \\cdot \\eta ^{\\prime }|^2=1-(\\Theta \\cdot \\eta ^{\\prime })^2$ , where$|\\Theta _\\theta \\cdot \\eta ^{\\prime }|^2=\\sum _{a=1}^{n-1}(\\Theta _a\\cdot \\eta ^{\\prime }) \\det \\mathring{{g}}_a$ , and where $\\mathring{{g}}_a$ is $\\mathring{{g}}$ with the $a^\\mathrm {th}$ column replaced by $\\Theta _\\theta \\cdot \\eta ^{\\prime }=(\\Theta _1\\cdot \\eta ^{\\prime },\\dots ,\\Theta _{n-1}\\cdot \\eta ^{\\prime })^\\intercal $ .", "$\\begin{split}| m|^{\\frac{1}{2}} = (1-\\Theta \\cdot \\eta ^{\\prime })r^{n-1}|\\mathring{{g}}|^{\\frac{1}{2}}(1+\\langle r\\rangle ^{-2}\\frac{{\\tilde{\\gamma }}^{-2}+(1-(\\Theta \\cdot \\eta ^{\\prime })^2)}{(1-\\Theta \\cdot \\eta ^{\\prime })^2}).\\end{split}$ The inverse $m^{-1}$ can be calculated using (REF ) (in this formula 
$\\Theta ^a=(\\mathring{{g}}^{-1})^{ab}\\Theta _b$ ): $\\begin{split}m^{-1}&=\\begin{pmatrix} \\frac{-\\langle r\\rangle ^{-2}}{(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}&\\frac{-1+\\Theta \\cdot \\eta ^{\\prime }+\\langle r\\rangle ^{-2}}{(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}&\\frac{\\langle r\\rangle ^{-2}\\Theta ^\\theta \\cdot \\eta ^{\\prime }}{r(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}\\\\\\frac{-1+\\Theta \\cdot \\eta ^{\\prime }+\\langle r\\rangle ^{-2}}{(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}&\\frac{1-(\\Theta \\cdot \\eta ^{\\prime })^2}{(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}&\\frac{(1-\\Theta \\cdot \\eta ^{\\prime }-\\langle r\\rangle ^{-2})\\Theta ^\\theta \\cdot \\eta ^{\\prime }}{r(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}\\\\\\frac{\\langle r\\rangle ^{-2}\\Theta ^\\theta \\cdot \\eta ^{\\prime }}{r(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}&\\frac{(1-\\Theta \\cdot \\eta ^{\\prime }-\\langle r\\rangle ^{-2})\\Theta ^\\theta \\cdot \\eta ^{\\prime }}{r(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2}&\\frac{\\mathring{{g}}^{-1}}{r^2}-\\frac{\\langle r\\rangle ^{-2}\\Theta ^a\\cdot \\eta ^{\\prime }\\Theta ^b\\cdot \\eta ^{\\prime }}{r^2(1-\\frac{r}{\\langle r\\rangle }\\Theta \\cdot \\eta ^{\\prime })^2} \\end{pmatrix}.\\end{split}$ Expanding in powers of $r$ , we can write this as $\\begin{split}m^{-1}=m_0^{-1}+\\langle r\\rangle ^{-2}m_1,\\end{split}$ where $\\begin{split}m_0^{-1}=\\begin{pmatrix} 0&\\frac{-1}{1-\\Theta \\cdot \\eta ^{\\prime }}&0\\\\\\frac{-1}{1-\\Theta \\cdot \\eta ^{\\prime }}&\\frac{1+\\Theta \\cdot \\eta ^{\\prime }}{1-\\Theta \\cdot \\eta ^{\\prime }}&\\frac{\\Theta ^\\theta \\cdot \\eta ^{\\prime }}{r(1-\\Theta \\cdot \\eta ^{\\prime })}\\\\0&\\frac{\\Theta ^\\theta \\cdot \\eta ^{\\prime }}{r(1-\\Theta \\cdot \\eta ^{\\prime })}&r^{-2}\\mathring{{g}}^{-1} \\end{pmatrix},\\end{split}$ and $m_1$ is a matrix of size $\\mathcal {O}(1)$ .", "Next, to derive an expression for the wave operator $\\Box _m$ we write $\\begin{split}\\Box _m\\psi &=\\frac{1}{\\sqrt{|m|}}\\partial _\\mu (\\sqrt{|m|}\\partial _\\nu \\psi )\\\\&=\\frac{1}{\\sqrt{|m_0|}}\\partial _\\mu (\\sqrt{|m_0|}\\partial _\\nu \\psi )+\\langle r\\rangle ^{-2}m_1^{\\mu \\nu }\\partial _{\\mu \\nu }^2\\psi \\\\&\\quad +\\frac{1}{\\sqrt{|m|}}\\partial _\\mu (\\sqrt{|m_0|}((m^{-1})^{\\mu \\nu }-(m_0^{-1})^{\\mu \\nu })+(\\sqrt{|m|}-\\sqrt{|m_0|})(m^{-1})^{\\mu \\nu })\\partial _\\nu \\psi \\\\&\\quad +(|m|^{-\\frac{1}{2}}-|m_0|^{-\\frac{1}{2}})\\partial _\\mu (\\sqrt{|m_0|}(m_0^{-1})^{\\mu \\nu })\\partial _\\nu \\psi .\\end{split}$ Using the fact that $\\sqrt{|m_0|}(m_0^{-1})^{\\tau \\nu }$ is independent of $\\tau $ , we arrive at the expression $\\begin{split}\\Box _m\\psi &=2(m_0^{-1})^{\\tau r}\\partial ^2_{\\tau r}\\psi +\\frac{n-1}{r}(m_0^{-1})^{\\tau r}\\partial _\\tau \\psi +2(m_0^{-1})^{\\theta r}\\partial ^2_{\\theta r}\\psi \\\\&\\quad + (m_0^{-1})^{rr}\\partial ^2_r\\psi +\\frac{n-1}{r}(m_0^{-1})^{rr}\\partial _r\\psi +\\frac{1}{\\sqrt{|m_0|}}\\partial _\\theta (\\sqrt{|m_0|}(m_0^{-1})^{\\theta r})\\partial _r\\psi \\\\&\\quad +\\frac{1}{r^2}{{\\Delta }}_{\\mathbb {S}^{n-1}}\\psi +\\frac{n-2}{r}(m_0^{-1})^{\\theta r}\\partial _\\theta \\psi +\\frac{\\Theta _\\theta \\cdot {\\dot{\\eta }}}{r^2(1+\\Theta \\cdot {\\dot{\\eta }})}\\partial _\\theta \\psi +{\\mathrm 
{Err}}_{\\Box _m}(\\psi ),\\end{split}$ where $\\begin{split}{\\mathrm {Err}}_{\\Box _m}\\psi &= \\langle r\\rangle ^{-2}m_1^{\\mu \\nu }\\partial ^2_{\\mu \\nu }\\psi +(\\mathcal {O}(r^{-2}|{\\dot{\\wp }}|)+\\mathcal {O}(r^{-3}))\\partial _\\tau \\psi +(\\mathcal {O}(r^{-2}|{\\dot{\\wp }}|) +\\mathcal {O}(r^{-3}))\\partial _r\\psi \\\\&\\quad +(\\mathcal {O}(r^{-3}|{\\dot{\\wp }}|)+\\mathcal {O}(r^{-4}))\\partial _\\theta \\psi .\\end{split}$ Remark 4.1 When working in the exterior region, it is often easier to work with the metric $m$ rather than the induced metric on the leaves of the foliation, and treat the difference as an error.", "In such cases we will often also use the volume form coming from the coordinate expression for $m$ .", "Due to the asymptotic flatness of the induced metric this volume form is comparable in size with the geometrically induced volume form, and therefore the various inequalities we derive remain valid if we change to the geometric volume form.", "For future reference, we end this subsection by deriving a geometric expression for the linear operator in the equation satisfied by $\\phi $ (rather than $\\varphi $ ).", "Recall equation (REF ), where $u=Q+\\varphi $ .", "Writing (see (REF )) $\\begin{split}\\varphi = s\\phi ,\\qquad s=\\sqrt{1+(m^{-1})^{\\alpha \\beta } Q_\\alpha Q_\\beta },\\end{split}$ our goal is to derive the linear part of (REF ) in terms of $\\phi $ .", "The computation is similar to those in Section  and we will be brief: $\\begin{split}h_{\\alpha \\beta }= \\eta ((\\partial _\\alpha ,Q_\\alpha ), (\\partial _\\beta , Q_\\beta ))= m_{\\alpha \\beta }+Q_\\alpha Q_\\beta .\\end{split}$ Then by direct computation $\\begin{split}(h^{-1})^{\\alpha \\beta } = (m^{-1})^{\\alpha \\beta }-\\frac{ Q^\\alpha Q^\\beta }{1+|\\nabla Q|^2},\\qquad |h|= |m|(1+|\\nabla Q|^2),\\end{split}$ where we have used the notation $|\\nabla Q|^2:= (m^{-1})^{\\alpha \\beta } Q_\\alpha Q_\\beta $ and $ Q^\\alpha :=(m^{-1})^{\\alpha \\beta } Q_\\beta $ .", "It follows from the expression for $h^{-1}$ in (REF ) that the linear part of (REF ) is $\\begin{split}\\Box _h \\phi +s^{-1}(\\Box _h s-2s^{-1}(h^{-1})^{\\mu \\nu }\\partial _\\mu s\\partial _\\nu s)\\phi +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-6})\\partial ^{\\le 2}\\phi .\\end{split}$ Now we claim that in the case when $Q$ corresponds to a true parameterization of a boosted and translated catenoid (that is, when ${\\dot{\\ell }}=0$ and ${\\dot{\\xi }}=\\ell $ ), the coefficient of $\\phi $ above is precisely $V=| {\\mathrm {I\\!I}} |^2$ .", "To see this, recall from Lemma REF (more precisely from [39]) that in this case we would have $\\begin{split}\\Box _hn_\\wp =-| {\\mathrm {I\\!I}} |^2n_\\wp .\\end{split}$ Since, with $N=s(0,1)$ , we have $\\eta (n_{\\wp },N)=1$ , it follows from the expression (REF ) that in this case (that is, when ${\\dot{\\ell }}=0$ and ${\\dot{\\xi }}=\\ell $ ) $\\begin{split}-| {\\mathrm {I\\!I}} |^2=\\eta (\\Box _h n_\\wp ,N)= s\\Box _h s^{-1}=-s^{-1}\\Box _hs+2s^{-2}(h^{-1})^{\\mu \\nu }\\partial _\\mu s\\partial _\\nu s,\\end{split}$ which proves our claim.", "Returning to (REF ), since the only errors arise when derivatives fall on the parameters, we see that the linear part of (REF ) in terms of $\\phi $ is, with $V=| {\\mathrm {I\\!I}} |^2$ , $\\begin{split}\\Box _h\\phi + V\\phi +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-6})\\partial ^{\\le 2}\\phi .\\end{split}$"
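, "As a quick consistency check on the graph formulation used in this subsection, the following symbolic sketch (a 1+1 dimensional reduction, chosen only to keep the computation small) verifies that the divergence form of the VMC equation and its expanded form agree up to the positive factor $s=\\sqrt{1+\\nabla ^\\alpha u \\nabla _\\alpha u}$ , so the two forms have the same solutions.
```python
# 1+1 dimensional check: div( grad u / sqrt(1 + |du|^2) ) equals
# s^{-1} * ( Box u - (1+|du|^2)^{-1} d^mu u d^nu u d_mu d_nu u ),  s = sqrt(1 + |du|^2),
# with the Minkowski metric m = diag(-1, 1).
import sympy as sp

t, x = sp.symbols('t x', real=True)
u = sp.Function('u')(t, x)
ut, ux = sp.diff(u, t), sp.diff(u, x)

grad2 = -ut**2 + ux**2                      # m^{ab} d_a u d_b u
s = sp.sqrt(1 + grad2)

div_form = -sp.diff(ut / s, t) + sp.diff(ux / s, x)
expanded = (-sp.diff(u, t, 2) + sp.diff(u, x, 2)
            - (ut**2 * sp.diff(u, t, 2) - 2 * ut * ux * sp.diff(u, t, x)
               + ux**2 * sp.diff(u, x, 2)) / (1 + grad2))

print(sp.simplify(div_form - expanded / s))  # prints 0
```"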
], [ "Global Non-Geometric Coordinates", "Under the assumption that $\\wp $ is sufficiently small, we introduce a global set of coordinates $(,,)$ which glue the interior and exterior coordinates introduced earlier.", "The smallness of $\\wp $ will be guaranteed by the choice of initial data and the bootstrap assumptions (to be described in Section ).", "The procedure is as follows.", "Let $\\Psi _{\\mathrm {int}}$ and $\\Psi _{\\mathrm {ext}}$ , defined on overlapping open sets $V_{\\mathrm {int}}$ and $V_{\\mathrm {ext}}$ , denote the coordinate maps associated to the rectangular coordinates of $(t,\\rho ,\\omega )$ and $(\\sigma ,r,\\theta )$ respectively.", "More precisely, for a point $p=(t,\\xi (t)+\\gamma ^{-1}P_{\\ell (t)} F(\\rho ,\\omega )+P_{\\ell (t)}^\\perp F(\\rho ,\\omega ))$ in $V_{\\mathrm {int}}\\cap \\big (\\cup _\\sigma \\Sigma _\\sigma \\big )$ let $\\begin{split}\\Psi _{\\mathrm {int}}(p)= (y^0,\\dots ,y^n), \\qquad y^0=t, ~y^i= \\rho \\Theta ^i(\\omega ); i=1,\\dots ,n.\\end{split}$ The coordinate map $\\Psi _{\\mathrm {ext}}$ is defined similarly.", "In the overlapping region which is, by assumption, contained in $\\mathcal {C}_{\\mathrm {flat}}$ , for a point $p=(\\sigma ,\\eta (\\sigma )+r\\Theta (\\theta ))$ we set $\\Psi _{\\mathrm {ext}}(p)=(y^0,\\dots ,y^n),\\qquad y^0=\\sigma ,~y^i=r\\Theta ^i(\\theta ); i=1,\\dots ,n.$ Let $\\chi $ be a cutoff function which is supported in $V_{\\mathrm {int}}$ and is equal to one on $V_{\\mathrm {int}}\\backslash (V_{\\mathrm {int}}\\cap V_{\\mathrm {ext}})$ .", "We define the global rectangular coordinates by $\\begin{split}y=\\Psi (p):= \\chi (p)\\Psi _{\\mathrm {int}}(p)+(1-\\chi (p))\\Psi _{\\mathrm {ext}}(p),\\end{split}$ and the global polar coordinates by expressing $y$ in polar coordinates $(,,)$ .", "To see that $\\Psi $ defines a coordinate map we need to check that $d\\Psi (p)$ is invertible for all $p\\in V_{\\mathrm {int}}\\cap V_{\\mathrm {ext}}$ and that $\\Psi $ is one to one.", "For the derivatives, it suffices to show that $d(\\Psi \\circ \\Psi ^{-1}_{\\mathrm {int}})$ is invertible.", "By the definition of $\\Psi $ , $\\begin{split}\\Psi \\circ \\Psi _{\\mathrm {int}}^{-1}(x)=\\chi \\circ \\Psi ^{-1}_{\\mathrm {int}}(x)x+(1-\\chi \\circ \\Psi ^{-1}_{\\mathrm {int}}(x))\\Psi _{\\mathrm {ext}}\\circ \\Psi _{\\mathrm {int}}^{-1}(x),\\end{split}$ so (here $I$ denotes the identity matrix) $\\begin{split}d(\\Psi \\circ \\Psi _{\\mathrm {int}}^{-1})(x)&=d(\\Psi _{\\mathrm {ext}}\\circ \\Psi _{\\mathrm {int}}^{-1})(x)\\\\&\\quad +\\chi \\circ \\Psi _{\\mathrm {int}}^{-1}(x)(I-d(\\Psi _{\\mathrm {ext}}\\circ \\Psi _{\\mathrm {int}}^{-1})(x))+\\nabla (\\chi \\circ \\Psi _{\\mathrm {int}}^{-1})(x) (x-\\Psi _{\\mathrm {ext}}\\circ \\Psi _{\\mathrm {int}}^{-1}(x)).\\end{split}$ Since $x-\\Psi _{\\mathrm {ext}}\\circ \\Psi _{\\mathrm {int}}^{-1}(x)$ is small for $x\\in \\Psi _{\\mathrm {int}}(V_{\\mathrm {int}}\\cap V_{\\mathrm {ext}})$ , it suffices to show that $I-d(\\Psi _{\\mathrm {ext}}\\circ \\Psi _{\\mathrm {int}}^{-1})(x)$ is also small for such $x$ .", "To compute the derivative we write $(\\sigma ,z)$ for $\\Psi _{\\mathrm {ext}}\\circ \\Psi _{\\mathrm {int}}^{-1}(x)$ and $(t,y)$ for $x$ .", "Then according to the above formulas for $\\Psi _{\\mathrm {int}}$ and $\\Psi _{\\mathrm {ext}}$ , $(t,y)$ and $(\\sigma ,z)$ are related by $\\begin{split}t= \\sigma ,\\quad \\xi (t)+\\gamma ^{-1}(t) P_{\\ell (t)}(\\langle y\\rangle |y|^{-1}y)+P_{\\ell (t)}^\\perp (\\langle y\\rangle |y|^{-1}y)= \\eta (\\sigma )+z.\\end{split}$ The desired invertibility then follows by implicitly differentiating these relations to get $\\begin{split}&\\frac{\\partial \\sigma 
}{\\partial t}=1,\\quad \\frac{\\partial \\sigma }{\\partial y^j}=0,\\\\&\\frac{\\partial z}{\\partial t}+\\eta ^{\\prime }(\\sigma )\\frac{\\partial \\sigma }{\\partial t}=-\\gamma ^{\\prime }(t)\\gamma ^{-2}(t)P_\\ell (\\langle y\\rangle |y|^{-1}y)+(\\gamma ^{-1}-1)\\frac{\\langle y\\rangle }{|y|}\\big (\\frac{y\\cdot \\ell }{|\\ell |^2}{\\dot{\\ell }}+\\frac{y\\cdot {\\dot{\\ell }}}{|\\ell |^2}\\ell -2\\frac{({\\dot{\\ell }}\\cdot \\ell )(y\\cdot \\ell )}{|\\ell |^4}\\ell \\big )+\\xi ^{\\prime }(t),\\\\&\\frac{\\partial z}{\\partial y^j}=-\\gamma ^{-1}P_\\ell (|y|^{-3}\\langle y\\rangle ^{-1}y^jy)-P_\\ell ^\\perp (|y|^{-3}\\langle y\\rangle ^{-1}y^jy)+\\gamma ^{-1}P_\\ell (\\langle y\\rangle |y|^{-1}e_j)+P_\\ell (\\langle y\\rangle |y|^{-1}e_j),\\end{split}$ and using the smallness of $\\ell $ and $\\eta ^{\\prime }(\\tau )-\\xi ^{\\prime }(t)$ (by assumption).", "The fact that $\\Psi $ is one to one can be shown using similar considerations.", "We also remark that since in the overlapping region $t(p)=\\sigma (p) = X^0(p)$ , the coordinate $$ satisfies $=t=\\sigma $ .", "The normalized vectorfield ${{\\bf T}}:=\\partial _$ plays a distinguished role in this work a globally defined almost Killing and unit timelike vectorfield.", "Note that $T$ , defined in (REF ) in the exterior, and ${\\bf T}$ differ only by terms which have $$ decay (see (REF )).", "Remark 4.2 Since the global coordinates $(,,)$ agree with the coordinates introduced in the previous two subsections in the respective regions, and in view of the invariant form $\\Box +V$ appearing in (REF ) and (REF ), by inspection of the calculations in the interior and exterior regions (see (REF ), (REF ), (REF ), (REF ), (REF ), (REF ), (REF )), the operator $\\mathcal {P}$ in (REF ) and (REF ) satisfies the following properties: $\\mathcal {P}$ admits the decomposition $\\begin{split}\\mathcal {P}=\\mathcal {P}_+\\mathcal {P}_{\\mathrm {ell}},\\end{split}$ where $\\mathcal {P}_{\\mathrm {ell}}$ is elliptic and does not contain $\\partial _{}$ derivatives.", "$\\mathcal {P}_{\\mathrm {ell}}$ can be further decomposed as $\\begin{split}\\mathcal {P}_{\\mathrm {ell}}=\\Delta _{\\underline{\\mathcal {C}}}+V+\\mathcal {P}_{\\mathrm {ell}}^{{\\mathrm {pert}}},\\end{split}$ where $\\begin{split}\\Delta _{\\underline{\\mathcal {C}}}=\\frac{1}{\\langle \\rangle ^{n-1}|F_{}|}\\partial _(\\langle \\rangle ^{n-1}|F_|^{-1}\\partial _\\rho )+\\frac{1}{\\langle \\rangle ^2}{\\mathring{{\\Delta }}},\\end{split}$ and (recall the notation from Section REF ) $\\begin{split}\\mathcal {P}_{\\mathrm {ell}}^{\\mathrm {pert}}=o_{\\wp ,{{\\color {deepgreen} R}}}(1)(\\partial _\\Sigma ^2+\\langle \\rangle ^{-1}\\partial _\\Sigma +\\langle \\rangle ^{-2})+\\mathcal {O}({\\dot{\\wp }})\\partial _\\Sigma +\\mathcal {O}({\\dot{\\wp }})\\langle \\rangle ^{-2}.\\end{split}$ Here $o_{\\wp ,{{\\color {deepgreen} R}}}(1)$ denotes coefficients which are $\\mathcal {O}(\\wp )$ or can be made arbitrarily small by taking ${{\\color {deepgreen} R}}$ (the transition region from the flat to hyperboloidal foliation) large.", "Finally, $\\mathcal {P}_$ has the form $\\begin{split}\\mathcal {P}_=\\mathcal {O}(1)\\partial {{\\bf T}}+\\mathcal {O}(\\langle ^{-1}\\rangle ){{\\bf T}}.\\end{split}$ If $|{\\dot{\\wp }}^{\\le 2}|\\lesssim \\epsilon ^{-\\gamma -1}$ , for some $\\gamma >1$ , then the operator $\\mathcal {P}$ takes the form $\\begin{split}\\mathcal {P}\\psi =_q^{\\mu \\nu }\\partial ^2_{\\mu \\nu }\\psi +_l^\\mu \\partial _\\mu \\psi +_c\\psi ,\\end{split}$ where the 
coefficients satisfy the following properties.", "In view of the invariant form $\\Box _h+V$ appearing in (REF ) and (REF ) (see also the comments following (REF )), let $\\begin{split}&\\underline{}_q^{\\mu \\nu }=(h^{-1})^{\\mu \\nu }\\vert _{=t_2}, \\quad \\underline{}_l^\\nu =|h|^{-\\frac{1}{2}}\\partial _\\mu (|h|^{\\frac{1}{2}}(h^{-1})^{\\mu \\nu })\\vert _{=t_2},\\quad \\underline{}_c=V\\vert _{\\begin{array}{c}\\dot{\\ell } = 0 \\\\ \\dot{\\xi } = \\ell \\\\ =t_2\\end{array}},\\\\& \\mathring{}_q=_q-\\underline{},\\quad \\mathring{}_l=_l-\\underline{},\\quad \\mathring{}_c=_c-\\underline{},\\end{split}$ and correspondingly decompose $\\mathcal {P}$ as $\\mathcal {P}=\\mathcal {P}_0+\\mathcal {P}_{\\mathrm {pert}}$ with $\\begin{split}\\mathcal {P}_0=\\underline{}_q^{\\mu \\nu }\\partial ^2_{\\mu \\nu }+\\underline{}_l^\\mu \\partial _\\mu +\\underline{}_c, \\qquad \\mathcal {P}_{\\mathrm {pert}}=\\mathring{}_q^{\\mu \\nu }\\partial ^2_{\\mu \\nu }+\\mathring{}_l^\\mu \\partial _\\mu +\\mathring{}_c.\\end{split}$ Then $|\\mathring{}_q|, |\\mathring{}_l|, |\\mathring{}_c|\\lesssim \\langle \\rangle ^{-\\gamma }$ , and, with $y$ denoting the spatial variables $(,)$ , $\\begin{split}\\sup _y(\\langle \\rangle ^{2}|\\mathring{}_q^{}|+|\\mathring{}_q^{y}|+|\\mathring{}_q^{yy}|)\\lesssim \\epsilon \\langle \\rangle ^{-\\gamma }.\\end{split}$ Moreover, in the hyperboloidal part of the foliation, $\\mathcal {P}_{\\mathrm {pert}}$ has the more precise structure $\\begin{split}{\\mathring{a}}\\partial _(\\partial _+\\frac{n-1}{2})+{\\mathring{a}}^{\\mu \\nu }\\partial _{\\mu \\nu }+{\\mathring{b}}^\\mu \\partial _\\mu +{\\mathring{c}},\\end{split}$ where, $\\begin{split}&|{\\mathring{a}}|, |{\\mathring{a}}^{yy}|\\lesssim \\epsilon \\langle \\rangle ^{-\\gamma }, \\quad |\\partial _y{\\mathring{a}}^{yy}|, |{\\mathring{a}}^{y}|, |{\\mathring{b}}^y|\\lesssim \\epsilon \\langle \\rangle ^{-\\gamma }^{-1},\\quad |{\\mathring{a}}^{}|, |{\\mathring{b}}^|\\lesssim \\epsilon \\langle \\rangle ^{-\\gamma }^{-2},\\\\&|{\\mathring{c}}|\\lesssim \\epsilon \\langle \\rangle ^{-\\gamma } ^{-4}.\\end{split}$ $\\mathcal {P}$ can be written as $\\begin{split}\\mathcal {P}=\\frac{1}{\\sqrt{|{\\bf h}|}}\\partial _\\mu (\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{\\mu \\nu }\\partial _\\nu )+V+{\\tilde{\\mathcal {P}}},\\end{split}$ where the Lorentzian metric ${\\bf h}$ agrees with $h$ in (REF ) in the region where $(,,)$ agrees with $(t,\\rho ,\\omega )$ , and with $m$ in (REF ) in the region where $(,,)$ agrees with $(\\tau ,r,\\theta )$ .", "Moreover, the perturbative part ${\\tilde{\\mathcal {P}}}$ can be written as $\\begin{split}{\\tilde{\\mathcal {P}}}= {\\bf p}^{\\mu \\nu }\\partial _{\\mu \\nu }^2+{\\bf q}^\\mu \\partial _\\mu +{\\bf s}\\end{split}$ for a symmetric tensor ${\\bf p}$ , a vectorfield ${\\bf q}$ , and a scalar ${\\bf s}$ satisfying (in the notation of item (1) above) $\\begin{split}\\langle \\rangle ^2|{\\bf p}|+\\langle \\rangle ^3|{\\bf q}|+\\langle \\rangle ^4|{\\bf s}|=o_{\\wp ,{{\\color {deepgreen} R}}}(1)+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}).\\end{split}$" ], [ "Global Geometric Coordinate", "Here we introduce a new set of coordinates $({\\tilde{}},{\\tilde{}},{\\tilde{}})$ to which we refer as geometric global coordinates.", "Their main property of interest to us is that the operator $\\mathcal {P}_0$ introduced in Remark REF above has the following expression in these coordinates: $\\begin{split}\\mathcal {P}_0=-\\partial _{{\\tilde{}}}^2+{\\tilde{\\Delta }}+V({\\tilde{\\rho 
}}),\\end{split}$ Here, with ${\\mathring{{\\Delta }}}$ denoting the Laplacian on the round sphere $\\mathbb {S}^{n-1}$ , $\\begin{split}{\\tilde{\\Delta }}=\\frac{1}{\\langle {\\tilde{}}\\rangle ^{n-1}|F_{{\\tilde{}}}|}\\partial _{\\tilde{}}(\\langle {\\tilde{}}\\rangle ^{n-1}|F_{\\tilde{}}|^{-1}\\partial _{\\tilde{}})+\\frac{1}{\\langle {\\tilde{}}\\rangle ^2}{\\mathring{{\\Delta }}}.\\end{split}$ The geometric global coordinates can be defined as follows.", "Let ${\\underline{\\ell }}=\\ell (t_2)$ , $\\underline{\\xi }=\\xi (t_2)$ and $\\underline{\\gamma }=\\gamma (t_2)$ .", "With these choices we consider two parameterizations of the catenoid defined in (REF ), where $\\ell \\equiv {\\underline{\\ell }}$ and $\\xi \\equiv \\underline{\\xi }$ .", "The first parameterization is exactly by the non-geometric global coordinates $(,,)$ from Section REF , corresponding the choice of parameters $\\ell \\equiv {\\underline{\\ell }}$ , $\\xi \\equiv \\underline{\\xi }+\\sigma {\\underline{\\ell }}$ .", "The second parameterization is simply $\\begin{split}\\Lambda _{-{\\underline{\\ell }}}({\\tilde{}},F({\\tilde{}},{\\tilde{}}))+(0,\\underline{\\xi }),\\end{split}$ where $F$ and $\\Lambda $ are as in (REF ) and (REF ).", "The coordinate change between $(,,)$ and $({\\tilde{}},{\\tilde{}},{\\tilde{}})$ is then obtained by equating the $X^i$ , $i=0,\\dots ,n$ coordinates of the ambient space with respect to these two parameterizations.", "The desired form of $\\mathcal {P}_0$ follows from the coordinate invariance of this operator.", "Explicit formulas for the coordinate transformation can also be given in the regions where the non-geometric coordinates agree with the coordinates $(t,\\rho ,\\omega )$ and $(\\tau ,r,\\theta )$ , and are given respectively by $\\begin{split}{\\tilde{}}=\\underline{\\gamma }^{-1}-{\\underline{\\ell }}\\cdot F(,), \\quad {\\tilde{}}=,\\quad {\\tilde{}}=\\end{split}$ in the interior, and by $\\begin{split}+\\langle \\rangle =\\underline{\\gamma }{\\tilde{}}+\\underline{\\gamma }{\\underline{\\ell }}\\cdot F({\\tilde{}},{\\tilde{}}), \\quad {\\underline{\\ell }}+\\Theta ()=\\underline{\\gamma }{\\tilde{}}{\\underline{\\ell }}+A_{\\underline{\\ell }}F({\\tilde{}},{\\tilde{}})\\end{split}$ in the exterior." 
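, "The coordinate invariance invoked above can also be checked by hand in a model case; the following sketch (a 1+1 dimensional reduction, used only for illustration) verifies that the inverse Minkowski metric, and hence the second order part of $\\mathcal {P}_0$ , is unchanged under a Lorentz boost with velocity $\\ell $ .
```python
# The inverse Minkowski metric transforms as  m~^{ab} = J^a_mu J^b_nu m^{mu nu},
# with J the Jacobian of the boost (t, x) -> (gamma (t - ell x), gamma (x - ell t)).
# The result is again diag(-1, 1), so the principal part of P_0 keeps its form.
import sympy as sp

ell = sp.symbols('ell', real=True)
gamma = 1 / sp.sqrt(1 - ell**2)

J = sp.Matrix([[gamma, -gamma * ell],
               [-gamma * ell, gamma]])     # Jacobian of the boosted coordinates
m_inv = sp.diag(-1, 1)                     # inverse Minkowski metric in (t, x)

print(sp.simplify(J * m_inv * J.T))        # Matrix([[-1, 0], [0, 1]])
```"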
], [ "Main Bootstrap Argument and the Proof of Theorem ", "In the first part of this section we set up the bootstrap assumptions and state the propositions which assert that the bootstrap regime is trapped.", "These are Propositions REF and REF , and their proofs will occupy most of the remainder of the paper.", "In the rest of this section we will prove Theorem REF assuming Propositions REF and REF .", "For concreteness, we set $n=5$ for the remainder of the paper, but our arguments are easily adaptable to higher dimensions.", "We can now state our bootstrap assumptions.", "We assume that there exist $\\xi ,\\ell $ defined on $[0,\\tau _f)$ and a parameterization (REF ) for $\\tau \\in [0,\\tau _f)$ such that the orthogonality conditions (REF ) and (REF ) are satisfied.", "We also assume that the following trapping assumption (see (REF ) for the definition of $F_{+}$ and (REF ) for the relation between $\\psi $ and $\\phi $ ): $ |\\mu a_{+}(\\tau )-e^{\\mu \\tau }S(e^{-\\mu \\tau }F_{+})|\\le C_{trap} \\delta _{\\wp } \\epsilon \\tau ^{-3},$ and the following estimates hold for all $\\tau ,\\sigma _1,\\sigma _2\\in [0,\\tau _f)$ (recall from Section REF that ${\\tilde{\\partial }}_\\Sigma $ denotes size one tangential derivatives $\\partial _\\Sigma $ or $\\langle \\rangle ^{-1} {{\\bf T}}$ and that in the exterior $X$ denotes any of the vecotrfields ${\\tilde{r}}L$ , $\\Omega $ , or $T$ introduced in Section REF ): $&|a_{+}^{(k)}|\\le 2 C_{k} \\delta _{\\wp }\\epsilon \\tau ^{-\\frac{5}{2}+\\kappa }, \\quad \\forall k\\ge 0.\\\\&|a_{-}^{(k)}|\\le 2C_k \\delta _{\\wp } \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad \\forall k\\ge 0.\\\\&|{\\dot{\\wp }}^{(k)}|\\le 2C_k \\delta _{\\wp }\\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad \\forall k\\ge 1.\\\\&|\\phi |+\\chi _{\\ge R}|X^k\\phi |\\le 2C\\epsilon \\tau ^{-\\frac{9}{4}+\\frac{\\kappa }{2}},\\quad k\\le M-7.\\\\&|\\partial ^k\\phi |+\\chi _{\\ge R}|\\partial ^{k-j}X^j\\phi |\\le 2C\\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad 1\\le k\\le M-8,~j<k.\\\\&\\chi _{\\ge R}|\\partial ^{k-j}X^j\\phi |\\le 2C\\epsilon \\langle r\\rangle ^{-\\frac{3}{2}}\\tau ^{-1},\\quad 1\\le k\\le M-8,~j<k.\\\\&\\chi _{\\ge R}|\\partial ^{k-j}X^j\\phi |\\le 2C\\epsilon \\langle r\\rangle ^{-2}\\tau ^{-\\frac{1}{2}},\\quad 1\\le k\\le M-8,~j<k.\\\\&\\Vert \\chi _{\\le R}\\partial ^k\\phi \\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\langle r\\rangle ^{-\\frac{5}{2}+\\kappa }(\\chi _{\\ge R}X^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\le 2C \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad 0\\le k\\le M-4.$ $&\\Vert \\partial ^2_\\Sigma (\\chi _{\\le R}\\partial ^k{{\\bf T}}\\phi )\\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\partial ^2_\\Sigma (\\chi _{\\ge R} X^k{{\\bf T}}\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\le 2C \\epsilon \\tau ^{-3},\\quad 0\\le k \\le M-4.\\\\&\\Vert \\chi _{\\le R}\\partial _\\Sigma \\partial ^{k}{{\\bf T}}^j\\phi \\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\chi _{\\ge R} {\\tilde{\\partial }}_\\Sigma X^{k}{{\\bf T}}^j\\phi \\Vert _{L^2(\\Sigma _\\tau )}\\le 2C \\epsilon \\tau ^{-1-j},\\quad 0\\le j\\le 2,~ k+3j \\le M-2.\\\\&\\Vert \\chi _{\\ge R}r^{p} (\\partial _r+\\frac{n-1}{2r}) X^{k}{{\\bf T}}^j\\phi \\Vert _{L^2(\\Sigma _\\tau )}\\le 2C \\epsilon \\tau ^{-1-j+\\frac{p}{2}},\\quad 0\\le p\\le 2,~ k+3j \\le M-2.$ We state our bootstrap propositions in three steps.", "First we close the bootstrap assumptions for the parameters, but with a suboptimal rate for $\\mu a_{+}-e^{\\mu \\tau }S(e^{-\\mu \\tau }F_{+})$ in (REF ).", "We then use this to 
improve the bootstrap bounds on $\\phi $ , and finally we show that the initial data and parameters can be chosen such that the bootstrap regime remains trapped.", "The precise statements are contained in the following three propositions.", "Proposition 5.1 Suppose the estimates (REF )–() and orthogonality conditions (REF ) and (REF ) are satisfied.", "If $\\epsilon $ is sufficiently small and $C,C_k$ appearing on the right-hand side of (REF )–() are sufficiently large (compared to $C_{trap}$ ), then the following improved estimates hold: $&|a_{+}^{(k)}|\\le C_{k} \\delta _{\\wp } \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa }, \\quad \\forall k\\ge 0.\\\\&|a_{-}^{(k)}|\\le C_k \\delta _{\\wp } \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad \\forall k\\ge 0.\\\\&|{\\dot{\\wp }}^{(k)}|\\le C_k \\delta _{\\wp } \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad \\forall k\\ge 1.$ Proposition 5.2 Suppose the estimates (REF )–() orthogonality conditions (REF ) and (REF ) are satisfied.", "If $\\epsilon $ is sufficiently small and $C,C_k$ appearing on the right-hand side of (REF )–() are sufficiently large (compared to $C_{trap}$ ), then the following improved estimates hold: $&|\\phi |+\\chi _{\\ge R}|X^k\\phi |\\le C\\epsilon \\tau ^{-\\frac{9}{4}+\\frac{\\kappa }{2}},\\quad k\\le M-7.\\\\&|\\partial ^k\\phi |+\\chi _{\\ge R}|\\partial ^{k-j}X^j\\phi |\\le C\\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad 1\\le k\\le M-8,~j<k.\\\\&|\\partial ^k\\phi |+\\chi _{\\ge R}|\\partial ^{k-j}X^j\\phi |\\le C\\epsilon \\langle r\\rangle ^{-\\frac{3}{2}}\\tau ^{-1+\\kappa },\\quad 1\\le k\\le M-8,~j<k.\\\\&|\\partial ^k\\phi |+\\chi _{\\ge R}|\\partial ^{k-j}X^j\\phi |\\le C\\epsilon \\langle r\\rangle ^{-2}\\tau ^{-\\frac{1}{2}+\\kappa },\\quad 1\\le k\\le M-8,~j<k.\\\\&\\Vert \\langle r\\rangle ^{-\\frac{5}{2}+\\kappa }(\\chi _{\\le R}\\partial ^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\langle r\\rangle ^{-\\frac{5}{2}+\\kappa }(\\chi _{\\ge R}X^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\le C \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\quad 0\\le k\\le M-4.\\\\&\\Vert \\partial ^2_\\Sigma (\\chi _{\\le R}{{\\bf T}}\\partial ^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\partial ^2_\\Sigma (\\chi _{\\ge R}{{\\bf T}}X^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\le C \\epsilon \\tau ^{-3},\\quad 0\\le k \\le M-4.\\\\&\\Vert \\chi _{\\le R}\\partial _\\Sigma \\partial ^{k}{{\\bf T}}^j\\phi \\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\chi _{\\ge R} {\\tilde{\\partial }}_\\Sigma X^{k}{{\\bf T}}^j\\phi \\Vert _{L^2(\\Sigma _\\tau )}\\le C \\epsilon \\tau ^{-1-j},\\quad 0\\le j\\le 2,~k+3j \\le M-2.\\\\&\\Vert \\chi _{\\ge R}r^{p} (\\partial _r+\\frac{n-1}{2r}) X^{k}{{\\bf T}}^j\\phi \\Vert _{L^2(\\Sigma _\\tau )}\\le C \\epsilon \\tau ^{-1-j+\\frac{p}{2}},\\quad 0\\le p\\le 2,~ k+3j \\le M-2.$ Propositions REF and REF will be proved in Sections  and respectively.", "Observe that Proposition REF does not improve the trapping assumption (REF ), but only the bounds (REF ).", "We will employ a topological (shooting) argument to find a global solution for which (REF ) holds for all times, by choosing the initial data appropriately." 
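, "Before giving the proof, we illustrate the mechanism of the shooting argument on a toy scalar ODE; the system, the decay rate $\\langle \\tau \\rangle ^{-3}$ , and all constants below are chosen only for illustration and are not those of the paper.
```python
# Toy shooting argument:  q' = mu q - f(tau),  |f| <= c lam(tau),  lam(tau) = lam0 (1+tau)^{-3}.
# exit_side reports whether a trajectory leaves the region { |q| < lam } upward (+1),
# downward (-1), or stays trapped on [0, T] (0); bisection in q(0) then traps a trajectory.
import numpy as np
from scipy.integrate import solve_ivp

mu, lam0, c, T = 1.0, 1.0, 0.05, 15.0
lam = lambda tau: lam0 * (1.0 + tau) ** (-3)
f = lambda tau: c * lam(tau) * np.cos(tau)

def exit_side(q0):
    def hit(tau, y):                      # zero when |q| reaches lam(tau)
        return abs(y[0]) - lam(tau)
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(lambda tau, y: [mu * y[0] - f(tau)], (0.0, T), [q0],
                    events=hit, rtol=1e-12, atol=1e-14)
    return 0 if sol.t[-1] >= T else int(np.sign(sol.y[0, -1]))

lo, hi = -0.99 * lam0, 0.99 * lam0         # these exit downward / upward respectively
for _ in range(60):                        # bisection on the initial value
    mid = 0.5 * (lo + hi)
    side = exit_side(mid)
    if side == 0:
        print("trapped on [0, T] for q(0) ~", mid)
        break
    lo, hi = (mid, hi) if side < 0 else (lo, mid)
```"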
], [ "Proof of Theorem ", "Assuming Propositions REF and REF , we will prove Theorem REF .", "The proof consists of two steps as we now explain.", "Given $(\\psi _0,\\psi _1)$ for each $b$ (see the statement of Theorem REF ) we let $\\tau _f(b)$ be the maximal time on which there is a solution parameterized as in (REF ) such that the bootstrap assumptions (REF )–() orthogonality conditions (REF ) and (REF ) are satisfied.", "By local well-posedness (Proposition REF ), the normal neighborhood lemma (Lemma REF ), and the implicit function theorem arguments in Sections REF and REF , we know that $\\tau _f(b)$ is strictly positive for each choice of $b$ .", "Our goal is to show that $\\tau _f(b)$ is infinite for some choice of $b$ .", "Suppose $\\tau _f(b)$ is finite for all choices of $b$ .", "In the first step we show that condition (REF ) must get saturated, that is the inequality must be an equality, at $\\tau =\\tau _f$ .", "In the second step, we show that if $(\\psi _0,\\psi _1)$ satisfy a certain codimension one condition, equation (REF ), then there is a choice of $b$ for which (REF ) is not saturated by $\\tau _f(b)$ , and this is the desired contradiction.", "Before turning to the details of Steps 1 and 2, we clarify one point.", "Because of the nonlocal nature of the orthogonality conditions (REF ) and (REF ) (coming from the smoothing operator $S$ ), in order to apply the implicit function theorem to get parameters which guarantee (REF ) and (REF ), we first need to define $\\psi $ for $\\tau \\in [-1,0]$ .", "For this, we fix an extension procedure which is continuous (say with respect to the some finite regularity Sobolev norm) and linear in $(\\psi _0,\\psi _1)$ , and work with this fixed extension throughout the proof.", "Step 1.", "Fix $b$ and let $\\tau ^\\ast \\in (0,\\infty )$ be such that the bootstrap conditions described above (including the orthogonality conditions and the parameterization (REF )) are satisfied on $[0,\\tau ^\\ast ]$ .", "We want to show that if (REF ) is strict on $[0,\\tau ^\\ast ]$ then $\\tau _f(b)>\\tau ^\\ast $ , where as above $\\tau _f(b)$ is the maximal time on which the bootstrap conditions are satisfied.", "By Propositions REF and REF (with $\\tau _f$ replaced by $\\tau ^\\ast $ ) we can improve the bootstrap assumptions (REF )–() on $[0,\\tau ^\\ast ]$ .", "Now suppose (REF ) is strict on $[0,\\tau ^\\ast ]$ .", "By Proposition REF applied with $\\ell _0$ and $\\xi _0$ fixed at values of $\\ell $ and $\\xi $ close to $\\tau ^\\ast $ , we can extend the solution on an interval of size of order one beyond $\\tau ^\\ast $ .", "Applying Lemma REF and the implicit function theorems in Sections REF and REF (note that while the details of the proofs there were carried out for zero $\\ell _0$ and $\\xi _0$ , identical arguments can be used for other choices) we can extend $\\xi $ and $\\ell $ and the parameterization (REF ) beyond $\\tau ^\\ast $ such that the orthogonality conditions (REF ) and (REF ) are still satisfied.", "Moreover, since (REF ) is strict on $[0,\\tau ^\\ast ]$ , by continuity it is still satisfied on a larger interval.", "It follows that on this larger interval all the bootstrap conditions are satisfied and hence $\\tau _f(b)>\\tau ^\\ast $ .", "Step 2.", "Assume, for contradiction, that $\\tau _f(b)$ is finite for every choice of $b$ .", "To simplify notation let $\\begin{split}q(\\tau ):=\\mu a_{+}(\\tau )-e^{\\mu \\tau }S(e^{-\\mu \\tau }F_{+}(\\tau )),\\qquad q_0=q(0),\\qquad (\\tau ):=C_\\mathrm {trap}\\delta _\\wp 
\\epsilon \\langle \\tau \\rangle ^{-3},\\qquad _0:=C_\\mathrm {trap}\\delta _\\wp \\epsilon .\\end{split}$ Note that $q={\\dot{a}}_{+}$ .", "Step 2a.", "We claim that if $(\\psi _0,\\psi _1)$ satisfy an orthogonality condition (see (REF )), then for each $|q_0|\\le \\lambda _0$ there is a choice of $b$ in a neighborhood of zero for which $q(0)=q_0$ .", "From Section REF , the orthogonality condition (REF ) determines $a_{+}$ such that $\\begin{split}a_+(t)=\\Omega (\\vec{\\psi },{\\vec{Z}}_{-})-e^{\\mu t}{\\tilde{S}}(e^{-\\mu t}F_{+}).\\end{split}$ Recalling the extension procedure to $[-1,0]$ described at the beginning of the proof of Theorem REF , we define a map $\\mathcal {Z}:C^\\infty ({\\underline{\\mathcal {C}}})\\times C^\\infty ({\\underline{\\mathcal {C}}})\\times I\\rightarrow \\mathbb {R}$ , where $I$ is a neighborhood of zero in $\\mathbb {R}$ , by $\\begin{split}\\mathcal {Z}(\\psi _0,\\psi _1,b)=q(0),\\end{split}$ where $q$ is determined using initial data $\\begin{split}\\Phi \\vert _{\\lbrace t=0\\rbrace }=\\Phi _0[\\epsilon (\\psi _0+b{\\tilde{\\varphi }}_\\mu )]{\\ \\ \\text{and} \\ \\ }\\partial _t\\Phi \\vert _{\\lbrace t=0\\rbrace }=\\Phi _1[\\epsilon (\\psi _1-\\mu b{\\tilde{\\varphi }}_\\mu )],\\end{split}$ as in the statement of Theorem REF .", "We then restrict attention to $(\\psi _0,\\psi _1)$ satisfying the codimension one condition $\\begin{split}\\mathcal {Z}(\\psi _0,\\psi _1,0)=0.\\end{split}$ Now since $\\Omega (({\\tilde{\\varphi }}_\\mu ,-\\mu {\\tilde{\\varphi }}_\\mu )^{\\intercal },{\\vec{Z}}_{-})\\simeq 1$ , by (REF ) and a similar argument as for the implicit function theorem in Section REF , we see that $\\big |\\frac{\\partial q_0}{\\partial b}\\vert _{(\\psi _0,\\psi _1,0)}\\big |\\gtrsim 1$ .", "Our claim then follows from from the implicit function theorem and (REF ).", "Step 2b.", "By Step 1 and Step 2a, and our contradiction assumption, for every choice of $q_0$ there is $\\tau _{\\mathrm {trap}}(q_0)$ such that the corresponding solution satisfies $|q(\\tau )|<(\\tau )$ for $\\tau <\\tau _{\\mathrm {trap}}(q_0)$ and $|q(\\tau _{\\mathrm {trap}}(q_0))|=(\\tau _{\\mathrm {trap}}(q_0))$ .", "We use a standard shooting argument (see for instance [12], [36]) to derive a contradiction from this.", "The main observation is that if $\\frac{1}{2}(\\tau )<| q(\\tau )|<(\\tau ),$ for some $\\tau \\le \\tau _f$ , then $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}\\tau }q^2(\\tau )\\ge \\mu q^2(\\tau ).\\end{split}$ Indeed, rewriting the equation $\\frac{\\mathrm {d}}{\\mathrm {d}\\tau }(e^{-\\mu \\tau }{\\dot{a}}_{+})=-S(e^{-\\mu \\tau }{\\dot{F}}_{+})$ as $\\begin{split}\\dot{q}(\\tau )= \\mu q(\\tau )-e^{\\mu \\tau } S(e^{-\\mu \\tau }{\\dot{F}}_{+}(\\tau )),\\end{split}$ and multiplying by $2q(\\tau )$ , the first term on the right gives $2\\mu q^2(\\tau )$ .", "On the other hand, by the arguments in Section , $\\begin{split}|e^{\\mu \\tau } S(e^{-\\mu \\tau }{\\dot{F}}_{+}(\\tau ))|\\le c (\\tau )< c q(\\tau ),\\end{split}$ for some $c\\ll \\mu $ , proving (REF ).", "We will show that the map $\\Lambda :(-_0,_0)\\rightarrow \\lbrace \\pm _0\\rbrace $ , $\\Lambda (q_0)=q(\\tau _{\\mathrm {trap}}(q_0))$ is continuous.", "Since, by (REF ), $\\Lambda (q_0)=-_0$ if $q_0$ is close to $-_0$ and $\\Lambda (q_0)=_0$ if $q_0$ is close to $_0$ , the continuity of $\\Lambda $ contradicts the intermediate value theorem.", "By continuous dependence on initial data, it suffices to prove that $\\tau _{\\mathrm {trap}}(\\cdot )$ is continuous.", "Fix $q_0\\in 
(-_0,_0)$ and let $q$ denote the corresponding solution.", "By (REF ), given $>0$ there exists $\\in (0,1)$ such that if $(1-)(\\tau )<|q(\\tau )|<(\\tau )$ for some $\\tau <\\tau _f$ , then $|\\tau _{\\mathrm {trap}}(q_0)-\\tau |<$ .", "Let $\\tau _1<\\tau _f$ be such that $(1-^2)(\\tau _1)<|q(\\tau _1)|<(1-\\delta ^3)(\\tau _1)$ , and note that if $q_1$ is sufficiently close to $q_0$ then the solution ${\\tilde{q}}$ corresponding to $q_1$ satisfies $(1-)(\\tau _1)<|{\\tilde{q}}(\\tau _1)|<(\\tau _1)$ , and hence $|\\tau _{\\mathrm {trap}}(q_0)-\\tau _{\\mathrm {trap}}(q_1)|\\le |\\tau _{\\mathrm {trap}}(q_0)-\\tau _1|+|\\tau _{\\mathrm {trap}}(q_1)-\\tau _1|<2$ ." ], [ "Parameter Control", "In this section we prove Proposition REF .", "In the process we will also derive estimates on $\\Omega _i({{\\bf T}}^k\\phi ):=\\Omega ({{\\bf T}}^k\\vec{\\phi },{\\vec{Z}}_i)$ , $i\\in \\lbrace \\pm \\mu ,1,\\dots ,2n\\rbrace $ , $k\\ge 0$ , which are of independent interest for the local-energy decay estimate.", "Recall equations (REF ) and (REF ) from Section  for ${\\dot{\\wp }}=({\\dot{\\ell }},{\\dot{\\xi }}-\\ell )^{\\intercal }$ , $a_{-}$ , and $a_{+}$ , $\\begin{split}{\\dot{\\wp }}= {\\vec{F}}_{\\wp }(S\\vec{\\psi },\\ell ,S{\\vec{N}}-\\beta \\vec{\\omega }),\\qquad \\frac{\\mathrm {d}}{\\mathrm {d}t}(e^{-\\mu t}a_{+})=-S(e^{-\\mu t}F_{+}),\\qquad \\frac{\\mathrm {d}}{\\mathrm {d}t}(e^{\\mu t}a_{-})=S(e^{\\mu t}F_{-}),\\end{split}$ where by a slight abuse of notation we have suppressed the derivatives on $\\vec{\\psi }$ in (REF ) and written ${\\vec{F}}_{\\wp }(S\\vec{\\psi },\\ell ,S{\\vec{\\mathcal {N}}}_\\wp -\\beta \\vec{\\omega }):= {\\vec{G}}( S\\partial _\\Sigma ^{\\le 2} \\vec{\\psi }, \\ell , S{\\vec{N}}- \\beta \\vec{\\omega })$ .", "Here $\\vec{\\omega }$ is defined as in (REF ) (see also (REF )) as the solution of ${\\left\\lbrace \\begin{array}{ll}\\partial _t\\vec{\\omega }+\\beta \\vec{\\omega }=-S{\\vec{F}}_\\omega (\\vec{\\psi },\\ell ,\\vec{\\omega })\\quad &t> 0\\\\\\vec{\\omega }=0\\quad &t\\le 0\\end{array}\\right.", "},$ with ${\\vec{F}}_\\omega $ as in (REF ) (where again we have suppressed the derivatives on $\\vec{\\psi }$ in the notation).", "In view of the spatial support of the test functions in imposing orthogonality conditions, all the integrations appearing in the definitions of ${\\vec{F}}_\\wp $ , $F_{\\pm }$ , and ${\\vec{F}}_\\omega $ are over the region $\\lbrace \\rho \\le {R_1}\\rbrace $ .", "We will also need the equations for $\\Omega _i(\\psi )=\\Omega (\\vec{\\psi },{\\vec{Z}}_i)$ , $i=1,\\dots ,2n$ , and $\\Omega _{\\pm }(\\phi )=\\Omega (\\vec{\\phi },{\\vec{Z}}_{\\pm }^\\mu )$ .", "First, recall that $\\Omega _i(\\psi )=_i+\\omega _i$ , where $\\vec{\\omega }=(\\omega _1,\\dots ,\\omega _{2n})$ is as in (REF ) and $\\vec{}=(_1,\\dots _{2n})$ satisfies $\\begin{split}\\vec{}={\\tilde{S}}({\\vec{N}}+{\\vec{F}}_\\omega ),\\end{split}$ with ${\\vec{F}}_\\omega $ and $N$ are as above (see (REF ) and (REF )).", "Similarly, with ${\\tilde{F}}_{\\pm }=(S-I)F_{\\pm }$ ,To be precise we should write $(S(e^{-\\mu \\cdot }{\\tilde{F}}_{+}))(t)$ instead of $S(e^{-\\mu t}{\\tilde{F}}_{+}(t))$ .", "$\\begin{split}&\\Omega _{+}(\\phi )(t)=e^{\\mu t}S(e^{-\\mu t}{\\tilde{F}}_{+}(t)),\\\\&\\Omega _{-}(\\phi )(t)=e^{-\\mu t}S(e^{\\mu t}{\\tilde{F}}_{-}(t)),\\end{split}$ and $\\begin{split}&\\frac{\\mathrm {d}}{\\mathrm {d}t}\\Omega _{+}(\\phi )(t)=e^{\\mu t}S(e^{-\\mu t}{\\tilde{F}}_{+}^{\\prime }(t)),\\\\&\\frac{\\mathrm {d}^2}{\\,\\mathrm {d} t^2}\\Omega 
_{+}(\\phi )(t)=e^{\\mu t}S(e^{-\\mu t}{\\tilde{F}}_{+}^{\\prime \\prime }(t)),\\end{split}$ and $\\begin{split}&\\frac{\\mathrm {d}}{\\mathrm {d}t}\\Omega _{-}(\\phi )(t)=e^{-\\mu t}S(e^{\\mu t}{\\tilde{F}}_{-}^{\\prime }(t)),\\\\&\\frac{\\mathrm {d}^2}{\\,\\mathrm {d} t^2}\\Omega _{-}(\\phi )(t)=e^{-\\mu t}S(e^{\\mu t}{\\tilde{F}}_{-}^{\\prime \\prime }(t)).\\end{split}$ Finally recall that the smoothing operator $S$ and the operator ${\\tilde{S}}$ are given by $\\begin{split}Sf(t)=\\int K_S(s)f(t-s)\\mathrm {d}s,\\qquad {\\tilde{S}}f(t)=\\int K_{\\tilde{S}}(s)f(t-s)\\mathrm {d}s,\\end{split}$ where the smooth kernel $K_S$ and the non-smooth kernel $K_{\\tilde{S}}$ are supported in $[0,1]$ .", "In particular, if $|f(t)|\\lesssim \\langle t\\rangle ^{-\\gamma }$ for some $\\gamma >0$ , then we also have $|{\\tilde{S}}f(t)|+ |Sf(t)|\\lesssim \\langle t\\rangle ^{-\\gamma }.$ We will use this observation in this section without further mention.", "Our starting point is to estimate $\\vec{\\omega }$ .", "Lemma 6.1 Under the bootstrap assumptions (REF )–(), if ${R_1}$ is sufficiently large, then $\\begin{split}\\big (\\frac{\\mathrm {d}}{\\mathrm {d}t}\\big )^k\\vec{\\omega }\\lesssim \\epsilon ^2 \\langle t\\rangle ^{-\\frac{9}{2}+\\kappa },\\quad \\forall k\\ge 0.\\end{split}$ Integrating equation (REF ) gives $\\begin{split}\\vec{\\omega }(t)=\\int _0^t e^{-\\beta (t-s)}(S{\\vec{F}}_\\omega (\\vec{\\psi },\\ell ,\\vec{\\omega }))(s)\\mathrm {d}s.\\end{split}$ Recalling that $|{\\vec{F}}_\\omega (x,p,w)|\\lesssim |x| (|x|+|w|)$ (see (REF )), the desired estimate for $k=0$ follows from the assumptions (REF )–().", "Here a point that deserves further clarification is the relation between $\\vec{\\psi }$ and $\\phi $ and its derivatives.", "First, note that by writing $\\vec{\\psi }=\\vec{\\phi }+a_{+}{\\vec{Z}}_\\mu ^{+}+a_{-}{\\vec{Z}}_\\mu ^{-}$ and using the bootstrap assumptions on $a_{\\pm }$ , we can reduce the estimate on $\\vec{\\psi }$ to that on $\\vec{\\phi }$ .", "Then observe that by the first component of equation (REF ) (viewed as an equation for $\\vec{\\phi }$ ) we can estimate $|{\\dot{\\phi }}|\\lesssim |\\partial \\phi |+|{\\dot{\\wp }}|+|\\phi |^2$ .", "Finally, the higher order estimates $k\\ge 1$ follow by exactly the same argument after differentiating equation (REF ) and absorbing the time derivatives by the smoothing operator $S$ .", "For the proof of Proposition REF we also need to prove some estimates for $\\Omega _i(\\phi )$ .", "Lemma 6.2 Under the bootstrap assumptions (REF )–() and for ${R_1}$ sufficiently large, $\\begin{split}|\\Omega _i(\\phi )|\\lesssim o_{{R_1}}(1)\\epsilon \\langle t\\rangle ^{-\\frac{5}{2}+\\kappa }+\\mathcal {O}(\\wp )\\epsilon \\langle t\\rangle ^{-\\frac{5}{2}+\\kappa }+\\epsilon \\langle t\\rangle ^{-\\frac{9}{2}+\\kappa },\\quad i\\in \\lbrace \\pm ,1,\\dots ,2n\\rbrace ,\\end{split}$ where $o_{{R_1}}(1)$ denotes a constant that goes to zero as ${R_1}\\rightarrow \\infty $ .", "Starting with $\\Omega _1(\\psi ),\\dots ,\\Omega _{2n}(\\psi )$ , observe that in view of Lemma REF it suffices to estimate $\\vec{}$ .", "For this we distinguish between the first $n$ components and the last $n$ components of $\\vec{}$ by representing them as $_i$ and $_{n+i}$ , respectively, with $i=1,\\dots ,n$ .", "Appealing again to Lemma REF and the bootstrap assumptions (REF )–(), the only terms on the right-hand side of (REF ) which need special treatment are the linear terms in $\\vec{\\psi }$ .", "To use the bootstrap assumptions to draw this 
conclusion, note that one can account for the difference between $\\vec{\\psi }$ and $\\phi $ and its derivatives in the same way as in the proof of Lemma REF .", "For the linear terms, recall from the form of ${\\vec{N}}$ from Section  (see (REF )) that the only term that is not bounded by $\\mathcal {O}(\\wp )\\epsilon \\langle t\\rangle ^{-\\frac{5}{2}+\\kappa }$ directly by the bootstrap assumptions is $\\Omega (\\vec{\\psi },M{\\vec{Z}}_i)$ .", "But, $M{\\vec{Z}}_i$ would be zero, if it were not for the cutoff function in the definition of ${\\vec{Z}}_i$ .", "Treating the difference between $\\vec{\\psi }$ and $\\phi $ as before, since every term in the definition of ${\\vec{Z}}_i$ comes with a decay of $r^{-n+1}$ or a factor of $\\ell $ , our task reduces to estimating ${R_1}^{-1}\\langle \\chi _{\\lbrace r\\simeq {R_1}\\rbrace }\\partial \\phi ,\\mathcal {O}(r^{-n+1})\\rangle $ .", "This is then bounded by (recall that $n=5$ ) $\\begin{split}{R_1}^{-1}\\Vert \\chi _{\\lbrace r\\simeq {R_1}\\rbrace }r^{-\\frac{5}{2}+\\kappa }\\partial \\phi \\Vert _{L^2}\\Big (\\int _{\\lbrace r\\simeq {R_1}\\rbrace }r^{-5+1+5-2\\kappa }\\mathrm {d}r\\Big )^{\\frac{1}{2}}\\lesssim \\epsilon {R_1}^{-\\kappa }\\langle t\\rangle ^{-\\frac{5}{2}+\\kappa },\\end{split}$ where for the last estimate we have used the bootstrap assumption ().", "The case of $_{n+i}$ is similar, with the difference that now we need to estimate $\\Omega (\\vec{\\psi },M{\\vec{Z}}_{n+i})$ instead of $\\Omega (\\vec{\\psi },M{\\vec{Z}}_i)$ .", "Since $M{\\vec{Z}}_{n+i}$ would be ${\\vec{Z}}_{i}$ if it were not for the cutoffs in the definitions of ${\\vec{Z}}_i$ and ${\\vec{Z}}_{n+i}$ , this leads to estimating $\\Omega (\\vec{\\psi },{\\vec{Z}}_i)$ and ${R_1}^{-2}\\langle \\chi _{\\lbrace r\\simeq {R_1}\\rbrace }\\phi ,\\mathcal {O}(r^{-n+1})\\rangle $ .", "The first term was already treated above, and the second term is bounded, using (), as (recall that $n=5$ ) $\\begin{split}{R_1}^{-2}\\Vert \\chi _{\\lbrace r\\simeq {R_1}\\rbrace }r^{-\\frac{5}{2}+\\kappa }\\phi \\Vert _{L^2}\\Big (\\int _{\\lbrace r\\simeq {R_1}\\rbrace }r^{-5+1+5-2\\kappa }\\mathrm {d}r\\Big )^{\\frac{1}{2}}\\lesssim \\epsilon {R_1}^{-1-\\kappa }\\langle t\\rangle ^{-\\frac{5}{2}+\\kappa }.\\end{split}$ This proves the estimates for $\\Omega _{i}(\\psi )$ and the passage to $\\Omega (\\phi )$ is again by decomposing $\\vec{\\psi }$ in terms of $\\vec{\\phi }$ and $a_{\\pm }$ and observing the extra smallness in ${R_1}$ coming from $\\langle {\\vec{Z}}_i,{\\vec{Z}}_{\\pm }\\rangle $ , $i=1,\\dots ,2n$ .", "The estimates for $\\Omega _{\\pm }(\\phi )$ using (REF ) are similar, where again we use the smallness of $\\langle {\\vec{Z}}_i,{\\vec{Z}}_{\\pm }\\rangle $ , $i=1,\\dots ,2n$ , when estimating the linear contributions of ${\\dot{\\wp }}$ in $F_{\\pm }$ .", "We are now ready to prove Proposition REF .", "We start with the estimates for ${\\dot{\\wp }}$ for which we use the first equation in (REF ).", "As in the proof of Lemma REF , in view of the presence of the smoothing operator $S$ , the higher derivatives are treated in the same way as ${\\dot{\\wp }}$ .", "For ${\\dot{\\wp }}$ , in view of the estimate for $\\vec{\\omega }$ from Lemma REF and the bootstrap assumptions (REF )–(), the only terms on the right-hand side of the equation ${\\dot{\\wp }}={\\vec{F}}_\\wp $ which need special attention are the linear terms in $\\vec{\\psi }$ .", "Here the difference between $\\vec{\\psi }$ and $\\phi $ and its derivatives is accounted for in the same way as 
in the proof of Lemma REF .", "Turning to these linear terms, recall that as above (see (REF )) they are given by $\\Omega (\\vec{\\psi },M{\\vec{Z}}_i)$ and $\\Omega (\\vec{\\psi },M{\\vec{Z}}_{n+i})$ , $i=1,\\dots ,n$ .", "But, these can be estimated exactly as in (REF ) and (REF ).", "The estimate for $a_{-}$ is similar, where now we use the last equation in (REF ) which leads to the representation $\\begin{split}a_{-}(t)=a_{-}(0)e^{-\\mu t}+\\int _0^te^{-\\mu t}S(e^{\\mu s}F_{-}(s))\\mathrm {d}s.\\end{split}$ The first term already has better decay than we need.", "For the second term we can again consider the linear and quadratic and higher order contributions of $F_{-}$ separately, and gain smallness in ${R_1}$ for the linear terms from the smallness of $\\langle {\\vec{Z}}_i,{\\vec{Z}}_{\\pm }\\rangle $ .", "The higher derivatives are treated similarly, where in the case where derivatives fall on $SF_{-}$ we can absorb them in the smoothing operator $S$ .", "To estimate $a_{+}$ , we use the triangle inequality and the bootstrap assumption (REF ) to bound, $\\begin{split}|a_{+}(t)|\\lesssim \\delta _{\\wp }t^{-3}+e^{\\mu t}S(e^{-\\mu t}F_{+}).\\end{split}$ The desired estimate then follows by estimating $F_{+}$ using similar considerations as for $F_{-}$ .", "The higher derivative estimates for $a_{+}$ also follow similarly by using equation (REF ) to express ${\\dot{a}}_{+}$ algebraically in terms of $a_{+}$ and $F_{+}$ as ${\\dot{a}}_{+}=\\mu a_{+}+e^{\\mu t}S(e^{\\mu t}F_{+})$ .", "In addition to Proposition REF , we will also need some integrated estimates on $\\Omega _i({{\\bf T}}^k\\phi )$ , $a_\\pm ^{(k)}$ , and ${\\dot{\\wp }}^{(k)}$ for our local-energy decay estimate.", "These estimates are the content of the next lemma.", "Lemma 6.3 Under the bootstrap assumptions (REF )–(), if ${R_1}$ is sufficiently large, then for $k=1,2$ , $j\\ge 0$ , and $i\\in \\lbrace \\pm ,1,\\dots ,2n\\rbrace $ , $\\begin{split}&\\Vert \\Omega _i({{\\bf T}}^k\\phi )\\Vert _{L^2([t_1,t_2])}+\\Vert {\\dot{a}}_{\\pm }^{(k+j)}\\Vert _{L^2([t_1,t_2])}+\\Vert {\\dot{\\wp }}^{(k+1+j)}\\Vert _{L^2([t_1,t_2])}\\\\&\\lesssim (\\delta _\\wp +o_{{R_1}}(1)+\\mathcal {O}(\\wp ))\\epsilon \\langle t_1\\rangle ^{-3}+\\epsilon \\langle t_1\\rangle ^{-\\frac{9}{2}+2\\kappa }\\\\&\\quad +(o_{{R_1}}(1)+\\mathcal {O}(\\wp ))(\\Vert {{\\bf T}}^k\\phi \\Vert _{LE([t_1,t_2])}+\\sup _{t_1\\le \\tau \\le t_2}\\Vert {{\\bf T}}^k\\phi \\Vert _{E(\\Sigma _\\tau )}),\\\\\\end{split}$ where $o_{{R_1}}(1)$ denotes a constant that goes to zero as ${R_1}\\rightarrow \\infty $ .", "We start with the estimate for $\\Omega _i({{\\bf T}}^k\\psi )$ , $i\\in \\lbrace 1,\\dots ,n\\rbrace $ .", "This is achieved by differentiating the expression $\\Omega _i(\\psi )=\\omega _i+_i$ .", "The desired estimate is then derived similarly to the proof of Lemma REF .", "Indeed, using the bootstrap assumptions to estimate the quadratic and higher order terms by $\\epsilon ^2t_1^{-\\frac{9}{2}+\\kappa }$ , we see that except for the contribution of $\\Omega ({{\\bf T}}^k\\vec{\\psi },M{\\vec{Z}}_i)$ the remaining terms are bounded by $\\begin{split}\\mathcal {O}(\\wp )\\Vert {{\\bf T}}^k\\phi \\Vert _{L^2([t_1,t_2])}+\\mathcal {O}(\\wp )\\Vert {\\dot{a}}_{\\pm }^{(k)}\\Vert _{L^2([t_1,t_2])}+\\mathcal {O}(\\wp )\\Vert {\\dot{\\wp }}^{k+1}\\Vert _{L^2([t_1,t_2])}.\\end{split}$ Here as usual we have expressed $\\vec{\\psi }$ in terms of $\\phi $ and its derivatives as well as $a_{\\pm }$ and ${\\dot{\\wp }}$ .", "Note that the last two terms above can 
be absorbed on the left-hand side of (REF ).", "Similarly, using the smallness of $MZ={\\vec{Z}}_i$ , $i=1,\\dots ,n$ , we can estimate the contribution of $\\Omega _i({{\\bf T}}^k\\vec{\\psi },M{\\vec{Z}}_i)$ by $\\begin{split}o_{{R_1}}(1)\\Vert {{\\bf T}}^k\\phi \\Vert _{L^2([t_1,t_2])}+o_{{R_1}}(1)\\Vert {\\dot{a}}_{\\pm }^{(k)}\\Vert _{L^2([t_1,t_2])}+o_{{R_1}}(1)\\Vert {\\dot{\\wp }}^{k+1}\\Vert _{L^2([t_1,t_2])}.\\end{split}$ Note that even though ${\\tilde{S}}$ in (REF ) is not a smoothing operator, since only $\\vec{\\psi }$ and not ${{\\bf T}}\\vec{\\psi }$ appears in ${\\vec{N}}$ and ${\\vec{F}}_\\omega $ (and similarly in ${\\tilde{F}}_{\\pm }$ in the discussion for $\\Omega _{\\pm }({{\\bf T}}^k\\phi )$ below) and spatial derivatives can be integrated by parts to the lower order terms, there is no loss of regularity in these estimates.", "The passage from $\\Omega _i({{\\bf T}}^k\\psi )$ to $\\Omega _i({{\\bf T}}^k\\phi )$ follows as usual.", "The estimate for $\\Omega _{n+i}({{\\bf T}}^k\\phi )$ now also follows from that of $\\Omega _{i}({{\\bf T}}^k\\phi )$ in the same way as in the proof of Lemma REF .", "The estimates for $\\Omega _{\\pm }({{\\bf T}}^k\\phi )$ are proved similarly where now we use the differentiated equations (REF ) and (REF ).", "Once the estimates for $\\Omega _i({{\\bf T}}^k\\phi )$ , $i\\in \\lbrace \\pm ,1,\\dots ,2n\\rbrace $ , are established, the estimates for ${\\dot{a}}_{-}^{(k+j)}$ and ${\\dot{\\wp }}^{(k+1+j)}$ follow as in the proof of Proposition REF by differentiating the corresponding equations in (REF ).", "As usual, any excess derivatives can be absorbed by the smoothing operator $S$ .", "The argument for ${\\dot{a}}_{+}$ is more delicate, and it is here that the improved decay in (REF ) comes in.", "Differentiating the differential equation for $a_{+}$ from (REF ), and using the notation ${\\dot{F}}_{+}=\\frac{\\mathrm {d}}{\\mathrm {d}t}F_{+}$ , gives $\\begin{split}\\frac{\\mathrm {d}}{\\mathrm {d}t}(e^{-\\mu t}{\\dot{a}}_{+})=-S(e^{-\\mu t}{\\dot{F}}_{+}).\\end{split}$ Integrating from the final bootstrap time $\\tau _f$ and using the algebraic relation ${\\dot{a}}_{+}=\\mu a_{+}-e^{\\mu t}S(e^{-\\mu t}F_{+})$ (which is a rewriting of the differential equation for $a_{+}$ ), for any $t\\le \\tau _f$ we get $\\begin{split}{\\dot{a}}_{+}(t)=(\\mu a_{+}(\\tau _f)-e^{\\mu \\tau _f}S(e^{-\\mu \\tau _f}F_{+}(\\tau _f)))e^{-\\mu (\\tau _f-t)}-\\int _{t}^{\\tau _f}e^{-\\mu (s-t)}(e^{\\mu s}S(e^{-\\mu s}{\\dot{F}}_{+}(s)))\\mathrm {d}s.\\end{split}$ The desired estimate for ${\\dot{a}}_{+}$ now follows from (REF ) and an application of Schur's test with the kernel $e^{-\\mu (s-t)}\\chi _{s\\ge t}$ , where we use similar considerations as before to bound ${\\dot{F}}_{+}$ .", "The higher order estimates for $a_{+}$ are proved similarly by differentiating equation (REF )." 
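The estimates for $a_{-}$ and ${\dot{a}}_{+}$ above rest on two elementary Duhamel-type facts: forward integration against $e^{-\mu (t-s)}$ and backward integration from the final bootstrap time against $e^{-\mu (s-t)}$ both preserve a prescribed polynomial decay of the forcing. The short numerical check below is only a sanity check of these two model inequalities; the forcing, the rate $\gamma $ , and all constants are made up, and the smoothing operator $S$ is omitted.

```python
import numpy as np

# Toy check (not the paper's system): if |F(t)| <= <t>^{-g} then both
#   a(t) = int_0^t  exp(-mu(t-s)) F(s) ds   (stable mode, forward Duhamel)
#   b(t) = int_t^T  exp(-mu(s-t)) F(s) ds   (unstable mode, integrated from the final time)
# again decay like <t>^{-g}.
mu, g, T = 1.0, 2.5, 60.0
t = np.linspace(0.0, T, 6001)            # uniform grid on [0, T]
F = (1.0 + t) ** -g * np.sin(t)          # made-up forcing with <t>^{-g} decay

def forward(t, F, mu):
    """a(t_i) = int_0^{t_i} exp(-mu(t_i-s)) F(s) ds, by an exponentially weighted trapezoid rule."""
    ds = t[1] - t[0]
    a = np.zeros_like(t)
    for i in range(1, len(t)):
        a[i] = np.exp(-mu * ds) * a[i - 1] + 0.5 * ds * (np.exp(-mu * ds) * F[i - 1] + F[i])
    return a

def backward(t, F, mu):
    """b(t_i) = int_{t_i}^{T} exp(-mu(s-t_i)) F(s) ds, via the substitution s -> T - s."""
    return forward(t, F[::-1], mu)[::-1]

a, b = forward(t, F, mu), backward(t, F, mu)
for tt in (1.0, 4.0, 16.0, 32.0):
    i = int(np.searchsorted(t, tt))
    print(f"t={tt:5.1f}   <t>^g*|a|={(1 + t[i])**g * abs(a[i]):.3f}   <t>^g*|b|={(1 + t[i])**g * abs(b[i]):.3f}")
# Both rescaled quantities stay O(1), i.e. both modes inherit the <t>^{-g} decay of F.
```

In the proof above the same mechanism appears through Schur's test with the kernel $e^{-\mu (s-t)}\chi _{s\ge t}$ , together with the smoothing operator $S$ and the boundary term at $\tau _f$ , which is controlled by the trapping assumption.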
], [ "Local Energy Decay", "In this section we prove linear energy and local energy decay estimates.", "The relatively straightforward nonlinear applications are postponed to the next section after the nonlinearity and source terms of the equation are calculated more carefully.", "For any $\\tau _1<\\tau _2$ , let $\\begin{split}\\Sigma _{\\tau _1}^{\\tau _2}= \\cup _{\\tau =\\tau _1}^{\\tau _2}\\Sigma _\\tau .\\end{split}$ and consider two functions $,f:\\Sigma _0^T\\rightarrow \\mathbb {R}$ , satisfying $\\begin{split}\\mathcal {P}=f.\\end{split}$ We make the following assumptions on $\\mathcal {P}$ , which are consistent with the linear operator arising in our problem: In the global non-geometric coordinates from Section REF , $\\mathcal {P}$ satisfies the properties stated in Remark REF , while $\\mathcal {P}_0$ in Remark REF takes the form given in Section REF in the global geometric coordinates defined there.", "In the interior coordinates from Section REF and the exterior coordinate from Section REF , we assume that $\\mathcal {P}$ takes the forms (REF ) and (REF ), (REF ), (REF ), (REF ) respectively.", "We use $K_{\\mathrm {int}}$ and $K_{\\mathrm {ext}}$ to denote two large compact regions in $\\Sigma _0^T$ with $K_{\\mathrm {ext}}\\subseteq K_{\\mathrm {int}}$ , such that the coordinates $(t,\\rho ,\\omega )$ (Section REF ) are defined in a neighborhood $U_{\\mathrm {int}}$ of $K_{\\mathrm {int}}$ , and the coordinates $(\\tau ,\\rho ,\\theta )$ (Section REF ) are defined in a neighborhood of $\\overline{K_{\\mathrm {ext}}^c}$ .", "We assume that ${R_1}$ in Section  (see for instance Section REF ) is such that the region $\\lbrace \\rho \\le {R_1}\\rbrace $ is much larger than $K_{\\mathrm {int}}$ .", "For any $\\tau $ the energy norm of $$ on $\\Sigma _\\tau $ is defined by $\\begin{split}E[](\\tau )\\equiv \\Vert \\Vert _{E(\\Sigma _\\tau )}^2&:=\\int _{\\Sigma _\\tau }\\chi _{\\le {\\tilde{R}}}(|\\partial |^2+\\langle \\rho \\rangle ^{-2}||^2)\\mathrm {d}V\\\\&\\quad +\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}(|\\partial _\\Sigma |^2+r^{-2}|T|^2+r^{-2}||^2) \\mathrm {d}V.\\end{split}$ Here $\\chi _{\\ge {\\tilde{R}}}$ is a cutoff function supported in $\\mathcal {C}_{\\mathrm {hyp}}$ , for some fixed large ${\\tilde{R}}\\gg 1$ , and $\\chi _{\\le {\\tilde{R}}}=1-\\chi _{\\ge {\\tilde{R}}}$ .", "The local energy norm on any (space-time) region $\\mathcal {R}$ of the domain of definition of $$ is defined by $\\begin{split}\\Vert \\Vert _{LE(\\mathcal {R})}^2=\\int _{\\mathcal {R}}\\chi _{\\le {\\tilde{R}}} ((\\rho {\\tilde{\\psi }})^2+(\\rho \\partial {\\tilde{\\psi }})^2+( \\partial _\\rho {\\tilde{\\psi }})^2)\\mathrm {d}V+\\int _{\\mathcal {R}}\\chi _{\\ge {\\tilde{R}}}( r^{-3-\\alpha }^2+r^{-1-\\alpha }(\\partial )^2)\\mathrm {d}V.\\end{split}$ Here $0<\\alpha \\ll 1$ is a fixed small positive number, and $\\rho $ and $r$ are the radial coordinates introduced in Section .", "The dual local energy norm is defined by $\\begin{split}\\Vert f\\Vert _{LE^\\ast (\\mathcal {R})}^2=\\int _{\\mathcal {R}}\\chi _{\\le {\\tilde{R}}} f^2\\mathrm {d}V+\\int _{\\mathcal {R}}\\chi _{\\ge {\\tilde{R}}}r^{1+\\alpha }f^2\\mathrm {d}V.\\end{split}$ We use the notation $\\begin{split}\\Vert \\Vert _{L^pL^q(\\Sigma _{\\tau _1}^{\\tau _2})}=\\Big (\\int _{\\tau _1}^{\\tau _2}\\Vert \\Vert _{L^q(\\Sigma _\\tau )}^p\\mathrm {d}\\tau \\Big )^{\\frac{1}{p}},\\end{split}$ with the usual modificaion when $p=\\infty $ .", "When $p=q$ we simply write $\\Vert \\Vert _{L^p(\\Sigma _{\\tau 
_1}^{\\tau _2})}$ , and similarly with $\\Sigma _{\\tau _1}^{{\\tau _2}}$ replaced by any other region.", "We also occasionally use the notation $\\begin{split}\\langle _1,_2\\rangle \\equiv \\langle _1,_2\\rangle _{\\Sigma _{\\tau }}=\\int _{\\Sigma _{\\tau }}_1_2\\,\\mathrm {d}V_{\\Sigma _\\tau }.\\end{split}$ Since our focus in this section is on linear estimates, we introduce ${}_k$ as a linear proxy for $\\Omega _k$ (which was defined nonlinearly in terms of $\\vec{\\psi }$ and $\\vec{\\phi }$ in Section ): $\\begin{split}&{}_k()(c)=-\\int _{\\lbrace = c\\rbrace }Z_kn^\\alpha \\partial _\\alpha \\sqrt{|h|}\\mathrm {d}y,\\quad {}_{n+k}((c))=\\int _{\\lbrace = c\\rbrace }Z_kn^\\alpha \\partial _\\alpha {\\tilde{}}\\sqrt{|h|}\\mathrm {d}y,\\quad k=1,\\dots ,n,\\\\&{}_{\\mu }^{\\pm }()(c)=\\int _{\\lbrace = c\\rbrace }(\\pm \\mu Z_\\mu \\partial _\\alpha {\\tilde{}}-Z_\\mu \\partial _\\alpha )n^\\alpha \\sqrt{|h|}\\mathrm {d}y.\\end{split}$ Here $n$ denotes the normal to $\\Sigma _c$ with respect to $h$ , and $y$ denotes the spatial variables (say $(,)$ ) on $\\Sigma _c$ .", "Our goal in this section is to prove the following two estimates.", "The first is the energy estimate.", "Proposition 7.1 Suppose $$ satisfies $\\mathcal {P}=f$ , and $\\sum _{k\\in \\lbrace \\pm \\mu ,1,\\dots ,2n\\rbrace }|{}_k((t))|\\le \\delta \\Vert \\Vert _{E(\\Sigma _t)}$ .", "Then if $\\delta $ is sufficiently small, for any $t_1<t_2$ and $\\varepsilon \\ll 1$ , $$ satisfies the estimates $\\begin{split}&\\sup _{t_1\\le t\\le t_2}\\Vert \\Vert _{E(\\Sigma _{t})}\\lesssim \\Vert \\Vert _{E(\\Sigma _{t_1})}+C_\\varepsilon \\Vert f\\Vert _{L^1L^2(\\Sigma _{t_1}^{t_2})},\\\\&\\sup _{t_1\\le t\\le t_2}\\Vert \\Vert _{E(\\Sigma _{t})}\\lesssim \\Vert \\Vert _{E(\\Sigma _{t_1})}+C_\\varepsilon \\Vert f\\Vert _{LE^\\ast (\\Sigma _{t_1}^{t_2})}+\\Vert {{\\bf T}}f\\Vert _{LE^\\ast (\\Sigma _{t_1}^{t_2})}\\\\&\\phantom{\\sup _{t_1\\le t\\le t_2}\\Vert \\Vert _{E(\\Sigma _{t})}\\lesssim }+\\Vert f\\Vert _{L^\\infty L^2(\\Sigma _{t_1}^{t_2})}+\\varepsilon \\Vert \\Vert _{LE(\\Sigma _{t_1}^{t_2})}.\\end{split}$ The second estimate we will prove in this section is a local energy decay (LED) estimate.", "Proposition 7.2 Suppose $$ satisfies $\\mathcal {P}=f$ , and $\\sum _{k\\in \\lbrace \\pm \\mu ,1,\\dots ,2n\\rbrace }|{}_k((t))|\\le \\delta \\Vert \\Vert _{E(\\Sigma _t)}$ .", "Then for any $t_1<t_2$ and $\\varepsilon \\ll 1$ , $$ satisfies the estimates $\\begin{split}&\\Vert \\Vert _{LE(\\Sigma _{t_1}^{t_2})}\\lesssim \\sum _{k\\in \\lbrace \\pm \\mu ,1,\\dots ,2n\\rbrace }\\Vert {}_k()\\Vert _{L^2([t_1,t_2])}+\\Vert \\Vert _{E(\\Sigma _{t_1})}+\\Vert f\\Vert _{L^1L^2(\\Sigma _{t_1}^{t_2})},\\\\&\\Vert \\Vert _{LE(\\Sigma _{t_1}^{t_2})}\\lesssim \\sum _{k\\in \\lbrace \\pm \\mu ,1,\\dots ,2n\\rbrace }\\Vert {}_k()\\Vert _{L^2([t_1,t_2])}+\\Vert \\Vert _{E(\\Sigma _{t_1})}+\\Vert f\\Vert _{LE^\\ast (\\Sigma _{t_1}^{t_2})}\\\\&\\phantom{\\Vert \\Vert _{LE(\\Sigma _{t_1}^{t_2})}\\lesssim }+\\Vert {{\\bf T}}f\\Vert _{LE^\\ast (\\Sigma _{t_1}^{t_2})}+\\Vert f\\Vert _{L^\\infty L^2(\\Sigma _{t_1}^{t_2})}.\\end{split}$ Remark 7.3 As mentioned earlier ${}_k$ is a linear substitute for $\\Omega _k$ .", "It is easy to see from our proofs that in Propositions REF and REF one can replace ${}_k$ by any other choice $\\tilde{{}}_k$ as long as $\\Vert \\tilde{{}}_k()-{}_k()\\Vert _{L^2([t_1,t_2])}$ is bounded by a small multiple of the $LE$ norm of $$ .", "In our nonlinear applications we will use this observation to apply these 
propositions with ${}_k$ replaced by $\\Omega _k$ .", "The condition $|\\Omega _k|((t))\\le \\delta \\Vert \\Vert _{E(\\Sigma _t)}$ will always be satisfied in our applications as a consequence of the orthogonality conditions.", "See for instance the arguments in Lemmas REF and REF .", "Remark 7.4 The proof of Proposition REF requires several multiplier identities.", "In applications, where we consider the equation after commuting derivatives, we may want to perform some integration by parts in the term $fQ$ , where $Q$ denotes the multiplier, before placing $f$ in $LE^\\ast (\\Sigma _{t_1}^{t_2})$ or $L^1L^2(\\Sigma _{t_1}^{t_2})$ .", "This is the case for instance where $f$ is of the form $\\partial _\\Sigma ^2g$ , where $g$ denotes the unknown with fewer commuted derivatives.", "While such integration by parts manipulations are not explicitly contained in the statement of Proposition REF , they can be easily incorporated by an inspection of the proof.", "Specifically, they can be performed in the treatment of equation (REF ) in Lemma REF .", "Remark 7.5 The explanation for the second estimate in (REF ) is the same as for the corresponding estimate in Proposition REF .", "See Remark REF .", "We start with the proof of Proposition REF .", "Recall from Remark REF , part (3), that in the global coordinates $(,,)$ $\\begin{split}\\mathcal {P}=\\frac{1}{\\sqrt{|{\\bf h}|}}\\partial _\\mu (\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{\\mu \\nu }\\partial _\\nu )+V+{\\tilde{\\mathcal {P}}},\\end{split}$ where ${\\tilde{\\mathcal {P}}}$ has the structure given in Remark REF , and $|\\partial _{\\bf h}|\\lesssim \\epsilon ^{-\\gamma }$ for some $\\gamma >1$ .", "We multiply equation (REF ) by $\\partial _\\sqrt{|{\\bf h}|}$ and integrate.", "Note that the contribution of $f\\partial _$ can be estimated by the right-hand side of each estimate in (REF ) plus a small multiple of the corresponding left-hand side, as in the proof of Proposition REF .", "The main term in $\\mathcal {P}\\partial _\\sqrt{|{\\bf h}|}$ is $\\begin{split}(\\mathcal {P}-{\\tilde{\\mathcal {P}}})\\partial _\\sqrt{|{\\bf h}|}&=\\partial _\\mu \\big (\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{\\mu \\nu }\\partial _\\mu \\partial _\\big )+\\frac{1}{2}\\partial _\\big (\\sqrt{|{\\bf h}|}V^2-\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{\\mu \\nu }\\partial _\\mu \\partial _\\nu \\big )\\\\&\\quad +\\partial _(\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{\\mu \\nu })\\partial _\\mu \\partial _\\nu .\\end{split}$ In view of the assumption on ${}_k()$ , the first line gives us the desired control of the energy of $$ .", "Indeed, we can write $=^\\perp +\\sum _k\\langle ,{\\underline{Z}}_i\\rangle _{\\Sigma _\\tau }{\\underline{Z}}_i$ where ${\\underline{Z}}_i$ denote truncated eigenfunctions $\\chi \\varphi _i$ of $\\Delta _{\\underline{\\mathcal {C}}}+V$ supported in some region $\\lbrace \\le _1\\rbrace $ with $_1$ large (specifically, $_1\\ge {R_1}$ ), normalized to have $L^2$ norm equal to one, and where $\\langle ,Z_i\\rangle _{\\Sigma _\\tau }=\\int _{\\Sigma _\\tau } {\\underline{Z}}_i \\mathrm {d}V$ .", "The first line of (REF ) then bounds the energy of $^\\perp $ and the energy of $$ can be bounded in terms of that of $^\\perp $ using the assumption on ${}_k()$ .", "The second line of (REF ) can be absorbed by a small multiple of the energy in view of the $$ decay of $\\partial _(\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{\\mu \\nu })$ .", "Here note that in view of the form of ${\\bf h}$ from Remark REF the terms in $\\partial _(\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{\\mu 
\\nu })$ where at least one of $\\mu ,\\nu $ is $$ , in particular $\\partial _(\\sqrt{|{\\bf h}|}({\\bf h}^{-1})^{})$ , come with extra $$ decay which allows us to bound the corresponding errors in the exterior region by the energy.", "Finally the contribution of ${\\tilde{\\mathcal {P}}}$ can again be bounded by a small multiple of the energy in view of the $$ decay of the coefficients of ${\\tilde{\\mathcal {P}}}$ .", "Here the only term in ${\\tilde{\\mathcal {P}}}$ that needs special attention is ${\\mathring{a}}(\\partial _+\\frac{n-1}{2})\\partial _$ which, after integration by parts, yields $\\begin{split}\\frac{1}{2}\\big (\\partial _({\\mathring{a}}\\sqrt{|{\\bf h}|})-\\frac{n-1}{\\rho }{\\mathring{a}}\\sqrt{|{\\bf h}|}\\big )(\\partial _)^2.\\end{split}$ Since $|\\partial _({\\mathring{a}}\\sqrt{|{\\bf h}|})-\\frac{n-1}{\\rho }{\\mathring{a}}\\sqrt{|{\\bf h}|}|\\lesssim ^{-\\gamma }^{-2}$ for large $$ , this contribution can be bounded by the energy as well.", "We next turn to the proof of Proposition REF .", "The proofs of the two estimates in this proposition are different only in which energy estimate from Proposition REF we use to bound the fluxes that come up in the integration by parts, so we give the proof only for the first estimate.", "Our starting point is a local energy decay estimate allowing for an $L^2$ error in a bounded region.", "For this we need to define some cutoff functions and auxiliary potentials.", "We fix $\\chi _1\\equiv \\chi _1({\\tilde{}})$ to be a smooth, non-decreasing, non-negative cutoff function supported in $U_{\\mathrm {ext}}$ that is equal to one on $\\overline{K_{\\mathrm {ext}}^c}$ and satisfies $({\\mathrm {sgn}}{\\tilde{}}) \\frac{\\mathrm {d}}{\\mathrm {d}{\\tilde{}}} \\chi _{1} \\le 0$ .", "Similarly, $\\chi _2\\equiv \\chi _2(\\rho )$ is a smooth, non-negative cutoff supported in $U_{\\mathrm {int}}$ that is equal to one on $K_{\\mathrm {int}}$ .", "Let $V_{\\mathrm {temp}}\\equiv V_{\\mathrm {temp}}(\\rho )$ be a compactly supported, non-negative, smooth potential such that $\\begin{split}{\\mathrm {supp}}\\,V_{\\mathrm {temp}}\\subseteq U_{\\mathrm {int}},\\qquad ({\\mathrm {sgn}}\\rho )\\frac{\\mathrm {d}}{\\mathrm {d}\\rho }V_{\\mathrm {temp}}(\\rho )\\le 0~\\mathrm {in~}U_{\\mathrm {int}},\\qquad ({\\mathrm {sgn}}\\rho )\\frac{\\mathrm {d}}{\\mathrm {d}\\rho }V_{\\mathrm {temp}}(\\rho )\\le -v_0<0~\\mathrm {in~} K_{\\mathrm {int}},\\end{split}$ and for some large constant $M$ to be fixed later, let $\\begin{split}V_{\\mathrm {far}}:=MV_{\\mathrm {temp}}.\\end{split}$ Let $_{\\mathrm {far}}$ be the solution to $\\begin{split}(\\mathcal {P}-V_{\\mathrm {far}})_{\\mathrm {far}}= f,\\qquad (_{\\mathrm {far}},\\partial __{\\mathrm {far}})\\vert _{\\Sigma _{t_1}}=(,\\partial _)\\vert _{\\Sigma _{t_1}},\\end{split}$ and $_{\\mathrm {near}}:=-_{\\mathrm {far}}$ .", "Note that $_{\\mathrm {near}}$ satisfies $\\begin{split}\\mathcal {P}_{\\mathrm {near}}=-V_{\\mathrm {far}}_{\\mathrm {far}},\\qquad (_{\\mathrm {far}},\\partial __{\\mathrm {near}})\\vert _{\\Sigma _{t_1}}=(0,0).\\end{split}$ Lemma 7.6 $_{\\mathrm {far}}$ and $_{\\mathrm {near}}$ as defined above satisfy $\\begin{split}\\Vert _{\\mathrm {far}}\\Vert _{LE(\\Sigma _{t_1}^{t_2})}\\lesssim \\Vert \\Vert _{E(\\Sigma _{t_1})}+ \\Vert f\\Vert _{L^1L^2(\\Sigma _{t_1}^{t_2})},\\end{split}$ and $\\begin{split}\\Vert _{\\mathrm {near}}\\Vert _{LE(\\Sigma _{t_1}^{t_2})}&\\lesssim \\Vert \\Vert _{E(\\Sigma _{t_1})}+ \\Vert f\\Vert _{L^1L^2(\\Sigma _{t_1}^{t_2})}+\\Vert _{\\mathrm 
{near}}\\Vert _{L^2(K_{\\mathrm {ext}})}.\\end{split}$ The proof consists of two multiplier arguments, one in the exterior and one in the interior.", "The proofs for the estimates for $_{\\mathrm {near}}$ and $_{\\mathrm {far}}$ are similar so we carry out the details for $_{\\mathrm {far}}$ which is slightly more involved.", "To simplify notation we write $$ for $_{\\mathrm {far}}$ and $U$ for $V_{\\mathrm {far}}-V$ in the remainder of the proof.", "In addition to the smooth cutoffs $\\chi _1$ and $\\chi _2$ introduced above, we will write $\\chi _A$ to denote an appropriate cutoff with support in a set $A$ .", "Starting with the exterior we use (REF ) and (REF ) to write the equation in the exterior as (to be precise, we have used the conjugation (REF ) and $$ corresponds to the conjugated variable, but the estimates are easy to transfer between the conjugated and original variables) $\\begin{split}\\Box _m-U+{\\mathrm {Err}}_\\mathcal {P}()=f.\\end{split}$ Let $Q$ be the multiplier defined in the $({\\tilde{}},{\\tilde{}},{\\tilde{}})$ coordinates, relative to the parameter values at $t_2$ , as $\\begin{split}Q=-2\\beta _1(\\partial _{\\tilde{}}-\\partial _{\\tilde{}})+(\\beta _1^{\\prime }+\\frac{n-1}{{\\tilde{}}}\\beta _1),\\end{split}$ where (here $\\chi _1$ is as defined before the statement of Lemma REF ) $\\begin{split}\\beta _1\\equiv \\beta _1({\\tilde{}}) = (\\frac{{\\tilde{}}}{\\langle {\\tilde{}}\\rangle }-\\frac{\\delta {\\tilde{}}}{\\langle {\\tilde{\\rho }}\\rangle ^{1+\\alpha }})\\chi _1({\\tilde{}}),\\end{split}$ for suitable small constants $\\alpha $ and $\\delta $ .", "We multiply equation (REF ) by $Q|m|^{\\frac{1}{2}}$ .", "Note that, except for the cutoff $\\chi _1$ , this choice of $Q$ is the standard multiplier for the proof of LED on Minkowski space near each asymptotically flat end ${\\tilde{}}\\rightarrow \\pm \\infty $ .", "As usual, for concreteness, we focus on the end ${\\tilde{}}\\rightarrow \\infty $ .", "The main contribution comes from $\\Box _m-U$ .", "By direct computation, for any vectorfield $a^\\mu \\partial _\\mu $ (here we use $i,j$ to denote tangential partial derivatives with respect to $$ and $$ ), $(\\Box _m-U) a^\\lambda \\partial _\\lambda |m|^{\\frac{1}{2}}&=-\\frac{1}{2}\\partial _(|m|^{\\frac{1}{2}}(m^{-1})^{ij}a^\\tau \\partial _i\\partial _j-|m|^{\\frac{1}{2}}U a^\\tau \\phi ^2)-\\frac{1}{2}\\partial _\\mu (U a^\\mu |m|^{\\frac{1}{2}})^2\\\\&\\quad +\\partial _i((m^{-1})^{i\\nu }|m|^{\\frac{1}{2}}a^j\\partial _\\nu \\partial _j-\\frac{1}{2}(m^{-1})^{\\mu \\nu }|m|^{\\frac{1}{2}}a^i\\partial _\\mu \\partial _\\nu +\\frac{1}{2}|m|^{\\frac{1}{2}}U a^j \\phi ^2)\\\\&\\quad +\\frac{1}{2}\\partial _\\lambda ((m^{-1})^{\\mu \\nu }|m|^{\\frac{1}{2}}a^\\lambda )\\partial _\\mu \\partial _\\nu -(m^{-1})^{\\mu \\nu }|m|^{\\frac{1}{2}}(\\partial _\\mu a^\\lambda )\\partial _\\nu \\partial _\\lambda ,$ and for any scalar function $S$ , $(\\Box _m-U)S|m|^{\\frac{1}{2}}&=\\partial _\\tau (|m|^{\\frac{1}{2}}(m^{-1})^{\\tau \\nu }\\partial _\\nu S-\\frac{1}{2}|m|^{\\frac{1}{2}}(m^{-1})^{\\tau \\nu }\\partial _\\nu S^2)\\\\&\\quad +\\partial _j(|m|^{\\frac{1}{2}}(m^{-1})^{j\\nu }\\partial _\\nu S-\\frac{1}{2}|m|^{\\frac{1}{2}}(m^{-1})^{j\\nu }\\partial _\\nu S^2)\\\\&\\quad +\\frac{1}{2}|m|^{\\frac{1}{2}}(\\Box _m S)^2-|m|^{\\frac{1}{2}}S(m^{-1})^{\\mu \\nu }\\partial _\\mu \\partial _\\nu -|m|^{\\frac{1}{2}}US^2.$ We apply and add these identities in the $(,,)$ coordinates with $a^\\mu $ and $S$ determined by $Q$ above.", "It follows with $B^\\mu []$ 
determined through these identities, $\\begin{split}fQ|m|^{\\frac{1}{2}}&=\\partial _\\mu B^\\mu []+\\mathcal {P}_{\\mathrm {pert}}Q|m|^{\\frac{1}{2}}-|m|^{\\frac{1}{2}}(|m|^{-\\frac{1}{2}}\\frac{1}{2}\\partial _\\mu (U a^\\mu |m|^{\\frac{1}{2}})^2-US^2)\\\\&\\quad +|m|^{\\frac{1}{2}}\\big (\\frac{1}{2}|m|^{-\\frac{1}{2}}\\partial _\\lambda ((m^{-1})^{\\mu \\nu }|m|^{\\frac{1}{2}}a^\\lambda )\\partial _\\mu \\partial _\\nu -(m^{-1})^{\\mu \\nu }(\\partial _\\mu a^\\lambda )\\partial _\\nu \\partial _\\lambda \\big )\\\\&\\quad +|m|^{\\frac{1}{2}}(\\frac{1}{2}(\\Box _m S)^2-S(m^{-1})^{\\mu \\nu }\\partial _\\mu \\partial _\\nu ).\\end{split}$ The contribution of $B^\\mu []$ can be bounded by the energy (see (REF ) and (REF ) for the form of $m$ ).", "To calculate the bulk terms, we first write $m={\\underline{m}}+{\\mathring{m}}$ where ${\\underline{m}}$ is defined by freezing the $$ values of the coefficients at $=t_2$ , and ${\\mathring{m}}:=m-{\\underline{m}}$ .", "In view of (REF ) and (REF ), and the $$ decay of ${\\mathring{m}}$ , the contribution of ${\\mathring{m}}$ is bounded by the energy.", "Here note that the term $|m|^{\\frac{1}{2}}(m^{-1})^{}$ , which could lead to a transversal derivative with no spatial decay on the leaves $\\Sigma _$ , is independent of $$ to leading order in $$ , so its leading order contribution to ${\\mathring{m}}$ vanishes (see (REF ), (REF )).", "For the contribution of ${\\underline{m}}$ , except for the multiplicative $|{\\underline{m}}|^{\\frac{1}{2}}$ , the expression of the bulk terms is coordinate invariant, so we can calculate in the $({\\tilde{}},{\\tilde{}},{\\tilde{}})$ coordinates.", "But then, using the asymptotic flatness of the metric in these coordinates, and the fact that $\\chi _1^{\\prime }\\ge 0$ , if $\\delta $ is sufficiently small the last two lines of (REF ) give control of $\\begin{split}-^{-1-\\alpha }((\\partial _)^2+(^{-1}\\partial _)^2+(^{-1})^2).\\end{split}$ Using similar considerations, the contributions of $\\mathcal {P}_{\\mathrm {pert}}$ and $U$ to (REF ) can be bounded by the energy and a small multiple of the LE norm of $$ .", "To get control of the remaining derivative $\\partial _$ , we again multiply (REF ) by $\\beta _1^{\\prime }|m|^{\\frac{1}{2}}$ , and manipulate as above to get, for the appropriate choice of ${\\tilde{B}}^\\mu []$ , $\\begin{split}f\\beta _1^{\\prime }|m|^{\\frac{1}{2}}&=\\partial _\\mu {\\tilde{B}}^\\mu []+\\mathcal {P}_{\\mathrm {pert}}\\beta _1^{\\prime }|m|^{\\frac{1}{2}}+|m|^{\\frac{1}{2}}U\\beta _1^{\\prime }^2\\\\&\\quad +|m|^{\\frac{1}{2}}(\\frac{1}{2}(\\Box _m \\beta _1^{\\prime })^2-\\beta _1^{\\prime }(m^{-1})^{\\mu \\nu }\\partial _\\mu \\partial _\\nu ).\\end{split}$ Using similar arguments as above, and as in the standard Minkowski computation, this gives control of $^{-1-\\alpha }(\\partial _)^2$ in terms of (REF ).", "Note also that by a similar argument as in the proof of Proposition REF we can prove an energy estimate for equation (REF ).", "Adding a suitably large multiple of (REF ) and the energy identity for (REF ) to (REF ), for a small constant $\\varepsilon $ depending on the support of $\\chi _1$ , we get (note that the induced volume form and $\\sqrt{|m|}$ are comparable in the support of $\\chi _1$ ) $\\begin{split}\\iint _{\\Sigma _{t_1}^{t_2}}\\chi _1 {\\tilde{r}}^{-1-\\alpha }((\\partial )^2+({\\tilde{r}}^{-1})^2)\\mathrm {d}V &\\lesssim \\Vert f\\Vert ^2_{L^1L^2(\\Sigma _{t_1}^{t_2})}+\\varepsilon \\Vert \\Vert ^2_{LE(\\Sigma _{t_1}^{t_2})}\\\\&\\quad +\\iint 
_{\\Sigma _{t_1}^{t_2}\\cap \\,{\\mathrm {supp}}\\chi _1^{\\prime }}||^2\\mathrm {d}V.\\end{split}$ For the interior we multiply the equation by two multipliers $Q\\phi $ and $P\\phi $ of the forms, $\\begin{split}Q\\phi :=q^\\mu \\partial _\\mu \\phi +|{\\tilde{h}}|^{-\\frac{1}{2}}\\partial _\\mu (|{\\tilde{h}}|^{\\frac{1}{2}}q^\\mu \\phi )\\qquad {\\ \\ \\text{and} \\ \\ }\\qquad P\\phi := p\\phi ,\\end{split}$ where (recall the relations (REF ), (REF ), (REF )) $\\begin{split}q^\\mu =\\delta _\\rho ^\\mu \\beta _2(\\rho )-\\gamma (t)\\delta _{t}^\\mu \\ell (t)\\cdot F_\\rho (\\rho ,\\omega ) (1-(1+)^{-1})\\beta _2(\\rho ), \\qquad p= p_0 \\rho ^2\\chi _2(\\rho ).\\end{split}$ Here $p_0$ is a constant to be fixed later (and with $\\chi _2$ as defined before the statement of Lemma REF ), $\\begin{split}\\beta _2(\\rho )=\\rho \\chi _2(\\rho ).\\end{split}$ It is helpful to keep in mind that $\\ell \\cdot F_\\rho =\\frac{\\rho }{\\langle \\rho \\rangle }\\ell \\cdot \\Theta $ vanishes at $\\rho =0$ .", "Note that in the $({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})$ coordinates ${\\tilde{q}}^\\mu \\partial _\\mu =\\beta _2 \\partial _{\\tilde{\\rho }}$ (recall that in our notation we use ${\\tilde{q}}^\\mu $ to denote the components of the vectorfield $q$ in the $({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})$ coordinates).", "Also, recalling (REF ) we use the following notation ${\\mathring{\\mathcal {P}}}=a^{\\mu \\nu }\\partial ^2_{\\mu \\nu }+b^\\mu \\partial _\\mu + c$ for the perturbation.", "For the principal part of $\\mathcal {P}$ we can change variables to $({\\tilde{t}},{\\tilde{\\rho }},{\\tilde{\\omega }})$ to get $\\begin{split}(\\Box _h -U)\\phi Q\\phi |{\\tilde{h}}|^{\\frac{1}{2}}&= \\partial _\\mu \\Big (2|{\\tilde{h}}|^{\\frac{1}{2}}({\\tilde{h}}^{-1})^{\\mu \\nu }{\\tilde{q}}^\\lambda \\partial _\\lambda \\phi \\partial _\\nu \\phi -|{\\tilde{h}}|^{\\frac{1}{2}}({\\tilde{h}}^{-1})^{\\lambda \\nu }{\\tilde{q}}^\\mu \\partial _\\lambda \\phi \\partial _\\nu \\phi -|{\\tilde{h}}|^{\\frac{1}{2}}U{\\tilde{q}}^\\lambda \\phi ^2\\Big )\\\\&\\quad +\\partial _\\mu \\Big (({\\tilde{h}}^{-1})^{\\mu \\nu }\\partial _\\lambda (|{\\tilde{h}}|^{\\frac{1}{2}}{\\tilde{q}}^\\lambda )\\phi \\partial _\\nu \\phi -\\frac{1}{2}({\\tilde{h}}^{-1})^{\\mu \\nu }\\partial _\\nu (|{\\tilde{h}}|^{-\\frac{1}{2}}\\partial _\\lambda (|{\\tilde{h}}|^{\\frac{1}{2}}{\\tilde{q}}^\\lambda ))|{\\tilde{h}}|^{\\frac{1}{2}}\\phi ^2\\Big )\\\\&\\quad -2\\big (({\\tilde{h}}^{-1})^{\\lambda \\nu }(\\partial _\\lambda {\\tilde{q}}^\\mu )-\\frac{1}{2}{\\tilde{q}}^\\lambda \\partial _\\lambda ({\\tilde{h}}^{-1})^{\\mu \\nu }\\big )\\partial _\\mu \\phi \\partial _\\nu \\phi |{\\tilde{h}}|^{\\frac{1}{2}}\\\\&\\quad +\\frac{1}{2}\\Box _{\\tilde{h}}(|{\\tilde{h}}|^{-\\frac{1}{2}}\\partial _\\lambda (|{\\tilde{h}}|^{\\frac{1}{2}}{\\tilde{q}}^\\lambda ))\\phi ^2|{\\tilde{h}}|^{\\frac{1}{2}}+({\\tilde{q}}^\\lambda \\partial _\\lambda U)\\phi ^2|{\\tilde{h}}|^{\\frac{1}{2}},\\end{split}$ and $\\begin{split}(\\Box _h-U) \\phi P\\phi |{\\tilde{h}}|^{\\frac{1}{2}}&= \\partial _\\mu \\big (|{\\tilde{h}}|^{\\frac{1}{2}}({\\tilde{h}}^{-1})^{\\mu \\nu }p\\phi \\partial _\\nu \\phi -\\frac{1}{2}|{\\tilde{h}}|^{\\frac{1}{2}}({\\tilde{h}}^{-1})^{\\mu \\nu }\\partial _\\mu p \\phi ^2\\big )\\\\&\\quad -p({\\tilde{h}})^{-1}\\partial _\\mu \\phi \\partial _\\nu \\phi |{\\tilde{h}}|^{\\frac{1}{2}}+\\big (\\frac{1}{2}\\Box _{\\tilde{h}}p-Up\\big )\\phi ^2|{\\tilde{h}}|^{\\frac{1}{2}}.\\end{split}$ Recalling that, in view 
of (REF ), $(1+)^{-1}|h|^{\\frac{1}{2}}=|{\\tilde{h}}|^{\\frac{1}{2}}$ , by adding a small multiple $\\epsilon _M$ of (REF ) to (REF ) and multiplying by $(1+)^{-1}$ we get, for some constant $c_M=o(M)$ , $\\begin{split}(U-\\Box _h) \\phi (Q\\phi +\\epsilon _MP\\phi ) |h|^{\\frac{1}{2}}&\\ge C\\chi _1(c_M\\rho ^2(T\\phi )^2+(\\partial _\\Sigma \\phi )^2+M\\phi ^2)\\\\&\\quad -O(1)\\chi _{U_{\\mathrm {int}}\\backslash K_{\\mathrm {int}}}((\\partial \\phi )^2+(\\phi )^2)\\\\&\\quad +O(\\epsilon \\,t^{-5/4})\\chi _{U_{\\mathrm {int}}}((\\partial \\phi )^2+(\\phi /\\rho )^2)\\\\&\\quad +\\partial (O(1)\\chi _{U_{\\mathrm {int}}}(\\partial \\phi )^2+O(1)\\chi _{U_{\\mathrm {int}}}(\\phi /\\rho )^2).\\end{split}$ In a similar manner, using the $t$ decay of $a,b,c$ , we can see that $\\begin{split}P \\phi (Q\\phi +\\epsilon _MP\\phi ) |h|^{\\frac{1}{2}}&=O(\\epsilon \\,t^{-9/4})\\chi _{U_{\\mathrm {int}}}((\\partial \\phi )^2+(\\phi /\\rho )^2)\\\\&\\quad +\\partial (O(\\epsilon )\\chi _{U_{\\mathrm {int}}}(\\partial \\phi )^2+O(\\epsilon )\\chi _{U_{\\mathrm {int}}}(\\phi /\\rho )^2).\\end{split}$ The desired estimate now follows by integrating (REF ) and (REF ) (with respect to the measure $\\mathrm {d}t \\mathrm {d}\\rho \\mathrm {d}\\omega $ ) and combining with the energy identity for (REF ) and a suitably large multiple (independent of $M$ ) of (REF ).", "Here note that the bulk error terms in (REF ) and (REF ) are absorbed by the suitably large multiple of (REF ), while the bulk $L^2({\\mathrm {supp}}\\chi _1^{\\prime })$ error terms in the latter are absorbed by (REF ) if $M$ is chosen sufficiently large.", "The argument for (REF ) is similar, where now we incur some $L^2$ errors in a compact region in view of the absence of $V_{\\mathrm {far}}$ from the left-hand side of (REF ).", "The contribution of the source term in (REF ) is bounded in $LE^\\ast $ using the decay of $V_{\\mathrm {far}}$ and (REF ).", "At this point we use the coordinates $(,,)$ and the decomposition $\\mathcal {P}=\\mathcal {P}_0+\\mathcal {P}_{\\mathrm {pert}}$ (see the opening paragraphs of this section).", "For a globally defined function $u$ , the frequency projections $P_{\\le N_0}$ and $P_N$ are defined by $\\begin{split}P_{\\le N_0}u(,,)=\\int _{-\\infty }^{\\infty }2^{N_0}\\chi (2^{N_0}^{\\prime })u(-^{\\prime })\\mathrm {d}^{\\prime },\\qquad P_{N}u(,,)=\\int _{-\\infty }^{\\infty }2^{N_0}{\\tilde{\\chi }}(2^{N_0}^{\\prime })u(-^{\\prime })\\mathrm {d}^{\\prime },\\end{split}$ where as usual $\\hat{\\chi }(\\hat{})=\\int _{-\\infty }^{\\infty }\\chi ()e^{-i\\hat{}}\\mathrm {d}$ and $\\hat{{\\tilde{\\chi }}}(\\hat{})=\\int _{-\\infty }^{\\infty }\\chi ()e^{-i\\hat{}}\\mathrm {d}$ are supported in $\\lbrace \\hat{}\\lesssim 1\\rbrace $ and $\\lbrace \\hat{}\\simeq 1\\rbrace $ respectively.", "In order to apply frequency projections, we need to extend $$ , $_{\\mathrm {far}}$ , and $_{\\mathrm {near}}$ .", "For this, we view these as functions of the variables $(,,)$ and extend them outside of their current domain of definition by requiring that they satisfy $\\begin{split}\\mathcal {P}-V_{\\mathrm {far}}=0,\\qquad \\mathcal {P}_{\\mathrm {far}}- V_{\\mathrm {far}}_{\\mathrm {far}}=0,\\end{split}$ in $( \\Sigma _{t_1}^{t_2})^c$ .", "Here $\\mathcal {P}$ is extended outside the original domain using the decomposition (REF ), by extending the coefficients of $\\mathcal {P}_0$ independently of $$ and those of $\\mathcal {P}_{\\mathrm {pert}}$ smoothy such that the estimates (REF ) are still satisfied.", "It 
follows that $_{\\mathrm {near}}:=-_{\\mathrm {far}}$ satisfies $\\begin{split}\\mathcal {P}_{\\mathrm {near}}-V_{\\mathrm {far}}_{\\mathrm {near}}=0,\\end{split}$ outside its original domain of definition.", "By a similar argument as in the proof of Lemma REF , we can then replace $\\Vert _{\\mathrm {far}}\\Vert _{LE(\\Sigma _{t_1}^{t_2})}$ and $\\Vert _{{\\mathrm {near}}}\\Vert _{LE(\\Sigma _{t_1}^{t_2})}$ in the estimates (REF ) and (REF ) by $\\Vert _{\\mathrm {far}}\\Vert _{LE}$ and $\\Vert _{{\\mathrm {near}}}\\Vert _{LE}$ , respectively, where $LE\\equiv LE(\\cup _{}\\Sigma _).$ In view of Lemma REF our task has reduced to estimating $\\Vert _{\\mathrm {near}}\\Vert _{L^2(K_{\\mathrm {ext}})}$ .", "The high frequency part of this error can already be absorbed by the $LE$ norm as shown in the next lemma.", "Lemma 7.7 Given $\\delta >0$ , if $N_0$ is sufficiently large then $P_{>N_0}_{\\mathrm {near}}:=_{\\mathrm {near}}- P_{\\le N_0}_{\\mathrm {near}}$ satisfies $\\begin{split}\\Vert P_{>N_0}_{\\mathrm {near}}\\Vert _{L^2(K_{{\\mathrm {ext}}})}\\le \\delta \\Vert _{\\mathrm {near}}\\Vert _{LE}.\\end{split}$ This would be immediate from the definition of $P_{\\ge N_0}$ and $\\Vert \\cdot \\Vert _{LE}$ if we had $\\Vert P_{>N_0}\\rho _{\\mathrm {near}}\\Vert _{L^2(K_{\\mathrm {ext}})}$ instead of $\\Vert P_{>N_0}_{\\mathrm {near}}\\Vert _{L^2(K_{\\mathrm {ext}})}$ (note that in $K_{\\mathrm {ext}}$ the coordinates $(t,\\rho ,\\omega )$ and $(,,)$ agree).", "To insert the extra factor of $\\rho $ we argue as follows.", "Let $u=P_{> N_0}$ .", "Then, with $\\chi \\equiv \\chi (\\rho )$ an appropriate cutoff, we need to estimate (recall that $|{\\tilde{h}}|\\simeq 1$ in $K_{\\mathrm {ext}}$ , in particular with no vanishing at $\\rho =0$ ) $\\begin{split}&\\int _{t_1}^{t_2}\\int _{\\mathbb {S}^{n-1}}\\int _{-\\infty }^\\infty u^2\\chi \\mathrm {d}\\rho \\,\\mathrm {d}\\omega \\mathrm {d}t= \\int _{t_1}^{t_2}\\int _{\\mathbb {S}^{n-1}}\\int _{-\\infty }^\\infty (\\partial _\\rho \\rho )u^2\\chi \\mathrm {d}\\rho \\,\\mathrm {d}\\omega \\mathrm {d}t\\\\&=-\\int _{t_1}^{t_2}\\int _{\\mathbb {S}^{n-1}}\\int _{-\\infty }^{\\infty } \\rho u^2\\partial _\\rho \\chi \\mathrm {d}\\rho \\,\\mathrm {d}\\omega \\mathrm {d}t-2\\int _{t_1}^{t_2}\\int _{\\mathbb {S}^{n-1}}\\int _{-\\infty }^{\\infty }\\rho u \\partial _\\rho u \\chi \\mathrm {d}\\rho \\,\\mathrm {d}\\omega \\mathrm {d}t.\\end{split}$ The first integral is supported away from $\\lbrace \\rho =0\\rbrace $ , so we can insert a factor of $\\rho $ making it an acceptable $L^2$ error.", "For the second integral we use $\\begin{split}|\\rho u \\partial _\\rho u| \\le C_\\epsilon \\rho ^2 u^2 + \\epsilon (\\partial _\\rho u)^2.\\end{split}$ Since the coefficient of $\\partial _\\rho u$ in the LE norm is non-degenerate we can absorb the last term on the right (note that $P_{> N_0}$ and $\\partial _\\rho $ commute and that $P_{>N_0}$ is bounded in $LE$ ).", "The first term on the right is exactly the term we had hoped for.", "It follows that $\\begin{split}\\Vert _{\\mathrm {near}}\\Vert _{LE}\\lesssim \\Vert \\Vert _{E(\\Sigma _{t_1})}+ \\Vert f\\Vert _{L^1L^2(\\Sigma _{t_1}^{t_2})}+\\Vert P_{\\le N_0}_{\\mathrm {near}}\\Vert _{L^2(K_{\\mathrm {ext}})}.\\end{split}$ Now to estimate $P_{\\le N_0}_{\\mathrm {near}}$ we again apply the near-far decomposition, this time with respect to $\\mathcal {P}_0$ .", "First note that $\\begin{split}\\Vert P_{\\le N_0}_{\\mathrm {near}}\\Vert _{L^2(K_{\\mathrm {ext}})}\\lesssim \\Vert P_{\\le N_0}_{\\mathrm 
{near}}\\Vert _{LE}.\\end{split}$ Let $_{{\\mathrm {near}},{\\mathrm {far}}}$ be defined by (note that the operator on the left-hand side is applied to $_{{\\mathrm {near}},{\\mathrm {far}}}$ while on right-hand side $_{\\mathrm {near}}$ and $_{\\mathrm {far}}$ appear and not $_{{\\mathrm {near}},{\\mathrm {far}}}$ ) $ \\begin{split}{\\left\\lbrace \\begin{array}{ll}(\\mathcal {P}_0-V_{\\mathrm {far}})_{{\\mathrm {near}},{\\mathrm {far}}} = -\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}}-V_{\\mathrm {far}}_{\\mathrm {far}}\\quad &\\mathrm {in~}\\Sigma _{t_1}^{t_2}\\\\(\\mathcal {P}_0-V_{\\mathrm {far}})_{{\\mathrm {near}},{\\mathrm {far}}} =-\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}}\\quad &\\mathrm {in~}(\\Sigma _{t_1}^{t_2})^c\\end{array}\\right.", "},\\qquad _{{\\mathrm {near}},{\\mathrm {far}}}\\vert _{\\Sigma _{t_1}}=0,\\end{split}$ so that $_{{\\mathrm {near}},{\\mathrm {near}}}:=_{\\mathrm {near}}-_{{\\mathrm {near}},{\\mathrm {far}}}$ satisfies $ \\begin{split}{\\left\\lbrace \\begin{array}{ll}\\mathcal {P}_0_{{\\mathrm {near}},{\\mathrm {near}}}=-V_{\\mathrm {far}}_{{\\mathrm {near}},{\\mathrm {far}}}\\quad &\\mathrm {~in~} \\Sigma _{t_1}^{t_2}\\\\(\\mathcal {P}_0-V_{\\mathrm {far}})_{{\\mathrm {near}},{\\mathrm {near}}}=0\\quad &\\mathrm {~in~} (\\Sigma _{t_1}^{t_2})^c\\end{array}\\right.", "},\\qquad _{{\\mathrm {near}},{\\mathrm {near}}}\\vert _{\\Sigma _{t_1}}=0.\\end{split}$ Since $P_{\\le N_0}$ commutes with $\\mathcal {P}_0$ and $V_{\\mathrm {far}}$ , we also have $\\begin{split}P_{\\le N_0}_{\\mathrm {near}}= P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {near}}}+P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\end{split}$ and $\\begin{split}&(\\mathcal {P}_0-V_{\\mathrm {far}})P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}=P_{\\le N_0}f_{{\\mathrm {near}},{\\mathrm {far}}},\\\\&\\mathcal {P}_0P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {near}}} =P_{\\le N_0}f_{{\\mathrm {near}},{\\mathrm {near}}},\\end{split}$ where $\\begin{split}f_{{\\mathrm {near}},{\\mathrm {far}}}:={\\left\\lbrace \\begin{array}{ll}-\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}}-V_{\\mathrm {far}}_{\\mathrm {far}}\\quad &\\mathrm {in~}\\Sigma _{t_1}^{t_2}\\\\ -\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}}\\quad &\\mathrm {in~}(\\Sigma _{t_1}^{t_2})^c\\end{array}\\right.", "},\\quad f_{{\\mathrm {near}},{\\mathrm {near}}}:={\\left\\lbrace \\begin{array}{ll}-V_{\\mathrm {far}}_{{\\mathrm {near}},{\\mathrm {far}}}\\quad &\\mathrm {in~}\\Sigma _{t_1}^{t_2}\\\\ V_{\\mathrm {far}}_{{\\mathrm {near}},{\\mathrm {near}}}\\quad &\\mathrm {in~}(\\Sigma _{t_1}^{t_2})^c\\end{array}\\right.", "}.\\end{split}$ By a slight abuse of notation we will sometimes write $\\begin{split}-P_{\\le N_0}(\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})-P_{\\le N_0}(V_{\\mathrm {far}}_{\\mathrm {far}})\\end{split}$ for $P_{\\le N_0}f_{{\\mathrm {near}},{\\mathrm {far}}}$ , and write the equation for $P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}$ simply as $\\begin{split}(\\mathcal {P}_0-V_{\\mathrm {far}})P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}=-P_{\\le N_0}(\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})-P_{\\le N_0}(V_{\\mathrm {far}}_{\\mathrm {far}}).\\end{split}$ The proof of the following lemma will occupy much of the remainder of this section.", "Lemma 7.8 $_{{\\mathrm {near}},{\\mathrm {far}}}$ satisfies $\\begin{split}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{L^2(K_{\\mathrm {ext}})}\\lesssim \\Vert f\\Vert _{L^1L^2(\\Sigma _{t_1}^{t_2})}+\\epsilon \\sup 
_{t_1\\le t \\le t_2}\\Vert \\Vert _{E(\\Sigma _{t})}+\\epsilon \\Vert _{\\mathrm {near}}\\Vert _{LE(\\Sigma _{t_1}^{t_2})}.\\end{split}$ We postpone the proof of this lemma and proceed to prove Proposition REF using its statement.", "Throughout the proof, we use an underline to denote the parameters, or other functions depending on the parameters, with values fixed at $=t_2$ .", "So for instance we write ${\\underline{\\ell }}=\\ell (t_2)$ and ${\\underline{h}}$ for $h$ with $\\ell $ replaced by $\\underline{\\ell }$ .", "Let ${R_1}>0$ be a large constant (see for instance Section REF ) so that $K_{\\mathrm {ext}}\\subseteq \\mathcal {R}_{t_1}^{t_2}:=\\cup _{\\in [t_1,t_2]}\\Sigma _\\cap \\lbrace \\le {R_1}\\rbrace .$ In view of (REF ) and Lemmas REF and REF , it suffices for us to prove the following estimate $\\begin{split}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {near}}}\\Vert _{L^2(K_{\\mathrm {ext}}})&\\lesssim \\sum _k\\Vert {}_k()\\Vert _{L^2_([t_1,t_2])}+ \\epsilon \\Vert \\Vert _{LE(\\mathcal {R}_{t_1}^{t_2})}\\\\&\\quad +\\Vert _{\\mathrm {far}}\\Vert _{LE}+\\Vert _{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE}+\\Vert \\Vert _{LE((\\Sigma _{t_1}^{t_2})^c)}.\\end{split}$ To simplify notation let $u=P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {near}}}$ and $g=-P_{\\le N_0}f_{{\\mathrm {near}},{\\mathrm {near}}}$ , so that the equation $\\begin{split}\\mathcal {P}_0 u = g\\end{split}$ is satisfied globally.", "We also recall that in the coordinates $({\\tilde{}},{\\tilde{}},{\\tilde{}})$ the operator $\\mathcal {P}_0$ takes the form $\\begin{split}\\mathcal {P}_0 = -\\partial _{\\tilde{}}^2+{\\tilde{\\Delta }}+V({\\tilde{}}),\\end{split}$ where ${\\tilde{\\Delta }}$ denotes the Laplacian on the Riemannian Catenoid in polar coordinates: $\\begin{split}{\\tilde{\\Delta }}=\\frac{1}{\\langle {\\tilde{}}\\rangle ^{n-1}|F_{{\\tilde{}}}|}\\partial _{\\tilde{}}(\\langle {\\tilde{}}\\rangle ^{n-1}|F_{\\tilde{}}|^{-1}\\partial _\\rho )+\\frac{1}{\\langle {\\tilde{}}\\rangle ^2}{\\mathring{{\\Delta }}}.\\end{split}$ By (REF ), in the region $\\lbrace \\le {R_1}\\rbrace $ the two coordinates are related by $\\begin{split}{\\tilde{}}=\\underline{\\gamma }^{-1}-{\\underline{\\ell }}\\cdot F(,),\\quad {\\tilde{}}=,\\quad {\\tilde{}}=.\\end{split}$ We will use ${\\tilde{y}}$ for the spatial coordinates $({\\tilde{}},{\\tilde{}})$ and use $\\langle \\cdot ,\\cdot \\rangle _{\\tilde{y}}$ for the $L^2$ pairing with respect toFollowing our convention in this section, by a slight abuse of notation, we write $\\sqrt{|{\\underline{{\\tilde{h}}}}|}$ rather than $\\sqrt{|{\\underline{h}}|}$ to emphasize that we are working in the $({\\tilde{}},{\\tilde{}},{\\tilde{}})$ coordinates.", "$\\sqrt{|{\\underline{h}}|}\\mathrm {d}{\\tilde{y}}$ on the $\\lbrace {\\tilde{}}=\\mathrm {constant}\\rbrace $ hypersurfaces.", "On these hypersurfaces we define the spectral projection $\\mathbb {P}_c$ by $\\begin{split}u= \\mathbb {P}_cu+\\sum _{j=1}^{n} \\langle u,_j\\rangle _{\\tilde{y}}_j+\\langle u,_\\mu \\rangle _{\\tilde{y}}_\\mu ,\\end{split}$ where $_j$ , $j=1,\\dots ,n$ , denote the eigenfunctions of ${\\tilde{\\Delta }}+V$ with eigenvalue zero, and $_{\\mu }$ the eigenfunction with eigenvalue $-\\mu ^2<0$ .", "In what follows, unless otherwise specified, when summing over the eigenfunctions $_j$ we always let $j$ vary over $\\lbrace \\mu ,1,\\dots ,n\\rbrace $ without distinguishing between the zero and $-\\mu ^2$ eigenvalues.", "As in in the figure below, [scale=1,transform shape] [->] (0,-0.25) – (0,2) 
node[right] $$ ; [name path= C, red, very thick,decorate] (-1,0.5) – (1,0.5) node[right] $=t_1$ ; [name path = D, red, very thick,-,decorate] (-1,1) node[left] $=t_2$ – (1,1) ; [red,very thick] (1,1) – (1,0.5); [red, very thick] (-1,1) – (-1,0.5) (-0.5,0.76) node$\\mathcal {R}_{t_1}^{t_2}$ ; [of=C and D]red, opacity=0.1; (A) at (-3,2.25); (B) at (1,1); (C) at (3,0.75); [name path=O, thick,blue] plot [smooth] coordinates (A) (B) (C) ; (D) at (-3,1.2); (E) at (-1,0.5); (F) at (3,-0.25); [name path = U, thick,blue] plot [smooth] coordinates (D) (E) (F) ; right] at (C) ${\\color {blue} {\\tilde{}}={\\tilde{t}}_2}$ ; right] at (F) ${\\color {blue} {\\tilde{}}={\\tilde{t}}_1}$ ; [of=O and U]blue, opacity=0.1; let ${\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2}=\\lbrace {\\tilde{t}}_1\\le {\\tilde{}}\\le {\\tilde{t}}_2\\rbrace $ be the smallest infinite rectangle containing $\\mathcal {R}_{t_1}^{t_2}$ , and observe that (note that the implicit constant is independent of ${R_1}$ and rather depends on the size of $K_{\\mathrm {ext}}$ which we can choose to be much smaller than ${R_1}$ ) $\\begin{split}\\Vert u\\Vert _{L^2(K_{\\mathrm {ext}})}\\lesssim \\Vert u\\Vert _{LE({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}.\\end{split}$ Let $a_j:=\\langle u,_j\\rangle _{\\tilde{y}}$ , $a_\\mu :=\\langle u,_\\mu \\rangle _{\\tilde{y}}$ and $a_j^{\\prime }:=\\frac{\\mathrm {d}}{\\mathrm {d}{\\tilde{}}}\\langle u,_j\\rangle _{\\tilde{y}}$ , $a_\\mu ^{\\prime }:=\\frac{\\mathrm {d}}{\\mathrm {d}{\\tilde{}}}\\langle u,_\\mu \\rangle _{\\tilde{y}}$ , and denote by ${\\tilde{I}}$ the time interval $\\lbrace {\\tilde{t}}_1\\le {\\tilde{}}\\le {\\tilde{t}}_2\\rbrace $ .", "We now apply the LED estimate Proposition REF , using the second and fourth estimates in the statement there.", "Note that since $u=P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {near}}}$ , we can drop the time derivative from the last term on the right-hand side of the second estimate in Proposition REF and absorb the corresponding error by the left-hand side of the fourth estimate.", "Using this argument (recall that $j$ varies over $\\lbrace \\mu ,1,\\dots ,n\\rbrace $ ), $\\begin{split}\\Vert u\\Vert _{LE({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}&\\lesssim \\Vert \\mathbb {P}_cu\\Vert _{LE({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}+\\sum _j(\\Vert a_j\\Vert _{L^2_{\\tilde{t}}({\\tilde{I}})}+\\Vert a_j^{\\prime }\\Vert _{L^2_{\\tilde{t}}({\\tilde{I}})})\\\\&\\lesssim \\Vert \\mathbb {P}_cg\\Vert _{LE^\\ast ({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}+\\sum _j(\\Vert a_j\\Vert _{L^2_{\\tilde{t}}({\\tilde{I}})}+\\Vert a_j^{\\prime }\\Vert _{L^2_{\\tilde{t}}({\\tilde{I}})})\\\\&\\lesssim \\Vert g\\Vert _{L^2({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}+\\sum _j(\\Vert a_j\\Vert _{L^2_{\\tilde{t}}({\\tilde{I}})}+\\Vert a_j^{\\prime }\\Vert _{L^2_{\\tilde{t}}({\\tilde{I}})}).\\end{split}$ Here, to pass to the last line, we have used that $\\langle {\\tilde{y}}\\rangle ^{\\frac{1+\\alpha }{2}}_j\\in L^2_{\\tilde{y}}$ (which holds for $n\\ge 4$ ) to bound $\\begin{split}\\Vert \\mathbb {P}_cg\\Vert _{LE^\\ast ({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}\\le \\Vert g\\Vert _{LE^\\ast ({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}+\\sum _j\\Vert \\langle g,_j\\rangle _{\\tilde{y}}\\Vert _{L^2_{\\tilde{t}}[{\\tilde{t}}_1,{\\tilde{t}}_2]}\\Vert \\langle y\\rangle ^{\\frac{1+\\alpha }{2}}_j\\Vert 
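To spell out the elementary step behind this last bound (writing, for this remark only, $\psi _j$ for the eigenfunctions entering the definition of $\mathbb {P}_c$, so that $\mathbb {P}_cg=g-\sum _j\langle g,\psi _j\rangle _{\tilde{y}}\psi _j$, and assuming the normalization $\Vert \psi _j\Vert _{L^2_{\tilde{y}}}\lesssim 1$), Cauchy–Schwarz on each fixed-time slice gives $\begin{split}\Vert \langle g,\psi _j\rangle _{\tilde{y}}\Vert _{L^2_{\tilde{t}}[{\tilde{t}}_1,{\tilde{t}}_2]}\le \Vert \psi _j\Vert _{L^2_{\tilde{y}}}\Vert g\Vert _{L^2({\widetilde{\mathcal {R}}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})}\lesssim \Vert g\Vert _{L^2({\widetilde{\mathcal {R}}}_{{\tilde{t}}_1}^{{\tilde{t}}_2})},\end{split}$ which, together with the finiteness of $\Vert \langle {\tilde{y}}\rangle ^{\frac{1+\alpha }{2}}\psi _j\Vert _{L^2_{\tilde{y}}}$ for $n\ge 4$, controls the sum in the middle term above.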
_{L^2_{\\tilde{y}}}\\lesssim \\Vert g\\Vert _{L^2({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}.\\end{split}$ To treat the last term on the right-hand side of (REF ) we introduce some more notation.", "For $k=\\mu ,1,\\dots ,n$ , let ${\\underline{Z}}_k=\\chi _k$ where $\\chi \\equiv \\chi ({\\tilde{}})$ is supported in $\\lbrace {\\tilde{}}<{R_1}/2\\rbrace $ .", "Then let $\\begin{split}&{\\tilde{\\underline{\\Omega }}}_k(v):=-\\langle \\partial _{\\tilde{}}v,{\\underline{Z}}_k\\rangle _{\\tilde{y}},\\qquad {\\tilde{\\underline{\\Omega }}}_{n+k}(v):=\\langle v,{\\underline{Z}}_k\\rangle _{\\tilde{y}},\\quad k=1,\\dots ,n\\\\&{\\tilde{\\underline{\\Omega }}}^{+}_{\\mu }(v):=\\langle v,\\mu {\\underline{Z}}_\\mu \\rangle _{\\tilde{y}}-\\langle \\partial _{\\tilde{}}v,{\\underline{Z}}_k\\rangle _{\\tilde{y}},\\qquad {\\tilde{\\underline{\\Omega }}}^{-}_{\\mu }(v):=-\\langle v,\\mu {\\underline{Z}}_\\mu \\rangle _{\\tilde{y}}-\\langle \\partial _{\\tilde{}}v,{\\underline{Z}}_k\\rangle _{\\tilde{y}}.\\end{split}$ For any $c$ we define $\\mathcal {T}_c$ to be the intersection of $\\lbrace \\le {R_1}\\rbrace $ with the region bounded between $\\lbrace {\\tilde{}}=c\\rbrace $ and $\\lbrace =\\underline{\\gamma }c\\rbrace $ , and let $\\mathcal {T}_{c,1}:=\\mathcal {T}_c\\cap \\lbrace \\le \\underline{\\gamma }c\\rbrace $ and $\\mathcal {T}_{c,2}:=\\mathcal {T}_c\\cap \\lbrace \\ge \\underline{\\gamma }c\\rbrace $ .", "See the figure below.", "[scale=1,transform shape] [->] (0,-1.5) – (0,1.5) node[right] $$ ; [name path = O, anchor=center,red, very thick,decorate] (-2,0)–(-1.5,0) node[above,black]$\\mathcal {T}_{c,2}$ – (1.5,0) node[below,black]$\\mathcal {T}_{c,1}$ – (2,0); [red, very thick,decorate] (-2.25,0)–(-2,0) ; [red, very thick,decorate] (2,0) – (2.25,0)node[right] $=\\underline{\\gamma }c$ ; [dashed,thick] (2,1)node[left]$={R_1}$ –(2,-1.5); [dashed,thick] (-2,1)–(-2,-1)node[right]$=-{R_1}$ ; (A) at (-2,1); (B) at (0,0); (C) at (2,-1.5); [name path = U, thick,blue] plot [smooth] coordinates (A) (B) (C) ; right] at (C) ${\\color {blue} {\\tilde{}}=c}$ ; [of=O and U]blue, opacity=0.1; Finally, with ${\\underline{n}}$ denoting the normal (with respect to ${\\underline{h}}$ ) to $\\lbrace =c\\rbrace $ , we let $\\begin{split}&{\\underline{\\Omega }}_k(v)(c)=-\\int _{\\lbrace = c\\rbrace }{\\underline{Z}}_k{\\underline{n}}^\\alpha \\partial _\\alpha v \\sqrt{|{\\underline{h}}|}\\mathrm {d}y,\\quad {\\underline{\\Omega }}_{n+k}(v(c))=\\int _{\\lbrace = c\\rbrace }v{\\underline{Z}}_k{\\underline{n}}^\\alpha \\partial _\\alpha {\\tilde{}}\\sqrt{|{\\underline{h}}|}\\mathrm {d}y,\\quad k=1,\\dots ,n,\\\\&{\\underline{\\Omega }}_{\\mu }^{\\pm }(v)(c)=\\int _{\\lbrace = c\\rbrace }(\\pm \\mu v {\\underline{Z}}_\\mu \\partial _\\alpha {\\tilde{}}-{\\underline{Z}}_\\mu \\partial _\\alpha v){\\underline{n}}^\\alpha \\sqrt{|{\\underline{h}}|}\\mathrm {d}y.\\end{split}$ Returning to the last term on the right-hand side of (REF ) note that $\\begin{split}&{\\tilde{\\underline{\\Omega }}}_k(u)={\\tilde{\\underline{\\Omega }}}_k(\\mathbb {P}_cu)-\\sum _j a_j^{\\prime }\\langle _j,{\\underline{Z}}_k\\rangle _{\\tilde{y}},\\quad {\\tilde{\\underline{\\Omega }}}_{n+k}(u)={\\tilde{\\underline{\\Omega }}}_{n+k}(\\mathbb {P}_cu)+\\sum _j a_j\\langle _j,{\\underline{Z}}_k\\rangle _{\\tilde{y}},\\quad k=1,\\dots ,n,\\\\&{\\tilde{\\underline{\\Omega }}}_{\\mu }^{\\pm }(u)={\\tilde{\\underline{\\Omega }}}_{\\mu }^{\\pm }(\\mathbb {P}_cu)+\\sum _j\\big (\\pm \\mu a_j\\langle _j, 
{\\underline{Z}}_{\\mu }\\rangle _{\\tilde{y}}-a_j^{\\prime }\\langle _j,{\\underline{Z}}_{\\mu }\\rangle _{\\tilde{y}}\\big ).\\end{split}$ Viewing this as a linear system for $a_j,a_j^{\\prime }$ , $j\\in \\lbrace \\mu ,1,\\dots ,n\\rbrace $ , with invertible coefficient matrix, and since ${\\underline{Z}}_k$ , $k\\in \\lbrace \\mu ,1,\\dots ,n\\rbrace $ , are compactly supported, we get (below, each sum in $k$ is over $\\pm \\mu ,1,\\dots ,2n$ ) $\\begin{split}\\sum _j(\\Vert a_j\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}+\\Vert a_j^{\\prime }\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})})&\\lesssim \\sum _k(\\Vert {\\tilde{\\underline{\\Omega }}}_k(u)\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}+\\Vert {\\tilde{\\underline{\\Omega }}}_{k}(\\mathbb {P}_cu)\\Vert _{L^2_{\\tilde{t}}({\\tilde{I}})})\\\\&\\lesssim \\Vert \\mathbb {P}_cu\\Vert _{LE({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}+\\sum _k\\Vert {\\tilde{\\underline{\\Omega }}}_k(u)\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}\\\\&\\lesssim \\Vert g\\Vert _{L^2({\\widetilde{\\mathcal {R}}}_{{\\tilde{t}}_1}^{{\\tilde{t}}_2})}+\\sum _k\\Vert {\\tilde{\\underline{\\Omega }}}_k(u)\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})},\\end{split}$ where to pass to the last line we have argued as in (REF ).", "It remains to estimate $\\Vert {\\tilde{\\underline{\\Omega }}}_k(u)\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}$ .", "Note that by the divergence theorem (note that if ${\\underline{{\\tilde{n}}}}$ denotes the normal to $\\lbrace {\\tilde{}}=\\mathrm {constant}\\rbrace $ , then $\\partial _{{\\tilde{}}}v$ in (REF ) can be written as ${\\underline{{\\tilde{n}}}}^\\alpha \\partial _\\alpha v$ ), for $k=1,\\dots ,n$ , $\\begin{split}{\\tilde{\\underline{\\Omega }}}_k(u)({\\tilde{}})={\\underline{\\Omega }}_k(u)(\\underline{\\gamma }{\\tilde{}})+\\iint _{\\mathcal {T}_{{\\tilde{}}}}({\\underline{Z}}_k\\mathcal {P}_0u-V{\\underline{Z}}_k u+({\\underline{{\\tilde{h}}}}^{-1})^{\\alpha \\beta }\\partial _\\alpha u \\partial _\\beta {\\underline{Z}}_k)\\sqrt{|{\\underline{{\\tilde{h}}}}|}\\mathrm {d}{\\tilde{y}}\\mathrm {d}{\\tilde{}}^{\\prime },\\end{split}$ so $\\begin{split}\\Vert {\\tilde{\\underline{\\Omega }}}_k(u)\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}&\\le \\Vert {\\underline{\\Omega }}_k(u)(\\underline{\\gamma }{\\tilde{}})\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}\\\\&\\quad +\\Big \\Vert \\iint _{\\mathcal {T}_{{\\tilde{}}}}({\\underline{Z}}_kg-V{\\underline{Z}}_k u+({\\underline{{\\tilde{h}}}}^{-1})^{\\alpha \\beta }\\partial _\\alpha u \\partial _\\beta {\\underline{Z}}_k)\\sqrt{|{\\underline{{\\tilde{h}}}}|}\\mathrm {d}{\\tilde{y}}\\mathrm {d}{\\tilde{}}^{\\prime }\\Big \\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}.\\end{split}$ For the first term, after a change of variables, and recalling that $u=P_{\\le N_0}\\phi _{{\\mathrm {near}},{\\mathrm {near}}}= P_{\\le N_0}(-_{\\mathrm {far}}-_{{\\mathrm {near}},{\\mathrm {far}}})$ , $\\begin{split}\\Vert {\\underline{\\Omega }}_k(u)(\\underline{\\gamma }{\\tilde{}})\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}&\\lesssim \\Vert {\\underline{\\Omega }}_k(u)()\\Vert _{L^2_([\\underline{\\gamma }{\\tilde{t}}_1,\\underline{\\gamma }{\\tilde{t}}_2])}\\\\&\\lesssim \\Vert {\\underline{\\Omega }}_k(P_{\\le N_0})\\Vert _{L^2_([\\underline{\\gamma }{\\tilde{t}}_1,\\underline{\\gamma }{\\tilde{t}}_2])}+\\Vert {\\underline{\\Omega }}_k(P_{\\le N_0}(_{{\\mathrm {far}}}+_{{\\mathrm {near}},{\\mathrm {far}}}))\\Vert _{L^2_([\\underline{\\gamma }{\\tilde{t}}_1,\\underline{\\gamma }{\\tilde{t}}_2])}\\\\&\\lesssim \\Vert 
{\\underline{\\Omega }}_k(P_{\\le N_0})\\Vert _{L^2_t([\\underline{\\gamma }{\\tilde{t}}_1,\\underline{\\gamma }{\\tilde{t}}_2])}+\\Vert _{\\mathrm {far}}\\Vert _{LE}+\\Vert _{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE}.\\end{split}$ Here to pass to the last line we have used Lemmas REF and REF .", "To estimate $\\Vert {\\underline{\\Omega }}_k(P_{\\le N_0}\\phi )\\Vert _{L^2_t([\\underline{\\gamma }{\\tilde{t}}_1,\\underline{\\gamma }{\\tilde{t}}_2])}$ note that since ${\\underline{Z}}_k$ and ${\\underline{h}}$ are independent of $$ (recall that $(,)=({\\tilde{}},{\\tilde{}})$ in the support of ${\\underline{Z}}_k$ ), $\\begin{split}\\Vert {\\underline{\\Omega }}_k(P_{\\le N_0})\\Vert _{L^2_([\\underline{\\gamma }{\\tilde{t}}_1,\\underline{\\gamma }{\\tilde{t}}_2])}&=\\Vert P_{\\le N_0}{\\underline{\\Omega }}_k()\\Vert _{L^2_([\\underline{\\gamma }{\\tilde{t}}_1,\\underline{\\gamma }{\\tilde{t}}_2])}\\lesssim \\Vert {\\underline{\\Omega }}_k()\\Vert _{L^2_}\\\\&\\le \\Vert {\\underline{\\Omega }}_k()-{}_k()\\Vert _{L^2_([t_1,t_2])}+\\Vert {}_k()\\Vert _{L^2_([t_1,t_2])}+\\Vert \\Vert _{LE((\\Sigma _{t_1}^{t_2})^c)}.\\end{split}$ Since $\\begin{split}\\Vert {\\underline{\\Omega }}_k()-{}_k()\\Vert _{L^2_([t_1,t_2])}\\lesssim \\epsilon \\Vert \\Vert _{LE(\\mathcal {R}_{t_1}^{t_2})},\\end{split}$ combining the last few estimates we get, $\\begin{split}\\Vert {\\underline{\\Omega }}_k(u)(\\underline{\\gamma }{\\tilde{t}})\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}&\\lesssim \\Vert {}_k()\\Vert _{L^2_([t_1,t_2])}+ \\epsilon \\Vert \\Vert _{LE(\\mathcal {R}_{t_1}^{t_2})}\\\\&\\quad +\\Vert _{\\mathrm {far}}\\Vert _{LE}+\\Vert _{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE}+\\Vert \\Vert _{LE((\\Sigma _{t_1}^{t_2})^c)}.\\end{split}$ To treat the second term on the right in (REF ) we write $\\mathcal {T}_{\\tilde{}}=\\mathcal {T}_{{\\tilde{}},1}\\cup \\mathcal {T}_{{\\tilde{}},2}$ and treat the two regions separately.", "The estimates in these regions are similar so we carry out the details only for $\\mathcal {T}_{{\\tilde{}},1}$ .", "For each $c$ define ${\\tilde{}}_{\\mathrm {max}}(c)$ minimally and ${\\tilde{}}_{\\mathrm {min}}(c)$ maximally such that ${\\tilde{}}\\in [{\\tilde{}}_{\\mathrm {min}}(c),{\\tilde{}}_{\\mathrm {max}}(c)]$ in $\\mathcal {T}_{c,1}$ and let $\\begin{split}{\\tilde{}}_{\\mathrm {max}}:=\\sup _{{\\tilde{}}\\in [{\\tilde{t}}_1,{\\tilde{t}}_2]}{\\tilde{}}_{\\mathrm {max}}({\\tilde{}})\\quad {\\ \\ \\text{and} \\ \\ }\\quad {\\tilde{}}_{{\\mathrm {min}}}:=\\inf _{{\\tilde{}}\\in [{\\tilde{t}}_1,{\\tilde{t}}_2]}{\\tilde{}}_{\\mathrm {min}}({\\tilde{}}).\\end{split}$ For each ${\\tilde{}}$ let $\\begin{split}w(\\sigma ):=\\int _{\\lbrace {\\tilde{}}={\\tilde{}}\\rbrace }\\big |{\\underline{Z}}_kg-V{\\underline{Z}}_k u+({\\underline{{\\tilde{h}}}}^{-1})^{\\alpha \\beta }\\partial _\\alpha u \\partial _\\beta {\\underline{Z}}_k\\big |\\sqrt{|{\\underline{{\\tilde{h}}}}|}\\mathrm {d}{\\tilde{y}}.\\end{split}$ We can then bound the contribution of $\\mathcal {T}_{{\\tilde{}},1}$ to the last term on the right in (REF ) as $\\begin{split}\\Big (\\int _{{\\tilde{t}}_3}^{{\\tilde{t}}_4}\\Big (\\int _{{\\tilde{}}_{\\mathrm {min}}({\\tilde{}})}^{{\\tilde{}}_{\\mathrm {max}}({\\tilde{}})}w({\\tilde{}})\\mathrm {d}{\\tilde{}}\\Big )^2\\mathrm {d}{\\tilde{}}\\Big )^{\\frac{1}{2}}=\\Big (\\int _{{\\tilde{t}}_1}^{{\\tilde{t}}_2}\\Big (\\int _{{\\tilde{}}_{\\mathrm {min}}}^{{\\tilde{}}_{{\\mathrm {max}}}}\\chi _{\\lbrace {\\tilde{}}_{{\\mathrm {min}}}({\\tilde{}})\\le {\\tilde{}}\\le 
{\\tilde{}}_{\\mathrm {max}}({\\tilde{}})\\rbrace }w({\\tilde{}})\\mathrm {d}{\\tilde{}}\\Big )^2\\mathrm {d}{\\tilde{}}\\Big )^{\\frac{1}{2}},\\end{split}$ where we have used $\\chi _S$ to denote the characteristic function of a set $S$ .", "Applying Schur's test and noting that $\\begin{split}\\Vert \\chi _{\\lbrace {\\tilde{}}_{{\\mathrm {min}}}({\\tilde{}})\\le {\\tilde{}}\\le {\\tilde{}}_{\\mathrm {max}}({\\tilde{}})\\rbrace }\\Vert _{L^\\infty _{\\tilde{}}L^1_{\\tilde{}}\\cap L^\\infty _{\\tilde{}}L_{\\tilde{}}^1}\\lesssim _{{R_1}}|\\ell |\\lesssim \\epsilon ,\\end{split}$ we get $\\begin{split}&\\Big \\Vert \\iint _{\\mathcal {T}_{{\\tilde{t}},1}}\\big ({\\underline{Z}}_kg-V{\\underline{Z}}_k u+({\\underline{{\\tilde{h}}}}^{-1})^{\\alpha \\beta }\\partial _\\alpha u \\partial _\\beta {\\underline{Z}}_k\\big )\\sqrt{|{\\underline{{\\tilde{h}}}}|}\\mathrm {d}{\\tilde{y}}\\mathrm {d}{\\tilde{}}^{\\prime }\\Big \\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}\\\\&\\lesssim \\epsilon \\Big (\\int _{{\\tilde{}}_{\\mathrm {min}}}^{{\\tilde{}}_{\\mathrm {max}}}\\Big (\\int _{\\lbrace {\\tilde{}}={\\tilde{}}^{\\prime }\\rbrace } \\big |{\\underline{Z}}_kg-V{\\underline{Z}}_k u+({\\underline{{\\tilde{h}}}}^{-1})^{\\alpha \\beta }\\partial _\\alpha u \\partial _\\beta {\\underline{Z}}_k\\big |\\sqrt{|{\\underline{{\\tilde{h}}}}|}\\mathrm {d}{\\tilde{y}}\\Big )^2\\mathrm {d}{\\tilde{}}^{\\prime }\\Big )^{\\frac{1}{2}}\\\\&\\lesssim \\epsilon \\big (\\Vert u\\Vert _{LE}+\\Vert g\\Vert _{L^2(\\lbrace \\le {R_1}\\rbrace }\\big ).\\end{split}$ Combining with (REF ) and (REF ) we have shown that $\\begin{split}\\Vert {\\tilde{\\underline{\\Omega }}}_k(u)\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}&\\lesssim {}_k()\\Vert _{L^2_([t_1,t_2])}+ \\epsilon \\Vert \\Vert _{LE(\\mathcal {R}_{t_1}^{t_2})}\\\\&\\quad +\\Vert _{\\mathrm {far}}\\Vert _{LE}+\\Vert _{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE}+\\Vert \\Vert _{LE((\\Sigma _{t_1}^{t_2})^c)}.\\end{split}$ The estimate for $\\Vert {\\tilde{\\underline{\\Omega }}}_{n+k}(u)\\Vert _{L^2_{\\tilde{}}({\\tilde{I}})}$ is similar except that when using the divergence identity to relate ${\\tilde{\\underline{\\Omega }}}_k(u)({\\tilde{}})$ and ${\\underline{\\Omega }}(u)(\\underline{\\gamma }{\\tilde{}})$ , analogously to (REF ), we need to integrate the quantity $\\begin{split}v{\\underline{Z}}_k\\Box {\\tilde{}}+({\\underline{{\\tilde{h}}}}^{-1})^{\\alpha \\beta }\\partial _\\alpha {\\tilde{}}\\partial _\\beta (v{\\underline{Z}}_k)\\end{split}$ over $\\mathcal {T}_{{\\tilde{}}}$ .", "The estimates for ${\\tilde{\\underline{\\Omega }}}_{\\mu }^{\\pm }(u)$ are obtained similarly, completing the proof of (REF ).", "It remains to prove Lemma REF .", "For this we will use the following technical lemma.", "Lemma 7.9 Suppose $a$ , $b$ , and $c$ , satisfy $\\begin{split}\\sup _y(\\langle \\rangle ^{\\frac{1+\\alpha }{2}}|a|+|b|+|c|)\\lesssim \\epsilon \\langle \\rangle ^{-\\gamma },\\end{split}$ for some $\\gamma >1$ .", "Then $\\begin{split}\\mathcal {X}&:=\\Vert [P_{\\le N_0},a]\\partial ^2__{\\mathrm {near}}\\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2+\\Vert [P_{\\le N_0},b]\\partial ^2_y _{\\mathrm {near}}\\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2\\\\&\\quad +\\Vert [P_{\\le N_0},c]\\partial ^2_{y} _{\\mathrm {near}}\\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2\\end{split}$ satisfies $\\begin{split}\\mathcal {X}\\lesssim \\epsilon \\Vert _{\\mathrm {near}}\\Vert _{LE}^2+\\epsilon \\Vert _{\\mathrm {far}}\\Vert _{LE(I)}^2+\\epsilon \\sup _\\Vert _{\\mathrm {near}}\\Vert _{E(\\Sigma 
_)}^2.\\end{split}$ To simplify notation we will write $\\phi $ instead of $_{\\mathrm {near}}$ during the proof.", "Note that since the coefficients $\\mathring{}_q^{\\mu \\nu }$ of $\\mathcal {P}_{\\mathrm {pert}}$ satisfy the conditions assumed on $a$ , $b$ , $c$ , we can simultaneously carry out our estimates with $a$ , $b$ , $c$ , replaced by $\\mathring{}_q^{\\mu \\nu }$ .", "This will allow us to absorb small multiplies of the quantity we are trying to estimate.", "With this in mind, to simplify notation, we simply assume that $a$ , $b$ , and $c$ are equal to $\\mathring{}_q^{}$ , $\\mathring{}_q^{yy}$ , and $\\mathring{}_q^{y}$ , respectively.", "We will repeatedly use the following standard weighted commutator estimate $\\begin{split}\\Vert [P_{N},h]g\\Vert _{L^r_}\\lesssim 2^{-N}(1+2^{-\\beta N})\\Vert \\langle \\rangle \\partial _h\\Vert _{L^q_}\\Vert \\langle \\rangle ^{-\\beta }g\\Vert _{L^p_},\\qquad \\frac{1}{p}+\\frac{1}{q}=\\frac{1}{r}.\\end{split}$ Indeed, with $\\chi _N()=2^N\\chi (2^N)$ , for an appropriate Schwartz function $\\chi $ we have $\\begin{split}\\Vert [P_{N},h]g\\Vert _{L^r_}&\\le 2^{-N}\\int _{\\mathbb {R}}\\int _0^1\\chi _N(s)\\Vert \\big (\\langle -s\\rangle ^\\alpha h^{\\prime }(-ts))(\\langle -s\\rangle ^{-\\alpha }g(-s))\\Vert _{L^r_}\\mathrm {d}t \\mathrm {d}s\\\\&\\le 2^{-N}\\int _{\\mathbb {R}} \\chi _N(s)\\int _0^1\\Vert \\langle \\rangle ^\\beta h\\Vert _{L^q_}\\Vert \\langle \\rangle ^{-\\beta }g\\Vert _{L^p_}\\Vert \\langle -(1-t)s\\rangle ^\\beta \\langle \\rangle ^{-\\beta }\\Vert _{L^\\infty _}\\mathrm {d}t \\mathrm {d}s\\\\&\\lesssim 2^{-N}\\Vert \\langle \\rangle ^\\beta h^{\\prime }\\Vert _{L^q_}\\Vert \\langle \\rangle ^{-\\beta }g\\Vert _{L^p_}\\int _{\\mathbb {R}}\\chi _N(s)(1+2^{-\\beta N}|s|^\\beta )\\mathrm {d}s,\\end{split}$ which proves the commutator estimate.", "In our applications $N\\ge 1$ and we use $N$ instead of $ N^{-1}(1+N^{-\\beta })$ .", "Also note that the same estimate holds if $P_N$ is replaced by $P_{\\le N}$ , ${\\bf P}_N$ , or ${\\bf P}_{\\le N}$ .", "Let $\\begin{split}\\mathcal {A}&:=\\Vert [P_{\\le N_0},a]\\partial ^2_\\phi \\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2+\\sum _{N\\ge N_0}\\Vert [P_N,a]\\partial ^2_\\phi \\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2\\\\&\\quad +\\Vert [P_{\\le N_0},b]\\partial ^2_y \\phi \\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2+\\sum _{N\\ge N_0}\\Vert [P_N,b]\\partial ^2_y\\phi \\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2\\\\&\\quad +\\Vert [P_{\\le N_0},c]\\partial ^2_{y} \\phi \\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2+\\sum _{N\\ge N_0}\\Vert [P_N,c]\\partial ^2_{y}\\phi \\Vert _{L^1_L^2_y\\cap L^2_{,y}(I)}^2.\\end{split}$ Since $\\mathcal {A}\\ge \\mathcal {X}$ , it suffices to prove the estimate (the extra terms are added to absorb error terms that arise in the estimates) $\\begin{split}\\mathcal {A}\\lesssim \\epsilon \\Vert _{\\mathrm {near}}\\Vert _{LE}^2+\\epsilon \\Vert _{\\mathrm {far}}\\Vert _{LE(I)}^2+\\epsilon \\sup _\\Vert _{\\mathrm {near}}\\Vert _{E(\\Sigma _)}^2.\\end{split}$ Let us start with the second terms on each line of (REF ).", "Let ${\\bf P}_N=\\sum _{M=N-4}^{M+4}P_M$ and ${\\bf P}_{\\le N}=P_{\\le N+4}$ be fattened projections, and decompose $\\begin{split}[P_N,a]\\partial ^2_\\phi &= \\sum _{M=N+1}^{N+4}[P_N, {\\bf P}_{\\le N}a]\\partial ^2_P_M\\phi +[P_N,{\\bf P}_{N}a]\\partial ^2_P_{\\le N}\\phi +\\sum _{M> N+4}[P_N,{\\bf P}_Ma]\\partial ^2_P_M\\phi \\\\&\\quad +[P_N,P_{\\le N+4}a]\\partial ^2_P_{\\le N}\\phi +\\sum _{M>N+4}[P_N,P_Ma]\\partial ^2_P_{\\le N+3}\\phi 
Let $\begin{split}\mathcal {A}&:=\Vert [P_{\le N_0},a]\partial ^2_\phi \Vert _{L^1_L^2_y\cap L^2_{,y}(I)}^2+\sum _{N\ge N_0}\Vert [P_N,a]\partial ^2_\phi \Vert _{L^1_L^2_y\cap L^2_{,y}(I)}^2\\&\quad +\Vert [P_{\le N_0},b]\partial ^2_y \phi \Vert _{L^1_L^2_y\cap L^2_{,y}(I)}^2+\sum _{N\ge N_0}\Vert [P_N,b]\partial ^2_y\phi \Vert _{L^1_L^2_y\cap L^2_{,y}(I)}^2\\&\quad +\Vert [P_{\le N_0},c]\partial ^2_{y} \phi \Vert _{L^1_L^2_y\cap L^2_{,y}(I)}^2+\sum _{N\ge N_0}\Vert [P_N,c]\partial ^2_{y}\phi \Vert _{L^1_L^2_y\cap L^2_{,y}(I)}^2.\end{split}$ Since $\mathcal {A}\ge \mathcal {X}$, it suffices to prove the estimate (the extra terms are added to absorb error terms that arise in the estimates) $\begin{split}\mathcal {A}\lesssim \epsilon \Vert \phi _{\mathrm {near}}\Vert _{LE}^2+\epsilon \Vert \phi _{\mathrm {far}}\Vert _{LE(I)}^2+\epsilon \sup _{}\Vert \phi _{\mathrm {near}}\Vert _{E(\Sigma _{})}^2.\end{split}$ Let us start with the second terms on each line of (REF ). Let ${\bf P}_N=\sum _{M=N-4}^{N+4}P_M$ and ${\bf P}_{\le N}=P_{\le N+4}$ be fattened projections, and decompose $\begin{split}[P_N,a]\partial ^2_\phi &= \sum _{M=N+1}^{N+4}[P_N, {\bf P}_{\le N}a]\partial ^2_P_M\phi +[P_N,{\bf P}_{N}a]\partial ^2_P_{\le N}\phi +\sum _{M> N+4}[P_N,{\bf P}_Ma]\partial ^2_P_M\phi \\&\quad +[P_N,P_{\le N+4}a]\partial ^2_P_{\le N}\phi +\sum _{M>N+4}[P_N,P_Ma]\partial ^2_P_{\le N+3}\phi ,\end{split}$ with similar decompositions for $[P_N,b]\partial ^2_y\phi $ and $[P_N,c]\partial ^2_{y}\phi $. With $\alpha _0:=\frac{1}{2}(1+\alpha )$ and ${\tilde{{\bf P}}}_N=\sum _{M=N+1}^{N+4}P_M$, the contribution of the first term on the right-hand side of (REF ) is bounded as $\begin{split}\sum _{N\ge N_0}\Vert [P_N, {\bf P}_{\le N}a]\partial ^2_\tau {\tilde{{\bf P}}}_N\phi \Vert _{L^2_{,y}}^2&\lesssim \sum _{N\ge N_0} 2^{-2N}\Vert \langle \rangle ^{\alpha _0}\partial _\tau {\bf P}_{\le N} a\Vert _{L^\infty _{,y}}^2\Vert \langle \rangle ^{-\alpha _0}\partial ^2_\tau {\tilde{{\bf P}}}_N\phi \Vert _{L^2_{,y}}^2\\&\lesssim \epsilon \sum _{N\ge N_0}\Vert \langle \rangle ^{-\alpha _0}\partial _\tau {\tilde{{\bf P}}}_{N}\phi \Vert _{L^2_{,y}}^2\lesssim \epsilon \Vert \phi \Vert _{LE}^2.\end{split}$ The $L^1_L^2_y$ estimate is similar. The corresponding term for $c$ is bounded as $\begin{split}\sum _{N\ge N_0}\Vert [P_N, {\bf P}_{\le N}c]\partial ^2_{,y}{\tilde{{\bf P}}}_N\phi \Vert _{L^1_L^2_y}^2&\lesssim \sum _{N\ge N_0} 2^{-2N}\Vert \langle \rangle ^{\frac{1}{2}+\delta }\partial _{\bf P}_{\le N} c\Vert _{L^2_L^\infty _{y}}^2\Vert \langle \rangle ^{-\frac{1}{2}-\delta }\partial ^2_{y} {\tilde{{\bf P}}}_N\phi \Vert _{L^2_{,y}}^2\\&\lesssim \epsilon \sum _{N\ge N_0}2^{-2N}\big (\Vert \partial _{\tilde{{\bf P}}}_{N}\langle \rangle ^{-\frac{1}{2}-\delta }\partial _y\phi \Vert _{L^2_{,y}}^2+\Vert [\langle \rangle ^{-\frac{1}{2}-\delta },{\tilde{{\bf P}}}_N \partial _]\partial _y\phi \Vert _{L^2_{,y}}^2\big )\\&\lesssim \epsilon \Vert \langle \rangle ^{-\frac{1}{2}-\delta }\partial _y\phi \Vert _{L^2_{,y}}+\delta \sum _{N\ge N_0}2^{-2N}\Vert \partial _\langle \rangle ^{-\frac{1}{2}-\delta }\Vert _{L^2_L^\infty _y}^2\Vert \partial _y\phi \Vert _{L^\infty _L^2_y}^2\\&\lesssim \epsilon \Vert \partial _y\phi \Vert _{L^\infty _L^2_y}^2,\end{split}$ which is bounded by the energy. In this calculation $[\langle \rangle ^{-\frac{1}{2}-\delta },{\bf P}_N \partial _]$ was treated in a similar way as in the proof of (REF ). The estimate for the same sum in $L^2_{\tau ,x}$ is the same, except that in the first step we place $\langle \tau \rangle ^{\frac{1}{2}+\epsilon }\partial _\tau {\bf P}_{\le N} c$ in $L^\infty _{\tau ,x}$ instead of $L^2_\tau L^\infty _x$. Let us now turn to the more difficult term $b$: $\begin{split}\sum _{N\ge N_0}\Vert [P_N, {\bf P}_{\le N}b]\partial ^2_y{\tilde{{\bf P}}}_N\phi \Vert _{L^1_L^2_y}^2&\lesssim \sum _{N\ge N_0} 2^{-2N}\Vert \langle \rangle ^{\frac{1}{2}+\delta }\partial _{\bf P}_{\le N} b\Vert _{L^2_L^\infty _{y}}^2\Vert \langle \rangle ^{-\frac{1}{2}-\delta }\partial ^2_{ y} {\tilde{{\bf P}}}_N\phi \Vert _{L^2_{,y}}^2\\&\lesssim \epsilon \sum _{N \ge N_0}2^{-2N}\Vert \langle \rangle ^{-\frac{1}{2}-\delta }\mathcal {P}_{\mathrm {ell}}{\tilde{{\bf P}}}_N\phi \Vert _{L^2_{,y}}^2\\&\lesssim \epsilon \sum _{N\ge N_0}2^{-2N}\Big (\Vert \langle \rangle ^{-\frac{1}{2}-\delta }[\mathcal {P}_{\mathrm {ell}},{\tilde{{\bf P}}}_N]\phi \Vert _{L^2_{,y}}^2+\Vert \langle \rangle ^{-\frac{1}{2}-\delta }{\tilde{{\bf P}}}_N \mathcal {P}_\phi \Vert _{L^2_{,y}}^2\\&\phantom{\lesssim \epsilon \sum _{N\ge N_0}N^{-2}\Big (}+\Vert \langle \rangle ^{-\frac{1}{2}-\delta }{\tilde{{\bf P}}}_N 
\\mathcal {P}\\phi \\Vert _{L^2_{,y}}^2\\Big ).\\end{split}$ The last term above is bounded by the right-hand side of (REF ) using the equation for $\\phi =_{\\mathrm {near}}$ .", "In the line before last, the first term can be absorbed by $\\mathcal {A}$ , while the second term is treated as in the contribution of $a$ and $c$ above.", "The contribution of $\\sum _{N\\ge N_0}\\Vert [P_N, {\\bf P}_{\\le N}b]\\partial ^2_x{\\bf P}_N\\phi \\Vert _{L^2_{\\tau ,x}}^2$ can be handled similarly where now we bound $\\langle \\tau \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _\\tau {\\bf P}_{\\le N} b$ in $L^{\\infty }_{,y}$ , instead of $L^2_L^\\infty _y$ , in the first step.", "We next consider the second term in the decomposition (REF ).", "For $a$ we have $\\begin{split}\\sum _{N\\ge N_0}\\Vert [P_N,{\\bf P}_{N}a]\\partial ^2_\\tau P_{\\le N}\\phi \\Vert _{L^2_L^2_y}^2&\\lesssim \\sum _{N\\ge N_0} 2^{-2N}\\Vert \\langle \\rangle ^{\\alpha _0} 2^N\\partial _\\tau {\\bf P}_{ N} a\\Vert _{L^\\infty _{,y}}^2\\Vert \\langle \\rangle ^{-\\alpha _0}\\partial _P_{\\le N}\\phi \\Vert _{L^2_{,y} }^2\\\\&\\lesssim \\Vert \\langle \\rangle ^{\\alpha _0}\\partial ^2_a\\Vert _{L^\\infty _{,y}}^2\\Vert \\langle \\rangle ^{-\\alpha _0}\\partial _\\phi \\Vert _{L^2_{,y}}^2\\sum _{N\\ge N_0} 2^{-2N}\\lesssim \\epsilon \\Vert \\phi \\Vert _{LE}^2.\\end{split}$ The estimate in $L^1_L^2_y$ is similar.", "For $c$ , $\\begin{split}\\sum _{N\\ge N_0}\\Vert [P_N, {\\bf P}_{ N}c]\\partial ^2_{,y}P_{\\le N}\\phi \\Vert _{L^1_L^2_y}^2&\\lesssim \\sum _{N\\ge N_0} 2^{-2N}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _{\\bf P}_{N} c\\Vert _{L^2_L^\\infty _{y}}^2\\Vert \\langle \\rangle ^{-\\frac{1}{2}-\\epsilon }\\partial ^2_{y} P_{\\le N}\\phi \\Vert _{L^2_{,y}}^2\\\\&\\lesssim \\sum _{N\\ge N_0}2^{-2N}\\Vert \\partial _P_{\\le N}\\langle \\rangle ^{-\\frac{1}{2}-\\epsilon }\\partial _y\\phi \\Vert _{L^2_{,y}}^2\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _{\\bf P}_Nc\\Vert _{L^2_L^\\infty _y}^2\\\\&\\quad +\\sum _{N\\ge N_0}2^{-2N}\\Vert [\\langle \\rangle ^{-\\frac{1}{2}-\\epsilon },P_{\\le N} \\partial _]\\partial _y\\phi \\Vert _{L^2_{,y}}^2\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _{\\bf P}_Nc\\Vert _{L^2_L^\\infty _y}^2\\\\&\\lesssim \\Vert \\partial _y\\phi \\Vert _{L^\\infty _L^2_y}^2\\sum _{N\\ge N_0}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }{\\bf P}_Nc\\Vert _{L^2_L^\\infty _y}^2\\lesssim \\epsilon \\Vert \\partial _y\\phi \\Vert _{L^\\infty _L^2_y}^2.\\end{split}$ The $L^2_{,y}$ estimate is similar.", "For $b$ we again use elliptic estimates and argue more carefully as $\\begin{split}\\sum _{N\\ge N_0}\\Vert [P_N, {\\bf P}_{ N}b]\\partial ^2_y P_{\\le N}\\phi \\Vert _{L^1_L^2_y}^2&\\lesssim \\sum _{N\\ge N_0} 2^{-2N}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\delta }\\partial _{\\bf P}_{N} b\\Vert _{L^2_L^\\infty _{y}}^2 \\Vert \\langle \\rangle ^{-\\frac{1}{2}-\\delta }[\\mathcal {P}_{{\\mathrm {ell}}} P_{\\le N}]\\phi \\Vert _{L^2_{,y}}^2\\\\&\\quad + \\sum _{N\\ge N_0} 2^{-2N}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\delta }\\partial _{\\bf P}_{N} b\\Vert _{L^2_L^\\infty _{y}}^2 \\Vert \\langle \\rangle ^{-\\frac{1}{2}-\\delta } P_{\\le N}\\mathcal {P}_\\phi \\Vert _{L^2_{,y}}^2\\\\&\\quad + \\sum _{N\\ge N_0} 2^{-2N}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\delta }\\partial _{\\bf P}_{N} b\\Vert _{L^2_L^\\infty _{y}}^2 \\Vert \\langle \\rangle ^{-\\frac{1}{2}-\\delta }P_{\\le N}\\mathcal {P}\\phi \\Vert _{L^2_{,y}}^2.\\end{split}$ The last two lines 
can be handled as in the case of $a$ and $c$ above and using the equation for $\\phi =_{\\mathrm {near}}$ .", "The first line is bounded by $\\begin{split}&\\sum _{N\\ge N_0} 2^{-2N}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _{\\bf P}_{N} b\\Vert _{L^2_L^\\infty _{y}}^2 \\Vert [\\mathcal {P}_{{\\mathrm {ell}}}, P_{\\le N_0}]\\phi \\Vert _{L^2_{,y}}^2\\\\&+\\sum _{N\\ge N_0}\\sum _{N_0< M\\le N} 2^{-2N}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _{\\bf P}_{N} b\\Vert _{L^2_L^\\infty _{y}}^2 \\Vert [\\mathcal {P}_{{\\mathrm {ell}}}, P_{M}]\\phi \\Vert _{L^2_{,y}}^2\\\\&\\lesssim \\mathcal {A}\\sum _{N\\ge N_0}2^{-2N}\\Vert \\langle \\tau \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _\\tau {\\bf P}_{N} b\\Vert _{L^2_\\tau L^\\infty _{x}}^2\\lesssim \\epsilon \\mathcal {A},\\end{split}$ which can be absorbed.", "The estimate in $\\sum _{N\\ge N_0}\\Vert [P_N, {\\bf P}_{ N}b]\\partial ^2_yP_{\\le N}\\phi \\Vert _{ L^2_{,y}}^2$ is similar.", "We consider the third term in the decomposition (REF ) next.", "For $a$ we have $\\begin{split}\\sum _{N\\ge N_0}\\Big \\Vert \\sum _{M> N+4}[P_N,{\\bf P}_Ma]\\partial ^2_P_M\\phi \\Big \\Vert _{L^2_{,y}}^2&\\lesssim \\sum _{N\\ge N_0}\\Big (\\sum _{M> N}2^{M-N}\\Vert \\langle \\rangle ^{\\alpha _0}\\partial _{\\bf P}_Ma\\Vert _{L^\\infty _{,y}}\\Vert \\langle -\\rangle ^{\\alpha _0}\\partial _P_M\\phi \\Vert _{L^2_{,y}}\\Big )^2\\\\&\\lesssim \\Vert \\langle r\\rangle ^{\\alpha }\\partial _^3a\\Vert _{L^\\infty _{,y}}^2\\Vert \\phi \\Vert _{LE}^2\\sum _{N\\ge N_0}2^{-4N}\\lesssim \\epsilon \\Vert \\phi \\Vert _{LE}^2.\\end{split}$ The estimate in $L^1_{}L^2_y$ is similar.", "For $c$ , $\\begin{split}&\\sum _{N\\ge N_0}\\Big \\Vert \\sum _{M> N+4}[P_N, {\\bf P}_{M}c]\\partial ^2_{,y}P_{M}\\phi \\Big \\Vert _{L^1_L^2_y}^2\\\\&\\lesssim \\sum _{N\\ge N_0} N^{-2}\\Big (\\sum _{M> N}M\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\delta }\\partial _{\\bf P}_{M} c\\Vert _{L^2_L^\\infty _{y}}\\Vert \\langle \\rangle ^{-\\frac{1}{2}-\\delta }\\partial _{ y} {\\bf P}_{M}\\phi \\Vert _{L^2_{,y}}\\Big )^2\\\\&\\lesssim \\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _^2c\\Vert _{L^2_L^\\infty _y}^2\\Vert \\partial _y\\phi \\Vert _{L^\\infty _L^2_y}^2\\lesssim \\epsilon \\Vert \\partial _y\\phi \\Vert _{L^\\infty _L^2_y}^2,\\end{split}$ and the contribution in $L^2_{,y}$ is bounded in a similar way.", "For $b$ we again apply elliptic estimates.", "Except for the commutator with $\\mathcal {P}_{\\mathrm {ell}}$ , the resulting terms can be bounded as was done for $a$ and $c$ , and using the equation for $\\phi =_{\\mathrm {near}}$ , and the commutator term is bounded, using similar arguments as earlier, as (for the $L^1_L^2_y$ estimate; the $L^2_{,y}$ estimate is similar) $\\begin{split}&\\sum _{N\\ge N_0} N^{-2}\\Big (\\sum _{M\\ge N}\\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _{\\bf P}_{M} b\\Vert _{L^2_L^\\infty _{y}} \\Vert [\\mathcal {P}_{{\\mathrm {ell}}}, P_{M}]\\phi \\Vert _{L^2_{,y}}\\Big )^2\\\\&\\lesssim \\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _^2b\\Vert _{L^2_L^\\infty _y}^2\\sum _{M> N_0}\\Vert [\\mathcal {P}_{\\mathrm {ell}},P_M]\\phi \\Vert _{L^2_{,y}}^2\\lesssim \\epsilon \\mathcal {A},\\end{split}$ which can be absorbed.", "The first term on the second line of (REF ) can be further decomposed as $\\begin{split}\\sum _{N\\ge N_0}[P_N,P_{\\le N+4}a]\\partial _{}^2P_{\\le N}\\phi &=\\sum _{N=N_0}^{N_0+10}[P_{N},P_{\\le N +4}a]P_{\\le N}\\partial _{}^2\\phi +\\sum 
_{N>N_0+10}[P_N,{\\bf P}_Na]\\partial _{}^2P_{\\le N-4}\\phi \\\\&\\quad +\\sum _{N>N_0+10}\\sum _{M=N-3}^{N}[P_N,P_{\\le N+3}a]\\partial _{}^2P_M\\phi ,\\end{split}$ with similar decompositions for the contributions of $b$ and $c$ .", "Each of the terms above can then be bounded using similar arguments to the earlier ones.", "Similarly, the sum over $N\\ge N_0$ of the last term in (REF ) can be decomposed as (with similar decompositions for $b$ and $c$ contributions) $\\begin{split}\\sum _{N=N_0}^{N_0+10}\\sum _{M>N+4}[P_N,P_Ma]\\partial _{}^2P_{\\le N+3}\\phi +\\sum _{N=N_0}^{N_0+10}\\sum _{M>N+4}\\sum _{L=N-3}^{N+3}[P_N,P_Ma]\\partial _{}^2P_{L}\\phi ,\\end{split}$ and each of these terms can be estimated as before.", "It remains to estimate the first term on each line of the definition of $\\mathcal {A}$ .", "For this we use the following decomposition, where ${\\tilde{{\\bf P}}}_{\\le N_0}=P_{\\le N_0+2}$ , $\\begin{split}[P_{\\le N_0},a]\\partial ^2_\\phi &=[P_{\\le N_0}, {\\bf P}_{\\le N_0}a]\\partial ^2_{\\tilde{{\\bf P}}}_{\\le N_0}\\phi +\\sum _{N>N_0+2}[P_{\\le N_0},{\\bf P}_Na]\\partial ^2_P_N\\phi \\\\&\\quad +\\sum _{N>N_0+4}[P_{\\le N_0},P_Na]\\partial ^2_{\\tilde{{\\bf P}}}_{\\le N_0}\\phi ,\\end{split}$ and similarly for $b$ and $c$ .", "The contribution of the first term is bounded as $\\begin{split}\\Vert [P_{\\le N_0}, {\\bf P}_{\\le N_0}a]\\partial ^2_{\\tilde{{\\bf P}}}_{\\le N_0}\\phi \\Vert _{L^2_{,y}}^2\\lesssim \\Vert \\langle \\rangle ^{\\alpha _0} \\partial _{\\bf P}_{\\le N_0}a\\Vert _{L^\\infty _{,y}}^2\\Vert \\langle \\rangle ^{-\\alpha _0}\\partial _^2{\\tilde{{\\bf P}}}_{\\le N_0}\\phi \\Vert _{L^2_{,y}}^2\\lesssim \\epsilon \\Vert \\phi \\Vert _{LE}^2,\\end{split}$ with the $L^1_L^2_y$ estimate being similar.", "The corresponding contribution for $c$ is bounded as $\\begin{split}\\Vert [P_{\\le N_0}, {\\bf P}_{\\le N_0}c]\\partial ^2_{y}{\\tilde{{\\bf P}}}_{\\le N_0}\\phi \\Vert _{L^1_L^2_y}\\lesssim \\Vert \\langle \\rangle ^{\\frac{1}{2}+\\epsilon }\\partial _{\\bf P}_{\\le N_0}c\\Vert _{L^2_L^\\infty _y}^2\\Vert \\langle \\rangle ^{-\\frac{1}{2}-\\epsilon }\\partial _{y}{\\tilde{{\\bf P}}}_{\\le N_0}\\phi \\Vert _{L^2_{,y}}\\lesssim \\epsilon \\Vert \\partial _y\\phi \\Vert _{L^\\infty _L^2_y}^2,\\end{split}$ and the corresponding estimate in $L^2_{\\tau ,x}$ is handled similarly.", "For $b$ we apply elliptic estimates and as usual use similar arguments as for $a$ and $c$ above as well as the equation for $\\phi =_{\\mathrm {near}}$ to handle all terms except the commutator with $\\mathcal {P}_{\\mathrm {ell}}$ , while this commutator error is bounded by $\\epsilon \\mathcal {A}$ which can be absorbed.", "Turning to the second term in (REF ), the contribution of $a$ in $L^2_{,y}$ is bounded as $\\begin{split}\\Big \\Vert \\sum _{N> N_0+2}[P_{\\le N_0},{\\bf P}_Na]\\partial ^2_\\tau P_N\\phi \\Big \\Vert _{L^2_{,y}}^2&\\lesssim \\Big (\\sum _{N> N_0}\\Vert \\langle \\rangle ^{\\alpha _0}\\partial _\\tau N{\\bf P}_Na\\Vert _{L^\\infty _{\\tau ,y}}\\Vert \\langle \\rangle ^{-\\alpha _0}\\partial _\\tau P_N\\phi \\Vert _{L^2_{,y} }\\Big )^2\\\\&\\lesssim \\Vert \\langle \\rangle ^{\\alpha _0}\\partial _\\tau ^3a\\Vert _{L^\\infty _{\\tau ,y}}^2\\Vert \\phi \\Vert _{LE}^2\\lesssim \\epsilon \\Vert \\phi \\Vert _{LE}^2,\\end{split}$ and the $L^1_L^2_y$ contribution is similar.", "For $c$ $\\begin{split}\\Big \\Vert \\sum _{N> N_0+2}[P_{\\le N_0},{\\bf P}_Nc]\\partial ^2_{y}P_N\\phi \\Big \\Vert _{L^1_L^2_y}&\\lesssim \\Big (\\sum _{N\\ge N_0}\\Vert \\langle 
\\rangle ^{\\frac{1}{2}+\\delta }2^N\\partial _{\\bf P}_{N}c\\Vert _{L^2_L^\\infty _y}\\Vert \\langle \\rangle ^{-\\frac{1}{2}-\\delta }\\partial _yP_N\\phi \\Vert _{L_{,y}^2}\\Big )^2\\\\&\\lesssim \\epsilon \\Vert \\partial _y\\phi \\Vert _{L^\\infty _L^2_y}^2.\\end{split}$ The estimate for the corresponding term in $L^2_{,y}$ is similar.", "The contribution of $b$ is also handled using elliptic estimates as usual.", "Finally, the last term in (REF ) can also be handled using similar arguments as above, and we omit the details.", "This completes the proof of (REF ).", "We can now prove Lemma REF .", "In this proof, when working in the $(,,)$ coordinates, we will use $y$ for the coordinates $(,)$ .", "The notation $\\iint $ is used for integration over $\\Sigma _{t_1}^{t_2}$ .", "We will also use the notation $I=[t_1,t_2]$ with $LE(I)=LE(\\Sigma _{t_1}^{t_2})$ and $LE=LE(\\cup _{}\\Sigma _{})$ .", "Note that it suffices to estimate $\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE(I)}$ .", "By a similar multiplier argument as in the proof of Lemma REF , and with multipliers $Q_j^{\\mathrm {int}}P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}$ and $Q_j^{\\mathrm {ext}}P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}$ , $j=1,2$ , we get (recall that, by a slight abuse of notation, we write $P_{\\le N_0}\\mathcal {P}_{\\mathrm {pert}}_{{\\mathrm {near}}}$ for $P_{\\le N_0}$ , where $=\\mathcal {P}_{\\mathrm {pert}}_{{\\mathrm {near}}}$ in $\\Sigma _{t_1}^{t_2}$ and $=0$ in $(\\Sigma _{t_1}^{t_2})^c$ ) $\\begin{split}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE(I)}^2&\\lesssim \\Vert V_{\\mathrm {far}}_{\\mathrm {far}}\\Vert _{LE^\\ast (I)}^2+\\sum _{j=1}^2\\Big |\\iint (P_{\\le N_0}\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})(Q^{\\mathrm {int}}_jP_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}))\\sqrt{|{\\underline{h}}|}\\mathrm {d}y\\mathrm {d}\\tau \\Big |\\\\&\\quad +\\sum _{j=1}^2\\Big |\\iint (P_{\\le N_0}\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})(Q^{\\mathrm {ext}}_jP_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}))\\sqrt{|{\\underline{h}}|}\\mathrm {d}y\\mathrm {d}\\tau \\Big |\\\\&\\lesssim \\Vert \\Vert _{E(\\Sigma _{t_1})}^2+\\Vert f\\Vert _{L^1L^2(\\Sigma _{\\tau _1}^{\\tau _2})}^2\\\\&\\quad +\\Big |\\iint (P_{\\le N_0}\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})(QP_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}))\\sqrt{|{\\underline{h}}|}\\mathrm {d}y\\mathrm {d}\\tau \\Big |\\\\&\\quad +\\sum _{j=1}^2\\Big |\\iint (P_{\\le N_0}\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})(Q^{\\mathrm {ext}}_jP_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}))\\sqrt{|{\\underline{h}}|}\\mathrm {d}y\\mathrm {d}\\tau \\Big |.\\end{split}$ Here $Q_1^{\\mathrm {int}}$ and $Q_1^{\\mathrm {ext}}$ denote the first order interior and exterior multipliers (see (REF ) and (REF )), respectively, and $Q_2^{\\mathrm {int}}$ and $Q_2^{\\mathrm {ext}}$ are the corresponding order zero multipliers (see (REF ) and (REF ); we have not used the notation $P$ for the order zero multipliers to prevent confusion with the frequency projections).", "Therefore, our task is reduced to estimating $\\begin{split}\\Big |\\iint (P_{\\le N_0}\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})(Q_j^{{\\mathrm {int}},{\\mathrm {ext}}} P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}})\\sqrt{|{\\underline{h}}|}\\mathrm {d}y\\mathrm {d}\\tau \\Big |.\\end{split}$ For the interior we place both terms in $LE$ where for $P_{\\le N_0}\\mathcal {P}_{\\mathrm 
{pert}}_{{\\mathrm {near}}}$ we use the frequency projection to remove one derivative.", "More precisely, let $\\chi _K\\equiv \\chi _K()$ be a cutoff to a large compact region $K$ containing the supports of $Q_j^{\\mathrm {int}}$ , and let $\\chi _{K^c}=1-\\chi _K$ .", "Then (note that we can always insert a factor of $$ in the order zero terms using the same argument as in Lemma REF ) $\\begin{split}&\\Big |\\iint \\chi _K(P_{\\le N_0}\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}})(Q_j^{\\mathrm {int}}P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}))\\sqrt{|{\\underline{h}}|}\\mathrm {d}y\\mathrm {d}\\tau \\Big |\\\\&\\lesssim \\delta \\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE(I)}^2+C_\\delta \\iint (P_{\\le N_0}(\\chi _K\\mathcal {P}_{\\mathrm {pert}}_{\\mathrm {near}}))^2\\sqrt{|{\\underline{h}}|}\\mathrm {d}y\\mathrm {d}\\tau .\\end{split}$ The first term can be absorbed if $\\delta $ is small.", "For the last term we write $\\mathcal {P}_{\\mathrm {pert}}$ as $\\mathcal {P}_{\\mathrm {pert}}=\\mathring{}_q^{\\mu \\nu }\\partial ^2_{\\mu \\nu }+\\mathring{}_l^\\mu \\partial _\\mu +\\mathring{}_c,$ where $|\\mathring{}_q|,|\\mathring{}_l|,|\\mathring{}_c|\\lesssim \\epsilon $ .", "The contribution of $\\mathring{}_l^\\mu \\partial _\\mu + \\mathring{}_c$ is then already bounded by $\\epsilon \\Vert _{\\mathrm {near}}\\Vert _{LE}^2$ , which is admissible.", "For the second order part, if at least one of $\\mu $ or $\\nu $ is $$ then we can use the frequency projection to drop a $\\partial _$ derivative and argue as before.", "When both derivatives are with respect to the spatial variables $y$ , we use elliptic estimates using $\\mathcal {P}$ .", "That is, let $\\begin{split}\\mathcal {P}= \\mathcal {P}_+\\mathcal {P}_{\\mathrm {ell}},\\end{split}$ where $\\mathcal {P}_$ contains the terms with at least one $\\partial _$ derivative and $\\mathcal {P}_{\\mathrm {ell}}$ is the remainder which is elliptic.", "Then, with ${\\tilde{\\chi }}_K\\equiv {\\tilde{\\chi }}_K()$ a cutoff with slightly larger support than $\\chi _K$ , recalling equation (REF ), and using elliptic regularity, we can bound $\\Vert P_{\\le N_0}(\\chi _K\\mathring{}^{yy}\\partial _y^2_{\\mathrm {near}})\\Vert _{L^2_{\\tau ,y}(I)}$ by $\\begin{split}& \\Vert [P_{\\le N_0},\\chi _K \\mathring{}^{yy}\\partial _y^2]_{\\mathrm {near}}\\Vert _{L^2_{\\tau ,y}(I)}+\\epsilon \\Vert {\\tilde{\\chi }}_K\\mathcal {P}_{\\mathrm {ell}}P_{\\le N_0}_{\\mathrm {near}}\\Vert _{L^2_{\\tau ,y}(I)}+\\epsilon \\Vert {\\tilde{\\chi }}_K P_{\\le N_0}_{\\mathrm {near}}\\Vert _{L^2_{\\tau ,y}(I)}\\\\&\\lesssim \\Vert [P_{\\le N_0},\\chi _K \\mathring{}^{yy}\\partial _y^2]_{\\mathrm {near}}\\Vert _{L^2_{\\tau ,y}(I)}+\\epsilon \\Vert [{\\tilde{\\chi }}_K\\mathcal {P}_{\\mathrm {ell}},P_{\\le N_0}]_{\\mathrm {near}}\\Vert _{L^2_{\\tau ,y}(I)}\\\\&\\quad +\\epsilon \\Vert P_{\\le N_0}({\\tilde{\\chi }}_K\\mathcal {P}__{\\mathrm {near}})\\Vert _{L^2_{\\tau ,y}(I)}+\\epsilon \\Vert P_{\\le N_0}({\\tilde{\\chi }}_KV_{\\mathrm {far}}_{\\mathrm {far}})\\Vert _{L^2_{\\tau ,y}(I)}+\\epsilon \\Vert {\\tilde{\\chi }}_K P_{\\le N_0}_{\\mathrm {near}}\\Vert _{L^2_{\\tau ,y}(I)}.\\end{split}$ The last line contributes an admissible error by using the frequency projection to drop $\\partial _$ derivatives in the first term and using Lemma REF for the second term.", "The line before last with the commutators also contributes an admissible error by Lemma REF .", "For the exterior, note that by choosing the compact set $K$ above 
sufficiently large, we may assume that the coordinates $(,,)$ and $(\\tau ,r,\\theta )$ agree in $K^c$ .", "We will therefore use the notation $(\\tau ,r,\\theta )$ instead of $(,,)$ .", "In this region we need an extra weighted energy estimate for $_{\\mathrm {near}}$ which allows us to put more $r$ weights on $(\\partial _r+\\frac{n-1}{2r})_{\\mathrm {near}}$ (see Case 2 below).", "The details are as follows.", "First recall that $\\mathcal {P}_{\\mathrm {pert}}$ is of the form $\\begin{split}{\\mathring{a}}\\partial _\\tau (\\partial _r+\\frac{n-1}{2r})+{\\mathring{a}}^{\\mu \\nu }\\partial _{\\mu \\nu }+{\\mathring{b}}^\\mu \\partial _\\mu +{\\mathring{c}},\\end{split}$ where for some $\\gamma >1$ , $\\begin{split}|{\\mathring{a}}|, |{\\mathring{a}}^{yy}|,\\lesssim \\epsilon \\tau ^{-\\gamma }, \\quad |\\partial _y{\\mathring{a}}^{yy}|, |{\\mathring{a}}^{\\tau y}|, |{\\mathring{b}}^y|\\lesssim \\epsilon \\tau ^{-\\gamma }r^{-1},\\quad |{\\mathring{a}}^{\\tau \\tau }|, |{\\mathring{b}}^\\tau |\\lesssim \\epsilon \\tau ^{-\\gamma }r^{-2},\\quad |{\\mathring{c}}|\\lesssim \\epsilon \\tau ^{-\\gamma } r^{-4}.\\end{split}$ while $Q_j^{\\mathrm {ext}}$ have the structure $\\begin{split}Q_j^{\\mathrm {ext}}=\\beta ^\\tau \\partial _\\tau +\\beta ^y \\partial _y +\\frac{\\beta }{r},\\end{split}$ with $\\begin{split}|\\beta |, |\\beta ^\\tau |, |\\beta ^y|\\lesssim 1, \\quad |\\partial _y\\beta |, |\\partial _y\\beta ^\\tau |, |\\partial _y\\beta ^y|\\lesssim r^{-1-\\alpha },\\end{split}$ and $\\beta $ , $\\beta ^\\tau $ , $\\beta ^y$ supported outside of some large compact set ${\\tilde{K}}$ .", "We now consider the contribution of all combinations to (REF ).", "Below we will use the shorthand notation $\\langle f,g\\rangle =\\iint fg \\sqrt{{\\underline{h}}}\\mathrm {d}y \\mathrm {d}.$ Case 1: First we consider the contribution of $\\partial _\\tau $ in $Q_j^{\\mathrm {ext}}$ and every term except $2a\\partial _\\tau (\\partial _r+\\frac{n-1}{2r})$ in $\\mathcal {P}_{\\mathrm {pert}}$ .", "For ${\\mathring{a}}^{yy}$ , after one integration by parts in $\\partial _y$ we get $\\begin{split}&|\\langle P_{\\le N_0}{\\mathring{a}}^{yy}\\partial _y^2_{\\mathrm {near}},\\beta ^\\tau \\partial _\\tau P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\rangle |\\\\&\\lesssim \\langle |P_{\\le N_0}{\\mathring{a}}^{yy}\\partial _r_{\\mathrm {near}}|,|\\partial _\\tau P_{\\le N_0}\\partial _r_{{\\mathrm {near}},{\\mathrm {far}}}|\\rangle + \\langle |P_{\\le N_0}{\\mathring{a}}^{yy}\\partial _r_{\\mathrm {near}}|,|\\partial _\\tau P_{\\le N_0}r^{-1}_{{\\mathrm {near}},{\\mathrm {far}}}|\\rangle \\\\&\\quad + \\langle |P_{\\le N_0} r\\partial _y{\\mathring{a}}^{yy}\\partial _r_{\\mathrm {near}}|,|\\partial _\\tau P_{\\le N_0}r^{-1}_{{\\mathrm {near}},{\\mathrm {far}}}|\\rangle + \\epsilon (\\sup _{\\tau }\\Vert _{\\mathrm {near}}\\Vert _{E}+\\sup _{\\tau \\in I}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _E)\\\\&\\lesssim \\epsilon (\\sup _{\\tau }\\Vert _{\\mathrm {near}}\\Vert _{E}+\\sup _{\\tau \\in I}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _E),\\end{split}$ where in the last estimate we have used $P_{\\le N_0}$ to drop the $\\tau $ derivatives and the $\\tau $ decay of ${\\mathring{a}}^{yy}$ and $r\\partial _y{\\mathring{a}}^{yy}$ to integrate in $\\tau $ .", "The contributions of the other terms in ${\\mathring{a}}^{\\mu \\nu }\\partial _{\\mu \\nu }$ , ${\\mathring{b}}^\\mu \\partial _\\mu $ , ${\\mathring{c}}$ are handled similarly, where instead of 
integrating by parts we use the decay of $r$ and move one factor of $r^{-1}$ to $\phi _{{\mathrm {near}},{\mathrm {far}}}$. Case 2: For the contribution of $\partial _\tau $ to $Q_j^{\mathrm {ext}}$ and $2{\mathring{a}}\partial _\tau (\partial _r+\frac{n-1}{2r})$ to $\mathcal {P}_{\mathrm {pert}}$ we use the $\tau $ decay and smallness of ${\mathring{a}}$ to estimate (here for $\phi _{{\mathrm {near}},{\mathrm {far}}}$ we do not drop $\partial _\tau $ ) $\begin{split}&|\langle P_{\le N_0}{\mathring{a}}\partial _\tau (\partial _r+\frac{n-1}{2r})\phi _{\mathrm {near}},\beta \partial _\tau P_{\le N_0}\phi _{{\mathrm {near}},{\mathrm {far}}}\rangle |\\&\lesssim \epsilon \Vert P_{\le N_0}\phi _{{\mathrm {near}},{\mathrm {far}}}\Vert _{LE(I)}^2+\epsilon ^{-1}\Vert \chi _{{\tilde{K}}^c}P_{\le N_0}{\mathring{a}}\partial _\tau r^{\frac{1+\alpha }{2}}(\partial _r+\frac{n-1}{2r})\phi _{\mathrm {near}}\Vert _{L^2_{\tau ,y}(I)}^2\\&\lesssim \epsilon \Vert P_{\le N_0}\phi _{{\mathrm {near}},{\mathrm {far}}}\Vert _{LE(I)}^2+\epsilon \sup _{\tau }\Vert \chi _{{\tilde{K}}^c}r^{\frac{1+\alpha }{2}}(\partial _r+\frac{n-1}{2r})\phi _{\mathrm {near}}\Vert _{L^2_y}^2.\end{split}$ For the last term we will need a weighted energy estimate for $\phi _{\mathrm {near}}$, which we will discuss below. Case 3: Next we consider the contribution of $\beta ^y\partial _y+r^{-1}\beta $ to $Q_j^{\mathrm {ext}}$ and the contribution of every term except ${\mathring{a}}^{yy}\partial ^2_y$ to $\mathcal {P}_{\mathrm {pert}}$. Here for $\phi _{{\mathrm {near}},{\mathrm {far}}}$ we simply drop $P_{\le N_0}$ while for $\phi _{\mathrm {near}}$ we use $P_{\le N_0}$ to drop one $\partial _\tau $ in the second order terms in $\mathcal {P}_{\mathrm {pert}}$. Using the decay and smallness of the coefficients of $\mathcal {P}_{\mathrm {pert}}$, the corresponding contributions are then bounded by $\epsilon (\sup _{\tau }\Vert \phi _{\mathrm {near}}\Vert _E^2+\sup _{\tau \in I}\Vert P_{\le N_0}\phi _{{\mathrm {near}},{\mathrm {far}}}\Vert _E^2).$ Case 4: For the contribution of ${\mathring{a}}^{yy}\partial ^2_y$ to $\mathcal {P}_{\mathrm {pert}}$ and $\beta ^y\partial _y+r^{-1}\beta $ to $Q_j^{\mathrm {ext}}$ we use the equation $\mathcal {P}\phi _{\mathrm {near}}= -V_{\mathrm {far}}\phi _{\mathrm {far}}$ and elliptic estimates for $\mathcal {P}_{\mathrm {ell}}$. Note that, unlike the case of the interior above, in view of the $r$ decay of the order zero term in $\mathcal {P}_{\mathrm {ell}}$ we do not need to add an $L^2_y$ term when applying elliptic estimates for $\mathcal {P}_{\mathrm {ell}}$. Using this observation and Lemma REF , and with ${\tilde{K}}_1$ a compact region contained in ${\tilde{K}}$, the corresponding contribution is bounded by $\begin{split}&\Vert P_{\le N_0}\phi _{{\mathrm {near}},{\mathrm {far}}}\Vert _{L^\infty _\tau E(I)}\Vert \chi _{{\tilde{K}}^c}P_{\le N_0}{\mathring{a}}^{yy}\partial _y^2\phi _{\mathrm {near}}\Vert _{L_\tau ^1L_y^2(I)}\\&\lesssim \Vert P_{\le N_0}\phi _{{\mathrm {near}},{\mathrm {far}}}\Vert _{L^\infty _\tau E(I)}(\Vert \chi _{{\tilde{K}}^c}[P_{\le N_0},{\mathring{a}}^{yy}]\partial _y^2\phi _{\mathrm {near}}\Vert _{L_\tau ^1L_y^2(I)}+\epsilon \Vert \tau ^{-\gamma }\chi _{{\tilde{K}}^c}\partial _y^2P_{\le N_0}\phi _{{\mathrm {near}}}\Vert _{L^1_\tau L^2_y(I)})\\&\lesssim \Vert P_{\le N_0}\phi _{{\mathrm {near}},{\mathrm {far}}}\Vert _{L^\infty _\tau E(I)}(\Vert \chi _{{\tilde{K}}^c}[P_{\le 
N_0},{\\mathring{a}}^{yy}]\\partial _y^2_{\\mathrm {near}}\\Vert _{L_\\tau ^1L_y^2(I)}+\\epsilon \\Vert \\tau ^{-\\gamma }\\chi _{{\\tilde{K}}^c_1}\\mathcal {P}_{\\mathrm {ell}}P_{\\le N_0}_{{\\mathrm {near}}}\\Vert _{L^1_\\tau L^2_y(I)})\\\\&\\lesssim \\epsilon \\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{L^\\infty _\\tau E(I)}^2+\\epsilon ^{-1} \\Vert \\chi _{{\\tilde{K}}^c}[P_{\\le N_0},{\\mathring{a}}^{yy}]\\partial _y^2_{\\mathrm {near}}\\Vert _{L_\\tau ^1L_y^2(I)}^2+\\epsilon \\Vert \\tau ^{-\\gamma } P_{\\le N_0}V_{\\mathrm {far}}_{{\\mathrm {far}}}\\Vert _{L^1_\\tau L_y^2(I)}^2\\\\&\\quad +\\epsilon \\Vert \\tau ^{-\\gamma }P_{\\le N_0}\\chi _{{\\tilde{K}}_1^c}\\mathcal {P}_\\tau _{\\mathrm {near}}\\Vert _{L^1_\\tau E(I)}^2+\\epsilon \\Vert \\tau ^{-\\gamma }[\\chi _{{\\tilde{K}}^c}\\mathcal {P}_{\\mathrm {ell}}, P_{\\le N_0}]_{{\\mathrm {near}}}\\Vert _{L^1_\\tau L_y^2(I)}^2\\\\&\\lesssim \\epsilon (\\sup _{\\tau }\\Vert _{\\mathrm {near}}\\Vert _E^2+\\sup _{\\tau \\in I}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _E^2+\\sup _{\\tau \\in I}\\Vert _{\\mathrm {far}}\\Vert _E^2+\\Vert _{\\mathrm {far}}\\Vert _{LE}^2).\\end{split}$ Putting everything together we have shown that $\\begin{split}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE(I)}&\\lesssim \\Vert f\\Vert _{LE^\\ast (I)}+\\Vert \\Vert _{L^\\infty _\\tau E(I)}+\\epsilon \\Vert _{{\\mathrm {near}}}\\Vert _{LE}+\\epsilon \\Vert P_{\\le N_0}_{{\\mathrm {near}}\\,{\\mathrm {far}}}\\Vert _{L^\\infty _\\tau E(I)}\\\\&\\quad +\\epsilon \\Vert \\chi _{{\\tilde{K}}^c}r^{\\frac{1+\\alpha }{2}}(\\partial _r+\\frac{n-1}{2r})_{\\mathrm {near}}\\Vert _{L^\\infty _\\tau L^2_y}.\\end{split}$ Using a similar argument with the multiplier $\\partial _\\tau P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}$ , we can also prove an energy estimate for $P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}$ which allows us to absorb the last term on the first line above, and get $\\begin{split}\\Vert P_{\\le N_0}_{{\\mathrm {near}},{\\mathrm {far}}}\\Vert _{LE(I)}&\\lesssim \\Vert f\\Vert _{LE^\\ast (I)}+\\Vert \\Vert _{L^\\infty _\\tau E(I)}+\\epsilon \\Vert _{{\\mathrm {near}}}\\Vert _{LE}\\\\&\\quad +\\epsilon \\Vert \\chi _{{\\tilde{K}}^c}r^{\\frac{1+\\alpha }{2}}(\\partial _r+\\frac{n-1}{2r})_{\\mathrm {near}}\\Vert _{L^\\infty _\\tau L^2_y}.\\end{split}$ It remains to control $\\Vert \\chi _{{\\tilde{K}}^c} r^{\\frac{1+\\alpha }{2}}(\\partial _r+\\frac{n-1}{2r})_{\\mathrm {near}}\\Vert _{L^\\infty _\\tau L^2_y}$ .", "This is achieved by the same argument as we will later use to prove $r^p$ -energy estimates in the exterior.", "The more complete version of the argument is worked out in Lemma REF below, which holds independently of the results in this section.", "Without repeating the details, multiplying the equation $\\mathcal {P}_{\\mathrm {near}}=-V_{\\mathrm {far}}_{\\mathrm {far}}$ by $\\chi _{{\\tilde{K}}^c}r^p(\\partial _r+\\frac{n-1}{2r})_{\\mathrm {near}}$ (alternatively, by $\\chi _{{\\tilde{K}}^c}L({\\tilde{r}}^{\\frac{n-1}{2}}_{\\mathrm {near}})$ ), with $1+\\alpha \\le p \\le 2$ , and a few integration by parts yield the estimate (recall that $V_{\\mathrm {far}}$ is compactly supported) $\\begin{split}\\Vert \\chi _{{\\tilde{K}}^c}r^{p/2}(\\partial _r+\\frac{n-1}{2r})_{\\mathrm {near}}\\Vert _{L^\\infty _\\tau L^2_y}&\\lesssim \\Vert _{\\mathrm {near}}\\Vert _{LE}+ \\Vert _{\\mathrm {near}}\\Vert _{L^\\infty _\\tau E}\\\\&\\quad +|\\langle V_{\\mathrm {far}}_{\\mathrm 
{far}},\\chi _{{\\tilde{K}}^c} r^{p}(\\partial _r+\\frac{n-1}{2r})_{\\mathrm {near}}\\rangle _{L^2_{\\tau ,y}}|\\\\&\\lesssim \\Vert _{\\mathrm {near}}\\Vert _{LE}+ \\Vert _{\\mathrm {near}}\\Vert _{L^\\infty _\\tau E}+\\Vert _{\\mathrm {far}}\\Vert _{LE}.\\end{split}$ Plugging this back into (REF ) completes the proof of the lemma." ], [ "Exterior", "This section contains the proof of the decay estimates on $\\phi $ .", "We will use the $r^p$ weighted vectorfield method to derive decay estimates for the energy of $\\phi $ and improved decay for the energy of $T^k\\phi $ , $k=1,2$ .", "Then using elliptic and interpolation estimates we use the decay of the energies to deduce pointwise bounds, and in particular complete the proof of Proposition REF .", "We will start by deriving some commutator formulas and expressing the equation in terms of the vectofields $L$ , ${\\underline{L}}$ , $\\Omega $ ." ], [ "Frame Decomposition of the Operator and Commutation Relations", "We use the relations derived in Section REF to calculate the commutators among the vectorfields $T$ , $L$ , ${\\underline{L}}$ , $\\Omega $ .", "The calculations in this subsection are valid in the hyperboloidal region $\\mathcal {C}_{\\mathrm {hyp}}$ , where these vectorfields are defined.", "Lemma 8.1 The following commutation relations hold among the vectorfields $L,{\\underline{L}},\\Omega $ : $\\begin{split}&[{\\underline{L}},L]= \\mathcal {O}({\\dot{\\wp }}^{\\le 2})L+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}){\\underline{L}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2})\\Omega ,\\\\&[{\\underline{L}},\\Omega ]= \\mathcal {O}({\\dot{\\wp }})\\Omega +\\mathcal {O}({\\dot{\\wp }}r) L+\\mathcal {O}({\\dot{\\wp }}){\\underline{L}},\\\\&[L,\\Omega ]=\\mathcal {O}({\\dot{\\wp }}r^{-1})L+\\mathcal {O}({\\dot{\\wp }}r^{-2})\\Omega +\\mathcal {O}({\\dot{\\wp }}r^{-2}){\\underline{L}},\\\\&[\\Omega _{ij},\\Omega _{k\\ell }]=\\delta _{[ki}\\Omega _{\\ell j]},\\\\&[T,L]=-[T,{\\underline{L}}]= \\mathcal {O}({\\dot{\\wp }}^{\\le 2})L+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}){\\underline{L}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2})\\Omega ,\\\\&[T,\\Omega ]=\\mathcal {O}({\\dot{\\wp }})\\Omega +\\mathcal {O}({\\dot{\\wp }}r) L+\\mathcal {O}({\\dot{\\wp }}){\\underline{L}}.\\\\\\end{split}$ Starting with $[{\\underline{L}},L]$ , note that since ${\\underline{L}}= 2T-L$ this commutator is the same as $2[T,L]$ .", "The desired structure then follows from the relations (REF ), (REF ), and (REF ).", "For $[\\Omega _{ij},\\Omega _{k\\ell }]$ the desired relation follows from the fact that $\\Omega _{ij}=\\Lambda _\\ell {\\tilde{\\Omega }}_{ij}$ , where ${\\tilde{\\Omega }}_{ij}=y^i\\partial _{y^j}-y^j\\partial _{y^i}$ are tangential to the reference hyperboloids $\\mathcal {H}_\\sigma $ .", "This tangentiality implies that $[\\Omega _{ij},\\Omega _{k\\ell }]=\\Lambda _\\ell [{\\tilde{\\Omega }}_{ij},{\\tilde{\\Omega }}_{k\\ell }]$ .", "By the same reasoning, to compute the commutator $[L,\\Omega ]$ we decompose ${\\tilde{L}}=\\partial _{y^0}+\\frac{y^i}{|y^{\\prime }|}\\partial _{y^i}$ as (recall that $T= \\Lambda _\\ell \\partial _{y^0}$ ) $\\begin{split}{\\tilde{L}}= (1-{\\tilde{c}})\\partial _{y^0}+\\frac{y^i}{|y^{\\prime }|}\\partial _{y^i}+{\\tilde{c}}\\, \\partial _{y^0},\\end{split}$ where ${\\tilde{c}}$ is chosen so that ${\\tilde{L}}_{\\mathrm {temp}}:={\\tilde{L}}-{\\tilde{c}}\\,\\partial _{y^0}$ is tangential to $\\mathcal {H}_\\sigma $ .", "Let $c={\\tilde{c}}(y(\\tau ,r,\\theta ))$ where $y$ is as in (REF ).", "Since 
${\\tilde{\\Omega }}$ is tangential to $\\mathcal {H}_\\sigma $ , it follows that $\\Omega ( c(y(\\tau ,r,\\theta ))) = ({\\tilde{\\Omega }}{\\tilde{c}})(y(\\tau ,r,\\theta ))$ , and therefore $\\begin{split}[L,\\Omega ] &= \\Lambda _\\ell [{\\tilde{L}}_{\\mathrm {temp}},{\\tilde{\\Omega }}]+[cT,\\Omega ]=\\Lambda _\\ell [{\\tilde{L}}_{\\mathrm {temp}},{\\tilde{\\Omega }}]+(\\Omega c) T+c[T,\\Omega ]\\\\&=\\Lambda _\\ell ([{\\tilde{L}}_{\\mathrm {temp}},{\\tilde{\\Omega }}]+({\\tilde{\\Omega }}{\\tilde{c}}) \\partial _{y^0})+c[T,\\Omega ]=\\Lambda _\\ell [{\\tilde{L}},{\\tilde{\\Omega }}]+c[T,\\Omega ]\\\\&=c[T,\\Omega ].\\end{split}$ To find ${\\tilde{c}}$ , recall that the defining equation for $\\sigma =\\sigma (\\tau )$ is $(y^0-\\gamma ^{-1}\\tau )^2-|y^{\\prime }|^2=1$ , and therefore ${\\tilde{c}}$ must be such that ${\\tilde{L}}_{\\mathrm {temp}}$ is Minkowski perpendicular to $(y^0-\\gamma ^{-1}\\tau ,y^{\\prime })$ .", "It follows that $\\begin{split}{\\tilde{c}}= 1-\\frac{|y^{\\prime }|}{y^0-\\gamma ^{-1}\\tau }=\\mathcal {O}({\\tilde{r}}^{-2}),\\end{split}$ where the last estimate follows from (REF ).", "The last two observations together give $[L,\\Omega ]=\\mathcal {O}(r^{-2})[T,\\Omega ]$ , and the desired expansion follows from (REF ), (REF ), and (REF ).", "Finally, the expansions for $[{\\underline{L}},\\Omega ]$ , $[T,L]$ , $[T,{\\underline{L}}]$ , and $[T,\\Omega ]$ follow from the previous ones and the observation that ${\\underline{L}}=2T-L$ .", "With these preparations we can turn to the calculation of the wave operator $\\Box _m$ in terms of $\\lbrace L, {\\underline{L}}, \\Omega \\rbrace $ .", "Lemma 8.2 For any function $$ $\\begin{split}\\Box _m&= -{\\underline{L}}L -\\frac{n-1}{2{\\tilde{r}}}({\\underline{L}}-L)+\\frac{1}{{\\tilde{r}}^2}\\sum _{\\Omega }\\Omega ^2 \\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 2})L+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2}){\\underline{L}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2})\\Omega ,\\end{split}$ and, with ${\\tilde{}}:={\\tilde{r}}^{\\frac{n-1}{2}}$ , $\\begin{split}{\\tilde{r}}^{\\frac{n-1}{2}}\\Box _m&= -{\\underline{L}}L {\\tilde{}}+\\frac{1}{{\\tilde{r}}^2}\\sum _{\\Omega }\\Omega ^2{\\tilde{}}-\\frac{(n-1)(n-3)}{4{\\tilde{r}}^2}{\\tilde{}}\\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 2})L{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2}){\\underline{L}}{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2})\\Omega {\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}){\\tilde{}}.\\end{split}$ We work with an orthonormal frame $\\lbrace e_I\\rbrace =\\lbrace {\\tilde{L}}, L, e_A;~a=1,2\\rbrace $ , where $e_A=\\Lambda _\\ell {\\tilde{e}}_A$ , and $\\lbrace {\\tilde{e}}_1,{\\tilde{e}}_2\\rbrace $ is a local orthonormal frame for the reference spheres on $\\mathcal {H}_\\sigma $ .", "With $\\Gamma _{IJ}^K$ the corresponding connection coefficients, that is $\\nabla _{e_I}e_J=\\Gamma _{IJ}^Ke_K$ , and $D_I$ denoting scalar differentiation along $e_I$ , the wave operator can be written as $\\begin{split}\\Box _m &=(m^{-1})^{IJ} (D_I D_J-\\Gamma _{IJ}^KD_K)\\\\&=-\\frac{1}{2}{\\underline{L}}L -\\frac{1}{2}L{\\underline{L}}+ \\sum _A D_A^2 -(m^{-1})^{IJ}(\\Gamma ^L_{IJ}L+\\Gamma ^{\\underline{L}}_{IJ}{\\underline{L}}+\\Gamma ^A_{IJ}D_A)\\\\&=-{\\underline{L}}L -\\frac{1}{2}[L,{\\underline{L}}] + \\frac{1}{{\\tilde{r}}^2}\\sum _\\Omega \\Omega ^2 -(m^{-1})^{IJ}(\\Gamma ^L_{IJ}L+\\Gamma ^{\\underline{L}}_{IJ}{\\underline{L}}+\\Gamma ^A_{IJ}D_A).\\end{split}$ Here to pass to the last line we have used the 
fact that $e_A$ are tangential to the foliation, so $\\sum _A D_A^2(y(\\tau ,r,\\theta ))= \\sum _A ({\\tilde{e}}_A^2)(y(\\tau ,r,\\theta ))$ .", "To calculate the connection coefficients we use Koszul's formula, which for an orthonormal frame reads $\\begin{split}\\Gamma _{IJ}^M= \\frac{1}{2}(m^{-1})^{KM}(m([e_I,e_J],e_K)-m([e_I,e_K],e_J)-m([e_J,e_K],e_I)).\\end{split}$ These can now be computed using Lemma REF .", "Here note that the commutators with $e_A$ can be calculated in the same way as in the proof of Lemma REF , by noting that $e_A$ is a linear combination of $\\Omega _{ij}$ with coefficients of size $r^{-1}$ .", "In particular, with ${\\tilde{L}},{\\tilde{{\\underline{L}}}}$ such that $L=\\Lambda _\\ell {\\tilde{L}}$ and ${\\underline{L}}=\\Lambda _\\ell {\\tilde{{\\underline{L}}}}$ (see (REF )), $\\begin{split}&[{\\underline{L}},e_A]= \\Lambda _\\ell [{\\tilde{{\\underline{L}}}},{\\tilde{e}}_A]+\\mathcal {O}({\\dot{\\wp }}r^{-1})\\Omega +\\mathcal {O}({\\dot{\\wp }}) L+\\mathcal {O}({\\dot{\\wp }}r^{-1}){\\underline{L}},\\\\&[L,e_A]=\\Lambda _\\ell [{\\tilde{L}},{\\tilde{e}}_A]+\\mathcal {O}({\\dot{\\wp }}r^{-2})L+\\mathcal {O}({\\dot{\\wp }}r^{-3})\\Omega +\\mathcal {O}({\\dot{\\wp }}r^{-3}){\\underline{L}},\\\\&[e_A,e_B]=\\Lambda _\\ell [{\\tilde{e}}_A,{\\tilde{e}}_B].\\end{split}$ It follows from this and Lemma REF that $\\begin{split}&\\Gamma ^L_{{\\underline{L}}L}=-\\frac{1}{2}m([{\\underline{L}},L],{\\underline{L}})=\\mathcal {O}({\\dot{\\wp }}^{\\le 2}),\\\\&\\Gamma ^L_{L{\\underline{L}}}=-\\frac{1}{4}(m([L,{\\underline{L}}],{\\underline{L}})-m([L,{\\underline{L}}],{\\underline{L}}))=0,\\\\&\\Gamma ^{L}_{AA}=\\frac{1}{2}m([e_A,{\\underline{L}}],e_A).\\end{split}$ For the last term, for the purpose of deriving (REF ) it suffices to observe that $m([e_A,{\\underline{L}}],e_A)=m([{\\tilde{e}}_A,{\\tilde{{\\underline{L}}}}],{\\tilde{e}}_A)+\\mathcal {O}({\\dot{\\wp }})$ .", "But, for (REF ) we will need the better estimate $\\begin{split}\\Gamma ^{L}_{AA}= \\frac{1}{2}m([{\\tilde{e}}_A,{\\tilde{{\\underline{L}}}}],{\\tilde{e}}_A)+\\mathcal {O}({\\dot{\\wp }}r^{-1}).\\end{split}$ To prove (REF ), using our usual notation as in the proof of Lemma REF , note that $\\begin{split}\\frac{1}{2}m([e_A,{\\underline{L}}],e_A)=m([e_A,T],e_A)-\\frac{1}{2}m([e_A,L],e_A)=m([e_A,T],e_A)+\\frac{1}{2}m([{\\tilde{e}}_A,{\\tilde{{\\underline{L}}}}],{\\tilde{e}}_A)+\\mathcal {O}({\\dot{\\wp }}r^{-2}).\\end{split}$ For the first term on the right, we write $[e_A,T]=\\big (e_A(T^\\mu )-T(e_A^\\mu )\\big )\\partial _\\mu $ .", "On the other hand, in view of the expansions (REF ), we have $m(\\partial _r,e_A)=\\mathcal {O}(r^{-2})$ and $m(\\partial _\\tau ,e_A)=0$ .", "It then follows from (REF ) and the expansion $e_A=\\mathcal {O}(r^{-1})\\partial _a+\\mathcal {O}(1)\\partial _r$ that $\\begin{split}m((e_AT^\\mu )\\partial _\\mu ,e_A)=\\mathcal {O}({\\dot{\\wp }}r^{-1}),\\end{split}$ and $\\begin{split}m(T(e_A^\\mu )\\partial _\\mu ,e_A)=\\mathcal {O}({\\dot{\\wp }}r^{-1})+\\mathcal {O}(1)m((\\partial _\\tau e_A^\\mu )\\partial _\\mu ,e_A).\\end{split}$ For the last term observe that by (REF ), the metric components $m_{rr}=1-\\frac{r}{\\langle r\\rangle }$ , $m_{ra}=0$ , and $m_{ab}=r^2\\mathring{{g}}_{ab}$ are independent of $\\tau $ , so since $m(e_A,e_A)=1$ and $e_A=e_A^r\\partial _r+e_A^a\\partial _a$ , $\\begin{split}m((\\partial _\\tau e_A^\\mu )\\partial _\\mu ,e_A)=m_{\\mu \\nu }e_A^\\nu \\partial _\\tau e_A^\\mu =\\frac{1}{2}\\partial _\\tau (m_{\\mu \\nu }e_A^\\mu e_A^\\nu 
)=0.\\end{split}$ This completes the proof of (REF ).", "Returning to the other connection coefficients, by the Koszul formula, $\\begin{split}&\\Gamma _{{\\underline{L}}L}^{\\underline{L}}=0,\\quad \\Gamma _{L{\\underline{L}}}^{\\underline{L}}=\\frac{1}{2}m([{\\underline{L}},L],L)=\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2}),\\\\&\\Gamma _{AA}^{\\underline{L}}=\\frac{1}{2}m([e_A,L],e_A)=\\frac{1}{2}m([{\\tilde{e}}_A,{\\tilde{L}}],{\\tilde{e}}_A)+\\mathcal {O}({\\dot{\\wp }}r^{-2}),\\end{split}$ and $\\begin{split}&\\Gamma ^B_{L{\\underline{L}}}=\\frac{1}{2}(m([L,{\\underline{L}}],e_B)-m([L,e_B],{\\underline{L}})-m([{\\underline{L}},e_B],L))=\\mathcal {O}({\\dot{\\wp }}r^{-1})\\\\&\\Gamma ^B_{L{\\underline{L}}}=\\frac{1}{2}(m([{\\underline{L}},L],e_B)-m([{\\underline{L}},e_B],L)-m([L,e_B],{\\underline{L}}))=\\mathcal {O}({\\dot{\\wp }}r^{-1}),\\\\&\\Gamma ^B_{AA}=-m([{\\tilde{e}}_A,{\\tilde{e}}_B],{\\tilde{e}}_A).\\end{split}$ Inserting the expressions we have derived for the connection coefficients into (REF ) and using the relations (REF ) gives $\\begin{split}\\Box _m&= -{\\underline{L}}L -\\frac{n-1}{2{\\tilde{r}}}({\\underline{L}}-L)+\\frac{1}{{\\tilde{r}}^2}\\sum _{\\Omega }\\Omega ^2 -[L,{\\underline{L}}]^LL\\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1})L+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2}){\\underline{L}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2} r^{-2})\\Omega ,\\end{split}$ The expansion (REF ) follows from (REF ) and the fact that, by (REF ), $[L,{\\underline{L}}]^L=\\mathcal {O}({\\dot{\\wp }}^{\\le 2})$ , but we will need a more precise expression for this commutator to derive (REF ).", "To prove (REF ) first note that, with the notation ${\\tilde{}}={\\tilde{r}}^{\\frac{n-1}{2}}$ , $\\begin{split}{\\tilde{r}}^{\\frac{n-1}{2}}\\Box =\\Box {\\tilde{}}-\\Box ({\\tilde{r}}^{\\frac{n-1}{2}})- 2 m^{IJ} (e_I {\\tilde{r}}^{\\frac{n-1}{2}})(e_J)=:I+II+III.\\end{split}$ In expanding the terms $I$ , $II$ , $III$ we use the notation ${\\mathrm {Err}}$ to denote error terms which are acceptable on the right-hand side of (REF ).", "Starting with $I$ , by (REF ), $\\begin{split}I=-{\\underline{L}}L{\\tilde{}}+\\frac{1}{{\\tilde{r}}^2}\\sum \\Omega ^2{\\tilde{}}-\\frac{n-1}{2{\\tilde{r}}}{\\tilde{r}}^{\\frac{n-1}{2}}({\\underline{L}}-L)-\\frac{n-1}{2{\\tilde{r}}}({\\underline{L}}-L){\\tilde{r}}^{\\frac{n-1}{2}}+{\\mathrm {Err}}.\\end{split}$ For $II$ we use (REF ), the more precise expression (REF ) for $\\Box _m$ (applied to ${\\tilde{r}}^{\\frac{n-1}{2}}$ ), and the usual decomposition $L=L_{\\mathrm {temp}}+\\mathcal {O}(r^{-2})T$ , with $L_{\\mathrm {temp}}{\\tilde{r}}=1$ , to write $\\begin{split}II&=\\frac{n-1}{2}{\\underline{L}}{\\tilde{r}}^{\\frac{n-1}{2}-1}+\\frac{n-1}{2{\\tilde{r}}} ({\\underline{L}}-L){\\tilde{r}}^{\\frac{n-1}{2}}+{\\mathrm {Err}}\\\\&=\\frac{n-1}{2{\\tilde{r}}}({\\underline{L}}-L){\\tilde{r}}^{\\frac{n-1}{2}}-\\frac{(n-1)(n-3)}{4{\\tilde{r}}^2}{\\tilde{}}+\\frac{n-1}{2{\\tilde{r}}} ({\\underline{L}}+\\frac{n-1}{2{\\tilde{r}}}){\\tilde{r}}^{\\frac{n-1}{2}}\\\\&\\quad +\\frac{n-1}{2}{\\tilde{}}({\\underline{L}}-\\frac{1}{{\\tilde{r}}}){\\tilde{r}}^{-1}+[L,{\\underline{L}}]^LL{\\tilde{r}}^{\\frac{n-1}{2}}+{\\mathrm {Err}}.\\end{split}$ To treat the last line as an error, we need to use the more precise expansions (REF ) and (REF ) to get (note that each term by itself is only $\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1}){\\tilde{}}$ ) 
$\\begin{split}&\\frac{n-1}{2}{\\tilde{}}({\\underline{L}}-\\frac{1}{{\\tilde{r}}}){\\tilde{r}}^{-1}+[L,{\\underline{L}}]^LL{\\tilde{r}}^{\\frac{n-1}{2}}\\\\&=\\frac{n-1}{2{\\tilde{r}}^2}(L{\\tilde{r}}-1){\\tilde{}}+\\frac{n-1}{{\\tilde{r}}}[L,T]^L(L{\\tilde{r}}-1){\\tilde{}}+\\frac{n-1}{{\\tilde{r}}}([L,T]^L-{\\tilde{r}}^{-1}T{\\tilde{r}}){\\tilde{}}=\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}){\\tilde{}},\\end{split}$ and hence $\\begin{split}II=\\frac{n-1}{2{\\tilde{r}}}({\\underline{L}}-L){\\tilde{r}}^{\\frac{n-1}{2}}-\\frac{(n-1)(n-3)}{4{\\tilde{r}}^2}{\\tilde{}}+\\frac{n-1}{2{\\tilde{r}}}({\\underline{L}}+\\frac{n-1}{2{\\tilde{r}}}){\\tilde{r}}^{\\frac{n-1}{2}}+{\\mathrm {Err}}.\\end{split}$ For the term $III$ , since $e_A{\\tilde{r}}=0$ , $\\begin{split}III&=(L{\\tilde{r}}^{\\frac{n-1}{2}}){\\underline{L}}+({\\underline{L}}{\\tilde{r}}^{\\frac{n-1}{2}})L\\\\&=\\frac{n-1}{2{\\tilde{r}}}{\\tilde{r}}^{\\frac{n-1}{2}}({\\underline{L}}-L)+(L)({\\underline{L}}+\\frac{n-1}{2{\\tilde{r}}}){\\tilde{r}}^{\\frac{n-1}{2}}+{\\mathrm {Err}}.\\end{split}$ Equation (REF ) now follows by adding the expansions for $I$ , $II$ , and $III$ , and using the observation that ${\\tilde{r}}^{\\frac{n-1}{2}}(L+\\frac{n-1}{2{\\tilde{r}}})=L{\\tilde{}}+{\\mathrm {Err}}$ .", "Lemma REF and equations (REF ) and (REF ) yield the following representation for $\\mathcal {P}_{\\mathrm {graph}}$ : ${\\tilde{r}}^{\\frac{n-1}{2}}\\mathcal {P}_{\\mathrm {graph}}&= -(1+\\mathcal {O}(r^{-4})){\\underline{L}}L{\\tilde{}}+\\frac{1+\\mathcal {O}(r^{-5})}{{\\tilde{r}}^2}\\sum \\Omega ^2{\\tilde{}}-\\frac{(n-1)(n-3)}{4{\\tilde{r}}^2}{\\tilde{}}\\nonumber \\\\&\\quad +\\mathcal {O}(r^{-4})L^2{\\tilde{}}+\\mathcal {O}(r^{-4}){\\underline{L}}^2{\\tilde{}}+\\mathcal {O}(r^{-5})\\Omega L{\\tilde{}}+\\mathcal {O}(r^{-5}){\\underline{L}}\\Omega {\\tilde{}}\\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}+r^{-5})L{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}r^{-2}+r^{-5}){\\underline{L}}{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}r^{-2}+r^{-6})\\Omega {\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}r^{-2}+r^{-6}){\\tilde{}}.\\nonumber $ This is the representation we will use to derive multiplier identities in the exterior.", "Before starting on these multiplier identities, we calculate the equations satisfied by higher order derivatives of ${\\tilde{}}$ .", "Our goal is to calculate the analogous equation to (REF ) satisfied by ${\\tilde{r}}L$ , $T$ , and $\\Omega $ applied to ${\\tilde{}}$ .", "Since, due of the presence of parameters, these vectorfields do not commute, we start by deriving an estimate for the commutator of a string of them, valid in the hyperboloidal region where they are defined.", "Lemma 8.3 If $X_1,\\dots ,X_k\\in \\lbrace {\\tilde{r}}L,\\Omega ,T\\rbrace $ are $k$ vectorfields with $k_1$ factors of ${\\tilde{r}}L$ , $k_2$ factors of $\\Omega $ and $k_3$ factors of $T$ , then for any function $$ , $\\begin{split}X_k\\dots X_1= ({\\tilde{r}}L)^{k_1}\\Omega ^{k_2} T^{k_3}+\\sum _{j_1+j_2+j_3\\le k-1}\\mathcal {O}({\\dot{\\wp }}^{\\le 2k-2(j_1+j_2+j_3)})({\\tilde{r}}L)^{j_1}\\Omega ^{j_2}T^{j_3}.\\end{split}$ The proof is by induction on $k$ .", "For $k=2$ the statement follows from Lemma REF .", "For the induction step, suppose there are ${\\tilde{k}}_1$ , ${\\tilde{k}}_2$ , and ${\\tilde{k}}_3$ factors of ${\\tilde{r}}L$ , $\\Omega $ , and $T$ , respectively, among $X_1,\\dots ,X_{k-1}$ .", "Then $\\begin{split}X_k\\dots X_1=X_k({\\tilde{r}}L)^{k_1}\\Omega ^k_2 T^{k_3}+X_k\\sum _{j_1+j_2+j_3\\le k-2}\\mathcal {O}({\\dot{\\wp }}^{\\le 
"Lemma REF and equations (REF ) and (REF ) yield the following representation for $\mathcal {P}_{\mathrm {graph}}$ : ${\tilde{r}}^{\frac{n-1}{2}}\mathcal {P}_{\mathrm {graph}}&= -(1+\mathcal {O}(r^{-4})){\underline{L}}L{\tilde{}}+\frac{1+\mathcal {O}(r^{-5})}{{\tilde{r}}^2}\sum \Omega ^2{\tilde{}}-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}{\tilde{}}\nonumber \\&\quad +\mathcal {O}(r^{-4})L^2{\tilde{}}+\mathcal {O}(r^{-4}){\underline{L}}^2{\tilde{}}+\mathcal {O}(r^{-5})\Omega L{\tilde{}}+\mathcal {O}(r^{-5}){\underline{L}}\Omega {\tilde{}}\\&\quad +\mathcal {O}({\dot{\wp }}+r^{-5})L{\tilde{}}+\mathcal {O}({\dot{\wp }}r^{-2}+r^{-5}){\underline{L}}{\tilde{}}+\mathcal {O}({\dot{\wp }}r^{-2}+r^{-6})\Omega {\tilde{}}+\mathcal {O}({\dot{\wp }}r^{-2}+r^{-6}){\tilde{}}.\nonumber $ This is the representation we will use to derive multiplier identities in the exterior.", "Before starting on these multiplier identities, we calculate the equations satisfied by higher order derivatives of ${\tilde{}}$ .", "Our goal is to calculate the analogous equation to (REF ) satisfied by ${\tilde{r}}L$ , $T$ , and $\Omega $ applied to ${\tilde{}}$ .", "Since, due to the presence of parameters, these vectorfields do not commute, we start by deriving an estimate for the commutator of a string of them, valid in the hyperboloidal region where they are defined.", "Lemma 8.3 If $X_1,\dots ,X_k\in \lbrace {\tilde{r}}L,\Omega ,T\rbrace $ are $k$ vectorfields with $k_1$ factors of ${\tilde{r}}L$ , $k_2$ factors of $\Omega $ and $k_3$ factors of $T$ , then for any function $$ , $\begin{split}X_k\dots X_1= ({\tilde{r}}L)^{k_1}\Omega ^{k_2} T^{k_3}+\sum _{j_1+j_2+j_3\le k-1}\mathcal {O}({\dot{\wp }}^{\le 2k-2(j_1+j_2+j_3)})({\tilde{r}}L)^{j_1}\Omega ^{j_2}T^{j_3}.\end{split}$ The proof is by induction on $k$ .", "For $k=2$ the statement follows from Lemma REF .", "For the induction step, suppose there are ${\tilde{k}}_1$ , ${\tilde{k}}_2$ , and ${\tilde{k}}_3$ factors of ${\tilde{r}}L$ , $\Omega $ , and $T$ , respectively, among $X_1,\dots ,X_{k-1}$ .", "Then $\begin{split}X_k\dots X_1=X_k({\tilde{r}}L)^{k_1}\Omega ^{k_2} T^{k_3}+X_k\sum _{j_1+j_2+j_3\le k-2}\mathcal {O}({\dot{\wp }}^{\le 2k-2-2(j_1+j_2+j_3)})({\tilde{r}}L)^{j_1}\Omega ^{j_2}T^{j_3}=I+II.\end{split}$ The term $II$ can be put in the desired form using the induction hypothesis.", "For the first term, if $X_k={\tilde{r}}L$ , if $X_k=\Omega $ and ${\tilde{k}}_1=0$ , or if $X_k=T$ and ${\tilde{k}}_1={\tilde{k}}_2=0$ , then this is already of the desired form.", "If $X_k=\Omega $ and ${\tilde{k}}_1\ne 0$ , then by Lemma REF the first term is $\begin{split}({\tilde{r}}L) \Omega ({\tilde{r}}L)^{{\tilde{k}}_1-1}\Omega ^{{\tilde{k}}_2} T^{{\tilde{k}}_3}+(\mathcal {O}({\dot{\wp }}r^{-1}){\tilde{r}}L +\mathcal {O}({\dot{\wp }}r^{-2})\Omega +\mathcal {O}({\dot{\wp }}r^{-2})T)({\tilde{r}}L)^{{\tilde{k}}_1-1}\Omega ^{{\tilde{k}}_2} T^{{\tilde{k}}_3}.\end{split}$ The second term can again be put in the desired form by the induction hypothesis.", "Similarly by the induction hypothesis we can rearrange the first $k-1$ derivatives in the first term to put this in the desired form.", "If $X_k=T$ and ${\tilde{k}}_1=0$ but ${\tilde{k}}_2\ne 0$ then by Lemma REF we can write $I$ as $\begin{split}\Omega T\Omega ^{{\tilde{k}}_2-1}T^{{\tilde{k}}_3}+(\mathcal {O}({\dot{\wp }}){\tilde{r}}L + \mathcal {O}({\dot{\wp }})\Omega +\mathcal {O}({\dot{\wp }})T)\Omega ^{{\tilde{k}}_2-1}T^{{\tilde{k}}_3},\end{split}$ which, by the induction hypothesis, can be arranged into the desired form, using the same argument as above.", "The case where $X_k=T$ and ${\tilde{k}}_1\ne 0$ is similar.", "In view of Lemma REF , in order to estimate $X_k\dots X_1$ , with $X_i$ as in the lemma, it suffices to consider only the rearrangement $({\tilde{r}}L)^{k_1}\Omega ^{k_2}T^{k_3}$ , so it suffices to consider commutators with (REF ) in this order.", "This commutator is calculated in the next lemma.", "For this, we let ${\widetilde{\mathcal {P}}}$ denote the non-perturbative part of the operator on the right-hand side of (REF ), that is, $\begin{split}{\widetilde{\mathcal {P}}}:=-{\underline{L}}L+\frac{1}{{\tilde{r}}^2}\sum \Omega ^2-\frac{(n-1)(n-3)}{4{\tilde{r}}^2}.\end{split}$ Lemma 8.4 For any function $$ and any integers $k_1,k_2,k_3\ge 0$ , and with ${\tilde{}}={\tilde{r}}^{\frac{n-1}{2}}$ and $k=k_1+k_2+k_3$ , $({\tilde{r}}L+1)^{k_1}\Omega ^{k_2}T^{k_3}({\tilde{r}}^{\frac{n-1}{2}}\mathcal {P})={\widetilde{\mathcal {P}}}(({\tilde{r}}L)^{k_1}\Omega ^{k_2}T^{k_3}{\tilde{}})+{\mathrm {Err}}_{k_1,k_2,k_3}[{\tilde{}}],\\$ where ${\mathrm {Err}}_{k_1,k_2,k_3}[{\tilde{}}]&=\sum _{j=0}^{k_1-1}c_{j,k_1}{\widetilde{\mathcal {P}}}_1(({\tilde{r}}L)^{j}\Omega ^{k_2}T^{k_3}{\tilde{}})+\mathcal {O}(r^{-4})L^2({\tilde{r}}L)^{k_1}\Omega ^{k_2}T^{k_3}{\tilde{}}+\mathcal {O}(r^{-4}){\underline{L}}^2({\tilde{r}}L)^{k_1}\Omega ^{k_2}T^{k_3}{\tilde{}}\nonumber \\&\quad +\mathcal {O}(r^{-5})\Omega L({\tilde{r}}L)^{k_1}\Omega ^{k_2}T^{k_3}{\tilde{}}+\mathcal {O}(r^{-5}){\underline{L}}\Omega ({\tilde{r}}L)^{k_1}\Omega ^{k_2}T^{k_3}{\tilde{}}\nonumber \\&\quad +\mathcal {O}({\dot{\wp }}^{\le 2k+2}+r^{-5})\sum _{j_1+j_2+j_3\le k}L({\tilde{r}}L)^{j_1}\Omega ^{j_2}T^{j_3}{\tilde{}}\nonumber \\&\quad +\mathcal {O}({\dot{\wp }}^{\le 2k+2} r^{-2}+r^{-4})\sum _{j_1+j_2+j_3\le k}{\underline{L}}({\tilde{r}}L)^{j_1}\Omega ^{j_2}T^{j_3}{\tilde{}}\\&\quad +\mathcal {O}({\dot{\wp }}^{\le 2k+2} r^{-2}+r^{-6})\sum _{j_1+j_2+j_3\le k}\Omega ({\tilde{r}}L)^{j_1}\Omega ^{j_2}T^{j_3}{\tilde{}}\nonumber \\&\quad +\mathcal 
{O}({\\dot{\\wp }}^{\\le 2k+2} r^{-2}+r^{-5})\\sum _{j_1+j_2+j_3\\le k}({\\tilde{r}}L)^{j_1}\\Omega ^{j_2}T^{j_3}{\\tilde{}},\\nonumber $ for some constants $c_{j,k_1}$ (which are nonzero only if $k_1\\ge 1$ ), with $c_{k_1-1,k_1}=-k_1$ , and where $\\begin{split}{\\widetilde{\\mathcal {P}}}_1=LL+\\frac{1}{{\\tilde{r}}^2}\\sum \\Omega ^2-\\frac{(n-1)(n-3)}{4{\\tilde{r}}^2}.\\end{split}$ Note that the terms involving ${\\widetilde{\\mathcal {P}}}_1$ are what would come up by commuting $({\\tilde{r}}L+1)^{k_1}$ if the parameters were treated as fixed.", "The proof is by induction on $k$ , applied to every term on the right-hand side of (REF ).", "The treatment of the different terms is similar, so here we present the details only for ${\\underline{L}}L{\\tilde{}}$ .", "Starting with $T$ , by Lemma REF , and recalling that ${\\underline{L}}=2T-L$ , $\\begin{split}T{\\underline{L}}L{\\tilde{}}&= {\\underline{L}}L T{\\tilde{}}+[T,{\\underline{L}}]L{\\tilde{}}+{\\underline{L}}[T,L]{\\tilde{}}\\\\&={\\underline{L}}L T{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2})L L{\\tilde{}}+ \\mathcal {O}({\\dot{\\wp }}^{\\le 2})TL{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}){\\underline{L}}T{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}){\\underline{L}}\\Omega {\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2})\\Omega L{\\tilde{}}\\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 3})L{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-2}){\\underline{L}}{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-2})\\Omega {\\tilde{}}\\\\&={\\underline{L}}L T{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2})L L{\\tilde{}}+ \\mathcal {O}({\\dot{\\wp }}^{\\le 2})LT{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}){\\underline{L}}T{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}){\\underline{L}}\\Omega {\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2})\\Omega L{\\tilde{}}\\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 3})L{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-2}){\\underline{L}}{\\tilde{}}+\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-2})\\Omega {\\tilde{}},\\end{split}$ which has the desired structure.", "We can then inductively apply this same identity together with Lemmas REF and REF to conclude that $\\begin{split}T^{k_3}{\\underline{L}}L{\\tilde{}}={\\underline{L}}LT^{k_3}{\\tilde{}}+{\\mathrm {Err}},\\end{split}$ where ${\\mathrm {Err}}$ has the structure given in the statement of the lemma.", "Next we apply $\\Omega $ to (REF ).", "Note that $\\Omega $ applied to ${\\mathrm {Err}}$ in (REF ) has the desired structure by Lemmas REF and REF .", "For the main term, again by Lemma REF , and with $\\Psi =T^k{\\tilde{}}$ , $\\begin{split}\\Omega {\\underline{L}}L \\Psi &= {\\underline{L}}L \\Omega \\Psi +[\\Omega ,{\\underline{L}}]L\\Psi +{\\underline{L}}[\\Omega ,L]\\Psi \\\\&={\\underline{L}}L \\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }}){\\tilde{r}}L L\\Psi +\\mathcal {O}({\\dot{\\wp }})\\Omega L\\Psi +\\mathcal {O}({\\dot{\\wp }})TL\\Psi +\\mathcal {O}({\\dot{\\wp }}r^{-2}){\\underline{L}}\\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }}r^{-2}){\\underline{L}}T\\Psi \\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1})L\\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2})\\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}){\\underline{L}}\\Psi \\\\&={\\underline{L}}L \\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }})L({\\tilde{r}}L)\\Psi +\\mathcal {O}({\\dot{\\wp }}) L\\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }})LT\\Psi +\\mathcal {O}({\\dot{\\wp 
}}r^{-2}){\\underline{L}}\\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }}r^{-2}){\\underline{L}}T\\Psi \\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 3})L\\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-2})\\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-2}){\\underline{L}}\\Psi ,\\end{split}$ which has the desired structure.", "Here we have used the fact that ${\\tilde{r}}L^2\\Psi = L({\\tilde{r}}L)\\Psi +\\mathcal {O}(1+{\\dot{\\wp }})L\\Psi $ .", "As for $T^{k_3}$ we can apply this identity inductively and use Lemmas REF and REF to conclude that $\\begin{split}\\Omega ^{k_2}T^{k_3}{\\underline{L}}L{\\tilde{}}={\\underline{L}}L\\Omega ^{k_2}T^{k_3}{\\tilde{}}+{\\mathrm {Err}},\\end{split}$ where ${\\mathrm {Err}}$ has the structure given in the statement of the lemma.", "Finally we apply ${\\tilde{r}}L+1$ to (REF ).", "Again by Lemmas REF and REF the contribution of ${\\mathrm {Err}}$ in (REF ) has the desired form.", "For the main term we have, using Lemma REF and with $\\Psi =\\Omega ^{k_2}T^{k_1}{\\tilde{}}$ , $\\begin{split}({\\tilde{r}}L+1){\\underline{L}}L \\Psi &= {\\underline{L}}L({\\tilde{r}}L+1)\\Psi +{\\tilde{r}}[L,{\\underline{L}}]L\\Psi +[{\\tilde{r}},{\\underline{L}}]L^2\\Psi +{\\underline{L}}([{\\tilde{r}},L]L\\Psi )\\\\&={\\underline{L}}L({\\tilde{r}}L+1)\\Psi -{\\underline{L}}L\\Psi +LL\\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}){\\tilde{r}}L L\\Psi +\\mathcal {O}({\\dot{\\wp }}r^{-1})TL\\Psi \\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1})\\Omega L\\Psi +\\mathcal {O}({\\dot{\\wp }}r^{-2})L\\Psi \\\\&={\\underline{L}}L({\\tilde{r}}L+1)\\Psi -{\\underline{L}}L\\Psi +LL\\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 2})L ({\\tilde{r}}L)\\Psi +\\mathcal {O}({\\dot{\\wp }}r^{-1})LT\\Psi \\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1}) L\\Omega \\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 3})L\\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-2}){\\underline{L}}\\Psi +\\mathcal {O}({\\dot{\\wp }}^{\\le 3}r^{-3})\\Omega \\Psi .\\end{split}$ The terms $-{\\underline{L}}L\\Psi +LL\\Psi $ will contribute to the terms involving $\\mathcal {P}_1$ on the right-hand side of (REF ) and the remaining terms have the expected form.", "The desired structure now follows by inductively applying this identity and using Lemmas REF and REF .", "Here the commutators with $L^2\\Psi $ and the remaining terms on the right-hand side of (REF ) are treated inductively in a similar way as with ${\\underline{L}}L$ above.", "We end this subsection by deriving expansions for the source and the cubic terms in equation (REF ).", "The cubic term refers to the part of the term (recall that $\\varphi $ and $\\phi $ are related by the conjugation (REF )) $\\begin{split}\\frac{\\nabla ^\\mu \\varphi \\nabla ^\\nu \\varphi }{1+\\nabla ^\\alpha v \\nabla _\\alpha v}\\nabla ^2_{\\mu \\nu }\\varphi \\end{split}$ in (REF ) where no factors of $Q$ appear, which we expect to be the most difficult term in the nonlinearity.", "Recall from (REF ), that in the exterior region $\\begin{split}Q\\equiv Q_\\wp =Q(rA_\\ell \\Theta -\\gamma \\langle r\\rangle \\ell )=Q({\\tilde{r}}).\\end{split}$ In view of (REF ) the source term $\\mathcal {F}_0$ is given by $\\begin{split}\\mathcal {F}_0=\\Box _m Q-(1+\\nabla ^\\alpha Q \\nabla _\\alpha Q)^{-1}\\nabla ^\\mu Q\\nabla ^\\nu Q\\nabla ^2_{\\mu \\nu }Q.\\end{split}$ The more precise structure of $\\mathcal {F}_0$ is calculated in the next lemma.", "Lemma 8.5 The source term $\\mathcal {F}_0$ satisfies the following estimate in the hyperboloidal region $\\mathcal 
{C}_{\\mathrm {hyp}}$ : $\\begin{split}{{\\bf T}}^k\\mathcal {F}_0= \\mathcal {O}({\\dot{\\wp }}^{1+k\\le \\cdot \\le 3+k}r^{-n+1}),\\quad k=1,2,3.\\end{split}$ Recall that if $Q$ were a maximal embedding then $\\mathcal {F}_0$ would vanish.", "In particular (by a slight abuse of notation we are identifying $Q(y)=Q(|y|)$ with a function of a single variable), $Q$ satisfies equation (REF ).", "Moreover, $\\Omega Q=0$ by construction.", "It follow from these facts and Lemma REF that $\\begin{split}\\mathcal {F}_0=\\mathcal {O}({\\dot{\\wp }}^{\\le 3})Q^{\\prime }+\\mathcal {O}({\\dot{\\wp }}r)Q^{\\prime \\prime },\\end{split}$ which proves the desired bound for $k=0$ .", "The higher order bounds are obtained similarly by differentiating the equation.", "Turning to (REF ), we have the following expansion of the purely cubic part of this nonlinearity.", "Lemma 8.6 $\\nabla ^\\mu \\varphi \\nabla ^\\nu \\varphi \\nabla ^2_{\\mu \\nu }\\varphi $ can be written as a linear combination of terms of the following forms in the hyperboloidal region $\\mathcal {C}_{\\mathrm {hyp}}$ : Quasilinear terms: $(L\\varphi )^2{\\underline{L}}^2\\varphi $ , $(L\\varphi {\\underline{L}}\\varphi ){\\underline{L}}L\\varphi $ , $({\\underline{L}}\\varphi )^2L^2\\varphi $ , $(L\\varphi e_A\\varphi )e_A{\\underline{L}}\\varphi $ , $({\\underline{L}}\\varphi e_A\\varphi )e_AL\\varphi $ , $(e_A\\varphi e_B\\varphi )e_Ae_B\\varphi $ .", "Semilinear terms: $\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-3})({\\underline{L}}\\varphi )^2L\\varphi $ , $\\mathcal {O}({\\dot{\\wp }}r^{-3})e_A\\varphi $ , $\\mathcal {O}({\\dot{\\wp }}^{\\le 2})(L\\varphi )^2{\\underline{L}}\\varphi $ , $\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1}){\\underline{L}}\\varphi L\\varphi e_A \\varphi $ , $\\mathcal {O}({\\dot{\\wp }}r^{-1}){\\underline{L}}\\varphi e_A\\varphi e_B\\varphi $ , $\\mathcal {O}({\\dot{\\wp }})L\\varphi e_A\\varphi e_B\\varphi $ , $\\mathcal {O}(1)e_A\\varphi e_B\\varphi e_C\\varphi $ .", "This follows by writing this expression as $\\begin{split}(m^{-1})^{II^{\\prime }}(m^{-1})^{JJ^{\\prime }}(D_{I^{\\prime }}\\varphi D_{J^{\\prime }}\\varphi )(D_{I}D_J\\varphi -\\Gamma _{IJ}^KD_K\\varphi ).\\end{split}$ The relevant connection coefficients can be calculated using the Koszul formula as in the proof of Lemma REF and are given by $\\begin{split}&\\Gamma _{LL}^{\\underline{L}}=0,\\quad \\Gamma _{LL}^L=\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-2}),\\quad \\Gamma _{LL}^A=\\mathcal {O}({\\dot{\\wp }}r^{-3}),\\quad \\Gamma _{{\\underline{L}}L}^{\\underline{L}}=0, \\quad \\Gamma _{{\\underline{L}}L}^L=\\mathcal {O}({\\dot{\\wp }}^{\\le 2}), \\quad \\Gamma _{{\\underline{L}}L}^{\\underline{L}}=0,\\\\&\\Gamma _{{\\underline{L}}L}^A=\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1}),\\quad \\Gamma _{A L}^L=\\mathcal {O}({\\dot{\\wp }}^{-2}r^{-1}),\\quad \\Gamma _{AL}^{\\underline{L}}=0,\\quad \\Gamma _{AL}^B=\\mathcal {O}({\\dot{\\wp }}r^{-1}), \\quad \\Gamma _{A{\\underline{L}}}^{\\underline{L}}=\\mathcal {O}({\\dot{\\wp }}^{\\le 2}r^{-1}),\\\\&\\Gamma _{A{\\underline{L}}}^L=0,\\quad \\Gamma _{A{\\underline{L}}}^B=\\mathcal {O}({\\dot{\\wp }}), \\quad \\Gamma _{AB}^L=\\mathcal {O}({\\dot{\\wp }}),\\quad \\Gamma _{AB}^{\\underline{L}}=\\mathcal {O}({\\dot{\\wp }}r^{-1}), \\quad \\Gamma _{AB}^C=\\mathcal {O}(1).\\end{split}$" ], [ "The Main $r^p$ Multiplier Identity", "This section contains the main $r^p$ multiplier identity for $\\mathcal {P}_{\\mathrm {graph}}$ .", "Recall that this operator arises when using the conjugated variable $\\varphi $ 
defined in terms of $\\phi $ in (REF ).", "Since we will be interested in the exterior hyperboloidal region, we fix a cutoff function $\\chi _{\\ge {\\tilde{R}}}$ supported in the region $\\lbrace r\\ge {\\tilde{R}}\\rbrace \\subseteq \\mathcal {C}_{\\mathrm {hyp}}$ .", "We will also use the notation $\\chi _{\\le {\\tilde{R}}}=1-\\chi _{\\ge {\\tilde{R}}}$ .", "Given $$ with $\\begin{split}\\mathcal {P}_{\\mathrm {graph}}=f,\\end{split}$ we let ${\\tilde{}}={\\tilde{r}}^{\\frac{n-1}{2}}$ and ${\\tilde{f}}={\\tilde{r}}^{\\frac{n-1}{2}}f$ as usual.", "Suppose $X_1,\\dots ,X_k$ , $k=k_1+k_2+k_3$ , are a collection of vectorfields from $\\lbrace {\\tilde{r}}L, \\Omega , T\\rbrace $ , with $X_1,\\dots X_{k_3}=T$ , $X_{k_3+1},\\dots X_{k_3+k_2}=\\Omega $ , and $X_{k_3+k_2+1},\\dots X_k={\\tilde{r}}L$ .", "We let $\\begin{split}{\\tilde{}}_k=X_k\\dots X_1{\\tilde{}},\\qquad {\\tilde{f}}_k = X_k\\dots X_1{\\tilde{f}},\\end{split}$ and if the precise choice of the vectorfields is important we write $\\begin{split}{\\tilde{}}_k={\\tilde{}}_{k_1,k_2,k_2},\\qquad {\\tilde{f}}_{k}={\\tilde{f}}_{k_1,k_2,k_3}.\\end{split}$ The basic $r^p$ boundary and bulk energies are defined as follows.", "For any $p\\in [0,2]$ , $\\begin{split}&\\mathcal {E}^p_k(\\sigma )\\equiv \\mathcal {E}_p^k[](\\sigma ):=\\int _{\\Sigma _\\sigma } \\chi _{\\tilde{R}}{\\tilde{r}}^p (L{\\tilde{}}_k)^2 \\mathrm {d}\\theta \\mathrm {d}r,\\\\&\\mathcal {B}_{k}^{p}(\\sigma _1,\\sigma _2)\\equiv \\mathcal {B}_{k}^{p}[](\\sigma _1,\\sigma _2):=\\int _{\\sigma _1}^{\\sigma _2}\\int _{\\Sigma _\\tau }\\chi _{\\tilde{R}}{\\tilde{r}}^{p-1}\\big ((L{\\tilde{}}_k)^2+\\big (\\frac{2-p}{{\\tilde{r}}^2}\\big )(|\\Omega {\\tilde{}}_k|^2+{\\tilde{}}_k^2)\\big )\\mathrm {d}S \\mathrm {d}r \\mathrm {d}\\tau .\\end{split}$ When there is a need to distinguish between the vectorfields applied to ${\\tilde{}}$ we write $\\begin{split}\\mathcal {E}_{k_1,k_2,k_3}^p(\\sigma ) {\\ \\ \\text{and} \\ \\ }\\mathcal {B}_{k_1,k_2,k_3}^p(\\sigma _1,\\sigma _2)\\end{split}$ for the corresponding energies.", "We also define the standard energy (note that the definition agrees with $E$ in (REF ) when $k=0$ ) $\\begin{split}E_k(\\tau )&\\equiv E_k[](\\tau )\\\\&:=\\int _{\\Sigma _\\tau }\\chi _{\\le {\\tilde{R}}}(|\\partial \\partial ^k|^2+\\langle \\rho \\rangle ^{-2}|\\partial ^k|^2)\\mathrm {d}V+\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}(|\\partial _\\Sigma X^k |^2+r^{-2}|TX^k|^2+r^{-2}|X^k|^2) \\mathrm {d}V.\\end{split}$ Lemma 8.7 Suppose $\\mathcal {P}_{\\mathrm {graph}}=f$ and let ${\\tilde{}}={\\tilde{r}}^{\\frac{n-1}{2}}$ and ${\\tilde{f}}={\\tilde{r}}^{\\frac{n-1}{2}}f$ .", "If the bootstrap assumptions (REF )–() hold, then with the notation introduced in (REF ), (REF ), (REF ), with $k=k_1+k_2+k_3$ , and for any $0\\le p\\le 2$ and any $\\tau _1<\\tau _2$ , $\\begin{split}&\\sum _{j\\le k_1}\\big (\\sup _{\\tau \\in [\\tau _1,\\tau _2]}\\mathcal {E}_{j,k_2,k_3}^p(\\tau )+\\mathcal {B}_{j,k_2,k_3}^{p-1}(\\tau _1,\\tau _2)\\big )\\\\&\\le C\\sum _{j\\le k_1}\\mathcal {E}_{j,k_2,k_3}^{p}(\\tau _1)+C\\sum _{j\\le k_1}\\int _{\\tau _1}^{\\tau _2}\\int _{\\Sigma _\\tau }\\chi _{\\tilde{R}}{\\tilde{r}}^p {\\tilde{f}}_k (L{\\tilde{}}_{j,k_2,k_3} +\\mathcal {O}(r^{-5})\\Omega {\\tilde{}}_{j,k_2,k_3})\\mathrm {d}\\theta \\mathrm {d}r \\mathrm {d}\\tau \\\\&\\quad +C_{\\tilde{R}}\\int _{\\tau _1}^{\\tau _2}\\int _{\\Sigma _\\tau }|\\partial \\chi _{\\tilde{R}}|(|\\partial _{\\tau ,x}{\\tilde{}}_k|^2+|{\\tilde{}}_k|^2)\\mathrm {d}\\theta \\mathrm {d}r \\mathrm 
{d}\\tau \\\\&\\quad +C\\sum _{j\\le k}\\sup _{\\tau \\in [\\tau _1,\\tau _2]}E_j(\\tau )+C\\delta \\sup _{\\tau \\in [\\tau _1,\\tau _2]}\\sum _{j\\le k}\\mathcal {E}_j^p(\\tau )\\\\&\\quad +C\\delta \\sum _{j\\le k}\\int _{\\tau _1}^{\\tau _2}\\int _{\\Sigma _\\tau }\\chi _{\\tilde{R}}(|\\partial _{\\tau ,x}{\\tilde{}}_j|^2+{\\tilde{r}}^{-2}|{\\tilde{}}_j|^2){\\tilde{r}}^{-1-\\alpha }\\mathrm {d}\\theta \\mathrm {d}r \\mathrm {d}\\tau .\\end{split}$ Here $C$ and $C_{\\tilde{R}}$ are large constants constants and $\\delta =o(\\epsilon )+o({\\tilde{R}})$ is a small constant that is independent of $C$ and $C_{\\tilde{R}}$ .", "To simplify notation we write $\\chi $ for $\\chi _{\\tilde{R}}$ and ${\\tilde{}}_k$ for ${\\tilde{}}_{k_1,k_2,k_3}$ , and multiply each term in the expansion (REF ), (REF ) by $\\chi {\\tilde{r}}^p L{\\tilde{}}$ .", "$\\begin{split}-(1+\\mathcal {O}(r^{-4}))\\chi {\\underline{L}}L{\\tilde{}}_k L{\\tilde{}}_k {\\tilde{r}}^p&=-\\frac{1}{2}{\\underline{L}}((1+\\mathcal {O}(r^{-4}))\\chi (L{\\tilde{}}_k)^2{\\tilde{r}}^p)-\\frac{p}{2}\\chi {\\tilde{r}}^{p-1}(L{\\tilde{}}_k)^2\\\\&\\quad +\\mathcal {O}({\\dot{\\wp }})(L{\\tilde{}}_k)^2{\\tilde{r}}^p+\\mathcal {O}(r^{-4}){\\tilde{r}}^{p-1}(L{\\tilde{}}_k)^2+\\mathcal {O}(1)({\\underline{L}}\\chi ){\\tilde{r}}^p(L{\\tilde{}}_k)^2.\\end{split}$ Similarly, using also (REF ) and Cauchy-Schwarz, $\\begin{split}(1+\\mathcal {O}(r^{-4}))\\chi \\Omega ^2{\\tilde{}}_k L {\\tilde{}}_k {\\tilde{r}}^{p-2}&=-\\frac{1}{2}L((1+\\mathcal {O}(r^{-4}))\\chi (\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}}_k)^2{\\tilde{r}}^{p})-\\frac{2-p}{2}\\chi (\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}}_k)^2{\\tilde{r}}^{p-1}\\\\&\\quad +\\Omega ((1+\\mathcal {O}(r^{-4}))\\chi {\\tilde{r}}^{p-2}\\Omega {\\tilde{}}L{\\tilde{}}_k)\\\\&\\quad +\\mathcal {O}(1)(\\Omega \\chi )\\Omega {\\tilde{}}_kL{\\tilde{}}_k{\\tilde{r}}^{p-1}+\\mathcal {O}(1)(L\\chi )(\\frac{1}{{\\tilde{r}}}\\Omega {\\tilde{}}_k)^2{\\tilde{r}}^{p}\\\\&\\quad +\\mathcal {O}(r^{-4})\\chi (\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}}_k)^2{\\tilde{r}}^{p-1}+\\mathcal {O}({\\dot{\\wp }}r^{-2})\\chi (L{\\tilde{}}_k)^2{\\tilde{r}}^p\\\\&\\quad +\\mathcal {O}({\\dot{\\wp }}r^{-2})\\chi (\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}})^2{\\tilde{r}}^p+\\mathcal {O}({\\dot{\\wp }}r^{-4})\\chi ({\\underline{L}}{\\tilde{}})^2{\\tilde{r}}^p,\\end{split}$ and $\\begin{split}-(1+\\mathcal {O}(r^{-4}))\\chi {\\tilde{}}_k L{\\tilde{}}_k {\\tilde{r}}^{p-2} &=-\\frac{1}{2}L((1+\\mathcal {O}(r^{-4}))\\chi (\\frac{{\\tilde{}}_k}{{\\tilde{r}}})^2{\\tilde{r}}^p)-\\frac{2-p}{2}\\chi (\\frac{{\\tilde{}}_k}{{\\tilde{r}}})^2{\\tilde{r}}^{p-1}\\\\&\\quad +\\mathcal {O}(1)(L\\chi ){\\tilde{}}_k^2{\\tilde{r}}^{p-2}+\\mathcal {O}(r^{-4})\\chi (\\frac{{\\tilde{}}_k}{{\\tilde{r}}})^2{\\tilde{r}}^{p-1}+\\mathcal {O}({\\dot{\\wp }}r^{-2})(\\frac{{\\tilde{}}_k}{{\\tilde{r}}})^2{\\tilde{r}}^p.\\end{split}$ Note that integrating the last three identities already gives the desired control on the left-hand side of (REF ).", "Here the terms involving ${\\dot{\\wp }}$ can be integrated in $\\tau $ and absorbed by the left-hand side of (REF ) or the standard energy $E_k$ .", "Turning to the error terms, we first consider $\\begin{split}\\mathcal {O}(r^{-4})\\chi L^2{\\tilde{}}_k^2L{\\tilde{}}_k {\\tilde{r}}^p&=L(\\mathcal {O}(r^{-4})\\chi (L{\\tilde{}}_k)^2{\\tilde{r}}^p)+\\mathcal {O}(r^{-4})\\chi (L{\\tilde{}}_k)^2{\\tilde{r}}^{p-1}+\\mathcal {O}(1)(L\\chi )(L{\\tilde{}}_k)^2.\\end{split}$ Similarly, using also (REF ), $\\begin{split}\\mathcal 
{O}(r^{-4})\\chi {\\underline{L}}^2{\\tilde{}}_k L{\\tilde{}}_k {\\tilde{r}}^p&={\\underline{L}}(\\mathcal {O}(r^{-4})\\chi {\\underline{L}}{\\tilde{}}_k L{\\tilde{}}_k{\\tilde{r}}^p)+L(\\mathcal {O}(r^{-4})\\chi ({\\underline{L}}{\\tilde{}}_k)^2{\\tilde{r}}^p)+\\mathcal {O}(r^{-4})({\\underline{L}}{\\tilde{}}_k)^2{\\tilde{r}}^{p-1}\\\\&\\quad +\\mathcal {O}(r^{-5})\\chi L{\\tilde{}}_k{\\underline{L}}{\\tilde{}}_k {\\tilde{r}}^{p-1}+\\mathcal {O}({\\dot{\\wp }}r^{-4})L{\\tilde{}}_k{\\underline{L}}{\\tilde{}}_k {\\tilde{r}}^{p}+\\mathcal {O}(r^{-4})({\\underline{L}}\\chi )L{\\tilde{}}_k{\\underline{L}}{\\tilde{}}_k{\\tilde{r}}^{p}\\\\&\\quad \\mathcal {O}({\\dot{\\wp }}r^{-5})(\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}}_k){\\underline{L}}{\\tilde{}}_k{\\tilde{r}}^p+\\mathcal {O}(r^{-4})(L\\chi )({\\underline{L}}{\\tilde{}}_k)^2{\\tilde{r}}^p,\\end{split}$ and $\\begin{split}\\mathcal {O}(r^{-5})\\chi \\Omega L{\\tilde{}}_k L{\\tilde{}}_k {\\tilde{r}}^p=\\Omega (\\mathcal {O}(r)^{-5}\\chi (L{\\tilde{}}_k)^2{\\tilde{r}}^p)+\\mathcal {O}(r^{-4})\\chi (L{\\tilde{}}_k)^2{\\tilde{r}}^{p-1}+\\mathcal {O}(r^{-5})(\\Omega \\chi )(L{\\tilde{}}_k)^2{\\tilde{r}}^p.\\end{split}$ For the last term on the second line of (REF ) first observe that $\\begin{split}\\mathcal {O}(r^{-5})\\chi {\\underline{L}}\\Omega {\\tilde{}}_k L{\\tilde{}}_k {\\tilde{r}}^p &= {\\underline{L}}(\\mathcal {O}(r^{-5})\\chi \\Omega {\\tilde{}}_k L{\\tilde{}}_k {\\tilde{r}}^p)+\\mathcal {O}(r^{-5})\\chi \\Omega {\\tilde{}}_k{\\underline{L}}L{\\tilde{}}_k{\\tilde{r}}^p+\\mathcal {O}({\\dot{\\wp }}r^{-4})\\chi (\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}}_k)L{\\tilde{}}_k{\\tilde{r}}^p\\\\&\\quad +\\mathcal {O}(r^{-5})\\chi (\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}}_k)L{\\tilde{}}_k{\\tilde{r}}^p+\\mathcal {O}( r^{-4})({\\underline{L}}\\chi )(\\frac{\\Omega }{{\\tilde{r}}}{\\tilde{}}_k)L{\\tilde{}}_k{\\tilde{r}}^p\\end{split}$ To treat the second term on the right, we use equation (REF ) to solve for ${\\underline{L}}L{\\tilde{}}$ , and replace this term by $\\begin{split}\\mathcal {O}(r^{-5})\\chi \\Omega {\\tilde{}}_k(\\mathcal {P}_{\\mathrm {graph}}+{\\underline{L}}L){\\tilde{}}_k{\\tilde{r}}^p+\\mathcal {O}(r^{-5})\\chi \\Omega {\\tilde{}}_k {\\mathrm {Err}}_k [{\\tilde{}}]{\\tilde{r}}^p.\\end{split}$ These terms can then be treated using similar considerations as above and below.", "The term $\\chi {\\mathrm {Err}}_k[{\\tilde{}}]L {\\tilde{}}_k{\\tilde{r}}^p$ in (REF ) is treated using repeated applications of the product rule (for integration by parts) and Lemmas REF and REF to write this term as a total derivative plus acceptable terms.", "We discuss only the contribution of ${\\widetilde{\\mathcal {P}}}_1$ , where by an abuse of notation we write ${\\tilde{}}_{j}$ for ${\\tilde{}}_{j,k_2,k_3}$ .", "For the top order term ${\\widetilde{\\mathcal {P}}}_1{\\tilde{}}_{k_1-1}$ the favorable sign of the coefficient $c_{k_1-1,k_1}$ is important, as we will illustrate with the term $L^2{\\tilde{}}_{k_1-1}$ appearing in ${\\widetilde{\\mathcal {P}}}_1{\\tilde{}}_{k_1-1}$ , for which we write $\\begin{split}L^2{\\tilde{}}_{k_1-1}L{\\tilde{}}_{k_1} {\\tilde{r}}^2= L({\\tilde{r}}^{-1}{\\tilde{}}_{k_1})L{\\tilde{}}_{k_1}{\\tilde{r}}^2= (L{\\tilde{}}_{k_1})^2{\\tilde{r}}-\\frac{1}{2}L({\\tilde{r}}^2(L{\\tilde{}}_{k_1})^2)+\\mathcal {O}({\\dot{\\wp }}){\\tilde{}}_{k_1} L {\\tilde{}}_{k_1}.\\end{split}$ Here the first term on the right has a favorable sign after multiplication by $c_{k_1-1,k_1}$ and the other terms can be bounded by the energy 
fluxes.", "The other terms in ${\widetilde{\mathcal {P}}}_1{\tilde{}}_{k_1-1}$ are treated similarly, with a few more integrations by parts and commutations between $\Omega $ and $L$ , again with the sign of $c_{k_1-1,k_1}$ playing an important role for the main bulk term.", "The contributions of ${\widetilde{\mathcal {P}}}_1{\tilde{}}_{j}$ , $j\le k_1-2$ , are treated similarly where the error terms are absorbed inductively by adding a suitable multiple of the estimates for lower values of $k_1$ .", "For instance $\begin{split}\Omega ^2{\tilde{}}_j L{\tilde{}}_{k_1}= \frac{1}{{\tilde{r}}}\Omega {\tilde{}}_j\Omega {\tilde{}}_{k_1}+\Omega (\Omega {\tilde{}}_{j}L{\tilde{}}_{k_1})-L(\Omega {\tilde{}}_{j+1}\Omega {\tilde{}}_{k_1})+[L,\Omega ]{\tilde{}}_j\Omega {\tilde{}}_{k_1}-\Omega {\tilde{}}_{j}[\Omega ,L]{\tilde{}}_k,\end{split}$ and the first term on the right is bounded by $\delta |{\tilde{r}}^{-1}\Omega {\tilde{}}_{k_1}|^2{\tilde{r}}+C_\delta |{\tilde{r}}^{-1}\Omega {\tilde{}}_{j+1}|^2{\tilde{r}}$ .", "The first term is absorbed by a corresponding term coming from ${\widetilde{\mathcal {P}}}_1{\tilde{}}_{k_1-1}$ and the second term by a similar term in the multiplier identity for ${\tilde{}}_{j+1}$ (instead of ${\tilde{}}_{k_1}$ ), after adding a suitably large multiple of that identity.", "The contribution of the last four lines of (REF ) needs no further manipulations.", "Putting everything together and applying Cauchy-Schwarz we obtain (to be precise, as explained above, we should write this first for ${\tilde{}}={\tilde{}}_{1,k_2,k_3}$ and derive the corresponding estimate, and then inductively build up to ${\tilde{}}={\tilde{}}_{k_1,k_2,k_3}$ ) $\begin{split}{\tilde{f}}L{\tilde{}}{\tilde{r}}^p&=-\frac{1}{2}{\underline{L}}\big (\chi (L{\tilde{}})^2{\tilde{r}}^p+{\mathrm {Err}}_{\underline{L}}\big )-L\big ({\tilde{r}}^{p-2} \big ((\Omega {\tilde{}})^2+\frac{(n-1)(n-3)}{4}{\tilde{}}^2\big )+{\mathrm {Err}}_{L}\big )\\&\quad +\Omega ( {\mathrm {Err}}_\Omega )+{\mathrm {Err}}_{\mathrm {int}}+\mathcal {O}(\langle r\rangle ^{-5})\Omega {\tilde{}}{\tilde{f}}{\tilde{r}}^p\\&\quad -\frac{p}{2}\chi {\tilde{r}}^{p-1}(L{\tilde{}})^2-\frac{2-p}{2}\chi (\frac{\Omega }{{\tilde{r}}}{\tilde{}})^2{\tilde{r}}^{p-1}-\frac{(2-p)(n-1)(n-3)}{8}\chi ({\tilde{r}}^{-1}{\tilde{}})^2{\tilde{r}}^{p-1},\end{split}$ where $\begin{split}&{\mathrm {Err}}_{\underline{L}}\le C\chi \sum _{j\le k} ((\partial _{\tau ,x}{\tilde{}}_j)^2+{\tilde{r}}^{-2}{\tilde{}}_j^2),\\&{\mathrm {Err}}_{L}\le C\chi {\tilde{r}}^p\sum _{j\le k} ((L{\tilde{}})^2+{\tilde{r}}^{-2}(\Omega {\tilde{}})^2+{\tilde{r}}^{-2}{\tilde{}}_j^2+{\tilde{r}}^{-2}({\underline{L}}{\tilde{}})^2),\\&{\mathrm {Err}}_{\Omega }\le C\chi {\tilde{r}}^{p-1}\sum _{j\le k} ((L{\tilde{}})^2+{\tilde{r}}^{-2}(\Omega {\tilde{}})^2+{\tilde{r}}^{-2}{\tilde{}}_j^2+{\tilde{r}}^{-2}({\underline{L}}{\tilde{}})^2),\\&{\mathrm {Err}}_{\mathrm {int}}\le C_{\tilde{R}}\sum _{j\le k}|\partial \chi |(|\partial _{\tau ,x}{\tilde{}}_j|^2+|{\tilde{}}_j|^2)+C\chi \sum _{j\le k}(|\partial _{\tau ,x}{\tilde{}}_j|^2+{\tilde{r}}^{-2}|{\tilde{}}_j|^2){\tilde{r}}^{-1-\alpha }\\&\phantom{{\mathrm {Err}}_{\mathrm {int}}\le }+\mathcal {O}({\dot{\wp }})\sum _{j\le k}\big ((L{\tilde{}}_j)^2{\tilde{r}}^p+({\tilde{r}}^{-1}\Omega 
{\\tilde{}}_j)^2+({\\tilde{r}}^{-1}{\\tilde{}}_j)^2+{\\tilde{r}}^{-2}({\\underline{L}}{\\tilde{}}_j)^2\\big ).\\end{split}$ The desired estimate (REF ) now follows from integrating (REF ).", "Here note when integrating (REF ) we also encounter a term involving $\\mathrm {div}\\,V$ , for $V=L$ , ${\\underline{L}}$ , or $\\Omega $ (from the difference of $V^\\mu \\partial _\\mu u$ and $\\partial _\\mu (V^\\mu u$ )), but these terms come with ${\\dot{\\wp }}$ , which has extra $\\tau $ integrability, and can be absorbed." ], [ "Nonlinear Energy and Local Energy Decay Estimates", "In this section we again use the variable $\\phi $ , not the conjugated version $\\varphi $ (see (REF )).", "However, in view of the definition (REF ), and under our bootstrap assumptions, the estimates on $\\varphi $ easily transfer to estimates on $\\phi $ .", "As a first step in the proof of Proposition REF we apply the results of Section  to derive energy and local energy decay estimates for $\\phi $ .", "Let $\\begin{split}\\Vert \\phi \\Vert _{LE_k[\\sigma _1,\\sigma _2]}^2:=\\Vert \\chi _{\\le R}\\partial ^k\\phi \\Vert _{LE[\\sigma _1,\\sigma _2]}^2+\\Vert \\chi _{\\ge R}X^k\\phi \\Vert _{LE[\\sigma _1,\\sigma _2]}^2.\\end{split}$ Our goal is to prove the following result.", "Proposition 8.8 If the bootstrap assumptions (REF )–() are satisfied and $\\epsilon $ is sufficiently small, then $&\\sup _{\\sigma _1\\le \\sigma \\le \\sigma _2} E_k[\\phi ](\\sigma )+\\Vert \\phi \\Vert _{LE_k[\\sigma _1,\\sigma _2]}^2\\lesssim \\sum _{i\\le k}E_i[\\phi ](\\sigma _1)+\\epsilon ^2{}_0(\\sigma _1), \\quad k\\le M,\\\\&\\sup _{\\sigma _1\\le \\sigma \\le \\sigma _2} E_k[{{\\bf T}}\\phi ](\\sigma )+\\Vert {{\\bf T}}\\phi \\Vert _{LE_k[\\sigma _1,\\sigma _2]}^2\\lesssim \\sum _{i\\le k}E_i[{{\\bf T}}\\phi ](\\sigma _1)+\\epsilon ^2{}_1(\\sigma _1), \\quad k\\le M-1,\\\\&\\sup _{\\sigma _1\\le \\sigma \\le \\sigma _2} E_k[{{\\bf T}}^2\\phi ](\\sigma )+\\Vert {{\\bf T}}^2\\phi \\Vert _{LE_k[\\sigma _1,\\sigma _2]}^2\\lesssim \\sum _{i\\le k}E_i[{{\\bf T}}^2\\phi ](\\sigma _1)+\\epsilon ^2{}_2(\\sigma _1), \\quad k\\le M-2.,$ where ${}_j(\\sigma _1)=\\sigma _1^{-2-2j}$ if $k+3j\\le M-2$ and $_j(\\sigma _1)=1$ otherwise.", "We start with some estimates on the source term defined in (REF ) (see also Lemma REF ).", "Lemma 8.9 Under the assumptions of Proposition REF , and with $\\mathcal {R}_{\\sigma _1}^{\\sigma _2}=\\cup _{\\sigma =\\sigma _1}^{\\sigma _2}\\Sigma _\\sigma $ , $&\\Vert \\langle r\\rangle (\\chi _{r\\ge R}|X^k\\mathcal {F}_0|+\\chi _{r\\le R}|\\partial ^k\\mathcal {F}_0|)\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\delta _\\wp \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa },\\\\&\\Vert \\langle r\\rangle (\\chi _{r\\ge R}|X^k{{\\bf T}}\\mathcal {F}_0|+\\chi _{r\\le R}|\\partial ^k{{\\bf T}}\\mathcal {F}_0|)\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\delta _\\wp \\epsilon \\tau ^{-3},\\\\&\\Vert \\langle r\\rangle (\\chi _{r\\ge R}|X^k{{\\bf T}}^j\\mathcal {F}_0|+\\chi _{r\\le R}|\\partial ^k{{\\bf T}}^j\\mathcal {F}_0|)\\Vert _{L^2(\\mathcal {R}_{\\sigma _1}^{\\sigma _2})}\\lesssim \\delta _\\wp \\epsilon \\sigma _1^{-4+2\\kappa }+\\delta _\\wp \\Vert {{\\bf T}}^j\\phi \\Vert _{LE([\\sigma _1,\\sigma _2])},\\nonumber \\\\&\\phantom{\\Vert \\langle r\\rangle (\\chi _{r\\ge R}|X^k{{\\bf T}}^j\\mathcal {F}_0|+\\chi _{r\\le R}|\\partial ^k{{\\bf T}}^j\\mathcal {F}_0|)\\Vert _{L^2(\\mathcal {R}_{\\sigma _1}^{\\sigma _2})}\\lesssim } j=0,1,2.$ The exterior estimates follow from Lemma REF , Proposition REF , and Lemma REF .", 
"For the interior the spatial decay of the source term is not important, so the estimates follow simply by counting the number of ${{\\bf T}}$ derivatives and Proposition REF and Lemma REF .", "Note that even though the source term $\\mathcal {F}_0$ was derived using the conjugated variable $\\varphi $ in (REF ), bounding $s$ in (REF ) using our bootstrap assumptions, the same estimates are satisfied by the source term in the equation for $\\phi $ (see (REF )).", "We can now prove Proposition REF .", "Using the global coordinates $(,,)$ , after commuting any number of $\\partial _={{\\bf T}}$ derivatives we collect the leading order terms to write the equation in the form (REF ) with $\\mathcal {P}$ as in Section .", "We start with the proof of the estimates when $k=0$ .", "Applying Propositions REF and REF , the nonlinear terms (including products of derivatives of ${\\dot{\\wp }}$ and $\\phi $ ) can be estimated using the bootstrap assumptions (REF )–(), simply by treating them as quadratic.", "The contribution of the source terms is estimated using Lemma REF , and the contribution of $\\Omega _k$ (see Remark REF ) using Lemma REF , where we use the smallness of $\\delta _\\wp $ to absorb the $LE$ norms appearing in (REF ) and ().", "Note that by the same procedure we can prove the estimates for higher powers of ${{\\bf T}}$ without gaining extra decay (that is, by treating powers of ${{\\bf T}}$ which are higher than three as arbitrary derivatives).", "There is one point that deserves further explanation in this process.", "Among the error terms after commuting $\\partial _^k$ , there will be terms of the forms (recall that the part of the equation without spatial decay is given by (REF ); below $\\partial _y$ denotes an arbitrary tangential derivative of size one) $\\begin{split}\\mathcal {O}({\\dot{\\wp }}^{(j)})\\partial ^2_y{{\\bf T}}^{k-j}\\phi \\quad {\\ \\ \\text{and} \\ \\ }\\quad \\mathcal {O}({\\dot{\\wp }}^{(j)})(\\partial _+\\frac{n-1}{2})\\partial _{{\\bf T}}^{k-j}\\phi ,\\end{split}$ with $j\\ge 1$ , and the multipliers for the energy and LED estimates contain terms of the form $\\mathcal {O}(1)\\partial _{{\\bf T}}^k\\phi $ .", "Since $\\partial _\\tau {{\\bf T}}^k\\phi $ cannot be placed in the energy flux for ${{\\bf T}}^k\\phi $ in the hyperboloidal part of the foliation, some integration by parts are necessary to deal with these terms.", "For the error terms of the form $\\mathcal {O}({\\dot{\\wp }}^{(j)})\\partial ^2_y{{\\bf T}}^{k-j}\\phi $ , we can integrate by parts twice to obtain terms of the forms $\\begin{split}\\mathcal {O}({\\dot{\\wp }}^{(j)})\\partial _y{{\\bf T}}^{k-j+1}\\phi \\partial _y {{\\bf T}}^k\\phi , \\quad \\mathcal {O}({\\dot{\\wp }}^{(j+1)})\\partial _y{{\\bf T}}^{k-j}\\phi \\partial _y {{\\bf T}}^k\\phi ,\\quad \\mathcal {O}({\\dot{\\wp }}^{(j)}^{-1})\\partial _y{{\\bf T}}^{k-j}\\phi \\partial _{{\\bf T}}^k\\phi .\\end{split}$ These can be bounded, respectively, as (where $L^2_y$ denotes $L^2(\\Sigma _)$ ) $\\begin{split}&\\Vert {\\dot{\\wp }}^{(j)}\\Vert _{L^1_}\\Vert \\partial _y{{\\bf T}}^{k-j+1}\\phi \\Vert _{L^\\infty _{}L^2_y}\\Vert \\partial _y {{\\bf T}}^k\\Vert _{L^\\infty _L^2_y},\\quad \\Vert {\\dot{\\wp }}^{(j+1)}\\Vert _{L^1_}\\Vert \\partial _y{{\\bf T}}^{k-j}\\phi \\Vert _{L^\\infty _{}L^2_y}\\Vert \\partial _y {{\\bf T}}^k\\Vert _{L^\\infty _L^2_y},\\\\& \\Vert {\\dot{\\wp }}^{(j)}\\Vert _{L^2_}\\Vert \\partial _y{{\\bf T}}^{k-j}\\phi \\Vert _{L^\\infty _{}L^2_y}\\Vert {{\\bf T}}^k\\Vert _{LE}.\\end{split}$ For the error terms 
of the form $\\mathcal {O}({\\dot{\\wp }}^{(j)})(\\partial _+\\frac{n-1}{2})\\partial _{{\\bf T}}^{k-j}\\phi $ , we use the equation for ${{\\bf T}}^{k-j}\\phi $ (again see (REF )) to replace them by terms of the forms that were already handled above, or have better spatial decay.", "Next, we use elliptic estimates to obtain energy and local energy estimates for arbitrary, size one, derivatives applied on $\\phi $ .", "For this, recall the decomposition of the operator $\\mathcal {P}$ as $\\begin{split}\\mathcal {P}=\\mathcal {P}_{\\mathrm {ell}}+\\mathcal {P}_,\\qquad {\\ \\ \\text{where} \\ \\ }\\qquad \\mathcal {P}_=\\mathcal {O}(1)\\partial {{\\bf T}}+\\mathcal {O}(\\langle ^{-1}\\rangle ){{\\bf T}}.\\end{split}$ Using the estimates for ${{\\bf T}}\\phi $ , we can use elliptic estimates to bound $\\sup _{\\sigma _1\\le \\sigma \\le \\sigma _2}\\Vert \\partial _\\Sigma \\phi \\Vert _{E(\\Sigma _\\sigma )}+\\Vert \\partial _\\Sigma \\phi \\Vert _{LE[\\sigma _1,\\sigma _2]}$ by the right-hand side of (REF ).", "Here for the $LE$ norm the spatial norms can be inserted in the elliptic bound by writing $\\begin{split}\\phi =\\chi _{\\le 1}\\phi +\\sum _{j\\ge 1}\\chi _j \\phi ,\\end{split}$ where $\\chi _{\\le 1}()$ is supported in the region $\\lbrace ||\\le 1\\rbrace $ , and $\\chi _j()$ in the region $\\lbrace ||\\simeq 2^j\\rbrace $ , and applying the elliptic estimate on each annulus separately.", "The estimates for $\\partial _\\Sigma ^k{{\\bf T}}^j\\phi $ are proved similarly.", "To upgrade the size one derivatives $\\partial _\\Sigma $ to vectorfield derivatives $X$ in the exterior, we argue as follows.", "Suppose we are commuting $X^k$ for some $k>0$ .", "In view of Lemma REF we can arrange to commute first the $T$ vectorfields, then the $\\Omega $ vectorfields, and last the ${\\tilde{r}}L$ vectorfields.", "The case where all vectorfields are $T$ was already discussed above.", "When there are $\\Omega $ vectorfields, but no ${\\tilde{r}}L$ vectorfields the argument is similar with the following points to keep in mind.", "First, since the equation in Lemma REF was calculated in terms of the conjugated variable ${\\tilde{\\varphi }}={\\tilde{r}}^{\\frac{n-1}{2}}\\varphi $ , we can directly carry out the energy and LED multiplier arguments in this setting.", "Indeed, with $\\chi $ denoting a cutoff supported in the hyperboloidal region of the foliation, for the energy estimate we multiply the equation by $\\chi TX^k{\\tilde{\\varphi }}$ while for the LED we use the two multipliers $\\chi \\beta (L-{\\underline{L}})X^l{\\tilde{\\varphi }}$ (for the analogue of (REF )) and $\\chi \\beta ^{\\prime } {\\tilde{\\varphi }}$ (for the analogue of (REF )).", "The errors which result from the derivatives falling on $\\chi $ during the integration by parts are then absorbed by the LED estimate for the size one derivatives.", "Except for the treatment of the error terms of the form $\\mathcal {O}({\\dot{\\wp }}^{(\\ell )})LX^m{\\tilde{\\varphi }}$ on the right-hand side of (REF ) (note that since we are not yet commuting ${\\tilde{r}}L$ the terms involving ${\\widetilde{\\mathcal {P}}}_1$ are not present), the remainder of the energy and LED estimate are similar to what has already been carried out, so we omit the details (see also below for the case $X={\\tilde{r}}L$ where more details are worked out).", "The difficulty with $\\mathcal {O}({\\dot{\\wp }}^{(\\ell )})LX^m{\\tilde{\\varphi }}$ errors is that when for the part of the multiplier which is of the form $\\mathcal 
{O}(1){\\underline{L}}X^k{\\tilde{\\varphi }}$ (with $k\\ne m$ ) we cannot simply use the $$ decay of ${\\dot{\\wp }}$ to bound this by the energy, as the unweighted ${\\underline{L}}$ derivatives are not bounded by the energy flux.", "For the term $\\mathcal {O}({\\dot{\\wp }}^{(\\ell )})LX^m{\\tilde{\\varphi }}$ , recall that in (REF )–() we want to estimate the corresponding contribution by $\\epsilon ^2\\sigma _1^{-2j}$ for ${{\\bf T}}^j\\phi $ , $j=1,2,3$ (in the more difficult case $3j+k\\le M-2$ ).", "The corresponding term we need to estimate in the multiplier identities is then the space-time integral of $\\begin{split}\\mathcal {O}({\\dot{\\wp }}^{(1+i)})(LX^m{{\\bf T}}^{j-i}{\\tilde{\\varphi }}) ({\\underline{L}}X^k {{\\bf T}}^j{\\tilde{\\varphi }}),\\quad 0\\le i\\le j.\\end{split}$ Considering the extreme cases $i=j$ and $i=0$ , and with the same notation as above this is bounded, using Lemma REF and the bootstrap assumptions (REF )–(), by (here the ${\\tilde{r}}^{n-1}$ part of the measure in the $L^2_y$ and $LE$ norms is already incorporated in ${\\tilde{\\varphi }}$ ) $\\begin{split}\\Vert {\\dot{\\wp }}^{(1+j)}\\Vert _{L^2_}\\Vert {\\tilde{r}}^{\\frac{1+\\alpha }{2}}LX^m{\\tilde{\\varphi }}\\Vert _{L^\\infty _L^2_y}\\Vert X^m{{\\bf T}}^j{\\tilde{\\varphi }}\\Vert _{LE}\\lesssim \\epsilon \\delta _\\wp \\sigma _1^{-\\frac{1-\\alpha }{2}} \\Vert X^m{{\\bf T}}^j{\\tilde{\\varphi }}\\Vert _{LE}^2\\lesssim \\epsilon ^3 \\sigma _1^{-2j-2},\\end{split}$ when $i=j$ , and $\\begin{split}\\Vert {\\dot{\\wp }}\\Vert _{L^2_}\\Vert {\\tilde{r}}^{\\frac{1+\\alpha }{2}}LX^m{{\\bf T}}^j{\\tilde{\\varphi }}\\Vert _{L^\\infty _L^2_y}\\Vert X^m{{\\bf T}}^j{\\tilde{\\varphi }}\\Vert _{LE}\\lesssim \\epsilon ^2\\sigma _1^{-2+\\kappa }\\sigma _1^{-j+\\frac{1+\\alpha }{2}}\\Vert X^m{{\\bf T}}^j{\\tilde{\\varphi }}\\Vert _{LE}\\lesssim \\epsilon ^3 \\sigma _1^{-2j-2},\\end{split}$ when $i=0$ .", "Finally, we consider the case where some of the $X$ vectorfields are ${\\tilde{r}}L$ .", "The only difference with what was already considered above is that now we have to deal with the contribution of ${\\widetilde{\\mathcal {P}}}_1$ in (REF ).", "For this, we prove the energy and LED estimate simultaneously, using the multiplier $\\chi \\beta L X^k{\\tilde{\\varphi }}$ , where $\\chi $ is as above.", "The error terms when derivatives fall on $\\chi $ can again be absorbed using the LED estimate for derivatives of size one.", "For simplicity of notation we consider first the case ${\\tilde{r}}L{\\tilde{\\varphi }}$ (that is, when $k=1$ and $X={\\tilde{r}}L$ ), and then the case $({\\tilde{r}}L)^2{\\tilde{\\varphi }}$ to demonstrate how to treat higher powers of ${\\tilde{r}}L$ inductively.", "One can of course replace ${\\tilde{\\varphi }}$ by $X^j{\\tilde{\\varphi }}$ where $X^j$ are a string of $\\Omega $ and $T$ vectorfields.", "In the case $k=1$ , a calculation using Lemma REF gives (here $X={\\tilde{r}}L$ , and we are using the notation of Lemma REF ) $\\begin{split}LX{\\tilde{\\varphi }}({\\widetilde{\\mathcal {P}}}X{\\tilde{\\varphi }}-{\\widetilde{\\mathcal {P}}}_1{\\tilde{\\varphi }})&=-\\frac{1}{2}{\\underline{L}}(LX{\\tilde{\\varphi }})^2+L({\\mathrm {Err}}_L)+\\Omega ({\\mathrm {Err}}_\\Omega )\\\\&\\quad -\\frac{1}{{\\tilde{r}}}\\sum (\\frac{1}{{\\tilde{r}}}\\Omega X{\\tilde{\\varphi }})^2-\\frac{(n-1)(n-3)}{4{\\tilde{r}}}(\\frac{{\\tilde{\\varphi }}}{{\\tilde{r}}})^2-{\\tilde{r}}(L^2{\\tilde{\\varphi }})^2-\\frac{1}{{\\tilde{r}}}(L\\Omega {\\tilde{\\varphi }})^2\\\\&\\quad +\\mathcal 
{O}({\\tilde{r}}^{-1})({\\tilde{r}}^{-1}\\Omega {\\tilde{\\varphi }})^2+\\mathcal {O}({\\tilde{r}}^{-1})({\\tilde{r}}^{-1}{\\tilde{\\varphi }})^2+\\mathcal {O}({\\dot{\\wp }}){\\mathrm {Err}}_\\wp ,\\end{split}$ where $\\begin{split}|{\\mathrm {Err}}_\\wp |+|{\\mathrm {Err}}_L|&\\lesssim (LX{\\tilde{\\varphi }})^2+(L{\\tilde{\\varphi }})^2+(L\\Omega {\\tilde{\\varphi }})^2+({\\tilde{r}}^{-1}\\Omega X{\\tilde{\\varphi }})^2\\\\&\\quad +({\\tilde{r}}^{-1}\\Omega {\\tilde{\\varphi }})^2+({\\tilde{r}}^{-1}{\\underline{L}}X{\\tilde{\\varphi }})^2+({\\tilde{r}}^{-1}{\\underline{L}}{\\tilde{\\varphi }})^2+({\\tilde{r}}^{-1}X{\\tilde{\\varphi }})^2+({\\tilde{r}}^{-1}{\\tilde{\\varphi }})^2.\\end{split}$ Multiplying (REF ) by $\\chi $ , integrating, and adding a multiple of the LED estimate for size one derivatives, we get control of (the remaining error terms in (REF ) have better $$ or ${\\tilde{r}}$ decay and can be handled more easily) $\\begin{split}\\int _{\\Sigma _\\tau }\\chi (LX{\\tilde{\\varphi }})^2\\mathrm {d}\\theta \\mathrm {d}r\\quad {\\ \\ \\text{and} \\ \\ }\\int _{\\sigma _1}^{\\sigma _2}\\int _{\\Sigma _\\tau }\\chi ({\\tilde{r}}(L^2{\\tilde{\\varphi }})^2+{\\tilde{r}}^{-1}(\\frac{\\Omega }{{\\tilde{r}}} X{\\tilde{\\varphi }})^2+{\\tilde{r}}^{-1}(\\frac{X{\\tilde{\\varphi }}}{{\\tilde{r}}})^2)\\mathrm {d}\\theta \\mathrm {d}r \\mathrm {d}\\tau .\\end{split}$ Note that since we already have control of the $LE$ norm of $\\phi $ , the bulk term ${\\tilde{r}}(L^2{\\tilde{\\varphi }})^2$ gives control of the term ${\\tilde{r}}^{-1-\\alpha }(LX{\\tilde{\\varphi }})^2$ , but we will need this stronger estimate to treat the higher powers of ${\\tilde{r}}L$ inductively.", "To control the remaining terms in the energy and local energy norms we argue as follows.", "First, for the energy norm, note that $\\begin{split}\\frac{1}{{\\tilde{r}}}\\Omega X{\\tilde{\\varphi }}= L\\Omega {\\tilde{\\varphi }}+ [\\Omega ,L]{\\tilde{\\varphi }}\\quad {\\ \\ \\text{and} \\ \\ }\\quad \\frac{1}{{\\tilde{r}}}T X {\\tilde{\\varphi }}= LT{\\tilde{\\varphi }}+ \\frac{T{\\tilde{r}}}{{\\tilde{r}}} L{\\tilde{\\varphi }}+ [T,L]{\\tilde{\\varphi }},\\end{split}$ and all of the terms on the right-hand sides are already controlled by the energies of ${\\tilde{\\varphi }}$ , $T{\\tilde{\\varphi }}$ , and $\\Omega {\\tilde{\\varphi }}$ .", "Similarly, for the local energy norm, $\\begin{split}{\\tilde{r}}^{-1-\\alpha }({\\underline{L}}X{\\tilde{\\varphi }})^2\\lesssim {\\tilde{r}}^{-1-\\alpha }({\\underline{L}}{\\tilde{r}})^2(L{\\tilde{\\varphi }})^2+{\\tilde{r}}^{1-\\alpha }({\\underline{L}}L{\\tilde{\\varphi }}).\\end{split}$ The first term is already controlled by the local energy norm of $\\phi $ , while for the second term we use the equation for ${\\tilde{\\varphi }}$ to replace ${\\underline{L}}L{\\tilde{\\varphi }}$ by terms which we have already estimated.", "Next, we consider the error terms when commuting $X^2$ , $X={\\tilde{r}}L$ .", "The term in the multiplier argument that needs a different treatment is ${\\widetilde{\\mathcal {P}}}_1{\\tilde{\\varphi }}LX^2{\\tilde{\\varphi }}$ , where we no longer want to use the sign of the coefficient $c_{0,2}$ in (REF ).", "These error terms can be estimated using the space-time control of ${\\tilde{r}}(L^2{\\tilde{\\varphi }})^2$ above.", "For instance, the term $L^2{\\tilde{\\varphi }}$ in ${\\widetilde{\\mathcal {P}}}_1$ contributes terms of the form $\\begin{split}{\\tilde{r}}^{2-j}L^2{\\tilde{\\varphi }}L^{3-j}{\\tilde{\\varphi }},\\qquad 0\\le j\\le 
2,\\end{split}$ all of which can be estimated in terms of $r(L^2{\\tilde{\\varphi }})^2$ and the $LE$ norm of ${\\tilde{\\varphi }}$ after a few integration by parts.", "The other terms in ${\\widetilde{\\mathcal {P}}}_1$ are treated similarly.", "We can now proceed inductively to prove energy and LED estimates for higher powers of ${\\tilde{r}}L$ ." ], [ "Proof of Proposition ", "We start by proving $\\tau ^{-2}$ decay for the energy at lower orders, and boundedness at higher orders.", "Lemma 8.10 Suppose the bootstrap assumptions (REF )–() hold.", "Then for $k\\le M-2$ , $\\begin{split}E_k[\\phi ](\\tau )\\lesssim \\epsilon ^2\\tau ^{-2},\\end{split}$ and $\\begin{split}\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}((L+\\frac{n-1}{2})X^k\\phi )^2 r\\mathrm {d}V\\lesssim \\epsilon \\tau ^{-1}.\\end{split}$ Moreover, for any $k\\le M$ , $E_k[\\phi ](\\tau )+\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}\\big [r^2((L+\\frac{n-1}{2})X^k\\phi )^2+r(r^{-1}\\Omega X^k\\phi )^2+r(r^{-1}X^k\\phi )^2\\big ]\\mathrm {d}V\\lesssim \\epsilon ^2.$ The estimate for $E_k$ in (REF ) follows directly from (REF ).", "The estimate for the second term on the left-hand side of (REF ) follows from Lemma REF .", "Here the error terms in estimate (REF ) are absorbed by adding a suitable multiple of the local energy bound in (REF ), and the contribution of ${\\tilde{f}}_k$ in (REF ) is treated in the same way as in the proof of (REF ) below.", "We turn to the details for the proof of (REF ).", "We introduce some auxiliary notation to avoid repeated long expressions in the proof: $\\begin{split}&{{E}}_k^p(\\tau ):=\\int _{\\Sigma _\\tau }\\chi _{\\le {\\tilde{R}}}|\\partial \\partial ^k\\phi |^2\\mathrm {d}V+\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}((L+\\frac{n-1}{2})X^k\\phi )^2 r^p\\mathrm {d}V,\\\\&{{B}}_k^p(\\tau ):=\\int _{\\Sigma _\\tau }\\chi _{\\le {\\tilde{R}}}|\\partial \\partial ^k\\phi |^2\\mathrm {d}V+\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}\\big [((L+\\frac{n-1}{2})X^k\\phi )^2 +(2-p)(r^{-1}\\Omega X^k\\phi )^2\\\\&\\phantom{{{B}}_k^p(\\tau ):=\\int _{\\Sigma _\\tau }\\chi _{\\le {\\tilde{R}}}|\\partial \\partial ^k\\phi |^2\\mathrm {d}V+\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}\\big [}\\quad +(2-p)(r^{-1}X^k\\phi )^2+r^{-p-\\alpha }(TX^k\\phi )^2\\big ]r^{p-1}\\mathrm {d}V.\\end{split}$ Adding a suitable multiple of the LED estimate (REF ) at one higher order (to compensate for the degeneracy in the LE norm) to (REF ) in Lemma REF with $\\psi =\\varphi $ , for any $p\\le 2$ and $\\sigma _1<\\sigma _2$ gives, $\\begin{split}\\sum _{j\\le k}{{E}}_j^p(\\sigma _2)+\\sum _{j\\le k}\\int _{\\sigma _1}^{\\sigma _2}{{B}}_j^p(\\tau )\\mathrm {d}\\tau \\lesssim {{E}}_{k+1}^p(\\sigma _1)+\\sum _{j\\le k}\\Big |\\int _{\\sigma _1}^{\\sigma _2}\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}{\\tilde{f}}_j L{\\tilde{\\varphi }}_j {\\tilde{r}}^p\\mathrm {d}\\theta \\mathrm {d}r\\mathrm {d}\\tau \\Big |.\\end{split}$ Applying this identity with $p=2$ , $k= M-1$ , $\\sigma _1=0$ , and $\\sigma _2=\\sigma $ arbitrary, and using the boundedness of ${{E}}_{j+1}^2$ for $j\\le M-1$ , we get $\\begin{split}\\sum _{j\\le {M-1}}{{E}}_j^2(\\sigma )+\\sum _{j\\le M-1}\\int _{0}^{\\sigma }{{B}}_j^2(\\tau )\\mathrm {d}\\tau \\lesssim \\epsilon ^2+\\sum _{j\\le M-1}\\Big |\\int _{0}^{\\sigma }\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}{\\tilde{f}}_j L{\\tilde{\\varphi }}_j {\\tilde{r}}^2\\mathrm {d}\\theta \\mathrm {d}r\\mathrm {d}\\tau \\Big |.\\end{split}$ We claim that the 
contribution of the last term is bounded or can be absorbed on the left.", "Here and in what follows we carry out the details for the proof of the calculations for the two representative terms for ${\\tilde{f}}_k$ corresponding to the contributions of the source term $\\mathcal {F}_0$ and cubic term $\\nabla ^\\mu \\varphi \\nabla ^\\nu \\varphi \\nabla _{\\mu \\nu }\\varphi $ .", "See (REF ) and Lemmas REF and REF .", "For the contribution of $\\mathcal {F}_0$ we simply use estimate (), and absorb the $LE$ norm on the left.", "For the contribution of the cubic term, in view of Lemma REF and the bootstrap assumptions (), (), (), (), these satisfy the same types of estimates as the error terms on the right-hand side of (REF ), with extra additional smallness, so their contribution can be bounded in the same way as in the proof of Lemma REF .", "Going back to (REF ), we conclude that the left-hand side of this estimate is bounded by $\\epsilon ^2$ , and therefore, there is an increasing sequence of dyadic ${\\tilde{\\tau }}_m$ such that ${{B}}_j^2({\\tilde{\\tau }}_n)\\lesssim {\\tilde{\\tau }}_n^{-1}\\epsilon ^2$ for $j\\le M-1$ .", "Since ${{E}}_j^1\\lesssim {{B}}_j^2$ , another application of (REF ), but on $[{\\tilde{\\tau }}_{m-1},{\\tilde{\\tau }}_m]$ and with $p=1$ and $k=M-2$ gives $\\begin{split}\\sum _{j\\le M-2}{{E}}_j^1({\\tilde{\\tau }}_m)+\\sum _{j\\le M-2}\\int _{{\\tilde{\\tau }}_{m-1}}^{{\\tilde{\\tau }}_m}{{B}}_j^1(\\tau )\\mathrm {d}\\tau \\lesssim \\epsilon {\\tilde{\\tau }}_m^{-1}+\\sum _{j\\le M-2}\\Big |\\int _{{\\tilde{\\tau }}_{m-1}}^{{\\tilde{\\tau }}_m}\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}{\\tilde{f}}_j L{\\tilde{\\varphi }}_j {\\tilde{r}}\\mathrm {d}\\theta \\mathrm {d}r\\mathrm {d}\\tau \\Big |.\\end{split}$ Arguing as above, the contribution of the last term on the right can be absorbed or bounded by $\\epsilon ^2{\\tilde{\\tau }}_m^{-1}$ , and we can find a possibly different increasing dyadic sequence $\\tau _m\\simeq {\\tilde{\\tau }}_m$ such that ${{B}}_j^1(\\tau _m)\\lesssim \\epsilon ^2\\tau _m^{-2}$ for $j\\le M-2$ .", "Since $E_j\\lesssim {{B}}_j^1$ , estimate (REF ) follows from another application of the energy estimate (REF ).", "Finally for (REF ), note that by (REF ) we already have this estimate for $\\tau =\\tau _m$ .", "The estimate for all $\\tau $ now follows from another application of (REF ) with $p=1$ and with arbitrary $\\sigma _2\\in (\\tau _m,\\tau _{m+1})$ and $\\sigma _1=\\tau _m$ .", "To improve the pointwise decay assumptions (), (), (), () we need better decay for the energies of ${{\\bf T}}\\phi $ and ${{\\bf T}}^2\\phi $ .", "This is the content of the next lemma.", "Lemma 8.11 Suppose the bootstrap assumptions (REF )–() hold.", "Then for any $k\\le M-5$ , $\\begin{split}E_k[{{\\bf T}}\\phi ](\\tau )\\lesssim \\epsilon ^2\\tau ^{-4},\\end{split}$ and for any $k\\le M-8$ , (with constant that is independent of (REF )) $\\begin{split}E_k[{{\\bf T}}^2\\phi ](\\tau )\\lesssim \\epsilon ^2\\tau ^{-6}.\\end{split}$ The main observation is that in view of Lemma REF , in particular using equation (REF ), we can estimate (note that in view of (REF ) the difference between ${{\\bf T}}$ and $T$ comes with factors of ${\\dot{\\wp }}$ which give extra decay) $\\begin{split}\\sum _{j\\le k}{{E}}^2_j[{{\\bf T}}\\phi ](\\tau )\\lesssim \\epsilon ^2\\tau ^{-2}+\\sum _{j\\le k+1}E_j[\\phi ](\\tau )\\lesssim \\epsilon ^2\\tau ^{-2}.\\end{split}$ Here for the first inequality we have used the bootstrap assumptions (), (), (), () as well 
as (REF ) to estimate the left-hand side of (REF ), and for the second in equality we have used Lemma REF .", "We can now repeat the proof of Lemma REF , starting by applying (REF ) applied to $T\\phi $ on an increasing dyadic sequence $\\tau _m$ .", "By the observations we just made, the right-hand side is now bounded by $\\tau _m^{-2}$ , so repeating the proof of Lemma REF we obtain (REF ).", "Returning to (REF ) and repeating this argument we obtain (REF ).", "Lemma REF and elliptic estimates contained in the next lemma allow us to obtain decay of higher derivative norms of $\\phi $ for arbitrary derivatives.", "To state the lemma we recall from Remark REF , part (1), that in the global coordinates $(,,)$ , $\\mathcal {P}$ admits the decomposition $\\begin{split}\\mathcal {P}=\\mathcal {P}_+\\mathcal {P}_{\\mathrm {ell}},\\end{split}$ satisfying the properties stated there.", "Lemma 8.12 Suppose $\\mathcal {P}_{\\mathrm {ell}}=g$ on $\\Sigma _\\tau $ , and that the bootstrap assumptions (REF )–() hold, then (here the sum is over the $L^2(\\Sigma _\\tau )$ inner products with the truncated eigenfunctions $Z_\\mu ,Z_1,\\dots ,Z_n$ ) $\\begin{split}\\Vert \\langle r\\rangle ^{-2}\\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\partial ^2\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\Vert g\\Vert _{L^2(\\Sigma _\\tau )}+\\sum |\\langle ,Z_i\\rangle |+\\tau ^{-\\frac{5}{2}+\\kappa }E(),\\end{split}$ and for $s\\in (2,\\frac{5}{2})$ , $\\begin{split}\\Vert \\langle r\\rangle ^{-s}\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\Vert \\partial _\\Sigma g\\Vert _{L^2(\\Sigma _\\tau )}^{s-2}\\Vert g\\Vert _{L^2(\\Sigma _\\tau )}^{3-s}+\\sum |\\langle ,Z_i\\rangle |+\\tau ^{-\\frac{5}{2}+\\kappa }\\sum _{j\\le 1}E_j().\\end{split}$ Moreover, for small ${\\tilde{\\kappa }}$ , $\\begin{split}\\Vert \\partial _\\Sigma ^3\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\Vert \\partial _\\Sigma g\\Vert _{L^2(\\Sigma _\\tau )}+ \\Vert \\partial _\\Sigma g\\Vert _{L^2(\\Sigma _\\tau )}^{\\frac{1}{2}-{\\tilde{\\kappa }}} \\Vert g\\Vert _{L^2(\\Sigma _\\tau )}^{\\frac{1}{2}+{\\tilde{\\kappa }}}+\\sum |\\langle ,Z_i\\rangle |+\\tau ^{-\\frac{5}{2}+\\kappa }\\sum _{j\\le 1}E_j().\\end{split}$ Since all norms are on $\\Sigma _\\tau $ we drop $\\Sigma _\\tau $ from the notation.", "Recalling the decomposition $\\mathcal {P}_{\\mathrm {ell}}=\\Delta _{\\underline{\\mathcal {C}}}+V+\\mathcal {P}_{\\mathrm {ell}}^{{\\mathrm {pert}}}$ , let $_{\\mathrm {far}}$ be a solution to $\\begin{split}(\\Delta _{\\underline{\\mathcal {C}}}+V_{\\mathrm {far}})_{\\mathrm {far}}=g-\\mathcal {P}_{\\mathrm {ell}}^{\\mathrm {pert}},\\end{split}$ where $V_{\\mathrm {far}}$ is a potential that vanishes inside a large compact set, and is equal to $V$ outsides a larger compact set.", "We further decompose $g-\\mathcal {P}_{\\mathrm {ell}}^{\\mathrm {pert}}$ as $\\begin{split}g-\\mathcal {P}_{\\mathrm {ell}}^{\\mathrm {pert}}={\\tilde{g}}+o_{\\wp ,{{\\color {deepgreen} R}}}(1)(\\partial ^2_\\Sigma +\\langle \\rangle ^{-1}\\partial _\\Sigma +\\langle \\rangle ^{-2}),\\end{split}$ where ${\\tilde{g}}=g+\\mathcal {O}({\\dot{\\wp }})\\partial _\\Sigma +\\mathcal {O}({\\dot{\\wp }})\\langle \\rangle ^{-2}$ .", "We will prove estimates (REF ), (REF ), (REF ) with the last term on the right-hand sides removed, and with $g$ replaced by ${\\tilde{g}}$ .", "The desired estimate then follows from the triangle inequality and the bootstrap assumptions.", "Treating $V_{\\mathrm {far}}$ perturbatively, and writing $\\varepsilon $ for $o_{\\wp ,{{\\color {deepgreen} R}}}(1)$ , 
we have $\\begin{split}\\Vert \\langle \\rangle ^{-2}_{\\mathrm {far}}\\Vert _{L^2}+\\Vert \\partial _\\Sigma ^2_{\\mathrm {far}}\\Vert _{L^2}\\lesssim \\Vert {\\tilde{g}}\\Vert _{L^2}+\\varepsilon (\\partial ^2_\\Sigma +\\langle \\rangle ^{-1}\\partial _\\Sigma +\\langle \\rangle ^{-2}).\\end{split}$ Note that $_{\\mathrm {near}}:=-_{\\mathrm {far}}$ satisfies $\\begin{split}(\\Delta _{\\underline{\\mathcal {C}}}+V)_{\\mathrm {near}}=V_{\\mathrm {near}}_{\\mathrm {far}},\\end{split}$ where $V_{\\mathrm {near}}=V_{\\mathrm {far}}-V$ .", "We further decompose $_{\\mathrm {near}}$ as (where the sum is over the truncated eigenfunctions $Z_i$ of $\\Delta _{\\underline{\\mathcal {C}}}+V$ , which we assumed are normalized in $L^2(\\Sigma _\\tau )$ ) $\\begin{split}&_{\\mathrm {near}}=_{\\mathrm {near}}^\\perp +\\sum \\langle _{\\mathrm {near}},Z_i\\rangle Z_i,\\qquad \\langle _{\\mathrm {near}}^\\perp ,Z_i\\rangle =0,\\\\&(\\Delta _{\\underline{\\mathcal {C}}}+V)_{\\mathrm {near}}^\\perp =V_{\\mathrm {near}}_{\\mathrm {far}}+\\sum \\langle _{\\mathrm {near}},Z_i\\rangle (\\Delta _{\\underline{\\mathcal {C}}}+V)Z_i.\\end{split}$ Since $_{\\mathrm {near}}^\\perp $ is transversal to the eigenfunctions of $\\Delta _{\\underline{\\mathcal {C}}}+V$ , $\\begin{split}\\Vert \\langle \\rangle ^{-1}_{\\mathrm {near}}^\\perp \\Vert _{L^2}&\\lesssim \\Vert \\partial _\\Sigma _{\\mathrm {near}}^\\perp \\Vert _{L^2}\\lesssim \\Vert \\langle \\rangle V_{\\mathrm {near}}_{\\mathrm {far}}\\Vert _{L^2}+\\sum |\\langle _{\\mathrm {near}},Z_i\\rangle |\\\\&\\lesssim \\Vert {\\tilde{g}}\\Vert _{L^2}+\\sum |\\langle ,Z_i\\rangle |+\\varepsilon (\\partial ^2_\\Sigma +\\langle \\rangle ^{-1}\\partial _\\Sigma +\\langle \\rangle ^{-2}),\\end{split}$ where we have used the fact that $V_{\\mathrm {near}}$ and $Z_i$ are compactly supported.", "But using the decomposition $_{\\mathrm {near}}=_{\\mathrm {near}}^\\perp +\\sum \\langle _{\\mathrm {near}},Z_i\\rangle Z_i$ we can replace $\\Vert \\langle \\rangle ^{-1}_{\\mathrm {near}}^\\perp \\Vert _{L^2}$ on the left-hand side of the estimate above by $\\Vert \\langle \\rangle ^{-1}_{\\mathrm {near}}\\Vert _{L^2}$ .", "Using equation (REF ) again, $\\begin{split}\\Vert \\partial ^2_\\Sigma _{\\mathrm {near}}\\Vert _{L^2}&\\lesssim \\Vert \\langle \\rangle ^{-2}_{\\mathrm {near}}\\Vert _{L^2}+\\Vert \\langle \\rangle ^{-2}_{\\mathrm {far}}\\Vert _{L^2}\\\\&\\lesssim \\Vert {\\tilde{g}}\\Vert _{L^2}+\\sum |\\langle ,Z_i\\rangle |+\\varepsilon (\\partial ^2_\\Sigma +\\langle \\rangle ^{-1}\\partial _\\Sigma +\\langle \\rangle ^{-2}).\\end{split}$ Estimate (REF ) follows by combining the last two estimates with (REF ) and observing that $|\\langle \\rangle ^{-2}|\\lesssim |\\langle \\rangle ^{-2}_{\\mathrm {near}}|+|\\langle \\rangle ^{-2}_{\\mathrm {far}}|$ .", "To prove (REF ) and (REF ) we use the global coordinates $(,,)$ , with $(,)$ coordinates on $\\Sigma _\\tau $ , to define the operator $\\mathcal {P}_{\\mathrm {euc}}$ by smoothly modifying the coefficients of $\\mathcal {P}_{\\mathrm {ell}}$ such that $\\begin{split}\\mathcal {P}_{\\mathrm {euc}}={\\left\\lbrace \\begin{array}{ll}\\Delta _{{\\mathrm {euc}}}\\quad &\\le R_{\\mathrm {euc}}\\\\ \\mathcal {P}_{\\mathrm {ell}}\\quad &\\ge R_{\\mathrm {euc}}+1\\end{array}\\right.", "}.\\end{split}$ Here $R_{\\mathrm {euc}}$ is a fixed large constant and $\\Delta _{{\\mathrm {euc}}}=\\partial _^2+\\frac{4}{}\\partial _+\\frac{1}{^2}{{\\Delta }}_{\\mathbb {S}^{4}}$ .", "Let $\\chi \\equiv \\chi ()$ and ${\\tilde{\\chi }}\\equiv 
()$ be cutoff functions supported in the large $$ region such that ${\\tilde{\\chi }}\\chi =\\chi $ , let $_{\\mathrm {euc}}$ be the solution to $\\begin{split}\\mathcal {P}_{\\mathrm {euc}}_{\\mathrm {euc}}= \\chi {\\tilde{g}},\\end{split}$ and let $_{\\mathrm {euc}}={\\tilde{\\chi }}_{\\mathrm {euc}}$ .", "The functions $_{\\mathrm {euc}}$ and $_{\\mathrm {euc}}$ are defined in terms of the coordinates $(,)$ , but since $_{\\mathrm {euc}}$ is supported in the large $$ region, we can view it as a function on $\\Sigma _\\tau $ as well.", "By the Euclidean theory and treating the difference between $\\Delta _{\\mathrm {euc}}$ and $\\mathcal {P}_{\\mathrm {euc}}$ in $\\lbrace \\ge R_{\\mathrm {euc}}\\rbrace $ perturbativly (here fractional derivatives are defined on $\\mathbb {R}^5$ using the coordinates $(,)$ , $\\partial _\\Sigma $ denotes the coordinate derivatives $\\partial _$ and $^{-1}\\partial _$ , and the volume form is also as in $\\mathbb {R}^5$ , which is comparable with the geometric volume form on $\\Sigma _\\tau $ for large $$ ), $\\begin{split}\\sum _{j=0}^2\\Vert \\langle \\rangle ^{j-s}\\partial _\\Sigma ^j_{\\mathrm {euc}}\\Vert _{L^2}+\\Vert (-\\Delta _{{\\mathrm {euc}}})^{\\frac{s}{2}}_{\\mathrm {euc}}\\Vert _{L^2}\\lesssim \\Vert {\\tilde{\\chi }}\\partial _\\Sigma {\\tilde{g}}\\Vert _{L^2}^{s-2}\\Vert {\\tilde{\\chi }}{\\tilde{g}}\\Vert _{L^2}^{3-s}.\\end{split}$ Now $_{\\mathrm {cat}}:=-_{\\mathrm {euc}}$ satisfies $\\begin{split}\\mathcal {P}_{\\mathrm {ell}}_{\\mathrm {cat}}=(1-\\chi ){\\tilde{g}}-[\\mathcal {P}_{\\mathrm {ell}},{\\tilde{\\chi }}]_{\\mathrm {euc}}-{\\tilde{\\chi }}(\\mathcal {P}_{\\mathrm {ell}}-\\mathcal {P}_{\\mathrm {euc}})_{\\mathrm {euc}}.\\end{split}$ We again decompose $_{\\mathrm {cat}}$ as $_{\\mathrm {cat}}=_{\\mathrm {cat}}^\\perp +\\langle _{\\mathrm {cat}},Z_i\\rangle Z_i$ .", "In view of the compact support of the right-hand side of (REF ) (note that the terms involving $_{\\mathrm {euc}}$ are compactly supported in the large $$ region, so they can be viewed as functions on $\\Sigma _\\tau $ ), and by (REF ) and (REF ), and arguing as we did for $_{\\mathrm {near}}$ above, $\\begin{split}\\Vert \\langle \\rangle ^{-1}_{\\mathrm {cat}}\\Vert _{L^2}\\lesssim \\Vert \\partial _\\Sigma _{\\mathrm {cat}}\\Vert _{L^2}\\lesssim \\Vert \\partial _\\Sigma {\\tilde{g}}\\Vert _{L^2}^{s-2}\\Vert {\\tilde{g}}\\Vert _{L^2}^{3-s}+\\sum |\\langle ,Z_i\\rangle |,\\end{split}$ and (REF ) follows by observing that $|\\langle \\rangle ^{-s}|\\lesssim |\\langle \\rangle ^{-s}_{\\mathrm {euc}}|+|\\langle \\rangle ^{-1}_{\\mathrm {cat}}|$ .", "Note that the same argument in fact gives (REF ) with $\\Vert \\langle \\rangle ^{1-s}\\partial _\\Sigma \\Vert _{L^2}$ added on the left-hand side.", "Therefore, to prove (REF ), in view of the equation $\\Delta _{\\underline{\\mathcal {C}}}=-V+{\\tilde{g}}+o_{\\wp ,{{\\color {deepgreen} R}}}(1)(\\partial ^2_\\Sigma +\\langle \\rangle ^{-1}\\partial _\\Sigma +\\langle \\rangle ^{-2})$ , we can use the already established estimates and elliptic estimates for $\\Delta _{\\underline{\\mathcal {C}}}$ , to get $\\begin{split}\\Vert \\partial _\\Sigma ^3\\Vert _{L^2}&\\lesssim \\Vert \\partial _\\Sigma {\\tilde{g}}\\Vert _{L^2}+\\Vert \\partial _\\Sigma ( V)\\Vert _{L^2}\\lesssim \\Vert \\partial _\\Sigma {\\tilde{g}}\\Vert _{L^2}+\\Vert \\langle \\rangle ^{-\\frac{5}{2}+{\\tilde{\\kappa }}}\\Vert _{L^2}+\\Vert \\langle \\rangle ^{1-\\frac{5}{2}+{\\tilde{\\kappa }}}\\partial _\\Sigma \\Vert _{L^2} \\\\&\\lesssim \\Vert 
\\partial _\\Sigma {\\tilde{g}}\\Vert _{L^2}+\\Vert \\partial _\\Sigma {\\tilde{g}}\\Vert _{L^2}^{\\frac{1}{2}-\\kappa }\\Vert {\\tilde{g}}\\Vert _{L^2}^{\\frac{1}{2}+\\kappa }+\\sum |\\langle ,Z_i\\rangle |.\\end{split}$ The $L^2(\\Sigma _\\tau )$ decay of arbitrary higher derivatives is now a corollary of the previous two lemmas.", "Corollary 8.13 Suppose the bootstrap assumptions (REF )–() hold.", "Then for $k\\le M-3$ , $\\begin{split}\\Vert \\partial ^2_\\Sigma (\\chi _{\\le {\\tilde{R}}}\\partial ^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\partial ^2_\\Sigma (\\chi _{\\ge {\\tilde{R}}}X^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\epsilon \\tau ^{-2},\\end{split}$ and for $k\\le M-4$ , $\\begin{split}\\Vert \\partial ^2_\\Sigma (\\chi _{\\le {\\tilde{R}}}{{\\bf T}}\\partial ^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\partial ^2_\\Sigma (\\chi _{\\ge {\\tilde{R}}}{{\\bf T}}X^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\epsilon \\tau ^{-3}.\\end{split}$ For $k\\le M-4$ and $s\\ge \\frac{5}{2}-\\kappa $ , $\\begin{split}&\\Vert \\partial ^3_\\Sigma (\\chi _{\\le {\\tilde{R}}}\\partial ^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\partial ^3_\\Sigma (\\chi _{\\ge {\\tilde{R}}}X^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\\\&+\\Vert \\chi _{\\le {\\tilde{R}}}\\partial ^k\\phi \\Vert _{L^2(\\Sigma _\\tau )}+\\Vert \\langle r\\rangle ^{-s}(\\chi _{\\ge {\\tilde{R}}}X^k\\phi )\\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\epsilon \\tau ^{-\\frac{5}{2}+\\kappa }.\\end{split}$ The implicit constants in (REF ), (REF ), (REF ) are independent of $C_k$ in () and (REF ).", "We start with the decomposition $\\begin{split}\\mathcal {P}_{\\mathrm {ell}}\\phi =\\mathcal {P}\\phi +\\mathcal {O}(1)\\partial {{\\bf T}}\\phi + \\mathcal {O}(r^{-1}){{\\bf T}}\\phi .\\end{split}$ It follows from Lemmas REF and REF , as well as (), (), (), (), (REF ), that $\\begin{split}\\Vert \\partial ^2_\\Sigma \\phi \\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\epsilon \\tau ^{-2},\\end{split}$ which, using Lemma REF again, implies (REF ) for $k=0$ .", "Note that here the contribution of $|\\langle \\phi ,Z_i\\rangle |$ is bounded in terms of $\\Omega _i(\\phi )$ and applying Lemma REF .", "The case of higher $k$ is derived similarly using the $k$ times commuted equations.", "For (REF ), arguing as above we write $\\begin{split}\\mathcal {P}_{\\mathrm {ell}}{{\\bf T}}\\phi =[\\mathcal {P}_{\\mathrm {ell}},{{\\bf T}}]\\phi +{{\\bf T}}\\mathcal {P}\\phi +\\mathcal {O}(1)\\partial {{\\bf T}}^2\\phi + \\mathcal {O}(r^{-1}){{\\bf T}}^2\\phi +\\mathcal {O}({\\dot{\\wp }})\\partial {{\\bf T}}\\phi +\\mathcal {O}({\\dot{\\wp }}r^{-1}){{\\bf T}}\\phi ,\\end{split}$ to get $\\begin{split}\\Vert \\partial ^2 T\\phi \\Vert _{L^2(\\Sigma _\\tau )}\\lesssim \\tau ^{-3}.\\end{split}$ Here we have again bounded $\\langle {{\\bf T}}\\phi ,Z_i\\rangle $ in terms of $\\Omega _i({{\\bf T}}\\phi )$ , which is bounded by $o_{\\wp ,{R_1}}(1)\\epsilon ^{-3}$ by the same arguments as in Lemmas REF and REF .", "Note that the factors $o_{\\wp ,{R_1}}(1)$ here and $\\delta _\\wp $ in () make the constants in this estimate independent of the bootstrap constants in (REF ).", "By the same reasoning, and using equation (REF ) and estimates (REF ) and (REF ) we get (REF ) with a constant that is independent of ().", "The estimate for higher $k$ is proved similarly.", "Corollary REF and the following standard Gagliardo-Nirenberg inequality (which we state without proof) allow us to close the bootstrap assumptions () and ().", "Lemma 8.14 For 
any function $\\in H^3(\\Sigma _\\tau )$ , $\\begin{split}\\Vert \\Vert _{L^\\infty (\\Sigma _\\tau )}\\lesssim \\Vert \\partial _\\Sigma ^2\\Vert _{L^2(\\Sigma _\\tau )}^{\\frac{1}{2}}\\Vert \\partial _\\Sigma ^3\\Vert _{L^2(\\Sigma _\\tau )}^{\\frac{1}{2}}.\\end{split}$ We now have all the ingredients to prove Proposition REF .", "Since the implicit constants in Corollary REF are independent of those in () and (REF ), estimates () and () follow if $C$ is chosen sufficiently large.", "Estimates (REF ) and () then follow from () and () and Lemma REF .", "Estimates () and () were also already proved in the proof of Lemmas REF and REF .", "To prove (), first note that for any function $$ and for any $r_1> {\\tilde{R}}\\gg 1$ , by the fundamental theorem of calculus, $\\begin{split}\\int _{\\Sigma _\\tau \\cap \\lbrace r=r_1\\rbrace }(r^{\\frac{3}{2}})^2\\mathrm {d}\\theta &\\lesssim \\int _{\\Sigma _\\tau \\cap \\lbrace r={\\tilde{R}}\\rbrace }(r^{\\frac{3}{2}})^2\\mathrm {d}\\theta +\\int _{\\Sigma _\\tau \\cap \\lbrace {\\tilde{R}}\\le r\\le r_1\\rbrace }|r^{\\frac{3}{2}}||\\partial _r(r^{\\frac{3}{2}})|\\mathrm {d}\\theta \\mathrm {d}r\\\\&\\lesssim E[](\\tau ).\\end{split}$ Here to pass to the second line we have used the trace inequality for the integral on $\\Sigma _\\tau \\cap \\lbrace r=r_0\\rbrace $ , and (REF ) to express $\\partial _r$ in terms of $L,T,\\Omega $ for the integral on $\\Sigma _\\tau \\cap \\lbrace r_0\\le r\\le r_1\\rbrace $ .", "Applying the Sobolev inequality on the (non-geometric) sphere $\\Sigma _\\tau \\cap \\lbrace r=r_1\\rbrace $ to $\\phi $ , and using (REF ) to express angular derivatives in terms of $L,\\Omega ,T$ , for $r\\ge {\\tilde{R}}$ we get $\\begin{split}|r^{\\frac{3}{2}}\\phi (\\tau )|^2\\lesssim \\sum _{j\\le 3}E_j[\\phi ].\\end{split}$ Estimate () for $k=0$ now follows from (REF ), and the estimate for higher $k$ is proved similarly.", "For () we start with $\\begin{split}\\int _{\\Sigma _\\tau \\cap \\lbrace r=r_1\\rbrace }({\\tilde{r}}^{2})^2\\mathrm {d}\\theta &\\lesssim \\int _{\\Sigma _\\tau \\cap \\lbrace r={\\tilde{R}}\\rbrace }({\\tilde{r}}^{2})^2\\mathrm {d}\\theta +\\int _{\\Sigma _\\tau \\cap \\lbrace {\\tilde{R}}\\le r\\le r_1\\rbrace }|{\\tilde{r}}^{2}||\\partial _r({\\tilde{r}}^{2})|\\mathrm {d}\\theta \\mathrm {d}r\\\\&\\lesssim E[](\\tau )+\\Big (\\int _{\\Sigma _\\tau }\\chi _{\\ge {\\tilde{R}}}((L+\\frac{n-1}{2{\\tilde{r}}}))^2 r^2\\mathrm {d}V\\Big )^{\\frac{1}{2}}(E[](\\tau ))^{\\frac{1}{2}},\\end{split}$ where we have again used the trace inequality and (REF ) to pass to the last line.", "Estimate () now follows by the Sobolev estimate on the sphere $\\Sigma _\\tau \\cap \\lbrace r=r_1\\rbrace $ as above, as well as (REF ) and  (REF ).", "Jonas Lührmann Department of Mathematics, Texas A&M University Blocker 620B, College Station, TX 77843-3368, U.S.A. Sung-Jin Oh Department of Mathematics, UC Berkeley Evans Hall 970, Berkeley, CA 94720-3840, U.S.A. Sohrab Shahshahani Department of Mathematics, University of Massachusetts, Amherst 710 N. Pleasant Street, Amherst, MA 01003-9305, U.S.A." ] ]
[ [ "The Critical and Cocritical Degrees of a Totally Acyclic Complex over a\n Complete Intersection" ], [ "Abstract It is widely known that the minimal free resolution of a module over a complete intersection ring has nice patterns eventually arising in its Betti sequence.", "In 1997, Avramov, Gasharov, and Peeva defined the notion of critical degree for finitely generated modules, proving that this degree is finite whenever the module has finite CI-dimension.", "This paper extends the notion of critical degree via complete resolutions, thus defining the critical and cocritical degrees of an object in the category of totally acyclic complexes over a complete intersection ring of the form $R=Q/(f_1,\\dots,f_c)$.", "In particular, we provide the appropriate dual analogue to critical degree which enables us to introduce a new measure for complexes, called the critical diameter." ], [ "Introduction", "Interest in the growth of Betti numbers of finitely generated modules over particular classes of commutative rings has been widespread and longstanding.", "Free resolutions over regular, local rings are finite ($\\!\\!$[2]), yet when we take such a ring and quotient out by a regular sequence we often find infinite free resolutions.", "Let $(Q,{\\mathfrak {m}},{\\mathbb {k}})$ be a commutative noetherian, local ring with $\\dim Q=\\dim _{{\\mathbb {k}}}{\\mathfrak {m}}/{\\mathfrak {m}}^2$ and take a complete intersection of the form $R=Q/(f_1,\\dots ,f_c)$ where $f_1,\\dots , f_c$ is a $Q$ -regular sequence in ${\\mathfrak {m}}$ .", "For this paper, our focus is on growth in the Betti sequence of a finitely generated $R$ -module $M$ .", "Minimal free resolutions of $M$ exhibit realizable patterns in their syzygy sequence.", "In 1954, Tate showed that $b_n^R({\\mathbb {k}})$ is eventually given by a polynomial ($\\!\\!$[23]) and, subsequently in 1974, Gulliksen proved that each $b_n^R(M)$ is a quasi-polynomial of period 2 with degree smaller than the codimension ($\\!\\!$[14]).", "Building off of Gullisken's work, Eisenbud demonstrated in 1980 that whenever $R$ is a hypersurface, the free resolution of $M$ is periodic of period 2 ($\\!\\!$[11]).", "Later, Avramov demonstrated that $b_n^R({\\mathbb {k}})$ has exponential growth unless $R$ is a complete intersection ($\\!\\!$[3]).", "Finally, in 1997, Avramov, Gasharov, and Peeva demonstrated that, although the beginning of a free resolution over a complete intersection is often unstable, patterns do emerge at infinity.", "In particular, they proved that $\\lbrace b_n^R(M)\\rbrace $ is eventually either strictly increasing or constant.", "Of special significance is their generalization of modules over a complete intersection to modules of finite CI-dimension, for which the same statement holds (see [5]).", "Both complexity and the Hilbert-Poincaré series help us better measure the asymptotic stability of infinite free resolutions, but an interesting perspective taken by the authors of [5] was to define the notion critical degree of an $R$ -module, which essentially represents a flag for when asymptotically stable patterns are guaranteed to arise.", "While the periodicity of a free resolution over a hypersurface is guaranteed to start right away ($\\!\\!$[11]), this does not necessarily occur when $\\operatorname{codim}\\nolimits (R,Q)\\ge 2$ .", "Stability in these resolutions is intimately connected with maximal Cohen-Macaulay (MCM) modules and Tate cohomology, as demonstrated by Buchweitz in 1986 within his article [9].", "We ask the question: 
when the syzygy sequence stabilizes to MCM modules, does this necessarily imply that the asymptotic patterns begin?", "Eisenbud demonstrated that for a hypersurface, the pattern of periodicity arises immediately.", "However, for $\\operatorname{cx}\\nolimits _RM\\ge 2$ , this does not necessarily happen; for example, one could consider a negative syzygy module in the complete resolution of an MCM module.", "Thus, there is a distinction between the “head” of a free resolution carved out by the complete resolution defined by Buchweitz and when asymptotic patterns are guaranteed to arise in the syzygy sequence.", "It is for this reason that this paper takes the view that there is useful data one could recover about $M$ in the case where $\\operatorname{cx}\\nolimits _RM\\ge 2$ if we consider critical degree with regard to the complete picture of the syzygy sequence.", "Thus, our goal is to first extend this notion to the triangulated category of totally acyclic $R$ -complexes (which is equivalent to the stable category of MCM modules) and, in doing so, it becomes necessary to present a dual analogue to the notion.", "In Section , we begin with collecting some preliminary data needed to proceed, and then we complete our primary goal of extending [5] in Section .", "Afterward, we look towards understanding the natural connection of these definitions to Tate cohomology in Section and present an appropriate analogue to [5].", "Finally, one pitfall of critical degree, as discussed in [5], is that for $n>0$ , $\\Omega ^{-n}{M}$ will always have a greater critical degree than $M$ despite these modules having the same complexity.", "It is our hope that by shifting perspective to the complete syzygy sequence, we might be able to recover some boundedness properties of (necessarily indecomposable) modules over a fixed ring and given complexity.", "We present a new invariant of totally acyclic $R$ -complexes (and $R$ -modules) called critical diameter and provide a simple example in Section ." 
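, "As a concrete illustration of the growth phenomena just described (the example plays no role in what follows), one may take $R={\mathbb {k}}[x,y]/(x^2,y^2)$ , a local complete intersection of codimension 2 and embedding dimension 2.", "Tate's construction ($\!\!$[23]) yields the Poincaré series $\sum _{n\ge 0}b_n^R({\mathbb {k}})t^n=\frac{(1+t)^2}{(1-t^2)^2}=\frac{1}{(1-t)^2}$ , so that $b_n^R({\mathbb {k}})=n+1$ : the Betti numbers of ${\mathbb {k}}$ are given by a polynomial of degree $1=c-1$ and are strictly increasing, in agreement with the results of Gulliksen and of Avramov, Gasharov, and Peeva recalled above."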
], [ "Preliminaries", "Let $(Q,{\\mathfrak {m}},{\\mathbb {k}})$ be a regular, local ring and $R$ a complete intersection of the form $Q/ where $ (f1,...,fc)$ is a regular $ Q$-sequence in $ m$.", "Recall that an $ R$-complex$ C$ is \\emph {totally acyclic} if $ H(C)=0=H(C*)$ where $ C*=HomR(C,R)$ and each $ Ci$ is a finitely generated free\\footnote {The definition requires each C_i to be projective, but in our case they will additionally be free.}", "$ R$-module.", "We say that a complex $ (C, )$ is \\emph {minimal} if every homotopy equivalence $ e:CC$ is an isomorphism ($$\\cite {rel}), noting this is equivalent to the requirement that $ (C)mC$.", "On the other hand, $ C$ is \\emph {contractible}, or (homotopically) \\emph {trivial}, if the identity morphism $ 1C$ is null-homotopic and thus homotopy equivalent to the zero complex.", "There exists a decomposition of all complexes, $ C=C T$, where $C$ is a unique (up to isomorphism\\footnote {If two minimal complexes are homotopy equivalent, then they are isomorphic (c.f.", "Proposition 1.7.2 in \\cite {rel}).})", "minimal subcomplex of $ C$ and $ T$ is contractible.", "It further holds that if two complexes are homotopy equivalent, then their minimal subcomplexes must be isomorphic.", "Throughout this paper, we denote by $C$ the minimal subcomplex of $ C$ and it is understood that $ CC$.\\footnote { It is straightforward to check that the natural projection \\pi :\\!\\mathcal {C}\\rightarrow \\overline{\\mathcal {C}} is a homotopy equivalence.}", "Note further that a homotopy equivalence between $ C$ and $ D$ induces a homotopy equivalence between $ C*$ and $ D*$.", "Lastly, a chain map between two complexes always induces a map between any two respective homotopy equivalent complexes, thereby inducing a map on the minimal subcomplexes, as described by the lemma and proposition below.\\begin{lemma}Let \\mathcal {C}, \\mathcal {D}, \\mathcal {C^{\\prime }} and \\mathcal {D^{\\prime }} be Q-complexes for which there exist chain maps \\mathit {f}\\!", ":\\mathcal {C}\\rightarrow \\mathcal {D} and \\gamma \\!", ":\\mathcal {D}\\rightarrow \\mathcal {D^{\\prime }} and further suppose \\mathcal {C}\\simeq \\mathcal {C^{\\prime }}.", "Then there exists an induced chain map \\mathit {f^{\\prime }}\\!", ":\\mathcal {C}^{\\prime } \\rightarrow \\mathcal {D}^{\\prime } such that the square\\begin{center}{ \\mathcal {C} [d]_{\\mathit {f}} [r]^{\\varphi } & \\mathcal {C}^{\\prime } @{.>}[d]_{\\mathit {f^{\\prime }}} \\\\ \\mathcal {D} [r]^{\\gamma } & \\mathcal {D}^{\\prime } }\\end{center}commutes (up to homotopy) and this map is unique (up to homotopy).\\end{lemma}{\\begin{xmlelement*}{proof}It is straightforward to check that the choice of \\mathit {f^{\\prime }}=\\gamma \\mathit {f}\\varphi ^{-1} makes the square commute up to homotopy and that uniqueness of this map follows.\\end{xmlelement*}}\\begin{prop}Let \\mathcal {C} and \\mathcal {D} be R-complexes with chain map \\mathit {f}\\!", ":\\mathcal {C}\\rightarrow \\mathcal {D}.", "If \\overline{\\mathcal {C}} and \\overline{\\mathcal {D}} are the respective minimal subcomplexes, then there exists an induced chain map \\bar{f}\\!", ":\\overline{\\mathcal {C}}\\rightarrow \\overline{\\mathcal {D}}, which is unique (up to homotopy).\\end{prop}$" ], [ "The Category of Totally Acyclic Complexes", "We denote by $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ , the category of totally acyclic complexes, where the objects are totally acyclic $R$ -complexes and the morphisms are homotopy equivalence 
classes of chain maps.", "This is a full subcategory of the homotopy category, $\\mathcal {K}(R)$ , and so, $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ is triangulated with a suspension endofunctor ${\\Sigma {}}:\\!", "\\operatorname{{\\mathcal {K}}_{tac}}(R)\\rightarrow \\operatorname{{\\mathcal {K}}_{tac}}(R)$ taking a complex $(\\mathcal {C},{\\partial })$ to $({\\Sigma {\\mathcal {C}}},{\\Sigma {{\\partial }}})$ where $({\\Sigma {\\mathcal {C}}})_n:= C_{n-1}$ and $({\\partial }^{{\\Sigma {\\mathcal {C}}}})_n:=-{\\partial }_{n-1}$ .", "This functor maps the morphism $[f]:\\!", "[\\mathcal {C}]\\rightarrow [\\mathcal {D}]$ to the morphism ${\\Sigma {[f]}}:\\!", "{\\Sigma {\\mathcal {C}}}\\rightarrow {\\Sigma {\\mathcal {D}}}$ , where ${\\Sigma {[f]}}_n=[f]_{n-1}$ for each $n\\in {\\mathbb {Z}}$ .", "Throughout the paper, we discuss “endomorphisms” on $\\mathcal {C}$ of degree $-q$ with $q\\in {\\mathbb {Z}}^{+}$ , which are given by $q$ applications of the suspension endofunctor, denoted by ${\\Sigma ^{q}{(-)}}$ and it is straightforward to check that if two complexes are homotopy equivalent, say $\\mathcal {C}\\simeq \\mathcal {D}$ , then ${\\Sigma ^{q}{\\mathcal {C}}}\\simeq {\\Sigma ^{q}{\\mathcal {D}}}$ for any $q\\in {\\mathbb {Z}}^+$ .", "In Section of this paper, we additionally need the following lemma, which demonstrates a connection between endomorphisms on two isomorphic complexes in the category of $R$ -complexes, $\\mathcal {C}(R)\\!", "$ , which has as morphisms $R$ -complex chain maps (not homotopy classes).", "Lemma 1.1 Let $\\mathcal {C}$ and $\\mathcal {D}$ be isomorphic as $R$ -complexes in $\\mathcal {C}(R)\\!", "$ .", "Then there is a one-to-one correspondence between chain endomorphisms on $\\mathcal {C}$ and those on $\\mathcal {D}$ .", "Moreover, an endomorphism on $\\mathcal {C}$ is degree-wise surjective (split injective) if and only if the corresponding endomorphism on $\\mathcal {D}$ is surjective (split injective) at the same degrees.", "Take $f\\!", ":\\mathcal {C}\\rightarrow \\mathcal {D}$ to be a chain map with $f_n: C_n\\rightarrow D_n $ an $R$ -module isomorphism for each $n\\in {\\mathbb {Z}}$ .", "Furthermore, let $\\mu :\\mathcal {C}\\rightarrow {\\Sigma ^{q}{\\mathcal {C}}}$ and $\\nu :\\mathcal {D}\\rightarrow {\\Sigma ^{q}{\\mathcal {D}}}$ , noting that the diagram ${ \\mathcal {C} [d]_{\\mu } [r]^{f} & \\mathcal {D} @{->}[d]^{\\nu } \\\\ {\\Sigma ^{q}{\\mathcal {C}}} [r]^{{\\Sigma ^{q}{f}}} & {\\Sigma ^{q}{\\mathcal {D}}} }$ must commute since $\\nu f = (({\\Sigma ^{q}{f}})\\mu f^{-1})f=({\\Sigma ^{q}{f}})\\mu $ .", "Thus, it must hold that $\\operatorname{Hom}\\nolimits _{\\mathcal {C}(R)\\!", "}(\\mathcal {C},{\\Sigma ^{q}{\\mathcal {C}}})\\cong \\operatorname{Hom}\\nolimits _{\\mathcal {C}(R)\\!", "}(\\mathcal {D},{\\Sigma ^{q}{\\mathcal {D}}})$ .", "We show the second part of the lemma with respect to split injectivity, noting that the proof for surjectivity is analogous.", "First suppose $\\nu _n$ is split injective, thus implying the composition $\\nu _n f_n=({\\Sigma ^{q}{f}})_n \\mu _n$ is too.", "Therefore, $\\mu _n$ is split injective.", "Conversely, if we first assume $\\mu _n$ is split injective, then so is the composition $({\\Sigma ^{q}{f}})_n \\mu _n$ and so there exists a left inverse $\\gamma _n:({\\Sigma ^{q}{\\mathcal {D}}})_n\\rightarrow C_n$ such that $\\gamma _n\\circ ({\\Sigma ^{q}{f}})_n\\mu _n=\\operatorname{Id}\\nolimits ^{\\mathcal {C}}_n$ .", "This then implies $\\gamma _n\\circ (\\nu _nf_n)=\\operatorname{Id}\\nolimits 
^{\\mathcal {C}}_n \\\\f_n\\circ \\gamma _n\\circ (\\nu _nf_n)=f_n\\circ \\operatorname{Id}\\nolimits ^{\\mathcal {C}}_n \\\\f_n\\circ \\gamma _n\\circ (\\nu _nf_n)=\\operatorname{Id}\\nolimits ^{\\mathcal {D}}_n\\circ f_n \\\\(f_n\\circ \\gamma _n)\\circ \\nu _n=\\operatorname{Id}\\nolimits ^{\\mathcal {D}}_n$ since $f_n$ is right-cancellative.", "Hence, $f_n\\circ \\gamma _n$ is a left inverse for $\\nu _n$ , implying it is split injective, as desired.", "We will forgo the notation signifying equivalence class and it will be assumed that when we refer to $\\mathcal {C}$ or $\\mathit {f}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ , we mean the equivalence class or an appropriate representative of the equivalence class.", "The class of distinguished triangles in $\\operatorname{{\\mathcal {K}}_{tac}}(-)$ are determined by triangles of the form $\\mathcal {C} \\xrightarrow{} \\mathcal {D} \\xrightarrow{} {\\operatornamewithlimits{\\mathcal {M}}\\mathit {(f)}} \\xrightarrow{} {\\Sigma {\\mathcal {C}}}$ where ${\\operatornamewithlimits{\\mathcal {M}}\\mathit {(f)}}$ is the mapping cone of the chain map $f:\\!\\mathcal {C}\\rightarrow \\mathcal {D}$ .", "The reader may refer to Chapter 1 of [16] or [19] for the axioms of triangulated categories.", "Our interest in the triangulated structure of $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ is centered upon the fact that the functors $\\operatorname{Hom(\\mathcal {C},\\mathcal {-})}\\!", ":\\operatorname{{\\mathcal {K}}_{tac}}(R)\\rightarrow {\\mathbb {Z}}\\text{-}\\mathrm {mod}$ and $\\operatorname{Hom(\\mathcal {-},\\mathcal {C})}\\!", ":\\operatorname{{\\mathcal {K}}_{tac}}(R)^{op}\\rightarrow {\\mathbb {Z}}\\text{-}\\mathrm {mod}$ are both cohomological and, furthermore, for any abelian category $\\mathcal {A}b$ , one has $\\operatorname{Ext}\\nolimits _{\\text{$\\mathcal {A}b$}}^i(A,B)=\\operatorname{Hom}\\nolimits _{\\mathcal {D}(\\text{$\\mathcal {A}b$})}(A,{\\Sigma ^{i}{B}})$ ." 
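, "A minimal example to keep in mind (recorded here purely for illustration) is the hypersurface $R={\mathbb {k}}[x]/(x^2)$ : the doubly infinite complex $\mathcal {C}$ with $C_n=R$ for every $n\in {\mathbb {Z}}$ and with every differential given by multiplication by $x$ is a minimal object of $\operatorname{{\mathcal {K}}_{tac}}(R)$ , since $\ker (x)=\mathrm {im}(x)=(x)$ in each degree and the complex is self-dual.", "The degree $-2$ endomorphism which in each degree is the identity map of $R$ , viewed as a map $C_n\rightarrow ({\Sigma ^{2}{\mathcal {C}}})_n=C_{n-2}$ , is then an isomorphism of complexes $\mathcal {C}\cong {\Sigma ^{2}{\mathcal {C}}}$ (the sign convention gives ${\partial }^{{\Sigma ^{2}{\mathcal {C}}}}_n={\partial }_{n-2}$ , so the squares commute), reflecting the periodicity of complete resolutions over a hypersurface ($\!\!$[11])."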
], [ "Complete Resolutions, Tate Cohomology, and CI Operators", "There is a well defined family $\\mathfrak {t}=\\lbrace \\mathit {t_j}\\rbrace $ (with $j=1,\\dots ,c$ ) of degree $-2$ chain endomorphisms on any $R$ -complex called the CI operatorsAlso called the Eisenbud operators.", "originally made explicit in [11] for a free resolution.", "In [14], Gulliksen showed that $\\operatorname{Ext}\\nolimits _{R}^*(M,N)$ and $\\operatorname{Tor}\\nolimits _R^*(M,N)$ are graded modules over a polynomial ring with indeterminants of cohomological degree 2.", "Following Eisenbud's work, Avramov and Buchwietz provided a new variant of the CI operators, called cohomology operators ($\\!\\!$ [4]).", "Denote by $\\mathcal {S}=R[\\chi _1,\\dots ,\\chi _c]$ the ring of cohomology operators and for $R$ -modules $M$ , $N$ let $\\chi _j:\\operatorname{Ext}\\nolimits _R^i(M,N)\\rightarrow \\operatorname{Ext}\\nolimits _R^{i+2}(M,N)$ where $i\\in {\\mathbb {Z}}$ and $j=1,\\dots ,c$ (cf.", "[13]).", "Furthermore, we may consider the graded module where $N={\\mathbb {k}}$ and denote by $\\mathcal {S}_{{\\mathbb {k}}}={\\mathbb {k}}[\\chi _1,\\dots ,\\chi _c]$ .", "In this case, $\\operatorname{Ext}\\nolimits _{R}^*(M,N)$ is additionally a module over $\\mathcal {S}_{{\\mathbb {k}}}$ since $\\operatorname{Ext}\\nolimits _R^*(M,{\\mathbb {k}})$ will be annihilated by the maximal ideal ${\\mathfrak {m}}$ of $R$ .", "A complete resolution of $M$ is a diagram $\\mathcal {C} \\xrightarrow{} \\mathbb {P} \\xrightarrow{} M$ where $\\mathcal {C}$ is in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ , $\\mathbb {P}$ is a projective (free) resolution of $M$ , and $\\rho $ is a morphism of complexes such that $\\rho _n$ is bijective for all $n\\gg 0$ [6].", "Any object in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ can be realized as a complete resolution of an $R$ -module, $M=\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {C}}_n$ , and, conversely, there exists a (unique) minimal free resolution $\\mathbb {F}$ for every finitely-generated $R$ -module $M$ so we may take this resolution as the projective resolution of $M$ in the diagram.", "We may then construct a minimal totally acyclic complex $\\mathcal {C}$ such that the bijectivity condition between $\\mathcal {C}$ and $\\mathbb {F}$ holds ($\\!\\!$[6]), yielding a one-to-one correspondence between objects in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ and objects in $\\mathcal {R}\\text{-}\\mathrm {mod}\\!$ $\\!$ .", "It is important to note that within this construction, $\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {C}}_0=\\Omega ^{n}{M}$ such that $\\operatorname{depth}\\nolimits _R\\Omega ^{n-1}{M}=\\dim R-1$ .", "Finally, recall that the Betti sequence of $M$ is $\\lbrace b_n^R({M})\\rbrace $ , where each $b_n:=\\mathrm {rk}(C_n)$ or, equivalently, $b_n=\\dim _{{\\mathbb {k}}}\\operatorname{\\widehat{Ext}}\\nolimits _R^n(M,{\\mathbb {k}})$ for each $n\\in {\\mathbb {Z}}$ .", "Our interest is in the growth of this sequence, and thus the generators of the syzygy sequence $\\lbrace \\Omega ^nM\\rbrace _{n\\in {\\mathbb {Z}}}$ (syzygies and cosyzygies).", "It should further be noted that the cohomology operators defined in [4] act on $\\mathcal {C}$ just as they do on $\\mathbb {F}$ as degree 2 chain endomorphisms.", "We will consider these operators in the case where $N={\\mathbb {k}}$ so that $\\chi _j:=\\operatorname{Hom}\\nolimits _R(t_j, {\\mathbb {k}})$ for $j=1,\\dots ,c$ with $\\operatorname{codim}\\nolimits (R,Q)=c$ .", "It is indeed true that 
$\\operatorname{\\widehat{Ext}}\\nolimits _R^*(M,{\\mathbb {k}})$ is unambiguously a module over the ring of cohomology operators $\\mathcal {S}=R[\\chi _1,\\dots ,\\chi _c]$ as well as $\\mathcal {S}_{{\\mathbb {k}}}={\\mathbb {k}}[\\chi _1,\\dots ,\\chi _c]$ .", "In fact, $E_{+}=\\bigoplus _{i}\\operatorname{\\widehat{Ext}}\\nolimits _R^{i}(M,{\\mathbb {k}})$ for $i\\ge 0$ will be a noetherian module over $\\mathcal {S}_{{\\mathbb {k}}}$ while $E_{-}=\\bigoplus _{i}\\operatorname{\\widehat{Ext}}\\nolimits _R^{i}(M,{\\mathbb {k}})$ for $i<0$ will be an artinian module over $\\mathcal {S}_{{\\mathbb {k}}}$ ." ], [ "The Critical Degree of a Finitely Generated Module", "For ease of reference, we recall below the definition of critical degree for a finitely generated $Q$ -module, as it was originally introduced in [5].", "Definition 1.2 A $Q$ -module $M$ has critical degree of at most $s$ , denoted by $\\mathrm {crdeg}^{}_{{Q}}{M}\\le s$ , if its minimal resolution $\\mathbb {F}$ has a chain endomorphism $\\mu $ of degree $q<0$ such that $\\mu _{n-q}: F_{n-q} \\rightarrow F_n$ is surjective for all $n>s$ .", "If no such $s$ exists, then $\\mathrm {crdeg}^{}_{{Q}}{M}=\\infty $ .", "The authors prove in [5] that for any module of finite CI-dimension, the Betti sequence is non-decreasing past the critical degree, in addition to providing a cohomological characterization for the critical degree.", "However, they also give an example demonstrating that the critical degree cannot be bounded for all modules of complexity greater than 1, since taking a cosyzygy out to the right would produce a higher critical degree.", "Moreover, in [5] they show that strict growth does not necessarily signify the critical degree, thus meaning that the critical degree is where growth is guaranteed to occur but not necessarily where the growth begins.", "In [4], the authors give an effective bound on the critical degree of a finitely generated $R$ -module of complexity 2 dependent upon the Betti numbers and $g=\\operatorname{depth}\\nolimits R-\\operatorname{depth}\\nolimits _RM$ ." 
], [ "Cosocle, Coregular Sequences, and Codepth of a Module", "First note that for any $\\mathcal {S}$ -module $E$ , the socle of $E$ is $\\operatorname{Soc}\\nolimits (E)=\\lbrace u\\in E\\,|\\,u\\mathfrak {X}=0\\rbrace $ where $\\mathfrak {X}=(\\chi _1, \\dots , \\chi _c)$ .", "Dually, (c.f.", "[1]), the cosocle of $E$ is defined as $\\operatorname{Cosoc}\\nolimits (E)=\\left\\lbrace \\bar{u}\\in E/\\mathfrak {X}E \\:|\\: \\bar{u}\\ne 0 \\right\\rbrace .$ By [18], if $E$ is artinian, then $\\mathfrak {X}E=E$ if and only if there exists some $x\\in \\mathfrak {X}$ such that $xE=E$ .", "This in turn implies that whenever $\\operatorname{Cosoc}\\nolimits (E)=0$ , there exists some $x\\in \\mathfrak {X}$ such that the submodule generated by $x$ returns the module $E$ .", "Equivalently, this means there is a coregular element in $\\mathcal {S}$ ($\\!\\!$[18], cf.", "[15]).", "Recall that a coregular sequence in $E$ , or an $E$ -cosequence, is a sequence $\\mathbf {\\tilde{x}}=x_1,\\dots ,x_d$ such that $x_1 E=E$ , and $x_i (0:_E (x_1,\\dots ,x_{i-1}))= (0:_E (x_1,\\dots ,x_{i-1}))$ for each $i=2,\\dots ,d$ .", "Lastly, the codepth of $E$ , denoted $\\operatorname{codepth}\\nolimits _{\\mathcal {S}}E$ , is defined to be the maximal length of an $E$ -cosequence in $\\mathfrak {X}$ ($\\!\\!$[18], cf.", "[15] and [20]).", "It should be clear that $\\operatorname{codepth}\\nolimits _{\\mathcal {S}}E=0$ implies $\\operatorname{Cosoc}\\nolimits (E)\\ne 0$ , so that existence of some nonzero element $x\\notin \\mathfrak {X}E$ is guaranteed.", "We will use this fact in Section to accomplish our goal of providing a dual analogue for the cohomological characterization of critical degree." ], [ "The Critical Degree and Duality", "Our goal in this section is to extend the notion of critical degree from free to complete resolutions.", "We begin with a complex $\\mathcal {C}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ and the natural choice for such an extension." 
], [ "Main Definitions", "Given an $R$ -complex with $\\mathcal {C}= \\mathcal {\\overline{\\mathcal {C}}} \\oplus \\mathcal {T}$ , then if $\\mu \\!", ":\\mathcal {C}\\rightarrow {\\Sigma ^{q}{\\mathcal {C}}}$ is a morphism in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ , we write $\\bar{\\mu }\\!", ":\\overline{\\mathcal {C}}\\rightarrow {\\Sigma ^{q}{\\overline{\\mathcal {C}}}}$ for the induced endomorphism on $\\overline{\\mathcal {C}}$ , guaranteed by Lemma .", "Definition 2.1 An $R$ -complex $\\mathcal {C}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ has critical degree relative to $\\mu $ (or $\\mu $-critical degree) equal to $ \\mathrm {crdeg}^{\\mu }_{{R}}{C} = \\mathrm {inf}\\lbrace n \\mid \\bar{\\mu }_{i+q}: \\overline{C}_{i+q} \\twoheadrightarrow \\overline{C}_{i} \\: \\forall \\: i > n \\rbrace $ and the critical degree of $\\mathcal {C}$ is defined as the infimum over all $\\mu $ -critical degrees, $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}= \\mathrm {inf}\\lbrace \\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}} \\: | \\: \\mu : \\mathcal {C} \\rightarrow {\\Sigma ^{q_\\mu }{\\mathcal {C}}} \\rbrace , $ where $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=\\infty $ if all possible $\\mu $ -critical degrees are infinite.", "Remark Note that for any $R$ -complex $\\mathcal {C}$ , if we consider the $R$ -module $M=\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {C}}_0$ , then the above definition of critical degree for $\\mathcal {C}$ will indeed agree with Definition REF whenever $\\mathrm {crdeg}^{}_{{R}}{M}\\ge 0$ .", "If $\\mathrm {crdeg}^{}_{{R}}{M}<0$ , this is not necessarily the case and we later discuss a special case in which $\\mathrm {crdeg}^{}_{{R}}{M}=-1$ but $-\\infty \\le \\mathrm {crdeg}^{}_{{R}}{C}<-1$ .", "It should further be noted that, as $R$ is a complete intersection, $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}<\\infty $ for two reasons.", "First, by [5] and the remark above.", "This of course implies there exists an endomorphism $\\mu $ realizing this degree, which brings us to the second reason: this $\\mu $ must be a linear form of the CI operators, as presented in [11].", "We now define the dual analogue of critical degree and work towards making these definitions precise in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Definition 2.2 An $R$ -complex $\\mathcal {C}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ has cocritical degree relative to $\\mu $ (or $\\mu $-cocritical degree) equal to $ \\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}} = \\mathrm {sup}\\lbrace n \\mid \\bar{\\mu }_{i}: \\overline{C}_{i} \\hookrightarrow \\overline{C}_{i-q} \\text{ splits } \\: \\forall \\: i < n \\rbrace $ and the cocritical degree of $\\mathcal {C}$ is defined to be the supremum over all $\\mu $ -cocritical degrees, $ \\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}= \\mathrm {sup}\\lbrace \\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}} \\: | \\: \\mu : \\mathcal {C} \\rightarrow {\\Sigma ^{q_\\mu }{C}} \\rbrace , $ where $\\mathrm {crdeg}^{}_{{R}}{C}=-\\infty $ if all such relative cocritical degrees are negatively infinite.", "Proposition 2.3 Let $\\mathcal {C}$ and $\\mathcal {D}$ be homotopically equivalent $R$ -complexes.", "Then $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}$ .", "The proof of this statement is a straightforward application of Lemmas and REF Corollary 2.4 If $\\mathcal {C}\\simeq 0$ , then $\\mathrm 
{crdeg}^{}_{{R}}{\mathcal {C}}=-\infty $ and $\mathrm {cocrdeg}^{}_{{R}}{\mathcal {C}}=\infty $ .", "Proposition 2.5 Let $\mathcal {C}$ be in $\operatorname{{\mathcal {K}}_{tac}}(R)$ with a periodic minimal subcomplex $\overline{\mathcal {C}}$ .", "Then $\mathrm {crdeg}^{}_{{R}}{\mathcal {C}}=-\infty $ and $\mathrm {cocrdeg}^{}_{{R}}{\mathcal {C}}=\infty $ .", "Without loss of generality, assume $\mathcal {C}$ is minimal and periodic with $\mathrm {im}{\partial }^{\mathcal {C}}_n\cong \mathrm {im}{\partial }^{\mathcal {C}}_{n+q}$ .", "Then define a degree $q$ endomorphism $\rho $ by $\rho _n:=\operatorname{id}\nolimits ^{C}_n\!:C_n \rightarrow C_{n-q}\cong C_n$ for each $n\in {\mathbb {Z}}$ .", "Hence, $\rho _n$ is both surjective and split injective for all $n\in {\mathbb {Z}}$ , yielding $\mathrm {crdeg}^{\rho }_{{R}}{\mathcal {C}}=-\infty $ and $\mathrm {cocrdeg}^{\rho }_{{R}}{\mathcal {C}}=\infty $ .", "This in turn forces the critical and cocritical degrees of $\mathcal {C}$ to be $-\infty $ and $\infty $ , respectively.", "As a direct result of Proposition REF , note that if $R$ is a hypersurface, then $\mathrm {crdeg}^{}_{{R}}{\mathcal {C}}=-\infty $ and $\mathrm {cocrdeg}^{}_{{R}}{\mathcal {C}}=\infty $ for any $R$ -complex.", "This agrees with the bound provided in [5]." ], [ "Duality", "Set $M^*=\operatorname{Hom}\nolimits _R(M,R)$ and note that it is quite easy to see the natural connection between this module and cocritical degree.", "Proposition 2.6 Suppose $\mathrm {crdeg}^{}_{{R}}{M^*}=s^*$ , where $0\le s^* < \infty $ .", "Then $\mathrm {cocrdeg}^{}_{{R}}{\mathcal {C}}=-s^*-1$ with $\mathcal {C}$ the complete resolution of $M$ .", "This is a direct application of complete resolutions and the contravariance of the functor $\operatorname{Hom}\nolimits _R(-,R)$ .", "The negative one-degree shift is due to the relabeling of degrees in $\operatorname{Hom}\nolimits _R(\mathbb {F}^*,R)$ in the process of taking the complete resolution.", "Now, for any complex $\mathcal {C}$ in $\operatorname{{\mathcal {K}}_{tac}}(R)$ , write $\mathcal {C}^*=\operatorname{Hom}\nolimits _R(\mathcal {C},R)$ , which is also a totally acyclic $R$ -complex.", "Explicitly, if $\mathcal {C}=(C_n, {\partial }^{\mathcal {C}}_n)$ , then $\mathcal {C}^*=(C^*_n, {\partial }^{\mathcal {C}^*}_n)$ is the $R$ -complex with $C^*_n=\operatorname{Hom}\nolimits _R(C_{-n}, R)\cong C_{-n}, \text{ and} \\{\partial }^{\mathcal {C}^*}_n=\operatorname{Hom}\nolimits _R({\partial }^{\mathcal {C}}_{1-n}, R)\cong ({\partial }^{\mathcal {C}}_{1-n})^T.$ where $(-)^T$ represents the transpose.", "It should then be clear that any chain map on $\mathcal {C}$ does indeed induce a well-defined chain map on $\mathcal {C}^*$ .", "Lemma 2.7 Let $f\!: R^n\rightarrow R^m$ be a map between free $R$ -modules.", "Then $f$ is (split) surjective if and only if $f^T$ is split injective.", "This statement follows from the fact that a surjection onto a free module splits and that the additive functor $\operatorname{Hom}\nolimits _R(-,R)$ preserves split short exact sequences, so we may apply it to the appropriate (split) short exact sequences for each direction.", "The backwards direction follows as $(f^T)^T=f$ .", "Corollary 2.8 Let $g\!: R^n\rightarrow R^m$ be a map between free $R$ -modules.", "Then $g$ is split injective if and only if $g^T$ is (split) surjective.", "Apply the previous lemma with $g=f^T$ and $g^T=(f^T)^T=f$ .", "Proposition 2.9 Let $\mu \!:\mathcal {C}\rightarrow 
{\\Sigma ^{q}{\\mathcal {C}}}$ be a chain endomorphism realizing $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s$ and $\\nu \\!", ":\\mathcal {C}\\rightarrow {\\Sigma ^{r}{\\mathcal {C}}}$ a chain endomorphism realizing $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=t$ .", "The critical and cocritical degrees of $\\mathcal {C}^*$ are completely determined by that of $\\mathcal {C}$ , where $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}^*}=-\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}} \\text{ and}$ $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}^*}=q-\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}.$ First, it should be clear that there is a one-to-one correspondence between endomorphisms on $\\mathcal {C}$ and endomorphisms on $\\mathcal {C}^*$ , as $\\operatorname{Hom}\\nolimits _R(\\operatorname{Hom}\\nolimits _R(\\mathcal {C},R),R)\\cong \\mathcal {C}$ for any $R$ -complex $\\mathcal {C}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Hence, the induced endomorphisms $\\mu ^*$ and $\\nu ^*$ must realize the critical and cocritical degrees on $\\mathcal {C}^*$ .", "Apply Corollary REF to see that $\\nu ^*_{n+r}=(\\nu _{-n})^T$ is surjective whenever $\\nu _{-n}$ is split injective, which occurs for all $-n<t$ and thus for all $n>-t$ .", "Then apply Lemma REF to see that $\\mu ^*_n=(\\mu _{-n+q})^T$ is split injective whenever $\\mu _{-n+q}$ is surjective, which occurs for all $-n+q>s$ and thus for all $n<q-s$ .", "We now consider how the critical and cocritical degrees of an $R$ -complex $\\mathcal {C}$ change under the translation endofunctor, ${\\Sigma {}}\\!\\!", ":\\operatorname{{\\mathcal {K}}_{tac}}(R)\\rightarrow \\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Recall that ${\\Sigma ^{n}{\\mathcal {C}}}$ denotes the $R$ -complex with $R$ -modules $({\\Sigma ^{n}{\\mathcal {C}}})_m=C_{m-n}$ and differentials ${\\partial }^{{\\Sigma ^{n}{\\mathcal {C}}}}_m=(-1)^n{\\partial }^{\\mathcal {C}}_{m-n}$ .", "Proposition 2.10 If $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=t$ , then $\\mathrm {crdeg}^{}_{{R}}{{\\Sigma ^{n}{\\mathcal {C}}}}=s+n$ and $\\mathrm {cocrdeg}^{}_{{R}}{{\\Sigma ^{n}{\\mathcal {C}}}}=t+n$ .", "We begin with assuming $\\mathcal {C}$ is minimal, since the minimal subcomplex of any complex would coincide with a shift of itself under the translation functor.", "Note that there is a one-to-one correspondence between endomorphisms on $\\mathcal {C}$ and those on ${\\Sigma ^{n}{\\mathcal {C}}}$ .", "Now suppose $\\mathit {\\mu }\\!", ":\\mathcal {C}\\rightarrow {\\Sigma ^{q}{\\mathcal {C}}}$ is the endomorphism which realizes the critical degree on $\\mathcal {C}$ .", "Since $({\\Sigma ^{n}{\\mu }})_i=\\mu _{i-n}\\!", ": C_{i-n}\\rightarrow C_{i-n-q}$ note that $\\mu _{i+q}\\!", ":C_{i+q}\\rightarrow C_i$ surjective for all $i>s$ implies $({\\Sigma ^{n}{\\mu }})_{i+q}\\!", ":C_{i+q-n}\\rightarrow C_{i-n}$ will be surjective for all $i>n+s$ .", "Moreover, $s+n$ will be the least degree such that $({\\Sigma ^{n}{\\mu }}_{i+q})$ is surjective for all $i>s+n$ (since $s$ is the least degree such that $\\mu _{i+q}$ is surjective for all $i>s$ ).", "Hence, $\\mathrm {crdeg}^{}_{{Q}}{{\\Sigma ^{n}{\\mathcal {C}}}}=s+n$ .", "Likewise, if $\\mathit {\\nu }\\!", ":\\mathcal {C}\\rightarrow {\\Sigma ^{r}{\\mathcal {C}}}$ is the endomorphism which realizes the cocritical degree on $\\mathcal {C}$ , then ${\\Sigma ^{n}{\\nu }}\\!", ":{\\Sigma ^{n}{\\mathcal {C}}}\\rightarrow {\\Sigma ^{n+r}{\\mathcal {C}}}$ will be such that it realizes the cocritical degree on 
${\\Sigma ^{n}{\\mathcal {C}}}$ .", "Therefore, $({\\Sigma ^{n}{\\nu }})_i$ will be split injective for all $i<t+n$ and $\\mathrm {cocrdeg}^{}_{{R}}{{\\Sigma ^{n}{\\mathcal {C}}}}=t+n$ ." ], [ "Critical Degrees, Complexes, and Modules", "Starting with a MCM module $M$ , our definition REF agrees with REF as long as $\\mathrm {crdeg}^{}_{{R}}{M}\\ge 0$ .", "Specifically, if $\\mathcal {C}\\rightarrow \\mathbb {F}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{M}$ is the minimal complete resolution of $M$ with $\\mathrm {crdeg}^{}_{{R}}{M}\\ge 0$ and $\\mathrm {crdeg}^{}_{{R}}{M^*}\\ge 0$ , then $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=\\mathrm {crdeg}^{}_{{R}}{M}$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=-\\mathrm {crdeg}^{}_{{R}}{M^*}-1$ .", "On the other hand, if $M$ is not MCM, then ${\\left\\lbrace \\begin{array}{ll}\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s-g & \\text{ if } s\\ge 0\\\\\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=s^*-g^* & \\text{ if } s^*\\ge 0\\end{array}\\right.", "}$ where $\\mathrm {crdeg}^{}_{{R}}{M}=s$ and $g=\\dim R-\\operatorname{depth}\\nolimits _RM$ (with $s^*$ and $g^*$ analogously defined for $M^*$ ).", "Observe that, in these cases, $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}>0$ if $g<s$ , and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}<0$ if $g^*>s^*$ .", "Alternatively, whenever we consider an $R$ -module $M$ with $\\mathrm {crdeg}^{}_{{R}}{M}=-1$ it could be the case that $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}\\lneq -1$ , as demonstrated in the following example.", "Example 2.11 Let $M$ be the $R$ -module with complete resolution $\\mathcal {C}\\rightarrow \\mathbb {F}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{M}$ and further suppose that $0\\le \\mathrm {crdeg}^{}_{{R}}{M}=s<\\infty $ .", "For simplicity, assume also that $M$ is MCM.", "Now set $N=\\Omega ^{s+k}M$ for some fixed integer $k>1$ , so that $\\mathrm {crdeg}^{}_{{R}}{N}=-1$ .", "This is because $\\mu (F_{n+q})=F_n$ for some $\\mu \\!", ":\\mathbb {F}\\rightarrow {\\Sigma ^{q}{\\mathbb {F}}}$ and for all $n>s$ by assumption, but $\\mathbb {G}:=\\mathbb {F}^{>s+k}$ is the minimal free resolution of $N$ .", "Thus, $G_n=F_{n+s+k}$ so it should be clear that $\\mu (G_{n+q})=G_{n}$ for all $n>0$ since $n+s+k>s$ .", "However, note that $N^*=\\operatorname{Hom}\\nolimits _R(\\Omega ^{s+k}M,R)\\cong \\Omega ^{s+k}\\operatorname{Hom}\\nolimits _R(M,R)$ and, furthermore, we can complete the chain endomorphism on $\\mathbb {F}$ realizing the critical degree of $M$ to a chain map on $\\mathcal {C}$ ($\\!$[21]).", "Of course, there also exists a complete resolution of $N$ of the form $\\mathcal {C}\\rightarrow \\mathbb {G}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{N}$ , which is equivalent to $\\mathcal {C}\\rightarrow \\mathbb {\\mathbb {F^{\\mathit {>s+k}}}}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{ \\Omega ^{s+k}M}$ (up to isomorphism in the first two components and up to homotopy in the last).", "Therefore, we see that $\\mathrm {crdeg}^{}_{{R}}{C}\\le -k$ where $-k\\lneq -1$ by assumption.", "$\\diamond $ The previous example demonstrates that Definition REF does not always agree with Definition REF as $\\mathbb {F}$ is a bounded below $R$ -complex and so the former definition does not capture what happens in the cosyzygy sequence of $M$ .", "Proposition 2.12 Let $\\mathcal {C}$ be a minimal complex in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ and $\\mu \\in \\operatorname{Hom}\\nolimits _{\\mathcal 
{K}(R)}(\\mathcal {C},{\\Sigma ^{q}{\\mathcal {C}}})$ .", "If $\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}\\le \\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}}$ , then one of the following must be true: $0\\le \\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}}-\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}<2+q$ , or $\\mathcal {C}$ is periodic.", "Set $\\mathrm {crdeg}^{\\mu }_{{R}}{C}=s_{\\mu }$ and $\\mathrm {cocrdeg}^{\\mu }_{{R}}{C}=t_{\\mu }$ such that $t_{\\mu }\\ge s_{\\mu }$ .", "To demonstrate (1), we show that $t_{\\mu }-s_{\\mu }\\ge 2+q$ forces (2).", "First, let us suppose equality so that $t_{\\mu }= s_{\\mu }+(2+q)$ thus implying $\\mu _n$ is split injective for all $n<s_{\\mu }+(2+q)$ and $\\mu _{n+q}$ is surjective for all $n>s_{\\mu }$ .", "In particular, $\\mu _{t_{\\mu }-1}\\!", ":C_{t_{\\mu }-1}\\rightarrow C_{t_{\\mu }-1-q}$ is bijective, as $t_{\\mu }-1=s_{\\mu }+q+1$ and $t_{\\mu }-1-q=s_{\\mu }+1$ .", "$@R=1.25cm @C=1.5cm {\\cdots [r]^(.35){{\\partial }_{(s_{\\mu }+3)+q}} & C_{(s_{\\mu }+2)+q} @{->>}[d]^{\\mu _{(s_{\\mu }+2)+q}} [r]^(.4){{\\partial }_{(s_{\\mu }+2)+q}} & C_{t_{\\mu }-1}=C_{(s_{\\mu }+1)+q} @{^{(}->>}[d]^{\\mu _{t_{\\mu }-1}=\\mu _{(s_{\\mu }+1)+q}}_{\\cong } [r]^(.6){{\\partial }_{(s_{\\mu }+1)+q}} & C_{s_{\\mu }+q} @{^{(}->}[d]^{\\mu _{s_{\\mu }+q}} [r]^{{\\partial }_{s_{\\mu }+q}} & \\cdots \\\\\\cdots [r]^(.45){{\\partial }_{s_{\\mu }+3}} & C_{s_{\\mu }+2} [r]^(.35){{\\partial }_{s_{\\mu }+2}} & C_{t_{\\mu }-1-q}=C_{s_{\\mu }+1} [r]^(.6){{\\partial }_{s_{\\mu }+1}} & C_{s_{\\mu }} [r]^(.55){{\\partial }_{s_{\\mu }}} &\\cdots }$ First note that, by commutativity of the diagram, $\\mu _{(s_{\\mu }+1)+q}(\\mathrm {im}{\\partial }_{(s_{\\mu }+2)+q})\\subseteq \\mathrm {im}{\\partial }_{s_{\\mu }+2}=\\ker {\\partial }_{s_{\\mu }+1}.$ Conversely, surjectivity of $\\mu _{(s_{\\mu }+2)+q}$ tells us that $\\ker {\\partial }_{s_{\\mu }+1}=\\mathrm {im}{\\partial }_{s_{\\mu }+2}\\subseteq \\mu _{(s_{\\mu }+1)+q}(\\mathrm {im}{\\partial }_{(s_{\\mu }+2)+q})$ since for any $x\\in \\mathrm {im}{\\partial }_{s_{\\mu }+2}$ there exists a $y\\in C_{(s_{\\mu }+2)+q}$ such that ${\\partial }_{s_{\\mu }+2}\\mu _{(s_{\\mu }+2)+q}(y)=x=\\mu _{(s_{\\mu }+1)+q}{\\partial }_{(s_{\\mu }+2)+q}(y).$ Hence, $\\mathrm {im}{\\partial }_{(s_{\\mu }+2)+q}=\\ker {\\partial }_{(s_{\\mu }+1)+q}\\cong \\ker {\\partial }_{s_{\\mu }+1}=\\mathrm {im}{\\partial }_{(s_{\\mu }+2)}$ as $\\mu _{(s_{\\mu }+1)+q}$ is a $Q$ -module isomorphism.", "Now, set $M=\\mathrm {im}{\\partial }_{s_{\\mu }+1}$ , noting that the truncated complex $\\mathcal {C}^{>s_{\\mu }}$ is degree-wise bijective with the minimal resolution $\\mathbb {F}$ of $M$ .", "Further note that $\\Omega ^{1}{M}=\\mathrm {im}{\\partial }_{(s_{\\mu }+2)}\\cong \\mathrm {im}{\\partial }_{(s_{\\mu }+2)+q}=\\Omega ^{1+q}{M}$ and so, by uniqueness of $\\mathbb {F}$ , $\\Omega ^{1+q+n}{M}\\cong \\Omega ^{1+n}{M}$ for any $n\\in {\\mathbb {N}}$ .", "Now, apply a similar argument to $M^*=\\operatorname{Hom}\\nolimits _R(M,R)$ noting that $\\operatorname{Hom}\\nolimits _R(\\mu _{(s_{\\mu }+1)+q},R)$ will likewise be an $R$ -module isomorphism from $\\operatorname{Hom}\\nolimits _R(C_{s_{\\mu }+1},Q)\\cong C_{s_{\\mu }+1}$ to $\\operatorname{Hom}\\nolimits _R(C_{(s_{\\mu }+1)+q},R)\\cong C_{(s_{\\mu }+1)+q}$ .", "Then, $\\operatorname{Hom}\\nolimits _R(\\mathcal {C}^{>s_{\\mu }},R)$ is periodic and thus $\\mathcal {C}^{<s_{\\mu }}$ is periodic as well.", "From here, reconstruction of $\\mathcal {C}$ from $\\mathcal {C}^{<s_{\\mu }}$ and 
$\\mathcal {C}^{>s_{\\mu }}$ demonstrates that $\\mathcal {C}$ is periodic of period $q$ , forcing $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=-\\infty $ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=\\infty $ by Proposition REF .", "Note that this argument involved at least one degree-wise isomorphism, so it is straightforward to see that the same arguments holds for the case in which $t_{\\mu }-s_{\\mu }>2+q$ .", "Corollary 2.13 If $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}$ are realized by the same degree $q$ endomorphism, then $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}\\le \\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}-(2+q)$ if and only if $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=-\\infty $ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=\\infty $ .", "As a consequence of these results, two questions arise: Are the critical and cocritical degrees of a totally acyclic $R$ -complex always realized by the same endomorphism?", "If not, then is it possible that a non-periodic minimal complex has non-infinite cocritical degree larger than the critical degree by an unbounded amount?" ], [ "Tate Cohomology and Critical Degree", "In this section, we have two primary goals: the first is to give an appropriate analogue for [5] and the second is to provide a partial answer to question (i.)", "listed at the end of the previous section.", "The importance of generalizing the notion of critical degree to $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ comes in the form of its triangulated structure, as well as the equivalency of categories $\\operatorname{{\\mathcal {K}}_{tac}}(R)\\simeq \\operatorname{\\underline{\\mathrm {MCM}}}(R)$ as each finitely generated $R$ -module has a MCM approximation ($\\!\\!$[9]).", "It is then unsurprising that we may relate critical degree to $\\operatorname{\\widehat{Ext}}\\nolimits _R^n(M,{\\mathbb {k}})=\\operatorname{H}\\nolimits ^n\\operatorname{Hom}\\nolimits _R(\\mathcal {C},{\\mathbb {k}})$ equipped with comparison homomorphisms $\\varepsilon _R^n(M,{\\mathbb {k}}):\\operatorname{Ext}\\nolimits ^n_R(M,{\\mathbb {k}})\\rightarrow \\operatorname{\\widehat{Ext}}\\nolimits _R^n(M,{\\mathbb {k}})$ as described in [6] (cf.", "[9], [24], [10]).", "Lemma 3.1 Let M be maximal Cohen-Macaulay.", "Given complete resolutions $\\mathcal {K}\\xrightarrow{}\\mathbb {F}\\rightarrow {\\mathbb {k}}$ and $\\mathcal {C}\\rightarrow \\mathbb {G}\\rightarrow M$ , there exists an isomorphism $\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^n(\\mathcal {C},\\mathcal {K})\\cong \\operatorname{\\widehat{Ext}}\\nolimits _{R}^n(M,{\\mathbb {k}})$ for each $n\\in {\\mathbb {Z}}$ .", "The isomorphism is given by ${\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^n(\\mathcal {C},\\mathcal {K}) @{<:>}[r] @{=}[d] & \\operatorname{\\widehat{Ext}}\\nolimits _{R}^n(M,{\\mathbb {k}}) @/_/[d]_{\\Phi _n} \\\\ \\operatorname{Hom}\\nolimits _{\\mathcal {K}(R)}(\\mathcal {C},{\\Sigma ^{n}{\\mathcal {K}}}) @{=}[r] & \\operatorname{Hom}\\nolimits _{\\mathcal {K}(R)}(\\overline{\\mathcal {C}},{\\Sigma ^{n}{\\overline{\\mathcal {K}}}}) @/_/[u]_{\\Theta _n}^{\\cong }}$ where the first (left-hand) equality is by definition (cf.", "[23]) and the second (bottom) equality is due to the fact $\\mathcal {C}\\cong \\overline{\\mathcal {C}}$ in $\\mathcal {K}(R)$ .", "The third (right-hand) isomorphism is by the Comparison Theorem (cf.", "[6]) where $\\Phi _n$ extends any $R$ -module map $f:M_n\\rightarrow {\\mathbb {k}}$ to a chain map $\\hat{f}:\\overline{\\mathcal 
{C}}\\rightarrow {\\Sigma ^{n}{\\overline{\\mathcal {K}}}}$ and $\\Theta _n$ restricts in the other direction, as indicated by the diagram ${\\mathcal {C} [r] @{..>}[d]^{\\hat{f}} & (\\mathbb {F})^{\\ge n} @{..>}[d]^{\\bar{f}} [r] & M_n [d]^{f\\vert _{M_n}} \\\\ \\mathcal {K}[r] & \\mathbb {K} [r] & {\\mathbb {k}}\\\\ }$ with $M_n=\\operatorname{coker}\\nolimits ({\\partial }^{\\mathcal {C}}_{n+1})$ .", "The intermediary step to providing a cohomological characterization of critical degree in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ yields a triangulated approach to Definitions REF and REF .", "Take $\\mathcal {K}\\rightarrow \\mathbb {K}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{\\mathbb {k}}$ to be the minimal complete resolution of the residue field ${\\mathbb {k}}=R/{\\mathfrak {m}}$ and note that for any endomorphism $\\mathit {\\mu }\\!", ":\\mathcal {C}\\rightarrow {\\Sigma ^{q}{C}}$ , the distinguished triangle ${{\\mathcal {C}}}\\xrightarrow{}{{{\\Sigma ^{q}{\\mathcal {C}}}}}\\xrightarrow{}{\\mathcal {{\\operatornamewithlimits{\\mathcal {M}}\\mathit {(\\mu )}}}}\\xrightarrow{}{{\\Sigma {\\mathcal {C}}}}$ yields the long exact sequence of abelian groups $\\cdots \\rightarrow \\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^n({\\operatornamewithlimits{\\mathcal {M}}\\mathit {(\\mu )}},\\mathcal {K})\\rightarrow \\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^n({\\Sigma ^{q}{\\mathcal {C}}},\\mathcal {K})\\xrightarrow{}\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^n(\\mathcal {C},\\mathcal {K}) \\\\ \\rightarrow \\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{n+1}({\\operatornamewithlimits{\\mathcal {M}}\\mathit {(\\mu )}},\\mathcal {K})\\rightarrow \\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{n+1}({\\Sigma ^{q}{\\mathcal {C}}},\\mathcal {K})\\rightarrow \\cdots $ where $\\mu ^{n}=\\operatorname{Hom}\\nolimits _{\\mathcal {K}(R)}(\\mu _{n},\\mathcal {K})$ (cf.", "[16]).", "Proposition 3.2 The $\\mu $-critical degree of $\\mathcal {C}$ is the least homological degree $s_{\\mu }$ for which $\\mu ^{n+q}$ is (split) injective for all $n>s_{\\mu }$ , $\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}=\\inf \\lbrace i\\:\\: \\mu ^{n+q}\\!", ":\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{n+q}({\\Sigma ^{q}{\\mathcal {C}}},\\mathcal {K})\\hookrightarrow \\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{n+q}(\\mathcal {C}, \\mathcal {K}) \\text{ for all } n>i \\rbrace .$ Likewise, the $\\mu $-cocritical degree of $\\mathcal {C}$ is the greatest homological degree $t_{\\mu }$ for which $\\mu ^n$ is (split) surjective for all $n<t_{\\mu }$ , $\\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}}=\\sup \\lbrace i\\:\\: \\mu ^n\\!", ":\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{n}({\\Sigma ^{q}{\\mathcal {C}}}, \\mathcal {K})\\twoheadrightarrow \\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{n}(\\mathcal {C}, \\mathcal {K}) \\text{ for all } n<i \\rbrace .$ Remark If there exists no such infimum $s_{\\mu }$ , then, by definition, $\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}=\\infty $ and, similarly, if there exists no such supremum $t_{\\mu }$ then $\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}=-\\infty $ .", "Further note that we may define the critical degree of $\\mathbf {\\mathcal {C}}$ in the same way as Definition REF and the cocritical degree of $\\mathbf {\\mathcal {C}}$ in the same way as Definition REF .", "The only distinction here is how we are defining the relative critical and cocritical degrees for a given $R$ -complex and chain 
endomorphism in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Without loss of generality, assume $\\mathcal {C}$ is minimal.", "First note that Lemma REF gives us the induced map $\\hat{\\mu }^n$ for any $n\\in {\\mathbb {Z}}$ , as depicted in the diagram ${\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^n({\\Sigma ^{q}{\\mathcal {C}}},\\mathcal {K}) [r]^{\\mu ^n} @/^/[d]^{\\Phi _j} & \\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^n(\\mathcal {C},\\mathcal {K}) @/_/[d]_{\\Phi _n} \\\\ \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{n}(M_{-q},{\\mathbb {k}}) @{..>}[r]^{\\hat{\\mu }^n} @/^/[u]^{\\Theta _n}_{\\cong } & \\operatorname{\\widehat{Ext}}\\nolimits _R^{n}(M,{\\mathbb {k}}) @/_/[u]_{\\Theta _n}^{\\cong }}$ where $M_{-q}:=\\operatorname{Im}\\nolimits {\\partial }^{{\\Sigma ^{q}{\\mathcal {C}}}}_0$ .", "Hence, it is enough to show that, for $n\\in {\\mathbb {Z}}$ , $\\mu _n$ is surjective (split injective) if and only if $\\hat{\\mu }^n$ is split injective (surjective).", "We begin with the forward direction of both cases.", "First note that $\\mu _n$ surjective yields the short exact sequence $0\\rightarrow \\ker (\\mu _{n})\\rightarrow C_{n}\\xrightarrow{} C_{n-q}\\rightarrow 0$ which splits.", "Then, applying $\\operatorname{Hom}\\nolimits _R(-,{\\mathbb {k}})$ , we see that the short exact sequence $0\\rightarrow \\operatorname{Hom}\\nolimits _R(C_{n-q},{\\mathbb {k}})\\xrightarrow{}\\operatorname{Hom}\\nolimits _R(C_{n},{\\mathbb {k}})\\rightarrow \\operatorname{Hom}\\nolimits _R(\\ker (\\mu _{n}),{\\mathbb {k}})\\rightarrow 0$ must split as well and, thus, $\\hat{\\mu }^n=\\operatorname{Hom}\\nolimits (\\mu _{n},{\\mathbb {k}})$ is split injective.", "On the other hand, if we suppose $\\mu _n$ is split injective then the short exact sequence $0\\rightarrow C_n{\\mu _n} C_{n-q}\\rightarrow A \\rightarrow 0$ splits, where $C_{n-q}=A\\oplus \\mathrm {im}(\\mu _n)$ .", "Once again, applying $\\operatorname{Hom}\\nolimits _R(-, {\\mathbb {k}})$ yields the short exact sequence $0\\rightarrow \\operatorname{Hom}\\nolimits _R(A,{\\mathbb {k}})\\rightarrow \\operatorname{Hom}\\nolimits _R(C_{n-q},{\\mathbb {k}}) \\xrightarrow{}\\operatorname{Hom}\\nolimits _R(C_n,{\\mathbb {k}}) \\rightarrow 0$ which must also split, implying that $\\hat{\\mu }^n$ is surjective as needed.", "Now, to address the backwards direction, first observe that $\\hat{\\mu }^n$ split injective is equivalent to being left-cancellative so that $ \\hat{\\mu }^{n}(\\alpha )=\\hat{\\mu }^{n}(\\beta )$ implies $\\alpha =\\beta $ for any two cocycles $\\alpha $ , $\\beta \\in \\operatorname{Hom}\\nolimits _R(C_{n-q}\\,{\\mathbb {k}})$ .", "Of course, due to the action of $\\hat{\\mu }^n$ on $\\operatorname{\\widehat{Ext}}\\nolimits _{R}(M_{-q},{\\mathbb {k}})$ , this is equivalent to the statement that $\\alpha \\mu _{n}=\\beta \\mu _{n}$ implies $\\alpha =\\beta $ for any two morphisms $\\alpha $ , $\\beta \\!", ":C_{n-q}\\rightarrow {\\mathbb {k}}$ .", "Equivalently, $\\mu _{n}$ is right-cancellative, and thus surjective.", "The same approach does not work for $\\hat{\\mu }_n$ surjective, so instead denote by $\\lbrace e_i\\rbrace $ a basis for $C_n$ and define $\\pi _j\\!\\!", ":C_n\\rightarrow {\\mathbb {k}}$ so that $\\pi _j(e_i)=\\delta _{ij}$ .", "Note that since ${\\partial }(\\mathcal {C})\\subseteq {\\mathfrak {m}}\\mathcal {C}$ , each $\\pi _j$ will be a cocycle in $\\operatorname{Hom}\\nolimits _R(C_n,{\\mathbb {k}})$ .", "By the surjectivity of $\\hat{\\mu }^n$ there exist maps $\\rho _j\\!\\!", 
":C_{n-q}\\rightarrow {\\mathbb {k}}$ such that $\\pi _j=\\rho _j\\mu _n$ in turn implying $\\lbrace \\mu _n(e_i)\\rbrace $ forms a linearly independent subset of a basis $\\mathcal {E}$ for $C_{n-q}$ by Nakayama.", "As a linearly independent sub-basis of $C_{n-q}$ (defined by $\\mathrm {im}(\\mu _n)$ ) is in one-to-one correspondence with a basis for $C_n$ , it must hold that $\\mu _n$ is injective.", "Furthermore, $\\mathcal {E}=(\\mathcal {E}\\setminus \\lbrace \\mu _n(e_i)\\rbrace )\\cup \\lbrace \\mu _n(e_i)\\rbrace $ and, if we denote by $A$ the subspace generated by $\\mathcal {E}\\setminus \\lbrace \\mu _n(e_i)\\rbrace $ , then we may write $C_{n-q}=A\\oplus \\mathrm {im}(\\mu _n)$ showing $\\mu _n$ is split injective." ], [ "Cohomological Characterizations", "We are now ready to present our results, analogous to Proposition 7.2 in [5], which give the cohomological characterizations of the critical and cocritical degrees in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Proposition 3.3 If $\\mathcal {C}\\lnot \\simeq 0$ is a totally acyclic $R$ -complex with $M=\\mathrm {im}{\\partial }^{\\mathcal {C}}_0$ , then $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}<\\infty $ and the following equalities hold: $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=\\sup \\lbrace r\\in {\\mathbb {Z}}\\:|\\:\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{\\ge r}(\\mathcal {C},\\mathcal {K})=0\\rbrace $ $\\hspace{28.45274pt}=\\sup \\lbrace r\\in {\\mathbb {Z}}\\:|\\:\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ge r}(M,{\\mathbb {k}})=0\\rbrace .$ In the proposition above, recall that $\\mathrm {Ext}_{\\mathcal {K}(R)}^{\\ast }({\\mathcal {C}},{\\mathcal {K}})$ is a graded $\\mathcal {S}$ -module where $\\mathcal {S}=R[\\chi _1,\\dots ,\\chi _c]$ and denote by $\\mathfrak {X}=(\\chi _1,\\dots ,\\chi _c)$ (cf.", "[13]).", "Thus, for each $r\\in {\\mathbb {Z}}$ , note that $\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{\\ge r}(\\mathcal {C},\\mathcal {K})$ is a submodule containing all elements of degree greater than or equal to $r$ .", "Note that we only need show the second equality, as the first equality will then be given by Proposition REF .", "In the following proof, we handle the cases for $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}\\ge 0$ and $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}< 0$ separately, as the connection to Definition REF simplifies the former case.", "For ease of notation, set $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s$ and $r^*=\\sup \\lbrace r\\in {\\mathbb {Z}}\\:|\\:\\operatorname{depth}\\nolimits _{\\hat{\\mathcal {S}}}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ge r}(M,{\\mathbb {k}})=0\\rbrace .$ Finiteness of $s$ is guaranteed by the existence of a linear form $\\ell \\in \\mathcal {S}$ which is eventually a non zero-divisor on the truncation $\\operatorname{Ext}\\nolimits _R^{\\ge N}(M,{\\mathbb {k}})$ for some $N\\gg 0$ ($\\!\\!$[11]).", "Whenever $s\\ge 0$ , $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=\\mathrm {crdeg}^{}_{{R}}{M}$ and we can apply [5] to see that $s=\\sup \\lbrace r\\in {\\mathbb {N}}\\cup \\lbrace 0\\rbrace \\,|\\, \\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{Ext}\\nolimits _R^{\\ge r}(M,{\\mathbb {k}})=0\\rbrace .$ It should be clear that, in this case, $r^*=s$ since $\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ge r}(M,{\\mathbb {k}})=\\operatorname{Ext}\\nolimits _{R}^{\\ge r}(M,{\\mathbb {k}})$ for each $r\\ge 0$ .", "Likewise, 
if $r^*\\ge 0$ , this forces $s\\ge 0$ and $r^*=s$ .", "So now suppose $r^*\\le s<0$ and consider the $R$ -complex ${\\Sigma ^{\\mid r^* \\mid }{\\mathcal {C}}}$ which has non-negative critical degree since $\\mathrm {crdeg}^{}_{{R}}{{\\Sigma ^{\\mid r^* \\mid }{\\mathcal {C}}}}=s\\: +\\mid r^* \\mid $ by Proposition REF .", "As $\\mathrm {im}{\\partial }^{{\\Sigma ^{\\mid r^* \\mid }{\\mathcal {C}}}}_0=\\mathrm {im}{\\partial }^{\\mathcal {C}}_{r^*}$ , it should be clear that $s\\:+\\mid r^* \\mid =\\sup \\lbrace r\\in {\\mathbb {N}}\\cup \\lbrace 0\\rbrace \\,|\\,\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{Ext}\\nolimits _{R}^{\\ge r}(M_{r^*},{\\mathbb {k}})=0\\rbrace $ and so there is a nonzero socle element of degree $s\\:+\\mid r^* \\mid $ .", "However, there exists some $\\chi \\in \\mathfrak {X}$ that is a non zero-divisor on $\\operatorname{Ext}\\nolimits _R^{>s+\\mid r^* \\mid }(M_{r^*},{\\mathbb {k}})$ meaning $\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{Ext}\\nolimits _R^{>s+\\mid r^* \\mid }(M_{r^*},{\\mathbb {k}})\\ne 0.$ By [23], there is an isomorphism $\\operatorname{\\widehat{Ext}}\\nolimits _R^{s+\\mid r^* \\mid +i}(M_{r^*},{\\mathbb {k}})\\cong \\operatorname{\\widehat{Ext}}\\nolimits _R^{s+i}(\\Omega ^{\\mid r^* \\mid }M_{r^*},{\\mathbb {k}})$ thus implying $\\operatorname{Ext}\\nolimits _R^{s+\\mid r^* \\mid +i}(M_{r^*},{\\mathbb {k}})\\cong \\operatorname{\\widehat{Ext}}\\nolimits _R^{s+i}(M,{\\mathbb {k}})$ for each $i\\in {\\mathbb {Z}}$ as well.", "Therefore, $\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{\\widehat{Ext}}\\nolimits _R^{\\ge s+i}(M,{\\mathbb {k}})\\ne 0$ for all $i>0$ but it must hold that $\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{\\widehat{Ext}}\\nolimits _R^{\\ge s}(M,{\\mathbb {k}})=0$ , in turn implying $r^*=s$ .", "If we instead suppose $s\\le r^*<0$ and consider the $R$ -complex ${\\Sigma ^{\\mid s \\mid }{C}}$ , we can apply the same argument as above since $\\mathrm {crdeg}^{}_{{R}}{{\\Sigma ^{\\mid s \\mid }{C}}}=0$ and there exists an isomorphism $\\operatorname{\\widehat{Ext}}\\nolimits _R^{s+\\mid s \\mid +i}(M_s,{\\mathbb {k}})\\cong \\operatorname{\\widehat{Ext}}\\nolimits _R^{s+i}(\\Omega ^{\\mid s \\mid }M_s,{\\mathbb {k}})$ so that $\\operatorname{\\widehat{Ext}}\\nolimits _R^{i}(M_s,{\\mathbb {k}})\\cong \\operatorname{\\widehat{Ext}}\\nolimits _R^{s+i}(M,{\\mathbb {k}})$ for each $i>0$ .", "We now work to show the cohomological characterization for cocritical degree, which aligns nicely with what we understand about critical degree.", "Before doing so, we first need a lemma that establishes a significant relationship between $\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M},{{\\mathbb {k}}})$ and $\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M^*},{{\\mathbb {k}}})$ .", "Lemma 3.4 Let $\\mathcal {C}\\lnot \\simeq 0$ be in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ with $\\mathcal {C}\\rightarrow \\mathbb {F}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{M}$ the minimal complete resolution of $M=\\mathrm {im}{\\partial }^{\\mathcal {C}}_0$ and ${\\Sigma ^{-1}{\\mathcal {C}^*}}\\rightarrow ({\\Sigma ^{-1}{\\mathcal {C}^*}})^{\\ge 1}\\twoheadrightarrow M^*$ the (minimal) complete resolution of $M^*\\cong \\mathrm {im}({\\Sigma ^{-1}{{\\partial }^*}})_0=\\mathrm {im}{\\partial }^*_1$ .", "Then: $\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{\\widehat{Ext}}\\nolimits _R^{\\ge r}(M^*,{\\mathbb {k}})=0 
\\text{ if and only if } \\operatorname{codepth}\\nolimits _{\\mathcal {S}}\\operatorname{\\widehat{Ext}}\\nolimits _R^{\\le -r}(M,{\\mathbb {k}})=0.$ First note that $\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{\\widehat{Ext}}\\nolimits ^{>r}_R(M^*,{\\mathbb {k}})\\ne 0$ if and only if there exists some $\\chi \\in \\mathfrak {X}$ which is a non zero-divisor on $\\operatorname{\\widehat{Ext}}\\nolimits ^{>r}_R(M^*,{\\mathbb {k}})$ .", "This is equivalent to multiplication by $\\chi $ on $\\mathrm {Ext}_{R}^{\\ast }({M^*},{{\\mathbb {k}}})$ being injective for elements of degree $n\\ge r$ .", "Denote by $q=\\deg (\\chi )$ and define $(\\hat{\\mu }^*)^n\\!", ":\\operatorname{\\widehat{Ext}}\\nolimits _{R}^n(M^*_{-q},{\\mathbb {k}})\\rightarrow \\operatorname{\\widehat{Ext}}\\nolimits _{R}^n(M^*,{\\mathbb {k}})$ by multiplication of $\\chi $ .", "By the same argument in the proof of Proposition REF , $(\\hat{\\mu }^*)^n$ is injective if and only if $(\\mu ^*)_{n+q}\\!", ":C^*_{n+q}\\rightarrow C^*_n$ is surjective for each $n\\ge r$ , where $(\\hat{\\mu }^*)^{n+q}=\\operatorname{Hom}\\nolimits (\\mu ^*_{n+q},{\\mathbb {k}})$ .", "Furthermore, by Lemma REF , this is equivalent to the map $\\mu _{n}=(\\mu ^*_{n+q})^T\\!", ":C_{n}\\rightarrow C_{n-q}$ being split injective for $n\\le -r$ .", "Now, applying the same argument from Proposition REF once more, we see that this is equivalent to the map $\\hat{\\mu }^n=\\operatorname{Hom}\\nolimits (\\mu _n,{\\mathbb {k}})\\!", ":\\operatorname{\\widehat{Ext}}\\nolimits _{R}^n(M_{-q},{\\mathbb {k}})\\rightarrow \\operatorname{\\widehat{Ext}}\\nolimits _{R}^n(M,{\\mathbb {k}})$ being surjective for $n\\le -r$ .", "Equivalently, there exists an element $\\chi ^{\\prime }\\in \\mathfrak {X}$ for which multiplication by $\\chi ^{\\prime }$ on $\\mathrm {Ext}_{R}^{\\ast }({M},{{\\mathbb {k}}})$ is surjective for all elements of degree $n\\le -r$ .", "Lastly, $\\chi ^{\\prime } \\operatorname{\\widehat{Ext}}\\nolimits ^{<-r}_R(M,{\\mathbb {k}})=\\operatorname{\\widehat{Ext}}\\nolimits ^{<-r}_R(M,{\\mathbb {k}})$ if and only if $\\operatorname{codepth}\\nolimits _{\\mathcal {S}}\\operatorname{\\widehat{Ext}}\\nolimits ^{<-r}_R(M,{\\mathbb {k}})\\ne 0$ .", "We provide the following visual proof of Lemma REF , as well as its relevancy to the critical and cocritical degrees $\\!\\!$ : $@R=.75cm @C=2.25cm { *++[F]{\\text{Surjective for all } n>r} @/^3pc/@{<~>}[r]|{\\text{dualizing}} @{<=>}[r]^{\\text{Lemma \\ref {Lem:sur-inj-maps1}}} & *++[F]{\\text{Split Injective for all } n<-r} @{<:>}[d]|{\\text{Prop.", "\\ref {Prop:tri-cr&cocr}}} \\\\ *++[F.]{\\textbf { \\leavevmode {\\color {gray}Injective on L.E.S.}}}", "@{<:>}[u]|{\\text{Prop.", "\\ref {Prop:tri-cr&cocr}}} & *++[F.]{\\textbf { \\leavevmode {\\color {gray}Surjective on L.E.S.}}}", "@{<=>}[d]|{\\text{\\color {gray}(Equivalent)}} \\\\ *++[F=]{\\textbf { \\leavevmode {\\color {gray}Regular element on }} \\operatorname{Ext}\\nolimits _R^{>r}(M^*,{\\mathbb {k}})} @{<=>}[u]|{\\text{\\color {gray}(Equivalent)}} @{<.>}[r]^{\\text{Equivalency}}_{\\text{of notions}} & *+++[F=]{\\textbf { \\leavevmode {\\color {gray}Coregular element on }} \\operatorname{\\widehat{Ext}}\\nolimits _R^{<-r}(M,{\\mathbb {k}})} \\\\ *++[F.]{\\textbf {\\leavevmode {\\color {gray}Highest Degree Socle Element}}} @{<:}[u]|{\\color {gray}(\\mathrm {crdeg}^{}_{{R}}{M^*})} & *+++[F.]{\\textbf {\\leavevmode {\\color {gray}Lowest Degree Cosocle Element}}} @{<:}[u]|{\\color {gray}(\\mathrm {cocrdeg}^{}_{{R}}{M})}}$ Proposition 
3.5 If $\\mathcal {C}\\lnot \\simeq 0$ is a totally acyclic $R$ -complex with $M=\\mathrm {im}{\\partial }^{\\mathcal {C}}_0$ , then $\\mathrm {cocrdeg}^{}_{{R}}{C}=t>-\\infty $ and the following equalities hold: $\\mathrm {cocrdeg}^{}_{{R}}{C}=\\inf \\lbrace r\\in {\\mathbb {Z}}\\:|\\:\\operatorname{codepth}\\nolimits _{\\hat{\\mathcal {S}}}\\operatorname{Ext}\\nolimits _{\\mathcal {K}(R)}^{\\le r}(\\mathcal {C},\\mathcal {K})=0\\rbrace $ $\\hspace{28.45274pt}=\\inf \\lbrace r\\in {\\mathbb {Z}}\\:|\\:\\operatorname{codepth}\\nolimits _{\\hat{\\mathcal {S}}}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le r}(M,{\\mathbb {k}})=0\\rbrace .$ Note that if we take $s+1$ to be the lowest degree such that there exists a non zero-divisor on $\\operatorname{\\widehat{Ext}}\\nolimits _R^{>r}(M^*,{\\mathbb {k}})$ , then $s$ is the highest degree of a nonzero element in $\\operatorname{Soc}\\nolimits (\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M^*},{{\\mathbb {k}}}))$ .", "And since this correlates to the highest degree for which there exists a generating (coregular) element, apply [18] to see that $-s$ should be the lowest degree for which there exists a nonzero element in $\\operatorname{Cosoc}\\nolimits (\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M},{{\\mathbb {k}}}))$ .", "Further note that since $\\operatorname{depth}\\nolimits _{\\mathcal {Z^*}}\\operatorname{Ext}\\nolimits _R(M^*,{\\mathbb {k}})$ coincides with $\\operatorname{depth}\\nolimits _{\\mathcal {S}}\\operatorname{Ext}\\nolimits _R(M^*,{\\mathbb {k}})$ by [5], there must exist a nonzero coregular element from $\\mathfrak {X}$ on the greatest truncation of $\\operatorname{\\widehat{Ext}}\\nolimits _R^{<-r}(M,{\\mathbb {k}})$ for which there exists such a generating element.", "That is to say, there exists some $\\chi \\in \\mathfrak {X}$ such that $\\chi \\operatorname{\\widehat{Ext}}\\nolimits _R^{\\le t}(M,{\\mathbb {k}})=\\operatorname{\\widehat{Ext}}\\nolimits _R^{\\le t}(M,{\\mathbb {k}})$ where $t=\\mathrm {cocrdeg}^{}_{{R}}{C}$ .", "Thus, the cocritical degree of a complex over a complete intersection is (negatively) finite and, moreover, the equalities in the proposition hold." 
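For ease of reference, the two cohomological characterizations just established for a totally acyclic $R$-complex $\mathcal{C}\not\simeq 0$ with $M=\mathrm{im}\,\partial^{\mathcal{C}}_0$ can be displayed side by side; the block below is only a restatement of Propositions 3.3 and 3.5 in one place (no new content), with $\mathcal{S}=R[\chi_1,\dots,\chi_c]$ acting on Tate cohomology as above:

```latex
% Restatement of Propositions 3.3 and 3.5 side by side (no new content).
\begin{align*}
\mathrm{crdeg}_{R}\,\mathcal{C}
  &= \sup\bigl\{\, r\in\mathbb{Z} \;\big|\;
     \operatorname{depth}_{\mathcal{S}}
     \widehat{\operatorname{Ext}}_{R}^{\ge r}(M,\mathbb{k})=0 \,\bigr\},\\
\mathrm{cocrdeg}_{R}\,\mathcal{C}
  &= \inf\bigl\{\, r\in\mathbb{Z} \;\big|\;
     \operatorname{codepth}_{\hat{\mathcal{S}}}
     \widehat{\operatorname{Ext}}_{R}^{\le r}(M,\mathbb{k})=0 \,\bigr\}.
\end{align*}
```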
], [ "An Eventual Degree-wise Epimorphism and Monomorphism", "We now explore question (i.)", "presented at the end of Section .", "Our goal is to employ an analogous argument to the proof of [11] in order to demonstrate existence of an endomorphism on $\\mathcal {C}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ that is eventually degree-wise surjective on the left and split injective on the right.", "Theorem 3.6 Let $Q$ be a commutative local, regular ring with infinite residue ${\\mathbb {k}}$ and $R$ a complete intersection of the form $Q/ where $ is a regular $Q$ -sequence and $\\operatorname{codim}\\nolimits (R,Q)=c$ .", "Further let $\\mathcal {C}$ be a totally acyclic $R$ -complex with minimal subcomplex $\\overline{\\mathcal {C}}$ .", "Then there exists a degree 2 chain endomorphism such that $\\textbf {u}\\!", ":\\bar{C}_{n+2}\\rightarrow \\bar{C}_n$ is an epimorphism for all $n\\gg 0$ and a monomorphism for all $n+2\\ll 0$ .", "First, set $M=\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {C}}_0$ and $M^*=\\operatorname{Hom}\\nolimits _R(M,R)$ .", "Our goal is to demonstrate that there exists some $\\chi $ in the ideal $\\mathfrak {X}$ of $\\mathcal {S}_{{\\mathbb {k}}}={\\mathbb {k}}[\\chi _1,\\dots ,\\chi _c]$ such that $\\chi $ is both a non zero-divisor on $\\operatorname{\\widehat{Ext}}\\nolimits _R^{>s}(M,{\\mathbb {k}})$ for some $s\\in {\\mathbb {Z}}$ and $\\chi \\operatorname{\\widehat{Ext}}\\nolimits _R^{<t}(M,{\\mathbb {k}})=\\operatorname{\\widehat{Ext}}\\nolimits _R^{<t}(M,{\\mathbb {k}})$ for some $t\\in {\\mathbb {Z}}$ .", "As $\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M},{{\\mathbb {k}}})$ and $\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M^*},{{\\mathbb {k}}})$ are both graded modules over $\\mathcal {S}_{{\\mathbb {k}}}$ , the action of this ring remains the same on each of the modules.", "For ease of discussion, denote by $E_1=\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ge 0}(M,{\\mathbb {k}})=\\operatorname{Ext}\\nolimits _{R}^{\\ge 0}(M,{\\mathbb {k}})$ and $E_2=\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ge 0}(M^*,{\\mathbb {k}})=\\operatorname{Ext}\\nolimits _{R}^{\\ge 0}(M^*,{\\mathbb {k}})$ so that $E_1$ and $E_2$ are both noetherian modules over $\\mathcal {S}_{{\\mathbb {k}}}$ .", "Furthermore, denote by $A_i=\\operatorname{Soc}\\nolimits (E_i)$ for each $i=1,2$ and note each $A_i$ is an artinian submodule of the respective graded Tate module.", "Now, as each $A_i$ is both artinian and noetherian, it is of finite length, implying there exists some $N_i=\\sup \\lbrace n\\in {\\mathbb {N}}\\cup \\lbrace 0\\rbrace \\,|\\,\\deg (x)=n \\text{ for } x\\in A_i\\rbrace $ for each $i=1,2$ .", "Set $E_1^{>N_0}=\\operatorname{Ext}\\nolimits _{R}^{>N_0}(M,{\\mathbb {k}})$ and $E_2^{>N_0}=\\operatorname{Ext}\\nolimits _{R}^{>N_0}(M^*,{\\mathbb {k}})$ where $N_0=\\max \\lbrace N_1,N_2\\rbrace $ , so that neither truncation contains a nonzero element annihilated by $\\mathfrak {X}$ .", "Then denote by $P_1,\\dots ,P_q$ the associated primes of $0\\in E_1^{>N_0}$ and $Q_1,\\dots ,Q_r$ the associated primes of $0\\in E_2^{>N_0}$ .", "Hence, $P_1\\cup \\cdots \\cup P_q\\cup Q_1\\cup \\cdots \\cup Q_r$ contains the set of zero-divisors on both $E_1^{>N_0}$ and $E_2^{>N_0}$ .", "Now consider the set $\\chi _1+\\sum _{i=2}^c{\\mathbb {k}}\\chi _i,$ which generates $\\mathfrak {X}$ , so it cannot be contained in any $P_k$ or $Q_k$ since there is no element of $\\mathfrak {X}$ which is a zero-divisor on $E_i^{\\ge N_0}$ .", "Moreover, 
${\\mathbb {k}}$ is infinite, thus yielding a translation of this set which is a subspace of $\\mathcal {S}_{{\\mathbb {k}}}$ , and this subspace cannot be contained within $P_1\\cup \\cdots \\cup P_q\\cup Q_1\\cup \\cdots \\cup Q_r$ .", "This means there exists a linear form $\\hat{\\chi }=\\chi _1+\\sum _{j=2}^c \\alpha _j \\chi _j$ with $\\alpha _j\\in {\\mathbb {k}}$ such that $\\hat{\\chi }$ is a non zero-divisor on both $E_1^{\\ge N_0}$ and $E_2^{\\ge N_0}$ .", "For each $j=2,\\dots ,c$ set $a_j$ equal to a pre-image of $\\alpha _j$ in $R$ so that $\\chi =\\chi _1+\\sum _{j=2}^c a_j \\chi _j\\in \\mathcal {S}.$", "Lastly, note that $\\chi $ is a non zero-divisor on $E_1^{\\ge N_0}$ if and only if $\\chi ^{n+2}\\!", ":\\operatorname{Ext}\\nolimits _R^{n}(M,{\\mathbb {k}})\\rightarrow \\operatorname{Ext}\\nolimits _R^{n+2}(M,{\\mathbb {k}})$ is injective for all $n>N_0$ , if and only if $\\textbf {u}\\!", ":\\bar{C}_{n+2}\\rightarrow \\bar{C}_{n}$ is surjective for all $n>N_0$ , where $\\textbf {u}=u_1+\\sum _{j=2}^c a_j u_j$ and $\\chi =\\operatorname{Hom}\\nolimits _R(\\textbf {u}, {\\mathbb {k}})$ .", "Similarly, we see that $\\chi $ is additionally a non zero-divisor on $E^{\\ge N_0}_2=\\operatorname{Ext}\\nolimits _R^{\\ge N_0}(M^*,{\\mathbb {k}})$ .", "As demonstrated in Lemma REF , there is a one-to-one correspondence between regular elements on $\\operatorname{Ext}\\nolimits _R^{\\ge N_0}(M^*,{\\mathbb {k}})$ and coregular elements on $\\operatorname{\\widehat{Ext}}\\nolimits _R^{\\le -N_0}(M,{\\mathbb {k}})$ .", "Consequently, $\\chi $ is a “generating element” of $\\operatorname{\\widehat{Ext}}\\nolimits _R^{\\le -N_0}(M,{\\mathbb {k}})$ if and only if $\\chi ^{n+2}\\!", ":\\operatorname{\\widehat{Ext}}\\nolimits _R^{n}(M,{\\mathbb {k}})\\rightarrow \\operatorname{\\widehat{Ext}}\\nolimits _R^{n+2}(M,{\\mathbb {k}})$ is surjective for all $n+2<-N_0$ , if and only if $\\chi ^m\\!", ":\\operatorname{\\widehat{Ext}}\\nolimits _R^{m-2}(M,{\\mathbb {k}})\\rightarrow \\operatorname{\\widehat{Ext}}\\nolimits _R^{m}(M,{\\mathbb {k}})$ is surjective for all $m<-N_0$ , if and only if the chain map $\\textbf {u}\\!", ":\\bar{C}_{n}\\rightarrow \\bar{C}_{n-2}$ is split injective for all $n<-N_0$ .", "While this illustrates that at least one of $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}$ or $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}$ is realized by either $N_1$ or $-N_2$ , the question remains whether $N_1=N_2$ in the proof above, which would imply that the critical and cocritical degrees are realized by the same degree 2 endomorphism.", "We suspect this holds at least for the complete resolution of the residue field.", "Corollary 3.7 Let $Q$ be a commutative local, regular ring with infinite residue field ${\\mathbb {k}}$ and $R$ a complete intersection of the form $Q/(f_1,\\dots ,f_c)$ where $f_1,\\dots ,f_c$ is a regular $Q$ -sequence and $\\operatorname{codim}\\nolimits (R,Q)=c$ .", "Further let $\\mathcal {C}$ be a totally acyclic $R$ -complex with minimal subcomplex $\\overline{\\mathcal {C}}$ .", "Then one of the following cases must hold: If $\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}=\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}$ , then $\\mu $ is a linear form in $\\mathcal {S}_{{\\mathbb {k}}}$ and $\\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}}\\le \\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}.$ If $\\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}}=\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}$ , then $\\mu $ is a linear form in $\\mathcal {S}_{{\\mathbb {k}}}$ and $\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}\\ge \\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}.$" ], [ "Direct Sums and Retracts", "We begin by discussing the critical degree of the (co)product in 
$\\operatorname{{\\mathcal {K}}_{tac}}(R)$ , given by the direct sum of two $R$ -complexes $@R=0.25cm @C=0.25cm { \\mathcal {C} @{_{(}->} [dr]^{\\iota _{\\mathcal {C}}} & & \\mathcal {D} @{^{(}->} [dl]_{\\iota _{\\mathcal {D}}} \\\\ & \\mathcal {C} \\oplus \\mathcal {D} @{->>}[dl]^{\\pi _{\\mathcal {C}}} @{->>}[dr]_{\\pi _{\\mathcal {D}}} & \\\\ \\mathcal {C}& & \\mathcal {D} }$ where $ \\mathcal {C} \\oplus \\mathcal {D}$ is the complex with $R$ -modules $( \\mathcal {C} \\oplus \\mathcal {D})_n=C_n\\oplus D_n$ and $R$ -module homomorphisms ${\\partial }^{ \\mathcal {C} \\oplus \\mathcal {D}}_n=\\begin{pmatrix}{\\partial }_n^{\\mathcal {C}} & 0 \\\\ 0 & {\\partial }_n^{\\mathcal {D}}\\end{pmatrix}.$ Note that minimality is preserved over direct sums, where the decomposition of $ \\mathcal {C} \\oplus \\mathcal {D}$ is given by $ \\mathcal {(\\overline{\\mathcal { \\mathcal {C} \\oplus \\mathcal {D}}})} \\oplus \\mathcal {\\mathcal {T}}$ with $\\overline{\\mathcal { \\mathcal {C} \\oplus \\mathcal {D}}}= \\mathcal {\\overline{\\mathcal {C}}} \\oplus \\mathcal {\\overline{\\mathcal {D}}}$ and $\\mathcal {T}$ contractible.", "Proposition 4.1 Let $R$ be a complete intersection of the form $Q/($ , with $f_1,\\dots , f_c$ a regular $Q$ -sequence.", "Further suppose $\\mathcal {C}\\in \\operatorname{{\\mathcal {K}}_{tac}}(R)$ and $\\mathcal {D}\\in \\operatorname{{\\mathcal {K}}_{tac}}(R)$ , so that $ \\mathcal {C} \\oplus \\mathcal {D}\\in \\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Denote by $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s_1$ , $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}=s_2$ , $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=t_1$ , and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}=t_2$ .", "Then: $\\mathrm {crdeg}^{}_{{R}}{( \\mathcal {C} \\oplus \\mathcal {D})}=\\max \\lbrace s_1,s_2\\rbrace ,\\text{ and }$ $\\mathrm {cocrdeg}^{}_{{R}}{( \\mathcal {C} \\oplus \\mathcal {D})}=\\min \\lbrace t_1,t_2\\rbrace .$ Suppose $s=\\mathrm {crdeg}^{}_{{R}}{( \\mathcal {C} \\oplus \\mathcal {D})}$ .", "Further let $M=\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {C}}_0$ and $N=\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {D}}_0$ , so that $M\\oplus N=\\operatorname{Im}\\nolimits {\\partial }^{ \\mathcal {C} \\oplus \\mathcal {D}}_0$ noting that there exists an isomorphism between the $\\mathcal {S}$ -modules $\\operatorname{\\widehat{Ext}}\\nolimits _R^*(M\\oplus N, {\\mathbb {k}})\\cong \\operatorname{\\widehat{Ext}}\\nolimits _R^*(M, {\\mathbb {k}})\\oplus \\operatorname{\\widehat{Ext}}\\nolimits _R^*(N, {\\mathbb {k}})$ .", "So take $0\\ne \\mathfrak {x}\\in \\operatorname{Soc}\\nolimits (\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M},{{\\mathbb {k}}}))$ with $\\deg (\\mathfrak {x})=s_1$ and $0\\ne \\mathfrak {z}\\in \\operatorname{Soc}\\nolimits (\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({N},{{\\mathbb {k}}}))$ with $\\deg (\\mathfrak {z})=s_2$ .", "Meaning, $\\mathfrak {x}$ is a nonzero socle element of highest degree in the former graded $\\mathcal {S}$ -module and likewise for $\\mathfrak {z}$ in the latter module.", "Given this, note that $(\\mathfrak {x},0)\\in \\operatorname{\\widehat{Ext}}\\nolimits _R^{s_1}(M\\oplus N, {\\mathbb {k}})$ and $(0,\\mathfrak {z})\\in \\operatorname{\\widehat{Ext}}\\nolimits _R^{s_2}(M\\oplus N, {\\mathbb {k}})$ must both be annihilated by the maximal ideal $\\mathfrak {X}\\subseteq \\mathcal {S}$ and so, $s\\ge \\max \\lbrace s_1,s_2\\rbrace $ .", "Conversely, we know there exists some $0\\ne (\\mathfrak {x^{\\prime 
}},\\mathfrak {z^{\\prime }})\\in \\operatorname{Soc}\\nolimits (\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M\\oplus N},{{\\mathbb {k}}}))$ with $\\mathfrak {x^{\\prime }}\\in \\operatorname{\\widehat{Ext}}\\nolimits _R^s(M,{\\mathbb {k}})$ and $\\mathfrak {z^{\\prime }}\\in \\operatorname{\\widehat{Ext}}\\nolimits _R^s(N,{\\mathbb {k}})$ such that $(\\mathfrak {x^{\\prime }},\\mathfrak {z^{\\prime }})\\in (0:_{\\hat{E}} \\mathfrak {X})$ .", "Therefore, either $\\mathfrak {x^{\\prime }}$ or $\\mathfrak {z^{\\prime }}$ (or both) must be nonzero and annihilated by $\\mathfrak {X}$ , implying that $s=\\max \\lbrace s_1,s_2\\rbrace $ .", "This gives proof of the first equality.", "To show the second equality, we give an appropriate analogue to the argument for critical degree.", "First note that the existence of nonzero elements $\\bar{\\mathfrak {x}}\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_1}(M,{\\mathbb {k}})/\\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_1}(M,{\\mathbb {k}})$ and $\\bar{\\mathfrak {z}}\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_2}(N,{\\mathbb {k}})/\\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_2}(N,{\\mathbb {k}})$ implies the existence of nonzero elements $(\\bar{\\mathfrak {x}},0)\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_1}(M\\oplus N,{\\mathbb {k}})/\\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_1}(M\\oplus N,{\\mathbb {k}})$ and $(0,\\bar{\\mathfrak {z}})\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_2}(M\\oplus N,{\\mathbb {k}})/\\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t_2}(M\\oplus N,{\\mathbb {k}})$ .", "Hence, $t\\le t_1$ and $t\\le t_2$ , so that $t\\le \\min \\lbrace t_1,t_2\\rbrace $ .", "Conversely, the existence of a nonzero element $(\\bar{\\mathfrak {x^{\\prime }}},\\bar{\\mathfrak {z^{\\prime }}})\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t}(M\\oplus N,{\\mathbb {k}})/\\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t}(M\\oplus N,{\\mathbb {k}})$ implies that at least $\\bar{\\mathfrak {x^{\\prime }}}\\ne 0$ or $\\bar{\\mathfrak {z^{\\prime }}}\\ne 0$ , thereby proving the equality $t=\\min \\lbrace t_1,t_2\\rbrace $ .", "Remark Notice that we did not deal with the case of infinite critical or cocritical degrees.", "Recall that whenever $R$ is a complete intersection the critical degree of any $R$ -complex (and $R$ -module) will be positively finite and, likewise, the cocritical degree will always be negatively finite.", "If at least one of $\\mathcal {C}$ or $\\mathcal {D}$ is periodic, then the given statements (and arguments) still apply.", "As $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ is a thick subcategory of the homotopy category, $\\mathcal {K}(R)$ , it contains all retracts.", "Thus, if a complex $\\mathcal {E}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ can be written as a direct sum, say $ \\mathcal {C} \\oplus \\mathcal {D}$ , then we want to consider the critical and cocritical degrees of each summand $\\mathcal {C}$ and $\\mathcal {D}$ .", "Whenever $\\mathcal {E}$ is minimal, each summand $\\mathcal {C}$ and $\\mathcal {D}$ must be minimal as well, and ${\\partial }^{\\mathcal {E}}={\\partial }^{\\mathcal {C}}\\oplus {\\partial }^{\\mathcal {D}}\\subseteq {\\mathfrak {m}}\\mathcal {C}\\oplus {\\mathfrak {m}}\\mathcal {D}$ .", "For an endomorphism $\\mu \\!", ":\\mathcal {E}\\rightarrow {\\Sigma ^{q}{\\mathcal {E}}}$ , note that we get four induced 
maps (two on each summand): $\\mu _1\\!", ":\\mathcal {C}\\rightarrow {\\Sigma ^{q}{\\mathcal {C}}} \\\\\\mu _2\\!", ":\\mathcal {D}\\rightarrow {\\Sigma ^{q}{\\mathcal {C}}} \\\\\\mu _3\\!", ":\\mathcal {C}\\rightarrow {\\Sigma ^{q}{\\mathcal {D}}} \\\\\\mu _4\\!", ":\\mathcal {D}\\rightarrow {\\Sigma ^{q}{\\mathcal {D}}}.$ If $\\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {E}}=s$ , then there is some $\\mu _{n+q}\\!", ":C_{n+q}\\oplus D_{n+q}\\rightarrow C_n\\oplus D_n$ that is surjective for all $n>s$ .", "However, note that the surjectivity onto one summand, take $C_n$ for example, may not occur from $\\mu _{1,(n+q)}$ alone as the map $\\mu _{2,(n+q)}$ theoretically could contribute in part to the surjectivity.", "Likewise, if $\\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {E}}=t$ , the same could occur with injectivity for $n<t$ .", "In either case, it is difficult to deduce what the critical and cocritical degrees are on each summand from Definitions REF and REF alone.", "Hence, we again employ Propositions REF and REF to understand the critical and cocritical degrees of retracts in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Proposition 4.2 Let $R=Q/($ , with $f_1,\\dots , f_c$ a regular $Q$ -sequence, and further suppose $\\mathcal {E}\\in \\operatorname{{\\mathcal {K}}_{tac}}(R)$ with decomposition $\\mathcal {E}= \\mathcal {C} \\oplus \\mathcal {D}$ (neither $\\mathcal {C}$ nor $\\mathcal {D}$ contractible).", "If $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {E}}=s$ then $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}\\le s$ and $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}\\le s$ , with at least one being an equality.", "Likewise, if $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {E}}=t$ then $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}\\ge t$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}\\ge t$ , with at least one being an equality.", "For simplicity, assume $\\mathcal {E}$ is minimal and denote by $M\\oplus N=\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {E}}_0$ where $M=\\mathrm {im}{\\partial }^{\\mathcal {C}}_0$ and $N=\\mathrm {im}{\\partial }^{\\mathcal {D}}_0$ .", "Proposition REF implies that $s$ is the maximal degree of a nonzero element $(\\mathfrak {x},\\mathfrak {z})\\in \\hat{E}=\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M},{{\\mathbb {k}}})\\oplus \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({N},{{\\mathbb {k}}}) \\cong \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M\\oplus N},{{\\mathbb {k}}})$ with $(\\mathfrak {x},\\mathfrak {z})\\in (0:_{\\hat{E}}\\mathfrak {X})$ .", "Hence, $\\mathfrak {x}\\in \\hat{E}_M=\\operatorname{\\widehat{Ext}}\\nolimits _{R}^s(M,{\\mathbb {k}})$ and $\\mathfrak {z}\\in \\hat{E}_N=\\operatorname{\\widehat{Ext}}\\nolimits _{R}^s(N,{\\mathbb {k}})$ (with at least one nonzero) such that $\\mathfrak {x}\\in (0:_{\\hat{E}_M}\\mathfrak {X})$ and $\\mathfrak {z}\\in (0:_{\\hat{E}_N}\\mathfrak {X})$ .", "This implies either $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}\\ge s$ or $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}\\ge s$ (or both, in the case that $\\mathfrak {x}\\ne 0 \\ne \\mathfrak {z}$ ).", "Now suppose $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s^{\\prime }\\gneq s$ so that there exists some nonzero element $\\mathfrak {x^{\\prime }}\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{s^{\\prime }}(M,{\\mathbb {k}})\\subset \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{>s}(M,{\\mathbb {k}})$ such that $\\mathfrak {x^{\\prime }}\\mathfrak {X}=0$ .", "Of course, this would then imply the element $(\\mathfrak {x^{\\prime 
}},0)\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{s^{\\prime }}(M\\oplus N, {\\mathbb {k}})$ is annihilated by $\\mathfrak {X}$ , thus contradicting $s$ as the highest degree of a socle element.", "The same argument can be applied to $\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({N},{{\\mathbb {k}}})$ , so that we see both $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}$ and $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}$ must be bounded above by $s$ .", "Lastly, it is straightforward to see that at least one of $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}$ or $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}$ must be equal to $s$ .", "For the cocritical degree, there must exist a lowest degree nonzero element $(\\mathfrak {x},\\mathfrak {z})\\in \\operatorname{Cosoc}\\nolimits (\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\ast }({M\\oplus N},{{\\mathbb {k}}}))$ with $\\mathfrak {x}\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{t}(M,{\\mathbb {k}})$ and $\\mathfrak {z}\\in \\operatorname{\\widehat{Ext}}\\nolimits _{R}^{t}(N,{\\mathbb {k}})$ (at least one nonzero).", "That is, there exists some nonzero element of degree $t$ such that $(\\mathfrak {x},\\mathfrak {z})\\notin \\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t}(M\\oplus N,{\\mathbb {k}})$ .", "Suppose that $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=t^{\\prime }\\lneq t$ so that there exists some $0\\ne \\mathfrak {x^{\\prime }}\\notin \\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t^{\\prime }}(M,{\\mathbb {k}})$ , implying $(\\mathfrak {x^{\\prime }},0)\\notin \\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t^{\\prime }}(M\\oplus N,{\\mathbb {k}})$ and contradicting the assumption that $t$ is the lowest degree of such an element.", "The same argument can be applied to $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}$ , so we see that the cocritical degrees of $\\mathcal {C}$ and $\\mathcal {D}$ are bounded below by $t$ .", "On the other hand, since there exists some nonzero element of degree $t$ such that $(\\mathfrak {x},\\mathfrak {z})\\notin \\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t}(M\\oplus N,{\\mathbb {k}})$ , either $0\\ne \\mathfrak {x}\\notin \\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t}(M,{\\mathbb {k}})$ or $0\\ne \\mathfrak {z}\\notin \\mathfrak {X}\\operatorname{\\widehat{Ext}}\\nolimits _{R}^{\\le t}(N,{\\mathbb {k}})$ , demonstrating that we must have equality for at least one of the cocritical degrees.", "Example 4.3 Let $R={\\mathbb {k}}[\\![x_1,\\dots ,x_n]\\!]/(f_1,\\dots ,f_c)$ where $(f_1,\\dots ,f_c)\\subseteq \\mathfrak {m}=(x_1,\\dots ,x_n)$ .", "Furthermore, denote by $\\mathcal {K}$ the (minimal) totally acyclic complex in the complete resolution of $M={\\mathbb {k}}$ .", "By [11], $\\mathrm {crdeg}^{}_{{R}}{{\\mathbb {k}}}=-1$ .", "Now denote by $\\mathcal {C}=\\mathcal {K}\\oplus {\\Sigma ^{5}{\\mathcal {K}}}$ and note that Propositions 2.10 and 4.1 tell us that $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=\\max \\lbrace 0,5\\rbrace =5$ .", "Similarly, we expect $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=\\min \\lbrace -1,4\\rbrace =-1$ .", "$\\diamond $ The previous example demonstrates that the lowest degree Betti number may not occur at homological degree 0, or even at the critical degree.", "Rather, it could occur at the cocritical degree, and is always guaranteed to occur between the critical and cocritical degrees (inclusive).", "Hence, this motivates questions about the distance between the critical and cocritical degrees of an $R$ -complex."
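Before turning to a notion of distance between these two degrees, it may be convenient to record how the shift formulas interact with the sum formulas for shifted copies of a single complex. The display below simply combines Propositions 2.10 and 4.1 for $\mathcal{C}\oplus{\Sigma^{n}{\mathcal{C}}}$, under the assumptions (stated here, not taken from the example above) that $n>0$ and that $s=\mathrm{crdeg}_R\mathcal{C}$ and $t=\mathrm{cocrdeg}_R\mathcal{C}$ are both finite:

```latex
% Combining Propositions 2.10 and 4.1 for C (+) Sigma^n C, assuming n > 0
% and s = crdeg_R(C), t = cocrdeg_R(C) both finite.
\begin{align*}
\mathrm{crdeg}_{R}\bigl(\mathcal{C}\oplus\Sigma^{n}\mathcal{C}\bigr)
  &= \max\{\,s,\; s+n\,\} = s+n,\\
\mathrm{cocrdeg}_{R}\bigl(\mathcal{C}\oplus\Sigma^{n}\mathcal{C}\bigr)
  &= \min\{\,t,\; t+n\,\} = t.
\end{align*}
```

In particular, shifting one summand pushes the critical degree up while leaving the cocritical degree fixed, which is the phenomenon exploited again in Example 4.10 below.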
], [ "Critical Diameter", "Although the authors of [4] provide a sufficient bound for $\\mathrm {crdeg}^{}_{{R}}{M}$ when $\\operatorname{cx}\\nolimits _RM=2$ , such bounds are unknown for greater complexity.", "Moreover, the bound given is dependent upon the Betti sequence of the given module.", "Example 7.5 from [5] demonstrates an obstacle when trying to find appropriate bounds for $\\mathrm {crdeg}^{}_{{R}}{M}$ over a given ring or modules of the same complexity: taking a higher cosyzygy (or syzygy) module will always yield a higher (resp.", "lower) critical degree.", "We provide the following example to demonstrate the same issue, even when we transition to the definitions in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ .", "Example 4.4 Denote by $\\mathcal {C}\\in \\operatorname{{\\mathcal {K}}_{tac}}(R)$ the minimal $R$ -complex with $\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {C}}_0=M$ and $\\mathcal {D}\\in \\operatorname{{\\mathcal {K}}_{tac}}(R)$ the minimal $R$ -complex with $\\operatorname{Im}\\nolimits {\\partial }^{\\mathcal {D}}_0=N$ , so that there exist complete resolutions $\\mathcal {\\mathcal {C}}\\rightarrow \\mathbb {\\mathbb {F}}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{M}$ and $\\mathcal {\\mathcal {D}}\\rightarrow \\mathbb {\\mathbb {G}}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{N}$ .", "Now suppose $M$ and $N$ are syzygies of each other, say $N=\\Omega ^{-n}M$ and $M=\\Omega ^nN$ .", "Then note that $\\mathcal {D}\\simeq {\\Sigma ^{n}{\\mathcal {C}}}$ (in fact, they are isomorphic!).", "If $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=t$ , then $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}=s+n$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}=t+n$ by Proposition REF .", "However, suppose we instead start with the assumption that $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}=s^{\\prime }$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}=t^{\\prime }$ , viewing $\\mathcal {C}={\\Sigma ^{-n}{\\mathcal {D}}}$ .", "Once again applying Proposition REF , it is not difficult to see that $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=s^{\\prime }-n$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=t^{\\prime }-n$ .", "Note that in doing so, $s^{\\prime }-n=s$ and $t^{\\prime }-n=t$ .", "$\\diamond $ In the example above, note that while the critical and cocritical degrees change under translations, the difference between these two degrees does not alter: $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}-\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=s-t=(s+n)-(t+n)=\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}-\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}.$ Definition 4.5 Let $\\mathcal {C}$ be a totally acyclic $R$ -complex with minimal subcomplex $\\overline{\\mathcal {C}}$ such that $\\operatorname{cx}\\nolimits _R\\overline{\\mathcal {C}}>1$ .", "Then the $R$ -diameter of $\\mathcal {C}$ is the distance between the critical and cocritical degrees of $\\mathcal {C}$ , $\\operatorname{diam}_{R}(\\mathcal {C})=\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}-\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}.$ Define $\\operatorname{diam}_{R}(\\mathcal {C})=-\\infty $ for any $\\mathcal {C}$ with $\\operatorname{cx}\\nolimits _Q\\overline{\\mathcal {C}}=1$ or if $\\mathcal {C}\\simeq 0$ .", "Remark Note that since $R$ is a complete intersection, $\\operatorname{diam}_{R}(\\mathcal {C})<\\infty $ but, relaxing to a local ring $Q$ (not regular!", "), $\\operatorname{diam}_{Q}(\\mathcal {C})=\\infty $ if and only if $\\mathrm 
{crdeg}^{}_{{Q}}{\\mathcal {C}}=\\infty $ or $\\mathrm {cocrdeg}^{}_{{Q}}{\\mathcal {C}}=-\\infty $ .", "In the above definition, it should be clear that $\\operatorname{diam}_{R}(\\mathcal {C})\\le \\inf \\lbrace \\mathrm {crdeg}^{\\mu }_{{R}}{\\mathcal {C}}-\\mathrm {cocrdeg}^{\\mu }_{{R}}{\\mathcal {C}} \\: : \\:\\mu \\in \\operatorname{End}\\nolimits _{\\mathcal {K}(R)}(\\mathcal {C})\\rbrace $ with equality whenever $\\mu $ realizes both the critical and cocritical degrees.", "Proposition 4.6 If $\\mathcal {C}\\simeq \\mathcal {D}$ , then $\\operatorname{diam}_{R}(\\mathcal {C})=\\operatorname{diam}_{R}(\\mathcal {D})$ .", "By Proposition REF , we have that $\\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}}=\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}$ and $\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}}=\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}$ .", "Thus, the statement follows directly from these observations.", "Proposition 4.7 For non-periodic complexes $\\mathcal {C}$ and $\\mathcal {D}$ in $\\operatorname{{\\mathcal {K}}_{tac}}(R)$ , it holds that: $\\operatorname{diam}_{R}(\\mathcal {C^*})=\\operatorname{diam}_{R}(\\mathcal {C})-q$ $\\operatorname{diam}_{R}(\\mathcal { \\mathcal {C} \\oplus \\mathcal {D}})=\\max \\lbrace \\mathrm {crdeg}^{}_{{R}}{\\mathcal {C}},\\mathrm {crdeg}^{}_{{R}}{\\mathcal {D}}\\rbrace -\\min \\lbrace \\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {C}},\\mathrm {cocrdeg}^{}_{{R}}{\\mathcal {D}}\\rbrace $ where $q=\\deg (\\mu )$ and $\\mu $ the endomorphism realizing $\\mathrm {crdeg}^{}_{{Q}}{\\mathcal {C}}$ .", "This is a straightforward application of Propositions REF and REF .", "Definition 4.8 Let $M$ be a finitely-generated $R$ -module with $\\operatorname{cx}\\nolimits _RM>1$ .", "Then the critical diameter of $M$ is defined to be $\\operatorname{diam}_{R}({M})=\\operatorname{diam}_{R}(\\mathcal {C})$ where $\\mathcal {C}\\rightarrow \\mathbb {F}\\rightarrow {M}$ is the minimal complete resolution of $M$ .", "Furthermore, define $\\operatorname{diam}_{R}({M})=-\\infty $ for any $R$ -module with $\\operatorname{cx}\\nolimits _RM=1$ and set $\\operatorname{diam}_{R}({M})=0$ if $\\operatorname{pd}\\nolimits _RM<\\infty $ .", "Example 4.9 Let $Q={\\mathbb {k}}[\\![x,y]\\!", "]$ and take $R=Q/(x^2,y^2)$ so $\\operatorname{codim}\\nolimits (Q,R)=2$ and we may expect two CI operators.", "Now, suppose $M={\\mathbb {k}}=\\operatorname{coker}\\nolimits (f)$ where $f=(x, y)$ from $R^2\\rightarrow R$ and take a minimal complete resolution $\\mathcal {K}\\rightarrow \\mathbb {F}\\rightarrow {\\mathbb {k}}$ .", "Then the CI (Eisenbud) operators on $\\mathcal {K}$ form degree $-2$ maps on $\\mathcal {K}$ as follows as follows: $@R=.5cm @C=1.2cm{R^{4}[r]^{\\begin{psmallmatrix} y & x & 0 & 0 \\\\ 0 & -y & x & 0 \\\\ 0 & 0 & y & x \\end{psmallmatrix}} [d]_{t_3} & R^{3}[r]^{\\begin{psmallmatrix} y & x & 0 \\\\ 0 & -y & x \\end{psmallmatrix}} [d]_{t_2} & R^{2}[r]^{\\begin{psmallmatrix}x & y \\end{psmallmatrix}} [d]_{t_1} & R[r]^{\\begin{psmallmatrix}xy \\end{psmallmatrix}} [d]_{t_0} & R[r]^{\\begin{psmallmatrix}x \\\\ y \\end{psmallmatrix}} [d]_{t_{-1}} &R^{2}[r]^{\\begin{psmallmatrix} y & 0 \\\\ -x & y \\\\ 0 & x \\end{psmallmatrix}} [d]_{t_{-2}} & R^3[r]^{\\begin{psmallmatrix} y & 0 & 0 \\\\ x & y & 0 \\\\ 0 & -x & y \\\\ 0 & 0 & x \\end{psmallmatrix}} [d]_{t_{-3}} & R^4 [d]_{t_{-3}}\\\\ R^{2}[r] & R[r] & R[r] & R^2 [r] & R^3 [r] &R^{4}[r] &R^5 [r] & R^6 \\\\}$ $@R=.05cm @C=0.95cm{\\color {gray}\\text{Deg: } (1) &\\color {gray} \\hspace{-5.69054pt}(0) & \\hspace{2.84526pt}\\color {gray}(-1) 
& \\hspace{2.84526pt} \\color {gray}(-2) & \\color {gray}(-3) &\\color {gray} (-4) & \\color {gray}(-5) & \\color {gray}(-6) }$ where $t_{x,3} = \\begin{psmallmatrix} 0 & 0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \\end{psmallmatrix}$ and $t_{y,3} = \\begin{psmallmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\end{psmallmatrix} $ , $t_{x,2} = \\begin{psmallmatrix} 0 & 0 & 1 \\end{psmallmatrix} $ and $t_{y,2} = \\begin{psmallmatrix} 1 & 0 & 0 \\end{psmallmatrix} $ , $t_{x,1} = \\begin{psmallmatrix} 0 & y \\end{psmallmatrix}$ and $t_{y,1} = \\begin{psmallmatrix} x & 0 \\end{psmallmatrix}$ , $t_{x,0} = \\begin{psmallmatrix} 0 \\\\ y \\end{psmallmatrix}$ and $t_{y,0} = \\begin{psmallmatrix} x \\\\ 0 \\end{psmallmatrix}$ , $t_{x,-1} =\\begin{psmallmatrix} 0 \\\\ 0 \\\\ 1 \\end{psmallmatrix}$ and $t_{y,-1} =\\begin{psmallmatrix} 1 \\\\ 0 \\\\ 0 \\end{psmallmatrix}$ , $t_{x,-2}= \\begin{psmallmatrix} 0 & 0 \\\\ 0 &0 \\\\ 1 & 0 \\\\ 0 & 1 \\end{psmallmatrix}$ and $t_{y,-2}= \\begin{psmallmatrix} 1 & 0 \\\\ 0 &1 \\\\ 0 & 0 \\\\ 0 & 0 \\end{psmallmatrix}$ , $t_{x,-3}= \\begin{psmallmatrix} 0 & 0 & 0\\\\ 0 &0 & 0\\\\ 1 & 0 &0 \\\\ 0 & 1 & 0 \\\\ 0 &0 & 1 \\end{psmallmatrix}$ and $t_{y,-3}= \\begin{psmallmatrix} 1 & 0 &0\\\\ 0 &1 &0\\\\ 0 & 0 & 1 \\\\ 0 & 0 & 0 \\\\0 & 0& 0 \\end{psmallmatrix}$ , $t_{x,-4}= \\begin{psmallmatrix} 0 & 0 & 0 & 0 \\\\0 & 0 &0 & 0\\\\ 1 & 0 &0 & 0\\\\ 0 & 1 & 0 & 0\\\\ 0 &0 & 1 & 0 \\\\ 0 & 0 & 0 & 1 \\end{psmallmatrix}$ and $t_{y,-4}= \\begin{psmallmatrix} 1 & 0 &0 & 0\\\\ 0 &1 &0 & 0\\\\ 0 & 0 & 1 & 0\\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0& 0 & 0 \\\\ 0 & 0& 0 & 0 \\end{psmallmatrix}$ As Eisenbud demonstrated in [11], each $t_{i+2}$ is surjective for $i\\ge 0$ and thus $\\mathrm {crdeg}^{}_{{R}}{C}=-1$ in this case.", "Interestingly, $\\mathrm {cocrdeg}^{}_{{R}}{C}=-1$ as well since we see that the earliest injective map on the right is $t_{-1}$ .", "This indicates $\\operatorname{diam}_{R}(\\mathcal {K})=-1-(-1)=0=\\operatorname{diam}_{R}({{\\mathbb {k}}})$ .", "In general, we conjecture that the residue field over any complete intersection ring will have a critical diameter of 0." 
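For completeness, the values read off in Example 4.9 can be assembled into the diameter of Definition 4.5; the display below only repackages the numbers above, and the closing comment records the conjecture just stated rather than a proved claim:

```latex
% Example 4.9 assembled: \mathcal{K} is the minimal complete resolution
% of \mathbb{k} over R = Q/(x^2, y^2).
\[
  \operatorname{diam}_{R}(\mathcal{K})
    = \mathrm{crdeg}_{R}(\mathcal{K}) - \mathrm{cocrdeg}_{R}(\mathcal{K})
    = (-1) - (-1) = 0
    = \operatorname{diam}_{R}(\mathbb{k}).
\]
% Conjecture (as stated above, unproved here): the residue field has
% critical diameter 0 over every complete intersection ring.
```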
], [ "Boundedness Problems on Diameter", "Note first that while $\\mathrm {crdeg}^{}_{{R}}{M}\\ne \\mathrm {crdeg}^{}_{{R}}{\\Omega ^{n}{M}}$ for any $n\\in {\\mathbb {Z}}^{+}\\cup {\\mathbb {Z}}^{-}$ , it should be clear that $\\operatorname{diam}_{R}({M})=\\operatorname{diam}_{R}({\\Omega ^{n}{M}})$ .", "However, given two $R$ -modules $N$ and $M$ with $2\\le \\operatorname{cx}\\nolimits _RM=\\operatorname{cx}\\nolimits _RN$ , does there exist a common upper bound for $\\operatorname{diam}_{R}({M})$ and $\\operatorname{diam}_{R}({N})$ ?", "We now shift perspective to an $R$ -complex to partly address this question.", "Example 4.10 Let $M$ be a finitely-generated MCM $R$ -module with minimal complete resolution $\\mathcal {C}\\rightarrow \\mathbb {F}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{M}$ and suppose $\\operatorname{cx}\\nolimits _RM>1$ .", "Denote by $\\operatorname{diam}_{R}({M})=\\omega _M=\\operatorname{diam}_{R}(\\mathcal {C})$ .", "Consider the $R$ -complex $\\mathcal {C}\\oplus {\\Sigma ^{n}{\\mathcal {C}}}$ which is associated to the (minimal) complete resolution $\\mathcal {C}\\oplus {\\Sigma ^{n}{\\mathcal {C}}}\\rightarrow \\mathbb {F}\\oplus {\\Sigma ^{n}{\\mathbb {F}}}\\twoheadrightarrow M\\oplus \\Omega ^nM$ for some fixed integer $n\\in {\\mathbb {Z}}$ .", "Hence, $\\operatorname{cx}\\nolimits _R(M\\oplus \\Omega ^nM)=\\operatorname{cx}\\nolimits _RM$ and yet $\\omega _{M\\oplus \\Omega ^nM}<\\omega _{M\\oplus \\Omega ^{n+1}M}$ for any such integer $n\\in {\\mathbb {Z}}$ .", "This demonstrates that the diameter is not necessarily bounded for all modules (or complexes) of a given complexity (greater than one).", "We leave the reader with the same question for an indecomposable MCM module (complex).", "Open Problem 4.1 Let $R$ be a complete intersection ring and denote by $M$ a finitely-generated MCM $R$ -module with minimal complete resolution $\\mathcal {C}\\rightarrow \\mathbb {F}\\mathrel {{\\rightarrow \\cr \\hspace{1.94443pt}\\rightarrow }}{M}$ such that $\\mathcal {C}$ is indecomposable.", "Further suppose that $\\operatorname{cx}\\nolimits _RM>1$ and denote by $\\operatorname{diam}_{R}({M})=\\omega _M=\\operatorname{diam}_{R}(\\mathcal {C})$ .", "Then does there exist some $d\\in {\\mathbb {N}}$ such that $\\omega _M\\le d$ for all such finitely-generated MCM $R$ -modules $M$ with indecomposable complete resolution and the same complexity (greater than one)?" ], [ "Acknowledgments", "I would like to sincerely thank my advisor David Jorgensen, who provided direction for this work in addition to having numerous discussions about it.", "I would also like to thank Tyler Anway for many, many conversations and comments throughout the writing process." ] ]
2212.05567
[ [ "Human-Robot Team Performance Compared to Full Robot Autonomy in 16\n Real-World Search and Rescue Missions: Adaptation of the DARPA Subterranean\n Challenge" ], [ "Abstract Human operators in human-robot teams are commonly perceived to be critical for mission success.", "To explore the direct and perceived impact of operator input on task success and team performance, 16 real-world missions (10 hrs) were conducted based on the DARPA Subterranean Challenge.", "These missions were to deploy a heterogeneous team of robots for a search task to locate and identify artifacts such as climbing rope, drills and mannequins representing human survivors.", "Two conditions were evaluated: human operators that could control the robot team with state-of-the-art autonomy (Human-Robot Team) compared to autonomous missions without human operator input (Robot-Autonomy).", "Human-Robot Teams were often in directed autonomy mode (70% of mission time), found more items, traversed more distance, covered more unique ground, and had a higher time between safety-related events.", "Human-Robot Teams were faster at finding the first artifact, but slower to respond to information from the robot team.", "In routine conditions, scores were comparable for artifacts, distance, and coverage.", "Reasons for intervention included creating waypoints to prioritise high-yield areas, and to navigate through error-prone spaces.", "After observing robot autonomy, operators reported increases in robot competency and trust, but that robot behaviour was not always transparent and understandable, even after high mission performance." ], [ "Introduction", "Autonomous robots combine numerous subsystems together such as sensing, navigation, localisation, planning, and control, to enable platforms to carry out tasks without human intervention.", "More capable autonomous robots can therefore become effective teammates that work alongside humans in a human-robot team scenario.", "For example, autonomous robots assisting their human teammates to achieve a mission-related goal [1].", "To achieve success in human-robot teams, effective teamwork between humans and robots is essential.", "Effective teamwork involves the balance between the need for close operator supervision and full independent robot autonomy without any oversight on their actions [1], [2].", "Therefore, human-robot teams often involve a task-load split between humans and robots that best suits the task.", "Heterogeneous robot teams often provide performance gains over homogeneous teams in relation to different capabilities to contribute to the task, which offers new opportunities for robots to contribute in beneficial ways under different levels of risk tolerance [3], [4].", "In turn, operators provide their own unique strengths to the team, such as the capacity to conduct high-order goal planning and decision-making with incomplete information.", "In unison, human-robot teams can work together to overcome challenges that humans and robots alone are unable to do, helping to accelerate the utility and impact of human-robot teams to translate into real-world outcomes [5].", "In the human-robot team process, it is not always clear what the most suitable level of autonomy/supervision is to create successful human-robot teams, and what is the direct and specific benefit that human operators provide to human-robot teamwork above robot autonomy alone.", "To investigate such a research question involves the completion of tasks that both include and exclude human operators into 
the process.", "In this paper, we present a series of experiments to assess the impact of human operators working with state-of-the-art robot autonomy compared to robot autonomy alone in a set of outdoor field deployments.", "The experiment scenarios are modelled on the DARPA Subterranean (SubT) Challenge using the CSIRO Data61 Team as a test case [6].", "The objective of the SubT challenge is to cover a large volume of unknown terrain to find as many artifacts as possible, which simulates a search mission using a robot team in the aftermath of a natural or industrial disaster to locate survivors [6].", "We conducted a set of 16 real-world search missions over a total of 10 hrs paired with detailed data analysis of team performance scores to assess the impact and involvement of the operator in mission-related outcomes.", "This experimental set evaluated human-robot team performance on key mission-relevant metrics such as total map coverage, artefacts found, and safety-related events, showing how human-robot teams compare to fully autonomous operation alone, and what advantages operators offer when they are able to intervene and direct the mission." ], [ "Background", "In this section, we will briefly review pertinent background literature in human-robot teaming, focusing on human-robot teaming in search and rescue.", "We will then review the format and scoring fo the DARPA Subterranean Challenge, which is used as the framework of our study." ], [ "Human-Robot Teams with Robotic Teammates", "Robots that can sense, navigate, localise and plan in an effective human-robot team configuration can contribute to beneficial mission-related outcomes in human-robot team scenarios [1], [7].", "Human-robot teams have been tested across several different domain types for their effectiveness and capability to contribute to task success.", "Examples includes within urban search and rescue expeditions [8], [9] and space exploration [10].", "Robot teammates that make intelligent and effective decisions on their own can help to extend the capacity and reach of the human involved in the team, otherwise referred to as the human operator.", "Robot teams are often under the direct control of a human operator, either through the direct teleoperation of robot movements, or by supervising autonomous robots to execute the task or action [11].", "The operators role and level of involvement also can widely vary, depending on autonomy level in their robot teammates, which can span anywhere from full teleoperation all the way through to infrequent or brief involvement with the robot [12].", "For example, robots within human-robot teams that continue on the initial actions set by the operator to then explore new regions, traverse more ground, or coordinate together in an autonomous way to assist the operator's mission directive [9], [13], [4].", "The human operator is often spending the most amount of time on direct teleoperation tasks, especially when the robot teammates are not able to make their own decisions.", "Instead, intelligent robot behaviours can help to reduce operator workload to allow the operator to focus on more urgent or pressing tasks during the mission, such as to focus on more important tasks, outcomes, or other team members [14], [11].", "More effective operator time can be critical to the mission, given that operators are often unable to directly control more than one robot at a time during complex tasks.", "Instead, operators can better control 4-8 robots that are acting in a semi-autonomous way 
[15].", "Human-robot teams are showing notable promise for future applications, but the role of the operator for the level and type of involvement in the task can be important for team success [1], [16], [9].", "As robot teammates become more capable to contribute to mission-based outcomes, the role of the operator can transition into a more supervisory role rather than direct robot control [7].", "Human operators can offer a strong sense of foresight, contextual awareness and higher-level prioritisation to ensure that the most critical and urgent tasks are addressed first during exploration [7], [11], [9].", "In real-world missions, operators can often spend their time assessing the robots' current state, and combining visual information provided by the robots to update their own view of the situation and environment to determine the next steps in the task [9].", "Where communications links permit, operators can assist by providing guidance or teleoperation to avoid critical incidents such as the robot becoming stuck, slipping, or colliding with objects, or re-directing the robot away from exploring areas that have already been well covered [17].", "To operate the team, operators often process a large volume of information related to the mission, including robot status updates, team-related errors, multi-agent coordination, robot navigation choices and trajectories, human-robot team task allocation, communications links, environmental conditions and topography, key objectives for the search and rescue mission, as well as additional mission constraints such as total time [9].", "Therefore, operators must have access to operator control tools and interfaces that can allow the operator to build up sufficient awareness of the situation that the robot team is currently experiencing [18].", "Human operators can also control more than one robot in the team, which can create even greater complexity with coordination between multiple robot viewpoints, functionalities, capabilities, and level of technical skill required for each task [15].", "Due to the increased complexity of the supervisory task, human operators can also inadvertently contribute to negative outcomes during the mission.", "Operators are often affected by high cognitive load demands when working and supervising multiple robots, which can directly influence mission and task performance [19], as well as increased cognitive load when operators are required to monitor more robots [20].", "For example, human operators contribute to more than 50% of robot failures [4], [21], and operators that attempt to control too many robots in a single team can reach a clear limit on human-robot team operation [22], eventually leading to deterioration in team performance [15].", "While operators play a clear role in directing robots to achieve better outcomes, operators can also inadvertently contribute to performance errors and interruptions.", "Therefore, it critical to understand how to best utilize operators in the loop, and where operator intervention could best be used to minimise cognitive load while maximising mission outcomes." 
], [ "Human-Robot Teams for Search and Rescue", "Disaster response and search and rescue is an area of human-robot teams in which humans and robots can work together to find as many survivors as possible without risking the lives of emergency personnel [4].", "Human-robot teams have been utilized to assist in human recovery after disaster-related events, such as at the World Trade Center bombing, La Conchita mudslide, and Hurricane Charley [23].", "Search and rescue missions that are led by human-robot teams often focus on covering as much ground as possible in an attempt to find the largest number of survivors, ensuring that emergency personnel can make informed decisions based on the most relevant and available information about the event [4].", "In the context of search and rescue, a mission outcome can involve directing the robot enter a hard-to-reach environment and create the next task set to explore additional areas to better understand the environmental layout to increase the success rate of finding survivors.", "Human-robot teams for search and rescue can help to protect and coordinate rescue personnel to reduce the need to enter dangerous and hazardous zones, minimising the risk of physical harm to people [4].", "Robots can also provide real-time data about the scenario to help emergency personnel to get critical information from hard-to-reach places, such as to take images of the location to send back to operators for their review and action [4], [9].", "Robot teammates to support tasks in disaster response has been linked to better field performance, and has helped to assist operators to complete their mission objective [24].", "To date, human-robot teams have often been tested used detailed simulations which often involve elements of search and rescue tasks [25], [26], [27].", "Human-robot teams in simulated tasks were reported to have located a higher number of victims, covered a larger area [26], total scene exploration time and task performance compared to semi-autonomous and teleoperation modes [27].", "There is a continued need to further explore the utility, improvement and deployment challenges related to human-robot teams in search and rescue tasks, including with real world testing outside of simulation-based tasks." 
], [ "DARPA Subterranean Challenge", "Robotics challenges have been proposed and created to help accelerate the testing and development of human-robot teams, such as the DARPA Subterranean (SubT) Challenge.", "Our experimental protocol aims to recreate conditions encountered during the recently-concluded SubT Challenge, and carried out by one of the top teams from that event, Team CSIRO Data61 (a collaboration between CSIRO Data61, Emesent and Georgia Tech).", "The overall goal of the SubT challenge was to identify and locate the most artifacts to within 5 m in a set of unknown courses, each of which presented a variety of different obstacles and challenge elements to overcome.", "These challenges were designed to push teams to consider heterogeneous teams of robots with a strong emphasis on sensing, autonomy, information exchange, and hardware robustness.", "Competing teams were required to build a human-robot team solution that would involve operators supervising robots to navigate tunnels with vertical shafts, tunnels with varying levels and narrow passages, expansive cave networks with diverse structures and caverns, as well as urban areas with expansive and challenging layouts.", "Challenge artifacts included a set of objects that required different detection modalities, such as a cube (visual and Bluetooth signatures), helmet, rope, fire extinguisher, drill, vent, gas (CO$_2$ concentration), backpack, cell phone (visual, Bluetooth and WiFi signature), and survivor (mannequin with a visual and thermal signature).", "All teams had approximately 12 months to develop, integrate and test their solutions for the final stage.", "The Team CSIRO Data61 solution (detailed in  [6]) provided a range of supervision options to the operator, including teleoperation, waypoint navigation, directed autonomy and full autonomy.", "The most effective mode of operation during the SubT challenge was found to be directed autonomy, where the system operates autonomously utilising a multi-agent task allocation system.", "In this mode, the operator can influence the autonomous operation, either by directly assigning tasks, or by applying geometric prioritisation regions, either within a particular spatial region or for paths that cross through a region (where the latter is particularly effective for prioritising exploration of spaces with a priori unknown extents).", "The user interface concepts are described in more detail in [28].", "The majority of a robots' time is spent performing autonomous exploration tasks.", "Other autonomous tasks include synchronising data (i.e., navigating towards the base until all data is uploaded to and downloaded from the base), and returning on low battery.", "Robots exchange data with each other, such that one robot can simultaneously execute another robot's synchronisation task as well as its own.", "Mapping data are exchanged and solved independently on each robot, such that any robot can continue an unfinished exploration task (e.g., a branch of a junction that was not followed) of any other robot.", "Tasks that can be manually generated include “go to” and “drop communications node”.", "As with autonomous tasks, robots collaboratively bid on these tasks to determine the robot best-positioned and equipped to execute the task.", "Droppable communications nodes extend the communications range deeper into the subterranean environment.", "The challenge was broken up into two phases: the circuit phase and the final phase.", "In the circuit phase, participating teams competed in 
three preliminary events that were approximately six months apart: tunnel systems, urban underground and natural cave networks.", "The mission time was limited to 60 mins in the Circuit Events and the Final Prize Run of the Final Event, and 30 mins in the Preliminary Rounds of Final Event.", "The cave circuit event was cancelled due to COVID-19, and Team CSIRO Data61 staged their own event in natural caves in Chillagoe, Queensland; this data is utilised in Experiment 1 below.", "The Final Prize Round was held at the Louisville Mega Cavern in Kentucky in September 2021.", "In each of the runs, there were limits on the number of artifact reports that could be submitted to the scoring server to discourage spurious reporting, obliging the teams to perform a thorough review of the detections before submission.", "In the circuit events, a maximum of 40 artifact reports were allowed with 20 artifacts hidden in the courses.", "At the final prize run, only a maximum of 45 artifact reports were allowed with 40 artifacts hidden in the course, creating a significant incentive to avoid spurious detections.", "Each artifact report sent to the DARPA scoring server by the operator at the base station consisted of the artifact class, and the location relative to the reference frame provided by fiducial markers on the starting gate of the course.", "If the artifact class is correct and the reported location has a Euclidean distance of less than 5 m from the ground truth location of the artifact, one point is scored.", "The operator will be immediately notified whether the report scored a point or not.", "Team CSIRO Data61's base station operator interface evolved over the three year competition with various views and screens for operating the robot team as well as for reviewing and verifying the artifact reports before sending them to the DARPA scoring server.", "To avoid spurious artifact detections, all robot detected artefacts were reviewed by the human operator prior to submission.", "Each robot had a perception system consisting of a machine-learning based object detection and a lidar based SLAM solution to detect, classify and localise potential artifacts.", "Detection reports bundle the classification and localisation data together to the central operator station for review by the human operator.", "The operator reviews the detections as a list on the GUI, and can review both the classification image as well as localisation information marked on an updated map.", "The operator is tasked with reviewing the detection classification and localisation data to ensure they corroborate each other.", "If so, the operator then sends the reviewed detection report for scoring.", "An external scoring server receives scoring reports from the operator and will pass or fail the report resulting in a potential score increase.", "The automated detectors had to accept high false alarm rates in order to achieve adequate detection performance, due to the generalisation error of the detector, operating in the a priori unseen environment.", "However, flooding the base station with candidate artifact reports from multiple robots would overwhelm the operator reducing their effectiveness in reviewing and sending verified reports to the scoring server.", "In the final event, Team CSIRO Data61 used automated spatial tracking of detected artifacts onboard the robots to significantly reduce the number of candidate artifact reports sent to the base station while maintaining a low detection threshold/high false alarm rate.", "After 
achieving the equal top score at the Final Event, Team CSIRO Data61 came second on the tie-breaker criterion of time of last detection.", "Further detailed information about the SubT Challenge rules can be found in [29].", "Figure: Team CSIRO Data61 Operator Interface.", "Top: GUI Commands.", "Bottom: Object DetectionThere is an excellent opportunity to further explore the role and contribution that operators provide to human-robot team performance, including when the task is related to a search and rescue mission.", "Such an experimental investigation would enable the close exploration of how different types of team performance metrics are achieved with and without the inclusion of a human operator in the mission, such as number of items found, unique distance travelled, and the total number of safety-related events.", "To inform a controlled experiment, an initial analysis was retrospectively conducted on a pre-collected human-robot team run dataset to investigate the type, level of operator involvement and its impact on performance across four SubT Challenge runs that took place in a subterranean cave environment.", "No set hypotheses were proposed.", "The intention of this initial analysis was to explore the following research questions: What type of operator intervention is often used?", "How often does an operator intervene, and for what purpose?", "What are the outcomes achieved by an intervention?", "How does an intervention influence the mission score?", "The data analysed was Team CSIRO Data61's staging of a cave circuit event, in lieu of the formal DARPA event which was cancelled due to COVID-19.", "Details of the platforms and systems can be found in [6]." ], [ "Dataset Analysis and Results", "The data consisted of team mission logs capturing robot state, mapping data, object detections and operator commands.", "This data was then replayed and processed offline to enable analysis.", "Analysis was conducted to identify operator involvement points through four course runs, which will be referred to as Alpha 1, Alpha 2, Beta 1 and Beta 2 (See Figure REF ).", "Initial results found there were four time-frames in which the operator intervened to control the robot team to explore new areas.", "A total of 16 out of 44 (36.36%) artifacts were detected and reported correctly in the run: 14 (87.5%) were detected by ground robots and 2 (12.5%) were identified by the operator upon inspection of the map.", "The robot team travelled a total of 2,425 m. 
Total intervention time via teleoperation was 8.65 mins across all runs (3.6%) with the intervention task to command the robots to explore other map areas.", "Ground robots were often more active during the first half of the run, which left the operator with sufficient time to go through the automated detection list and make the artifact reports, including to filter out most of the false alarm detection reports made by the robot team.", "When operators did intervene, their role was often to redirect the robots to new areas.", "Operator intervention in Alpha 1 and Beta 2 therefore resulted in beneficial mission-related outcomes for distance covered and artifact scoring.", "For instance, operator direction to explore new regions resulted in 2 additional artifact detections.", "All 4 runs had equivalent scores (4 points) with the Alpha 2 run having the lowest active time and travel distance to achieve the score.", "Operators had limited intervention (2 mins on average), although involvement did contribute to improving mission-related goals, such as greater distance travelled and artifacts found.", "This analysis found that operators had very little involvement in directing the robots, suggesting that full robot autonomy for this task may be possible, and motivating a more detailed investigation into the role of the human operator.", "Figure: Course Map for four runs: Alpha 1, Alpha 2, Beta 1, Beta 2.", "The blue, red, green and yellow lines represent the trajectories of the robots.", "Yellow lines in Beta course represent drone trajectories; all other lines represent UGV trajectories." ], [ "Experiment 2: Testing Robot Autonomy Compared to Operator Involvement for Performance Metrics in Twelve Real-World Experimental Course Set Runs based on DARPA Subterranean Challenge (30 min runs)", "Given the initial analysis from Experiment 1, a controlled experiment was designed to enable a clear comparison between human-robot team operation (i.e.", "operators working with a fully autonomous robot team) and full robot autonomy without human intervention.", "Experiment 2 was designed as a re-adaption of the SubT Challenge Final Event.", "In this experiment, we selected key team performance and mission-related metrics to identify the role and perceptions of the operator and evaluate human-robot team performance on human-machine team metrics [30], such as total map coverage, number of found artifacts, and safety-related events, showing how human-robot teams compare to fully autonomous operation.", "Experiment 2 aimed to investigate performance-related impact as well as operator perceptions under two conditions: operators could directly control a robot team with state-of-the-art robot autonomy (Human-Robot Team Condition, CH); this was compared to observing autonomous mission execution by the robots without operator input (Robot Autonomy Condition, CA).", "Note that, in the Robot Autonomy Condition, the operator was still responsible for reviewing and submitting the artefact reports.", "It is hypothesized that the Human-Robot Team mode (Condition H, CH) will outperform the Autonomous Exploration mode (Condition A, CA) on the following metric list: Higher final mission score for total number of found artifacts Greater distance and total course map coverage Fewer total number of safety-related events Faster recovery time from error-related events Higher levels of cognitive load on the operators" ], [ "Human-Robot Team Composition", "A single operator was asked to control a heterogeneous team of ground robots 
to find hidden artifacts in a set course outline, similar to SubT Challenge requirements.", "A total of four robots were available to use in the experiment: two BIA5 All Terrain Robots (ATRs) and two Spot Robots from Boston Dynamics.", "Nearly all runs were conducted with only three robots in each run (two ATRs and one Spot robot) with a single run using four robots for comparison purposes.", "The robot platforms can be seen in Figure REF .", "Due to complex considerations with communications node placement, these tasks are not generated automatically, and require operator initiation.", "For this reason, during this experiment, nodes were pre-positioned.", "Objects are scored if they are correctly identified and located to within an accuracy of 5 m. This process is equivalent to that used at the SubT Challenge.", "In a small number of cases, the objects were not correctly positioned in the map in the scoring server; in these cases, failed scores were manually analysed and corrected in post-analysis.", "Figure: BIA5 All Terrain Robot (ATR) and Spot Robot in the Starting Gate Prior to a Course Run" ], [ "Experimental Conditions", "This experiment was conducted using a between-group research design for two conditions: Human-Robot Teams (CH) and Autonomous Exploration (CA):" ], [ "Autonomous Exploration Condition (CA)", "This condition did not have any direct operator supervision of robot actions.", "To commence the run, the operator instructed the robots to a common starting point prior to being launched into autonomous exploration.", "After this event, the operator was not permitted to intervene; robots were followed by safety pilots, who would intervene only if the robot was about to encounter a high-risk condition or damage-related event.", "All robot autonomy choices were allowed to go ahead, such as if the robot was stuck, disorientated in its current location, or entering a segment of terrain which the operator knew the robot would struggle to traverse.", "Operators were asked to confirm artifact detections provided by robots.", "As discussed in Section REF , the generalisation error operating in an unknown environment necessitated operation with a high false alarm rate, and hence operator confirmation of autonomous detections was essential.", "In the Autonomous Exploration Condition, operators were asked to confirm artifact detections that were correctly identified and located.", "The operator was not permitted to correct errors in identity or location even if the images provided information that would allow that to be performed.", "This allowed for a fair comparison between conditions, to focus on operator control and robot autonomy.", "Time to human intervention was recorded for both conditions, including the time the robot first detected the artifact and an operator reviewed the detected image, as well as the time between when the operator reviewed the artifact report and scored it.", "However, it should be qualified by the fact that optimising this time was not part of the operational doctrine.", "Screen and audio recordings that were taken of the autonomous exploration runs were manually reviewed by an independent third party who did not contribute as an operator in the experiment to ensure that operators were scoring fairly across both conditions." 
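To make the detection-review and review-score timing metrics recorded in both conditions more concrete, the following is a minimal sketch, in Python, of how such latencies could be computed from timestamped events; the event names, log structure and timestamps are entirely hypothetical and do not reflect the team's actual logging pipeline.

```python
# Hypothetical per-artifact event timestamps (HH:MM:SS within a mission).
# Field names are illustrative only, not the actual log schema.
events = [
    {"artifact": "A", "robot_detect": "00:02:24", "operator_review": "00:04:07", "report_scored": "00:05:48"},
    {"artifact": "B", "robot_detect": "00:06:10", "operator_review": "00:07:18", "report_scored": "00:07:49"},
]

def seconds(t: str) -> int:
    """Convert an HH:MM:SS string into seconds from mission start."""
    h, m, s = (int(x) for x in t.split(":"))
    return h * 3600 + m * 60 + s

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

# Latency between the robot's first detection and the operator reviewing the image.
detect_to_review = [seconds(e["operator_review"]) - seconds(e["robot_detect"]) for e in events]
# Latency between the operator's review and the report being submitted/scored.
review_to_score = [seconds(e["report_scored"]) - seconds(e["operator_review"]) for e in events]

print(f"mean detect->review: {mean(detect_to_review):.0f} s")
print(f"mean review->score:  {mean(review_to_score):.0f} s")
```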
], [ "Human-Robot Team Condition (CH)", "This condition allowed the operator to have full control over the robot team if they chose to intervene at any time, replicating the operator involvement allowed in the SubT Challenge.", "In addition to the functionalities described in the Autonomous Exploration Condition (CA), intervention actions included the ability to teleoperate the robot to specific locations, to modify the robots' waypoint or goal points, and to change the robots' intended exploration area, direction or task." ], [ "Course Preparation and Runs", "Each course run went for a total of 30 mins at the CSIRO testing facility site in Brisbane, Australia.", "Twelve full course runs (also known as missions) were conducted over three sequential days.", "The testing schedule was conducted over three days to prevent hardware failures from other robot use influencing the experimental results, and to minimise software updates or changes influencing robot performance.", "Each operator was assigned a morning or afternoon session with a Human-Robot Team Condition (CH) run conducted first, followed by the Autonomous Exploration Condition (CA) run.", "The course was altered each day using temporary fencing, safety barriers and barrels to create more dynamic tunnels and pathways to explore, as well as dead ends that may or may not have an artifact (See Figure REF and REF ).", "Operators were not permitted to review or walk through the course before each trial.", "Each run contained a total of 16 artifacts with the artifact positions changing for each course variant.", "An automated system was utilised to keep track of the run score (i.e., the number of objects correctly detected within 5 m of their ground truth location).", "First, a map is automatically generated by navigating the course with the robots, and aligning that map to the reference frame established by the global origin at the “starting gate”.", "Subsequently, on each day as artifacts are placed, the artifacts are located in a prior map based on photographs (e.g., Figure REF ) and entered into the automated scoring system.", "Since deployment of communication nodes was not automated, nodes were pre-positioned within the course to enable reasonable communications within the course bounds." 
], [ "Course Layouts", "Course 1 consisted of four missions (mission 1-4).", "Course 1 in Figure REF was used for Missions 1-4.", "Mission 1 and 2 had three robots (2 x ATR and 1 x Spot) whereas Mission 3 and 4 used four robots (2 x ATR and 2 x Spot).", "There was a total of 16 artifacts: 4 helmets (A, H, I, N), 4 ropes (B, G, J, O), 3 backpacks (E, F, M), 2 drills (K, M), 1 fire extinguisher (C), 1 survivor (D) and 1 vent (P).", "Course 2 in Figure REF was used for Missions 5-8.", "There was a total of 16 artifacts: 4 ropes (A, F, G, H), 4 helmets (B, I, M, O), 3 backpacks (D, K, N), 2 drills (E, P), 1 vent (C), 1 fire extinguisher (L) and 1 survivor (J).", "Course 3 in Figure REF was used for Missions 9-12.", "There was a total of 16 artifacts: 4 ropes (C, I, J, P), 3 backpacks (A, E, M), 2 drills (D, N), 4 helmets (F, G, H, L), 1 survivor (K), 1 vent (B), and 1 fire extinguisher (O).", "Figure: Course 1 Setup and Artifact LocationsFigure: Course 2 Setup and Artifact LocationsFigure: Course 3 Setup and Artifact LocationsFigure: Terrain Challenges: Rocks, Ramps, Darkness, Loose Ground, Inclines, Declines, Stairs, Uneven GroundFigure: Human-Made Challenges: Roped Areas, Barrels, Barriers, Closed Doors, Building Components" ], [ "Procedure", "All robots were thoroughly checked by the site team to ensure that each robot had sufficient power with no malfunction or errors before each run.", "All personnel not involved in the experiment were also asked to vacate the course.", "The operator was based in a demountable building in the middle of the course for the duration of the experiment.", "Inside the command centre, the operator was left to conduct the run with an experimenter present as a quiet observer for any assistance or information requests during the testing session.", "Each run was timed to have a duration of 30 mins with clear start and stop time markers.", "Operators were given time markers of how long was left, including for 20, 15, 10 and 5 mins.", "Time markers were given to operators if they requested additional information on the remaining time available to complete the run.", "All team members (operators, experimenters and safety observers) had radio communication set up for sharing critical course and run information, including potential hazards or challenges.", "Operators were permitted to communicate with the experimenter to ask for experiment-related information, such as remaining time.", "Operator utterances were also captured via audio recording, given their importance to describing the operators style, decision-making and planning [24], but operator utterance data was not analysed for this paper.", "Operators from different runs were not permitted to discuss course runs with each other and were encouraged to have minimal contact between runs, such as to remain off-site when they were not involved in the run to help minimise cross-contamination effects.", "Once the experiment was ready to go, all robots were taken to the starting zone to be activated in the open-space area.", "In each run, the operator commenced by locating the gate in order to re-establish the reference coordinate system, in which object locations are reported.", "Safety observers were allocated to each robot to follow at a suitable distance to observe, but not interfere, with the robot's current task.", "Observers were permitted to intervene via emergency stop (eStop) if the robot was going to damage itself, a building, object or person.", "Operator screens were recorded, including over-the-shoulder 
recordings for operator movements, communication and screen interaction.", "Operators wore a micro recorder with a lavalier microphone to comment on or narrate their current operation methods.", "Operators were instructed to maintain their natural communication method as they normally would during testing and were allowed to narrate of their actions and methods during their run if they wished to do so.", "Once the run was complete, communication was disseminated to all safety operators to monitor the robots as they returned to base.", "Once operators had finished all of their duties to relocate the robots back to the start position, operators were then prepared to start the qualitative interview segment.", "After the run, operators were asked to complete an assessment of task load for operator-assisted runs and autonomous exploration runs.", "Operators were then given the chance to request any further information or ask questions.", "See Figure REF for an example of the mission run course outline.", "Figure: Example SLAM Map from an Autonomous Exploration run - Mission 12" ], [ "Qualitative Interview", "Operators completed a 10 min semi-structured interview after each run.", "Interview questions were related to the course run and chosen to better understand operator performance, the level of involvement from the operator, and to collect more information on unexpected or error-related events during the course run.", "Examples included, but were not limited to, the following questions: Can you tell me how this course run went?", "How would you describe this course set up?", "How would you describe your performance as an operator?", "What would you have done differently as an operator?", "Did anything unexpected or challenging occur during the run?", "Is there anything else you would like to comment on for this run?", "Qualitative interviews were transcribed as strict verbatim pacific transcription via third party professional transcription services with ISO 9001 (Quality Management Systems) certification.", "All qualitative responses were stored, collated and analysed with NVivo Version 22.", "Manifest content analysis was conducted using a standardised qualitative method [31], [32], [33].", "In initial data preparation, interview responses were reviewed to classify emergent patterns and themes using initial notes and prospective data codes.", "In the organisation phase, an inductive approach was taken to create open code categories under relevant headings to prepare for subsequent analysis.", "In the reporting phase, clustered codes were assigned into a final set of categories and checked for accuracy and category allocation [31], [32], [33].", "All codes were checked, confirmed or reallocated to a more suitable category by the lead author to confirm the final category set.", "For clarity across the qualitative data analysis, select examples of robot terminology has been amended for continuity between the two operators, such as robot names and technical task names.", "In this section, composite and individual mission data will be presented to examine the differences between Autonomous Exploration (CA) and Human-Robot Team (CH) on human-robot team performance." 
], [ "Operator Characteristics", "Two expert operators were invited to participate.", "Selected operators were very similar in their level of training and familiarity with the SubT Challenge.", "Operator (O1 and O2) were males between 35-40 years of age with tertiary-level education in mechatronics.", "Operator 1 and 2 had both been the lead operators for approximately 50 runs and the non-lead operator for another 50 runs to make a total of 100 human-robot team runs and 100 hrs of experience.", "Both operators had a total of 24-30 months of experience with the system with both reporting a score of 9/10 for DARPA challenge knowledge, 9/10 for ATR operation and 9/10 for Spot operation.", "Figure: Operator Ranking of Course Difficulty for the Autonomous Exploration (CA) and Human-Robot Team (CH) Condition | Course Run Summary - NASA-TLX Scores.", "*p=<.05p =<.05" ], [ "Operator Task Ratings, Task Load and Intervention Level", "On average, operators were involved in teleoperating the robot team for a total of 2 mins and 3 s (7.6% of total mission time) in the Human-Robot Team missions (Range, 0:00 to 03:05).", "Except for Course 2 for Operator O2, both operators rated their perceived difficulty to complete the course higher for the Autonomous Exploration (CA) condition compared to the Human-Robot Team (CH) condition (See Figure REF ).", "Across 12 missions, both operators rated the mental demand, physical demand, temporal demand, and task effort to be significantly higher for operating the run during the Human-Robot Team (CH) condition compared to the Autonomous Exploration (CA) condition (See Figure REF ).", "There was no significant score difference in their perceived performance outcome between the Human-Robot Team (CH) condition compared to the Autonomous Exploration (CA) condition.", "Figure: Examples: Artifact Detection and Location" ], [ "Final Mission Score", "The Human-Robot Team (CH) condition scored 90 artifacts (94%), and the Robot Autonomy (CA) condition scored 81 artifacts (84%) out of a total of 96 artifacts (+10.52%).", "In Mission 8, two artifacts (2H and 2I) were classified outside of the 30 min run, and were therefore not scored.", "In reviewing matched mission pairs, there were four instances in which both conditions missed the same artifacts, three instances in which the robot missed an artifact that the human-robot team found, and one instance in which the human-robot team missed an artifact that the robot team found.", "Examples of detection images and their location can be seen in Fig REF ." 
], [ "Course Modes and Times", "For course modes, there were four main categories reported for the experiment: eStop, teleoperation (teleop), directed autonomy mode led by the operators actions (directed) and autonomy mode without any operator directions or input (autonomy).", "Figure   and   demonstrates the mode types for each run with odd numbers representing Human-Robot Team runs, and even numbers representing Robot Autonomy runs.", "In the Human-Robot Team Condition, operators often used directed autonomy tasks during the mission run (70% of mission time) compared to fully autonomous tasks (12% of mission time) and teleoperation time (1.25%).", "Figure: Robot Autonomy Mode for Mission 1 to 6Figure: Robot Autonomy Mode for Mission 7 to 12The Human-Robot Team condition was faster on average to obtain the first detection (2:24 min) compared to the robot autonomy team (3:02 min).", "The Human-Robot Team was slower in the average time between the robot first detecting the artifact and an operator reviewing the detected image (1:43 min, CH) compared to the Robot Autonomy condition (1:08 min, CA).", "The Human-Robot Team was slower in the average time between reviewing the artifact report and scoring it (1:41 min, CH) compared to Robot Autonomy condition (0:31 min, CA) when using list wise deletion.", "In Mission 1 and 2, the Human-Robot Team was 11:30 mins faster to achieve 15 artifacts compared to the Robot Autonomy team with the same artifact total.", "However, in Mission 11 and 12 the Robot Autonomy team was 4:01 mins faster to achieve 16 artifacts compared to the Human-Robot Team condition.", "Figure: DARPA Scoring Results for Mission 1 to 2Figure: DARPA Scoring Results for Mission 3 to 4Figure: DARPA Scoring Results for Mission 5 to 6Figure: DARPA Scoring Results for Mission 7 to 8Figure: DARPA Scoring Results for Mission 9 to 10Figure: DARPA Scoring Results for Mission 11 to 12Figure: Totals for eStop Function for Safety or Error-Related Event.", "Asterisks mark exceptions as recorded in the text." 
], [ "Total Course Map Coverage and Total Distance", "Total distance and course map coverage was calculated using a composite total of meters squared covered by all robots for the full 30 min runs.", "Total distance travelled was the full distance based on a composite score of each robot's distance travelled during each mission run, and total course map coverage calculated as the unique course coverage found by the full robot team.", "Human-robot team condition covered more unique ground (+1777.00 m$^{2}$ or +5830.052 sqft, +10.56%) compared to robot autonomy (total coverage: 17713.75 m$^{2}$ , CH; 15936.75 m$^{2}$ , CA).", "The human-robot team achieved a higher total distance covered using total trajectory of all of the robots (+1130.38 m or +3708.596 ft, +12.71%) compared to robot autonomy condition (total distance: 9454.78 m, CH; 8324.40 m, CA).", "As seen in Figure REF , Mission 1 and 2 human operators had higher scores in both total coverage (left graph) and distance traveled (right graph), with Mission 1 and 2 seeing a converging point on total unique coverage but human operators having traversed even greater ground to achieve this goal.", "A similar pattern was seen in Figure REF with robot autonomy course coverage at the 13 min mark higher than the human-robot team coverage.", "Mission 1 and 2 used three robots whereas Mission 3 and 4 used four robots, but there were limited differences between the two matched-pairs for unique coverage, but higher scores for total distance covered.", "As seen in Figure REF , the robot autonomy team was stuck in an area in which new ground was not easily covered by the robot, which was instead overcome by the equivalent human-robot team run through operator intervention.", "This was caused by addition of an obstacle which created a narrow constriction, making access to the section of the course beyond that point challenging.", "As seen in Figure REF where the same course was used by the robot autonomy team, the final outcome was roughly similar in coverage and distance human-robot and autonomy runs.", "As seen in Figure REF , in Missions 9 to 12 all teams reached equivalent unique coverage with greater distance covered on one run for the autonomy condition, and another on the human condition.", "In both pairs, one team covered more ground than the other, but ended with equivalent coverage.", "Individual robot performance for total course map coverage and total distance can be seen in the Appendix.", "Figure: Course Performance for Mission 3 (Human-Robot Team) and Mission 4 (Robot Autonomy)Figure: Course Performance for Mission 7 (Human-Robot Team) and Mission 8 (Robot Autonomy)Figure: Course Performance for Mission 11 (Human-Robot Team) and Mission 12 (Robot Autonomy)" ], [ "Safety and Error-Related Events", "Safety and error-related events were measured based on the total number of times a robot was eStopped during the mission (See Fig  .", "Events related to eStop use included but were not limited to robots falling over or breaking down, robots coming into close proximity to people or buildings, fatal errors caused by the robots being stuck in difficult situations, and parts of the robot being trapped in certain areas.", "E-stop was applied under the following conditions: 1) until the first use of the robot in each run, 2) if a safety-related event occurred, in which case the robot remained stopped for the remainder of the run, 3) if a course-related event occurred, in which case the course issue was remediated, and then the robot was permitted 
to continue.", "For Mission 2, the robot was caught on equipment (R1) and a fall occurred that while the eStop was not applied, the incident was counted as a safety-related event (as denoted by the asterisk for R2/Mission 2 in Fig REF ).", "In Mission 3, the robot attempted to climb a steep ramp and the eStop was applied to preemptively prevent fall damage.", "In Mission 6, the robot pulled on temporary fencing, and in Mission 7, the robot fell into a ditch.", "One course-related event occurred with robot R5 in Mission 8 where the robot entered the staging area, which was intended to be closed off.", "The robot was temporarily eStopped, the course error remediated, and the robot was permitted to continue.", "As denoted by the asterisk in Fig REF , this was not counted as a safety event.", "Safety-related events included risk of damage to static equipment, or robots falling over.", "According to Figure REF , the Human-Robot Team condition had a lower eStop use rate (twice) compared to the Robot Autonomy condition (three times).", "These represent one stop per 563 min and 371 min of robot time, respectively, a 34% reduction for the Human-Robot Team condition." ], [ "Results - Qualitative Data", "The following section describes the operator interviews and analysis of their experience and expectations of the missions run, including discussion around notable events, challenges and key decision-making points.", "Six major themes arose from operator interviews: 1) perceived and actual need for human intervention, 2) specific scenarios that trigger operator intervention, 3) robot autonomy compared to operator choices, 4) robot failure, dangerous situations and events of error-related recovery, 5) operator cognitive load in response to challenging events, and 6) operation with more agents and team co-ordination.", "In summary, reasons for intervention included to speed up mission objectives, to cover more ground in an optimal way, to better control robots through rough terrain areas, to use higher-order knowledge to prioritise high-yield areas, and to maintain tighter coordination.", "After observing the autonomous missions, operators reported an increase in perceived robot competency, trust in robot teams to contribute to mission outcomes, and were less likely to intervene in future missions.", "However, operators also reported that autonomous robot behaviour was not always clear and understandable, even if autonomous robot teams achieved equivalent mission performance scores.", "Subsections and quotes will be presented in bold face below to highlight a brief summary of the theme and key takeaways." 
], [ "Perceived and Actual Need for Human Intervention", "Operators reported their attitude shift over time from close management to learning to trust robot autonomy.", "This process was accelerated when operators watched autonomous runs on similar courses they had completed themselves.", "As described by one operator, “I think this is a really good lesson, because the autonomy works really well, so the autonomy does split the agents up really well.", "It really doesn't take a lot of intervening by me” [Mission 4, O1, CA] which helped to grow “confidence in the system” [Mission 11, A, CH].", "This method also produced a change in perspective on their future operation style: “I think it was a really good showcase today of what the autonomy can do, and even to the point where even the intervention I did have would have saved small amounts of time compared to what they did” and that “it just reinforces just leave them be, unless there's a clear-cut reason why you should intervene” [Mission 8, O1, CA].", "This resulted in greater appreciation for robot autonomy, and learning to use the system at the expense of longer completion times: “There were times when I could have - what I would have done would have saved time, for sure, but eventually, they got around to it before the end of the run, which was enough, but yeah.", "I definitely could have saved time, but they did make logical decisions” [Mission 8, O1, CA] and that “trusting in the autonomy but where you can, try and send multiple platforms” [Mission 12, O1, CA].", "Furthermore, while robot choices did not always appear to be transparent or logical from an external viewpoint, robot outcomes were at times, still beneficial: “I think I predicted incorrectly.", "I thought they chose to go in certain paths that would have led them to not having time to do the coverage that they did, but they very quickly did that coverage and surprised me” ... 
“Sometimes, it doesn't, but it was very impressive to see it work in such a tight way today.” [Mission 8, O1, CA].", "“The big lesson from this for me was that the autonomous runs are actually very impressive, particularly on this course.", "The learning lesson, for me is to let the robots do their thing.", "The interventions that are required are really just if the robot’s doing something adverse, and that should be pretty obvious.", "Keeping an eye on it when it’s going through narrow gaps and taking over when necessary, but otherwise trying to be hands-off for this course” [Mission 11, O1, CH] “It reinforces my confidence with the robots and the more I run with them, the more I see them run, the more confident I am and for a new course, you know, always going in with the mindset that you trust the autonomy before trying to take over is a very important thing” [Mission 12, O1, CA] One operator reflected on a previous time in which intervention caused more harm than good, showing how operator involvement can instead be detrimental.", "One operator comments on a separate run that occurred outside of the experimental mission set in a cave scenario: “There were moments that we [the operators] didn't have confidence in the autonomy.", "In the process of taking over from autonomy to manually intervened, we caused the robots to roll over and damage hardware, so the autonomy is improved to a level where the confidence in that autonomy is something that operators should trust” [Cave Run, O1, CA].", "When operators were asked about if they would change anything about their operation style, one operator said “I think I might rely on the autonomy a bit more” and “I think the autonomous runs have proven to me that they’re more capable than I gave them credit for.", "Just probably not as efficient as when I can jump in” [Mission 10, A, CA], showing a growing relationship to robot autonomy trust to instead focus on other critical operator-related tasks.", "The same operator continued with “there’s been some cases in the last three runs where I was pleasantly surprised that the robots did as much as they did” [Mission 10, A, CA].", "As one operator reported, “The very first time, it's a big shock, and how you operate on that course is very different to how you operate the last time” [Mission 11, O1, CH].", "The same operator also reported that passively observing robot autonomy in action would be beneficial for inducting new operators to the system and to help build trust for new team members: “It's probably a good one for showing an operator what is possible if you just leave them do their thing, so that's an important thing.", "This is something we have struggled with over time, [Operator] and I, where you just want to take over because you don't trust them.", "But we've been able to build that trust over time, and the capabilities of the autonomy has advanced significantly in the last six to 12 months, which is amazing.", "So it's really good see it do this type of course, and if you were to train a new person, it would be great to let them know and experience what it's like for them to just do it” [Mission 8, O1, CA] and “If someone new came in, probably the first thing that they should do is just be really ingrained and confident in what the robots can do” [Mission 12, O1, CA]." 
], [ "Triggers for Operator Intervention", "Operators often reported different events and situations in which their involvement was considered to be advantageous or necessary to improve team performance or to meet mission objectives.", "Operator-driven interventions often involved adjusting the robots' movement to reassign exploration points “to guide the robot down a set path” [Mission 7, O1, CH], to help the robots to cover more ground to increase the opportunity to discover more artifacts, or to better overcome challenging terrain.", "When operators did intervene, operators often stated that their involvement was minimal.", "For example, “I don’t think I had any issues with what the robots were doing” [Mission 9, A, CH] and “I gave it a couple of hints where to go, but I don't think they were necessarily necessary” [Mission 1, A, CH].", "Operator intervention was further described below in one mission run: “The robots were autonomous most of the time, very little teleoping, or waypoints or other manual interventions ... my involvement was very minimal” [Mission 11, O1, CH].", "Operators were often content for the robot team to continue their objective: “I think most of the robots were good in autonomy, and I helped out where I could.", "As far as I know” [Mission 9, A, CH].", "Operators were also reporting that they would be comfortable to leave the robot team alone when the team was performing well.", "For example, “I wouldn't have had done anything.", "They seemed to be going perfectly” [Mission 2, A, CA].", "At times, operators reported an intervention need to take a more direct approach to rapidly address priority areas.", "For example, “I don’t think I needed probably any involvement to be honest, but I think I helped speed up some sections” [Mission 9, A, CH].", "Involvement was also considered more necessary for events or scenarios that were time-sensitive to complete.", "For example, “I did guide them [the robots] down the back of S block, and that was just a case of saving time” towards the end of a mission run.", "There were many other teleoperation events reported to save time and redirect to the mission outcome with another mentioned below: “There were a couple of cases where robots were being a bit slow, because they were caught up in nearby obstacles, I would give them a quick teleop touch out of the way.", "Other than the teleoperation and the prioritisation regions, I didn’t have a lot of input in the robot navigation” [Mission 9, A, CH].", "Furthermore, “they [the robots] would need to burn through all of their exploration points before they'd potentially cover that area” [Mission 3, O1, CH].", "Another example is described below: “Planning that has to happen for the robot to run autonomously takes time.", "It takes time to plan.", "The max velocity off the robot is linked to how far away it is from its path, and I can bypass both of those things using my human brain and go max velocity, which is something that the autonomy can't do easily, or at this level that we have it” [Mission 7, O1, CH].", "Operator intervention was also perceived to be required to help robots to navigate challenging terrain: “Some of the ATRs [robots] required a bit more hand-holding.", "That might be because the course was slightly more difficult in terms of some of the dead ends that it had and having to turn around and some constrictions that were added” [Mission 5, A, CH] This intervention type was also used in areas with greater constrictions or dead-ends compared to open-spaced areas.", 
"For example, “the difficulty may have been the addition of the playground (See Fig  REF .", "Second image from the right on the top row), but I pretty quickly cleared that out.", "It didn’t seem to be as maze-like as the last one, there seemed to be fewer constrictions” [Mission 9, A, CH].", "Other reasons included to help find artifacts when the robots had not found one yet, or were perceived to be less likely to find it without operator support.", "In an autonomous run, “one artifact that wasn't detected was visible in another frame, so I possibly could have seen that where it wasn't detected by our autonomy” [Mission 4, O1, CA].", "This was further explained below: “There were two objects which I had to MID (a map-informed detection, which means the detection that we actually got didn't seem to be perfectly correct, so I had to click around it to find the object).", "I did that for at least one object, so I was directly involved with one scoring point which I don't think would have been possible without an operator” [Mission 1, A, CH] Operators also believed that they were better positioned to reason about when an area was considered to be complete, and requested actions because of it.", "For example, “There was no obvious areas of the course that I could see that I had missed” [Mission 6, A, CH].", "Other example actions included to improve operation success by taking into consideration other events that might happen during the run, such as “to split the robots up to have redundancy for the other agent that went into the dark by itself” [Mission 4, O1, CA].", "Operators also had different styles relative to their preferred control level over the robot team.", "This included managing time and resource intensity of direct operation through teleoperation at the expense of task awareness, stating that its “not a good idea when you've got multiple robots running around”.", "The last resort was identified using “teleop when the robot can't do something that you need it to do.” [Mission 7, O1, CH].", "Operators also acknowledged that their operation actions changed based on key information and constraints.", "In one instance, teleoperation was justified by the following event: “I could see obviously the Spot [robot] went down and the ATR [robot] wasn't making fast enough progress, and I know the time was running out.", "It's always faster to get on the sticks if I've got good comms, and it's much faster to get the robot to where you want it” [Mission 7, O1, CH].", "When asked about operator decision-making for switching from autonomy to teleoperation, one operator reported “panic” and knowing “that there are things left in the mission” as well as when “it's fairly obvious when a robot can't do certain things” [Mission 7, O1, CH].", "Two examples are provided below: “Although I only spotted it [an open door] at the last second, I did prioritise getting that robot to that location and probably spent the last five minutes trying to get that robot in there and as far along that section as possible.” [Mission 6, A, CH] “It did a reasonably good job of autonomously getting to the waypoint that I set, but at some point, I decided that it wasn't going quick enough, so I took over, grabbed the controller and teleoped it as fast as I could into the side of the branch.", "Unfortunately, wasn't able to complete the entirety of that section of the course, so I probably left some stuff on the table, and that might be where the two objects that I couldn't find were located” [Mission 5, A, CH] Reasons for 
intervention also included the belief that the robot was not capable of completing the task, especially “if there's no other way to do it” [Mission 7, O1, CH].", "Other intervention factors that influenced their need to take control included operator mood.", "For example: “I'm in a bit worse of a mood than I was yesterday, just because of robot troubles.", "I definitely felt the need to or the want to intervene a bit more often.", "Whether that's because the course was more difficult or because I just was in that sort of temperament” [Mission 5, O2, CH] Considering the robot autonomy runs, operators clearly noted specific events or situations in which if they were able to intervene, they would have taken different actions.", "For example, “I could see the two robots were converging on the area, and they got distracted by some nooks and crannies.", "I could have just expedited that” [Mission 8, O1, CA].", "Another example was as follows: “I would say as - if as an operator I could have jumped in, I would have scored two extra points, which would have ended up beating the first run, I believe” [Mission 2, O2, CA].", "A further example is as follows: “There was definitely a couple of cases where I would have intervened but didn't, and whether that helped or not, I'm not fully sure.", "But, for example, I saw the Spot go into the grass area, and as an operator, I would have intervened and grabbed it and moved it away, and it potentially would have stopped it from falling.” [Mission 2, O2, CA] Operators also had the important advantage of foresight, as well as a more rigorous understanding of future events and situations in which each robot could have been used to the best of its mechanical ability to explore course sections.", "“The only thing I would have done with the Spot [robot], as well, is I would have sent it into the other section of the course, because from what I had already seen of the course from the first agent, there seemed to be more areas for the Spot to explore” ... “I would have put Spot into the tunnel rather than in the direction that it did go.” ... 
“because in the tunnel, there's more nooks and crannies and diverging branches and things that I think a Spot [robot] is better at doing.” [Mission 2, O2, CA] “Somewhere towards the end of the run, I realised that there was a third branch, probably a large part of the course that I had missed completely, and so at that point, I tried to get bear out of the tunnel as quickly as possible to get over there” [Mission 5, O2, CH] Operators also reported the importance of information collection to make their next decision and how to best intervene.", "For instance, “when the robots have a motor fault, I don’t really have a way of knowing, I have to wait until I get to that robot and see that it hasn’t moved” [Mission 11, O1, CH].", "In addition, “information forwarded up to me, and more of an alert system, would be beneficial to get to the robots more timely” [Mission 11, O1, CH].", "Operators often acknowledged the importance of knowledge discovery in their operation method, stating that when learning about important information or knowledge about the situation, different choices could have been more effective: “Ideally, I would have spotted that third section of the course earlier, and I would have taken that third robot and put it into that section first” [Mission 5, O2, CH].", "Another example is as follows: “In the tunnel, there is a section leading from the outside the tunnel area back into the tunnel that has a bit of a zig-zag, narrow corridor, and as an operator, I would have just put a waypoint at the end of that, so that it could get through that constriction non-autonomously - well, semi-autonomously, but it needed that little bit of a push from an operator” [Mission 6, O2, CA] “There was an area in the tunnel that was not explored at all during the run and it would have been great if the ATR had sort of figured that out and gone there but it didn’t happen” ... “I definitely would have wrangled the ATR to explore the sections of the course that we missed” [Mission 10, O2, CA] Future operator features were also mentioned, bringing attention to new methods that operators could use to better improve their performance, and to reduce the need to intervene, such as “colourising the point clouds” to create information that would be “very valuable for an operator” to use [Mission 10, O2, CA]." 
], [ "Robot Autonomy Compared to Operator Choices", "Operators used the opportunity to view the robot autonomy runs as a learning experience, and to compare actions and outcomes across both conditions.", "For example, “The ATR [robot] that went up the tunnel was slower, in general.", "I think in the first run, we got to the end of the tunnel earlier than we did last time” [Mission 2, O2, CA].", "In this process, operators often reported varying opinions between their interpretation of robot autonomy choices.", "After some runs, operators reported being impressed with robot autonomy choices and options: “It’s really good to see how the robots split themselves up” [Mission 12, O1, CA].", "In other course runs, robot autonomy appeared to be strongly suited to the course layout, which in some cases, the robot team surprised the supervising operator on its capacity to independently complete the course: “they [the robots] really didn’t waste a lot of time.", "There wasn’t even a whole lot of opportunities where I would have been faster if I directed them.", "They really smashed it out.” [Mission 12, O1, CA].", "In some course runs, operators were also viewing robot autonomy behaviour that they believed could have outperformed their own run: “I think the robots actually covered the course quicker than I did, if I’m being honest.", "They seemed to get the detections before I did”.", "[Mission 12, O1, CA].", "Operators still encountered some surprising outcomes: “there were a few reports that I didn’t report until very close, or at the end of the run, due to being distracted by robots doing cool things” [Mission 12, O1, CA].", "The most evident was that robot autonomy choices were sometimes perceived by the operators to be less efficient and direct in their efforts to reach the mission outcome of finding artifacts.", "For example when robots were searching for items, “some of those robots weren't really all that productive” [Mission 4, O1, CA].", "In some cases, operators reported their acknowledged risk/reward trade-off when they were reflecting on choices about when and how they would have intervened if they were operating the team while observing the robot autonomy course run.", "For example: “It's slightly different to what I would have done, because usually, like I did in the last run, you send in a second robot as a redundancy measure, but one Spot was able to clear that whole tunnel by itself, which was very good.", "But also, if something had have happened, we wouldn't have been able to do anything about it” [Mission 4, O1, CA] “In the first run [Mission 1], Spot explored underneath the landing in the tunnel, whereas in this run [Mission 2] it didn't, because the Spot went in a different direction.", "The ATR that did go into the tunnel went all the way to the end of the tunnel and then didn't have time to come back and look at all the nooks and crannies” [Mission 2, O2, CA] “It’s a lesson for us to try not to touch them where we can and it’s really - as I’ve mentioned over the runs, it’s only if they choose an incorrect path or it’s maybe to get a bit of speed that the operator could intervene and guide it down a path.", "However, in this case today, they did exactly what I would have done” [Mission 12, O1, CA].", "Operators also had extensive experience with the human-robot team setup, which suggests that operators had already come across most scenarios in the past.", "For example, “I feel like I've seen it all at this point.” [Mission 2, O2, CA], “often you do encounter narrow passages 
and you've got to deal with those, but they're not wholeheartedly unexpected” [Mission 7, O1, CH], and “it doesn’t necessarily surprise me that it chose to go in the direction that it did.", "Sometimes, it's a 50/50 call, and it chose the direction that it chose” [Mission 2, O2, CA].", "However, viewing robot behaviour without being able to intervene was reported as “a bit frustrating to see the robots do the wrong thing” [Mission 10, O2, CA].", "This also involved operators reporting that the robot team did not always adequately cover certain areas.", "For example, “The only part that I had to take a second look at, there was a very narrow opening at the barrel area (See Fig  REF ), I just wanted to make sure that I covered it, so I sent a robot back there once” [Mission 11, O1, CH].", "Furthermore, certain course orientations did create some unusual circumstances in which robots behaved in unexpected ways.", "For example: “It's a little bit unexpected that the robots couldn't get past - in the S block constriction (See Fig  REF .", "Bottom right image), the robot did get past the constriction but then didn't continue on.", "That's a little bit surprising that” ... “a task wasn't generated at that point for it to continue” [Mission 6, O2, CH] “I think the other robots had sort of taken up all the big frontiers already and so it was just trying to find all the nooks and crannies and other frontiers and tasks that it could do and didn’t have the awareness to realise that there was a chunk of the course that it should have gone towards.” [Mission 10, O2, CA] Operators also drew attention to inconsistencies, in which the robot autonomy made organisation and exploration decisions that would have been different had the operator been involved: “Not nearly as good as the first run, purely because there were some hard and difficult constrictions to navigate, which really seemed to require operator input.", "Similarly, with the constriction with the chair behind S block, it just needed an operator to teleop it past the chair and a bit further beyond the chair, so it started generating tasks again, or else a couple of waypoints probably would have done the trick as well” [Mission 6, O2, CA] “The only difference would have been just some very high-level guiding.", "There isn't really anything else I would have done to change what they did, so I don't think they made too many errors with regards to where they went” ...
“the only thing I could have done is to direct them at a high level, but eventually they got there anyway” [Mission 8, O1, CA] At times, viewing robot autonomy without being permitted to intervene evoked a notable emotional response from the operator, showing insight into how the operator processed the concept of an autonomous robot team that could score higher without an operator involved at all.", "Over time, operators started to be drawn to the notion that their assistance may not be needed at all, challenging the commonly held idea that operators are used to closely direct and control autonomous robot teams to achieve better mission outcomes, indicating a potential phase shift from “operator as necessary” to “operator as optional”: “[there] seemed to be the right number of robots and they selected the directions to go very logically.", "There weren't any areas that were missed and it was very similar to how I ran, actually” [Mission 12, O1, CA].", "Despite the robots having a strong level of intelligence and autonomy to complete the mission on their own, there were several times in which operators were unable to decipher their intention, leading to limited levels of understanding and explainability of the robots' behaviour.", "For instance: “operation outdoors in general is not exactly the way the autonomy is programmed to operate so it does get stuck sometimes in outdoor environments because it sees gigantic frontiers that it thinks it should do.” ... “So it’d be ideal if it could understand that no, you’re good, you’ve explored this area, go do something else” [Mission 10, O2, CA].", "When operators were asked about ways the robots could behave that would improve their level of use and explainability to the operator, a few suggestions were provided: “better tuning in situational awareness for robots to avoid difficult areas” and “in terms of autonomy, being able to handle outdoor environments with huge frontiers is probably one of them if exploration in outdoor environments is something that we’d want to tackle” [Mission 10, O2, CA].", "In addition, “I think reliability of this system in general, I think I’ve realised is still not there and that does increase the stress of the operator when - especially if the operator’s involved in getting the robots up and running” [Mission 10, O2, CA].", "Furthermore: “It’s very possible that if I’d started with Spot, it could have gone into the playground and it almost certainly would have catastrophically fallen over.", "Luckily, autonomy chose not to do that but that would be something that I would increase my belief in the robot’s capabilities if I knew that a Spot could sense that hey, this area is dangerous, I’m not going to go in there” [Mission 10, O2, CA]."
], [ "Robot Failure, Dangerous Situations and Events of Error-related Recovery", "Across all missions, robot failures and error-related events occurred in both conditions, even when operators were involved.", "More often than not, these events were often low risk/damage, such as the robot ending up in scenario or situation that was not favourable.", "This included the robot being stuck in dead-end corridors or being unable to navigate its way out, which often did not result in critical outcomes.", "In one example, “there was one time when a Spot robot walked on another Spot robot and had to be eStopped for safety reasons and restarted” [Mission 8, O1, CA].", "Operators had some necessary tools to mitigate some low-level errors and failures well ahead of time, and that “it was very good to see the controllers work as they've been designed and getting into those tight spaces” [Mission 8, O1, CA].", "Certain error types also required direct operator involvement to successfully overcome the event with one example presented below: “I think one extra entrance was blocked, and there were a few other little nice things that would have blocked the ATRs [robots], but they managed to push their way through autonomously in one case, and in one case, it was a lot of manual intervention” [Mission 7, O1, CH] In other runs, the robots ended up in more serious or hazardous places.", "For example, “I saw Spot [robot] in dangerous positions” [Mission 10, O2, CA] and in an autonomous robot run, the robot was hooked on an equipment piece [Mission 2, O2, CA] that required the immediate use of the eStop.", "Failures and errors were also attributable to support systems that were in operation during each run.", "One operator described this as follows: “I think the other part of that is the comms situation, which is probably a different side to the story.", "But often, that is a big component as to why you can't go somewhere, why you don't get data back or why your planning doesn't work, if a node doesn't come online” [Mission 3, O1, CH] Further examples are provided below: “I think it got too close to some of the orange netting fencing and may have even started to chew it up into the tracks, at which point the safety operator e-stopped it, and I think that was eventually pulled out of the tracks right at the end of the run” ... 
“Similarly, an ATR came back to the pit area, I think it got too close to some of our charging equipment, and the safety operator briefly eStopped it” [Mission 6, O2, CA] Some possible error-related events were prevented beforehand, such as being cautious “when robots start up together” [Mission 12, O1, CA].", "Operators also reported on their capability to prevent or correct failures, errors or dangerous situations.", "For instance, during one robot autonomy run, “Spot did get into a situation that it shouldn't have, but again, that's one of those cases that an operator would jump in and try and resolve that issue” [Mission 2, O2, CA].", "In response to failures and to mitigate damage, operators helped to nudge the robots to a path or goal that resulted in less risk: “It was just simply picking a path for them to go, and then the parts at the end where they were getting quite close, it was just mitigating damage and making sure that they separated, went one way, and the other went the other way” [Mission 3, O1, CH] “If it’s a fully autonomous run, Spot [robot] would need to be able to sense the environment better and know that, for example up here, there’s big rocks with big gaps in between.", "The resolution of all of that [unclear] just isn’t enough for it to realise that that’s dangerous and it might get a foot stuck in there and fall.", "In terms of an operator run, because I have some amount of prior knowledge, I know that that’s dangerous but if I could get, again, better - finer resolution of the course terrain, I could sort of be able to tell the robot if it couldn't figure it out itself, that it’s more dangerous than it knows” [Mission 10, O2, CA] Operator involvement, and the absence of it, had varying results during dynamic situations.", "For instance, “one of the ATRs [robots] did collide with an obstacle in a difficult location that we are aware of being difficult.", "I don't think that's anything to do with an operator versus autonomy.", "It's just the luck of the draw sometimes” [Mission 2, O2, CA].", "Furthermore, one operator stated that it was possible that the robots did enter dangerous positions or areas, but they were simply not aware of the situation occurring from their operation viewpoint.", "For instance, “as far as I know, they didn't get into any precarious or dangerous positions, there was no e-stops, no catastrophic failures” [Mission 9, O2, CH].", "Operators also acknowledged their limitations in fully protecting robots from damage: “there are definitely environments out there that I have no knowledge of that if I was to send a robot in, I would not have adequate situation awareness to protect the robot from doing dangerous things” [Mission 10, O2, CA].", "This was followed by a potential suggestion: “If I could get that feedback in some form to show me that this area is potentially dangerous for a robot, that would help me as an operator protect the robot from doing things that are potentially dangerous for itself and others” [Mission 10, O2, CA] Other times, mission outcomes were impacted by chance factors such as environmental terrain that could cause the robot to fall over: “It could have fallen over at the start.", "Sometimes, they fall over.", "Once they fall over, we don't have a way of correcting and re-standing at this point” [Mission 7, O1, CH].", "Operators also reported allowing the robot team to attempt some difficult scenarios on their own prior to intervening in the situation to assist with rerouting the robot back on track.", "In
one example, “it was a very tight door, very tight scenario, and potentially we could have got in there autonomously, but again, it would have taken a lot of time, and the robot had already tried a few times” [Mission 7, O1, CH].", "Some safety-related events were higher-risk than others, and nearly all of these occurred during the robot autonomy condition, because operators were not able to intervene either to prevent the failure or event from happening in the first place, or to take rapid action to mitigate its impact.", "The missions also provided new insights into how operators could improve their operation style to avoid future setbacks and error-related events.", "Operators also acknowledged that their involvement in the run may have had adverse effects, stating that in some instances, it is more beneficial to leave it to the autonomy to resolve its own issue instead of intervening at the risk of greater damage: “I probably would have done more damage had I tried to take over from the door controller, which I had to do in my run, because it was wasting a lot of time” [Mission 8, O1, CA].", "This included building in more fail-safe behaviours: “what I probably should have done, if I did this run again, was always follow a Spot with another robot.", "That's a pretty good lesson from this run” [Mission 7, O1, CH].", "This also included recommending new features that could contribute to reducing cognitive load, so that “someone else could come in and be able to operate much easier without having to button hop and grab different controls” [Mission 12, O1, CA].", "Operators attributed some level of responsibility for robot failure to course design rather than robot performance.", "In commenting on one error-related event, “we left a rogue Spot [robot] in a traversable area, so it certainly wasn't the fault of the robot, and the robot recovered well” [Mission 8, O1, CA].", "In relation to recovery from error, there were some instances in which errors had limited impact on the overall run and a simple restart was all that was needed for the robot to continue towards the mission goals.", "Given the experimental set-up, it was also acknowledged that should a robot have had a similar error in other real-world scenarios, robot recovery would not have been possible, which would likely have a greater impact on mission success.", "Furthermore, error-related events prompted operators to use new strategies in the remaining missions to mitigate their effects: “I didn't at this time, send both ATRs far away from the Spot.", "One ATR was close to the Spot at all times, which I didn't need, but it was a good peace of mind” [Mission 11, O1, CH]."
], [ "Operator Cognitive Load in Response to Challenging Events", "In response to challenging events, experiment operators were highly experienced, which resulted in strong performance during most of the course run.", "For instance, operators reported that there were few events or scenarios that were surprising, or events that had not seen before in the past: “It wasn't really anything I haven't seen before” [Mission 7, O1, CH] and “I think everything they did was very logical” [Mission 8, O1, CA], but at the same time, “always expect the unexpected, so some things went wrong, but nothing too crazy” [Mission 1, O2, CH].", "Unexpected errors or challenges in communication methods between the operator and robots also contributed to a large portion of reported cognitive load, especially when events did not go according to the operators level of expectation: “One of the objects seemed to have potentially been incorrectly placed on the scoring server, because I found it and I localised it well, as far as I could tell, and sent a bunch of reports” [Mission 1, O2, CH] Stress was also a common factor that was reported to affect cognitive load: “If robots aren’t playing nice, the stress levels of the operator increases just in general, which leads - I think leads to poorer performance” [Mission 10, O2, CA].", "The same operator reported that they achieve better performance “as an operator when I’m calm at the start and if my stress levels are increased by external factors prior to the run, that’ll carry over into the run and I think degrade my performance as an operator” [Mission 10, O2, CA] and that the robot, course or other external factors can “slow everything down, which sort of builds the stress levels because obviously we’re on a bit of a time schedule” [Mission 10, O2, CA].", "Furthermore, “shorter runs definitely increased the stress levels” [Mission 9, O2, CH].", "Cognitive load was also unintentionally attributed to the experiment itself: “as part of the team that’s setting up these experiments to some degree, that stress level can carry over occasionally and that, I think, effects my ability to operate, to some degree” [Mission 10, O2, CA].", "Experimental runs presented unique events or scenarios, which required operators to become more involved and therefore, to exert more effort to overcome these obstacles.", "Operators reported that certain direct hands-on functions were high on cognitive load, reducing the potential to conduct other critical tasks.", "In the instance of teleoperation, “as soon as you have to grab the joystick, you can't do anything else.", "If there is something else that needs attention, that is in your back of mind” [Mission 7, O1, CH].", "This can push operators into making critical decisions based on mission outcome gains.", "For instance: “if one ATR can get through a door, down the back of W block, and the other ATR needed to be teleoped to the top of the hill, I couldn't do it.", "I would have had to have chosen one”.", "[Mission 7, O1, CH].", "The operator further commented using the following scenario as an example: “There'll be an indication of which one you can switch back to autonomy.", "So if you can give a robot 10 seconds to get it through something and then switch it back to autonomy, obviously that's a priority, but yeah, it really depends.", "It's not an easy thing sometimes to change a strategy partway through a run and decide what's the best thing to do, given the time that you have.", "Yeah, so intuition is a hard thing to know” [Mission 7, O1, 
CH].", "Operators commented on the cognitive load difference between the same runs when comparing to operator involvement compared to robot autonomy alone, which included the cognitive process to self-regulate their perceived need to intervene: “It's probably less stressful in the second run [robot autonomy].", "It's a little bit more frustrating, if you make the distinction between stress and frustrating, because I wanted to jump in.", "I wanted to help the robots, but it was definitely less stressful in the second run, because I could just sit back and watch robots and don't have to worry about making bad decisions” [Mission 2, O2, CA] Reported cognitive load and operator stress was also mitigated to some extent in the course runs given that other trained safety operators were monitoring robots during the course.", "Therefore, distributed responsibility for the operation of robot teams can have an important influence on operator cognitive load and their direct contribution to risk or mission failure: “I know there's safety operators out there, so I can offload that worry to them.” ... “If it was just me monitoring the robots, I might have been more stressed, because I would be responsible for stopping robots if they're in dangerous situations.” ... “I really don't concentrate too much on the safety aspects of the robots, because I know there's people watching them” [Mission 2, O2, CA]" ], [ "Operation with More Agents and Team Co-ordination", "This experimental set-up required operators to control, monitor and direct a heterogeneous multi-agent team of 3 to 4 robots over a course with many obstacles and roadblocks, which had its own advantages and disadvantages.", "First of all, more robots can be more strenuous, given that “working with three agents in such a short amount of time maybe increased the skill level a little bit” [Mission 9, O2, CH].", "Furthermore, “having three agents means you can watch them a bit better, there’s less load on the operator to manage them, whereas previously, when you run with four, five or six agents, you literally cannot micromanage too many agents at all” [Mission 9, O2, CH].", "In the course run, when allocating different robots in different areas, one operator said that “it reinforces the point that it’s better to have multiple robots covering - multiple different morphology robots covering the same area” [Mission 12, O2, CA].", "However, the success of these arrangements can also be by chance as well: “it’s a lot of luck that the Spot [robot] went that way in the autonomous run but this is a case in point where you want robots with multiple morphologies kind of in all areas” [Mission 12, O2, CA].", "In addition, operators also noted an ideal ratio of robots to mission objectives and course size: “I think in this size course, more robots wouldn’t help but obviously scaling the course and having more branches is ideal to have more robots to do that” [Mission 12, O1, CA].", "One operator also acknowledged that its “more efficient for me to just be doing the high level direction and going through some of the reports” than team co-ordination [Mission 12, O1, CA], but that more robots could result in reduced performance.", "This was further explained below: “I think in this course size with more robots, it would possibly complicate things.", "We've seen in previous test cases where robots do interfere with each other but this is a good example of where robots can go and do their thing.", "So timeframe, if there was a longer run, it’s pretty impressive what 
we can do in half an hour but an hour run, it doesn't detract - having done hour runs before, the robots behave very, very well over that time.", "They’re very robust platforms” [Mission 12, O1, CA].", "In completing the mission, there were some notable decision-making options for how operators could organise their team and how robots chose to organise themselves, such as using a consistent strategy each time to achieve high success levels based on the robots' strengths.", "For instance, “Spots do very well in the tunnel, so I've kept that strategy this whole time” [Mission 11, O1, CH].", "Mission outcomes also influenced choices for team arrangement, as one operator stated that “it does influence the result, depending on the type of platform and what direction they take” [Mission 8, O1, CA].", "Operators reported that there is an optimal robot-to-course ratio that can provide the most benefit at the expense of operator involvement, but more agents can also be more challenging.", "For instance, “this time, having three agents was more difficult, because the course was more difficult.” [Mission 5, O2, CH].", "When asked about team co-ordination, one operator reported that “the hardest decision is determining where Spots versus ATRs should go” but that, after these decisions were made, “everything else was pretty standard in terms of decision-making” [Mission 9, O2, CH].", "The robot team formation and number of robots also influenced how operators made decisions on assigning areas to certain robots.", "One operator described their decision-making points around team co-ordination below: “With three robots, it’s a bit more difficult to know which way to send two robots, so initially I sent two robots down to the barrels and behind S-Block, and half way through that, I switched that and sent one of those robots to go back to the tunnel.", "I’m hoping that was an important decision, because I feel like we may not have explored part of the tunnel if I hadn't done that” [Mission 9, O2, CH] Arranging the heterogeneous multi-agent team to ensure robot safety was also a key decision point for operators.", "For instance, “I could tell that the playground area of the course was dangerous for robots, so I made sure that no other robots would go in there once it was satisfactorily explored” [Mission 9, O2, CH].", "One operator described their use of the robot team based on environmental constraints and mission objectives, for instance, “Spots are very capable and they're very fast, but obviously, ATRs can do this course very well” [Mission 8, O1, CA].", "One operator also commented on the balance between mission objectives, number of robots, and time limit: “I’d probably have said four would be the limit, in a half hour.", "The course would have to probably get substantially bigger to warrant any more than that” [Mission 9, O2, CH].", "Lastly, in some runs where the operator was using three robots, having an additional robot could have been beneficial to ensure greater confidence to explore all possible areas: “I would hope that if we had that fourth agent, I could have sent it to explore more thoroughly in the right side of the course, and I think potentially we might have found that last object that we were missing.", "You never really know, but I would hope that that would be the case, because I have a feeling that we might have just sent - the only agent we sent down the right side of the course might have skipped past an area and not thoroughly explored enough to find that object” [Mission 1, O2, CH]"
], [ "Discussion", "The presented experiments explore how operators contribute to robot team performance.", "Experimental results found that there were notable differences between human-robot teams and full robot autonomy on key metrics such as mission score, time to first artifact discovery, total number of eStop use, total distance and unique coverage.", "Human-robot teams with operation led by a trained operator were more likely to cover unique ground in a shorter period of time, travel a longer overall distance with the robot team and have fewer events related to eStop usage, but required increased perceived operator effort to manage the operation.", "One matched pair mission was equivalent in performance when the terrain was more traversable and the course layout was more predictable, demonstrating that in favourable conditions, robot autonomy can perform as well as human-robot teams.", "The operator interviews provided further explanation and understanding behind operator actions, their reported level of involvement and direct contribution to human-robot team performance." ], [ "Overcoming Challenges and Reasons for Intervention", "Human operator contributions were valuable to overcome major issues and failure-related events.", "Operator involvement appeared to be most critical during events that could have caused damage or harm to the robot, during time sensitive events, or to help the robot from getting stuck in challenging areas that would prevent further progression.", "Operator intervention in these event types allowed autonomous robot teams to continue to maintain steady progress, while robot autonomy teams were at times stuck, or failed to break through a major challenge in the course run.", "Operators often intervened in an attempt to speed up mission run time to take shorter paths to cover new ground, to better teleoperate robots through tough terrain, to use higher-order knowledge to prioritise areas that were more likely to have artifacts, and to have an overall tighter control over robot operation and coordination across multiple robots.", "While operators do improve team performance scores, they can introduce delays in team coordination, which could be critical depending on how urgent or serious a 30-70 s delay to select the next robot action would represent for the mission.", "Operators reported that their direct involvement contributed to a greater sense of mental demand, physical demand, temporal demand, level of perceived effort and frustration.", "This spike occurred despite some course runs that had similar mission performance even without any operator involvement.", "However, unpredictable scenarios without operator intervention did result in more error-related events.", "To assist in reducing operator load while maintaining the benefits of operator input, better communication between humans and robots is needed.", "This could include information exchange around what the robot team is finding difficult at present, and what their plans are to address it as a way to reduce operator intervention and cognitive load to resolve these issues [34], [35], [36].", "Alternatively, human operators could receive further practical training to reduce intervention frequency, building a greater sense of trust that the robot autonomy team will contribute to the goal in a beneficial way.", "Further improvements to human-robot team performance may also come from passive and/or continued exposure to the capability of the autonomous robot team, as well as further refinement of 
individual operator style to minimise their need to intervene, such as directing robots away from high risk areas well ahead of time.", "More operational improvements may also arise from new interfaces and systems being integrated into each mission run, given that human-robot collaboration research has shown the impact of natural language, shared cognition [34], interaction dynamics [35], emotion-led statements and affective expression to create effective human-robot teamwork [36].", "Therefore, the advantageous benefits of operator intervention might be better paired with new information exchange strategies that helps to reduce cognitive load, while at the same time, retaining the benefits of operator involvement.", "Operators reported that after viewing the fully autonomous runs, they increased their estimation of robot autonomy, including being less likely to intervene as often in future, showing greater trust in robot capabilities to complete the mission [37], [38].", "Transparency was reported to be an important factor in future human-robot team improvements, with requests that the robot team communicate its view of the world and its plans more effectively [38], [39].", "The operators had extensive experience with the system, mission objectives, and robotic hardware, but still reported that they were, to some extent, still learning and creating an accurate perception of the system's capability when viewing its use in different scenarios.", "This finding demonstrated that the limited communication of error states or robot intent may have resulted in operators intervening when they might not have needed to intervene to achieve the same level of performance.", "While autonomous behaviour may have improved team performance, this could have also contributed to a weakened state of situational awareness [20]." 
], [ "Strengths and Limitations", "The experimental set has both strengths and limitations that should be noted, given the real-world nature of the deployment.", "Human operators involved in the missions had high levels of experience in the mission scenario, meaning that operators were familiar with the prospective bounds of the possible course layouts, and possible places where artifacts may have been hidden.", "Prior experience in a similar test site also meant that the operators could have changed their actions based on prior knowledge and experience to optimise their score.", "To conduct the mission runs, operators needed to be familiar with the system to operate it, and the difficulty level did not allow for less experienced staff to lead the missions.", "However, experienced operators provided a new insight into how human intervention can improve team-related outcomes, and interview data provided clearer insight into key variables relevant to expert operators.", "Experienced operators also provided insights about further features operators could use in the future.", "It also allowed operators to reflect on their performance in relation to previous missions, meaning that more skilled, nuanced and critical factors of human-robot team operations could be explored, especially compared to novice operators.", "Experienced operators were also required to run a more realistic scenario in the real world with physical robots, providing new challenges and scenarios not present in simulation.", "Furthermore, operators were blinded from knowing critical information about the experiment ahead of time, but it was not possible to completely ensure that no course information was known prior to the run.", "For instance, the course was required to be set up prior to the operator attending the demountable building.", "Operators were not permitted to discuss information between the team prior to experimental testing, and an external experimenter was present at all times to observe the changeover of operators, and to assist in quality control of the experiment.", "While some runs were inadvertently impacted by factors outside of the control of the experimenter and operators, these factors again represented more closely what can happen in real-life deployments, such as robots failing to start, or being separated from the operator station.", "While the real world nature of the experiment was more closely representative to a search and rescue mission compared to simulation experiments, the experiment required a standard set of artifacts and standardized course covering.", "While this reduces some of the realism, the design was intended to maintain similar tasks and principles that would otherwise be seen in a search and rescue mission with minimal-to-no additional robot help from safety observers and mission-run data collected as it was encountered on the field for each run.", "The design also only tested two human-machine team configurations, and other configurations and task-load allocation could be considered in future tests, such as mixed composition teams with multiple operators  [40] with a shared-pool of robots [20], or more closely testing different workload components related to the mission [14].", "Furthermore, while the travelled distance did show fewer gains at the end of the mission time, operators did not appear to show task complacency or disengagement, representing a calibrated level of robot autonomy, task involvement, and course length to achieve the experimental outcome [2]." 
], [ "Conclusion", "A total of 16 real-world missions found that human-robot team operators do create notable advantages to search and rescue missions when paired with state-of-the-art robot autonomy, compared to robot autonomy alone.", "Operators contribute to improved mission-based outcomes, help to overcome challenges that robots encounter during course runs that can impede progress, and help to recover robots faster out of scenarios that could lead to detrimental outcomes.", "For now, human-robot teams for search and rescue continue to be the recommended construct for covering the most ground, and helping to find the most items or victims in the shortest time possible.", "However, operators engaged in robot supervision are slower on average to review the information provided by the robots to determine a response, and so future consideration must be made to determine if the operators should either control fewer robots to help increase review and response time during the mission, or to forfeit their involvement in other human-robot leadership and control tasks to instead offer rapid response times to reach more global mission-related goals.", "We would like to thank the technical team who assisted in monitoring robot safety during each course run, and the research students who assisted with data preparation.", "We would like to thank the CSIRO Data61 team who contributed to the development of the final system solution for the SubT Challenge from which this experiment was based on.", "We would like to thank the Collaborative Intelligence Future Science Platform (CINTEL FSP) for supporting this experiment.", "We would also like to acknowledge the support from other robot operators that assisted with this experiment: Mark Cox, Pavan Sikka, Md Komol, Tom Hines, Tirtha Bandyopadhyay, and Ben Tam.", "We would also like to acknowledge the help and support from Tom Hines for his assistance with the scoring server.", "Figure: All Mission Courses - All Robots for Total Coverage in m 2 ^{2} vs time.", "R1 is ATR 1, R2 is Spot 2, R3 is ATR 2, and R5 is Spot 1.Figure: All Mission Courses - All Robots for Total Trajectory Distance in metres vs time.", "R1 is ATR 1, R2 is Spot 2, R3 is ATR 2, and R5 is Spot 1." ] ]
2212.05626
[ [ "Energy-based General Sequential Episodic Memory Networks at the\n Adiabatic Limit" ], [ "Abstract The General Associative Memory Model (GAMM) has a constant state-dependant energy surface that leads the output dynamics to fixed points, retrieving single memories from a collection of memories that can be asynchronously preloaded.", "We introduce a new class of General Sequential Episodic Memory Models (GSEMM) that, in the adiabatic limit, exhibit temporally changing energy surface, leading to a series of meta-stable states that are sequential episodic memories.", "The dynamic energy surface is enabled by newly introduced asymmetric synapses with signal propagation delays in the network's hidden layer.", "We study the theoretical and empirical properties of two memory models from the GSEMM class, differing in their activation functions.", "LISEM has non-linearities in the feature layer, whereas DSEM has non-linearity in the hidden layer.", "In principle, DSEM has a storage capacity that grows exponentially with the number of neurons in the network.", "We introduce a learning rule for the synapses based on the energy minimization principle and show it can learn single memories and their sequential relationships online.", "This rule is similar to the Hebbian learning algorithm and Spike-Timing Dependent Plasticity (STDP), which describe conditions under which synapses between neurons change strength.", "Thus, GSEMM combines the static and dynamic properties of episodic memory under a single theoretical framework and bridges neuroscience, machine learning, and artificial intelligence." ], [ "Introduction", "Episodic memory refers to the conscious recollection of facts or subjective past experiences and forms an essential component of long-term memory [1], [2], [3].", "The recollection process may have both singleton and sequential characteristics.", "Singleton retrieval is the associative recall of a single memory from a retrieval cue.", "This memory could be the description of a particular object of interest or important dates of events.", "Sequential retrieval leads to a recollection process that is not just a single memory but a chain of sequentially connected memories.", "Episodic memory connects temporally related memories so that the retrieval process may consist of not just a single memory but sequential trajectories of these memories.", "Memories organized into these trajectories are called episodes.", "Memories may come together in episodes allowing us to link and retrieve sometimes distinct and representationally unrelated memories.", "The Sequential Episodic Memory (SEM) problem in Recurrent Neural Networks (RNNs) pertains to creating and manipulating these memories and their sequential relationships by encoding relevant information in some form in the synapses.", "To date, associative recall based RNNs form the bulk of singleton episodic memory models.", "Recent advances in energy-based associative memory showed how the memory recall property is universal for a class of symmetrically connected neural networks called the General Associative Memory Model (GAMM).", "Associative memory models based on GAMM are models for singleton episodic memory since the retrieval process extracts a single memory associated with a retrieval cue.", "One can imagine that if such an associative memory model stores sequential episodic memory, the episodes are stored as key-value pair mappings with time as the key and memory as the value.", "However, there is a plethora of evidence to support the 
claim that neuronal populations encode episodic memory in the ordinal structure of their dynamic behavior [4], [5], [6], [7].", "This evidence motivates the requirement for developing RNNs with sequential state transition characteristics.", "The requirement is augmented by the many biological and machine learning systems [8], [9], [10] that have sequential state transition characteristics.", "In cognitive sciences, the systems underlie a range of processes related to working memory [10], perception [8], long-term decision making [11], and inference and recall based on previous experiences [12], [13].", "This paper explores a new class of General Sequential Episodic Memory Models (GSEMM) derived by introducing delay-based synapses in the General Associative Memory Model.", "We define a slow-changing energy function that characterizes the dynamical nature of models from the GSEMM class as instantaneous fixed point attractor dynamics.", "We study the SEM properties of two practical variants of GSEMM - Linear Interaction SEM (LISEM) and Dense SEM (DSEM), based on the type of interaction between memories in the energy function.", "The memories have linear interactions in LISEM, which are analogous to a model of sequentially activated memory [14].", "In DSEM, we introduce non-linearity in the interactions between synapses [15].", "We show that this introduction of non-linear interactions results in an exponential increase in SEM retrieval capacity over LISEM.", "Further, we use the energy paradigm to derive a learning rule for the synapses from the general theory so that DSEM can acquire new episodes online without preloading.", "We show how the derived learning rules for the synapses are related to current biological plasticity rules: Hebbian, and Spike Timing Dependent Plasticity (STDP) [16]."
], [ "Energy-based Models", "The energy paradigm for memory was introduced by Hopfield [17], [18], who defined energy as a quadratic function of the neural activity in symmetrically connected networks with binary model neurons.", "A single memory is stored as a local minimum of the energy function.", "The network dynamics converges to one of the local minima and retrieves a stable activity state representing a single episodic memory.", "The Hopfield network model has subsequently been generalized along two directions.", "The first direction focuses on memory capacity.", "Capacity relates to the number of neurons required in the ensemble to store and retrieve a certain number of memories without corruption.", "The capacity of the original Hopfield model was 14% of the number of neurons, a small fraction of the number of neurons in the population[19], [20], [21].", "A significant breakthrough in capacity came with the introduction of Dense Associative Memory [15], which introduced a polynomial non-linearity to separate the contribution of each memory to the energy minimum.", "The non-linearity enabled the models to store more memories than the number of neurons (hence the term dense) with the caveat of introducing non-biological three-body interactions [22].", "Further studies extended these ideas to continuous state spaces, and exponential memory capacity [23].", "Currently, these models form the fundamental components of transformer architectures [24], [25] with high levels of performance on large-scale natural language processing tasks [26], [27] and computer vision [28] tasks.", "Recently, General Associative Memory Model (GAMM) [22] unified these advances in associative memory in a single theoretical framework.", "GAMM succeeded in explaining the capacity improvements through a simple energy function that characterized the long-term behavior of these models just like its predecessors.", "However, GAMM's state-parameterized constant energy surface restricts it to singleton episodic memories.", "The second research direction focused on extending energy-based models to non-equilibrium dynamical conditions.", "In contrast to memories in singleton episodic memory, memories in non-equilibrium models are meta-stable states in the dynamical evolution [29], [30], [31].", "The non-equilibrium and sequential nature of the meta-stable states is an essential aspect of sequential episodic memory models.", "Some of the first works to produce sequential meta-stable memory [14], [32] used a combination of symmetric interactions, asymmetric interactions, and delay signals to produce stable sequential activation of memory patterns.", "However, these models required additional mechanisms to selectively raise the energies of states, which added complications to the use of the energy paradigm and showed the difficulty of reconciling the static nature of the energy surface with the dynamical nature of models required for sequential memory retrieval.", "One way to alleviate this difficulty is the introduction of stochasticity [33], [8] with sufficient noise to push the system's state beyond the basin of memory to another memory [34], [35].", "Models developed along these directions relaxed the symmetric constraints on the neural interactions of Hopfield Networks, resulting in a rich repertoire of dynamics [36], [37].", "Theoretical proposals for meta-stable memory models used non-equilibrium landscapes where the energy function and a probability flux together determined the stability of memory states [38].", 
"In these models, stochasticity played a major role in determining the stability of meta-stable states.", "Recent evidence from biology [39], [40], [41] has emphasized the importance of multiple timescales in SEM tasks.", "Empirical models [42], [43] also use multiple timescales to generate SEM.", "However, current models do not take advantage of multiple timescales, and the SEM capacity of these models is only about 7% of the number of neurons [42].", "In GSEMM, we extend the energy paradigm of GAMM to the non-equilibrium case using two timescales to define the dynamic behavior of the model.", "In the process, we discover mechanisms to significantly improve the sequential episodic memory capacity of non-equilibrium networks.", "We derive learning rules that point to an intimate connection of local biological learning rules with a global energy minimization principle.", "We provide a mathematical description of GSEMM as a two layer system of interacting neurons organized according to the General Associative Memory Model (GAMM) [22] with the addition of delay based intra-layer interactions between neurons in the hidden layer.", "One of the layers is called the feature layer.", "This layer is mainly concerned with the input and output of the model.", "There are no synaptic connections between neurons in this layer.", "The second layer is called the hidden layer.", "The activity of neurons in this layer encodes abstract information about stored memories.", "In contrast to the feature layer, the hidden layer neurons are connected using synapses that delay the signal from the feature layer.", "These intra-hidden layer connections enable interactions between memories.", "In the most general case, there is no restriction on the nature (in terms of symmetry) of connections between neurons in this layer.", "In addition to these intra-layer connections, the neurons in the two layers are connected through symmetric synaptic interactions.", "The architecture for the model is shown in Figure REF .", "We use common linear algebra notations and indexed notations to denote states and synapses in our model.", "We use mainly indexed notation but switch to matrix and vector notation wherever convenient.", "We now provide the mathematical description of GSEMM.", "Let $(V_f)_i$ be the current through the $i^{\\text{th}}$ neuron of the feature layer, $\\sigma _f(V_f)$ be the activation function for the feature layer, $(V_h)_j$ be the current through the $j^{\\text{th}}$ neuron of the hidden layer, $\\sigma _h$ be the activation function for the hidden layer, and $(V_d)_i$ be the delayed feature neuron signal from the $i^{\\text{th}}$ feature neuron.", "The states $V_f, V_h, V_d$ evolve with characteristic timescales $\\mathcal {T}_f, \\mathcal {T}_h, \\mathcal {T}_d$ respectively.", "Let $\\Xi _{i j}$ be the strength of the synaptic connection between the neuron $i$ in the feature layer to the neuron $j$ in the hidden layer, $\\Phi _{k j}$ be the strength of the synaptic connection from the $k^{\\text{th}}$ hidden neuron to the $j^{\\text{th}}$ hidden neuron.", "Similar to how memories are loaded in GAMM, each column of the matrix $\\Xi $ stores individual memories.", "We introduce two scalar parameters to control the strength of signals through the synapses.", "Let $\\alpha _s, \\alpha _c$ be the strength of signals through the synapses $\\Xi $ and $\\Phi $ respectively.", "The governing dynamics are given by: equation Tf (Vf)it = s   j=1Nh i j   (h(Vh))j - (Vf)i   , Th (Vh)jt = s i=1Nf i j   (f(Vf))i + c 
k=1Nh iNf k j   i k   (Vd)i - (Vh)j   , Td (Vd)it =   (f(Vf))i - (Vd)i   .", "The dynamic evolution equations are analogous to GAMM [22] except with the addition of intra-layer synapses $\\Phi $ , and two strength parameters $\\alpha _s$ and $\\alpha _c$ .", "The timescale $\\mathcal {T}_d$ characterizes the timescale of delay and is assumed to be higher than the timescale of the feature and hidden layers.", "The delay signal is obtained by applying a continuous convolution operator [14] of the feature layer signal.", "(Vd)i = 1Td 0(f(Vf(t-x)))i  (-xTd) dx .", "We transformed the convolution operation to a dynamical state variable update ${V_d}{t}$ to simplify the theoretical analysis of the system.", "The General Associative Memory Model, which did not include intra-layer synapses in the hidden layer, has properties of associative memory.", "This means that for certain conditions on the set of functions $\\sigma _f$ and $\\sigma _h$ , the long-term behavior of the state of the feature layer neurons converged to one of the stored memories.", "The crucial condition required for convergence is that the dynamical trajectory of the system followed an energy function with minima near the stored memory states.", "The delay-based synapses we introduced enable the energy function to change with time, so the long-term behavior is not just a single memory but a sequence of related memories.", "The energy dynamics of the system is analyzed by considering the new delay variable $V_d$ as a control parameter.", "We show that for a delay signal $V_d$ that is changing sufficiently slowly compared to $V_h$ and $V_f$ , the energy function evaluated at the instantaneous state $V_d$ can still be used to characterize the dynamical nature of $V_f$ and $V_h$ .", "The term sufficiently slowly means that $V_f$ and $V_h$ converge to their instantaneous attractor states before $V_d$ changes the energy surface.", "To derive the energy function, we use two Lagrangian terms $L_f$ and $L_h$ for the feature and hidden neurons respectively [22], defined as $\\sigma _f(V_f) = \\mathcal {J}(L_f)^\\top , \\,\\, \\text{and} \\,\\, \\sigma _h(V_h) = \\mathcal {J}(L_h)^\\top \\,.$ The new energy function (SI Appendix: Energy Function for GSEMM) for GSEMM is derived as.", "E = [ Vf  f(Vf) - Lf] + [ Vh  h(Vh) - Lh ] - s   [ h(Vf)     h(Vh) ] - c [ Vd      h(Vh) ]   .", "At this point, it is instructive to note that without the additional synapses $\\Phi $ and strength parameter $\\alpha _s = 1$ , the system and the associated energy function reduce to GAMM energy with only singleton episodic memory.", "In order to analyze how the dynamics of energy change with the introduction of delay based synapses, we take the time derivative of the GSEMM energy function along the dynamical trajectory of the system.", "We assume the conditions of positive semi-definite Hessians of the Legrangian terms and bounded activation functions $\\sigma _f$ and $\\sigma _h$ [22].", "It is to be noted that the full state description of the system consists of three vectors $V_f$ , $V_d$ , and $V_h$ .", "These states are grouped as a fast subsystem - $V_f$ and $V_h$ , and a slow subsystem $V_d$ .", "The analysis becomes easier when we consider the slow subsystem as a control variable of the fast subsystem.", "This allows the characterization of the state dynamics of the fast subsystem as instantaneous fixed point attractor dynamics modulated by input from the slow subsystem.", "The dynamical evolution of the energy function after separating the slow and 
fast subsystems is given as (SI Appendix: Energy Function Dynamics) $\frac{\partial E}{\partial t} = F\left(\frac{\partial V_f}{\partial t}, \frac{\partial V_h}{\partial t}\right) + G\left(\frac{\partial V_d}{\partial t}\right) \, .$", "$F\left(\frac{\partial V_f}{\partial t}, \frac{\partial V_h}{\partial t}\right) = - \Bigg [ \mathcal {T}_f \left(\frac{\partial V_f}{\partial t}\right)^\top \, \mathcal {H}(L_f) \, \frac{\partial V_f}{\partial t} + \mathcal {T}_h \left(\frac{\partial V_h}{\partial t}\right)^\top \, \mathcal {H}(L_h) \, \frac{\partial V_h}{\partial t} \Bigg ] \, , \qquad G\left(\frac{\partial V_d}{\partial t}\right) = - \alpha _c \Bigg [ \sigma _h(V_h)^\top \, \Phi ^\top \, \Xi ^\top \, \frac{\partial V_d}{\partial t} \Bigg ] \, .$", "$F$ and $G$ separate the contributions of the two timescales - the fast ($\lbrace \mathcal {T}_f, \mathcal {T}_h\rbrace $ ) and slow ($\lbrace \mathcal {T}_d \rbrace $ ).", "It can be easily seen that among the two terms, only $G$ is affected by the timescale of the delay signal.", "Just like in GAMM, under the assumption of positive semi-definite Hessian of the Lagrangian and bounded energy, we get $F\left(\frac{\partial V_f}{\partial t}, \frac{\partial V_h}{\partial t}\right) \le 0 \, .$", "The inequality means that the fast subsystem can have two possible long-term behaviors when $F$ eventually converges to zero.", "One behavior is convergence to a single stable state corresponding to a minimum of the energy function, leading to fixed point attractor dynamics.", "The second possible behavior is when the system moves in an iso-energetic trajectory without convergence.", "In this paper, we focus only on the case of the fixed point attractor behavior of the system.", "Like in GAMM, the fixed point attractor behavior of the system acts to stabilize the dynamics on the energy surface such that the energy is non-increasing and convergent, but unlike GAMM, delay-based synapses lead to another term $G$ : $G\left(\frac{\partial V_d}{\partial t}\right) = - \alpha _c \Big [ \sigma _h(V_h)^\top \, \Phi ^\top \, \Xi ^\top \, \frac{\partial V_d}{\partial t} \Big ] \, .$", "It may be difficult to specify the behavior of the system for any general choice of $\Phi $ , $\sigma _h$ , and $V_d$ .", "However, in the adiabatic limit of the slow subsystem (under the condition that $\mathcal {T}_d \gg 1$ and $\mathcal {T}_d \gg \mathcal {T}_f, \mathcal {T}_h$ ), the system can still exhibit a non-increasing energy function because $\frac{\partial V_d}{\partial t} \rightarrow 0$ in this limit and $G \rightarrow 0$ .", "This condition is especially true when analyzing the dynamic properties of the fast subsystem ($\frac{\partial V_f}{\partial t} \ne 0$ and $\frac{\partial V_h}{\partial t} \ne 0$ ), which is the property that seems to be relevant in dynamic memory models.", "The delay signals thus have two functions.", "The first is that the slow-changing nature of the delay signal helps to stabilize the dynamics of the fast subsystem on the energy surface.", "The second is that the delay signal changes the energy surface to create new minima and destroy old minima.", "In our numerical simulations, we consider high enough settings of $\mathcal {T}_d$ such that $V_d$ changes sufficiently slowly for the energy function to characterize the dynamics but not so high as to prevent the system from exhibiting state transitions in a reasonable time.", "The theory of GSEMM alone is not practical enough to be applicable in a sequence generation task as it does not specify the activation functions for each of the layers.", "We derive two variants depending on the settings of the activation functions and apply them to a sequential state generation task.", "Analogous to how practical models are derived from GAMM, we consider the diabatic limit of hidden neurons for the two variants.", "In the diabatic hidden neuron limit, $(V_h)_j = \sqrt{\alpha _s} \sum _{i=1}^{N_f} \Xi _{i j} \, (\sigma _f(V_f))_i + \alpha _c \sum _{k=1}^{N_h} \sum _{i=1}^{N_f} \Phi _{k j} \, \Xi _{i k} \, (V_d)_i \, .$", "Substituting this in the dynamical evolution of feature neurons we get $\mathcal {T}_f \frac{\partial (V_f)_i}{\partial t} = \sqrt{\alpha _s} \, \sum _{j=1}^{N_h} \Xi _{i j} \, \sigma _h(\Xi ^\top \, \sigma _f(V_f) + \alpha _c \Phi ^\top \, \Xi ^\top \, V_d)_j - (V_f)_i \, .$", "It can be seen from the dynamical evolution of feature neurons that depending on the settings of the two activation functions, the feature-hidden synapses may interact linearly with hidden-feature synapses and hidden-hidden synapses.", "The two variants of the general theory are constructed based on the presence or absence of these inter-synapse interactions.", "In the first variant, Linear Interaction SEM, the feature layer activation function is non-linear, and the hidden layer activation is the identity, allowing linear interactions between synapses.", "In the second variant, Dense SEM, the hidden layer activation function is non-linear, which prevents linear interactions between synapses.", "LISEM is characterized by linear interactions between synapses.", "This model closely resembles an RNN model with sequentially activated patterns explored previously in [14].", "To analyze the dynamical properties of the model, we assume $N_h$ random binary vectors of dimension $N_f$ as memories preloaded in $\Xi $ .", "We assume a specific structure for $\Phi $ that gives rise to networks with sequential transitions: $\Phi = \frac{1}{\sqrt{\alpha _s}} \, G$ , where $G$ is the graph's adjacency matrix with $N_h$ nodes and directed edges that represent sequential relationships between memories in the stored episodes.", "This structure for interneuron connections allows us to encode episodes with Markovian memory transitions in our network.", "In the diabatic limit of hidden neuron activity, $\mathcal {T}_h \rightarrow 0$ , identity activation for the hidden layer, $\sigma _h(V_h) = V_h$ , and $\tanh $ activation in the feature layer, $\sigma _f(V_f) = \tanh (\gamma V_f)$ , the governing dynamics reduce to: $\mathcal {T}_f \frac{\partial (V_f)_i}{\partial t} = \alpha _s \sum _{j=1}^{N_h} \sum _{k=1}^{N_f} \Xi _{i j} \, \Xi _{k j} \, \tanh (\gamma \, (V_f)_k) + \sqrt{\alpha _s} \, \alpha _c \sum _{l=1}^{N_h} \sum _{j=1}^{N_h} \sum _{m=1}^{N_f} \Xi _{i j} \, \Phi _{l j} \, \Xi _{m l} \, (V_d)_m - (V_f)_i \, , \quad \mathcal {T}_d \frac{\partial (V_d)_i}{\partial t} = \tanh (\gamma \, (V_f)_i) - (V_d)_i \, .$", "To elucidate how a robust recall of the next memory may be possible with the model, we analyze the dynamics of the energy function.", "As discussed before, due to the dynamical nature of the system's long-term behavior and the changing delay signal, the system no longer follows a global energy function with minima near memories like in associative memory models.", "Instead, the system follows the instantaneous minima of the energy function and goes from memory to memory via slow updates to the energy function.", "We analyze the energy function of LISEM for the case where the memories are orthogonal column vectors of $\Xi $ .", "$E_{\text{\tiny LISEM}} = \Bigg [ V_f^\top \, \tanh (V_f) - \log (|\cosh (V_f)|) - \frac{\alpha _s}{2} \tanh (V_f)^\top \, \Xi \, \Xi ^\top \, \tanh (V_f) \Bigg ] + \Bigg [\alpha _c \, \tanh (V_f)^\top \, \Xi \, \Phi ^\top \Xi ^\top \, V_d \Bigg ] + \Bigg [\frac{\alpha _c^2}{2 \alpha _s} V_d^\top \, \Xi \, \Phi \, \Phi ^\top \, \Xi ^\top \, V_d \Bigg ] \, .$", "The energy function can be separated into three components as follows: $E_{\text{\tiny LISEM}} = E_{\text{\tiny assoc}} + E_{\text{\tiny seq}} + E_{\text{\tiny c}} \, ,$ with $E_{\text{\tiny assoc}} = \Big [ V_f^\top \, \tanh (V_f) - \log (|\cosh (V_f)|) - \frac{\alpha _s}{2} \tanh (V_f)^\top \, \Xi \, \Xi ^\top \, \tanh (V_f) \Big ] \, ,$ $E_{\text{\tiny seq}} = \alpha _c \, \tanh (V_f)^\top \, \Xi \, \Phi ^\top \, \Xi ^\top \, V_d \, ,$ and $E_{\text{\tiny c}} = \frac{\alpha _c^2}{2 \alpha _s} \, V_d^\top \, \Xi \, \Phi \, \Phi ^\top \, \Xi ^\top \, V_d \, .$", "$E_{\text{\tiny assoc}}$ creates minima of $V_f$ near all columns (memories) defined in $\Xi $ .", "This term is independent of $V_d$ and hence does not change over time.", "$E_{\text{\tiny c}}$ is independent of $V_f$ and just translates the energy surface.", "Since
the dynamics are invariant under translations of the energy, the effect of this term can be safely ignored in the analysis.", "Unlike $E_{\text{\tiny assoc}}$ and $E_{\text{\tiny c}}$ , $E_{\text{\tiny seq}}$ is modulated by the delay neurons $V_d$ with the strength $\alpha _c$ .", "Thus, depending on the state of $V_d$ and the parameter $\alpha _c$ , the energy function creates different minima over time.", "For the matrix $\Phi $ defined above, the new minima are at the sequentially connected neighbors of the current memory.", "According to the theoretical analysis of the energy function dynamics of GSEMM, in the limit of a slowly changing signal $V_d$ , the fast subsystem follows the instantaneous minima of the energy function.", "We validate this with simulation.", "To show how the state transition behavior is exhibited by LISEM, we plot the energy function without $E_c$ and the state of the system as time progresses for a simulated episode of the system in Figure REF .", "Since the state space is high dimensional and difficult to visualize, we show only a comparison of the energies of all states the system visits in $one$ simulation.", "The figures also reveal how the system evolution follows the instantaneous minimum of the energy surface.", "Figure: Capacity of LISEM and DSEM, showing the exponential capacity of DSEM and the linear capacity of LISEM.", "The scaling relationship between the average number of neurons in the feature layer ($N_f$ ) of LISEM and DSEM and the number of memories that can be stored and retrieved ($K$ ).", "The blue-shaded region shows the region within one standard deviation of the mean value over 100 trials.", "The number of feature neurons required for LISEM to encode episodes with a certain number of memories is significantly greater than for DSEM.", "(A) LISEM exhibits a close-to-linear relationship between $N_f$ and $K$ ; (B) DSEM shows an exponential relationship between $N_f$ and $K$ .", "The second variant of GSEMM is a model with structural improvements that greatly increase the sequential episodic memory capacity to include sequence lengths $K$ exponential in the number of feature neurons $N_f$ .", "We use exponential interactions with contrastive normalization in the activation function of the hidden layer neurons, analogous to the Modern Hopfield Network [44]: $\left( \sigma _h(V_h) \right)_i = \frac{\exp (\gamma (V_h)_i)}{\sum _{j=1}^{N_h} \exp (\gamma (V_h)_j)} \, .$", "DSEM uses the identity activation for the feature layer, $\sigma _f(V_f) = V_f$ , leaving the non-linearity to the hidden layer activation function.", "Under the diabatic conditions for the hidden layer, we get the dynamical equations for DSEM as: $\mathcal {T}_f \frac{\partial (V_f)_i}{\partial t} = \sqrt{\alpha _s} \, \sum _{j}^{N_h} \Xi _{i j} \, \big (\sigma _h(\sqrt{\alpha _s} \, \Xi ^\top \, V_f + \alpha _c \, \Phi ^\top \, \Xi ^\top \, V_d)\big )_j - (V_f)_i \, ,$ $V_h = \sqrt{\alpha _s} \, \Xi ^\top \, V_f + \alpha _c \, \Phi ^\top \, \Xi ^\top \, V_d \, ,$ $\mathcal {T}_d \frac{\partial (V_d)_i}{\partial t} = (V_f)_i - (V_d)_i \, .$", "Figure REF demonstrates that the local and global flow to attractors is similar to that of LISEM.", "It is observed in the figure that the memory transitions are quicker than in LISEM, which could be due to the rapid convergence rate of the associative memory system this model is based on [25].", "The energy function of DSEM is $E_{\text{\tiny DSEM}} = \frac{1}{2} V_f^\top \, V_f + V_h^\top \, \sigma _h(V_h) - \operatorname{logsumexp}(\gamma \, V_h) - \sqrt{\alpha _s} \, V_f^\top \, \Xi \, \sigma _h(V_h) - \alpha _c \, V_d^\top \, \Xi \, \Phi \, \sigma _h(V_h) \, .$", "The energy dynamics shown in Figure REF also exhibit similar behavior to LISEM.", "The momentary loss in stability of the fixed points near memories, which allows for state transitions, is observed clearly in the figure.", "The crucial difference between LISEM and DSEM is the improved SEM capacity.", "Figure REF compares LISEM and DSEM based on their SEM capacity, defined as the number of memories $K$ per sequential episode for a network with $N_f$ feature neurons.", "The required number of feature neurons is averaged over 100 trials with different binary vectors encoding the memories.", "To make the experiment computationally tractable, we set the maximum number of feature neurons at 500 neurons.", "The computational simulations suggest that DSEM is exponentially superior in capacity to LISEM, which stores only a number of memories linear in the number of feature neurons.", "While Hopfield-like models assume preloaded memories by fixing weights rather than by learning, we propose here an online learning procedure that updates the weights to include new episodes.", "The online learning rule tunes the synaptic connections as the stimulus is provided as input to the model so that new sequential episodes can be learned by the model.", "The learning rule is derived from GSEMM using the following update rule for each synaptic connection $W = \lbrace \Xi , \Phi \rbrace $ : $\mathcal {T}^W_L \frac{\partial W}{\partial t} = - \frac{\partial E(x^{\text{target}})}{\partial W} + \beta _c \, \frac{\partial E(x^{\text{current}})}{\partial W} \, .$", "$\mathcal {T}^W_L$ is the characteristic learning timescale for the synaptic connection $W$ .", "$x^{\text{current}}$ and $x^{\text{target}}$ are neuronal signals ($V_f, V_d, V_h$ ) estimated from the current and expected memory of the neurons respectively.", "There are two different terms in the learning rule.", "The first term $-\frac{\partial E(x^{\text{target}})}{\partial W_{i j}}$ changes the model parameters such that the energy of the target memory decreases.", "This promotes the creation of attractor basins near the target memory.", "The second term $\beta _c \frac{\partial E(x^{\text{current}})}{\partial W_{i j}}$ increases the energy of the current memory such that it destabilizes and the state flows to the target memory.", "The learning rule is designed to mimic the expected dynamical behavior of the energy function when the system produces the required memory transitions.", "We derive learning rules for the two synaptic interactions of DSEM based on our learning paradigm.", "For $\Xi $ and $\Phi $ , the learning rules are $\mathcal {T}^{\Xi }_L \frac{\partial \Xi }{\partial t} = \sqrt{\alpha _s} \Big [ V_f^{\text{target}} \, \sigma _h(V_h^{\text{target}})^\top - V_f^{\text{current}} \, \sigma _h(V_h^{\text{current}})^\top \Big ] + \alpha _c \, V_d \, \big ( \sigma _h(V_h^{\text{target}}) - \sigma _h(V_h^{\text{current}}) \big )^\top \, \Phi ^\top \, ,$ and $\mathcal {T}^{\Phi }_L \frac{\partial \Phi }{\partial t} = \alpha _c \, \Xi ^\top \big ( V_d \, \sigma _h(V_h^{\text{target}})^\top - V_d \, \sigma _h(V_h^{\text{current}})^\top \big ) \, .$", "The term $V_f^{\text{target/current}} \, \sigma _h(V_h^{\text{target/current}})^{\top }$ is the Hebbian learning rule between the feature neurons and the hidden neurons.", "Similarly, $V_d \, \sigma _h(V_h^{\text{target/current}})^\top $ is the Hebbian rule between the delayed feature neurons and the hidden neurons.", "However, since delayed feature neurons store a delayed signal from the feature neurons, this Hebbian term is actually the STDP learning rule between the feature neurons and the hidden neurons.", "The STDP terms dominate the learning rule for $\Phi $ , which suggests a connection between temporally aware STDP learning and the temporal nature of the information stored in $\Phi $ .", "STDP and Hebbian learning rules use local information on the activity of just the pre- and post-synaptic neurons without considering global network computation.", "This relationship between biological learning rules and our learning rule points to the vital role of local biological learning in global network energy minimization.", "In Figure REF , we demonstrate the effectiveness of these learning rules in learning new memories along with their sequential relationships from the input stimulus.", "The synapses are initialized
uniformly randomly so that no memories are preloaded.", "We use a 4-memory sequence as a sequential cyclical episode to learn - $M_1 \rightarrow M_2 \rightarrow M_3 \rightarrow M_4 \rightarrow M_1$ .", "After learning, Figure REF shows the test time behavior of the learned model and how the learned synapses organize such that the memory representations are stored in $\Xi $ and the sequential relationships between memories in $\Phi $ .", "The memories are consolidated in the feature-hidden layer interaction $\Xi $ as $ M_1 - \xi _{18}$ , $M_2 - \xi _7$ , $M_3 - \xi _{12}$ , and $M_4 - \xi _{6}$ , where $\xi _{i}$ is a vector representing the strength of interactions between all the feature layer neurons and the $i^{\text{th}}$ hidden neuron.", "We introduced the General Sequential Episodic Memory Model that can encode memories and their sequential relationships.", "Central to this capability is the slowly-changing energy surface controlled by the newly introduced delay-based synapses.", "We showed how the energy surface's slow-changing nature helps the system to instantaneously follow the fixed points on the energy surface.", "We studied two models from the GSEMM class.", "Linear Interaction Sequential Episodic Memory (LISEM), with linear synaptic interactions, is analogous to a popular sequential episodic memory model.", "Dense Sequential Episodic Memory (DSEM) has non-linear synaptic interactions that exponentially improve episodic memory capacity.", "We further proposed a learning rule for DSEM and showed how it is related to online versions of biological learning rules: Hebbian and Spike-Timing Dependent Plasticity.", "The generality of GSEMM theory could impact the future design and analysis of sequential episodic memory in both biological memory systems and machine learning.", "The energy-based learning rule shows the role of biological learning rules that use local neuron information in minimizing the energy function of the global network.", "Further research is needed to explore GSEMM and the connection between the energy paradigm and other aspects of neural networks in both neural systems and network models.", "The scaling improvements of DSEM may be directly applied to solve problems requiring sequential memory with low overhead and high storage capacity.", "Neuroscience and machine learning have much to learn from each other to improve our understanding of the dynamics of memory in intelligence.", "We used the fourth-order Runge-Kutta numerical procedure with step size $0.01$ for numerical simulations.", "The output of the models is the state of their feature neurons and is evaluated using the overlap of the feature neuron state with each memory in the system, defined as $m_{i}(V_f) = (1/N_f) \, \sum _{j}^{N_f} (\xi ^{(i)})_j \, (\sigma _f(V_f))_j$ where $\xi ^{(i)}$ is the $i^{\text{th}}$ memory in the system.", "Seven memories are encoded in each of the models with each memory in the system a random binary vector such that $\Pr [\xi ^{(i)}_j = +1] = \Pr [\xi ^{(i)}_j = -1] = 1/2$ .", "These memories are organized as 2 separate cyclical episodes: $\xi ^{(1)} \rightarrow \xi ^{(2)} \rightarrow \xi ^{(3)} \rightarrow \xi ^{(1)}$ and $\xi ^{(4)} \rightarrow \xi ^{(5)} \rightarrow \xi ^{(6)} \rightarrow \xi ^{(7)} \rightarrow \xi ^{(4)}$ with their sequential relationships stored as an adjacency matrix in $G$ .", "Two key factors were considered when we used two episodes for evaluation.", "One factor is to demonstrate the ability of the models to extract only the memories belonging to the relevant stored
episode, even in the presence of other episodes.", "The second factor is that the successful generation of the stored episode requires long-term non-equilibrium behavior, meaning that the meta-stable states observed do not lead to an equilibrium ground state.", "We used an iterative process to find fixed points of the energy surface: starting from some neuron state on the energy landscape, the state is updated to follow the direction of the energy slope until no more updates are possible, indicating convergence to a fixed point on the energy surface.", "This fixed point is also a meta-stable point in the network dynamics.", "We simulate LISEM with $N_f = 100, \gamma = 1.0, \alpha _s = 0.05, \alpha _c = 4.9, \mathcal {T}_f = 1.0,$ and $\mathcal {T}_d = 100.0$ .", "The output of the system is evaluated by considering the overlaps $m_{i}(V_f)$ of the state of the feature neurons with the $i^{\text{th}}$ preloaded memory of the system, defined as $m_{i}(V_f) = (1/N_f) \, \sum _{j}^{N_f} (\xi ^{(i)})_j \, (V_f)_j$ .", "We simulate DSEM with $N_f = 100, \gamma = 1.0, \alpha _s = 1.0, \alpha _c = 4.9, \mathcal {T}_f = 1.0$ , and $\mathcal {T}_d = 100.0$ .", "The network synapses are randomly initialized with values from the range $[-1, 1]$ .", "The sequence of memories is presented one after the other, with each memory supplied as input to $V_f$ for 4500 timesteps.", "The learning algorithm is run with learning timescales of $\mathcal {T}^{\Xi }_L = 6.2\times 10^{5}$ , $\mathcal {T}^{\Phi }_L = 6.2\times 10^{7}$ and the model parameters $\alpha _c = 0.991$ , $\beta _c = 0.621$ , $\alpha _s = 1.0$ .", "Here we give the mathematical derivations we used to introduce essential concepts in the main text.", "The most important aspect of the model we discussed is the energy function.", "We use the function to show the behavior of the system in the adiabatic case and compute instantaneous attractors.", "Here, we will derive the energy function of GSEMM starting from a previously derived energy function used for associative memory.", "Assume a signal $\mathcal {I}_h$ applied to the neurons in the hidden layer: $E = \Big [ V_f^\top \, \sigma _f(V_f) - L_f \Big ] + \Big [ (V_h - \mathcal {I}_h)^\top \, \sigma _h(V_h) - L_h \Big ] - \Big [ \sqrt{\alpha _s} \, \sigma _f(V_f)^\top \, \Xi \, \sigma _h(V_h) \Big ] \, .$", "In our case, the input signal comes from the delay signal activity $V_d$ and is given as $\mathcal {I}_h = \alpha _c \Phi ^\top \, \Xi ^\top \, V_{d}$ from our governing dynamics.", "Substituting this in the energy equation gives $E = \Big [ V_f^\top \, \sigma _f(V_f) - L_f \Big ] + \Big [ (V_h - \alpha _c \, \Phi ^\top \, \Xi ^\top \, V_d)^\top \, \sigma _h(V_h) - L_h \Big ] - \Big [ \sqrt{\alpha _s} \, \sigma _f(V_f)^\top \, \Xi \, \sigma _h(V_h) \Big ] \, .$", "Expanding this equation, we get $E = \Big [ V_f^\top \, \sigma _f(V_f) - L_f \Big ] + \Big [ V_h^\top \, \sigma _h(V_h) - L_h \Big ] - \Big [ \sqrt{\alpha _s} \, \sigma _f(V_f)^\top \, \Xi \, \sigma _h(V_h) \Big ] - \alpha _c \Big [ V_d^\top \, \Xi \, \Phi \, \sigma _h(V_h) \Big ] \, .$", "To find how the energy function behaves along the dynamical trajectory of the system, we take the derivative of the energy function with respect to time: $\frac{\partial E}{\partial t} = \Big [ V_f^\top \, \mathcal {J}(\sigma _f) \, \frac{\partial V_f}{\partial t} + \sigma _f(V_f)^\top \frac{\partial V_f}{\partial t} - \frac{\partial L_f}{\partial t} \Big ] + \Big [ V_h^\top \, \mathcal {J}(\sigma _h) \, \frac{\partial V_h}{\partial t} + \sigma _h(V_h)^\top \, \frac{\partial V_h}{\partial t} - \frac{\partial L_h}{\partial t} \Big ] - \Big [ \sqrt{\alpha _s} \, \sigma _f(V_f)^\top \, \Xi \, \mathcal {J}(\sigma _h) \, \frac{\partial V_h}{\partial t} + \sqrt{\alpha _s} \, \sigma _h(V_h)^\top \, \Xi ^\top \, \mathcal {J}(\sigma _f) \, \frac{\partial V_f}{\partial t} \Big ] - \alpha _c \, \Big [ V_d^\top \, \Xi \, \Phi \, \mathcal {J}(\sigma _h) \, \frac{\partial V_h}{\partial t} + \sigma _h(V_h)^\top \, \Phi ^\top \, \Xi ^\top \, \frac{\partial V_d}{\partial t} \Big ] \, .$", "The derivatives of the Lagrangian terms can be written as $\frac{\partial L_f}{\partial t} = \sigma _f(V_f)^\top \frac{\partial V_f}{\partial t}$ and $\frac{\partial L_h}{\partial t} = \sigma _h(V_h)^\top \frac{\partial V_h}{\partial t}$ .", "Substituting these, $\frac{\partial E}{\partial t} = \Big [ V_f^\top \, \mathcal {J}(\sigma _f) \, \frac{\partial V_f}{\partial t} + V_h^\top \, \mathcal {J}(\sigma _h) \, \frac{\partial V_h}{\partial t} \Big ] - \Big [ \sqrt{\alpha _s} \, \sigma _f(V_f)^\top \, \Xi \, \mathcal {J}(\sigma _h) \, \frac{\partial V_h}{\partial t} + \sqrt{\alpha _s} \, \sigma _h(V_h)^\top \, \Xi ^\top \, \mathcal {J}(\sigma _f) \, \frac{\partial V_f}{\partial t} \Big ] - \alpha _c \, \Big [ V_d^\top \, \Xi \, \Phi \, \mathcal {J}(\sigma _h) \, \frac{\partial V_h}{\partial t} + \sigma _h(V_h)^\top \, \Phi ^\top \, \Xi ^\top \, \frac{\partial V_d}{\partial t} \Big ] \, .$", "Rearranging terms, $\frac{\partial E}{\partial t} = - \Big [ (\sqrt{\alpha _s} \, \Xi \, \sigma _h(V_h) - V_f)^\top \, \mathcal {J}(\sigma _f) \, \frac{\partial V_f}{\partial t} + ( \sqrt{\alpha _s} \, \Xi ^\top \, \sigma _f(V_f) + \alpha _c \, \Phi ^\top \, \Xi ^\top \, V_d - V_h)^\top \, \mathcal {J}(\sigma _h) \, \frac{\partial V_h}{\partial t} \Big ] - \alpha _c \Big [ \sigma _h(V_h)^\top \, \Phi ^\top \, \Xi ^\top \, \frac{\partial V_d}{\partial t} \Big ] \, .$", "Substituting from the dynamical equations, $\frac{\partial E}{\partial t} = - \Big [ \mathcal {T}_f \, \left(\frac{\partial V_f}{\partial t}\right)^\top \mathcal {H}(L_f) \, \frac{\partial V_f}{\partial t} + \mathcal {T}_h \left(\frac{\partial V_h}{\partial t}\right)^\top \mathcal {H}(L_h) \, \frac{\partial V_h}{\partial t} \Big ] - \alpha _c \Big [ \sigma _h(V_h)^\top \, \Phi ^\top \, \Xi ^\top \, \frac{\partial V_d}{\partial t} \Big ] \, .$", "In the paper, we discuss how the energy-based learning connects to some well-known biological learning rules.", "In this section, we derive the relations we used, using the new energy function.", "We use the following rule to make changes to $\Xi $ : $\mathcal {T}^{\Xi }_L \frac{\partial \Xi }{\partial t} = - \frac{\partial E(V_f^{\text{target}})}{\partial \Xi } + \frac{\partial E(V_f^{\text{current}})}{\partial \Xi } = \Big [ \sqrt{\alpha _s} \, V_f^{\text{target}} \, \sigma _h(V_h^{\text{target}})^\top + \alpha _c \, V_d \, (\Phi \, \sigma _h(V_h^{\text{target}}))^\top \Big ] - \Big [ \sqrt{\alpha _s} \, V_f^{\text{current}} \, \sigma _h(V_h^{\text{current}})^\top + \alpha _c \, V_d \, (\Phi \, \sigma _h(V_h^{\text{current}}))^\top \Big ] = \sqrt{\alpha _s} \Big [ V_f^{\text{target}} \, \sigma _h(V_h^{\text{target}})^\top - V_f^{\text{current}} \, \sigma _h(V_h^{\text{current}})^\top \Big ] + \alpha _c \, V_d \, \big ( \sigma _h(V_h^{\text{target}}) - \sigma _h(V_h^{\text{current}}) \big )^\top \, \Phi ^\top \, .$", "We use the following rule to make changes to $\Phi $ : $\mathcal {T}^{\Phi }_L \frac{\partial \Phi }{\partial t} = - \frac{\partial E(V_f^{\text{target}})}{\partial \Phi } + \frac{\partial E(V_f^{\text{current}})}{\partial \Phi } = \Big [ \alpha _c \, \Xi ^\top \, V_d \, \sigma _h(V_h^{\text{target}})^\top \Big ] - \Big [ \alpha _c \, \Xi ^\top \, V_d \, \sigma _h(V_h^{\text{current}})^\top \Big ] = \alpha _c \, \Xi ^\top \big ( V_d \, \sigma _h(V_h^{\text{target}})^\top - V_d \, \sigma _h(V_h^{\text{current}})^\top \big ) \, .$" ] ]
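To make the learning-rule derivation above concrete, the following is a minimal NumPy sketch of one Euler step of the energy-based updates for $\Xi $ and $\Phi $ in the DSEM setting (identity feature activation, softmax-like hidden activation). It is an illustration only, not the authors' implementation: the function and variable names, the step sizes, and the simplification $\beta _c = 1$ (unit weight on the current-energy term) are assumptions made for brevity.

```python
import numpy as np

def sigma_h(v_h, gamma=1.0):
    # Softmax-style hidden activation: exponential interactions with
    # contrastive normalization, as in DSEM.
    e = np.exp(gamma * (v_h - v_h.max()))
    return e / e.sum()

def learning_step(Xi, Phi, v_f_cur, v_f_tgt, v_d,
                  alpha_s=1.0, alpha_c=0.991, gamma=1.0,
                  lr_xi=1e-5, lr_phi=1e-7):
    """One Euler step of the energy-based learning rules (beta_c = 1 assumed).

    Xi  : (N_f, N_h) feature-to-hidden synapses
    Phi : (N_h, N_h) intra-hidden (delay) synapses
    v_f_cur, v_f_tgt : current and target feature states, shape (N_f,)
    v_d : delayed feature signal, shape (N_f,)
    """
    # Hidden currents induced by the current and target feature states
    # (diabatic hidden-neuron limit, identity feature activation).
    v_h_cur = np.sqrt(alpha_s) * Xi.T @ v_f_cur + alpha_c * Phi.T @ Xi.T @ v_d
    v_h_tgt = np.sqrt(alpha_s) * Xi.T @ v_f_tgt + alpha_c * Phi.T @ Xi.T @ v_d
    s_cur, s_tgt = sigma_h(v_h_cur, gamma), sigma_h(v_h_tgt, gamma)

    # Lower the energy at the target memory and raise it at the current one:
    # a Hebbian term between feature and hidden neurons plus a delayed term
    # routed through Phi ...
    dXi = (np.sqrt(alpha_s) * (np.outer(v_f_tgt, s_tgt) - np.outer(v_f_cur, s_cur))
           + alpha_c * np.outer(v_d, s_tgt - s_cur) @ Phi.T)
    # ... and an STDP-like term between the delayed feature signal and the
    # hidden neurons for the intra-hidden synapses.
    dPhi = alpha_c * Xi.T @ np.outer(v_d, s_tgt - s_cur)

    return Xi + lr_xi * dXi, Phi + lr_phi * dPhi
```

In the paper the continuous-time updates are integrated with Runge-Kutta and much larger learning timescales; the explicit Euler step and the step sizes here are placeholders.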
2212.05563
[ [ "Bayesian inversion with {\\alpha}-stable priors" ], [ "Abstract We propose to use L\\'evy {\\alpha}-stable distributions for constructing priors for Bayesian inverse problems.", "The construction is based on Markov fields with stable-distributed increments.", "Special cases include the Cauchy and Gaussian distributions, with stability indices {\\alpha} = 1, and {\\alpha} = 2, respectively.", "Our target is to show that these priors provide a rich class of priors for modelling rough features.", "The main technical issue is that the {\\alpha}-stable probability density functions do not have closed-form expressions in general, and this limits their applicability.", "For practical purposes, we need to approximate probability density functions through numerical integration or series expansions.", "Current available approximation methods are either too time-consuming or do not function within the range of stability and radius arguments needed in Bayesian inversion.", "To address the issue, we propose a new hybrid approximation method for symmetric univariate and bivariate {\\alpha}-stable distributions, which is both fast to evaluate, and accurate enough from a practical viewpoint.", "Then we use approximation method in the numerical implementation of {\\alpha}-stable random field priors.", "We demonstrate the applicability of the constructed priors on selected Bayesian inverse problems which include the deconvolution problem, and the inversion of a function governed by an elliptic partial differential equation.", "We also demonstrate hierarchical {\\alpha}-stable priors in the one-dimensional deconvolution problem.", "We employ maximum-a-posterior-based estimation at all the numerical examples.", "To that end, we exploit the limited-memory BFGS and its bounded variant for the estimator." 
], [ "Introduction", "Inverse problems is the mathematical theory and practical interpretation of noise-perturbed indirect observations.", "Bayesian statistical inversion is the effort to formulate real-world inverse problems as Bayesian statistical estimation problems , .", "Bayesian inverse problems can be found in medical and subsurface imaging, industrial applications, and near-space remote sensing.", "The objective, for example in industrial tomography, is to detect different materials, which may have isotropic, anisotropic, or inhomogeneous features.", "This means that we typically aim at reconstructing a hidden substance from indirect noise-perturbed measurements.", "Inhomogeneities include, for example, material interfaces and rough features, and these are the main topics of this paper.", "Inverse problems are often formulated through a noise-perturbed measurement equation $\\mathbf {y}=\\mathcal {G}(u) + \\mathbf {\\eta }, \\quad \\mathbf {\\eta } \\sim \\mathcal {N}(\\mathbf {0},\\mathbf {C}),$ where $\\mathbf {y} \\in \\mathbb {R}^M$ are noisy finite-dimensional measurements, $\\mathcal {G}$ is a linear or non-linear mapping from some function space to $\\mathbb {R}^M$ , $u\\in \\mathbb {R}^d$ is the unknown with typically $d = 1,2,3$ , and $\\mathbf {\\eta }$ is noise, which we assume to be Gaussian.", "Our aim is to estimate $u$ given one realisation of $\\mathbf {y}$ .", "Inverse problems methods can be roughly divided to deterministic and statistical methods.", "In statistical framework, we model $\\mathbf {y}, u, \\mathbf {\\eta }$ as random objects .", "For practical computations, we discretize the unknown $U$ , and denote it by $\\mathbf {u}$ .", "Then the solution, within Bayesian inverse problems framework, can be represented through probability distributions via Bayes theorem, that is, the posterior distribution $\\pi (\\mathbf {u}\\mid \\mathbf {y}) = \\frac{\\ \\pi ( \\mathbf {y}\\mid \\mathbf {u}) \\pi (\\mathbf {u})}{\\pi ( \\mathbf {y}) } \\nonumber \\propto \\pi ( \\mathbf {y}\\mid \\mathbf {u}) \\pi (\\mathbf {u}), $ where $\\pi ( \\mathbf {y}\\mid \\mathbf {u})$ is the likelihood, and $\\pi ( \\mathbf {u})$ is the prior distribution of the unknown.", "We omit the normalization constant $\\pi ( \\mathbf {y})$ , and from hereon we simply use unnormalized posterior distribution.", "The choice of the prior $\\pi (\\mathbf {u})$ is practically the only tuneable object in inversion.", "The traditional choices in inverse problems are Gaussian and total variation priors for smoothing and edge-preserving inversion, respectively , , , .", "In this paper, we build upon the research line starting from the observation that total variation priors do not provide invariant estimators under mesh refinement .", "The solution to this problem was proposed through Besov priors on wavelet basis .", "In our previous papers , , we have proposed Cauchy difference priors as alternatives to Besov priors.", "Here, we extend the study from Cauchy priors to $\\alpha $ -stable priors, of which the Cauchy priors are special cases with $\\alpha =1$ , and Gaussian priors are similarly special cases with $\\alpha =2$ .", "In order to leverage $\\alpha $ -stable laws for Bayesian inverse problems, we need approximations of $\\alpha $ -stable probability densities evaluated very fast with reasonable precision , .", "Our particular interest is to implement and use discretized $\\alpha $ -stable random fields in Bayesian continuous-parameter estimation.", "We note that traditionally, stable 
distributions have been employed in financial applications, like modeling of asset time series .", "They have also been applied in biomedical engineering , remote sensing , statistical analysis of network traffic , and digital signal processing , to mention a few.", "Here we extend their usage to inverse problems." ], [ "Contributions", "Our objective is to implement numerical approximations of symmetric $\alpha $ -stable priors for Bayesian inverse problems, which requires evaluating the univariate or multivariate probability density functions of $\alpha $ -stable random variables.", "The symmetric $\alpha $ -stable probability density functions do not have elementary function expressions, except for the two special cases of Gaussian and Cauchy distributions, so the evaluation requires incorporation of an appropriate approximation method , .", "A straightforward approximation is to evaluate the inverse Fourier transform of the characteristic function of the $\alpha $ -stable distribution.", "In fact, the $\alpha $ -stable distributions are often treated and even defined through their characteristic functions.", "The inverse Fourier transform can be approximated with an adaptive numerical quadrature integration, or with the help of a discrete Fourier transform .", "The drawback of the numerical integration of the inverse Fourier transform is the cumbersome integrand, which decays slowly when $\alpha $ is small, and oscillates considerably when the argument $r$ of the transform is large , .", "The discrete Fourier transform method exploits the low computational complexity of the fast Fourier transform, but requires interpolation to evaluate the density at points outside the grid .", "Additionally, there is an alternative integral representation formula for the univariate $\alpha $ -stable probability density function , which does not involve improper integrals with oscillatory integrands, but which cannot be used when $\alpha $ is close to 1.", "The density functions of $\alpha $ -stable distributions can also be approximated using series expansions .", "Some of the existing series expansions converge to the true probability density function pointwise for any $r > 0$ , while the others are asymptotic for either $r \rightarrow 0^+$ or $r \rightarrow \infty $ , .", "The latter are particularly useful for approximating the tails of the probability density functions, which may be difficult for the methods based on numerical integration.", "Unfortunately, none of the existing $\alpha $ -stable density function approximation methods are optimal for our needs because they are either computationally too heavy to evaluate within Bayesian inversion, or not applicable for a wide range of values of $r$ and stability indices $\alpha $ .", "For this reason, we introduce a fast hybrid method to approximate the $\alpha $ -stable laws that uses both bicubic spline interpolation on pre-computed probability density grids and asymptotic series approximations.", "We also establish error bounds for the method.", "We demonstrate various $\alpha $ -stable priors on a range of Bayesian inverse problems.", "These include deconvolution problems in one- and two-dimensional grids.", "Finally, we illustrate nonlinear Bayesian inversion governed by an elliptic PDE through $\alpha $ -stable priors.", "In the numerical examples, we resort to maximum a posteriori (MAP) estimators: $\mathbf {u}_{\rm {MAP}} := \arg \max _{\mathbf {u}} \pi (\mathbf {u}\mid \mathbf {y}).$" ], [ "Outline", 
"This paper is organized as follows: In Section , we provide the necessary background material required for the paper as an introduction into $\\alpha $ -stable priors.", "This will lead onto Section , where we briefly review the existing methods and our hybrid method for approximating $\\alpha $ -stable probability density functions, and provide error bounds related to our method.", "Numerical experiments with the $\\alpha $ -stable priors are provided in Section , where we test our priors on the example problems.", "A summary of our findings and future work are provided in Section .", "Derivations of the error bounds are provided in the Appendix." ], [ "Models", "In this section we review and discuss the necessary prior forms based on $\\alpha $ -stable distributions.", "We also present some basic properties, and then present the multivariate setting and how they can be defined." ], [ "Stable distributions", "A random variable $W$ corresponding to a symmetric stable distribution, also known as $\\alpha $ -stable and Lévy $\\alpha $ -stable distribution, can be characterized in terms of a stability index $\\alpha \\in (0,2]$ (sometimes also called the tail index or the characteristic exponent), and a scale parameter $\\sigma > 0$ , in the sense that its characteristic function is given by $\\mathbb {E}[ \\exp (\\mathrm {i}\\theta W) ] = \\exp \\bigl (-(\\sigma |\\theta |)^\\alpha \\bigr ), \\quad \\theta \\in \\mathbb {R},$ in which case we write $W \\sim \\mathcal {S}_{\\alpha }( \\sigma ).$ The parameter $\\alpha $ is called the stability index because if $W_1$ and $W_2$ are two independent copies of $W$ and $A$ , $B > 0$ , then $A W_1 + B W_2 \\stackrel{d}{=} C W,$ with $C^\\alpha = A^\\alpha + B^\\alpha .$ Hence, the symmetric $\\alpha $ -stable distributions are a family of continuous probability distributions that are infinitely divisible, and closed under convolution.", "The monograph is the standard reference for stable distributions, including the wide class of non-symmetric stable distributions which we do not consider here.", "It is immediate from (REF ) that for $\\alpha = 2$ , $W$ is normally distributed (with zero mean and variance $2\\sigma ^2$ ), and that for $\\alpha = 1$ , it has a Cauchy distribution (with zero median and scale parameter being $\\sigma $ ).", "Besides these two special cases of $\\alpha $ -stable laws, there are no known closed-form expressions based on elementary function for the density functions of symmetric stable distributions (the other special cases where closed-form expressions are known consist of non-symmetric distributions, such as the univariate Holtsmark distribution).", "For $\\alpha < 2$ , it holds that $\\mathbb {E}[|W|^\\alpha ] = \\infty $ , which means that in general an $\\alpha $ -stable distribution has infinite variance, and for $\\alpha \\le 1$ its mean is not well-defined either.", "However, it does hold that $\\mathbb {E}[|W|^\\lambda ] < \\infty $ for all $\\lambda \\in (0,\\alpha )$ .", "Multivariate stable distributions can be defined in a similar but more complicated manner, with a spectral measure $\\Lambda $ in place of the scale parameter $\\sigma $ ; see .", "For our purposes it suffices to recall the definition of spherically contoured stable distributions: a random vector $\\bi {W} =\\mathrel {\\mathop :}(W_1,W_2,\\cdots ,W_d)$ on $\\mathbb {R}^d$ is said to have an spherically contoured stable distribution if its characteristic function is given by $\\mathbb {E}\\Bigl [ \\exp \\Bigl (\\mathrm {i}\\sum _{j=1}^d 
\\theta _j W_j \\Bigr ) \\Bigr ] = \\exp \\bigl (-(\\sigma |\\theta |)^\\alpha \\bigr ), \\quad \\theta \\in \\mathbb {R}^d,$ where $\\alpha \\in (0,2]$ is again a stability index, $\\sigma > 0$ is a scale parameter and $|\\cdot |$ stands for the standard $\\ell ^2$ -based Euclidean norm on $\\mathbb {R}^d$ .", "We refer to for a treatment of such distributions." ], [ "$\\alpha $ -stable priors", "A stochastic process $(W_t)_{t \\ge 0}$ is a symmetric $\\alpha $ -stable process if $\\sum _{j=1}^n a_j W_{t_j}$ is a symmetric $\\alpha $ -stable random variable for all finite $\\lbrace t_1,\\cdots ,t_n\\rbrace \\subset [0,\\infty )$ and $\\lbrace a_1,\\cdots ,a_n\\rbrace \\subset \\mathbb {R}$ .", "We refer to for the existence and construction of a wide variety of such processes.", "The process is called $\\alpha $ -stable field, if the previous definition is satisfied for the multivariate case $\\lbrace t_1,\\cdots ,t_n\\rbrace \\subset \\mathbb {R}^K$ .", "In particular, we aim to apply discretized priors corresponding to a Lévy $\\alpha $ -stable motion, which we take to mean an $\\alpha $ -stable process $(W_t)_{t \\ge 0}$ with some given initial distribution $W_0 \\sim \\mu $ and independent increments that satisfies $W_t - W_s \\sim \\mathcal {S}_{\\alpha }( |t-s|^{1/\\alpha } ) \\quad \\textrm {for all} \\quad s,\\,t \\in [0,\\infty ), \\, t \\ne s.$ For $\\alpha < 2$ , the Lévy $\\alpha $ -stable motion generally does not have continuous sample paths.", "However, by , there exists a version of this process with cádlág paths satisfying $\\mathbb {P}( W_t = W_{t-}) = 1 \\quad \\textrm {for all} \\quad t > 0.$ We more generally refer to for an overview of the analytical properties of Lévy $\\alpha $ -stable motions and related processes, including a description of their infinitesimal generators.", "A Lévy $\\alpha $ -stable motion with initial distribution $\\mu $ can be discretized as follows.", "For $\\triangle \\in (0,1)$ , define the Markov chain $(u^\\triangle _k)_{k \\ge 0}$ by $u^\\triangle _0 \\sim \\mu $ , and $u^\\triangle _{k+1} - u^\\triangle _k \\sim \\mathcal {S}_{\\alpha }( \\triangle ^{1/\\alpha } )$ independently for all $k \\ge 0$ .", "Then by writing $(W^\\triangle _t)_{t \\ge 0}$ for the appropriately-scaled, piecewise constant cádlág process stemming from the Markov chain $(u^\\triangle _k)_{k \\ge 0}$ , i.e.", "$W^\\triangle _t \\mathrel {\\mathop :}=u^\\triangle _{\\lfloor t/\\triangle \\rfloor },$ it is easy to verify using the basic properties of stable distributions along with (REF ) that $\\lim _{\\triangle \\rightarrow 0^+} W^\\triangle = W$ in the sense of finite-dimensional distributions, i.e.", "$\\lim _{\\triangle \\rightarrow 0^+} \\mathbb {E}\\bigl [h\\bigl (W^\\triangle _{t_1},\\cdots ,W^\\triangle _{t_n}\\bigr )\\bigr ] = \\mathbb {E}\\bigl [h\\bigl (W_{t_1},\\cdots ,W_{t_n}\\bigr )\\bigr ],$ for all finite $\\lbrace t_1,\\cdots ,t_n\\rbrace \\subset [0,\\infty )$ and bounded and continuous functions $h\\colon \\mathbb {R}^n\\rightarrow \\mathbb {R}$ .", "An alternative way to construct such a discretization, localized to a finite interval, is to partition the interval by $N$ equispaced points, and define the unnormalized density function of $\\mathbf {u} \\mathrel {\\mathop :}=(u_i)_{i=1}^{N}$ on these points as $\\pi (\\mathbf {u}) \\propto \\mu (u_1) \\prod _{i=2}^N f( u_i - u_{i-1}; \\alpha , \\sigma ),$ where $\\mu $ is the initial distribution of the process $(W_t)_{t\\ge 0}$ above and $f(\\, \\cdot \\,;\\alpha , \\sigma )$ stands for the stable 
density function with stability index $\alpha $ and appropriately-chosen scale parameter $\sigma $ .", "It has recently been shown in that a certain class of $\alpha $ -stable priors (including the Lévy $\alpha $ -stable motion) for Bayesian inverse problems are discretization invariant, in the sense that the posteriors corresponding to the finite-dimensional discretized priors converge to the infinite-dimensional posterior corresponding to the original $\alpha $ -stable process.", "This is an attractive property for numerical edge-preserving inversion, and in stark contrast to finite-variance priors, whose discretizations always converge to a Gaussian process.", "The only two-dimensional $\alpha $ -stable field we consider in this paper is a simple generalization of the quasi-isotropic Cauchy first order difference prior , defined analogously to (REF ).", "That is, the probability density function of an $\alpha $ -stable random field $\mathbf {u}$ discretized through finite differences on a two-dimensional rectangular domain $\Omega \subset \mathbb {R}^2$ satisfies $\pi (\mathbf {u}) \propto \pi _{\partial \Omega }(\mathbf {u}_{\partial \Omega }) \prod _{i,j \notin \partial \Omega } f_B(u_{i,j}-u_{i,j-1},u_{i,j}-u_{i-1,j}; \alpha , \sigma ),$ where $\partial \Omega $ denotes the set of the left and bottom indices on the grid, and $f_B(\cdot ,\cdot ; \alpha , \sigma )$ the symmetric bivariate $\alpha $ -stable probability density function.", "The probability density function $ \pi _{\partial \Omega }$ is applied on the grid points at the left and bottom boundary of the grid to make the resulting distribution of $\mathbf {u}$ proper." ], [ "Hierarchical $\alpha $ -stable priors", "Hierarchical priors are predominantly used with Gaussian priors , , .", "With these priors, we can model discontinuities and other features with varying scale or smoothness of the target function.", "Unfortunately, the computational complexity of the canonical Gaussian priors is cubic with respect to the number of training points, unless a special formulation of the process is employed, like a stochastic partial differential equation .", "The hierarchical priors might require several layers on top of each other to perform well, while having too many layers may not offer any additional expressive capability but rather overfit to the data.", "We aim to construct and demonstrate simple two-layer Markovian hierarchical $\alpha $ -stable priors that might prove useful without the same computational or implementation complexity of the hierarchical Gaussian processes.", "We model the scale or the stability of a discretized $\alpha $ -stable process as another $\alpha $ -stable process.", "This is possible due to the simple Markovian construction of the first order difference prior, which effectively allows expressing the normalization constant of the joint distribution of the discretized process and its parameter processes in a closed form.", "Specifically, a hierarchical $\alpha $ -stable difference process $\mathbf {u}$ with scale $\sigma = G(c)$ and stability $\alpha = H(s)$ , both based on other discretized $\alpha $ -stable difference processes, can be constructed as follows: $\pi (\mathbf {u},\mathbf {c}, \mathbf {s}| \mathbf {y}) \propto \pi ( \mathbf {y}|\mathbf {u}) \pi (\mathbf {u}|\mathbf {c},\mathbf {s}) \pi (\mathbf {c}) \pi (\mathbf {s}) = \pi (\mathbf {y}|\mathbf {u}) f\Big ( u_1; H(s_1), G(c_1) \Big ) f( c_1; \alpha _c, \sigma _c ) f( s_1; \alpha _s, \sigma _s ) \cdot \prod _{i=2}^N f\Big ( u_i - u_{i-1}; H(s_i), G(c_i) \Big ) f( c_i-c_{i-1};\alpha _c, \sigma _c ) f( s_i-s_{i-1}; \alpha _s, \sigma _s ),$ where $H$ and $G$ are nonlinear functions with $\textrm {Range}(G) \subseteq \mathbb {R}^+$ , and $ \textrm {Range}(H) \subseteq (0,2]$ .", "$f(\cdot ;\alpha ,\sigma )$ denotes the probability density function of a univariate $\alpha $ -stable random variable with stability $\alpha $ , scale $\sigma $ , and skewness parameter $\beta =0$ .", "The conditional distribution $\pi (\mathbf {u}|\mathbf {c},\mathbf {s})$ integrates to a constant, regardless of $\mathbf {c}$ and $\mathbf {s}$ , as do the priors $\pi (\mathbf {c}) $ and $ \pi (\mathbf {s})$ , since $\sigma _s,\sigma _c,\alpha _s$ and $\alpha _c$ are fixed.", "The overall joint prior distribution is thus proper.", "To the best of our knowledge, the convergence properties of the hierarchical $\alpha $ -stable processes in the continuous-time limit are unknown; a sum of two Lévy $\alpha $ -stable random variables with different stability indices does not obey an $\alpha $ -stable distribution.", "However, continuous-time processes with local stability index varying with the state of the process, commonly called stable-like processes, are well-studied in the literature; see e.g. Chapter 7 in and Theorem 5.2 in .", "Further studies are thus needed regarding the matter, but as we demonstrate in the numerical experiments, the hierarchical $\alpha $ -stable process constructions are promising.", "Unfortunately, the presented hierarchical priors cannot be applied to the $\alpha $ -stable difference priors when the spatial dimension is greater than one.", "That is because the normalization constants of the priors are intractable due to their construction upon the distributions of increments between nearest neighbors.", "However, a Matérn-like stochastic partial differential equation prior could optionally be employed instead of the difference priors , which would effectively allow incorporating deep $\alpha $ -stable processes thanks to the tractable normalization constants."
], [ "Approximation of $\\alpha $ -stable probability density functions", "We provide a brief literature overview on existing methods approximating $\\alpha $ -stable density functions.", "Afterwards we present our hybrid method for the approximation, which we subsequently deploy in the numerical experiments section.", "Various relative error bounds for the probability density approximations are provided.", "For simplicity, we denote with $r$ both the argument of the univariate probability density functions, and the Euclidean distance of the arguments of the multivariate $\\alpha $ -stable probability density functions to the origin.", "Unless otherwise indicated, the approximations are applied for $\\sigma =1$ .", "Recall that for general symmetric $\\alpha $ -stable laws, the probability density functions for the other scale parameters are given by $f(r;\\alpha ,\\sigma ) = \\frac{1}{\\sigma ^d}f(\\frac{r}{\\sigma };\\alpha ,1) $ , where $d$ stands for the dimensionality of the distribution .", "A canonical method to approximate the $\\alpha $ -stable probability density functions is to evaluate the inverse Fourier transform of the characteristic function.", "For the symmetric univariate $\\alpha $ -stable distributions given by (REF ) with $\\sigma = 1$ , the probability density function can be expressed as $f(r;\\alpha ) = \\frac{1}{\\pi } \\int _0^{\\infty } \\cos (rt) \\exp (-t^{\\alpha }) \\mathrm {d}t.$ In theory, numerical integration allows evaluating the density of any $\\alpha $ -stable distribution at an arbitrary point $r$ with specified precision.", "The integral may be impractical to evaluate for large $r$ and small $\\alpha $ due to the severe oscillations , so oscillatory integral techniques have been proposed to address the issue .", "Additionally, there is an alternative integral representation formula , that we call Nolan's method for short, for the univariate $\\alpha $ -stable density function when $\\alpha \\ne 1$ .", "For simplicity, if we consider the case $\\beta = 0$ and $\\alpha \\ne 1$ , the law for an $\\alpha $ -stable random variable with $\\mu =0$ and $\\sigma =1$ given by the method is $f(r;\\alpha ) = \\frac{\\alpha |r|^{\\frac{1}{\\alpha -1}}}{\\pi |\\alpha -1|} \\int _0^{\\pi /2} Q(t,\\alpha ) \\exp \\left(-|r|^{\\frac{\\alpha }{\\alpha -1} } Q(t,\\alpha )\\right) \\mathrm {d}t,$ where $Q(t,\\alpha ) = \\left( \\frac{\\cos (t)}{\\sin (\\alpha t )} \\right)^{\\frac{\\alpha }{\\alpha -1}} \\frac{\\cos (\\alpha t -t)}{\\cos (t)}$ .", "In contrast to the inverse Fourier method, the integrand in Nolan's method is compactly supported and non-oscillatory.", "Unfortunately, the integrand becomes extremely peaky and narrow when $|\\alpha -1| < 0.02$ , and then the method is unpractical to use unless arbitrary precision arithmetic is used for the integration .", "Thus, the univariate symmetric $\\alpha $ -stable laws can be accurately evaluated with either Nolan's method or the inverse Fourier transform method depending on the values of $\\alpha $ and $r$ .", "The approximation methods based on the Fast Fourier transform , , are also worth mentioning.", "They are simple to implement and relatively fast to evaluate, but must be used in conjuction with interpolation to approximate the density at a point which is not part of the FFT grid.", "It has been reported that the FFT based approximation is accurate only for large $\\alpha $ .", "The approximations based on the integral representations of $\\alpha $ -stable laws are complemented by series expansions.", "A 
well-known series expansion for the univariate $\\alpha $ -stable density is of form $f(r;\\alpha ) \\sim -\\frac{1}{\\pi } \\sum _{k=1}^{\\infty } \\frac{(-1)^k \\Gamma (k\\alpha + 1) \\sin (\\frac{k\\pi \\alpha }{2})}{k!}", "r^{-k\\alpha -1},$ which is an asymptotic expansion for $\\alpha \\in (1,2)$ for $r \\rightarrow \\infty $ , and converges pointwise to the true density for $\\alpha \\in (0,1)$ .", "There is a similar series expansion, also outlined in , which is an asymptotic series for $\\alpha \\in (0,1)$ and a converging series for $\\alpha \\in (1,2)$ at $ r\\rightarrow 0^+$ .", "Furthermore, there exist methods that provide a converging series approximation for the symmetric univariate probability densities for $\\alpha \\in (0,2)$ by combining two separate series expansions , or approximate the inverse Fourier transform of the characteristic function by domain splitting and implementing different series expansions within them .", "The methodology for approximating spherically contoured multivariate $\\alpha $ -stable distributions is similar to the univariate one.", "There are several integral expressions for their probability density functions (see e.g.", "), such as $\\begin{split}f_M(r;\\alpha _M) & = \\frac{2^{1-d/2}}{\\Gamma (d/2)} \\int _0^{\\infty } (rs)^{\\frac{d}{2}} J_{d/2-1} (rs) \\exp {\\left( - s^{\\alpha }\\right)} \\mathrm {d}s,\\end{split}$ where $J_\\nu $ is the Bessel function of the first kind and $r := |\\mathbf {r}|$ .", "Analogously to the univariate case, multivariate spherically contoured $\\alpha $ -stable laws have an absolutely converging series expansion for $r > 0$ and $\\alpha \\in (0,1)$ , which is an asymptotic expansion for $\\alpha \\in (1,2)$ and $r \\rightarrow \\infty $ : ${f_M(r;\\alpha ) \\sim \\frac{-1}{2\\pi ^{d/2+1} } \\sum _{k=1}^{\\infty } \\frac{(-1)^{k}\\Gamma (\\frac{k\\alpha +2}{2})\\Gamma (\\frac{k\\alpha + d}{2})\\sin ( \\frac{k\\alpha \\pi }{2})}{ k!", "}\\Bigl ( \\frac{r}{2}\\Bigr )^{-(k\\alpha +1)}.", "}$ Likewise, there is an absolutely converging series expansion for $r > 0$ and $\\alpha \\in (1,2)$ , which is an asymptotic expansions for $\\alpha \\in (0,1)$ and $r\\rightarrow 0^+$ ." 
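The two main ingredients reviewed above, numerical integration of the inverse Fourier transform and truncated series expansions, can be compared with a few lines of Python. The sketch below is only illustrative (the paper's implementation is written in Julia), and the choice of three series terms and the test point are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, factorial

def pdf_fourier(r, alpha):
    # f(r; alpha) = (1/pi) * int_0^inf cos(r t) exp(-t^alpha) dt, Eq. (REF).
    # Accurate but slow; the integrand oscillates badly for large r and
    # decays slowly for small alpha.
    val, _ = quad(lambda t: np.cos(r * t) * np.exp(-t ** alpha),
                  0.0, np.inf, limit=500)
    return val / np.pi

def pdf_series(r, alpha, n_terms=3):
    # Truncated series of Eq. (REF); asymptotic for alpha in (1,2) as
    # r -> infinity, convergent for alpha in (0,1).
    k = np.arange(1, n_terms + 1)
    terms = ((-1.0) ** k * gamma(k * alpha + 1.0)
             * np.sin(0.5 * np.pi * k * alpha) / factorial(k)
             * r ** (-k * alpha - 1.0))
    return -terms.sum() / np.pi

# In the tail region the two approximations agree to several digits.
r, alpha = 20.0, 1.5
print(pdf_fourier(r, alpha), pdf_series(r, alpha))
```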
], [ "Hybrid method for approximating $\\alpha $ -stable laws", "None of the presented approximation method is suitable to be used within Bayesian inversion.", "The asymptotic series expansions are fast to evaluate, but not accurate enough or even applicable for all $r$ , not to mention $\\alpha $ .", "The presented numerical integration methods and the advanced series expansions , are too time-consuming to perform within Bayesian continuous-parameter estimation, since the probability density functions must be evaluated up to several hundreds of thousands of times even in the modest-dimensional settings.", "To address the issues, our hybrid approximation method is two-part.", "When $r$ is small, we approximate the $\\alpha $ -stable laws with two-variable bicubic splines that are fitted on grids of precomputed $\\alpha $ -stable log-densities with varying radius $r$ and stability $\\alpha $ .", "We employ the Julia library Interpolations.jl to evaluate the bicubic splines.", "The densities within the grid nodes are computed through the direct integral method (REF ) and Nolan's method (REF ).", "Additionally, an asymptotic series expansion of (REF ) is employed for the tails when $r$ is large in the case of univariate $\\alpha $ -stable laws.", "The methodology is the same for the bivariate symmetric $\\alpha $ -stable laws, as we use the integral expression with the Bessel function in (REF ) to build the spline grids, and (REF ) for the tail approximations.", "We use three terms in the series expansion approximations.", "The first bicubic spline grid of precomputed log-densities is applied when $r \\in [0,0.9], \\, \\alpha \\in [0.5, 1.9] $ .", "We divide the domain $[0,1.0]\\times [0.5,1.9]$ uniformly to the intervals of $h_{r} = 0.01 $ and $ h_{\\alpha }=5\\cdot 10^{-4}$ , and evaluate numerically the densities using the integral methods.", "When $|\\alpha - 1| < 0.2$ , the Fourier integral is used from (REF ), otherwise we employ the Nolan's method of (REF ).", "Even though it is reported the Nolan's method works for $|\\alpha -1| \\ge 0.02$ , the integrand in can be still difficult to evaluate numerically, since the domain of integration should be evaluated in parts near the peak of the integrand.", "The peak is located at $t_p$ which satisfies the equation $Q(t_p,\\alpha ) |r|^{\\frac{\\alpha }{\\alpha -1}} = 1$ .", "Instead of introducing improvised heuristics for the integration and domain splitting, we count on the Fourier integral for the aforementioned stability values that are tricky in Nolan's method.", "The numerically evaluated densities of both of the methods agree which each with a least 12 decimals for $|\\alpha -1| \\ge 0.2$ , so even using only the Fourier approach would be enough for our needs.", "The second bicubic spline grid is constructed in $r \\in [0,30], \\, \\alpha \\in [0.5, 1.9]$ .", "We use the same grid node spacings of $h_r = 0.01 $ and $ h_{\\alpha }=5\\cdot 10^{-4}$ in the domain $[0,30]\\times [0.5,1.9]$ for precomputation of the log-densities, and use Nolan's method for approximating the densities when $|\\alpha -1| > 0.2$ .", "However, we limit the usage of the spline only for $r \\in (0.9,29.6), \\, \\alpha \\in [0.5, 1.9]$ .", "In fact, the third bicubic approximation is employed for $r \\in [29.6,30], \\, \\alpha \\in [0.5, 1.9]$ .", "We call this region as the transition region of the approximation.", "Let us denote $r_a := 29.6$ , $r_b = 30.0$ , and $\\triangle = r_b- r_a$ .", "Let the log-density approximation given by the asymptotic series 
expansion from Equation (REF ) at the spline grid node $r_a,\\alpha _j$ be $f^s_{a,j}$ and its derivative with respect to $r$ be $D^s_{a,j}$ .", "Additionally, let the approximation given by the numerical integration at $r_b,\\alpha _j$ be $f^d_{b,j}$ , and its derivative with respect to $r $ by $D^d_{b,j}$ .", "We set the value in the transition region spline grid point value at $i,j$ to follow an auxiliary cubic Hermite interpolation as follows: $\\begin{split}f_{i,j} &= \\frac{(3\\triangle q_i^2-2q_i^3)}{\\triangle ^3} f^s_{b,j} + \\frac{(\\triangle ^3-3\\triangle q_i^2+2q_i^3)}{\\triangle ^3} f^d_{a,j} + \\frac{q_i^2 (q_i-\\triangle )}{\\triangle ^2}D^d_{b,j} \\cr &+ \\frac{q_i(q_i-\\triangle )^2}{\\triangle ^2}D^s_{a,j},\\end{split} $ where $q_i := r_i - r_a$ .", "Equation (REF ) is applied for each stability $\\alpha _j$ within the transition grid separately.", "The Hermite interpolation is only applied during the construction of the transition interpolation spline, because the evaluation of the constructed grid is performed by the bicubic spline library in Julia.", "Introducing the Hermite interpolated data points as an additional step at the transition region helps to avoid abrupt changes in the derivatives of the log-densities near the boundary of the transition region and the tail approximation, although we enforce $C^1$ continuity of the overall log-density approximation.", "The $C^1$ continuity is enforced by setting the values of derivatives with respect to $r$ on the boundary $r=0.9$ on the first spline to agree with the second spline.", "The derivatives with respect to $r$ on the boundaries $r=29.6$ and $r=30.0$ of the third spline are set to follow the values given by the second spline and the tail approximation, respectively.", "On the second spline, the second order derivatives with respect to $r$ are set to zero.", "The splines for $r\\in [0,0.9]$ and $r\\in [29.6,30.0]$ are used because the resulting systems of equations of the spline coefficients involving non-zero boundary conditions are smaller, and hence easier to solve than directly incorporating them into the coefficients of the largest spline grid.", "The overall approximation method is depicted in Figure REF .", "We do not consider the cases $\\alpha < 0.5$ or $\\alpha > 1.9$ .", "Low stability values are not in our interest, and including them would require increasing the number of nodes in the precomputed log-probability density grids to sustain the accuracy of the approximation.", "Likewise, our error estimates for the approximation would grow significantly, and the asymptotic series expansions would need to be evaluated way further from the origin than the current threshold of $r>30$ , if $\\alpha $ was very close to 2.", "The same interpolation methods and region partitioning are employed for both the univariate and bivariate symmetric $\\alpha $ -stable log-probability densities.", "Evaluation of an $\\alpha $ -stable log-density takes approximately 100 nanoseconds in the domain of the spline grids, and about 400 nanoseconds in the asymptotic tail expansions on a workstation equipped with Intel Xeon CPU E5-2698 v4 central processing unit." 
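The hybrid scheme described above can be mimicked in a few lines: precompute a grid of log-densities offline, fit a bicubic spline, and switch to the truncated tail series beyond $r = 30$ . The sketch below is a simplified Python analogue of the Julia implementation under stated assumptions: it omits the transition region and the Hermite matching of derivatives, uses a much coarser grid than $h_r = 0.01$ , $h_{\alpha } = 5\cdot 10^{-4}$ , and relies only on the Fourier integral for the grid values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import RectBivariateSpline
from scipy.special import gamma, factorial

def logpdf_quad(r, alpha):
    # Offline reference values from the inverse Fourier integral (slow).
    val, _ = quad(lambda t: np.cos(r * t) * np.exp(-t ** alpha),
                  0.0, np.inf, limit=500)
    return np.log(val / np.pi)

# Offline precomputation on a coarse (r, alpha) grid.
r_nodes = np.linspace(0.0, 30.0, 151)
a_nodes = np.linspace(0.5, 1.9, 29)
grid = np.array([[logpdf_quad(r, a) for a in a_nodes] for r in r_nodes])
spline = RectBivariateSpline(r_nodes, a_nodes, grid, kx=3, ky=3)  # bicubic

def logpdf_tail(r, alpha, n_terms=3):
    # Three-term truncated tail series, used for r > 30.
    k = np.arange(1, n_terms + 1)
    s = -np.sum((-1.0) ** k * gamma(k * alpha + 1.0)
                * np.sin(0.5 * np.pi * k * alpha) / factorial(k)
                * r ** (-k * alpha - 1.0)) / np.pi
    return np.log(s)

def logpdf_hybrid(r, alpha):
    # Online evaluation: cheap spline lookup for moderate r, series otherwise.
    return spline(r, alpha)[0, 0] if r <= 30.0 else logpdf_tail(r, alpha)
```

The online cost is dominated by the spline lookup, which is what makes repeated density evaluations inside MAP optimization affordable; the offline grid construction can be as slow as needed.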
], [ "Error bounds of the approximation", "To derive the error bounds for our hybrid approximation method, we assume the integration error of the probability densities as zero within the spline grid points.", "For simplicity, we ignore the transition region from the error estimates as it can be anyway left out from the method with the expense of having less regular approximation, respectively.", "Then we make use of the properties of bicubic splines as follows.", "The error estimates below can be found in .", "Theorem 3.1 Write $f_T \\mathrel {\\mathop :}=f(r;\\alpha )$ for the (true) density of the symmetric $\\alpha $ -stable distribution with $\\sigma = 1$ , and by $f_A \\mathrel {\\mathop :}=f_A(r;\\alpha )$ the bicubic spline interpolation described above.", "The error caused by the bicubic spline approximation can then be approximated by , $\\begin{split}|| \\log f_T - \\log f_A||_{\\infty } \\le & \\frac{5}{384}||(\\log f_T)^{(4,0)}||_{\\infty } h_r^4 + \\frac{81}{64}||(\\log f_T)^{(2,2)}||_{\\infty } h_r^2 h_{\\alpha }^2 \\\\&\\qquad + \\frac{5}{384}||(\\log f_T)^{(0,4)}||_{\\infty } h_{\\alpha }^4,\\end{split}$ where $h_r$ and $h_{\\alpha }$ stand for the lengths of the $\\log $ -density interpolation grid cells in the directions of the radius and the stability index respectively, and the superscripts $^{(i,j)}$ stand for partial derivatives of the form $\\frac{\\partial ^{i+j}}{\\partial r^i \\partial \\alpha ^j}$ .", "Estimating the partial derivatives of $\\log f_T$ appearing in the suprema in (REF ) involves estimating the partial derivatives of $f_T$ from above, and $f_T$ itself from below.", "Both types of estimate are tricky to do in a precise manner, due to the lack of any sort of a closed-form expression for $f_T$ .", "We will make use of several strategies which are variably efficient for different regions of $(r;\\alpha )$ when estimating the partial derivatives of $f_T$ , and subsequently use the first-order variants of these estimates in conjunction with a precomputed grid (similar to the one detailed in the context of spline approximations above) and the fundamental theorem of calculus for the lower bounds of $f_T$ .", "The full details of these estimates are presented in the Supplementary Material, but we give a taste of the methodology and results here, starting with the univariate case.", "First, we can obtain crude uniform bounds (with respect to $r$ ) for each $(f_T)^{(i,j)}$ by e.g.", "differentiating (REF ) under the integral sign and eliminating the resulting oscillatory term simply using the triangle inequality.", "We can somewhat refine this pointwise for “moderate” $r > 1$ by using standard oscillatory integral techniques (which basically amount to partial integration against a sufficiently quickly vanishing function, resulting in an upper bound of order $r^{-1}$ ).", "This is largely the best we can do for moderate values of $r$ , where neither of the asymptotic expansions (see (REF ) and the subsequent discussion) come close to approximating $f_T$ well with only a couple summands – the region of said “moderate” values will of course depend on $\\alpha $ .", "For larger values of $r$ , we may exploit the expansion (REF ) pointwise with e.g.", "2–3 summands and an explicit (albeit complicated) expression for the remainder term, due to Bergström .", "Some of the integrals we encounter here (corresponding to the remainder term, particularly with partial derivatives with respect to $\\alpha $ ) are highly intractable in the mathematical sense of the word, but 
non-oscillating and well-behaved enough for efficient numerical estimation, yielding an upper bound that decreases to an order roughly comparable to that of $f_T$ as $r \rightarrow \infty$, and whose asymptotic constants stay sufficiently tame for $\alpha \in [0.5,1.9]$.", "This all holds, mutatis mutandis, for $r \rightarrow 0^{+}$ as well.", "We thus obtain several different kinds of upper bounds for $|(f_T)^{(i,j)}(r;\alpha )|$, pointwise with respect to $(r,\alpha )$.", "With some minor additional work, we may loosen the estimates very slightly so that they become uniform for $r \in [r_j,r_{j+1}]$ and $\alpha \in [\alpha _i,\alpha _{i+1}]$, where $\bigcup _{i,\, j} [\alpha _i,\alpha _{i+1}] \times [r_j,r_{j+1}] = [0.5,1.9] \times [0,30],$ is a tiling of the relevant parameter space with $r_{j+1}-r_j = \alpha _{i+1} - \alpha _i = \triangle > 0$.", "Here $\triangle$ stands for a (sufficiently small) discretization parameter; $\triangle = \frac{2}{10^3}$ in our numerical simulations.", "With the above upper estimates for the partial derivatives of $f_T$, we are in a position to estimate $f_T$ from below with reasonable accuracy.", "Namely, noting that $f_T(r;\alpha )$ is for fixed $\alpha$ always a decreasing function of $r$, we may precompute $f_T(r_j;\alpha _i)$ at the nodes of the discretized grid, and thus use the fundamental theorem of calculus to obtain $\begin{split}& \inf _{\alpha _i \le \alpha \le \alpha _{i+1}, \;r_j \le r \le r_{j+1}}f_T(r;\alpha )\ge \min \bigl ( f_T(r_{j+1};\alpha _i), f_T(r_{j+1};\alpha _{i+1}) \bigr ) \\&\qquad \qquad \qquad \qquad - \frac{\triangle }{2} \sup _{\alpha _i \le \alpha \le \alpha _{i+1}, \;r_j \le r \le r_{j+1}}\bigl |(f_T)^{(0,1)}(r;\alpha )\bigr | .\end{split}$", "The discussion above also applies to the bivariate case, with the additional difficulty of the Bessel function of the first kind $J_0$ in the representation (REF ).", "In the Supplementary Material, we present analogous asymptotic expansions with respect to $r \rightarrow 0^{+}$ and $r \rightarrow \infty$ with quantitative remainder term estimates.", "In particular for $r \rightarrow \infty$, we present a modification of Bergström's complex-analytical treatment, which allows us to obtain estimates for the remainder term in the bivariate case which are not immediate from the asymptotic expansions presented in .", "For the univariate and bivariate $\alpha$-stable log-densities, the obtained bounds for the partial derivatives lead to the final absolute error estimates $\sup _{r \le 30, \; 0.5 \le \alpha \le 1.9} \,|\log \,f_T(r;\alpha ) - \log \, f_A(r;\alpha )| \le {\left\lbrace \begin{array}{ll}0.00038 & \quad \textrm {(univariate case);} \\0.22 & \quad \textrm {(bivariate case).}\end{array}\right.}$", "The accuracy of the approximations is sufficient for our needs.", "The error estimate for the bivariate log-density is orders of magnitude larger than for the univariate one due to the significantly larger suprema of the partial derivatives within the domain of the splines, particularly for small $\alpha$.", "If the lower bound of $\alpha$ in the approximation domain were increased to 0.7, the bivariate log-density error estimate would decrease to 0.013.", "For the relative error bounds of the tails, we have the following estimates.", "Denoting by $\mathcal {S}_3(r;\alpha )$ the sum in (REF ) (resp. (REF )) with 3 in place of $\infty$, and by $f_T(r;\alpha )$ the true
density, we have $\\sup _{r > 30, \\; 0.5 \\le \\alpha \\le 1.9} \\,|\\log \\,f_T(r;\\alpha ) - \\log \\, \\mathcal {S}_3(r;\\alpha )| \\le {\\left\\lbrace \\begin{array}{ll}0.00097 & \\quad \\textrm {(univariate case);} \\\\0.0017 & \\quad \\textrm {(bivariate case).}\\end{array}\\right.", "}$ These estimates are also detailed further in the appendix.", "Figure: Regions of the hybrid interpolation method for approximating symmetric α\\alpha -stable laws.", "Turquoise: the first bicubic interpolation grid.", "Violet: the second bicubic interpolation grid.", "Red: the transition region of the spline and the asymptotic series.", "Yellow: the asymptotic series expansion for r→∞r \\rightarrow \\infty .", "Gray: the implemented approximation method is not employed." ], [ "Numerical examples", "We demonstrate the-$\\alpha $ -stable priors in three numerical experiments.", "We employ the priors first in a deconvolution, which is a well-known linear inverse problem.", "Moreover, the same priors are used in estimating the conductivity field of an elliptic PDE in two spatial dimensions.", "For now, we use only MAP-estimators in the reconstructions, because full Bayesian inference with the presented random field priors requires usage of MCMC methods that has been shown to struggle with such heavy-tailed priors .", "Since the assessment of the reconstructions in inverse problems cannot be usually accomplished in a unified manner, we do not intentionally tabulate any metrics of the reconstructions, such as $L^2$ errors of the reconstructions, in the manuscript.", "The Julia codes of the experiments can be found from https://github.com/suurj/alpha-stable." ], [ "MAP estimation", "Evaluation of the MAP estimates (Equation (REF )) in Bayesian continuous-parameter estimation is usually performed with the help of a nonlinear conjugate gradient algorithm, a quasi-Newton method, a matrix-free truncated Newton method or a combination of them , , , .", "Thanks to their generality, the methods are applicable for both linear and nonlinear inverse problems.", "Additionally, the methods do not require the exact full Hessian of the objective function to be evaluated unlike the standard Newton method does, what is crucial from a computational perspective.", "Under certain assumptions regarding the convexity of the objective function and its Hessian-gradient products, the standard Newton method is quadratically convergent, while the quasi-Newton methods are asymptotically superlinearly convergent , and the nonlinear conjugate gradient method is either linearly or superlinearly convergent depending on how the algorithm is initialized .", "Hence, a quasi-Newton method such as the limited-memory Broyden–Fletcher–Goldfarb–Shannon algorithm (L-BFGS) is often selected instead of the conjugate gradient method in large-scale convex optimization .", "It is worth mentioning that a set of new optimization algorithms for large-scale problems have been recently proposed in the field of deep learning.", "Algorithms such as adaptive moment estimation (Adam) have been proposed to accelerate training of neural networks with large input datasets, and to alleviate the issue of over-fitting the network parameters through various ensemble training techniques.", "However, the optimization algorithms tailored for deep learning do not offer any noteworthy advantages over the classical optimization methods in our numerical examples, as no training or large datasets are involved.", "For these reasons, maximization of the log-posteriors is done 
through the L-BFGS method in the deconvolution experiments.", "Moreover, we resort to the bounded L-BFGS algorithm in the inversion of the conductivity field of a linear elliptic PDE.", "As the numerical implementations of the limited-memory BFGS algorithms, we use Optim.jl for the unconstrained L-BFGS, and the Julia wrapper LBFGSB.jl of the original Fortran implementation of L-BFGS-B .", "Lastly, we want to emphasize that the presented $\alpha$-stable random field priors often make the posteriors multimodal , and finding global maxima of such distributions is virtually impossible.", "Employing a different optimization algorithm for the MAP estimator than those used here would likely affect only the computational time needed until convergence is achieved." ], [ "One-dimensional deconvolution", "As the first numerical experiment, we demonstrate the first order $\alpha$-stable difference priors with varying stability and scale.", "The target function $\mathbf {u}$ is discretized on 500 grid points, and its convolution with the normalized kernel $g(s,t) = 25 \exp (-50 |s-t|),$ is evaluated at 60 equispaced points within the support of the target function.", "The function $\mathbf {u}$ includes both discontinuities and piecewise realizations of a Gaussian process with Matérn covariance to demonstrate the properties of the priors.", "Finally, we add white Gaussian noise with variance $0.02^2$ to the convolutions, so the likelihood function of the experiment is Gaussian.", "In the MAP estimation, we use a matrix approximation of the convolution operator, as we do in the measurement data generation step.", "In addition to using fixed scale and stability in the first order $\alpha$-stable prior, we also consider them as functions to be estimated through the hierarchy defined in Equation REF .", "The ground truth function is plotted in Figure REF , and the reconstructions in Figures REF , REF , REF and REF .", "For all the MAP estimates, we use equispaced grids with 120 points.", "This implies that the posterior distribution is 360-dimensional if both the scale and the stability are considered as processes.", "The MAP estimates with the non-hierarchical $\alpha$-stable first order difference priors in Figure REF demonstrate the effect of altering the stability $\alpha$ or the scale $\sigma$ of the distribution of the increments in the prior.", "As a rule of thumb, the smaller the stability $\alpha$ is, the more strongly the prior favors non-Gaussian increments, so most increments are close to zero.", "The larger the scale $\sigma$ is, the greater the variability allowed within the increments.", "Stability values $1 \le \alpha \le 1.9$ are particularly useful for reconstructing the target function in this case.", "Such priors are able to favor the existence of Gaussian-like parts of the ground truth function when needed.", "If the estimation were done using stationary Gaussian priors, the MAP estimate would either be over-smoothed and incapable of locating the discontinuity at the boxcar, or it could detect the discontinuity at the expense of being very sensitive to noise.", "Considering the stability of the prior as another first order $\alpha$-stable process turned out to be less successful.", "We fix the scale of the process $\mathbf {u}$ to $\sigma =0.01$, and set its untransformed stability process $\mathbf {s}$ to follow an $\alpha$-stable process with the parameters tabulated in Figure REF .", "To guarantee that
$0.51 \\le \\alpha \\le 1.9$ , we apply a transformation $\\alpha = 0.51 + 1.39 S(s),$ where $S(x) = \\frac{1}{1+ e^{-x}}$ .", "In both Figures REF and REF , the stability processes are shown in their transformed values.", "The stability process seems to be either close to constant ($ \\approx 1.25$ ) or decreasing towards the right side of the domain in all the tabulated cases.", "However, there is some variation in the stability process in the middle of the domain when the untransformed process have the parameters $\\alpha =0.8, \\sigma =0.1$ .", "The phenomenon may suggest that having the stability as a process does not work well as a prior.", "When the stability of the untransformed stability process is $\\alpha _s=1.4$ and its scale $\\sigma _s=0.05$ (Equation (REF )), the reconstruction of $\\mathbf {u}$ is smooth at first, but as the stability decreases, the reconstruction becomes more discontinuous and non-Gaussian.", "In the one-dimensional deconvolution experiment, the best results in terms of the reconstruction agreement with the ground truth are obtained when the scale of $\\mathbf {u}$ process is considered as a process instead of its stability.", "We fix the stability of $\\mathbf {u}$ to $\\alpha =1.9$ , and instead modeled the untransformed scale process $\\mathbf {c}$ with another $\\alpha $ -stable process with parameters shown in Figure REF .", "The final scale process is given by $\\sigma = 0.001 + 0.05 S(c).$ The reconstructions where the untransformed scale process have scale of $0.05 \\le \\sigma _c \\le 0.1$ and stability between $0.8\\le \\alpha _c \\le 1.9$ , agree well with the ground truth and even with each other.", "For the last, setting both the scale and the stability of $\\mathbf {u}$ as $\\alpha $ -stable processes seem to suffer from the same issue as the stability process case.", "Namely, either or both of the parameter processes remain close to constant throughout the domain, and the MAP estimates for $\\mathbf {u}$ are no better than in the simpler $\\alpha $ -stable priors.", "We set the scale of the untransformed stability index process to $\\sigma _s=0.05$ (Equation (REF )), and the stability of the untransformed scale parameter process to $\\alpha _c=1.9$ .", "Hence, the scale parameters $\\sigma $ in Figure REF refer to the scale of the untransformed stability process ($\\sigma _s$ ), and the the stability indices to the untransformed scale process ($\\alpha _c$ ).", "We transform the processes with the same sigmoid functions as in the other two cases, using Equations (REF ) and (REF ).", "Whether the poor reconstructions are caused by overfitting, poorly selected hyperparameters, unidentifiability, or something else, they shall be investigated.", "Figure: Ground truth, measurements, and the reconstructions with fixed stability and scale parameters in the one-dimensional deconvolution experiment with α\\alpha -stable difference prior.", "Red lines: ground truth.", "Black lines: MAP estimate for the function.Figure: Reconstructions when considering the stability α\\alpha as a function.", "Red lines: ground truth.", "Black lines: MAP estimate for the function.", "Blue lines: MAP estimate of the stability process.Figure: Reconstructions when considering the scale σ\\sigma as a process.", "Red lines: ground truth.", "Black lines: MAP estimate for the function.", "Green lines: MAP estimate of the scale process.Figure: Reconstructions when considering both the stability α\\alpha and the scale σ\\sigma as processes.", "Red lines: ground truth.", "Black 
lines: MAP estimate for the function.", "Blue lines: MAP estimate of the stability process.", "Green lines: MAP estimate of the scale process.", "The left axes of the subfigures stand for the stability, the right ones for the scale." ], [ "Two-dimensional deconvolution", "We also conduct a deconvolution experiment in two dimensions.", "The ground truth function and the reconstructions are plotted in Figure REF .", "We estimate the blurred test function with the help of the spherically symmetric bivariate $\alpha$-stable first order difference priors (REF ).", "The ground truth function is supported on $[-1,1]^2$.", "It is evaluated at a uniform grid of size $333\times 333$, after which the synthetic measurement dataset is approximated through interpolation at $100^2$ points that are scattered according to a low-discrepancy sequence within the domain of the target function.", "A Gaussian convolution kernel $g$ was employed in the blurring: $g(\mathbf {s},\mathbf {t}) = \frac{150}{\pi } \exp \left(-150 ||\mathbf {s}-\mathbf {t}||^2 \right).$", "The convolution is computed with a matrix approximation, as in the one-dimensional case.", "A grid of $256\times 256$ nodes is used in the MAP estimation.", "The MAP estimates of the reconstructions are consistent with the one-dimensional deconvolution experiment.", "Increasing the stability of the $\alpha$-stable difference priors manifests in more Gaussian-like features in the MAP estimates.", "The distribution with $\alpha =0.51$ and $\sigma =0.01$ is probably too spiky and heavy-tailed as the difference prior, since the reconstruction lacks any features resembling the ground truth objects.", "A notable feature is the existence of diagonal discontinuities in certain MAP estimates, as in the case $\alpha =0.8$ and $\sigma =0.1$ for the object that consists of two overlapping spheres.", "Although the construction of the $\alpha$-stable difference prior incorporates bivariate symmetrically contoured $\alpha$-stable distributions, the prior is likely not fully isotropic.", "In fact, even the isotropic and upwind total variation priors are not perfectly isotropic, and a method has been proposed to alleviate the issue .", "Unfortunately, the technique cannot be applied to the presented $\alpha$-stable priors, so the matter of improving the isotropy must be considered separately.", "Figure: Ground truth function, its deconvolution, and the MAP estimate reconstructions of the two-dimensional deconvolution problem with $\alpha$-stable difference priors with varying scale $\sigma$ and stability $\alpha$."
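Before turning to the PDE example, a compact sketch of the MAP estimation workflow used in the deconvolution experiments may be helpful. The fragment below is a toy stand-in rather than the released code: the $\alpha$-stable increment log-density is replaced by the closed-form Cauchy case ($\alpha = 1$), and the forward matrix, data and parameter values are small placeholders; the actual experiments plug in the hybrid log-density approximation of Section 3 and the discretizations described above.

```julia
# Toy MAP estimation sketch: Gaussian likelihood + first order difference prior,
# minimized with L-BFGS from Optim.jl.  All names and values below are placeholders.
using Optim

stable_logpdf(x) = -log(π * (1 + x^2))   # stand-in (Cauchy) for the α-stable log-density

n, m   = 120, 60
xs     = range(0, 1; length = n)
A      = [25 * exp(-50 * abs(s - t)) / n for s in range(0, 1; length = m), t in xs]
u_true = Float64.((xs .>= 0.3) .& (xs .<= 0.6))   # a boxcar as a toy ground truth
y      = A * u_true .+ 0.02 .* randn(m)

σ_noise, σ_prior = 0.02, 0.05
function neg_log_posterior(u)
    misfit     = sum(abs2, A * u .- y) / (2 * σ_noise^2)
    increments = diff(u) ./ σ_prior
    # log prior of scaled increments: Σ [log f(Δu/σ) - log σ], negated
    prior = -sum(stable_logpdf, increments) + length(increments) * log(σ_prior)
    return misfit + prior
end

res   = optimize(neg_log_posterior, zeros(n), LBFGS(); autodiff = :forward)
u_map = Optim.minimizer(res)
```

For the constrained conductivity inversion, the same objective-and-gradient interface would be handed to the L-BFGS-B wrapper instead of the unconstrained solver.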
], [ "Inversion of an elliptic partial differential equation", "As the third numerical experiment with the $\\alpha $ -stable priors, we consider the nonlinear inverse problem of estimating a conductivity field $k \\in L^{\\infty }(\\Omega )$ , with Lipschitz domain $\\Omega \\subset \\mathbb {R}^2$ of an elliptic partial differential equation: $\\begin{split}-\\nabla \\cdot (k\\nabla u&) = g, \\quad x \\in \\Omega \\\\&u = 0, \\quad x \\in \\partial \\Omega ,\\end{split}$ with prescribed zero Dirichlet boundary conditions, where $u \\in H^1_0(\\Omega )$ denotes the solution of the PDE, and $g\\in L^{\\infty }(\\Omega )$ .", "The inversion is done using noisy observations $\\mathbf {y}$ of the solution of the PDE as the likelihood for $k$ .", "We discretize the PDE with the standard centred finite difference method.", "The noise is assumed to be Gaussian with observations taking the form $y_i = u_i + \\epsilon _i, \\quad \\epsilon _i \\sim \\mathcal {N}(0,0.001^2).$ Like in the other two experiments, we evaluate only the MAP estimator of the problem.", "We use a bounded limited-memory BGFS algorithm (L-BFGS-B) to calculate the MAP estimate.", "We employ the constraint $ 10^{-5} < k < 10^2$ to ensure the well-posedness of the elliptic PDE and to keep the condition number of the matrix of the discretized system of equations for the PDE small.", "The gradient of the log-posterior with respect to the discretized $k$ is calculated through a discrete adjoint method , .", "That is, we solve the adjoint equation to get the adjoint $\\mathbf {q}$ through the equation $\\left( \\frac{\\partial \\mathbf {E}}{\\partial \\mathbf {u}}\\right)^T \\mathbf {q} = \\left(\\frac{\\partial \\pi \\bigg (\\mathbf {y}|\\mathbf {u}(\\mathbf {k})\\bigg )}{\\partial \\mathbf {u}}\\right)^T,$ where $\\mathbf {E}$ denotes the system of finite difference equations of the discretized PDE, and $\\pi \\bigg (\\mathbf {y}|\\mathbf {u}(\\mathbf {k})\\bigg )$ the Gaussian likelihood function, which merely consists of solving the PDE with given $\\mathbf {k}$ and evaluating the fidelity of the obtained solution with respect to $\\mathbf {y}$ .", "The gradient of the log-posterior $\\pi (\\mathbf {k}|\\mathbf {y})$ with respect to the discretized conductivity field $\\mathbf {k}$ is then $\\frac{\\partial \\pi (\\mathbf {k}|\\mathbf {y})}{\\partial \\mathbf {k}} = -\\mathbf {q} \\frac{\\partial \\mathbf {E}}{\\partial \\mathbf {k} },$ since the likelihood depends on $\\mathbf {k}$ only through $\\mathbf {u}$ .", "We estimate the conductivity field with the same bivariate $\\alpha $ -stable difference priors as we do in the two-dimensional deconvolution.", "We use a reconstruction grid of $128\\times 128$ .", "To simulate the measurements and to avoid committing an inverse crime, we calculate the solution of the PDE using a larger finite difference grid with a size of $223\\times 223$ , and interpolate the solution at $25\\times 25$ points that are positioned at the reconstruction grid using a low-discrepancy sequence.", "The source term function $g$ of (REF ), the solution of the PDE and the ground truth conductivity, as well as the reconstructions are plotted in Figure REF .", "The shape of an double-sphere object in the conductivity field is captured the best with the smaller stability indices, while the increasing the stability seems to blur the reconstruction based on the MAP estimate.", "On the other hand, having too large scale $\\sigma $ may make the prior uninformative.", "Judging by the shape and distribution of the 
values within the reconstruction, the prior with $\alpha =0.8$ and scale $\sigma =0.1$ could be the best out of the tested parameter choices in this case.", "Figure: Functions employed in the nonlinear inversion of the conductivity function $k$ of an elliptic PDE, and the MAP estimates with symmetric $\alpha$-stable difference priors with varying choices of the scale $\sigma$ and stability $\alpha$." ], [ "Conclusion", "This work was motivated by the desire to implement approximations of the $\alpha$-stable random field priors for Bayesian inverse problems.", "Because both the Cauchy and Gaussian fields are special cases of the $\alpha$-stable random fields, our objective was to extend the prior selection to general $\alpha$-stable priors, which could prove useful in reconstructions where both Gaussian and non-Gaussian features are present.", "As the $\alpha$-stable density functions mostly lack closed-form expressions, we introduced a computationally feasible hybrid method for approximating the symmetric univariate and bivariate $\alpha$-stable probability density functions.", "The novelty of the presented method in comparison with the existing approximation methods is its accuracy and, especially, its performance.", "The method allows evaluation of the $\alpha$-stable probability log-density functions within a stability index range of $\alpha \in [0.5,1.9]$ and a radius argument range of $r \in [0,\infty )$.", "Furthermore, we provided error bounds for the log-density approximations.", "In the numerical experiments, we employed finite-difference approximations of the $\alpha$-stable first order random motion priors in one- and two-dimensional deconvolution, and we also addressed the estimation of a function governed by an elliptic PDE with the same priors.", "The MAP estimation was implemented through the standard L-BFGS method and its bounded variant.", "Our objective was to illustrate how the parameters of the $\alpha$-stable priors, such as the stability and the scale, can affect the estimation of the unknown functions.", "The results are promising in the sense that the presented priors are computationally viable, manifest in useful MAP estimates, and are yet novel compared to the existing random field priors like Gaussian, Cauchy, Besov, and total variation priors.", "Having introduced new $\alpha$-stable priors and provided examples through MAP estimates, we will consider extending the estimators to full inference, as well as to other $\alpha$-stable priors.", "For future work, we will consider Bayesian neural networks with $\alpha$-stable weights, which are possibly non-symmetric, $\beta \ne 0$.", "We believe the developed approximations will turn out to be useful in that case due to the recent studies on Bayesian neural networks with Cauchy and Gaussian weights .", "Alternatively, $\alpha$-stable random field approximations through the stochastic partial differential equation approach could be beneficial , , .", "Another consideration would be to test these priors with ensemble Kalman methods , , , which have been used and tested with hierarchical Cauchy processes ." ], [ "Acknowledgments", "We thank Sari Lasanen, Simo Särkkä and Heikki Haario for useful and interesting discussions.", "This work has been funded by the Academy of Finland (project number 336787)."
], [ "Error bounds", "We establish quantitative pointwise bounds for symmetric stable density functions $f(r;\\alpha )$ and their partial derivatives, based on series expansions due to Bergström.", "In this appendix, the symbol $\\mathcal {R}$ stands for a generic remainder term associated to a partial series expansion of a given density function.", "In particular, the symbol $\\mathcal {R}$ will generally have a different meaning from one line to another.", "Its precise meaning will always be clear from context." ], [ "Univariate bounds for $r \\rightarrow 0$", "Recall that for $r > 0$ , the density is given by the inverse Fourier transform as follows: $f(r;\\alpha ) = \\frac{1}{\\pi } \\int _{0}^{\\infty } \\cos (r t) \\mathrm {e}^{-t^\\alpha } \\mathrm {d}t = \\frac{1}{\\pi } \\Re \\Bigl \\lbrace \\int _{0}^{\\infty } \\mathrm {e}^{-\\mathrm {i}r t - t^\\alpha } \\mathrm {d}t \\Bigr \\rbrace .$ Let us first recall the asymptotic expansion for $r \\rightarrow 0^+$ .", "We may apply the Taylor series expansion for the cosine function in the first integral in (REF ), yielding $f(r;\\alpha )& = \\frac{1}{\\pi } \\sum _{k=0}^n \\frac{(-1)^k r^{2k}}{(2k)!}", "\\int _{0}^{\\infty } t^{2k} \\mathrm {e}^{-t^\\alpha } \\mathrm {d}t + \\mathcal {R}(r;\\alpha ) \\\\& = \\frac{1}{\\pi \\alpha } \\sum _{k=0}^n \\frac{(-1)^k \\Gamma (\\frac{2k+1}{\\alpha })}{(2k)!}", "r^{2k} + \\mathcal {R}(r;\\alpha ),$ for any $n \\in \\mathbb {N}_0$ .", "Since $\\Vert \\cos ^{(\\ell )}\\Vert _{L^\\infty (0,\\infty )} \\le 1$ for all $\\ell \\in \\mathbb {N}$ , we can apply Taylor's theorem and arrive at the following estimate for the remainder term: $|\\mathcal {R}(r;\\alpha )| \\le \\frac{r^{2n+2}}{\\pi (2n+2)!}", "\\int _{0}^{\\infty } t^{2n+2} \\mathrm {e}^{-t^\\alpha } \\mathrm {d}t = \\frac{\\Gamma (\\frac{2n+3}{\\alpha })}{\\pi \\alpha (2n+2)!}", "r^{2n+2}.$ Similarly, for $\\ell \\in \\mathbb {N}$ we can differentiate the first integral in (REF ) with respect to $r$ to get $\\frac{\\partial ^\\ell }{\\partial r^\\ell } f(r;\\alpha ) = \\frac{(-1)^{\\lceil \\frac{\\ell }{2}\\rceil }}{\\pi \\alpha } \\sum _{k=0}^n \\frac{(-1)^k\\Gamma (\\frac{2k+1+2\\lceil \\frac{\\ell }{2}\\rceil }{\\alpha })}{(2k+o(\\ell ))!}", "r^{2k+o(\\ell )} + \\mathcal {R}(r;\\alpha ),$ where $o(\\ell ) = 1$ if $\\ell $ is odd and $o(\\ell ) = 0$ otherwise, and $|\\mathcal {R}(r;\\alpha )| \\le \\frac{\\Gamma (\\frac{2n+3+2\\lceil \\frac{\\ell }{2}\\rceil }{\\alpha })}{\\pi \\alpha (2n+2+o(\\ell ))!}", "r^{2n+2+o(\\ell )}.$ For partial derivatives with respect to $\\alpha $ , we can use the decomposition (REF ) for $\\ell _1 \\in \\mathbb {N}_0$ , $\\ell _2 \\in \\mathbb {N}$ and $\\ell \\mathrel {\\mathop :}=\\ell _1+\\ell _2 \\in \\mathbb {N}$ to get a decomposition of the form $\\frac{\\partial ^\\ell }{\\partial r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} f(r;\\alpha )= \\frac{(-1)^{\\lceil \\frac{\\ell _1}{2}\\rceil }}{\\pi } \\sum _{k=0}^n \\frac{(-1)^k}{(2k+o(\\ell _1))!}", "\\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}}\\Bigl [\\frac{\\Gamma (\\frac{2k+1+2\\lceil \\frac{\\ell _1}{2}\\rceil }{\\alpha })}{\\alpha }\\Bigr ] r^{2k+o(\\ell _1)} + \\mathcal {R}(r;\\alpha ).$ Here the partial derivatives in the summands can be computed explicitly in terms of polygamma functions, and the remainder term can be estimated as $|\\mathcal {R}(r;\\alpha )| \\le \\frac{r^{2n+2+o(\\ell _1)}}{\\pi \\alpha ^{\\ell _2+1}(2n+2+o(\\ell _1))!}", "\\int _{0}^{\\infty } |\\log (t)|^{\\ell _2} t^{\\frac{2n+3+2\\lceil \\frac{\\ell 
_1}{2}\\rceil }{\\alpha }-1} |p_{\\ell _2}(t)| \\mathrm {e}^{-t} \\mathrm {d}t,$ where $p_{\\ell _2}$ stands for the polynomial of degree $\\ell _2$ given by $\\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}} [\\mathrm {e}^{-t^\\alpha }] = \\log (t)^{\\ell _2} p_{\\ell _2}(t^\\alpha ) \\mathrm {e}^{-t^\\alpha },$ (and $p_0 \\equiv 1$ ).", "Generally this integral cannot be evaluated exactly, but for given values of the parameters it can be estimated numerically rather efficiently, since the integrand is neither oscillating nor overly peaked.", "As in , the latter integral in (REF ) can be rotated from the positive real axis to the line $\\lbrace \\tau \\mathrm {e}^{\\mathrm {i}\\varphi } \\,:\\, \\tau > 0\\rbrace $ for an arbitrary $\\varphi \\in (-\\frac{\\pi }{\\max (2\\alpha ,1)},0)$ , resulting in $f(r;\\alpha ) = \\frac{1}{\\pi } \\Re \\Bigl \\lbrace \\mathrm {e}^{\\mathrm {i}\\varphi }\\int _0^\\infty \\mathrm {e}^{\\mathrm {e}^{\\mathrm {i}\\beta _2} r \\tau } \\mathrm {e}^{ \\mathrm {e}^{\\mathrm {i}\\beta _1 }\\tau ^\\alpha } \\mathrm {d}\\tau \\Bigr \\rbrace ,$ where $\\beta _1 \\mathrel {\\mathop :}=\\beta _1(\\alpha ) \\mathrel {\\mathop :}=\\pi + \\alpha \\varphi \\in (\\frac{\\pi }{2},\\pi )$ and $\\beta _2 \\mathrel {\\mathop :}=\\frac{3\\pi }{2} + \\varphi \\in (\\frac{\\pi }{2},\\frac{3\\pi }{2})$ .", "Expanding the term $\\mathrm {e}^{ \\mathrm {e}^{\\mathrm {i}\\beta _1 }\\tau ^\\alpha }$ and performing some elementary calculations for the summands yields $f(r;\\alpha ) & = \\sum _{k=1}^n \\frac{(-1)^{k+1}\\Gamma (k\\alpha +1)\\sin (\\frac{k\\alpha \\pi }{2})}{\\pi k!}", "r^{-k\\alpha - 1} \\\\ & \\qquad \\qquad + \\underbrace{\\frac{1}{\\pi } \\Re \\Bigl \\lbrace \\mathrm {e}^{\\mathrm {i}(\\varphi + (n+1)\\beta _1) }\\int _0^\\infty \\tau ^{(n+1)\\alpha } M_{n+1}(\\mathrm {e}^{\\mathrm {i}\\beta _1 }\\tau ^\\alpha ) \\mathrm {e}^{\\mathrm {e}^{\\mathrm {i}\\beta _2} r \\tau } \\mathrm {d}\\tau \\Bigr \\rbrace }_{=\\mathrel {\\mathop :}\\mathcal {R}(r;\\alpha )}, $ for all $n \\in \\mathbb {N}$ , where $M_{n+1}$ is (the analytic continuation of) the function $z \\mapsto (e^z - \\sum _{k=0}^n \\frac{z^k}{k!", "})/z^{n+1}$ .", "Writing $M_0(z) = \\mathrm {e}^{z}$ for notational convenience, the above expansion also holds for $n \\in \\lbrace -1,0\\rbrace $ , with the understanding that the sum is zero in this case.", "The functions $M_{k}$ above satisfy the bound $|M_{k}(z)| \\le \\frac{1}{k!", "}$ for $z$ with negative real part, as can be easily verified for $k = 0$ and consequently proved inductively using the recursive formula $\\frac{\\mathrm {d}}{\\mathrm {d}z}[z^k M_k(z)] = z^{k-1} M_{k-1}(z)$ for $k \\ge 1$ .", "Since the term $\\mathrm {e}^{i\\beta _1} \\tau ^\\alpha $ in the integral in (REF ) has a negative real part, the error term can thus be estimated by $|\\mathcal {R}(r;\\alpha )| \\le \\frac{1}{\\pi (n+1)!}", "\\int _{0}^{\\infty } \\tau ^{(n+1)\\alpha } \\mathrm {e}^{\\sin (\\varphi ) r \\tau } \\mathrm {d}\\tau = \\frac{\\Gamma ((n+1)\\alpha +1)}{\\pi (n+1)!", "|\\sin (\\varphi )|^{(n+1)\\alpha +1}} r^{-(n+1)\\alpha -1},$ and since the true value of $\\mathcal {R}$ does not actually depend on the auxiliary parameter $\\varphi \\in (-\\frac{\\pi }{\\max (2\\alpha ,1)},0)$ , the above estimate can be improved to $|\\mathcal {R}(r;\\alpha )| \\le \\frac{\\Gamma ((n+1)\\alpha +1)}{\\pi (n+1)!", "\\sin (\\pi _\\alpha )^{(n+1)\\alpha +1}} r^{-(n+1)\\alpha -1},$ where $\\pi _\\alpha \\mathrel {\\mathop :}=\\frac{\\pi }{2\\max (\\alpha ,1)}$ .", 
"Differentiating (REF ) termwise with respect to $r$ , we have $\\frac{\\partial ^\\ell }{\\partial r^{\\ell }} f(r;\\alpha )= \\frac{(-1)^{\\ell }}{\\pi }\\sum _{k=1}^n \\frac{(-1)^{k+1} }{k!}", "\\frac{\\Gamma (k\\alpha +\\ell +1)\\sin (\\frac{k\\alpha \\pi }{2})}{r^{k\\alpha +\\ell + 1}} + \\mathcal {R}(r;\\alpha ),$ with $|\\mathcal {R}(r;\\alpha )| \\le \\frac{\\Gamma ((n+1)\\alpha +\\ell +1)}{\\pi (n+1)!", "\\sin (\\pi _\\alpha )^{(n+1)\\alpha +\\ell +1}} r^{-(n+1)\\alpha -\\ell -1}.$ Finally, for pure and mixed partial derivatives with respect to $\\alpha $ , we may use the previous expansion as a stepping stone to obtain $\\frac{\\partial ^\\ell }{\\partial r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} f(r;\\alpha )= \\frac{(-1)^{\\ell _1}}{\\pi }\\sum _{k=1}^n \\frac{(-1)^{k+1} }{k!}", "\\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}}\\Bigl [\\frac{\\Gamma (k\\alpha +\\ell _1+1)\\sin (\\frac{k\\alpha \\pi }{2})}{r^{k\\alpha +\\ell _1+ 1}}\\Bigr ] + \\mathcal {R}(r;\\alpha ),$ with $|\\mathcal {R}(r;\\alpha )| \\le \\frac{1}{\\pi } \\int _{0}^{\\infty } \\tau ^{\\ell _1} \\big |\\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}}\\bigl [ \\mathrm {e}^{\\mathrm {i}(n+1)\\beta _1} \\tau ^{(n+1)\\alpha }M_{n+1}\\bigl (\\mathrm {e}^{\\mathrm {i}\\beta _1}\\tau ^{\\alpha }\\bigr )\\bigr ] \\big | \\mathrm {e}^{\\sin (\\varphi )r\\tau } \\mathrm {d}\\tau .$ In (REF ), the derivatives in the summands can again be computed in terms of the polygamma functions if necessary.", "In (REF ), the partial derivatives can be estimated in terms of the functions $M_k$ introduced above by iterating the recursive formula $M_k^{\\prime } = M_k - k M_{k+1}$ .", "The integral in (REF ) will be of the order $\\mathcal {O}(r^{-(n+1)\\alpha -\\ell _1-1} \\log (r)^{\\ell _2})$ for large values of $r$ , and one can a posteriori take $\\varphi \\rightarrow -\\pi _\\alpha $ since again the true value of $\\mathcal {R}(r;\\alpha )$ does not depend on $\\varphi $ .", "A random two-dimensional vector $\\bi {X}$ obeying a symmetric and spherically contoured bivariate stable distribution with stability index $\\alpha \\in (0,2)$ can be described in terms of its characteristic function: $\\mathbb {E}[\\exp (\\mathrm {i}\\bi {t}^{\\intercal } \\bi {X})] = \\exp (-|\\bi {X}|^\\alpha ), \\qquad \\bi {t} \\in \\mathbb {R}^2,$ where $|\\cdot |$ refers to the standard $\\ell ^2$ -based Euclidean norm.", "We refer to e.g.", "for a comprehensive account on such distributions.", "The density function $f_{\\bi {X}}$ of $\\bi {X}$ can be expressed at $\\bi {x} \\in \\mathbb {R}^2$ by the inverse Fourier transform of the characteristic function above, resulting in $f_{\\bi {X}}(\\bi {x};\\alpha ) = \\frac{1}{2\\pi } \\int _{0}^{\\infty } J_0(|\\bi {x}| t) t \\mathrm {e}^{-t^\\alpha } \\mathrm {d}t;$ see e.g. 
.", "Here $J_\\nu $ with $\\nu = 0$ stands for the Bessel function of the first kind, which can for $\\nu \\in \\mathbb {N}_0$ and $z \\in \\mathbb {C}$ be expressed as $J_\\nu (z) = \\sum _{k=0}^{\\infty } \\frac{(-1)^k}{k!", "(k+\\nu )!", "}\\Bigl (\\frac{z}{2}\\Bigr )^{2k+\\nu }.$ In this section we are interested in estimating partial derivatives of the density function $f_{\\bi {X}}$ in terms of $|\\bi {x}|$ and $\\alpha $ .", "For this purpose, we write $f(r;\\alpha ) \\mathrel {\\mathop :}=\\frac{1}{2\\pi }\\int _{0}^{\\infty } J_0(rt) t \\mathrm {e}^{-t^\\alpha } \\mathrm {d}t, \\quad r \\ge 0,$ for the radial density function of $\\bi {X}$ .", "By well-known integral representation formulas , we have the uniform bounds $\\Vert J_\\nu \\Vert _{L^\\infty (\\mathbb {R})} \\le 1$ on the real line for all $\\nu \\in \\mathbb {N}_0$ , and by standard recurrence relations we thus have $\\Vert J^{(\\ell )}_\\nu \\Vert _{L^{\\infty }(\\mathbb {R})} \\le 1$ for all $\\nu \\in \\mathbb {N}_0$ and derivatives $J^{(\\ell )}_\\nu $ of $J_\\nu $ .", "Hence we may expand the function $J_0$ in (REF ) to obtain $f(r;\\alpha ) = \\frac{1}{2\\pi \\alpha } \\sum _{k=0}^{n} \\frac{(-1)^k\\Gamma (\\frac{2k+2}{\\alpha })}{(2^k k!", ")^2} r^{2k} + \\mathcal {R}(r;\\alpha ),$ with $|\\mathcal {R}(r;\\alpha )| \\le \\frac{\\Gamma (\\frac{2n+4}{\\alpha })}{2\\pi \\alpha (2n+2)!", "}r^{2n+2}.$ Similarly, differentiating (REF ) with respect to $r$ , and (REF ) for $\\nu = 0$ with respect to $z$ , yields $& \\qquad \\frac{\\partial ^\\ell }{\\partial r^\\ell } f(r;\\alpha ) \\\\& = \\frac{(-1)^{\\lceil \\frac{\\ell }{2}\\rceil }}{2\\pi \\alpha } \\sum _{k=0}^n \\frac{(-1)^k \\bigl \\lbrace \\prod _{i=1}^{\\ell } (2k+o(\\ell )+i) \\bigr \\rbrace \\Gamma (\\frac{2k+2+2\\lceil \\frac{\\ell }{2}\\rceil }{\\alpha })}{(2^{k+\\lceil \\frac{\\ell }{2}\\rceil }(k+\\lceil \\frac{\\ell }{2}\\rceil )!", ")^2} r^{2k+o(\\ell )} + \\mathcal {R}(r;\\alpha ),$ for all $\\ell \\in \\mathbb {N}$ , where $|\\mathcal {R}(r;\\alpha )| \\le \\frac{\\Gamma (\\frac{2n+4+2\\lceil \\frac{\\ell }{2}\\rceil }{\\alpha })}{2\\pi \\alpha (2n+2+o(\\ell ))!}", "r^{2n+2+o(\\ell )}.$ Subsequently $\\frac{\\partial ^\\ell }{\\partial r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} f(r;\\alpha ) & = \\frac{(-1)^{\\lceil \\frac{\\ell _1}{2}\\rceil }}{2\\pi } \\sum _{k=0}^n \\frac{(-1)^k \\bigl \\lbrace \\prod _{i=1}^{\\ell _1} (2k+o(\\ell _1)+i) \\bigr \\rbrace }{(2^{k+\\lceil \\frac{\\ell _1}{2}\\rceil }(k+\\lceil \\frac{\\ell _1}{2}\\rceil )!", ")^2} \\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}}\\Bigl [\\frac{\\Gamma (\\frac{2k+2+2\\lceil \\frac{\\ell _1}{2}\\rceil }{\\alpha })}{\\alpha } \\Bigr ]r^{2k+o(\\ell _1)} \\\\& \\qquad \\qquad + \\mathcal {R}(r;\\alpha ),$ for all $\\ell _1 \\in \\mathbb {N}_0$ and $\\ell _2 \\in \\mathbb {N}$ , where $|\\mathcal {R}(r;\\alpha )| \\le \\frac{r^{2n+2+o(\\ell _1)}}{2\\pi \\alpha ^{\\ell _2+1}(2n+2+o(\\ell _1))!}", "\\int _{0}^{\\infty } |\\log (t)|^{\\ell _2} t^{\\frac{2n+4+2\\lceil \\frac{\\ell _1}{2}\\rceil }{\\alpha }-1} |p_{\\ell _2}(t)| \\mathrm {e}^{-t}\\mathrm {d}t.$ Here $p_{\\ell _2}$ stands for the polynomials introduced in (REF ) above.", "Again, the integrals appearing on the right-hand side well-behaved enough for numerical approximation; see the discussion following (REF ).", "Nolan records an asymptotic expansion for the density function of the amplitude distribution of $\\bi {X}$ as $r \\rightarrow \\infty $ , which for the radial density function (REF ) translates as 
$f(r;\\alpha ) = \\frac{1}{\\pi ^2} \\sum _{k=1}^{n} \\frac{(-1)^{k+1}2^{k\\alpha }\\Gamma (\\frac{k\\alpha +2}{2})^2 \\sin (\\frac{k\\alpha \\pi }{2})}{k!", "}r^{-k\\alpha -2} + \\mathcal {O}\\bigl ( r^{-(n+1)\\alpha -2}\\bigr ),$ for $\\alpha \\in (0,2)$ , $r > 0$ and $n \\in \\mathbb {N}$ .", "This can be obtained by expressing $\\bi {X}$ as a sub-Gaussian vector with respect to a certain totally skewed univariate stable distribution, which admits a similar asymptotic series expansion as described in .", "Below we will obtain (REF ) in an alternative way that allows us to quantitatively control the error term.", "We may rewrite (REF ) as $f(r;\\alpha ) = \\frac{1}{2\\pi } \\Re \\Bigl \\lbrace \\int _{0}^{\\infty } H_0(r t) t \\mathrm {e}^{-t^\\alpha } \\mathrm {d}t\\Bigr \\rbrace ,$ where $H_0 \\colon \\mathbb {C}\\setminus (-\\infty ,0] \\rightarrow \\mathbb {C}$ stands for the so-called Hankel function of the first kind , defined in terms of the Bessel functions of the first kind ($J_0$ ) and second kind ($Y_0$ ) by $H_0(z) = J_0(z) + \\mathrm {i}Y_0 (z).$ The functions $H_\\nu $ , $\\nu \\in \\mathbb {N}_0$ , can be defined similarly in terms of $J_\\nu $ and $Y_\\nu $ .", "By well-known recurrection relations and connection formulas , each derivative $H_{\\nu }^{(\\ell )}$ can again be expressed as a linear combination of functions $H_{\\nu ^{\\prime }}$ with $(\\nu -\\ell )_+ \\le \\nu ^{\\prime } \\le \\nu +\\ell $ .", "Following the contour integration procedure described in , we can then rotate the integral in (REF ) from the positive real axis to the line $\\lbrace \\mathrm {e}^{i\\varphi } \\tau \\,:\\, \\tau > 0\\rbrace $ for an arbitrary $\\varphi \\in (0, \\frac{\\pi }{\\max (2\\alpha ),1})$ to get $f(r;\\alpha ) = \\frac{1}{2\\pi } \\Re \\Bigl \\lbrace \\mathrm {e}^{2\\mathrm {i}\\varphi } \\int _{0}^{\\infty } H_0\\bigl (\\mathrm {e}^{\\mathrm {i}\\varphi } r\\tau \\bigr ) \\tau \\mathrm {e}^{\\mathrm {e}^{\\mathrm {i}\\beta _1} \\tau ^\\alpha } \\mathrm {d}\\tau \\Bigr \\rbrace ,$ where $\\beta _1 \\mathrel {\\mathop :}=\\beta _1(\\alpha ) \\mathrel {\\mathop :}=\\pi + \\alpha \\varphi $ is as in (REF ).", "More precisely, this contour integration and the associated limiting procedure is permitted because $|H_{\\nu }(z)| \\le {\\left\\lbrace \\begin{array}{ll}c_{1,\\nu } \\bigl (|z|^{-\\nu } + |\\log (z)|\\bigr ) & \\quad \\text{for} \\quad 0 < |z| \\le 1\\\\c_{2,\\nu } |z|^{-\\frac{1}{2}}\\mathrm {e}^{-\\Im (z)} & \\quad \\text{for} \\quad |z| \\ge 1\\end{array}\\right.", "}$ for all $\\nu \\in \\mathbb {N}_0$ , with certain constants $c_{i,\\nu }$ that we will not elaborate on; we refer to for this and more comprehensive asymptotic expansions for Hankel functions.", "Because of (REF ), we calso expand the term $\\mathrm {e}^{\\mathrm {e}^{\\mathrm {i}\\beta _1} \\tau ^\\alpha }$ in (REF ) and integrate termwise to get $f(r;\\alpha ) = \\frac{1}{2\\pi } \\sum _{k=0}^{n} \\frac{1}{k!}", "\\Re \\Bigl \\lbrace \\mathrm {e}^{\\mathrm {i}(2\\varphi + k\\beta _1)} \\int _{0}^{\\infty } H_0\\bigl (\\mathrm {e}^{\\mathrm {i}\\varphi } \\tau \\bigr ) \\tau ^{k \\alpha + 1} \\mathrm {d}\\tau \\Bigr \\rbrace r^{-k\\alpha -2} + \\mathcal {R}(r;\\alpha ).$ If we can show that the remainder term above is of the order $\\mathcal {O}(r^{-(n+1)\\alpha -2})$ as $r \\rightarrow \\infty $ , it follows automatically from the uniqueness property of asymptotic expansions that the principal term coincides with the one in (REF ).", "To this end we recall the functions $M_k$ from (REF ) and 
write $\\mathcal {R}(r;\\alpha ) = \\frac{1}{2\\pi }\\Re \\Bigl \\lbrace \\mathrm {e}^{\\mathrm {i}(2\\varphi + (n+1)\\beta _1)} \\int _{0}^{\\infty } H_0\\bigl (\\mathrm {e}^{\\mathrm {i}\\varphi } r\\tau \\bigr ) \\tau ^{(n+1)\\alpha + 1} M_{n+1}\\bigl (\\mathrm {e}^{\\mathrm {i}\\beta _1} \\tau ^\\alpha \\bigr ) \\mathrm {d}\\tau \\Bigr \\rbrace ,$ and since the term $\\mathrm {e}^{\\mathrm {i}\\beta _1} \\tau ^\\alpha $ above has a negative real part by assumption, we may estimate $|\\mathcal {R}(r;\\alpha )| \\le \\frac{1}{2\\pi (n+1)!}", "\\Bigl (\\int _{0}^{\\infty } |H_0(e^{\\mathrm {i}\\varphi }\\tau )| \\tau ^{(n+1) \\alpha +1} \\mathrm {d}\\tau \\Bigr ) r^{-(n+1)\\alpha -2}$ which is of the desired form.", "In light of (REF ), we may further take $\\varphi \\rightarrow \\frac{\\pi }{2 \\max (\\alpha ,1)} = \\pi _\\alpha $ so that $|\\mathcal {R}(r;\\alpha )| \\le \\frac{1}{2\\pi (n+1)!}", "\\Bigl (\\int _{0}^{\\infty } |H_0(e^{\\mathrm {i}\\pi _\\alpha }\\tau )| \\tau ^{(n+1)\\alpha +1} \\mathrm {d}\\tau \\Bigr ) r^{-(n+1)\\alpha -2}.$ Similarly, $\\frac{\\partial ^\\ell }{\\partial r^\\ell } f(r;\\alpha )& = \\frac{(-1)^{\\ell }}{\\pi ^2} \\sum _{k=1}^{n} \\frac{(-1)^{k+1}2^{k\\alpha }\\bigl \\lbrace \\prod _{i=2}^{\\ell +1}(k\\alpha +i)\\bigr \\rbrace \\Gamma (\\frac{k\\alpha +2}{2})^2 \\sin (\\frac{k\\alpha \\pi }{2})}{k!", "}r^{-k\\alpha -\\ell -2} \\\\& \\qquad \\qquad + \\mathcal {R}(r;\\alpha )$ with $|\\mathcal {R}(r;\\alpha )| \\le \\frac{1}{2\\pi (n+1)!}", "\\Bigl (\\int _{0}^{\\infty } |H_0^{(\\ell )}(e^{\\mathrm {i}\\pi _\\alpha }\\tau )| \\tau ^{(n+1)\\alpha +\\ell +1} \\mathrm {d}\\tau \\Bigr ) r^{-(n+1)\\alpha -\\ell -2},$ and further $&\\frac{\\partial ^\\ell }{\\partial r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} f(r;\\alpha )= \\\\ &\\frac{(-1)^{\\ell _1}}{\\pi ^2} \\sum _{k=1}^{n} \\frac{(-1)^{k+1}}{k!", "}\\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}}\\Bigl [\\frac{2^{k\\alpha }\\bigl \\lbrace \\prod _{i=2}^{\\ell +1}(k\\alpha +i)\\bigr \\rbrace \\Gamma (\\frac{k\\alpha +2}{2})^2 \\sin (\\frac{k\\alpha \\pi }{2})}{r^{k\\alpha +\\ell _1+2}}\\Bigr ]+ \\mathcal {R}(r;\\alpha ),$ with $|\\mathcal {R}(r;\\alpha )| \\le \\frac{1}{2\\pi } \\int _{0}^{\\infty } \\big |H_0^{(\\ell _1)}\\bigl (\\mathrm {e}^{\\mathrm {i}\\varphi } r\\tau \\bigr ) \\big | \\tau ^{\\ell _1 + 1} \\Big | \\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}}\\bigl [\\mathrm {e}^{\\mathrm {i}(n+1)\\beta _1}\\tau ^{(n+1)\\alpha } M_{n+1}\\bigl (\\mathrm {e}^{\\mathrm {i}\\beta _1} \\tau ^\\alpha \\bigr ) \\bigr ] \\Big | \\mathrm {d}\\tau .$ The latter integrand can again be estimated in terms of the functions $M_k$ ; see (REF ) and the relevant discussion.", "One will then end up with integrals that are analytically untractable, but not outside the reach of numerical estimation.", "Here we present rather crude uniform bounds, and some refinements based on oscillatory integral techniques, of $|\\frac{\\partial ^{\\ell _1 + \\ell _2}}{\\partial r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} f(r;\\alpha )|$ for “moderate” values of $r$ , where none of the asymptotic expansions discussed earlier come close to approximating the true function with e.g.", "2–3 summands.", "The precise range of these “moderate” values of $r$ of course depends on $\\alpha $ and the $\\ell _i$ 's.", "We present these estimates only in the univariate case, since the bivariate case can be handled with minor adjustments.", "First, from (REF ), we have $\\frac{\\partial ^{\\ell _1 + \\ell _2}}{\\partial 
r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} f(r;\\alpha ) = \\frac{1}{\\pi } \\int _{0}^{\\infty } \\cos ^{(\\ell _1)} (r t) \\log (t)^{\\ell _2} t^{\\ell _1} p_{\\ell _2}(t^\\alpha ) \\mathrm {e}^{-t^\\alpha } \\mathrm {d}t,$ where $\\cos ^{(\\ell _1)}$ stands for the $\\ell _1$ 'th derivative of the cosine function, and $p_{\\ell _2}$ stands for the polynomial introduced in (REF ).", "Thus, a simple application of the triangle inequality yields $\\Bigl | \\frac{\\partial ^{\\ell _1 + \\ell _2}}{\\partial r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} f(r;\\alpha ) \\Bigr |\\le \\frac{1}{\\pi \\alpha ^{\\ell _2+1}} \\int _{0}^{\\infty } |\\log (t)|^{\\ell _2} t^{\\frac{\\ell _1+1}{\\alpha } - 1} |p_{\\ell _2}(t)| \\mathrm {e}^{-t} \\mathrm {d}t.$ The latter integral is again usually intractable (although it can be expressed in terms of the standard gamma function if $\\ell _2 = 0$ ), but numerically estimable.", "In case $\\ell _2 = 0$ and $\\ell _1 \\mathrel {\\mathop :}=\\ell > 0$ , we can slightly refine (REF ) using the oscillatory integral technique of partial integration against a function that decays sufficiently fast as $t \\rightarrow 0^+$ and $t \\rightarrow \\infty $ : $ \\Bigl | \\frac{\\partial ^{\\ell }}{\\partial r^{\\ell } } f(r;\\alpha ) \\Bigr | = \\frac{1}{\\pi r} \\Bigl | \\int _{0}^{\\infty } \\cos ^{(\\ell -1)}(rt) \\frac{\\mathrm {d}}{\\mathrm {d}t}\\bigl [ t^{\\ell } \\mathrm {e}^{-t^\\alpha }\\bigr ] \\mathrm {d}t \\Bigr | \\le \\frac{1}{rt} \\int _{0}^{\\infty } \\bigl | \\frac{\\mathrm {d}}{\\mathrm {d}t}\\bigl [ t^{\\ell } \\mathrm {e}^{-t^\\alpha }\\bigr ] \\bigr | \\mathrm {d}t.$ The derivative inside the last integral can be easily computed, and one arrives at an upper bound for the integral that can be expressed in terms of the gamma function.", "Similarly, when $\\ell _1 = 0$ and and $\\ell _2 \\mathrel {\\mathop :}=\\ell > 0$ , we have $\\Bigl | \\frac{\\partial ^{\\ell }}{\\partial \\alpha ^{\\ell }} f(r;\\alpha ) \\Bigr |= \\frac{1}{\\pi r} \\Bigl | \\int _{0}^{\\infty } \\sin (rt) \\frac{\\mathrm {d}^{\\ell +1}}{\\mathrm {d}t \\, \\mathrm {d}\\alpha ^{\\ell }}\\bigl [ \\mathrm {e}^{-t^\\alpha }\\bigr ] \\mathrm {d}t \\Bigr | \\le \\frac{1}{\\pi r} \\int _{0}^{\\infty } \\bigl | \\frac{\\mathrm {d}^{\\ell +1}}{\\mathrm {d}t \\, \\mathrm {d}\\alpha ^{\\ell }}\\bigl [ \\mathrm {e}^{-t^\\alpha }\\bigr ] \\bigr | \\mathrm {d}t.$ Again, the derivatives inside the last integral can be computed, and after a suitable change of variables we arrive at an integral directly directly proportional to $\\alpha ^{-\\ell }$ , where the coefficient of $\\alpha ^{-\\ell }$ can be calculated numerically.", "This approach could also in principle be used for mixed derivatives of $f(r;\\alpha )$ , and in (REF ) the partial integration trick could be applied $\\ell $ times instead of once.", "In both cases however the computations become very unwieldy, and thus we discard them.", "Recall from the main paper (Section 3.1) that one of our primary goals is to estimate $ \\sup _{0 < r < 30,\\; 0.5 \\le \\alpha \\le 1.9}\\Big | \\frac{\\partial ^{\\ell _1+\\ell _2}}{\\partial r^{\\ell _1} \\partial \\alpha ^{\\ell _2}} \\log f(r;\\alpha ) \\Big |$ for $(\\ell _1,\\ell _2) \\in \\lbrace (4,0), (2,2), (0,4)\\rbrace $ .", "Recalling that e.g.", "$& \\bigl (\\log f(r;\\alpha )\\bigr )^{(4,0)} = -6 \\frac{f^{(1,0)}(r;\\alpha )^4}{f(r;\\alpha )^4} + 12 \\frac{f^{(2,0)}(r;\\alpha ) f^{(1,0)}(r;\\alpha )^2}{f(r;\\alpha )^3} \\\\& \\qquad \\qquad - 4 \\frac{f^{(3,0)}(r;\\alpha ) 
f^{(1,0)}(r;\\alpha )}{f(r;\\alpha )^2} - 3 \\frac{f^{(2,0)}(r;\\alpha )^2}{f(r;\\alpha )^2} + \\frac{f^{(4,0)}(r;\\alpha )}{f(r;\\alpha )},$ and similarly for the other relevant partial derivatives, we may estimate the suprema in (REF ) by applying the triangle inequality on sums of the above sort, and by estimating the absolute values of the partial derivatives $f^{(\\ell _1,\\ell _2)}(r;\\alpha )$ from above, relying on the pointwise estimates established in the previous sections, and $f(r;\\alpha )$ itself from below.", "In this section we will give an overview of a numerical procedure to estimate these functions in a manner that is applicable for estimating (REF ).", "We start with a fixed grid of the $(r,\\alpha )$ -space, with density given by a parameter $\\triangle > 0$ such that $\\frac{1.9-0.5}{\\triangle } =\\mathrel {\\mathop :}i^*, \\quad \\text{and} \\quad \\frac{30}{\\triangle } =\\mathrel {\\mathop :}j^* ,$ are positive integers.", "In our numerical simulations we have used $\\triangle \\mathrel {\\mathop :}=\\frac{2}{10^3}$ in order to conserve computational resources, but other choices are possible as well.", "Write then $Q_{i,j} \\mathrel {\\mathop :}=\\bigl [ (j-1)\\triangle ,j\\triangle \\bigr ] \\times \\bigl [0.5 + (i-1)\\triangle , 0.5 + i\\triangle \\bigr ] \\quad \\text{for } i \\in 1{:}i^* \\text{ and } j \\in 1{:}j^*,$ so that $\\bigcup _{i,j} Q_{i,j} = [0,30]\\times [0.5,1.9],$ and the interiors of the $Q_{i,j}$ 's are pairwise disjoint.", "The idea is to numerically estimate $f(r;\\alpha )$ and its partial derivatives uniformly in these squares, of which there is a finite amount, and the uniform estimates will not be too far off from the pointwise estimates we obtained in the earlier sections if the parameter $\\triangle $ is sufficiently small.", "To this direction, let us first discuss the estimates of the absolute values of the partial derivatives $f^{(\\ell _1,\\ell _2)}(r;\\alpha )$ for $(r,\\alpha ) \\in Q_{i,j}$ .", "Many of the pointwise estimates we have discussed above – for example (REF ), (REF ), (REF ), (REF ), (REF ), (REF ),(REF ), (REF ), (REF ) and the bivariate versions of these estimates – consist of terms that are monotonous with respect to both $r$ and $\\alpha $ either directly, or after some simple additional upper estimates, such as applying the triangle inequality in the sums like the ones in (REF ) and (REF ), and considering the cases $r < 1$ and $r \\ge 1$ separately for terms of the form $r^{k\\alpha + \\ell + 1}$ .", "Some additional care has to be taken for some of the more complicated terms appearing in the series expansions for the partial derivatives involving the variable $\\alpha $ , as well as the respective remainder terms.", "We explain this with examples pertaining to the univariate case, but everything here applies to the bivariate case as well with obvious modifications.", "First, the derivatives with respect to $\\alpha $ appearing in the summands in (REF ) and (REF ) can with some effort be computed for $\\ell _2 \\in \\lbrace 1,2\\rbrace $ , resulting terms involving the gamma function itself and the so-called polygamma functions of orders 0 and 1, which we denote here by $\\psi (0,\\cdot )$ and $\\psi (1,\\cdot )$ respectively.", "The function $\\psi (1,\\cdot )$ is strictly positive and decreasing on the entire positive real axis (see e.g.", "), which implies the functions $|\\psi (0,t)|$ and $|\\Gamma (t)|$ are decreasing for $0 < t \\le t_0$ and increasing for $t > t_0$ , where $t_0 \\approx 1.46$ is the 
positive zero of $\\psi (0,\\cdot )$ , which has to be taken into account when estimating the derivatives involving $\\Gamma $ .", "For $\\ell _2 > 2$ , we use (REF ) and (REF ) only with $n = -1$ and $n = 0$ respectively, avoiding the need to to estimate these derivatives at all.", "Secondly, the integrals appearing in (REF ) and (REF ) depend on $\\alpha $ (the latter also on $r$ ) in less than immediately obvious ways.", "For (REF ), we simply note that $ t^{\\frac{2n+3+2\\lceil \\frac{\\ell _1}{2}\\rceil }{\\alpha }} \\le \\max \\Bigl ( t^{\\frac{2n+3+2\\lceil \\frac{\\ell _1}{2}\\rceil }{\\alpha _-}}, t^{\\frac{2n+3+2\\lceil \\frac{\\ell _1}{2}\\rceil }{\\alpha _+}}\\Bigr ) \\quad \\forall t > 0,$ if $\\alpha _- \\le \\alpha \\le \\alpha _+$ , so it suffices to numerically precompute $\\int _{0}^{\\infty } |\\log (t)|^{\\ell _2} \\max \\Bigl ( t^{\\frac{2n+3+2\\lceil \\frac{\\ell _1}{2}\\rceil }{0.5 + (i-1)\\triangle }},t^{\\frac{2n+3+2\\lceil \\frac{\\ell _1}{2}\\rceil }{0.5 + i\\triangle }} \\Bigr ) t^{-1}|p_{\\ell _2}(t)| \\mathrm {e}^{-t} \\mathrm {d}t,$ for $\\ell _2 \\in 1{:}4$ , $\\lceil \\frac{\\ell _1}{2}\\rceil \\in \\lbrace 0,1\\rbrace $ and $i \\in 1{:}i^*$ , and use each of these for in the respective square $Q_{i,j}$ .", "For numerical integration, we use the Julia library QuadGK.jl.", "Concerning integrals of the form (REF ), we recall that parameter $\\beta _1 = \\pi + \\alpha \\varphi $ depends on $\\alpha $ , and note that it possible to write $\\frac{\\partial ^{\\ell _2}}{\\partial \\alpha ^{\\ell _2}}\\bigl [ \\mathrm {e}^{\\mathrm {i}(n+1)\\beta _1} \\tau ^{(n+1)\\alpha }M_{n+1}\\bigl (\\mathrm {e}^{\\mathrm {i}\\beta _1}\\tau ^{\\alpha }\\bigr )\\bigr ]= \\tau ^{(n+1)\\alpha } \\bigl (\\log (\\tau ) + \\mathrm {i}\\varphi \\bigr )^{\\ell _2}\\sum _{k=0}^{\\ell _2} b_{n,k} \\tau ^{k\\alpha } M^{(k)}\\bigl ( \\mathrm {e}^{\\mathrm {i}\\beta _1}\\tau ^{\\alpha } \\bigr ),$ where the coefficients $b_{n,k}$ depend on $\\alpha $ but their absolute values do not.", "For example, for $\\ell _2 = 2$ , the latter expression can be written as $\\bigl ( \\mathrm {e}^{\\mathrm {i}\\beta _1} \\tau ^\\alpha \\bigr )^{n+1} \\bigl (\\log (t) + \\mathrm {i}\\varphi \\bigr )^{2}\\Bigl ( & (n+1)^2 M\\bigl (\\mathrm {e}^{\\mathrm {i}\\beta _1}\\tau ^{\\alpha }\\bigr ) - (2n + 3) \\mathrm {e}^{\\mathrm {i}\\alpha \\varphi } \\tau ^{\\alpha }M^{\\prime }\\bigl (\\mathrm {e}^{\\mathrm {i}\\beta _1}\\tau ^{\\alpha }\\bigr ) \\\\& \\qquad + \\bigl (\\mathrm {e}^{\\mathrm {i}\\alpha \\varphi } \\tau ^{\\alpha }\\bigr )^2 M^{\\prime \\prime }\\bigl (\\mathrm {e}^{\\mathrm {i}\\beta _1}\\tau ^{\\alpha } \\bigr )\\Bigr ).$ The crux of all this is that by applying the triangle inequality to (REF ) and using simple uniform estimates for $M$ and its derivatives (which is possible since $\\mathrm {e}^{\\mathrm {i}\\beta _1}\\tau ^{\\alpha }$ by construction always has negative real part) , the integral appearing in (REF ) can be decomposed as a linear combination of integrals of the form $\\int _{0}^{\\infty } \\big |\\log (\\tau ) + \\mathrm {i}\\varphi \\big |^{\\ell _2} \\tau ^{k\\alpha + \\ell _1} \\mathrm {e}^{\\sin (\\varphi ) r\\tau } \\mathrm {d}t, \\quad n+1 \\le k \\le n+\\ell _2+1.$ Making the substitution $r \\tau =\\mathrel {\\mathop :}t$ , doing some elementary massaging for the resulting integrand, taking $\\varphi \\rightarrow -\\pi _\\alpha $ and using a discretization estimate similar to (REF ), we again end up with a finite collection of integrals that can be precomputed.", 
"The bivariate version of this estimate, (REF ), comes with the additional ingredient of the Hankel functions, which we use the Julia library SpecialFunctions.jl to compute.", "All in all, we have described several different ways to bound $\\sup _{(r,\\alpha ) \\in Q_{i,j}} | f^{(\\ell _1,\\ell _2)}(r;\\alpha ) |,$ and for each square $Q_{i,j}$ , we may take the smallest out of these bounds as the ultimate bound (REF ).", "It then remains to estimate $\\inf _{(r,\\alpha ) \\in Q_{i,j}} f(r;\\alpha ),$ from below.", "For this purpose, we have first precomputed $f(r;\\alpha )$ at the corners of the $Q_{i,j}$ 's using different numerical integration routines for different ranges of the parameters; see Section 3.1 of the main paper, where this is explained in the context of the spline approximation.", "We then observe that $f(r;\\alpha )$ is for fixed $\\alpha $ a decreasing function of $r > 0$ , both in the univariate and in the bivariate case.", "Thus, writing $Q_{i,j} =\\mathrel {\\mathop :}[r_{-,j},r_{+,j}]\\times [\\alpha _{-,i},\\alpha _{-,i}]$ , we may use the fundamental theorem of calculus to obtain $\\inf _{(r,\\alpha ) \\in Q_{i,j}} f(r;\\alpha ) &= \\inf _{\\alpha \\in [\\alpha _{-,i},\\alpha _{-,i}]} f(r_{+,j};\\alpha ) \\\\& \\ge \\min \\bigl ( f(r_{+,j};\\alpha _{-,i}),f(r_{+,j};\\alpha _{+,i}) \\bigr )- \\frac{\\triangle }{2} \\sup _{(r,\\alpha ) \\in Q_{i,j}} | f^{(0,1)}(r;\\alpha ) |,$ where the latter can further be estimated using the bounds discussed in the context of (REF ).", "It turns out that with a small enough $\\triangle $ , such as $\\triangle = \\frac{2}{10^3}$ , this yields a fair lower bound for a function like $f(r;\\alpha )$ which is otherwise very difficult to bound from below.", "Here we establish relative error bounds for the series decompositions (REF ) and (REF ), used in the main paper with $n = 3$ and $r > 30$ .", "First, in the univariate case, we can rewrite (REF ) as $f(r;\\alpha ) & = \\sum _{k=1}^{3} \\frac{(-1)^{k+1}\\Gamma (k\\alpha +1)\\sin (\\frac{k\\alpha \\pi }{2})}{\\pi k!}", "r^{-k\\alpha -1} + \\mathcal {R}_4(r;\\alpha ) \\\\& \\qquad \\qquad =\\mathrel {\\mathop :}\\sum _{k=1}^{3} c^{\\mathcal {S}}_k(\\alpha ) r^{-k\\alpha -1} + \\mathcal {R}_4(r;\\alpha ) =\\mathrel {\\mathop :}\\mathcal {S}_3(r;\\alpha ) + \\mathcal {R}_4(r;\\alpha ),$ with $|\\mathcal {R}_4(r;\\alpha )| \\le \\frac{\\Gamma (4\\alpha +1)}{24\\pi \\sin (\\pi _\\alpha )^{4\\alpha +1}}r^{-4\\alpha -1} \\mathrel {\\mathop :}=c^{\\mathcal {R}}_4(\\alpha ) r^{-4\\alpha -1}.$ Thus, for $r \\ge 30$ , we have the following estimates: $\\Big |1 - \\frac{f(r;\\alpha )}{\\mathcal {S}_3(r;\\alpha )}\\Big | = \\Big |\\frac{\\mathcal {R}_4(r;\\alpha )}{\\mathcal {S}_3(r;\\alpha )}\\Big | & \\le \\frac{c^{\\mathcal {R}}_4(\\alpha ) r^{-4\\alpha -1}}{|c^{\\mathcal {S}}_1(\\alpha )|r^{-\\alpha -1} - |c^{\\mathcal {S}}_2(\\alpha )|r^{-2\\alpha -1} - |c^{\\mathcal {S}}_3(\\alpha )|r^{-3\\alpha -1}} \\\\& = \\frac{c^{\\mathcal {R}}_4(\\alpha ) r^{-3\\alpha }}{|c^{\\mathcal {S}}_1(\\alpha )| - |c^{\\mathcal {S}}_2(\\alpha )|r^{-\\alpha } - |c^{\\mathcal {S}}_3(\\alpha )|r^{-2\\alpha }},$ assuming these calculations are valid in the sense of the latter denumerator being positive for $r = 30$ – numerical considerations show that this is indeed the case.", "The rightmost quantity is then a decreasing function of $r \\ge 30$ , and we thus get a uniform error bound by investigating it with $r = 30$ : $\\sup _{r > 30, \\; 0.5 \\le \\alpha \\le 1.9} \\,\\Big |1 - \\frac{f(r;\\alpha )}{\\mathcal 
{S}_3(r;\\alpha )}\\Big | \\le 0.00096.$ In the bivariate case, we may proceed similarly, this time with (REF ) for $n = 3$ and bounding the remainder term with (REF ).", "We can estimate the integral in (REF ) using grid-based numerical upper bounds like in the previous section.", "We thus find $\\sup _{r > 30, \\; 0.5 \\le \\alpha \\le 1.9} \\,\\Big |1 - \\frac{f(r;\\alpha )}{\\mathcal {S}_3(r;\\alpha )}\\Big | \\le 0.0016.$ We may then use the logarithm function's basic continuity properties near 1 to infer $\\sup _{r > 30, \\; 0.5 \\le \\alpha \\le 1.9} \\, |\\log \\,f(r;\\alpha ) - \\log \\, \\mathcal {S}_3(r;\\alpha )| \\le {\\left\\lbrace \\begin{array}{ll}0.00097 & \\quad \\textrm {(univariate case);} \\\\0.0017 & \\quad \\textrm {(bivariate case).}\\end{array}\\right.", "}$" ] ]
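The uniform bound above can be reproduced numerically along the following lines. This is a minimal Python/SciPy sketch that evaluates the relative-error bound at $r = 30$ over an alpha-grid; the quantity $\pi_\alpha$ entering $c^{\mathcal{R}}_4$ is defined elsewhere in the paper, not in this excerpt, so `sin_pi_alpha` below is a placeholder assumption that must be replaced by the actual definition.

```python
import math
import numpy as np
from scipy.special import gamma

def sin_pi_alpha(alpha):
    """Placeholder for sin(pi_alpha); substitute the paper's definition of pi_alpha."""
    return 0.5

def c_S(k, alpha):
    """Series coefficient c^S_k(alpha) = (-1)^(k+1) Gamma(k*alpha+1) sin(k*alpha*pi/2) / (pi k!)."""
    return ((-1) ** (k + 1) * gamma(k * alpha + 1)
            * np.sin(k * alpha * np.pi / 2) / (np.pi * math.factorial(k)))

def c_R4(alpha):
    """Remainder constant c^R_4(alpha) = Gamma(4*alpha+1) / (24 pi sin(pi_alpha)^(4*alpha+1))."""
    return gamma(4 * alpha + 1) / (24 * np.pi * sin_pi_alpha(alpha) ** (4 * alpha + 1))

def rel_error_bound(alpha, r=30.0):
    """Bound on |1 - f/S_3| at radius r; valid only when the denominator is positive."""
    denom = (abs(c_S(1, alpha)) - abs(c_S(2, alpha)) * r ** (-alpha)
             - abs(c_S(3, alpha)) * r ** (-2 * alpha))
    if denom <= 0:
        raise ValueError("series-domination assumption fails for this (r, alpha)")
    return c_R4(alpha) * r ** (-3 * alpha) / denom

# since the bound decreases in r, evaluating it at r = 30 over an alpha-grid gives the
# uniform bound; compare with the reported 0.00096 once the true pi_alpha is supplied
alphas = np.arange(0.5, 1.9 + 1e-9, 2e-3)
print(max(rel_error_bound(a) for a in alphas))
```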
2212.05555
[ [ "A Study of Slang Representation Methods" ], [ "Abstract Warning: this paper contains content that may be offensive or upsetting.", "Considering the large amount of content created online by the minute, slang-aware automatic tools are critically needed to promote social good, and assist policymakers and moderators in restricting the spread of offensive language, abuse, and hate speech.", "Despite the success of large language models and the spontaneous emergence of slang dictionaries, it is unclear how far their combination goes in terms of slang understanding for downstream social good tasks.", "In this paper, we provide a framework to study different combinations of representation learning models and knowledge resources for a variety of downstream tasks that rely on slang understanding.", "Our experiments show the superiority of models that have been pre-trained on social media data, while the impact of dictionaries is positive only for static word embeddings.", "Our error analysis identifies core challenges for slang representation learning, including out-of-vocabulary words, polysemy, variance, and annotation disagreements, which can be traced to characteristics of slang as a quickly evolving and highly subjective language." ], [ "Introduction", "The UN Sustainable Development Goals [4] emphasize the importance of gender equality, peace, and justice.", "Initiatives that envision the role of AI for social good [31] provide guidelines for supporting these goals through practical measures.", "A key application of AI is building tools that assist social media policymakers and moderators to restrict the spread of offensive language, abuse, and hate speech, e.g., sexism and misogyny, which are still prevalent all over the globe [21].", "A recent study [12] reveals worrying patterns of online abuse, estimating 1.1 million toxic tweets sent to women over one year.", "Another study by [25] demonstrates how “the Manosphere”, a conglomerate of men-centered online communities, may serve as a gateway to far-right movements.", "Web content moderation policies, or the lack thereof, can have serious implications on individuals, groups, and society as a whole.", "On the one hand, content moderators may react late, inconsistently, or unfairly, thus angering users [18], as well as contributing to reinforcing and exacerbating conspiratorial narratives [9].", "On the other hand, minimal content moderation may permit coordinated influence operations [16] or enable the spontaneous formation of toxic and dangerous communities [25], [21].", "While (computational) linguistics has typically focused on formal documents, like books, and curated text corpora, like Wikipedia, language on the internet is informal, dynamic, ever-evolving, and bottom-up [26].", "Thus, automatic content analysis tools designed to assist social media policymakers and moderators need to possess the ability to understand informal language, i.e., slang.", "Slang, defined as “a peculiar kind of vagabond language, always hanging on the outskirts of legitimate speech, but continually straying or forcing its way into the most respectable company” [17], has long been of interest to linguists and social historians [1], [29], [10].", "With the velocity and the volume of informal content on the Web, the notion of slang has recently broadened [26]: slang has moved beyond its traditional categories, like school, intergenerational, and intragenerational slang [1], to loosely-defined communities and subcultures, like QAnon followers or men-centered 
online communities [25].", "How can we build representation learning methods that are capable of understanding slang?", "Can we apply or adapt large language models (LMs) for this purpose?", "Do dictionaries of slang or large social media datasets provide an effective opening for building specialized slang methods?", "Recognizing that most of the current LMs have been trained on formal English datasets that fail to capture the nuances of the social media language, prior work has devised tasks that test their abilities [34], [30], methods that fine-tune them on benchmark-specific data [28], and resources that define and describe a comprehensive collection of slang words [27].", "It is unclear to what extent recent models and knowledge sources can understand slang, and what are the key limitations that affect their reasoning on slang-centric downstream tasks.", "Figure: Overview of the Framework.", "RF is Random Forest Classifier and LSVC is Linear SVC classifier.", "UD is Urban Dictionary and OSD is online slang dictionary.In this paper, we study the ability of combinations of language models and knowledge sources to comprehend slang in different tasks involving offense, hate, and abuse on social media.", "We analyze their behavior quantitatively and qualitatively, in order to surface the chief weaknesses of these models in terms of slang understanding and point to potential solutions going forward.", "We make the following contributions: We design and implement a framework for slang representation learning methods.", "The framework integrates language model encoders, knowledge sources, and evaluation tasks.", "We design experiments to answer key research questions empirically.", "Our experiments with three models, two knowledge sources, and three tasks reveal differences between the ability of models to capture slang, as well as inherent challenges that require innovative approaches.", "We reflect on our findings, connecting them to the characteristics of slang as a quickly evolving language, and as a subjective and circumstantial phenomenon causing ambiguity in downstream tasks.", "We make our code and data available at https://github.com/usc-isi-i2/slang-representation-learning." ], [ "Slang Representation Framework", "Our study is based on a framework that combines different language models, knowledge sources, and evaluation tasks (Figure REF ).", "We describe the framework's components in turn." 
], [ "Models", "FastText FastText embeddings represent each word as a bag of character n-grams, which helps in overcoming the limitation of models that ignore the morphology of words [5].", "In [32], the authors train a FastText model on the Urban Dictionary dataset,https://www.urbandictionary.com/ and evaluate these embeddings on both intrinsic tasks: Semantic Similarity and Clustering, and extrinsic tasks: Sentiment Analysis and Sarcasm Detection.", "In this work, we train FastText embeddings using the skip-gram architecture on Urban Dictionary and Online Slang Dictionary data.", "To make predictions with the FastText embeddings on discriminative tasks, we use Random Forest and Linear SVC classifiers on all the evaluation datasets.", "Table: Examples from the evaluation datasets with corresponding labels.BERT Bidirectional Encoder Representations from Transformers (BERT) [13] is a large language model that outputs textual embeddings conditioned on both left and right context.", "BERT models can be pretrained in an unsupervised manner to get general language representations that can be used for downstream tasks.", "BERT [13] is trained in the formal English language from Wikipedia.", "As the BERT model has been largely trained on formal language in Wikipedia and Book Corpus, it can be anticipated to handle social media data suboptimally.", "Thus, we also experiment with training a BERT model from scratch on slang-specific datasets.", "BERTTweet As we expect that a BERT model trained on social media data is better equipped to capture slang language, we also experiment with the BERTTweet [28] model.", "BERTTweet is pretrained on a large corpus of Twitter data containing around 873M tweets ranging from 2010 to 2019.", "BERTTweet can be expected to capture the commonly occurring slang words, for example, the word \"lol\" would be correctly tokenized as \"lol\".", "Table: Data Partitions of the Evaluation Datasets" ], [ "Knowledge Sources", "Urban Dictionary (UD) has been a standard source of slang in many works such as [32], [33], and [15].", "Urban Dictionary is a crowd-sourced collection of slang words along with their definitions and usage examples, containing over two million entries.", "Urban Dictionary, however, has its own demerits.", "The dictionary's voting system and its low threshold to include the new content result in noisy and opinionated entries.", "A detailed exploratory study of the urban dictionary is presented in [27].", "For training our models, we only work with the usage examples from the Urban Dictionary.", "There are two usage examples for each word on average resulting in around 4 million data entries.", "Online Slang Dictionary (OSD) contains fewer entries in comparison to the Urban Dictionary, around 15k in total.http://onlineslangdictionary.com/ OSD contains words/phrases, their meaning, and their usage in sentences.", "We extract the 12k entries from OSD that contain usage examples.", "In addition to the Urban Dictionary and the Online Slang Dictionary, we also explore Green's Dictionary of Slang.https://greensdictofslang.com/ Green's Dictionary of Slang describes the origin of words and phrases.", "In our context, this data source is not useful as it does not contain any usage examples and hence is not included in the reported experiments.", "We also explored purely slang-based Twitter corpora from [19].", "We experimented by retraining the BERTTweet model on this dataset to address the frequency of the slang words issue in the slang dictionaries dataset.", 
"However, the retrained model reported lower accuracy than that of the base BERTTweet model." ], [ "Classifiers", "We select commonly used classifiers such as Random Forest and Linear SVC to classify the FastText embeddings of the tweets in each of the evaluation tasks.", "Random Forest is an ensemble learning-based machine learning model that constructs multiple decision trees on the training data.", "The classification prediction of Random Forest is the class predicted by most decision trees.", "Linear SVC is a support vector machine-based classification model.", "This model tries to find the best hyperplane that maximizes the distance between the samples from different classes.", "The first layers of the BERT model learn generic linguistic patterns and the last few layers learn task-specific patterns.", "For downstream tasks such as classification, the first layers are frozen and the last layers are trained." ], [ "Model Training Details", "Training setup We train the FastText models in an unsupervised manner on UD and OSD data.", "To train the FastText models, we use the standard skip-gram architecture and finetune it over the training data of each task.", "We report the results from Random Forest as it performs slightly better than SVC, noting that the difference is negligible.", "The BERT model is pre-trained from scratch on the UD and OSD data with a masked language modeling objective.", "The BERTTweet model is re-trained on UD and OSD data and is finetuned on the HateEval and OffenseEval datasets.", "BERT and BERTTweet are finetuned for the Sequence Classification task on the evaluation datasets.", "For UD, we reuse the dataset provided by [32] containing over 4 million usage entries of slang words and their usage examples.", "For OSD, we manually extract and clean the data entries resulting in a dataset of size 12k.", "Parameters We train the FastText model for 10 epochs with an embedding dimension of 300.", "We finetune the BERTTweet model using the Adam optimizer with a learning rate of 1e-6 and a cross-entropy loss for 6 epochs.", "We retrain the model only for 2 epochs as increasing the number of retraining epochs caused the model to distort its original knowledge acquired from the Twitter dataset, leading to worse performance on the downstream tasks." ], [ "Evaluation", "Evaluation tasks We evaluate our models on three classification tasks: Sentiment Analysis, Hate Speech Detection, and Offense Detection.", "We select these tasks because slang terminology is central to comprehension of hateful, emotional, and offensive content in social media communication.", "Data For sentiment analysis, we use the SemEval 2017 task 4 [30] dataset containing 50k training entries and 12k testing entries.", "SemEval 2019 [3] task 5 focuses on the detection of hate speech against women and immigrants with a dataset of 13k tweets in English.", "SemEval 2020 [34] task 12 describes the task of offense detection in social media with a dataset containing 15k entries.", "We show examples for each task with their corresponding labels in Table REF .", "We partition the evaluation datasets by randomly holding out 20% of each dataset as a test dataset, and using the rest for training.", "Statistics per dataset are provided in Table REF .", "We evaluate our models by using the customary metrics of precision, recall, and F1-score.", "Table: Model performance evaluated on Sentiment Analysis, Hate speech and Offense detection." 
], [ "Research Questions", " Do resulting models understand slang?", "Large language models often perform well on the downstream classification tasks [14], [7].", "But can these models comprehend the quickly evolving slang in social media platforms and beyond, and can they leverage the slang terminology to detect inappropriate better?", "To answer these questions, we apply our framework and observe the performance of various combinations of language models and knowledge sources per task.", "Which language model is best equipped for understanding slang?", "We compare static and contextual models, and we compare models that have been tuned to social media data to those that have not.", "We compare their performance and their qualitative behavior.", "Which knowledge source provides more useful knowledge for adapting models?", "We evaluate various knowledge sources to see which source best helps the models capture the domain.", "Here, we compare two slang dictionaries with different sizes and content types against the vanilla models without direct slang source adaptation.", "Which cases are difficult for our models?", "We closely examine the failure cases and hypothesize the potential causes for the model's erratic behavior.", "We connect these failure categories intuitively to architectural and training decisions in the models and our framework." ], [ "Main Results", "We show the results of the different combinations of models and knowledge sources in Table REF .", "The BERTTweet model performs best in all the evaluation tasks.", "This is intuitive, as the model trained on tweets can be expected to capture the social media domain best and can give the best results on the evaluation datasets as they are also based on Twitter.", "This hypothesis is verified to be true by the results of the evaluation datasets.", "To confirm that the performance is owed to better coverage of slang, we test BERTTweet model manually on random sentences with slang words masked.", "For the sentence: This place is amazing.", "It is [MASK], the [MASK] is predicted as \"awesome\".", "For the sentence, I got very angry at her.", "In the moment, i [MASK]-slapped her in my head., the model predicts the masked value as \"bitch\", confirming that this model is able to understand the slang words to some extent.", "Table: Top 5 nearest neighbours for FastText trained on Wikipedia and on slang dictionaries.We also observe that the BERTTweet model trained on the initial Twitter dataset and retrained on the Urban Dictionary data does not show major improvements.", "This can be attributed to the low frequency of the slang words in the UD+OSD dataset, where each slang word occurs at most three-four times in the dataset.", "If the slang word is not commonly occurring in the pretrained tweet dataset, then the word is not captured by the BERT or BERTTweet models.", "The FastText model trained only on usage examples from the UD or OSD data performs better than the baseline models for each task.", "These embeddings also capture the relationship between various slang words.", "Upon examining the top 10 nearest neighbors of commonly occurring slang words in the evaluation dataset, as shown in Table REF , we obtain the words \"whore\" and \"hoe\" as nearest neighbors for \"bitch\".", "Similarly, for the word \"whore\", we get \"skank\" and \"slut\" as nearest neighbors.", "They are also able to capture the abbreviations in some cases: \"nevermind\" is the nearest neighbor to its abbreviation \"nvm\", while \"wazup\" is the nearest 
neighbor to the phrase \"what is up\".", "The BERT uncased model trained on Wikipedia data gives better results on the evaluation datasets than the FastText models.", "But on a closer examination of the tokenization of the slang words in BERT, they are tokenized incorrectly.", "For example, \"lol\" is tokenized as \"lo\" and \"l\", \"slut\" is tokenized as \"s\" and \"lut\" and so on, indicating that the model tokenizer fails to capture the slang domain, i.e., BERT often treats slang words as out-of-vocabulary terms.", "The BERT model trained on UD also fails to capture social media language.", "This can be due to the low frequency of slang words in the dataset.", "When this pretrained BERT model is tested on random sentences with the slang words masked, the model gives out nonsensical or no results.", "For the sentence: This place is amazing.", "It is [MASK], the prediction for [MASK] is \".\".", "For the sentence, I got very angry at her.", "In the moment, i [MASK]-slapped her in my head., the model predicts the masked value as \"and\".", "These results show that the model is unable to learn the context of the slang words.", "In terms of overall accuracy, this model yields similar results as the baseline BERT model on the evaluation datasets.", "To overcome the incorrect tokenization of slang words by the BERT-based models, we also deliberately extend the BERTTweet tokenizer vocabulary with the slang words from the Online Slang Dictionary.", "This extension leads to an appropriate tokenization of the slang words, however, the model performance does not improve.", "The results of the extended model are shown in Table REF .", "Table: Evaluation of BERTTweet with an extended vocabulary." ], [ "Error Analysis", "Which cases are difficult for our models?", "Through qualitative error analysis, we observe the misclassification occurs primarily due to five factors: Incorrect tokenization of infrequent terms.", "Due to the low frequency of the slang words in the dataset, the BERT models do not tokenize the slang words as expected.", "For example, \"whore\" is tokenized as \"who\" and \"re\", \"hoe\" is tokenized as \"ho\" and \"e\".", "The tweets containing such words are incorrectly classified, in most cases, as not hate speech.", "Tweets consisting mostly of URLs are misclassified.", "This is because the content of the URL is not known without resolving it with an HTTP request.", "For example, the tweet “Austria proposes sending troops abroad to stop migrant movement https://t.co/cnbxbFYdBU, Immigration Jihad in action https://t.co/VuFN3DktJ7” is difficult to classify without including the contents of the URL as additional context.", "Polysemy between slang and non-slang words such as \"sick\" and \"bitch\" is difficult to cover for our models.", "For example, the word “sick” can be used with a negative, formal language, connotation: \"I am feeling very sick\", or a positive slang connotation \"These beats are sick\".", "In addition, while the word “bitch” is a slang term that is mostly associated with a negative sentiment, it may also be used to express a positive emotion \"Bitch !", "I am happy you got in\".", "The polysemy makes it difficult for the models to capture the context the slang words are associated with.", "Variance in spelling is non-trivial to handle for embedding models.", "A common feature of a non-stabilized language, like slang or a dialect, is using a novel spelling of the word to emphasize the tone of the word or express emotion.", "An example is the infamous “heyyyy” that 
turned into a meme.", "Additionally, given that the language is informal, there is often no single correct way to spell a word: \"wazzzup\", \"wassup\", \"wassssssup\", \"wasup\", and \"sup\" are all acceptable slang expressions.", "Some tweets are difficult to agree on between annotators.", "We came across tweets like “Girls bitch about how immature guys are and then they do shot like That #WomenSuck”, and “I know you liked how that pussy taste First of all, I don’t have tastebuds bitch” that are classified as not hate speech by the authors.", "This showcases that classifying tweets into hate speech or offensiveness categories is subjective and inherently dependent on the background knowledge and beliefs of the annotators." ], [ "Discussion", "In summary, we observe that models that are trained on large-scale social media data are most capable of understanding slang in downstream tasks.", "Slang-specific sources are mainly beneficial for static models that have been trained on Wikipedia corpora before.", "While out-of-vocabulary terms represent a key challenge for contextual models like BERT and BERTTweet, adapting these models or their tokenizers does not manifest in better performance in downstream reasoning tasks.", "Further issues relating to polysemy, variance in spelling, and annotation disagreements point to characteristics of the phenomenon of slang as a quickly evolving and subjective language, which we discuss next.", "Slang as a quickly evolving language Internet linguistics accelerates the inherent property of language to evolve over time.", "While the factors that influence language evolution are still hypothesized [23], it is apparent that this evolution is largely accelerated with the emergence of Internet linguistics [26].", "The language on the Internet, expressed through novel forms including tweets and memes, leads to the quick evolution of expressions and the spreading of novel slang terms within communities, eventually forming an Internet folklore.", "On the one hand, this motivates the need for integrated AI solutions that mix many forms of Internet data: cultural tropes,E.g., https://tvtropes.org/ memes,E.g., https://knowyourmeme.com/ UD, Hatebase [6], and other forms of usage data.", "On the other hand, it is possible that slang can adapt very quickly to moderation and thus thwart this type of approach.", "In fact, it seems that slang and some memes are already specifically designed as euphemisms in order to \"fly under the radar\" of censorship, such as Pepe the Frog.https://en.wikipedia.org/wiki/Pepe_the_Frog From this perspective, it would be interesting to measure the influence/endogeneity of moderation and censorship on slang, the speed of adaptation, and the real efficiency of moderation to stop the diffusion of inappropriate content.", "Slang as a subjective phenomenon Tasks that involve hate speech or offense detection are inherently subjective and depend on the annotator's demographics, knowledge, and experience.", "Prior work reports that disagreement for sentiment analysis ranges between 40–60% for low-quality annotations, and between 25–35% even for high-quality annotations [20].", "Rather than ignoring the disagreement and evaluating on the majority label, a more sophisticated idea is to train and evaluate AI methods that can handle and predict human disagreement [24].", "As suggested in [22], personalized models can be trained to detect and mimic individual or community profiles, thus treating the disagreement as a signal rather than noise.", "These 
models can be trained to explicitly model the psychological traits of the individual or group of annotators, inspired by the approach in [2]." ], [ "Related Work", "[33] created a sentiment dictionary for slang words for sentiment analysis of social media data.", "[15] built a WordNet-like resource for slang words and neologisms, and the efficacy of this resource was evaluated on Word Sense Disambiguation algorithms for English social media data.", "[32] explored generating slang embeddings for Urban Dictionary data using the FastText framework.", "These embeddings were evaluated on a variety of tasks such as Sentiment Analysis and Sarcasm Detection.", "[10] attempts to detect and identify slang in Twitter data by using LSTM-based networks with feature boosting.", "Automated tools that capture slang have also been used to associate Web content on social media with the psychological traits of the writer [2].", "In our work, we explore static models such as FastText and also state-of-the-art contextual large language models such as BERT and BERTTweet, comparing their performance across evaluation tasks.", "We closely examine the model's failure cases and hypothesize the potential causes for these failures.", "Many attempts have been made to understand social media language by detecting offense and hate in tweets and conversational data.", "[34] poses the challenge of multilingual hate speech detection in social media.", "[11] built an automated framework for hate speech detection and separation of hate speech from offensive language.", "[8] retrains the BERT model for abusive language detection in social media.", "Hate speech and offensive language often contain slang words.", "We evaluate our models on these tasks to verify whether the language models capture slang words and their context."
], [ "Conclusions", "In this paper, we devised a slang understanding framework that combined language models and knowledge sources to help moderators better combat harmful, offensive, or inappropriate content on social media platforms.", "We applied this framework to three tasks that intuitively rely on slang language, observing that the best performance was obtained by Transformer language models that have been adapted to social media data, such as BERTTweet.", "Typically, slang usage repositories like Urban Dictionary improved the performance of embedding models that were not adapted to such data before.", "We found that retraining the models on social media or slang data did not bring consistent gain across tasks.", "Our error analysis identified five main challenges for slang understanding at scale: incorrect tokenization for infrequent words, presence of URLs, polysemy between formal and slang words, variance in spelling, and misclassifications in the ground truth data.", "Our experiments point to two key aspects that need to be considered seriously in future work.", "First, Internet slang is an unprecedented form of language evolution that may be solvable by collecting a comprehensive collection of representative data, including memes, slang dictionaries, and usage data.", "The counterpoint to this data-driven approach is that slang and meme expressions may already be partially designed to avoid censorship, thus anticipating the data-driven solution.", "Second, as judging the harmfulness of online content is inherently subjective, we suggest a shift from the dominant practice of enforcing a single agreed perspective to the new and emerging practice of modeling disagreement as a signal rather than noise.", "This would lead to optimizing AI models to make decisions based on the perspectives of individuals or groups, rather than a single best decision." ], [ "Acknowledgements", "The first two authors have been supported by armasuisse Science and Technology, Switzerland under contract No.", "8003532866.", "The experiments were run on a cluster provided by armasuisse Science and Technology." ] ]
2212.05613
[ [ "Finding two-level systems in glasses through machine learning" ], [ "Abstract Two-level systems (TLS) are rare quantum tunneling defects which govern the physics of glasses at very low temperature.", "Because of their extremely low density, it is very hard to directly identify them in computer simulations of model glasses.", "We introduce a machine learning approach to efficiently explore the potential energy landscape of glass models and identify two-level tunneling defects.", "We design an algorithm that is able to rapidly predict the quantum splitting between any two amorphous configurations produced by classical simulations.", "This in turn allows us to shift the computational effort towards the collection and identification of a larger number of TLS, rather than the useless characterization of non-tunneling defects which are much more abundant.", "Finally, we interpret our machine learning model to understand how TLS are identified and characterized, thus giving physical insight into the features responsible for their presence." ], [ "Introduction", "When a glass-forming liquid is cooled rapidly, its viscosity increases dramatically and it eventually transforms into an amorphous solid, called a glass, whose physical properties are profoundly different from those of ordered crystalline solids [1].", "At even lower temperature, around 1K, the specific heat of a disordered solid is much larger than that of its crystalline counterpart as it scales linearly rather than cubically with temperature.", "Similarly, the temperature evolution of the thermal conductivity in glasses is quadratic, rather than cubic [2], [3], [4], [5], [6], [7], [8], [9], [10], [11].", "A theoretical framework rationalizing such anomalous behavior was provided by Anderson, Halperin and Varma [12] and by Phillips [13], [14].", "They argued that the energy landscape of amorphous solids contains many nearly-degenerate minima, connected by localized motions of a few atoms, that can act as tunneling defects, called two-level systems (TLS).", "Understanding the microscopic origin of TLS and how to control their density and physical properties has gained significant interest, not only because they provide a large contribution to the specific heat and to the thermal conductivity, but also because their presence may impact the performance of certain quantum devices [15].", "Unfortunately, understanding their microscopic nature and their almost ubiquitous presence in low-temperature glasses remains a challenge [16], [17], [18], [19], [20].", "With the development of the swap Monte Carlo algorithm [21], [22] it has become possible to create computer glasses at unprecedentedly low temperatures.", "Combined with landscape exploration algorithms [23], [24], [25], [26], [27], [28], [29], [30], [31], [32] this provides a method to investigate the nature of TLS in materials prepared under conditions comparable to experimental studies [33], [34].", "These tools have enabled computational studies that have confirmed the experimental observation [7], [8], [10], [35], [11] that as the kinetic stability of a glass increases, the density of tunneling defects is strongly depleted [33], [36].", "The direct detection of TLS revealed some of their microscopic features, namely that the participation ratio decreases in more stable glasses [33], and that TLS do not seem to be in a one-to-one correspondence with soft harmonic [37], [33], [34] or localized [38], [34] modes.", "The main issue that limits the applicability of the direct 
landscape exploration method is that it remains computationally very expensive, and it is thus hard to construct the large library of TLS needed to enable a robust statistical analysis of their physical properties.", "After accumulating a large number of inherent structures (IS), one must run an expensive algorithm to find the relaxation pathway connecting pairs of IS to then determine if this pair forms a proper TLS (namely, has an energy splitting within thermal energy at 1K).", "Due to the large number of IS pairs detected, it is impossible to characterize all of them.", "In previous work, some ad hoc filtering rules were introduced [33], [34], [36] but the success rate of such filters is poor.", "After a significant exploration effort, it was possible to identify about 60 TLS starting from the identification of $\\sim 10^8$ IS.", "It is then obvious that most of the computational effort has been wasted in the study of pairs that form defects which do not tunnel at low temperatures.", "In this paper we show that it is possible to predict with enhanced accuracy whether a pair of inherent structures forms a TLS by using machine learning techniques.", "Recently, machine learning [39], [40], [41], [42], [43], [44], [45], [46], [47], [48] has been shown to be extremely effective in using structural indicators to predict structural, dynamical or mechanical properties of glassy systems.", "In a similar vein, we use supervised learning to streamline the process of TLS identification.", "Our study has two goals: (i) develop a faster way to identify TLS compared to the standard approach outlined in Ref.", "[33] and described in Sec.", ", in order to collect a statistically significant number of TLS; (ii) understand what are the structural and dynamical features characterizing TLS, and if/how they change with the preparation temperature.", "To address (i) we show that our machine learning model can be trained in a few minutes using a small quantity of data, after which the model is able to identify candidate TLS with high speed and accuracy.", "To address (ii) we show which static features are the most important for the model prediction and we show that states which are dynamically distant can nevertheless be TLS.", "We conclude by explaining how the ML model distinguishes TLS from non-TLS and how it is able to identify glasses prepared at different temperatures." 
], [ "Standard approach to TLS identification", "The standard procedure [30], [31], [33] to identify TLS is sketched in Fig.", "REF .", "It consists of the following first steps which aim at identifying potential candidates for TLS: Equilibrate the system at the preparation temperature $T_f$ .", "Glasses with lower $T_f$ have a larger glass stability.", "Run molecular dynamics to sample configurations along a dynamical trajectory at the exploration temperature $T<T_f$ .", "Perform energy minimization from the sampled configurations to produce a time series of energy minima, or inherent structures (IS).", "Analyze the transition rates between pairs of IS, and select the pairs of IS that are explored consecutively.", "Step 4 was necessary because it is computationally impossible to analyze all pairs of IS, as the number of pairs scales quadratically with the number of minima.", "The filter defined in step 4 was physically motivated by the fact that TLS tend to originate from IS that are not too distinct in order to have a reasonable tunneling probability.", "As such it is likely that those pairs of IS get explored one after the other during the exploration dynamics in step 2.", "Overall, given $N_{IS}$ inherent structures, this procedure selects for $\\mathcal {O}(N_{IS})$ pairs to be analyzed.", "However, many pairs of IS can be close but not sampled consecutively during the dynamics, owing to the complex structure of the potential energy landscape.", "Instead, our machine learning approach allows to consider all pairs of IS.", "As shown below, our approach is able to detect TLS which are otherwise excluded by the above step 4.", "Once potential candidates are selected, the procedure continues as follows: For each selected pair of IS, look for the minimum energy path and the classical barrier between them by running a minimum energy path finding algorithm, such as nudge elastic band (NEB) [49], [50], [51].", "This provides the value of the potential along the minimum energy path between the pair $V(\\xi )$ , where $0\\le \\xi \\le 1$ is the reaction coordinate.", "Select pairs whose energy profile $V(\\xi )$ has the form of a double well (DW), i.e., exclude paths with multiple wells.", "Solve the one-dimensional Schrödinger equation: $-\\frac{\\hbar ^2}{2md^2\\epsilon }\\partial ^2_{\\xi } \\Psi (\\xi )+V(\\xi )\\Psi (\\xi )=\\mathcal {E}\\Psi (\\xi ),$ where $\\xi $ is a normalized distance along the reaction path $\\xi =x/d$ and energy is normalized by a Lennard-Jones energy scale $\\epsilon $ , the effective mass $m$ and the distance $d$ are calculated as in Ref. 
[33].", "We obtain the quantum splitting (QS) $E_{qs}=\\mathcal {E}_2-\\mathcal {E}_1$ from the first two energy levels $\\mathcal {E}_1$ and $\\mathcal {E}_2$ .", "The quantum splitting is the most relevant parameter because when $E_{qs} \\sim T$ the system can go from one state to the other via quantum tunneling [13], creating what is called a two-level system (TLS).", "In particular since we choose to report the data in units that correspond to Argon [33], a double well excitation will be an active TLS at $T=1$ K when $E_{qs} < 0.0015 \\epsilon $ , where $\\epsilon $ sets the energy scale of the pair interactions in the simulated model.", "Overall, since at low temperature the landscape exploration dynamics is slow, one would like to spend most of the computational time by doing steps 2-3 to construct a large library of pairs of IS.", "A first problem is that when the library of IS grows larger it takes a lot of time to perform steps 5-7.", "Moreover, the main bottleneck lies in the fact that most of the pairs that go through the full procedure are not found to be TLS in the end, and so this large computational time is effectively wasted." ], [ "Machine learning approach to TLS identification", "As in any machine learning (ML) approach, we distinguish two phases: training and deployment.", "Our supervised training approach, detailed in the next section, takes just a few hours of training on a single CPU.", "It requires an initial dataset of $\\mathcal {O}(10^4)$ full NEB calculations, whose collection is the most time consuming part of the training phase.", "Once training is complete, the ML model can be deployed to identify new TLS.", "Its workflow is similar to the standard one, with some major improvements.", "It proceeds with the following steps: - 3.", "The first 3 steps are similar to the standard procedure to obtain a collection of inherent structures from a dynamical exploration.", "Apply the ML model to all possible pairs of IS to predict which pairs form a DW potential.", "Apply the ML model to predict the quantum splitting (QS) for all predicted DW and filter out the configurations that are not predicted to be TLS by the ML model.", "- 8.", "Run NEB, select pairs that form DW potential and solve the one-dimensional Schrödinger equation for the TLS candidates only in order to obtain the exact value of the quantum splitting.", "It is possible to use steps 4-5 as a single shot or as an iterative training approach (see Sec.", "REF ).", "In the Supplementary Information (SI) we provide details on how steps 1-3 are performed: glass preparation, exploration of the potential energy landscape via molecular dynamics simulations and minimization procedure, as well as NEB computation, see Sec.", ".", "Noticeably, if the model is well-trained then the ML approach has two significant advantages over the standard approach.", "First, $\\mathcal {O}(N_{IS}^2)$ pairs of IS are scanned to identify TLS, compared to a much smaller number $\\mathcal {O}(N_{IS})$ in the standard procedure.", "Second, if a pair of IS passes step 5 and goes through the full procedure it is very likely to be a real TLS.", "As a consequence, by using the ML approach one can spend more time doing steps 2-3 to produce new IS, since fewer pairs pass step 5.", "At the same time, for any given number of IS, the ML approach can analyze all possible pairs and is therefore able to identify many more TLS, as we demonstrate below." 
], [ "Machine learning model", "In Refs.", "[33], [34], the authors analyze a library of 14202, 23535 and 117370 pairs of inherent structures for a continuously polydisperse system of soft repulsive particles, equilibrated at reduced temperatures $T = 0.062$ , 0.07, and 0.092, respectively.", "The standard approach (Sec. )", "leads to the identification of 61, 291 and 1008 TLS for the three temperatures, respectively.", "Notice that this approach uses pairs of IS that are selected by the dynamical information contained in the transition matrix between pairs of IS [33].", "This was done to filter out all non double well potential.", "For all pairs in this small subset, the quantum splitting was then calculated.", "Instead, the ML approach starts by independently evaluating the relevant information contained in each IS and constructs all possible combinations, even for pairs that are not dynamically connected.", "Following the steps discussed in Sec.", "the model is then able to predict the quantum splitting of all the pairs, that were predicted to form a DW, very accurately.", "From a quantitative perspective, this means that the same trajectories now contain many more TLS candidates in the ML approach compared to the standard approach.", "Figure: Flowchart of the machine learning approach.", "The first block represents the construction of the dataset by comparing all the pairs of inherent structures, focusing on the MM particles that displace the most.", "Then, required features are extracted to construct the input vector XX.", "We then train a classifier to predict whether a pair of IS is a DW or not.", "The DW are finally processed using a multi-layer stacking strategy to predict the quantum splitting energy.", "Our pipeline analyses a given pair of IS in about ∼10 -4 \\sim 10^{-4}s.In this section we describe the flowchart of the model summarised in Fig.", "REF .", "We first discuss how we construct the dataset and extract the relevant features (Sec.", "REF ).", "We then explain the two main blocks consisting in a DW classifier followed by a quantum splitting predictor, both of which have similar model architecture and inputs.", "Finally we evaluate the performance of this model by showing its ability to accurately predict the quantum splitting.", "We conclude by introducing the iterative training technique that represents the optimal way to efficiently expand the library of TLS." 
], [ "Dataset and features construction", "The first step is the evaluation of a set of static quantities for all the available IS.", "This set consists of: energy, particle positions, averaged bond-orientational order parameters [52] determined via a Voronoi tessellation, from $q_2$ to $q_{12}$ , and finally particle radii.", "The cost of this operation scales as the number of available states $N_{IS}$ , but we use these quantities to calculate the features of $\\sim N_{IS}^2$ pairs.", "A detailed analysis (see Sec.", "in the SI) shows that the bond orientational parameters and the particle sizes are not very useful for the ML model.", "Since their calculation is slower than all the other features, we do not include them in the final version of the ML approach.", "To construct the input features for each pair of IS we combine the information of the two states evaluating the following: Energy splitting $\\Delta E$ : energy difference between the two IS.", "Displacements $\\Delta \\vec{r}_i$ : displacement vector of particle $i$ between the two configurations.", "Total displacement $d$ : total distance between the two IS defined as $d^2=\\sum _i |\\Delta \\vec{r}_i|^2$ .", "Participation ratio $PR$ : defined as $PR=(d^2)^2/\\left( \\sum _i |\\Delta \\vec{r}_i|^4\\right)$ .", "Distance from the displacement center $|\\vec{r}_0-\\vec{r}_i|$ : we measure the average distance of particle $i$ from the center of displacement $\\vec{r}_0$ , identified as the average position of the particle that moves the most.", "This quantity identifies the typical size of the region of particles that rearrange.", "Transition matrix $T_{ij}$ (and $T_{ji}$ ): number of times that the exploration dynamics traverses IS$_j$ just after IS$_i$ (and vice versa).", "The crucial step of the feature construction is that we can reduce the number of features by considering only the $M$ particles whose displacement is the largest between pairs of IS.", "We make this assumption because we expect that the low temperature dynamics is characterized by localised rearrangements involving only a small fraction of the particles [7], [8], [10], [35], [11], [33].", "In Sec.", "of the SI, we confirm this assumption by showing that the ML model achieves optimal performances even when $M$ is very small.", "So, the choice of $M \\ll N$ makes the ML model computationally effective without any performance drop." 
], [ "Double well classifier", "A necessary condition in order for a pair of IS to be a TLS is that the transition between the pair forms a double well (DW) potential.", "A DW is defined when the minimum energy path between the two IS resembles a quartic potential, as sketched in Fig.", "REF (c).", "The final goal of the ML model is to predict the quantum splitting of the pair to identify pairs with low values of the QS.", "The first obstacle in the identification of TLS is that DW represent only a small subgroup of all IS pairs.", "For instance in Ref.", "[33], only $\\sim 0.5\\%$ of all the IS pairs are DW at the lowest temperature.", "It is then mandatory to filter out pairs that are not likely to be a DW.", "In the machine learning field there are usually many different models that can be trained to achieve similar performances, with complexity ranging from polynomial regression to deep neural networks.", "Here, we perform model ensembling and use ensembles both for DW classification and QS prediction.", "Model ensembling consists in averaging the output of different ML models to achieve better predictions compared to each of them separately.", "We do so using the publicly available AutoGluon library [53].", "In this approach, we train in a few minutes a single-stack ensemble that is able to classify DW with $>95\\%$ accuracy.", "In Sec.", "of the SI we justify this choice of ML model and provide details on performances and hyperparameters.", "Overall, since the DW classifier is accurate and rapid, we use it to filter out the pairs that do not require the attention of the QS predictor because they cannot be TLS anyway." ], [ "Quantum splitting predictor", "We want to predict the quantum splitting of a pair of IS for which the features discussed in Sec.", "REF have been computed.", "We need this prediction to be very precise, because we know that a pair can be considered a TLS when $E_{qs}<0.0015 \\epsilon $ , but $E_{qs}$ can vary significantly so errors may be large.", "In the SI (see Sec. 
)", "we show that models such as deep neural networks and regression are not stable or powerful enough to achieve satisfying results.", "We thus perform model ensembling by using the AutoGluon library [53].", "This achieves superior performances while also allowing us to compare and rank single models.", "We perform two types of model ensembling: (i) stacking: different layers of models are applied in series creating a stack (schematized in Fig.", "REF ), and (ii) bagging: we divide the data in subsets that we use to train multiple instances of the same models and we later combine all their predictions.", "As shown in the SI, the best results are obtained while using ensembles of gradient boosting methods (in particular CatBoost [54]), which have proven to be the optimal choice in similar semi-empirical quantum-mechanical calculations [55].", "Overall, the ensemble structure of our final model relieves us from hyperparameter optimization and makes the results more stable.", "Figure: (a)-(c) Quantum splitting and (d)-(f) energy barrier predicted by the ML model compared to the exact value, reported using a double logarithmic scale.", "We report predictions for IS pairs that the ML model has not seen during the training.", "(a) and (d) correspond to T f =0.062T_f=0.062, with training for 7000 samples and information of the M=3M=3 particles with largest displacements.", "(b) and (e) correspond to T f =0.07T_f=0.07, 10000 samples and M=3M=3.", "(c) and (f) correspond to T f =0.092T_f=0.092, 30000 samples and M=3M=3.", "All models have been trained for ∼10\\sim 10 hours of single CPU time.In order to train the model we first collect a set of $E_{qs}$ examples.", "The size of this training set is discussed in the SI (see Fig.", "REF ) where we find that the minimum number is around $10^4$ .", "We can use some of the data already collected in previous work in Ref.", "[33] for the training.", "Moreover, since we are interested in estimating with more precision the lowest values of $E_{qs}$ we train the model to minimize the following loss function $\\mathcal {L} = \\frac{ \\sum _{i=1}^n w_i \\left( E_{qs,\\mathrm {true}} -E_{qs,\\mathrm {predicted}} \\right)^2 }{n \\sum _{i=1}^n w_i },$ which is a weighted mean-squared error.", "The weights correspond to $w_i = 1/E_{qs,\\mathrm {true}}$ in order to give more importance to low $E_{qs}$ values.", "We thus train our model to provide a very accurate prediction of the value $E_{qs}$ for any given pair.", "Once the model is trained it takes only $\\sim 10^{-4}$ s to predict the QS of a new pair (compared to 1 minute to run the standard procedure).", "If we predict a value $E_{qs}<0.0015\\epsilon $ , then we have identified a TLS much faster.", "To showcase the performance of the ML model we report in Fig.", "REF (a)-(c) the exact quantum splitting calculated from the NEB procedure, compared with the value predicted by the model.", "We have trained three independent models to work at the three different temperatures.", "As explained before (Sec.", "REF ), the model needs the information about only the $M \\ll N$ particles that are displaced the most to achieve the excellent precision demonstrated in Fig.", "REF .", "In the Fig.", "REF (SI) we find that the optimal value is $M=3$ , confirming that the participation ratio in TLS is quite low, because only three particles are needed for the model to identify TLS.", "Furthermore, the models have been trained using the smallest number of samples, randomly selected from all the IS pairs available, that allows 
the model to reach its top performance.", "We have also performed an analysis of the optimal training time.", "Details on these points are provided in Sec.", "of the SI.", "The performances presented in Fig.", "REF are achieved by training the model for $\\sim 10$ hours of single CPU time, but we also show in the SI that it is possible to already achieve $>90\\%$ of this performance by training the ensemble for only 10 minutes.", "The ML approach that we have just introduced is also easily generalizable to target any state-to-state transition, like excitations and higher energy effects.", "Here we modified the quantum splitting predictor to instead predict the classical energy barrier between two IS states.", "If the minimal energy path between two IS forms a DW, we define the classical energy barrier as the maximum value of the energy along this path.", "In Fig.", "REF (d)-(f) we report the value of the energy barrier predicted by the ML model (y-axis) compared to the exact value calculated from the NEB procedure (x-axis).", "The hyperparameters and the features are the ones used for the quantum splitting predictor.", "Such a high performance demonstrates that our ML approach can predict other types of transitions between states." ], [ "Iterative training procedure", "We finally introduce an approach to optimally employ our ML model to process new data: the iterative training procedure.", "In previous sections and in Fig.", "REF we trained the model once using a subset of the already available data.", "This is a natural way to proceed when the goal is to process new data that are very similar to the training set, and the training set is itself large enough.", "However, since the goal of the proposed ML model is to ultimately drive the landscape exploration and collect new samples, the single-training approach may encounter two types of problems.", "First, at the beginning there may not be enough data, and second, the findings of the model do not provide any additional feedback.", "To solve both problems we introduce the iterative training procedure.", "The idea of iterative training is to use the predictive power of ML to create and expand its own training set, consequently enhancing its performance by iteratively retraining on the new data.", "Details on the method and parameters are discussed in Sec.", "of the SI.", "In practice, we start from a training set of $K_0 \\sim 10^3-10^4$ randomly selected pairs to have an initial idea of the relation between input and output.", "We then use the ML approach outlined in Fig.", "REF to predict the $K_i=500$ pairs with the lowest QS.", "For these TLS candidates, we perform the full procedure to calculate the true QS and determine whether the pair is a DW or a TLS.", "In Fig.", "REF (SI), we report the result of this procedure when we process a new set of trajectories from the same polydisperse soft sphere potential as in Ref.
[33].", "In general the first few iterations of iterative training have a poor performance.", "In fact we find that $>70\\%$ of the first $K_i$ pairs are actually non-DW.", "After collecting additional $K_i$ measurements, we retrain the model.", "We report in Tab.", "REF the average time for each step of the ML procedure.", "The retraining can be done in $\\sim 10$ min, after which the model is able to predict the next $K_i$ pairs with lowest QS.", "Overall, to process $N_\\mathrm {IS pairs}$ we estimate that the computational time of the iterative approach is $t_{i}=\\left[ K_0\\cdot 10^{2} + N_{iter}\\left(K_i\\cdot 10^{2} + 10^{3} + N_\\mathrm {IS pairs}\\cdot 10^{-5} \\right) \\right]s$ .", "If $N_\\mathrm {IS pairs}>10^{9}$ it is possible to significantly reduce $t_i$ by permanently discarding the worst pairs, but this is not needed here.", "We iterate this procedure $N_{iter}$ times, until the last batch of $K_i$ candidates contains less than $1\\%$ of the total number of TLS.", "We believe that continuing this iterative procedure would lead to the identification of even more TLS/DW, but this is out of the scope of this paper.", "Table: Computational time needed to perform our ML approach, on a standard laptop." ], [ "Results and discussion", "We now use ML in order to speed up the TLS search.", "This highly efficient method allows us to collect a library of TLS of unprecedented size, generated from numerical simulations with the same interaction potential as in Ref. [33].", "First of all, we reprocess the data produced to obtain the results presented in Ref.", "[33] with our new ML method, obtaining new information about the connection between TLS and dynamics.", "Next, we perform ML-guided exploration to collect as many TLS as possible.", "This sizable library of TLS allows us to perform for the first time a detailed statistical analysis of TLS and compare their distribution to the distribution of double wells.", "We perform this analysis for glasses of three different stabilities.", "Finally, we discuss the microscopic features of TLS not only by looking at their statistics, but also by analyzing what the ML model has learned, and how it expresses its prediction." ], [ "Capturing elusive TLS with machine learning", "Prior to this paper, it was not possible to evaluate all the IS pairs collected in Ref. 
[33].", "For this reason the authors discarded a priori all pairs where the number of forward and backward jumps between IS is such that $\\min \\left(T_{ij},T_{ji}\\right)<4$ , based on the assumption that high transition rates during the dynamic landscape exploration is a good indicator that two IS form a DW.", "This reduced the number of pairs to 14202, 21109 and 117339 for glasses prepared at $T_f=0.062$ , 0.07 and $0.092$ respectively.", "In order to have comparable data at the three temperatures, for $T_f=0.092$ we only consider a subset of glasses corresponding to 30920 IS pairs.", "The results of the TLS search are summarized in the red columns of Tab.", "REF .", "Overall the standard procedure reaches a rate of TLS found over calculations performed of $4 \\cdot 10^{-3}$ , $13\\cdot 10^{-3}$ , and $8\\cdot 10^{-3}$ for the three temperatures, respectively.", "We compare this with our iterative training procedure applied on the same data, whose results are reported in the green columns of Tab.", "REF .", "We immediately notice two major improvements: the overall number of TLS that we find from the same data set is more than twice larger and the ratio of TLS per calculation is more than 15 times larger, corresponding to $62\\cdot 10^{-3}$ , $211\\cdot 10^{-3}$ , and $194\\cdot 10^{-3}$ for the same three temperatures.", "We conclude that the iterative ML approach is much more efficient than the standard procedure, and also that TLS do not necessarily have a large dynamical transition rate, since the dynamical-filtering approach misses more than half of them.", "Table: Analysis of data collected in Ref. .", "We compare the standard procedure with our ML approach using iterative training.", "We report results for different glass stabilities, decreasing from top to bottom, using Argon units (Ar)  and NiP metallic glass parameters (NiP) .", "The standard procedure finds less than half of the TLS and is computationally much more expensive." 
], [ "Differences between DW and TLS", "With our ML-driven exploration of the energy landscape we can restrict the numerical effort to DW and favorable TLS candidates, while processing a larger number and/or longer exploration trajectories.", "This allows us to consider a larger set of independent glasses of the same type as those treated in Ref.", "[33], which is particularly relevant for ultrastable glasses generated at the lowest temperature $T_f=0.062$ .", "While in Ref.", "[33] the collection of 61 TLS required $>14000$ NEB calculations, we are able to identify 864 TLS running 11 iterations of iterative training using only a total of 5500 NEB calculations in addition to the $\\sim 6000$ used for pretraining.", "In the next section we analyze these results to discuss the nature of TLS.", "Figure: Results of ML driven exploration, leading to a library of TLS of unprecedented size.", "(a) We compare the model predictions to the calculated values at the end of our iterative training, at T f =0.062T_f=0.062.", "We color coded in the background the confusion matrix.", "The black dashed horizontal lines report the percentage of TLS that are predicted below that value of quantum splitting, showing that more than 95%95\\% of the TLS are within twice the TLS threshold of 0.00150.0015.", "(b) Cumulative distribution of energy splitting n(E qs )n(E_{qs}) divided by E qs E_{qs}.", "(c) Histograms of the number of TLS and DW per glass, at the three preparation temperatures.", "In total we have considered 237, 30, 5 glasses equilibrated at T f =0.062T_f=0.062, 0.07, and 0.092 respectively.The database of glasses that we analyze with iterative training contains 5 times more minima than in Ref.", "[33], but we are able to find 15 times more TLS running around half of the NEB calculations.", "Furthermore, in Fig.", "REF (a) we have color-coded the confusion matrix and we highlight that we can capture almost all of the TLS if we consider only the pairs that are predicted to be within twice the quantum splitting threshold of TLS.", "In Fig.", "REF (b) we report the saturation plateau of the cumulative density of TLS quantum splitting $n(E_{qs})$ , which scales as $n(E_{qs}) \\sim n_0 E_{qs}$ at low $E_{qs}$ .", "The ML approach allows us to collect significantly better statistics compared to Ref.", "[33], confirming that the TLS density $n_0$ decreases by several orders of magnitude from hyperquenched to ultrastable glasses.", "Lastly, in Fig.", "REF (c) we report the histograms of the number of TLS and DW per glass at the three temperatures.", "We see that when the glasses are ultrastable ($T_f=0.062$ ) most of the glasses have very few TLS.", "Conversely, poorly annealed ($T_f=0.092$ ) glasses show a very unbalanced distribution, with few glasses that contains most of the DW and TLS." 
], [ "Interpretation of the ML model", "It is possible to use the ML model to gain information about the distinctive structure of TLS.", "First, the present and previous works [7], [8], [10], [35], [11], [33], [36] find that the density of TLS decreases upon increasing glass stability, which in our simulations is controlled by the preparation temperature.", "Thus, one may also expect temperature-dependent TLS features.", "In the SI we show that when the ML model is trained at $T_\\mathrm {train}$ and deployed at $T_\\mathrm {prediction} \\ne T_\\mathrm {train}$ there is only a minor performance drop and the model is able to perform reasonably well.", "This implies that the model captures distinctive signatures of TLS that do not depend strongly on the preparation temperature.", "Yet, we also show in the SI that it is very easy to train another ML model to predict the temperature itself and eventually add it to the pipeline.", "Overall, the ML model is not only able to capture the different microscopic features of TLS, but it can also suggest what is the specific influence of each feature.", "To interpret this information we calculate their Shapley values [56] and we report them in Fig.", "REF .", "The features are ranked from the most important (top) to the less important (bottom) reporting the impact that they have on the model output (SHAP value), so that a positive SHAP value predicts on average a high value of the QS.", "We see that the most important feature is the classical energy splitting $\\Delta E$ and that a large splitting (red) corresponds to a large QS.", "The second most important feature is the largest single particle displacement $\\Delta \\vec{r}_0$ , which has to be larger than a threshold corresponding to $0.3\\sigma $ in order to predict a low QS.", "The total displacement $d$ is the third most important and shows a similar effect.", "All the remaining features have a less clear and much smaller effect on the model prediction and they only collaborate collectively to the final QS prediction.", "In the SI we show that it is possible to obtain very good performance even when removing some of the features with the largest Shapley values, which means that the ML interpretation is not unique.", "Figure: Importance of the different features for the ML model for the prediction of the quantum splitting, evaluated from the Shapley values  at T f =0.062T_f=0.062.", "The features are ranked from the most important (top) to the least important, where the impact that they have on the model output corresponds to their SHAP value.", "Each point corresponds to a single IS pair.", "The color coding shows the impact from high (red) to low (green) values of each specific input.According to this Shapley analysis we explain the ML prediction in the following way: the energy difference $\\Delta E$ between two IS is the main predictor for the quantum splitting, and it has to be small for TLS.", "Then, the largest particle displacement $\\Delta \\vec{r}_0$ is necessary to understand if the two IS are similar and what is their stability (we show in the SI that $\\Delta \\vec{r}_0$ is the most important feature to identify the glass stability).", "Then the total displacement $d$ complements this information and gives local information about the displacements of the other particles.", "Lastly, all the other inputs provide fine tuning to refine the final prediction and are discussed in more detail in the SI.", "Interestingly, in the SI we also show that even without the two most important 
features, the ML approach can still reasonably reveal TLS candidates." ], [ "Microscopic features of TLS", "We have shown that by following a ML-driven approach it is possible to collect a significant library of TLS for any preparation temperature.", "However it may be useful to discuss alternative strategies to rapidly identify TLS.", "In general, since TLS are extremely rare objects [7], [8], [10], [35], [11] a filtering rule is necessary in order to reduce the number of possible candidates.", "In particular, Ref.", "[33], [34], [36] proposed to use the transition matrix to exclude pairs that do not get explored consecutively.", "This is based on the assumption that DW (and consequently TLS) correspond to IS pairs that are close to each other and therefore during an exploration run should be detected successively, hence have a non zero transition rates.", "Instead, we prove in Fig.", "REF that a filter based on dynamical information only is a poor predictor.", "In Fig.", "REF (a) we report the distribution of dynamical transitions between two inherent structures $i$ and $j$ for TLS, DW and all the pairs, measured at $T_f=0.062$ .", "While the slowly decaying tail of TLS and DW suggests that they often exhibit a large transition rate, actually most of the TLS and DW are characterized by no transition at all between them.", "Our interpretation is that even though the transition towards the other side of the double well is favourable, the landscape has such a large dimensionality ($3N$ ) that even very favorable transitions may never take place in a finite exploration time.", "This issue can become more severe when the trajectories are shorter, for example if the exploration is performed in parallel.", "We confirmed this observation by the results that are already reported in Tab.", "REF , where we have used our iterative training approach to re-analyze the data of Ref.", "[33], including pairs with no transition, thus finding many more TLS.", "We conclude that even though the transition rates are the most important single characteristics for DW prediction (see Fig.", "REF ), a filter based solely on them is still missing a lot of interesting pairs and therefore is not the most efficient.", "In Fig.", "REF (b) we focus on the distribution of classical splitting $\\Delta E$ .", "When $\\Delta E$ is large we rarely see DW and TLS (red region).", "On the other hand, there are many pairs that show a very small $\\Delta E$ value (yellow region), but they are not more likely to be TLS.", "Ultimately we find a `sweet spot' (green region), where TLS are more frequent.", "The ML model also captures this feature, as we can see from the SHAP parameter of $\\Delta E$ in Fig.", "REF .", "The next most important feature according to the ML model is the largest particle displacement $\\Delta \\vec{r}_0$ , reported in Fig.", "REF (c).", "When it is larger than $\\sim 0.8\\sigma $ we rarely find TLS and DW, but we do not find them also when $\\Delta \\vec{r}_0<0.3\\sigma $ .", "The SHAP parameter in Fig.", "REF confirms that the ML model has discovered this feature.", "Finally in Fig.", "REF (d) we report the total displacement $d$ .", "If $d>0.9\\sigma $ the pair is so different that it is not likely to be a TLS or DW, while this probability increases for smaller $d$ .", "Overall, if it is necessary to identify TLS with a `quick and dirty' method, we propose to use a transition matrix to filter DW from non-DW, and then select a sweet spot for the energy splitting and the displacement for selecting optimal 
TLS candidates.", "Figure: Microscopic features of TLS and DW for ultrastable glasses at T f =0.062T_f=0.062.", "We report the probability distribution functions of: (a) number of transitions between the two inherent structures T ij T_{ij} and T ji T_{ji}, (b) classical energy splitting ΔE\\Delta E, (c) largest particle displacement Δr → 0 \\Delta \\vec{r}_0 and (d) total displacement dd.", "We color coded in red the regions of parameters where we do not expect to find TLS, which instead concentrate in the green regions.", "These green regions could serve as an alternative to rapidly identify TLS." ], [ "Conclusion", "In this paper we have introduced a machine-learning approach to explore complex energy landscapes, with the goal of efficiently locating double wells (DW) and two-level systems (TLS).", "We demonstrate that it is possible to use ML to rapidly estimate the quantum splitting of a pair of inherent structures (IS) and accurately predict if a DW is a TLS or not.", "We also show that our ML approach can be used to predict very accurately the energy barrier between pairs of IS.", "Overall, this approach allows us to collect a TLS library of unprecedented size that would be impossible to obtain without the support of ML.", "The ML model uses as input information calculated from a pair of inherent structures.", "After just a few minutes of supervised training it is able to infer with high accuracy the quantum splitting of any new pair of inherent structures.", "We establish that the ML model that we develop using the Autogluon library is fast and precise.", "Its efficiency allows us to introduce an iterative training procedure, where we perform a small batch of prediction and then retrain the model.", "After performing statistical analysis over the unprecedented number of TLS collected with our method, we have discovered that many DW and TLS are not consecutively explored during the dynamics.", "We then reanalyzed the data of Ref.", "[33] finding that more than half of the TLS had been missed, because the analysis of Ref.", "[33] was based on this dynamic assumption.", "Our ML approach not only finds more than twice the number of TLS from the same data, but it also requires significantly fewer calculations.", "Overall we conclude that ML significantly improves the existing approaches.", "It also shows that if it is not possible to use the ML procedure that we outline, an effective `quick and dirty' way to predict TLS should be: a) use $T_{ij},T_{ji}$ for predicting DW; b) for predicted DW use energy splitting between two IS for predicting TLS.", "We also discuss the microscopic nature of DW and TLS.", "We perform a Shapley analysis to dissect the ML model and understand what it learns, and we compare this with the extended statistics of TLS that we are able to collect.", "We find that the quantum splitting is mostly related to the classical energy splitting and the displacements of the particles.", "Overall, the Shapley analysis suggests that TLS are characterized by one particle that displaces between $0.3$ and $0.9$ of its size, while the total displacement and the energy difference between the two states remains small.", "The local structure around the particle is not as important, nor is the number of times we actually see this transition during the exploration dynamics.", "Lastly we investigate the effect that glass stability (equivalent to the preparation temperature in our simulations) has on double wells and TLS.", "The ML model learns that at higher temperatures all the pairs 
are characterized by more collective rearrangements, but TLS are similar for any preparation temperature.", "Ultimately, since our ML approach is extremely efficient in exploring the energy landscape and is easy to generalize to target any type of state-to-state transition (as we show for the energy barriers), we hope that our method will be used in the future to analyze not only TLS, but also many other examples of phenomena related to specific transitions between states in complex classical and quantum settings.", "We thank E. Flenner, G. Folena, M. Ozawa, J. Sethna, S. Elliott, G. Ruocco and W. Schirmacher for useful discussions.", "This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d’Avenir supervised by the Agence Nationale pour la Recherche.", "This project received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program, Grant No.", "723955 – GlassUniversality (FZ), and from the Simons Foundation (#454933, LB, #454955, FZ, #454951 DR) and by a Visiting Professorship from the Leverhulme Trust (VP1-2019-029, LB).", "CS acknowledges support from the Herchel Smith Fund and Sidney Sussex College, University of Cambridge." ], [ "Model", "We study a three-dimensional polydisperse mixture of particles interacting via the potential $v(r_{ij}) = \\epsilon (\\sigma _{ij}/r_{ij})^{12}+\\epsilon F(r_{ij}/\\sigma _{ij}),$ only if $r_{ij}=1.25\\sigma _{ij}$ .", "We use non-additive interactions $\\sigma _{ij} = 0.5(\\sigma _i + \\sigma _j ) (1 - 0.2|\\sigma _i - \\sigma _j |)$ .", "The function $F$ is a fourth-order polynomial ensuring continuity of $v(r_{ij})$ , its first and second derivatives, at the interaction cutoff.", "The particle diameters $\\sigma _i$ are drawn from the normalized distribution $P(0.73 < \\sigma <1.62) \\propto 1/\\sigma ^3$ .", "This model glassformer is efficiently equilibrated with the particle-swap Monte Carlo algorithm, with a fluid robust against fractionation and crystallization down to low temperature [21].", "We employ the swap Monte Carlo algorithm implemented in the LAMMPS package, with the optimal parameters provided in [22].", "With such parameters supercooled liquid configurations can be generated down to $T =$ 0.062, below the experimental glass transition $T_g=0.067$ .", "In this work, we use configurations prepared in equilibrium conditions at temperatures $T_f = 0.092, 0.07, 0.062$ ." 
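For readers who want to reproduce the model, the sketch below implements the pair interaction and the diameter distribution as described in this section. The even-powered form of the smoothing polynomial (constant, quadratic and quartic terms only) is our assumption, since the text only specifies a fourth-order polynomial enforcing continuity of $v$ and of its first two derivatives at the cutoff; the coefficients are therefore solved for numerically rather than hard-coded.

```python
import numpy as np

XC = 1.25  # interaction cutoff in units of sigma_ij (from the text)

def _smoothing_coeffs():
    """Solve for (c0, c2, c4) such that v, v' and v'' vanish at x = XC,
    assuming F(x) = c0 + c2*x**2 + c4*x**4 (our assumed even-powered form)."""
    x = XC
    A = np.array([[1.0, x**2, x**4],        # value
                  [0.0, 2*x,  4*x**3],      # first derivative
                  [0.0, 2.0,  12*x**2]])    # second derivative
    b = -np.array([x**-12, -12*x**-13, 156*x**-14])
    return np.linalg.solve(A, b)

C0, C2, C4 = _smoothing_coeffs()

def sigma_ij(si, sj, eps_na=0.2):
    """Non-additive cross diameter used in the text."""
    return 0.5 * (si + sj) * (1.0 - eps_na * abs(si - sj))

def pair_potential(r, si, sj, eps=1.0):
    """Soft-sphere pair energy; zero beyond the cutoff 1.25*sigma_ij."""
    s = sigma_ij(si, sj)
    x = r / s
    if x >= XC:
        return 0.0
    return eps * (x**-12 + C0 + C2 * x**2 + C4 * x**4)

def sample_diameters(n, smin=0.73, smax=1.62, seed=None):
    """Draw diameters from P(sigma) ~ 1/sigma**3 by inverse-transform sampling."""
    u = np.random.default_rng(seed).random(n)
    inv = smin**-2 - u * (smin**-2 - smax**-2)   # CDF inversion
    return inv**-0.5
```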
], [ "MD dynamics", "To explore the energy landscape of glasses generated at $T_f$ , we use classical Molecular Dynamics (MD) simulations.", "Glasses are first thermalized to $T_{MD}$ = 0.04 using a Berendsen thermostat.", "Such temperature is low enough to suppress diffusion, but sufficiently high for quick exploration of the energy landscape.", "We then run MD simulations in the NVE ensemble, using an integration time step of $dt =$ 0.01 (LJ time units).", "Configurations along the MD trajectory are used as the starting point for energy minimization via a conjugate gradient algorithm, which brings them to their inherent structure (IS).", "MD configurations are minimized every 20, 10, 5 time steps for $T_f$ = 0.062, 0.07, 0.092 respectively.", "The values are a compromise between too frequent minimization which sample the same IS many times consecutively, and too infrequent which would miss intermediate IS.", "For each initial configuration we perform 100,100,200 MD runs with different initial velocities for $T_f$ = 0.062, 0.07, 0.092, respectively.", "Each run lasts 40000, 100000, 10000 time steps.", "We performed our calculations on 200, 50, 5 glasses for $T_f$ = 0.062, 0.07, 0.092.", "For $T_f=0.092$ glasses we used a subset of the original data obtained in [33]." ], [ "NEB", "To analyze the transition between two IS we compute the multi-dimensional minimum energy path separating them, along with its one-dimensional potential energy profile.", "This is done by the nudged elastic band (NEB) method [49], [50] implemented in the LAMMPS package.", "We use 40 images to interpolate the minimal energy path, that are connected by springs of constant $\\kappa =1.0$ $\\epsilon \\sigma ^{-2}$ , and use the FIRE algorithm in the minimization procedure [49], [51]." 
], [ "Comparison between ensembled and non-ensembled models", "In the main manuscript we discuss how to transform the problem of computing the quantum splitting of two inherent structures into a supervised learning regression problem.", "Still, many machine learning models can in principle answer this question.", "We opted for the AutoGluon library [53] which is based on model ensembling to find an optimal solution.", "Model ensembling consists in averaging the output of different ML models and create an ensemble-averaged model that outperforms each one of its components.", "In particular, the AutoGluon library performs model ensembling in the form of stacking (stacking different layers of models that are applied in series) and bagging (training multiple instances of the same model on different subsets of data and then combining their prediction).", "In Fig.", "REF (a) we report the most performant ensembling after 20 hours of training, at $T_f=0.062$ , $M=5$ and $N_\\mathrm {samples}=10^4$ .", "The R2-score reported in the figure shows that the most performant model is a bagged ensemble of CatBoost, a gradient boosting method [54], which then corresponds to the final WeightedEnsemble for this training instance.", "Notice that in the main paper, we always use the best ensembled model.", "In addition to CatBoost, very good performances are also usually achieved by LightGBM, another gradient boosting method [57], and ExtraTrees, which is an extremely randomized tree ensemble [58].", "Overall, gradient boosting methods typically achieve the highest R2-score.", "Thus, in the main text when we report results obtained with the `best models', we refer to ensembles of gradient boosting methods assembled with Autogluon.", "Figure: Comparison of different ML model ensembles to predict the quantum splitting at T f =0.062T_f=0.062 using M=5M=5 and N samples =7·10 4 N_\\mathrm {samples}=7\\cdot 10^4.", "(a) R2-score of the test set obtained for an ensemble of models.", "The full collection was trained for 20 hours of CPU time.", "In the main manuscript we use the WeightedEnsemble.", "(b) An optimal Neural Network (MLP with 10 hidden layers of size 1.1·N features 1.1\\cdot N_\\mathrm {features}) does not produce good predictions (R2<0R2<0).We motivate our choice of complex ensemble model constructed with Autogluon by showing that simpler models do not perform at the same level.", "For the system at $T_f=0.062$ and in the optimal condition corresponding to $M=5$ and $N_\\mathrm {samples}=7000$ the simplest model to at least capture some correlation was a Multi-layer-perceptron (MLP) with 10 hidden layers of size $1.1\\cdot N_\\mathrm {features}$ , using the same input and output structure as in the main text.", "A first disadvantage of this approach is that several additional hyperparameters have to be selected, such as learning rate, number and size of hidden layers, activation function, etc.", "We varied these parameters and report in Fig.", "REF (b) results for the model with the best performance.", "This optimal MLP was trained until the iteration per epoch was consistently below $10^{-10}$ , corresponding to $\\sim 10$ hours.", "Still, the results in Fig.", "REF (b) demonstrate that the MLP performance is much worse than that of the ensemble models used in the main text.", "In addition, it is much slower to train.", "While in this case the predictions could be scaled by a constant to achieve significant improvement, the problem with such an approach is that it is not reliable, and this scaling 
constant is not known a priori adding an additional layer of complication.", "For this reason we prefer to use more complex machine-learning models." ], [ "Parameters to predict the quantum splitting", "A significant advantage of the model ensemble approach is that most of the hyperparameters are automatically optimized by the ensembling.", "Still, some degrees of freedom have to be fixed.", "In particular we need to fix the number of particles to be considered ($M$ ), how many samples to use ($N_\\mathrm {samples}$ ) and for how long to train the ensemble.", "In the next sections we motivate the choices reported in the main text.", "Figure: Optimizing the Quantum splitting predictor.", "Effect of (a) the number of particles MM, (b) number of samples N samples N_\\mathrm {samples} and (c) training time on performance, while other parameters assume their optimal value.", "We report the R2-score after training on the data of Ref. .", "Overall we conclude that optimal performances can be achieved for M=3M=3, ∼10 4 \\sim 10^4 samples in just 10 minutes of training.", "$M$ : input particles – In Fig.", "REF (a) we report the effect of $M$ , the number of particles considered in the ML procedure.", "A large $M$ hinders the performance of the ML model because it has to process a larger input vector.", "The peak performance is achieved at $M=3$ , which is a relatively low number of particles to define a TLS.", "This confirms that the participation rate in TLS is low [33].", "$N_\\mathrm {samples}$ : number of samples – To evaluate the optimal value of $N_\\mathrm {samples}$ we report in Fig.", "REF (b) the variation of the R2-score upon using more samples for training.", "Notice that we keep the other parameters in the optimal condition for each temperature.", "The shape of the curves let us conclude that a good number of training samples is $7000, 10000, 30000$ over a total of $14202, 23535, 117370$ for $T_f=0.062, 0.07, 0.092$ respectively.", "Notice that the results depend on the specific quality of the samples provided for training.", "It is possible to achieve better predictions by selecting better initial samples, more uniformly distributed, which are different from the random choice we make for simplicity.", "Still, the values reported consistently produce good results, independently from the initial sample composition.", "Training time – In Fig.", "REF (c) we report the effect of the training time on performance.", "The R2-score approaches the plateau in 10 minutes of training on a single CPU and reaches the final value after $10^3$ minutes ($\\sim 16$ hours).", "Overall, we find that in iterative training, where we retrain the model several times, a good balance between performance and speed can be reached by training for $\\sim 10$ minutes.", "Instead, in a more classic single training approach we suggest to train for $>10^3$ minutes.", "Performing the training in parallel can further reduce the training time." 
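As an illustration of the training setup discussed in this section, the snippet below sketches how a weighted AutoGluon regressor could be configured with the $w_i=1/E_{qs}$ weights and a 10-minute budget. The column names and the use of the sample_weight option are our choices for the sketch, not necessarily the authors' exact configuration.

```python
import pandas as pd
from autogluon.tabular import TabularPredictor

def train_qs_predictor(train_df: pd.DataFrame, time_limit=600):
    """train_df: one row per IS pair, feature columns plus the target 'E_qs' (>0)."""
    df = train_df.copy()
    df["w"] = 1.0 / df["E_qs"]          # give more importance to low splittings
    predictor = TabularPredictor(
        label="E_qs",
        eval_metric="r2",
        sample_weight="w",              # column holding the per-sample weights
    )
    predictor.fit(df, time_limit=time_limit, presets="best_quality")
    return predictor

# Usage sketch, with an existing DataFrame of features and exact splittings:
# predictor = train_qs_predictor(train_df)
# qs_pred = predictor.predict(new_pairs_df)   # ~1e-4 s per pair once trained
# print(predictor.leaderboard())              # compares and ranks single models
```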
], [ "Parameters to identify double wells", "The ML model has to process a large number of unfiltered minima obtained during the exploration simulation.", "While we can anticipate that only a small fraction of inherent structure pairs will be a TLS, we also know that the minimum energy path connecting any pair of IS will not form a DW and as it is likely to contain intermediate minima.", "To exclude those non-DW pairs, we train a classifier using as input the same quantities that we measure to predict the QS, as described in the main text.", "In Fig.", "REF we report the results of a hyperparameter scan at $T_f=0.062$ .", "These results are similar to those obtained for the QS prediction suggesting that at $T_f=0.062$ we can use (a) $M=3$ , (b) $10^2$ s of training and (c) $\\sim 10^4$ samples in order to achieve $>95\\%$ accuracy, measured over a validation set.", "We conclude that we can use the same hyperparameters for the DW classifier and the QS predictor." ], [ "Removing irrelevant features", "After finding the optimal parameters to train the ML model, we can extract the specific effect and overall importance of each input parameter, as well as the score drop when each of them is removed.", "Since each additional feature corresponds to additional computational time and memory, we investigate which features are crucial, and which ones can be excluded to make the ML pipeline faster and more efficient without affecting performance.", "Figure: Importance of different features on the quantum splitting predictor.", "(a) Performance drop when each specific feature is removed, normalized to the first most important.", "Different colors correspond to different glass preparation temperatures.", "(b) Shapley values calculated at T f =0.062T_f=0.062.", "In general, at any temperature the most important feature is the energy difference (ΔE\\Delta E) of the two IS, which is followed by the value of the displacement of the MM particles.", "The Shapley values on the right also report the effect of the specific features: the splitting ΔE\\Delta E has to be small (green) in order to predict low QS (i.e.", "negative SHAP), while instead the displacements have to be large (red) in order to point towards low QS.In Fig.", "REF (a) we report the feature importance (in log scale) after a single iteration of training containing all the data from Ref. 
[33].", "Different colors refer to glasses prepared at different temperatures.", "Independently from the temperature, the most important features are the energy difference $\\Delta E$ between the two IS, followed by the total displacement of particles $\\sum _i \\Delta \\vec{r}_i$ .", "The third most important information is the positions of the particles that displaced the most $\\sum _i | \\vec{r}_0 -\\vec{r}_i|$ .", "After them, we find that the arrangement of the first shell of neighbors represented by the $q$ parameters and the particle sizes $\\sigma _i$ are insignificant (and so is their variation between the two IS of the pair $\\Delta \\cdot $ reported in Fig.", "REF ), since their importance is more than two orders of magnitude smaller than the one of the energy and displacements.", "Since the $q$ parameters are also the slowest to compute, we decided to not include them in the final pipeline reported in the main manuscript.", "Next, to quantify the role of those inputs, we calculate their Shapley values [56] shown in Fig.", "REF (b) for $T_f=0.062$ .", "The color codes for the value taken by the specific feature (red when the value is high, green when low) and reports its scaled impact on the model output.", "The x axis reports the Shapley value, where SHAP$>0$ implies that the specific feature is pushing towards predicting a high QS, while SHAP$<0$ indicates that the feature promotes low QS values.", "While no input feature alone can predict a TLS (because $|$ SHAP$|$ is small) there are some visible trends: (i) the energy asymmetry is the most important feature and should be small in order to predict a low QS, (ii) the particle displacements have to be larger than a threshold in order to predict low QS.", "According to these results we rationalize the ML prediction.", "The energy difference between two IS is the main predictor for the quantum splitting, which is never too large for DW, then the displacements are necessary to understand if the two IS are similar and what their stability is (see the temperature classification section in the SI).", "The displacement nucleus size complements the information contained in the displacements and gives local information about the participation ratio.", "Finally, all the other features do not provide any improvement.", "Since the bond order parameters are computationally expensive to calculate, we do not include them in the final ML approach we propose in the main text.", "The performance reported in the main text confirms that our choice is justified." 
], [ "Reducing the number of features", "We justify the choice of features discussed in sec.", "IVa and Fig.", "2 by presenting the performance of our ML model as a function of feature number.", "In Tab.", "REF we rank the features according to their Shapley values (see Fig.", "6).", "We report (blue line) in Fig.", "REF the accuracy of the DW classifier (a) and the QS predictor (b) as a function of number of features, following the Shapley ranking of Tab.", "REF .", "The DW classifier reaches its best accuracy with six features.", "In the initial study Ref.", "[33], [36], the matrix $\\mathbf {T}$ was the only information used to analyze the pair of IS.", "Here, the classifier already reaches $\\sim 90\\%$ accuracy using the transition matrix only (two features).", "We add (orange line) the performances when we exclude $\\Delta E, T_{ij}$ and $T_{ji}$ , which are the most important features overall.", "Surprisingly, the DW classifier still reaches its maximum accuracy with two features ($\\Delta \\vec{r}_2$ and $|\\vec{r}_0-\\vec{r}_2|$ ) as highlighted in the inset.", "The QS predictor shows instead very good predictions even using a single feature ($\\Delta E$ in the blue curve, or $\\Delta \\vec{r}_0$ in the orange).", "Table: Ranking the features used in the main manuscript by their Shapley values.Figure: Effect of the number of features.", "Accuracy of the DW classifier (a) and Pearson correlation score of the QS predictor (b), as a function of the number of features used.", "Along the blue curves the features are ranked by their importance using their Shapley values, so the first features to be used are the most important.", "For the orange curve we exclude ΔE\\Delta E, T ij T_{ij} and T ji T_{ji} to evaluate the performances of the ML model without the features with the best Shapley values." 
], [ "Iterative training", "The standard supervised learning approach consists in collecting a significant amount of data, then using them to train the ML model and finally use the model predictions.", "While overall very powerful, this scheme is not optimal when the goal of the model is to drive the exploration in a space much larger than the available data.", "This is the case of our main study where we employ the ML model to identify IS pairs with low QS.", "In order to make our approach more efficient and generalizable we developed the iterative training procedure.", "Figure: Performance of the iterative training procedure at T f =0.062T_f=0.062 for the data collected in the main manuscript.", "Starting from K 0 =5000K_0=5000 samples we perform iterations of K i =500K_i=500 predictions for 11 iterations.", "In (a) we report the cumulative number of double wells (DW), two-level systems (TLS) and non double-wells (Non-DW).", "In (b) we break down this result by color coding the confusion matrix of the ML model at each iteration of iterative training.We start the iterative training procedure from a sample of $K_0$ pairs for which we calculate the QS.", "We empirically find that $K_0\\sim 5000$ achieves a good balance between precision and time.", "This sample has to be balanced between DW and non-DW, but there is no need to include TLS in the initial sample.", "It is possible to start from a smaller $K_0$ , at the cost of a performance drop in the first few iterations of training.", "From the initial sample we perform a first training that takes 10 minutes for the DW classifier and 10 minutes for the QS predictor.", "We then use the model to predict the $K_i=500$ IS pairs with the lowest QS.", "We report the cumulative number of TLS, DW, and non-DW at each iteration in Fig.", "REF (a).", "From Fig.", "REF (b) we see that during the first iteration most of the $K_i=500$ best candidates are actually non-DW, so the model is performing poorly.", "This is the reason why we suggest to perform only $K_i=500$ NEBs for each iterations, otherwise the first iteration would lead to wasting time to perform calculations over non-DW.", "On the other hand, in the first iteration 72 TLS are already found by running 500 NEBs.", "This is to be compared with Ref.", "[33] in which 61 TLS were found by running $>14000$ NEBs.", "After the first retraining (during iteration 2) the performance of the ML model is already excellent and less than $30\\%$ of the 500 best pairs are non-DW, while 134 are newly found TLS.", "We also used our iterative training to reprocess the data of Ref.", "[33], and we report the results in Table 1 (main text).", "We show in Fig.", "REF a detailed analysis of the procedure at $T_f=0.062$ .", "While the standard approach was able to identify 61 TLS running $>14000$ NEB+Schrödinger calculations, iterative training finds 156 TLS, by running only 2500 NEBs.", "This confirms that more than half of the total TLS were hidden among the pairs discarded by Ref. 
[33].", "In details, Fig.", "REF (a) shows the cumulative number of DW and TLS that we find, while Fig.", "REF (b) reports the confusion matrix for the different steps of iterative training.", "Figure: Reprocessing the data of Ref.", "at T f =0.062T_f=0.062 with iterative training.", "In (a) we report the cumulative number of double wells (DW), two-level systems (TLS) and non double-wells (Non-DW).", "While Ref.", "(red dashed line) identified 61 TLS running >14000>14000 NEBs, iterative training finds 156 TLS from 2500 NEBs.", "In (b) we report the confusion matrix of the 5 steps of iterative training that we run.Overall, these results demonstrate that the ML approach is not only faster than a manual filtering rule based on the transition matrix, but also much more effective.", "The pool of IS pairs excluded from the analysis in the original approach effectively contains a significant number of TLS.", "In conclusion, a ML driven exploration is not only more efficient than manual filtering, but also necessary to capture the correct statistics." ], [ "Temperature and stability", "Our ML approach is able to predict the quantum splitting of a pair of IS from static information.", "It is known that TLS have different features depending on their preparation temperature, or glass stability [33].", "Here we address the following questions: (i) how difficult is it for a machine to distinguish data corresponding to different temperatures, (ii) what are the most important features for this task, and (iii) does our ML approach learn temperature-independent features?" ], [ "Temperature classification", "To understand how TLS and IS features evolve with temperature, we trained a multi-layer-perceptron (MLP) to classify the temperature corresponding to a specific pair.", "We find that it is possible to rapidly train a classifier reaching accuracy $>95\\%$ .", "Depending on the value of $M$ , the input layer is composed by $N_\\mathrm {features}$ neurons, where the features are explained in the main text.", "After the input layer, there are $n$ hidden fully connected layers, all of the same size $s$ .", "The activation function for each neuron-neuron connection is a ReLu function.", "The last layer is composed of 3 neurons that represent the probability that a given input belongs to one of the three classes: $T_f=0.062$ or $T_f=0.07$ or $T_f=0.092$ .", "The performance of the MLP are evaluated by measuring the accuracy, which is the percentage of the corrected predictions.", "In Fig.", "REF (a) we report the effect of increasing the size of the hidden layers, while in Fig.", "REF (b) we report the effect of a larger number of hidden layers.", "Figure: Temperature classifier accuracy as a function of number of hidden layers (a) and their size (b).", "We report results for M=5M=5.", "Results suggest that already with 2 hidden layers of 60 neurons it is possible to achieve an accuracy above 95%95\\%.Our results show that a MLP with 2 hidden layers of 60 neurons has a $95\\%$ accuracy after less than 1h of supervised training.", "The answer to question (i), is that one can identify the temperature at which a pair of IS was obtained." 
], [ "Static signatures of glass stability", "We answer question (ii) by measuring the Shapley (SHAP) values from the temperature classifier.", "In Fig.", "REF (a) we report the SHAP value of the most important input features for the MLP prediction that considers only the $M=5$ particles that displaced the most.", "The smallest ($i=(M-1)=4$ ) and the largest ($i=0$ ) particle displacements emerge as the most important input, noted $\\Delta \\vec{r}_4$ and $\\Delta \\vec{r}_0$ , respectively.", "Their average effect on the model prediction is shown in Fig.REF (b,c).", "The particle that displaced the most ($i=0$ ) shows a large displacement at low preparation temperature, while the particle that displaced the least ($i=M-1$ ) exhibits a large displacement at high temperature.", "Our results suggest that higher temperatures are characterized by more collective rearrangements, leading to large $\\Delta \\vec{r}_4$ .", "In glasses prepared at low temperature instead, transitions are characterized by a particle displacing significantly more than the others.", "Figure: Microscopic differences originating from different temperature preparation/glass stability, quantified using the Shapley values for the temperature classifier.", "(a) Summary of the SHAP values for the 9 most important input features, measured for a subset of 1000 pairs processed by the TT-classifier.", "Predicted glass preparation temperature as a function of (b) Δr → 4 \\Delta \\vec{r}_4, and (c) Δr → 0 \\Delta \\vec{r}_0, with all other inputs taking their average value." ], [ "Transferability and crossvalidation", "We address question (iii) by testing how our model performs when trained at $T_\\mathrm {train}$ and deployed to predict the quantum splitting of pairs at $T_\\mathrm {predict}\\ne T_\\mathrm {train}$ .", "In Fig.", "REF each column corresponds to a training temperature: $T_\\mathrm {train}=0.092$ (left), $0.07$ (middle) and $0.062$ (right).", "Then, computed quantum splittings are compared with ML prediction made on samples obtained at $T_\\mathrm {predict}$ , decreasing from top to bottom.", "We see in Fig.", "REF that the predictive power of a model trained at a different temperature is slightly lower compared to the model trained at the same temperature (Fig.", "REF of main).", "Still, the transferability of the model is good, especially when the model is trained at $T_\\mathrm {train}>T_\\mathrm {predict}$ .", "This situation is the most useful, since it is easier to obtain data at higher $T$ .", "We conclude that if not enough data is available at low temperature, it is possible to rely on model transferability.", "This relies on the fact that TLS share some general temperature-independent features.", "In summary, we have seen that IS and TLS have different microscopic features when generated from different stabilities.", "It is then optimal to train the ML model at a fixed temperature only.", "If no easier way to measure $T$ /stability are available, it is possible to use ML to classify the stability of each IS by adding another block to the workflow, as reported in the main manuscript.", "Alternatively, at the cost of a finite accuracy drop it is even possible to transfer the model predictions at different temperatures and thus train where data collection is fast and easy and apply it where it is not.", "Figure: Transferability and crossvalidation of the machine learning model.", "Exact quantum splitting against ML prediction.", "The quantum splitting predictor is trained at T train T_\\mathrm {train} and 
deployed at $T_\mathrm {prediction}\ne T_\mathrm {train}$.", "From left to right: $T_\mathrm {train} = 0.092, 0.07, 0.062$.", "From top to bottom, decreasing $T_\mathrm {prediction}$.", "Performances are not on par with Fig.", ", but good in particular when $T_\mathrm {train}>T_\mathrm {predict}$, which is of practical interest." ] ]
2212.05582
[ [ "A Euclidean comparison theory for the size of sets" ], [ "Abstract We discuss two main ways in comparing and evaluating the size of sets: the \"Cantorian\" way, grounded on the so called Hume principle (two sets have equal size if they are equipotent), and the \"Euclidean\" way, maintaining Euclid's principle \"the whole is greater than the part\".", "The former being deeply investigated since the very birth of set theory, we concentrate here on the \"Euclidean\" notion of size (numerosity), that maintains the Cantorain defiitions of order, addition and multiplication, while preserving the natural idea that a set is (strictly) larger than its proper subsets.", "These numerosities satisfy the five Euclid's common notions, and constitute a semiring of nonstandarda natural numbers, thus enjoying the best arithmetic.", "Most relevant is the natural set theoretic definition} of the set-preordering: $$X\\prec Y\\ \\ \\Iff\\ \\ \\exists Z\\ X\\simeq Z\\subset Y$$ Extending this ``proper subset property\" from countable to uncountable sets has been the main open question in this area from the beginning of the century." ], [ "Introduction", "In the history of Mathematics the problem of comparing (and measuring) the size of mathematical objects has been extensively studied.", "In particular, different methods have been experienced for associating to sets suitable kinds of numbers.", "A satisfactory notion of measure of size should abide by the famous five common notions of Euclid's Elements, which traditionally embody the properties of any kind of magnitudines (see [13]): Things equal to the same thing are also equal to one another.", "And if equals be added to equals, the wholes are equal.", "And if equals be subtracted from equals, the remainders are equal.", "Things [exactly] applying onto one another are equal to one another.", "Here we translate $\\epsilon \\phi {\\alpha }\\rho \\mu o\\zeta o\\nu \\tau \\!", "{\\alpha }$ by “[exactly] applying onto\", instead of the usual “coinciding with\".", "As pointed out by T.L.", "Heath in his commentary [13], this translation seems to give a more appropriate rendering of the mathematical usage of the verb $\\epsilon \\phi {\\alpha }\\rho \\mu o\\zeta \\epsilon \\iota \\nu $ .", "The whole is greater than the part.", "Following the ancient praxis of comparing magnitudes of homogeneous objects, a very general notion of size of sets, so as to comprehend cardinality, measure, probability, numerosity, etc.", "whose essential property is general comparability of sizes, can be given through a total preordering Recall that a preordering is a reflexive and transitive (binary) relation; it is total if any two elements are comparable; the corresponding equivalence $\\simeq $ is $A\\simeq B\\ \\mbox{$\\Leftrightarrow $}\\ A\\preceq B\\preceq A$ ; the corresponding strict inequality $\\prec $ is $A\\prec B\\ \\mbox{$\\Leftrightarrow $}\\ A\\preceq B\\lnot \\preceq A$ .", "$\\preceq $ of sets according to their sizes, with the intended meaning that equinumerosity, i.e.", "equality of size, is the corresponding equivalence relation $\\ A\\simeq B\\ \\mbox{$\\Longleftrightarrow $}\\ A\\preceq B\\preceq A$ , so the first Euclidean common notion (E1)       Things equal to the same thing are also equal to one another is subsumed, because $\\simeq $ is an equivalence.", "This comparison should naturally extend set-theoretic inclusion (and be consistent with equinumerosity).", "So one has to assume that $A\\preceq B\\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\ \\exists 
A^{\\prime }, B^{\\prime } \\big (A\\simeq A^{\\prime }\\subseteq B^{\\prime }\\simeq B\\big ),$ Clearly.", "the equivalence classes modulo $\\simeq $ ,  called the magnitudines of the theory, are totally ordered by the ordering induced by $\\preceq $ .", "In set theory the usual measure for the size of sets is is given by the classical Cantorian notion of “cardinality”, whose ground is the so called Hume's Principle Two sets have the same size if and only if there exists a biunique correspondence between them.", "This principle amounts to encompass the largest possible class of “exact\" applications (congruences) admissible in the fourth Euclidean notion, namely all bijections.", "This assumption might seem natural, and even implicit in the notion of counting; but it strongly violates the equally natural Euclid's principle applied to (infinite) sets A set is greater than its proper subsets, which in turn seems implicit in the notion of magnitudo, even for sets.", "So one could distinguish two basic kinds of size theories for sets: A size theory is Cantorian if, for all $A,B$ : $(\\mathsf {{HP}})$ ${(Hume^{\\prime }s Principle)~~~}\\ A\\preceq B\\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\ \\exists f:A\\rightarrow B$ 1-to-1 (Cantor-Bernstein's theorem making this an equivalent formulation) A size theory is Euclidean if, for all $A,B$ : $(\\mathsf {{EP}})$ ${(Euclide^{\\prime }s Principle)}\\ \\ \\ A\\prec B \\ \\mbox{$\\Longleftrightarrow $}\\ \\ \\exists A^{\\prime }, B^{\\prime } \\big (A\\simeq A^{\\prime }\\subset B^{\\prime }\\simeq B\\big ).$ (Remark the use of proper inclusion in defining strict comparison of sets) The spectacular development of Cantorian set theory in the entire twentieth century has put Euclid's principle in oblivion.", "Only the new millennium has seen a limited resurgence of proposals of so called “numerosities\" including it, at the cost of severe limitations of Hume's principle (see the excellent survey [18] and the references therein).", "The main value of the Euclidean theories is the excellent arithmetic they allow, namely that of an ordered semiring, to be contrasted with the awkward cardinal arithmetic.", "However, the main problem arising in the Euclidean theories lies in the fact that the preordering of sets, defined by the natural set theoretic characterization (EP), should induce the “algebraic\" total ordering of the semiring of numerosities, which in turn is equivalent to the subtraction property $ {\\sf {(Diff)}}~~~~~~~~~~A\\prec B\\ \\mbox{$\\Longleftrightarrow $}\\ \\exists C\\ne \\emptyset \\ \\big ( A\\cap C=\\emptyset \\ \\&\\ A\\cup C\\simeq B\\big ).~~~~~~~~$ A notion of “number of elements” (numerosity) that completely fulfills the Euclidean principle (EP) has been found up to now only for special countable sets, in [1], and generalized later to point sets on countable lines in [7], [14].", "The consistency of the full principle (EP) for uncountable sets appeared problematic from the beginning, and this question has been posed in several papers (see [2], [3], [10]), where only the literal set-theoretic translation of the fifth Euclidean notion, i.e.", "the weaker principle requiring the sole left pointing arrow of $ \\textsf {(EP)}$ , $ \\sf {(E5)} {~~~~~~~~~~~~~~~~~~~~~~~~~~ ~}A\\subset B\\ \\ \\mbox{$\\Longrightarrow $}\\ \\ A\\prec B{~~~~~~~~~~~~~~~~~~~~~~~~~},$ has been obtained.", "(On the other hand, it is worth recalling that also the totality of the Cantorian weak cardinal ordering had to wait a couple of decades before Zermelo's new 
axiom of choice established it!)", "In this paper we present a Euclidean numerosity theory for suitable collections ${\\mathbb {W}}$ of point sets of finite dimensional spaces over lines ${\\mathbb {L}}$ of arbitrary cardinality, satisfying the full principle (EP); this theory might be extended to the whole universe of sets $V$ following the procedure outlined in [2], under mild set theoretic assumptions, e.g.", "Von Neumann's axiom, that gives a (class-)bijection between the universe $V$ and the class $Ord$ of all ordinals." ], [ "Euclidean or Aristotelian comparison theories", "As pointed out in the introduction, in comparing magnitudes of homogeneous objects, the essential property is the general comparability of sizes, but a Euclidean comparison $\\prec $ among sets should naturally extend set theoretic inclusion $\\subset $, according to the fifth Euclidean common notion, (and be consistent with equinumerosity).", "Definition 1.1 Call Euclidean comparison theory a pair $( \\mathbb {W},\\preceq ) $ where $\\mathbb {W}$ is a family of sets closed under binary unions and intersections, subsets, and also under Cartesian products; $\\,\\preceq \\,$ is a total preordering such that, for all $A, B\\in \\mathbb {W},$ ${\\textsf {(EP)}} ~~~~~~~~~~~~~~~~~~~~~~A\\prec B\\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\, \\exists B^{\\prime }\\in {\\mathbb {W}}\\ A\\subset B^{\\prime }\\simeq B~~~~~~~~~~~~~~~~~~~~~~~~~$ The universe of the theory $\\left( \\mathbb {W},\\preceq \\right) $ is the union set $W=\\bigcup \\mathbb {W},$ and a permutation of the universe ${\\sigma }\\in {\\mathfrak {S}}(W)$  ${\\mathfrak {S}}(X)$ denotes the group of all permutations of a set $X$ .", "is a congruence for $({\\mathbb {W}},\\preceq )$ ) if $\\ {}~~~~~~~~~~~~~~~~~~~~~~~for \\ all\\ A\\in \\mathbb {W}\\ \\ {\\sigma }(A)\\in {\\mathbb {W}}\\ {and} \\ {\\sigma }(A)\\simeq A.~~~~~~~~~~~~~~~~~~~~~$ The quotient set $\\ {\\mathfrak {N}}=\\mathbb {W}/\\simeq \\,$ is the set of numerosities of the theory, and the canonical map $\\ {\\mathfrak {n}}:{\\mathbb {W}}\\rightarrow {\\mathfrak {N}}$ is the numerosity function of the theory.", "Clearly ${\\mathfrak {N}}$ is totally ordered by the ordering induced by $\\preceq $ ." 
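To make Definition 1.1 concrete, the following small remark (ours, using only the definition above) records how the literal form of the fifth common notion already follows from (EP) by choosing the witness $B^{\prime }=B$.

```latex
% A minimal consequence of Definition 1.1 (our own remark):
% proper inclusion forces strictly smaller size.
\begin{proposition}
Let $(\mathbb{W},\preceq)$ be a Euclidean comparison theory and let
$A,B\in\mathbb{W}$ with $A\subset B$. Then $A\prec B$.
\end{proposition}
\begin{proof}
Take $B'=B$ as witness in \textsf{(EP)}: indeed $A\subset B'=B$ and
$B'\simeq B$ by reflexivity of $\simeq$, so the right-hand side of
\textsf{(EP)} holds, whence $A\prec B$.
\end{proof}
```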
], [ "Natural congruences", "First of all, once the general Hume's principle cannot be assumed, the fourth Euclid's common notion (E4)     Things exactly applying onto one another are equal to one another        is left in need of an adequate interpretation that identifies an appropriate class of natural exact applications that preserve size (called congruences).", "So we have to isolate a family of congruences, for the considered notion of size, as a subset ${\\mathfrak {C}}(W)$ of the group of all permutations ${\\mathfrak {S}}(W)$ of the universe $W$ .", "Call “natural transformation” of tuples any biunique correspondence $\\tau $ that preserves support, i.e.", "the set of components of a tuple: $supp(a_1,\\ldots ,a_n)=\\lbrace a_1,\\ldots ,a_n\\rbrace $ ): these applications are useful when comparing sets of different dimensions, so they seem a good basis to be put in ${\\mathfrak {G}}(W)$ by any Euclidean theory involving sets of tuples.", "These transformations may not preserve dimension.", "When dimension is relevant, e.g.", "when the diagonal $D_A=\\lbrace (a,a)\\mid a\\in A\\rbrace $ shoud require a different size from $A$ , one could restrict consideration to permutation of components and/or rearranging of parentheses,  i.e.", "$\\tau :(a_1,\\ldots ,a_n)\\mapsto [a_{{\\sigma }1},\\ldots ,a_{{\\sigma }n}]$ , where $[\\ldots ]$ represents any distribution of parentheses.", "So we assume that all natural transformation of tuples belong to ${\\mathfrak {G}}(W),$ and postulate ${\\sf {(CP)}}\\, ({Congruence\\ Principle})\\ \\ \\tau \\in {\\mathfrak {C}}(W)\\ \\mbox{$\\Longrightarrow $}\\ \\forall A\\in \\mathbb {W}\\, \\big (\\tau [A]\\in {\\mathbb {W}}\\ \\mathrm {\\&} \\ \\tau [A]\\simeq A\\big )$" ], [ "Addition of numerosities", "In general one wants not only compare, but also add and subtract magnitudines, according to the second and third Euclidean common notions (E2)          ... if equals be added to equals, the wholes are equal.", "(E3)   ... 
if equals be subtracted from equals, the remainders are equal.", "When dealing with sets, it is natural to take addition to be (disjoint) union, and subtraction to be (relative) complement, so it is convenient to call additive a Euclidean comparison theory verifying the following principle for all $A,B\\in {\\mathbb {W}}$ : $(\\mathsf {{AP}})$ (Aristotle's Principle)  This principle has been named Aristotle's Principle in [7], [14], because it resembles Aristotle's preferred example of a “general axiom\".", "It is especially relevant in this context, because (AP) implies both the second and the third Euclidean common notions, and also the fifth provided that no nonempty set is equivalent to $\\emptyset $ , see below.", "${~~~~~~~~~~~}A\\simeq B\\ \\Longleftrightarrow \\ A\\setminus B \\simeq B\\setminus A.$ This principle yields both the second and third Euclidean common notions, namely Proposition 1.2 ([10], [7]) ${}$ Assuming $\\textsf {(AP)}$ , the following properties hold for all $A,B,A^{\\prime },B^{\\prime }\\in {\\mathbb {W}}$ : $(\\mathsf {{E2}})$ ${~~~~~} A\\simeq A^{\\prime },\\ B\\simeq B^{\\prime }, A\\cap B=A^{\\prime }\\cap B^{\\prime } =\\emptyset \\ \\ \\mbox{$\\Longrightarrow $}\\ \\ A\\cup B\\simeq A^{\\prime }\\cup B^{\\prime }$ $(\\mathsf {{E3}})$ ${~~~~~~~~} B\\subset A,\\ B^{\\prime }\\subset A^{\\prime },A\\simeq A^{\\prime },\\ B\\simeq B^{\\prime }\\Longrightarrow A\\backslash B\\simeq A^{\\prime }\\backslash B^{\\prime }.$ We omit the proofs, which would be identical to those given in [10], [7].", "$\\Box $ Remark 1.1 The sole principle (AP) is not enough to provide a Euclidean comparison: for instance, it is fulfilled by both Peano and Lebesgue measures, by taking $\\mathbb {W}$ to be a suitable subset of $\\ \\bigcup _{n\\in \\omega }\\mathbb {R}^{n}$ .", "In general, various probability spaces might be taken to be ${\\mathbb {W}}$ , but these theories are non-Euclidean, unless the probability is regular (i.e.", "only the whole space has probability 1).", "Actually, by simply adding to (AP) the natural principle $\\textsf {(E0)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \\ A\\simeq \\emptyset \\ \\ \\mbox{$\\Longrightarrow $}\\ \\ A=\\emptyset ,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ one obtains the literal set theoretic version of the fifth Euclidean notion, namely ${ \\sf {(E5)} }{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~}A\\subset B\\ \\ \\mbox{$\\Longrightarrow $}\\ \\ A\\prec B{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}.$ Clearly these assumptions yield that $\\emptyset $ is the unique least set, and that all singletons have equal size and come immediately after $\\emptyset $ , for no nonempty set can be smaller than a singleton.", "Moreover the successor of the size of a given set is obtained by adding a single element, and similarly the immediate predecessor is obtained by removing one element, so the induced total ordering is discrete.", "Hence we may assume without loss of generality that all finite sets receive their number of elements as size (thus the natural numbers ${\\mathbb {N}}$ can be viewed as an initial segment of all numerosities).", "Call Aristotelian a size theory satisfying both principles (AP) and (E0):  To be sure, Aristotle would never have accepted a theory where $1=2$ !", "the important feature of Aristotelian theories is that (AP) and (E0) are necessary and sufficient conditions for obtaining an excellent additive arithmetic of the infinite: simply define an addition of numerosities, i.e.", "equivalence classes modulo $\\simeq $ , by means of disjoint union
${\\mathfrak {n}}(A)+{\\mathfrak {n}}(B)={\\mathfrak {n}}(A\\cup B)\\ \\ for\\ all \\ \\ A,B\\in {\\mathbb {W}}\\ such\\ that\\ \\ A\\cap B=\\emptyset ,$ then Theorem 1.2 Let $\\left( \\mathbb {W},\\preceq \\right) $ be an Aristotelian comparison theory.", "Then an addition can be defined on the corresponding set of numerosities ${\\mathfrak {N}}=\\mathbb {W}/\\simeq \\,$   in such a way that ${\\mathfrak {n}}(A)+{\\mathfrak {n}}(B)={\\mathfrak {n}}(A\\cup B)+{\\mathfrak {n}}(A\\cap B)\\ \\ for\\ all \\ \\ A,B\\in {\\mathbb {W}},$ $ \\exists C\\lnot \\subseteq A \\ ( {\\mathfrak {n}}(A)+{\\mathfrak {n}}(C)={\\mathfrak {n}}(B))\\ \\ \\mbox{$\\Longrightarrow $}\\ \\ {\\mathfrak {n}}(A)<{\\mathfrak {n}}(B),$ and then $({\\mathfrak {N}},+,0,\\le )$ is a commutative cancellative zerosumfree monoid whose algebraic ordering  Recall that any cancellative zerosumfree monoid comes with the algebraic partial ordering $\\le $ defined by letting $\\ a\\le b\\ \\Longleftrightarrow \\exists c.\\,(\\,a+c=b\\,);$ this ordering is total if and only if the monoid is semisubtractive, i.e.", "for all $a,b$ there exists $c$ such that either $a+c=b$ or $b+c=a$ .", "Hence the preordering $\\preceq $ on sets induces the algebraic ordering of ${\\mathfrak {N}}$ if and only if the latter is total.", "$\\le $ is weaker than that induced by the total preordering $\\preceq $ .", "Therefore ${\\mathfrak {N}}$ can be isomorphically embedded into the nonnegative part $\\,{\\mathfrak {A}}^{\\ge 0}$ of an ordered abelian group $\\,{\\mathfrak {A}}$ .", "The monoid ${\\mathfrak {N}}$ is semisubtractive if and only if $\\left( \\mathbb {W},\\preceq \\right) $ is Euclidean, and then $ \\exists C\\lnot \\subseteq A \\ ( {\\mathfrak {n}}(A)+{\\mathfrak {n}}(C)={\\mathfrak {n}}(B))\\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\ {\\mathfrak {n}}(A)<{\\mathfrak {n}}(B),$ so $\\preceq $ induces the algebraic ordering of ${\\mathfrak {N}}$ , and $\\,{\\mathfrak {N}}\\cong {\\mathfrak {A}}^{\\ge 0}$ .", "Proof.", "First of all, $({\\mathfrak {N}},+,0,\\le )$ is a commutative cancellative monoid, because addition is associative and commutative by definition, and the property (E3), that follows from (AP), provides the cancellation property ${\\mathfrak {n}}(A)+{\\mathfrak {n}}(C)={\\mathfrak {n}}(B)+{\\mathfrak {n}}(C)\\ \\ \\mbox{$\\Longrightarrow $}\\ \\ {\\mathfrak {n}}(A)={\\mathfrak {n}}(B).$ ${\\mathfrak {N}}$ is zerosumfree because $0={\\mathfrak {n}}(\\emptyset )$ is the unique additively neutral element, and so $\\ {\\mathfrak {n}}(A)+{\\mathfrak {n}}(B)=0={\\mathfrak {n}}(\\emptyset )\\ \\ \\mbox{$\\Longrightarrow $}\\ \\ A=B=\\emptyset .$ Finally (EP) is equivalent to semisubtractivity, because given $A,B\\in {\\mathbb {W}}$ there exists $C\\in {\\mathbb {W}}$ , disjoint from both $A$ and $B$ , such that either $A\\cup C\\simeq B$ or $B\\cup C\\simeq A$ , and so the algebraic ordering of ${\\mathfrak {N}}$ coincides with that induced by the total preordering $\\preceq $ .", "The monoid ${\\mathfrak {N}}$ generates an abelian group ${\\mathfrak {A}}$ , whose elements can be identified with the equivalence classes of the differences ${\\mathfrak {n}}(A)-{\\mathfrak {n}}(B)$ modulo the equivalence $\\ \\ \\ {\\mathfrak {n}}(A)-{\\mathfrak {n}}(B)\\approx {\\mathfrak {n}}(A^{\\prime })-{\\mathfrak {n}}(B^{\\prime })\\ \\mbox{$\\Longleftrightarrow $}\\ \\ {\\mathfrak {n}}(A^{\\prime })+{\\mathfrak {n}}(B)={\\mathfrak {n}}(A)+{\\mathfrak {n}}(B^{\\prime }).$ Clearly one has $\\ {\\mathfrak {n}}(A)-{\\mathfrak {n}}(B)> 0$ in
${\\mathfrak {A}}$ if $\\exists C\\ne \\emptyset \\ ({\\mathfrak {n}}(A)={\\mathfrak {n}}(B)+{\\mathfrak {n}}(C))$ , and the reverse implication holds if and only if $\\,{\\mathfrak {N}}$ is semisubtractive.", "$\\Box $ The fact that the preordering $\\prec $ on sets induces the algebraic total ordering on ${\\mathfrak {N}}$ yields the “most wanted Subtraction Principle\" of [2] $\\ {\\sf {(Diff)}}~~~~~~~~~~ A\\preceq B\\ \\mbox{$\\Longleftrightarrow $}\\ \\exists C\\ \\big (C\\cap (A\\cup B)=\\emptyset ,\\ (C\\cup A)\\simeq B\\big ).~~~~~~~~~~~~~~~~~~$ Clearly the equivalence class of the set $C$ is uniquely determined, and in the group ${\\mathfrak {A}}$ every element has the form $\\pm {\\mathfrak {n}}(C)$ with ${\\mathfrak {n}}(C)\\in {\\mathfrak {N}}$ .", "The problem of the relative consistency of (EP) with (AP) (thus yielding the Subtraction Principle) has been posed in several papers dealing with Aristotelian notions of size for sets (see e.g.", "[2], [3]).", "However a positive answer has been obtained, up to now, only for countable sets in [10], [7], [14], thanks to the consistency of selective or quasiselective ultrafilters on ${\\mathbb {N}}$ .", "The consistency of the Subtraction Principle with both (E5) and (AP) for sets of arbitrary cardinality (which is equivalent to the existence of Euclidean ultrafilters, as proved in [11]) is in fact the main result of this paper, but leaves open the consistency problem for the conjunction of full (EP) and (AP).", "Remark 1.3 Let us call weakly additive a comparison theory satisfying the condition (E2), but not necessarily (E3).", "It is well known that any Cantorian theory is weakly additive; in fact, the corresponding set of magnitudines ${\\mathfrak {M}}=\\mathbb {{\\mathbb {W}}}/\\simeq \\,$   can be identified with a set of cardinal numbers.", "So the corresponding sum of infinite numbers is trivialized by the general equality ${\\mathfrak {a}}+{\\mathfrak {b}}= \\mbox{\\rm max}\\;\\lbrace {\\mathfrak {a}},{\\mathfrak {b}}\\rbrace ,$ and cannot admit an inverse operation.", "Actually, the very failure of Euclid's principle has been taken as the definition of infinity by Dedekind." ], [ "Multiplication of numerosities", "In classical mathematics, only homogeneous magnitudes are comparable, and geometric figures having different dimensions are never compared, so a multiplicative version of Euclid's second common notion ...
if equals be multiplied by equals, the products are equal was not considered, in the presence of different “dimensions”.", "On the other hand, in modern mathematics a single set of “numbers\", the real numbers ${\\,\\mathbb {R}}$ , is used as a common scale for measuring the size of figures of any dimension.", "In a general set theoretic context it seems natural to consider abstract sets as homogeneous mathematical objects, without distinctions based on dimension, and Georg Cantor introduced his theory of cardinal numbers in order to give a measure of size of arbitrary sets.", "In the same vein, we introduce the notion of numerosity, aiming to provide a general Euclidean measure of size for sets, more adherent to the classical conception of magnitudo.", "A satisfying arithmetic of numerosities needs a product (and a corresponding unit), and we adhere to the natural Cantorian choice of introducing a product through Cartesian products,  CAVEAT: the Cartesian product is optimal when any two sets $A, B$ are multipliable in the sense that their Cartesian product is disjoint from their union, but when entire transitive universes like $V_{\\kappa }, H({\\kappa })$ , or $L$ are considered, the usual set-theoretic coding of pairs makes it untenable (e.g.", "already $V_{\\omega }\\times \\lbrace x\\rbrace \\subset V_{\\omega }$ for any $x\\in V_{\\omega }$ ), hence, as already done for addition, in these cases one should assume the existence of suitable multipliable copies of any set.", "and taking singletons as unitary.", "Although the Cartesian product is neither commutative nor associative stricto sensu, nevertheless the corresponding natural transformations have been taken among the congruences in the set ${\\mathfrak {C}}(W)$ , hence are numerosity preserving.", "Moreover, it seems natural that multiplication by “suitable\"  As remarked above, not all singletons may be “suitable\" for a Euclidean theory.", "singletons be such a “congruence” to be put into ${\\mathfrak {G}}(W)$ , so as to have at disposal disjoint equinumerous copies to be used in summing and multiplying numerosities.", "In doing so, each product $A\\times \\lbrace b\\rbrace ,\\ b\\in B$ may be viewed as a disjoint equinumerous copy of $A$ , thus making their (disjoint) union $A\\times B$ the sum of “$B$ -many copies of $A$ \", in accord with the arithmetic interpretation of multiplication." 
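As a small illustration of the arithmetic interpretation just described, the following sketch (assuming finite sets, where numerosity is simply cardinality) checks that $A\\times B$ decomposes into the pairwise disjoint copies $A\\times \\lbrace b\\rbrace $ , $b\\in B$ , each equinumerous with $A$ , so that the product of numerosities is realized by the Cartesian product.

```python
# Finite-case illustration of the product rule: A x B is the disjoint union of
# "B-many copies of A", so n(A x B) = n(A) * n(B) when sizes are counted by cardinality.
A = {"a1", "a2", "a3"}
B = {10, 20}

copies = {b: {(a, b) for a in A} for b in B}      # the copies A x {b}, b in B

assert all(len(copies[b]) == len(A) for b in B)                       # each copy ~ A
assert all(copies[b].isdisjoint(copies[c]) for b in B for c in B if b != c)

product = set().union(*copies.values())
assert product == {(a, b) for a in A for b in B}
assert len(product) == len(A) * len(B)
print("n(A x B) =", len(product), "=", len(A), "*", len(B))
```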
], [ " Aristotelian and Euclidean numerosity theories", "The preceding discussion leads to the following definition Definition 1.3 An Aristotelian comparison theory $\\,( \\mathbb {W},\\preceq ) $ is a numerosity  in the sequel, for sake of brevity, we often omit the specification `Aristotelian': actually, the principles $(\\mathsf {PP}) $ and (UP) together imply (E0), otherwise all sets are null.", "theory if, for all $A,B,C\\in {\\mathbb {W}},$ with $C\\ne \\emptyset $ , and all $w\\in W$ $(\\mathsf {PP}) $ $((A\\cup B)\\times C) \\,\\cap (A\\cup B\\cup C) =\\emptyset \\ \\ \\mbox{$\\Longrightarrow $}\\ \\big (A\\preceq B \\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\ A\\times C\\preceq B\\times C\\big )$ ; (UP) $\\ A\\simeq (A\\times \\lbrace w\\rbrace )$ for all $w$ such that $(A\\times \\lbrace w\\rbrace )\\cap A=\\emptyset $ .", "The numerosity is Euclidean if the full principle (EP) holds.", "The above Principles provide the set of numerosities ${\\mathfrak {N}}=\\mathbb {W}/\\simeq $ with the best algebraic properties, namely Theorem 1.4 Let $\\left\\lbrace \\mathbb {W},\\preceq \\right\\rbrace $ be a numerosity.", "Then the set of numerosities ${\\mathfrak {N}}$ has a natural structure of ordered semiring, where addition corresponds to disjoint union and multiplication to Cartesian product.", "Therefore there exist an ordered ring ${\\mathfrak {A}}$ and an embedding ${\\mathfrak {n}}:\\,\\mathbb {W}\\rightarrow {\\mathfrak {A}}^{\\ge 0}$ such that, for all $A,B\\in {\\mathbb {W}}$ , ${\\mathfrak {n}}(A)+{\\mathfrak {n}}(B)={\\mathfrak {n}}(A\\cup B)+{\\mathfrak {n}}(A\\cap B),\\ {\\mathfrak {n}}(A\\times B)={\\mathfrak {n}}(A)\\cdot {\\mathfrak {n}}(B),\\ A\\!\\subset \\!", "B \\mbox{$\\ \\Rightarrow \\ $}{\\mathfrak {n}}(A)\\!\\!<\\!", "{\\mathfrak {n}}(B)$ In particular the five Euclid's Common Notions (E1-5) are satisfied, and all finite sets receive their number of elements as numerosity, so ${\\mathfrak {N}}$ contains as initial segment an isomorphic copy of the natural numbers ${\\mathbb {N}}$ .", "Proof.", "The proof is close to that of Theorem 4.2 in [7]: here axiom (CP) makes multiplication commutative and associative, Principles (AP), $(\\mathsf {PP}) $ and (UP) are the Axioms (E1),(E4) and (E3) of [7], and together imply (E0) (unless the numerosity is trivially 0), hence also (E5) , while (E2) of [7]  Actually (E2) of [7] is equivalent to the proper subset property of Subsection REF , which is even stronger than (EP).", "can be replaced here by totality of the preordering $\\preceq $ .", "We have already proved Theorem REF that $({\\mathfrak {N}},+,0,\\le )$ is a commutative cancellative zerosumfree semisubtractive monoid, with the algebraic ordering $\\le $ possibly weaker than that induced by the set-theoretic preordering $\\preceq $ .", "Now also $({\\mathfrak {N}},\\cdot ,1)$ is a commutative and associative monoid, that is distributive w.r.t.", "$+$ , annihilated by 0, cancellative (because the Cartesian product of nonempty sets is nonempty), and has multiplicative unit $1\\!={\\mathfrak {n}}(\\lbrace w\\rbrace )$ by (UP).", "Finally, the abelian group ${\\mathfrak {A}}$ defined in the proof of Theorem REF becomes an ordered ring by simply defining ${\\mathfrak {n}}(A)\\cdot {\\mathfrak {n}}(B)={\\mathfrak {n}}(A\\times B)$ .", "Clearly there exists a unique numerosity where ${\\mathbb {W}}$ is the family of all finite sets, namely that given by the number of elements, so the corresponding set of numerosities ${\\mathfrak {N}}$ is an isomorphic copy of the natural 
numbers ${\\mathbb {N}}$ , which is an initial segment of every semiring of numerosities.", "$\\Box $" ], [ "Numerosity theories for “Punktmengen\"", "In this section we specify the notion of (Aristotelian or Euclidean) numerosity for “Punktmengen\" (finitary point-sets) over a “line\" ${\\mathbb {L}}$ , i.e.", "subsets $A\\subseteq \\bigcup _{n\\in {\\mathbb {N}}} {\\mathbb {L}}^n$ such that $\\lbrace a\\in A \\mid supp(a)=i\\rbrace $ is finite for all $i\\in [{\\mathbb {L}}]^{<{\\omega }}$ .", "Actually, in a general set-theoretic context, there are no “geometric\" or “analytic\" properties to be considered: the sole relevant characteristic of the line ${\\mathbb {L}}$ remains cardinality, so a possible choice seems to be simply identifying ${\\mathbb {L}}$ with its cardinal ${\\kappa }$ , thus obtaining the fringe benefit that no pair of ordinals is an ordinal, and Cartesian products may be freely used.", "In any case it seems convenient to assume that the line ${\\mathbb {L}}$ is disjoint from its square ${\\mathbb {L}}^2$ .", "Building on the preceding discussion, we pose the following definition Definition 2.1 ${}$ $({\\mathbb {W}},\\preceq )$ is a numerosity theory for “Punktmengen\" over ${\\mathbb {L}}$ if ${\\mathbb {W}}\\subseteq {\\mathcal {P}}(\\bigcup _{n\\in {\\mathbb {N}}} {\\mathbb {L}}^n)$ is the collection of all finitary point sets over the line ${\\mathbb {L}}$ ; $\\preceq $ is a total preordering on ${\\mathbb {W}}$ , and $\\simeq $ the equivalence generated by $\\preceq $ ; the following conditions are satisfied for all $A,B,C\\in {\\mathbb {W}}$ : $(\\mathsf {{AP}})$ $A\\simeq B\\ \\Longleftrightarrow \\ A\\setminus B \\simeq B\\setminus A;$ $(\\mathsf {PP}) $ $A\\simeq B \\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\ A\\times C\\simeq B\\times C$ (for all $C\\ne \\emptyset $ ); (UP) $A\\simeq A\\times \\lbrace w\\rbrace $ for all $w\\in W=\\bigcup {\\mathbb {W}}$ ; (CP) $\\tau [A]\\simeq A$ for all $\\tau \\in {\\mathfrak {C}}({\\mathbb {L}}).$  Recall that ${\\mathfrak {C}}({\\mathbb {L}})$ is the set of all support preserving bijections, see Subsect.", "REF The numerosity $({\\mathbb {W}},\\preceq )$ is Euclidean  Remark that $(\\mathsf {PP}) $ and (UP) together imply (E0), so $({\\mathbb {W}},\\preceq )$ is always Aristotelian.", "if it satisfies the Euclidean principle $~~(\\mathsf {{EP}})\\ A\\prec B\\ \\mbox{$\\Longleftrightarrow $}\\ \\exists B^{\\prime }(A\\subset B^{\\prime }\\simeq B),$ with strict inclusion and preordering.", "The idea that global properties of sets might be tested “locally” suggests the following Definition 2.2 Let $(\\mathbb {W},\\preceq ) $ be a numerosity for “Punktmengen\" over ${\\mathbb {L}}$ , let $ {\\mathbb {I}}=[{\\mathbb {L}}]^{<{\\omega }}$ be the set of all finite subsets of ${\\mathbb {L}}$ , and for $\\ A\\in {\\mathbb {W}}$ and $ \\ i\\in {\\mathbb {I}}$ , let $A_i=\\lbrace a\\in A\\!\\mid \\!", "supp(a)\\subseteq i\\rbrace .$ The counting function $ \\Phi :{\\mathbb {W}}\\rightarrow {\\mathbb {N}}^{\\mathbb {I}}$ is given by $\\Phi (A)=\\langle |A_i|\\mid i\\in {\\mathbb {I}}\\rangle $ .", "$(\\mathbb {W},\\preceq ) $ is finitely approximable if, for all $A,B\\in {\\mathbb {W}}$ , $~\\forall i\\in {\\mathbb {I}}\\,\\,(|A_i|\\le |B_i|)\\ \\mbox{$\\Longrightarrow $}\\ A\\preceq B.$ The following lemma shows the “algebraic character\" of finitely approximable numerosities.", "Lemma 2.3 The set of all differences $\\Phi (A)-\\Phi (B)$ , with $A,B\\in {\\mathbb {W}}$ , covers the whole ring ${\\mathbb {Z}}^{{\\mathbb {I}}}$ .", "Proof.", "Wellorder ${\\mathbb {L}}$ in
type ${\\kappa }$ and then the set ${\\mathbb {I}}$ of all finite subsets of ${\\mathbb {L}}$ according to $\\ i<j\\ \\mbox{$\\Leftrightarrow $}\\ \\mbox{\\rm max}\\;(i\\,\\Delta \\,j)\\in j$ (hence again in type ${\\kappa }$ ): this ordering is such that all proper subsets of any $i_{\\alpha }\\in {\\mathbb {I}}$ appear at stages ${\\beta }<{\\alpha }$ .", "Given ${\\mathfrak {z}}\\in {\\mathbb {Z}}^{\\mathbb {I}}$ , define inductively on ${\\alpha }$ increasing sequences of finite sets $A^{({\\alpha })}\\subseteq A,\\ B^{({\\alpha })}\\subseteq B$ whose support is exactly $i_{\\alpha }$ , in such a way that $|{A_i}|-|{B_i}|$ is the $i^{th}$ component of ${\\mathfrak {z}}$ .", "Assuming this for all ${\\beta }<{\\alpha }$ , pick a number of tuples whose support is exactly $i_{\\alpha }$ and put them either in $A^{({\\alpha })}$ or in $B^{({\\alpha })}$ , so as to adjust the value of $|A_{i_{\\alpha }}|-|B_{i_{\\alpha }}|$ to be $z_{i_{\\alpha }}$ .", "These new elements cannot have been taken before, because the elements of any $A_j, B_j$ with $j$ preceding $i$ cannot have support $\\supseteq i$ , hence the ${\\alpha }^{th}$ step can be done, because of the inclusion-exclusion principle: $|A_{i}|=\\sum _{j\\subset i}(-1)^{|i\\setminus j|}\\left( \\begin{array}{c} |i| \\\\ |j| \\end{array} \\right)|A_{j}|$ . $\\Box $ Following [12], call Euclidean a fine ultrafilter  Recall that a filter on ${\\mathbb {I}}$ is fine if it contains all “cones\" $C(j)=\\lbrace i\\in {\\mathbb {I}}\\mid j\\subseteq i\\rbrace .$ $\\,{\\mathcal {U}}$ on ${\\mathbb {I}}$ if $\\forall \\psi \\in {\\mathbb {N}}^{\\mathbb {I}}\\,\\exists U_\\psi \\in {\\mathcal {U}}\\,\\forall i,j\\in U_\\psi \\ \\big (i\\subset j\\ \\mbox{$\\Longrightarrow $}\\ \\psi (i)\\le \\psi (j)\\big ).$ Finite approximability allows for a “concrete\" strengthening of Theorem REF , leading naturally to hypernatural numbers as numerosities, namely Theorem 2.1 There is a biunique correspondence between finitely approximable numerosities $(\\mathbb {W},\\preceq ) $ and fine ultrafilters $\\,{\\mathcal {U}}$ on ${\\mathbb {I}}$ .", "If $\\,{\\mathcal {U}}$ corresponds to $(\\mathbb {W},\\preceq ) $ in this correspondence, then $ (\\ref {ult})~~~~~~~~~~~\\forall A,B\\in {\\mathbb {W}}\\!\\ \\big ( A\\preceq B\\ \\mbox{$\\Longleftrightarrow $}\\ \\lbrace i\\in {\\mathbb {I}}\\mid |A_i|\\le |B_i|\\rbrace \\in {\\mathcal {U}}\\big ),~~~~~~~~~~~~$ and there is an isomorphic embedding ${\\varphi }$ of the semiring of numerosities ${\\mathfrak {N}}\\!=\\!", "{\\mathbb {W}}/\\!\\!\\simeq $ into the ultrapower ${\\mathbb {N}}^{{\\mathbb {I}}}_{\\; {\\mathcal {U}}}$ that makes the following diagram commute: [commutative diagram $(*)$ with maps $\\Phi :{\\mathbb {W}}\\rightarrow {\\mathbb {N}}^{\\mathbb {I}}\\subset {\\mathbb {Z}}^{\\mathbb {I}}$ , $\\pi _{\\mathcal {U}}:{\\mathbb {N}}^{\\mathbb {I}}\\rightarrow {\\mathbb {N}}^{{\\mathbb {I}}}_{\\; {\\mathcal {U}}}\\subset {\\mathbb {Z}}^{{\\mathbb {I}}}_{\\; {\\mathcal {U}}}$ , ${\\mathfrak {n}}:{\\mathbb {W}}\\rightarrow {\\mathfrak {N}}$ , ${\\varphi }:{\\mathfrak {N}}\\rightarrow {\\mathbb {N}}^{{\\mathbb {I}}}_{\\; {\\mathcal {U}}}$ , commuting as ${\\varphi }\\circ {\\mathfrak {n}}=\\pi _{\\mathcal {U}}\\circ \\Phi $ ] (where $\\Phi $ maps any $A\\in {\\mathbb {W}}$ to its counting function $ \\Phi ({A}):{\\mathbb {I}}\\rightarrow {\\mathbb {N}}$ ).", "Moreover the set of congruences can be taken to be ${\\mathfrak {C}}_{\\mathcal {U}}({\\mathbb {W}})=\\lbrace \\tau \\mid \\exists U\\in {\\mathcal {U}}\\,\\forall i\\in U\\,\\tau [i]=i\\rbrace .$ Finally the numerosity is Euclidean if and only if the ultrafilter is Euclidean.", "Proof.", "Given a fine ultrafilter $\\,{\\mathcal {U}}$ on ${\\mathbb {I}}$ , define $\\preceq $ by $(\\ref {ult})$ .", "Then (CP) holds because congruent sets
share the same counting functions, (UP) holds taking $\\ d=supp(w)$ and $\\ U=C(d)\\in {\\mathcal {U}}$ .", "(AP) holds because $|(A\\cup B)_i|=|A_i\\cup B_i|=|A_i|+|B_i|$ , if $(A\\cap B)=\\emptyset $ , and so $|(A\\setminus B)_i|+|(A\\cap B)_i|=|A_i|$ .", "$(\\mathsf {PP}) $ holds because $supp(a,b)=supp(a)\\cup supp(b)$ , hence ${~~~~~~~~~~~}(A\\times B)_i=\\lbrace (a,b)\\mid supp(a)\\cup supp(b)\\subseteq i\\rbrace =A_i\\times B_i$ .", "Clearly these equalities continue to hold by passing to the ultrapower modulo the fine ultrafilter $\\,{\\mathcal {U}}$ , hence $(\\mathbb {W},\\preceq ) $ is finitely approximable, and the unique map ${\\varphi }:{\\mathfrak {n}}(A)\\mapsto \\pi _{\\mathcal {U}}(\\Phi (A))$ is well-defined and preserves sums and products.", "Finally, the preordering $\\preceq $ is total in any case, because ${\\mathcal {U}}$ is ultra.", "Moreover, if $\\,{\\mathcal {U}}$ is Euclidean and $A\\preceq B$ , i.e.", "the difference $\\Phi (B)-\\Phi (A)$ is nonnegative on a set $U\\in {\\mathcal {U}}$ , or equivalently is equal on $U$ to some $\\psi \\in {{\\mathbb {N}}^{\\mathbb {I}}}$ , then $\\pi _{\\mathcal {U}}(\\psi )\\in {\\mathbb {N}}^{{\\mathbb {I}}}_{\\; {\\mathcal {U}}}$ and there exists $C$ such that $\\pi _{\\mathcal {U}}(\\psi )={\\varphi }({\\mathfrak {n}}(C))$ , equivalently ${\\mathfrak {n}}(A)+{\\mathfrak {n}}(C)={\\mathfrak {n}}(B)$ , so (Diff) holds.", "Conversely, given $(\\mathbb {W},\\preceq ) $ finitely approximable, the family of sets ${\\mathcal {F}}=\\big \\lbrace U_{AB}= \\lbrace i\\in {\\mathbb {I}}\\mid |A_i|=|B_i|\\rbrace \\mid A\\simeq B\\big \\rbrace $ has the FIP, so it generates a filter, which is fine because $C(\\lbrace w,v\\rbrace )=U_{\\lbrace w\\rbrace \\lbrace v\\rbrace }$ .", "The ring ${\\mathfrak {A}}$ generated by $\\Phi [{\\mathbb {W}}]$ is ${{\\mathbb {Z}}^{\\mathbb {I}}}$ , by Lemma REF , and the total preordering $\\preceq $ induces a total ordering on ${\\mathbb {Z}}^{{\\mathbb {I}}}_{\\; {\\mathcal {U}}}$ , by $(\\ref {ult})$ , hence $\\,{\\mathcal {U}}$ is ultra, and the map ${\\varphi }$ is injective and preserves sums, products, and ordering of ${\\mathfrak {N}}$ .", "Moreover $\\tau \\in {\\mathfrak {C}}_{\\mathcal {U}}({\\mathbb {W}})$ trivially implies $\\lbrace i\\in {\\mathbb {I}}\\mid |\\tau [A]_i|=|A_i|\\rbrace \\in {\\mathcal {U}}$ .", "Finally (EP) implies that the difference of two counting functions, when positive on a set in $\\,U$ , is equivalent modulo $\\,{\\mathcal {U}}$ to some counting function, hence the set ${\\mathfrak {N}}$ of all numerosities is mapped by ${\\varphi }$ onto the ultrapower ${\\mathbb {N}}^{{\\mathbb {I}}}_{\\; {\\mathcal {U}}}$ .", "$\\Box $ Therefore finitely approximable numerosities exist for lines ${\\mathbb {L}}$ of arbitrary cardinality, and Euclidean finitely approximable numerosities exist for ${\\mathbb {L}}$ if and only if there are Euclidean ultrafilters on ${\\mathbb {I}}$ .", "We are left with the question of the existence of Euclidean ultrafilters.", "The paper [15] studies the partition property $\\ [A]^{<\\omega }\\rightarrow (\\omega ,{cofin})^2_\\subset $ affirming that any 2-partition $G:[[A]^{<\\omega }]^2\\rightarrow \\lbrace 0,1\\rbrace $ of the pairs of finite subsets of $A$ has a $\\subset $ -cofinal homogeneous set $H\\subseteq [A]^{<{\\omega }}$ .", "In [17] its validity for $|A|={\\aleph _{1}}$ is established, but the problem for larger sets is left open.", "The recent paper [12] introduces the similar “unbalanced\" property $\\ [A]^{<\\omega }\\rightarrow (\\omega ,{cofin})^2_\\subset $ affirming
that any 2-partition $G:[[A]^{<\\omega }]^2\\rightarrow \\lbrace 0,1\\rbrace $ of the pairs of finite subsets of $A$ either admits a 0-chain (i.e.", "a $\\subset $ -increasing sequence $a_n\\in [A]^{<{\\omega }}$ with $G(a_n,a_{n+1})=0)$ , or it has a $\\subset $ -cofinal homogeneous set $H\\subseteq [A]^{<{\\omega }}$ (hence necessarily $G(a,b)=1$ for all $a,b\\in H,\\ a\\subset b).$ This partition property is all that is needed in order to have Euclidean ultrafilters, hence Euclidean numerosities, namely Lemma 2.4 ${}$ (see Lemma $1.4$ of [11]) If $\\ {\\mathbb {I}}\\rightarrow ({\\omega },cofin)^2_\\subset $ holds, then there are Euclidean ultrafilters on ${\\mathbb {I}}$ .", "Proof.", "For $\\psi \\in {{\\mathbb {N}}^{\\mathbb {I}}}$ , define the partition $G_{\\psi }:[{\\mathbb {I}}]^2_\\subset \\rightarrow \\lbrace 0,1\\rbrace \\ \\ \\textrm {by}\\ \\ G_\\psi (i,j)={\\left\\lbrace \\begin{array}{ll}0 & \\text{if } \\psi (i)>\\psi (j), \\\\1 & \\text{otherwise}.\\end{array}\\right.", "}$ Given finitely many $\\psi _k\\in {{\\mathbb {N}}^{\\mathbb {I}}}$ , let $\\psi =\\prod _k\\psi _k$ : then $G_\\psi $ cannot admit 0-chains, so there is a $\\subset $ -cofinal 1-homogeneous set $H_\\psi $ , which is simultaneously 1-homogeneous for all $G_{\\psi _k}$ .", "Hence the family ${\\mathcal {H}}=\\lbrace H_\\psi \\mid \\psi \\in {{\\mathbb {N}}^{\\mathbb {I}}}\\rbrace $ has the FIP, and any fine ultrafilter ${\\mathcal {U}}$ on ${\\mathbb {I}}$ including ${\\mathcal {H}}$ is Euclidean.", "$\\Box $ Actually, the partition property $\\ [A]^{<\\omega }\\rightarrow (\\omega ,{cofin})^2_\\subset $ has been stated for all sets $A$ of any cardinality ${\\kappa }$ as the Main Theorem of [12], so the existence of Euclidean ultrafilters on ${\\mathbb {I}}$ , hence of Euclidean numerosities satisfying the subtraction property (Diff), is guaranteed for the family ${\\mathbb {W}}$ of all (finitary) point sets over lines ${\\mathbb {L}}$ of arbitrary cardinality." ], [ "The Weak Hume's Principle", "We remark that the Cantorian theory of cardinality and the Euclidean theory of numerosity might be reconciled by weakening the former, while slightly strengthening the latter.", "In fact, on the one hand, the Cantorian theory uses this form of Hume's Principle $\\textsf {(HP)}{~~~~~~~~~~~~~~~~~~~~~~~} A\\preceq B\\ \\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\ \\ \\exists f: A\\rightarrow B,\\, f\\, 1\\mbox{- to -}1.", "{~~~~~~~~~~~~~~~~~~~~~~~~~~}$ instead of Euclid's principle (EP) in defining the (weak) preordering $\\preceq $ of sets.", "Then the principles $\\textsf {(E2)}$ and $(\\mathsf {PP}) $ , (UP), (CP) become provable, while $\\textsf {(E3)}$ and $\\sf {(E5)} $ (hence also (AP) and (EP)) are refuted.", "On the other hand, perhaps the best way to view Aristotelian or Euclidean numerosities is to look at them as a refinement of Cantorian cardinality, able to separate sets that, although equipotent, have in fact really different sizes, in particular when they are proper subsets or supersets of one another.", "This conception amounts to “weakly Cantorianizing\" the numerosity theory by adding the sole “only if \" part of Hume's principle: This property is called “Half Cantor Principle\" in [2].", "If two sets are equinumerous, then there exists a biunique correspondence between them."
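Before turning to weakly Humean theories, here is a small computational sketch of the counting function $\\Phi $ of Definition 2.2 over a finite toy line (an illustration only; the line ${\\mathbb {L}}$ is in general infinite), verifying the two identities $|(A\\cup B)_i|=|A_i|+|B_i|$ (for disjoint $A,B$ ) and $|(A\\times B)_i|=|A_i|\\cdot |B_i|$ used in the proof of Theorem 2.1.

```python
from itertools import combinations

# Toy line L = {0,...,4}; point sets are sets of tuples over L; supp = set of components.
L = range(5)
I = [frozenset(s) for r in range(len(L) + 1) for s in combinations(L, r)]

def supp(t):
    return frozenset(t)

def slice_at(A, i):                    # A_i = {a in A : supp(a) is a subset of i}
    return {a for a in A if supp(a) <= i}

def Phi(A):                            # counting function i |-> |A_i|
    return {i: len(slice_at(A, i)) for i in I}

A = {(0,), (1, 2), (2, 1), (0, 3)}
B = {(4,), (3, 4, 4)}                  # disjoint from A
AxB = {(a, b) for a in A for b in B}   # supp((a, b)) = supp(a) | supp(b)

for i in I:
    assert len(slice_at(A | B, i)) == len(slice_at(A, i)) + len(slice_at(B, i))
    assert (len({(a, b) for (a, b) in AxB if supp(a) | supp(b) <= i})
            == len(slice_at(A, i)) * len(slice_at(B, i)))

phiA = Phi(A)
print("Phi(A) at i = {0,1,2,3}:", phiA[frozenset({0, 1, 2, 3})])
```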
], [ "Weakly Humean numerosity", "The idea of getting a “weakly Cantorian\" numerosity theory is realized by simply adding the right pointing arrow of (HP) (named Half Cantor Principle in [2]) to the principle (AP) namely Definition 3.1 An (Aristotelian or Euclidean) numerosity theory is weakly Humean if it satisfies the following ${\\textsf {(WHP)}}\\ \\ (Weak\\ Hume^{\\prime }s\\ Principle)~~~~~A\\preceq B\\ \\ \\mbox{$\\Longrightarrow $}\\ \\ \\exists f\\, 1\\mbox{- to -}1,\\, f: A\\rightarrow B.", "{~~~~}$ If one wants to maintain also the finite approximability, it amounts to require that the fine ultrafilter ${\\mathcal {U}}$ of Theorem REF contains all sets $Q^{<}_{AB}=\\lbrace i\\in {\\mathbb {I}}\\mid |A_i|< |B_i|\\rbrace ,\\ \\textrm {for }\\ |A|<|B|,$ and this family of sets may be contained in a fine ultrafilter on ${\\mathbb {I}}$ if and only if it enjoys the FIP together with the family of the cones $C(d), \\,d\\in {\\mathbb {I}}$ .", "Actually, this property is provable by an argument similar to that of Theorem 3.2 of [3].", "Lemma 3.2 Let $d\\in {\\mathbb {I}}$ and finitely many sets $\\ A^{st}\\in {\\mathbb {W}},$ for $ s\\!\\in \\!\\!", "S_t,\\ 1\\le t\\le n$ be given such that $S_t$ is finite for all $t$ , and $ |A^{st}|={\\kappa }_t,\\ {\\aleph _{0}}\\le {\\kappa }_t<{\\kappa }_u\\le {\\kappa }$ for $t<u$ .", "Then there exists $i\\in C(d)$ such that $|A^{st}_i|< |A^{ru}_i|$ for $ s\\in S_t, r\\in S_u, t< u$ .", "Proof.", "Suppose chosen such an $i_m$ good for all $t\\le m$ , and let $B_{m}=\\bigcup \\lbrace A^{st}\\,\\mid \\ t\\le m\\rbrace , \\ I_m=d\\,\\cup \\!\\bigcup _{b\\in B_m}\\!\\!supp(b),\\ \\ k_m=\\mbox{\\rm max}\\;\\lbrace |A^{st}_{i_m}|\\!\\mid \\!", "s\\in S_t, t\\le m\\rbrace \\!", ":$ then pick $k_{m+1}>k_m$ elements with supports not included in any $j\\in I_m$ in each set $A_{s\\,m+1}\\setminus B_m, \\ s\\in S_{m+1}$ , existing because $|B_m|=|I_m|={\\kappa }_m$ and $|A_{s\\,m+1}|>{\\kappa }_m.$ Let $i_{m+1}$ be the union of $i_m$ with the supports of the new elements: then $|A^{st}_{i_{m+1}}|=|A^{st}_{i_{m}}|\\le k_m$ for all $t\\le m$ , while $|A^{s\\,m+1}_{i_{m+1}}|\\ge k_{m+1}>k_m$ for $s\\in S_{m+1}$ .", "Proceeding in this way we come to a set $i=i_n\\in {\\mathbb {I}}$ belonging to $~~~~~~~~~~~~~~~~~~C(d)\\cap \\bigcap \\lbrace Q^<_{{A^{st}}{A^{ru}}}\\mid s\\in S_t, r\\in S_u, t<u\\rbrace .$ $\\Box $ So the family ${\\mathcal {Q}}=\\lbrace Q^{<}_{AB}\\mid |A|<|B|\\rbrace \\cup \\lbrace C(d)\\mid d\\in {\\mathbb {I}}\\rbrace $ has the FIP, and any ultrafilter $\\,{\\mathcal {U}}\\supseteq {\\mathcal {Q}}$ provides numerosities satisfying the weak Hume's principle.", "Thus we obtain the reasonable effect that the ordering of the Aristotelian numerosities refines the cardinal ordering on its universe.", "We conjecture that the family ${\\mathcal {Q}}$ shares the FIP also with the family ${\\mathcal {H}}$ of Lemma REF , so it might be included in an Euclidean ultrafilter, and the consistency of the weak Hume principle (WHP) with the difference property (Diff) will follow, but now this question remains open." 
], [ "Extending comparison to the whole universe $V$", "A simple way of extending an Aristotelian or Euclidean numerosity to the whole universe of sets $V$ could be obtained by coding the universe by ordinals, and associating to each set $X$ a set of ordinals $A_X$ in such a way that $A_X\\cup A_Y= A_{X\\cup Y}$ if $X\\cap Y=\\emptyset ;$ $A_X\\times A_Y\\simeq A_{X\\times Y}$ if $(X\\cup Y)\\cap ( X\\times Y)=\\emptyset =X\\cap Y.$ Recalling that finite set of ordinals are appropriately coded by the natural sum of the powers $2^{\\alpha }$ of its members, we put ${\\gamma }_\\emptyset =0,~~~~~{\\gamma }_{\\lbrace x_1,\\ldots ,x_n\\rbrace }=\\bigoplus _{i=1}^n2^{{\\gamma }_{x_i}},$ where ${\\gamma }_x$ is the ordinal coding the set $x$ , so, in particular, each hereditarily finite set in $V_\\omega $ receives its natural code in ${\\omega }$ .", "Now, in order to avoid clashings with finite sets, the codes of infinite sets have to be chosen additively indecomposable, i.e.", "pure powers of 2 (but avoiding the so called ${\\epsilon }$ -numers ${\\epsilon }=2^{\\epsilon }$ , which woud be confused with their singletons).", "Therefore we put ${\\gamma }_{\\alpha }=2^{{\\alpha }+1}$ for each infinite ordinal ${\\alpha }$ , and ${\\gamma }_x={\\omega }^{\\xi (x)}=2^{{\\omega }\\xi (x)}$ , where $\\xi $ picks an ordinal in $\\beth _{{\\alpha }+1}\\setminus \\beth _{\\alpha }$ for each infinite set $x\\in V_{{\\alpha }+1}\\setminus V_{{\\alpha }}$ .", "Then let $A_X=\\lbrace {\\gamma }_x\\mid x\\in X\\rbrace ,\\ \\ A_X*A_Y= A_{\\lbrace \\lbrace x,y\\rbrace \\mid x\\in X,\\ y\\in Y\\rbrace }=\\lbrace {\\gamma }_{\\lbrace x,y\\rbrace }\\mid x\\in X, y\\in Y\\rbrace .$ Then $A_X$ is a set of ordinals, $A_X\\cup A_Y= A_{X\\cup Y}$ if $X\\cap Y=\\emptyset ,$ and the “Russellian  Actually Bertrand Russell warmly suggested the use of this product, which is naturally commutative and associative.", "doubleton-product\" $A_X*A_Y$ might replace the Cartesian product $X\\times Y$ when $X\\cap Y\\cap \\lbrace \\lbrace x,y\\rbrace \\mid x\\in X,\\ y\\in Y\\rbrace =\\emptyset .$ Let ${\\mathbb {I}}$ be the class of all finite sets of ordinals, and define the counting function $f_X: X\\rightarrow {\\mathbb {Z}}^{\\mathbb {I}}$ by $f_X(i)=|(A_X)_i|=|A_X\\cap i|$ : then $|(A_X)_i|+{(A_Y)_i|= |(A_{X\\cup Y})_i} $ if $X\\cap Y=\\emptyset $ , and $|(A_{X})_i|\\cdot |(A_Y)_i|=|(A_X*A_Y)_i|$ if $X\\cap Y\\cap \\lbrace \\lbrace x,y\\rbrace \\mid x\\in X,\\ y\\in Y\\rbrace =\\emptyset $ .", "When considering, instead of the universal class $V$ , an initial segment $V_{\\kappa }$ with ${\\kappa }$ any inaccessible cardinal, then all works as in Theorem REF , giving a biunique correspondence between fine ultrafilters ${\\mathcal {U}}_{\\kappa }$ on ${\\mathbb {I}}_{\\kappa }=[{\\kappa }]^{<{\\omega }}$ and finitely approximable numerosities $(V_{\\kappa },\\prec )$ , and the semiring of numerosities would become the ultrapower ${\\mathfrak {N}}_{\\kappa }={\\mathbb {N}}^{{\\mathbb {I}}_{\\kappa }}_{\\; {\\mathcal {U}}_{\\kappa }}$ .", "But in the case of the whole universe $V$ , the coding function ${\\Gamma }:x\\mapsto {\\gamma }_x$ as well as ${\\mathbb {I}}$ and $Ord$ are proper classes, so one cannot operate in ZFC, but has to work in some theory of classes, like Gödel-Bernays theory $\\textsf {GB}$ (and assume global choice, which is stronger than Zermelo's $\\textsf {AC}$ ).", "Then one can proceed as in [2]: fix an unbounded increasing sequence of cardinals ${\\kappa }$ and “coherent\" fine ultrafilters ${\\mathcal 
{U}}_{\\kappa }$ on ${\\mathbb {I}}_{\\kappa }=[{\\kappa }]^{<{\\omega }}$ .", "I.e., if ${\\kappa }^{\\prime }$ is the successor of ${\\kappa }$ , then ${\\mathcal {U}}_{{\\kappa }^{\\prime }}$ induces on ${\\mathbb {I}}_{\\kappa }$ the equivalence $\\equiv _{{\\mathcal {U}}_{\\kappa }}$ ; at limit steps take the union: then the class ${\\mathfrak {N}}$ , direct limit of the ultrapowers ${\\mathbb {N}}^{{\\mathbb {I}}_{\\kappa }}_{\\; {\\mathcal {U}}_{\\kappa }}$ , is a proper class semiring of hyperintegers suitable for assigning a numerosity to all sets of the universe." ], [ "The Subset Property", "An interesting consequence of assuming $\\textsf {(WHP)}$ is the Subset Property ${\\sf {(SubP)}}~~~~~~~~~~~~~~~~~~~~~~~~ A\\prec B\\ \\ \\mbox{$\\Longleftrightarrow $}\\ \\ \\exists A^{\\prime } (A\\simeq A^{\\prime }\\subset B).~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ The set of numerosities is very large, having the same size as the universe $W$ : this is a necessary consequence of Euclid's Principle, since one can define strictly increasing chains of sets of arbitrary length.", "However, any set $A$ has only $2^{|A|}$ subsets, and so, by assuming the Subset Property $\\sf {(SubP)}$ , instead of defining the preordering (as we did here) through the superset property $\\textsf {(EP)}$ , we would obtain that the initial segment of numerosities generated by ${\\mathfrak {n}}(A)$ has size $2^{|A|}$ , contrary, e.g., to the large ultrapower models of [2], [3].", "This remark proves that the set of numerosities of sets of cardinality not exceeding ${\\kappa }$ is not forced a priori to have cardinality exceeding $2^{\\kappa }$ , independently of the size of the universe, by simply assuming the Weak Hume Principle (WHP)." ], [ "The power of numerosities", "According to Theorem REF , the power ${\\mathfrak {m}}^{{\\mathfrak {n}}}$ of infinite numerosities is always well-defined, since numerosities are positive nonstandard integers.", "By using finite approximations given by intersections with suitable finite sets the interesting relation $2^{{\\mathfrak {n}}(X)} = {\\mathfrak {n}}([X]^{<{\\omega }})$ has been obtained in [2].", "Similarly, the principle $\\mathsf {FAP} $ might be adapted to provide the natural general set theoretic interpretation of powers: ${\\mathfrak {m}}(Y)^{{\\mathfrak {n}}(X)} = {\\mathfrak {n}}(\\lbrace f:X\\rightarrow Y\\mid |f| <{\\aleph _{0}}\\rbrace ).$ E.g.", "one might assign to a finite function $f$ between sets in ${\\mathbb {W}}$ a “support\" equal to the union of the supports of the elements of domain and range of $f$ .", "The difficult problem of finding appropriately defined arithmetic operations that give instead the numerosity of the function sets $Y^X$ , or even only the full powersets ${\\mathcal {P}}(X)$ , requires a quite different approach, and the history of the same problem for cardinalities suggests that it could not be completely solved." ] ]
2212.05527
[ [ "Gaussian random projections of convex cones: approximate kinematic\n formulae and applications" ], [ "Abstract Understanding the stochastic behavior of random projections of geometric sets constitutes a fundamental problem in high dimension probability that finds wide applications in diverse fields.", "This paper provides a kinematic description for the behavior of Gaussian random projections of closed convex cones, in analogy to that of randomly rotated cones studied in [ALMT14].", "Formally, let $K$ be a closed convex cone in $\\mathbb{R}^n$, and $G\\in \\mathbb{R}^{m\\times n}$ be a Gaussian matrix with i.i.d.", "$\\mathcal{N}(0,1)$ entries.", "We show that $GK\\equiv \\{G\\mu: \\mu \\in K\\}$ behaves like a randomly rotated cone in $\\mathbb{R}^m$ with statistical dimension $\\min\\{\\delta(K),m\\}$, in the following kinematic sense: for any fixed closed convex cone $L$ in $\\mathbb{R}^m$, \\begin{align*} &\\delta(L)+\\delta(K)\\ll m\\, \\Rightarrow\\, L\\cap GK = \\{0\\} \\hbox{ with high probability},\\\\ &\\delta(L)+\\delta(K)\\gg m\\, \\Rightarrow\\, L\\cap GK \\neq \\{0\\} \\hbox{ with high probability}.", "\\end{align*} A similar kinematic description is obtained for $G^{-1}L\\equiv \\{\\mu \\in \\mathbb{R}^n: G\\mu \\in L\\}$.", "The practical usefulness and broad applicability of the prescribed approximate kinematic formulae are demonstrated in a number of distinct problems arising from statistical learning, mathematical programming and asymptotic geometric analysis.", "In particular, we prove (i) new phase transitions of the existence of cone constrained maximum likelihood estimators in logistic regression, (ii) new phase transitions of the cost optimum of deterministic conic programs with random constraints, and (iii) a local version of the Gaussian Dvoretzky-Milman theorem that describes almost deterministic, low-dimensional behaviors of subspace sections of randomly projected convex sets." 
], [ "Approximate kinematic formulae", "Let $m,n$ be two positive integers, $K$ be a closed convex cone in $\\mathbb {R}^n$ .", "We reserve the notation $G \\in \\mathbb {R}^{m\\times n}$ for a standard Gaussian matrix in that $G \\in \\mathbb {R}^{m\\times n} \\hbox{ contains i.i.d.", "$\\mathcal {N}(0,1)$ entries}.$ We will be interested in the stochastic behavior of the Gaussian random projection of $K\\subset \\mathbb {R}^m$ , defined as $GK\\equiv \\lbrace G \\mu : \\mu \\in K\\rbrace \\subset \\mathbb {R}^m.$ The study of the stochastic behavior of Gaussian random projections has a long history in the high dimensional probability literature; see e.g., [38], [33], [5], [1], [48] for textbook treatments on this topic.", "The lasting interest in the theory and idea of (Gaussian) random projections is also partly due to its wide applications in applied problems arising from signal processing, statistics and computational mathematics; the interested readers are referred to, e.g., [47], [3], [14], [15], [8], [32], [17], [16], [12] for a highly in-exhaustive, but diverse list of concrete applications.", "In typical applications related to our setting (REF ), the ambient dimension $n$ in which the convex cone $K$ lives is (much) larger than the projection dimension $m$ in which the random convex cone $GK$ resides.", "The central question of interest is therefore to understand how the stochastic geometry of the relatively low-dimensional random cone $GK$ can be described by that of the deterministic, but possibly high-dimensional cone $K$ .", "One particular easy case to understand the behavior of $GK$ is when $K$ is a linear subspace of $\\mathbb {R}^n$ .", "In this case, $GK$ is simply distributed as a uniform random subspace of $\\mathbb {R}^m$ with dimension $\\min \\lbrace \\dim (K),m\\rbrace $ .", "Clearly, both the notion of dimension and the prescribed exact distribution description are specific to $K$ being subspaces.", "The former issue is less severe: there is a natural generalization of the notion of dimension for linear subspaces to the so-called statistical dimension $\\delta (K)$ for general convex cones $K$ , formally defined as $\\delta (K)\\equiv \\operatorname{\\mathbb {E}}\\Vert \\Pi _K(g)\\Vert _{}^2,\\quad g\\sim \\mathcal {N}(0,I_n).$ Here $\\Vert \\cdot \\Vert _{}$ denotes the Euclidean norm in $\\mathbb {R}^n$ , and $\\Pi _K$ denotes the metric projection onto $K$ defined as $\\Pi _K(\\cdot )=\\operatornamewithlimits{arg\\,min\\,}_{\\mu \\in K}\\Vert \\cdot -\\mu \\Vert _{}$ .The readers are referred to Section below and [4] for more technical discussions on the notion of statistical dimension and other related results.", "The major difficulty, however, appears to be the lack of natural generalizations of the exact distributional descriptions of $GK$ from $K$ being subspaces to general convex cones.", "Nonetheless, the classical theory of Gaussian random projections gives useful hints on the stochastic behavior of $GK$ .", "For instance, when $\\delta (K)\\gg m$ , the Gaussian Dvoretzky-Milman Theorem (cf.", "[18], [34], [38], [1]) and the cone property of $K$ indicate that $GK$ is trivially the full space $\\mathbb {R}^m$ with high probability.", "On the other hand, when $m\\gg \\delta (K)$ , the Johnson-Lindenstrauss Embedding Theorem (cf.", "[29], [31]) suggests that $GK\\subset \\mathbb {R}^m$ is locally almost isomorphic to $K\\subset \\mathbb {R}^n$ .", "Combined with the rotation invariance of Gaussian random projections, it is then natural to conjecture that 
$&\\hbox{$GK$ behaves like a randomly rotated cone in $\\mathbb {R}^m$}\\nonumber \\\\&\\qquad \\hbox{with statistical dimension $\\min \\lbrace \\delta (K),m\\rbrace $}.$ The first goal of this paper is to formalize the heuristic (REF ) for an arbitrary closed convex cone $K$ , using the kinematic formulation in [4] that describes the behavior of randomly rotated cones.", "In particular, let $O_n$ be distributed according to the Haar measure on the orthogonal group $\\mathrm {O}(n)$ .", "[4] describes the kinematic behavior of $O_n K$ as follows: there exists some universal constant $C>0$ such that for any fixed closed convex cone $L\\subset \\mathbb {R}^n$ and $t\\ge 1$ , $\\sqrt{\\delta (K)}-\\sqrt{n-\\delta (L)}\\le -C\\sqrt{t}\\, &\\Rightarrow \\, \\operatorname{\\mathbb {P}}\\big (L\\cap O_n K = \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t},\\nonumber \\\\\\sqrt{\\delta (K)}-\\sqrt{n-\\delta (L)}\\ge C\\sqrt{t}\\, &\\Rightarrow \\, \\operatorname{\\mathbb {P}}\\big (L\\cap O_n K \\ne \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t}.$ The above results (REF ) are called approximate kinematic formulae in [4], as they serve as approximations for the exact kinematic formulae for $\\operatorname{\\mathbb {P}}\\big (L\\cap O_n K \\ne \\lbrace 0\\rbrace \\big )$ , the form of which can be found in (REF ) in Section REF below.", "Using the kinematic formulation (REF ), we may now give a formal description of (REF ) as follows.", "Theorem 1.1 Suppose that $K \\subset \\mathbb {R}^n$ and $L\\subset \\mathbb {R}^m$ are non-trivial closed convex cones.", "There exists some universal constant $C>0$ such that the following statements hold for $t\\ge 1$ .", "If $\\sqrt{\\delta (K)}-\\sqrt{m-\\delta (L)}\\le -C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (L\\cap GK = \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t}.$ If $\\sqrt{\\delta (K)}-\\sqrt{m-\\delta (L)}\\ge C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (L\\cap GK \\ne \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t}.$ Remarkably, Theorem REF provides a complete analogue to the approximate kinematic formulae (REF ) in its full generality, despite that exact formulae for the probability $\\operatorname{\\mathbb {P}}(L\\cap GK\\ne \\lbrace 0\\rbrace )$ are apparently unavailable (more technical comments on this can be found in Section REF below).", "Due to the apparent similar appearance to (REF ), we will call the results in Theorem REF approximate kinematic formulae for $\\operatorname{\\mathbb {P}}(L\\cap GK\\ne \\lbrace 0\\rbrace )$ .", "From a different angle, the main theme between (REF ) and Theorem REF studies the proximity of random orthogonal matrices to Gaussian random matrices in the sense that $\\operatorname{\\mathbb {P}}(L\\cap O_nK\\ne \\lbrace 0\\rbrace )\\approx \\operatorname{\\mathbb {P}}(L\\cap GK\\ne \\lbrace 0\\rbrace )$ , at least when the closed convex cones $K,L$ are of the same dimensionality.", "A separate line in the random matrix theory studies similar proximity phenomena for random orthogonal and Gaussian matrices in various other senses; the interested readers are referred to [28], [9], [46], [44], [30] and references therein for further details in this direction." 
], [ "Brief review of the conic integral geometry method in {{cite:33627fcbcfcf36234bddc1604cd5fa3e518c3dde}}", "Before detailing the proof techniques of Theorem REF , let us first review the method of proof in [4] for (REF ) with a similar appearance.", "The proof of (REF ) in [4] is based on the exact kinematic formula (cf.", "[42]): $\\operatorname{\\mathbb {P}}\\big (K\\cap O_n L\\ne \\lbrace 0\\rbrace \\big )=\\sum _{i=0}^n \\big (1+(-1)^{i+1}\\big )\\sum _{j=i}^n {V}_j(K)\\cdot {V}_{n+i-j}(L).$ Here $\\lbrace {V}_j(K)\\rbrace \\subset [0,1]$ 's are the so-called intrinsic volumes of the cone $K$ satisfying $\\sum _{j=0}^n {V}_j(K)=1$ ; see [4] for more background knowledge of the notion of intrinsic volumes.", "The key step in [4] to derive (REF ) from the exact formula (REF ) is to prove `concentration' of $\\lbrace {V}_j(K)\\rbrace $ viewed as a probability measure on $\\lbrace 0,\\ldots ,n\\rbrace $ .", "To illustrate the main idea, suppose further $L$ is a subspace of dimension $\\ell $ .", "Then ${V}_j(L)=\\mathbf {1}_{j=\\ell }$ , and (REF ) reduces to the Crofton formula: $\\operatorname{\\mathbb {P}}\\big (K\\cap O_n L\\ne \\lbrace 0\\rbrace \\big )= 2\\sum _{\\begin{array}{c}j=n-\\ell +1,\\\\ j-(n-\\ell +1)\\textrm { is even}\\end{array}}^n {V}_j(K)\\equiv 2{H}_{n-\\ell +1}(K).$ As a consequence of (i) the concentration of $\\lbrace {V}_j(K)\\rbrace $ around its mean $\\delta (K)$ (cf.", "[4], [37]), and (ii) the interlacing property $\\sum _{j=n-\\ell +1}^n {V}_j(K) \\le 2{H}_{n-\\ell +1}(K)\\le \\sum _{j=n-\\ell }^n {V}_j(K)$ (cf.", "[4]), the right hand side of the above display (REF ) is either close to 0 or 1, according to whether $\\ell +\\delta (K)\\ll n$ or $\\ell +\\delta (K)\\gg n$ .", "As have been clear now, the conic integral geometry approach of [4] to prove (REF ) relies heavily on the existence of the exact formula (REF ); see also [22].", "On the other hand, such exact kinematic formulae are proved by exploiting, in an essential way, the fact that $O_n$ induces the unique invariant probability (Haar) measure on $\\mathrm {O}(n)$ (cf.", "[42]), and therefore unfortunately do not admit a direct extension to the probability $\\operatorname{\\mathbb {P}}(L\\cap GK\\ne \\lbrace 0\\rbrace )$ of interest in Theorem REF , where $L,K$ are cones of possibly different dimensions.", "See also [4] for some related discussions." 
], [ "Our method of proof", "Here we take a different, random process approach to prove Theorem REF .", "Our method of proof is based on a two-sided version of Gordon's Gaussian min-max comparison inequality [23], [24], known as the convex Gaussian min-max theorem in the statistical learning and information theory literature, cf.", "[41], [43]; see Theorem REF for a formal statement.", "In essence, the convex Gaussian min-max theorem compares the cost optimum of a min-max optimization problem involving the standard Gaussian matrix $G\\in \\mathbb {R}^{m\\times n}$ with i.i.d.", "$\\mathcal {N}(0,1)$ entries, to that of a reduced Gordon's min-max problem involving two independent Gaussian vectors $g\\sim \\mathcal {N}(0,I_n),h\\sim \\mathcal {N}(0,I_m)$ .", "In our setting, this comparison principle is used to produce both probabilistic upper and lower estimates for the support function of suitable versions of $L\\cap G K$ .", "Recall for a generic closed convex set $K_0\\subset \\mathbb {R}^n$ , its support function $\\mathsf {h}_{K_0}:\\mathbb {R}^n\\rightarrow \\mathbb {R}$ is defined pointwise as $\\mathsf {h}_{K_0}(x)\\equiv \\max _{\\mu \\in K_0}\\langle x,\\mu \\rangle ,\\quad x \\in \\mathbb {R}^n.$ Basic properties of the support function can be found in, e.g., [40].", "With $B_n,\\partial B_n$ denoting the unit ball and sphere in $\\mathbb {R}^n$ , and $K_n\\equiv K\\cap B_n$ , the (uniform) upper estimate takes the form $&\\sup _{x \\in L\\cap \\partial B_m}\\mathsf {h}_{L\\cap G K_n}(x)\\stackrel{\\operatorname{\\mathbb {P}}}{\\le } \\sup _{ \\begin{array}{c}x \\in L\\cap \\partial B_m,\\\\ \\mu \\in K_n\\end{array} } \\sup _{\\begin{array}{c}v\\in L,\\\\ \\langle g,\\mu \\rangle \\ge \\Vert \\,\\Vert \\mu \\Vert _{}h-v\\Vert _{}\\end{array}}\\langle x,v\\rangle .$ The lower estimate, which holds for individual $x \\in L\\cap \\partial B_m$ 's, takes a different form: there exists some universal constant $c>0$ , $\\mathsf {h}_{L\\cap G K_n}(x)\\stackrel{\\operatorname{\\mathbb {P}}}{\\ge } c\\cdot \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}.$ See the proofs of Propositions REF and REF for precise versions of (REF )-(REF ).", "The relevance of the estimates (REF ) and (REF ) to Theorem REF can now be easily seen: If $\\delta (K)+\\delta (L)\\ll m$ , then with high probability the feasible set for the supremum over $v$ in (REF ) is $\\lbrace 0\\rbrace $ , and therefore $\\sup _{x \\in L\\cap \\partial B_m}\\mathsf {h}_{L\\cap GK_n}(x)=0$ with high probability.", "This is equivalent to $L\\cap GK=\\lbrace 0\\rbrace $ .", "If $\\delta (K)+\\delta (L)\\gg m$ , then for each $x \\in L\\cap \\partial B_m$ , with high probability $\\mathsf {h}_{L\\cap GK_n}(x)>0$ , which implies $L\\cap GK\\ne \\lbrace 0\\rbrace $ .", "It should be noted that the upper and lower estimates in (REF )-(REF ) do not take exactly the same form (ignoring the supremum over $x \\in L\\cap \\partial B_m$ ), mainly due to generality of the problem setup with arbitrary closed convex cones $K,L$ .", "In fact, the two estimates (REF )-(REF ) are proved by exploiting different max-min representations of the support function (or its supremum version).", "Interestingly, different forms of the estimates (REF )-(REF ) lead to complementary quantitative conclusions required in Theorem REF .", "This stands in sharp contrast to most applications of the convex Gaussian min-max 
theorem in the literature, cf.", "[45], [43], [36], [10], [35], [26], [27], [25], that aim at giving exact characterizations for the cost optimum of certain min-max optimization problems involving a standard Gaussian matrix $G$ ." ], [ "Connections to Gordon's Escape Theorem", "Incidentally, the random process method is known to produce effective probabilistic upper estimates in a special case of (REF ) where $L$ is further taken to be a subspace, now known as Gordon's Escape Theorem [24].", "Interestingly, our Theorem REF immediately implies an optimal, two-sided version of Gordon's Escape Theorem for spherically convex sets, by setting $K$ (note that the notation of $K,L$ is flipped) therein to be a subspace.", "We formally record this result below.", "Corollary 1.2 Let $1\\le \\ell \\le n$ be an integer and $L\\subset \\mathbb {R}^n$ be any fixed subspace with $\\dim (L)=\\ell $ .", "Let $K\\subset \\mathbb {R}^n$ be a non-trivial closed convex cone.", "There exists some universal constant $C>0$ such that the following statements hold for $t\\ge 1$ .", "If $\\sqrt{\\ell }-\\sqrt{n-\\delta (K)}\\le -C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (K\\cap O_n L= \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t}.$ If $\\sqrt{\\ell }-\\sqrt{n-\\delta (K)}\\ge C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (K\\cap O_nL\\ne \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t}.$ Here recall $O_n$ is uniformly distributed on $\\mathrm {O}(n)$ .", "As mentioned above, the upper estimate part (1) (with more general $K$ 's) is proved essentially by [24] via a one-sided comparison principle for Gaussian processes.", "The more difficult lower estimate part (2) is proved much later by [4] via the exact Crofton formula (REF ) and the concentration of intrinsic volumes.", "Since Corollary REF now follows as an immediate consequence of Theorem REF , an interesting proof-theoretic implication of our approach is that it provides a purely random process proof for the classical Gordon's Escape Theorem, without resorting to the conic integral geometry method employed in [4]." ], [ "Applications of Theorem ", "To demonstrate the broad applicability of Theorem REF and its method of proof described in the previous subsection, we now present applications thereof to three distinct problems arising from statistical learning, mathematical programming, and asymptotic geometric analysis."
], [ "Application I: Cone constrained MLEs in logistic regression", "Suppose $m$ i.i.d.", "samples $(X_i,Y_i) \\in \\mathbb {R}^n\\times \\lbrace \\pm 1\\rbrace , i \\in [m]$ are observed with $\\operatorname{\\mathbb {P}}(Y_i=1|X_i)=1-\\operatorname{\\mathbb {P}}(Y_i=-1|X_i)= \\mu (X_i^\\top \\beta _0)$ for some unknown regression vector $\\beta _0 \\in \\mathbb {R}^n$ and a known link function $\\mu : \\mathbb {R}\\rightarrow \\mathbb {R}$ .", "The statistical problem is to estimate $\\beta _0 \\in \\mathbb {R}^n$ based on the observations $\\lbrace (X_i,Y_i): i \\in [m]\\rbrace $ .", "The most common choice $\\mu (x)\\equiv \\frac{e^x}{1+e^x}$ corresponds to the logistic regression model.", "In this logistic regression model, the log-likelihood is given by $\\ell (\\beta ) = -\\sum _{i=1}^m \\log \\Big [1+\\exp \\big (-Y_i \\cdot X_i^\\top \\beta \\big )\\Big ].$ Let $K\\subset \\mathbb {R}^n$ be a non-trivial closed convex cone.", "The cone constrained maximum likelihood estimator (MLE) $\\widehat{\\beta }_K$ is defined as any maximizer of the map $\\beta \\mapsto \\ell (\\beta ),\\beta \\in K$ , whenever such a maximizer exists.", "It is well-known that the existence of the MLE, already in the unconstrained case (i.e., $K=\\mathbb {R}^n$ ), is a highly non-trivial matter.", "In particular, under the global null $\\beta _0=0$ and in the proportional high dimensional regime $n/m\\rightarrow \\kappa \\in (0,\\infty )$ , the seminal results of [11] show that if $\\kappa <1/2$ , the unconstrained MLE $\\widehat{\\beta }_{\\mathbb {R}^n}$ exists with asymptotically probability (w.a.p.)", "1; on the other hand, if $\\kappa >1/2$ , unconstrained MLEs do not exist w.a.p.", "1, or equivalently, the map $\\beta \\mapsto \\ell (\\beta ), \\beta \\in \\mathbb {R}^n$ attains its maximum at $\\infty $ w.a.p.", "1.", "Here with the help of Theorem REF , we prove a generalization of the prescribed phase transition for cone constrained MLEs under a standard Gaussian design for $\\lbrace X_i\\rbrace $ and the global null $\\beta _0=0$ : (i) if $m>2\\delta (K)$ , the cone constrained MLE $\\widehat{\\beta }_K$ exists with high probability; (ii) if $m<2\\delta (K)$ , cone constrained MLEs do not exist, i.e., $\\beta \\mapsto \\ell (\\beta ), \\beta \\in K$ attains its maximum at $\\infty $ , with high probability.", "The formal statement is as follows.", "Theorem 1.3 Suppose that $\\lbrace (X_i,Y_i)\\in \\mathbb {R}^n\\times \\lbrace \\pm 1\\rbrace : i \\in [m]\\rbrace $ are i.i.d.", "samples generated from the model (REF ) with link function $\\mu $ given by (REF ).", "Suppose further that $X_1,\\ldots ,X_m$ are i.i.d.", "$\\mathcal {N}(0,I_n)$ and $\\beta _0=0$ .", "There exists some universal constant $C>0$ such that the following statements hold for $t\\ge 1$ .", "If $\\sqrt{m}\\le \\sqrt{2\\delta (K)}-C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ does not exist}\\big )\\ge 1-e^{-t},$ If $\\sqrt{m}\\ge \\sqrt{2\\delta (K)}+C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ exists}\\big )\\ge 1-e^{-t}.$ The key to link the above result to Theorem REF is the approximate equivalence $\\big \\lbrace \\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ does not exist}\\big \\rbrace \\approx \\big \\lbrace GK\\cap \\mathbb {R}_{\\ge 0}^m\\ne \\lbrace 0\\rbrace \\big \\rbrace .$ See Proposition REF for a precise statement of (REF ).", "Clearly, Theorem REF recovers the asymptotic phase 
transitions in [11], [13] for the non-/existence of the unconstrained MLE $\\widehat{\\beta }_{\\mathbb {R}^n}$ under the global null $\\beta _0=0$ , with an optimal probability estimate.", "Extending the phase transitions in Theorem REF for the constrained MLE $\\widehat{\\beta }_K$ to general $\\beta _0$ 's is beyond the scope of a direct application of Theorem REF , and will therefore be pursued elsewhere." ], [ "Application II: Conic programs with random constraints", "Let $K\\subset \\mathbb {R}^n$ be a closed convex cone.", "For given $x \\in \\mathbb {R}^n$ and $A \\in \\mathbb {R}^{m\\times n}$ and $b \\in \\mathbb {R}^m$ , consider the following standard form of conic program (CP): $&\\max _{\\mu \\in \\mathbb {R}^n}\\, \\langle x,\\mu \\rangle \\quad \\hbox{subject to}\\quad A\\mu = b,\\, \\mu \\in K.$ Several canonical examples of conic programs include linear programs, second-order cone programs and semi-definite programs; the readers are referred to [6], [7] for more details on CPs.", "The main purpose here is to show that under a Gaussian random constraint $A=G$ and a deterministic choice of $x \\in \\partial B_n$ , the CP (REF ) undergoes different phase transitions according to whether $b=0$ or $b\\ne 0$ , the behaviors of which can be described using the following geometric quantity associated with $K$ and $x$ : $K_x\\equiv K\\cap \\lbrace \\mu \\in \\mathbb {R}^n: \\langle x,\\mu \\rangle \\ge 0\\rbrace .$ Clearly $K_x\\subset K$ and $\\delta (K_x)\\le \\delta (K)$ .", "We also recall some standard terminology in mathematical programming: the CP (REF ) is called feasible if the constraint set $\\lbrace A\\mu =b,\\mu \\in K\\rbrace $ is non-empty, and is called infeasible if this constraint set is empty.", "Theorem 1.4 Fix two positive integers $m,n\\in \\mathbb {N}$ , $x \\in \\partial B_n$ and a non-trivial closed convex cone $K\\subset \\mathbb {R}^n$ .", "Consider the conic programming (REF ) with the standard Gaussian random constraint $A=G \\in \\mathbb {R}^{m\\times n}$ .", "There exists some universal constant $C>0$ such that the following statements hold for $t\\ge 1$ : (Homogeneous case $b=0$) If $\\sqrt{m}\\ge \\sqrt{\\delta (K_x)}+C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (\\hbox{The value of the CP (\\ref {def:conic_program})}=0\\big )\\ge 1-e^{-t}.$ If $\\sqrt{m}\\le \\sqrt{\\delta (K_x)}-C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (\\hbox{The value of the CP (\\ref {def:conic_program})}=\\infty \\big )\\ge 1-e^{-t}.$ (In-homogeneous case $b\\ne 0$) If $\\sqrt{m}\\ge \\sqrt{\\delta (K)}+C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (\\hbox{The CP (\\ref {def:conic_program}) is infeasible}\\big )\\ge 1-e^{-t}.$ If $\\sqrt{\\delta (K_x)}+C\\sqrt{t}\\le \\sqrt{m}\\le \\sqrt{\\delta (K)}-C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (\\hbox{The value of the CP (\\ref {def:conic_program})}\\le 0\\big )\\ge 1-e^{-t}.$ If $\\sqrt{m}\\le \\sqrt{\\delta (K_x)}-C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (\\hbox{The value of the CP (\\ref {def:conic_program})}=\\infty \\big )\\ge 1-e^{-t}.$ The above theorem is closely related to [2] that studies the CP (REF ) with the same Gaussian random constraint $A=G$ , and the additional assumption $x\\sim \\mathcal {N}(0,I_n)$ .", "Under this additional Gaussian assumption on $x$ , [2] provide exact kinematic formulae for (slight variations of) some probabilities in the above theorem, expressed via the intrinsic volumes associated with $K$ .", "Using concentration of intrinsic 
volumes, [4] then shows that in the in-homogeneous case $b\\ne 0$ with $x\\sim \\mathcal {N}(0,I_n)$ , the phase transition reads: If $\\sqrt{m}\\ge \\sqrt{\\delta (K)}+C\\sqrt{t}$ , then the CP (REF ) is infeasible with probability at least $1-e^{-t}$ .", "If $\\sqrt{m}\\le \\sqrt{\\delta (K)}-C\\sqrt{t}$ , then the value of the CP (REF ) is $\\infty $ with probability at least $1-e^{-t}$ .", "Compared to the results for the in-homogeneous case of Theorem REF , it is immediately seen that a deterministic choice of $x \\in \\partial B_n$ significantly changes the phase transition behavior of the CP (REF ).", "In particular, a new intermediate regime $\\delta (K_x)\\ll m\\ll \\delta (K)$ appears in which the CP (REF ) is feasible but its value is finite.", "We now mention the connection between Theorem REF and the above Theorem REF .", "The proof of Theorem REF heavily borrows from the idea of the estimates (REF )-(REF ) used in the proof of Theorem REF .", "In particular, for any non-trivial closed convex cone $L \\subset \\mathbb {R}^m$ , the following two-sided estimate holds: $\\max _{\\mu \\in K\\cap B_n: G\\mu \\in L}\\langle x,\\mu \\rangle \\stackrel{\\operatorname{\\mathbb {P}}}{\\approx } \\sup _{\\mu \\in K\\cap B_n,\\langle g,\\mu \\rangle \\ge \\sup \\limits _{v\\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle }\\langle x,\\mu \\rangle .$ See Proposition REF for a precise formulation of the above probabilistic equivalence.", "As an interesting consequence of the estimate (REF ), we obtain an approximate kinematic formula for the Gaussian random pre-image $G^{-1}L$ for a fixed closed convex cone $L\\subset \\mathbb {R}^m$ , formally defined by $G^{-1}L\\equiv \\lbrace \\mu \\in \\mathbb {R}^n: G\\mu \\in L\\rbrace .$ The approximate kinematic formula below describes the behavior of $G^{-1}L$ .", "Theorem 1.5 Suppose that $K \\subset \\mathbb {R}^n$ and $L\\subset \\mathbb {R}^m$ are non-trivial closed convex cones.", "There exists some universal constant $C>0$ such that the following statements hold for $t\\ge 1$ .", "If $\\sqrt{\\delta (K)}-\\sqrt{m-\\delta (L)}\\le -C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (K\\cap G^{-1}L = \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t}.$ If $\\sqrt{\\delta (K)}-\\sqrt{m-\\delta (L)}\\ge C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\big (K\\cap G^{-1}L \\ne \\lbrace 0\\rbrace \\big )\\ge 1-e^{-t}.$ Despite the highly similar formulation of the above theorem to that of Theorem REF , the kinematic formula in the above Theorem REF is far from a direct consequence of Theorem REF .", "Moreover, the kinematic interpretation of the two approximate formulae in Theorems REF and REF is markedly different.", "In particular, the informal interpretation (REF ) for the behavior of the random projection $GK$ requires a crucial modification for the purpose of describing the kinematics of $G^{-1}L$ : Theorem REF shows that the random pre-image $G^{-1}L$ should now be regarded as a randomly rotated cone in $\\mathbb {R}^n$ with statistical dimension $\\max \\lbrace 0,\\delta (L)+n-m\\rbrace $ , where the extra `dimension' $n-m$ comes from the null space behavior of $G$ (at least when $m\\le n$ ).", "As a side remark, we note that the Gordon's Escape Theorem for spherically convex sets in the form of Corollary REF can also be recovered by applying the above Theorem REF with $m\\le n$ and $L=\\lbrace 0\\rbrace $ .", "This follows by noting the simple fact that $G^{-1}\\lbrace 0\\rbrace \\stackrel{d}{=}O_n L_{n-m}$ , where $L_{n-m}$ is any fixed 
linear subspace of $\\mathbb {R}^n$ with $\\dim (L_{n-m})=n-m$ ." ], [ "Application III: A local Gaussian Dvoretzky-Milman Theorem", "The classical Gaussian Dvoretzky-Milman Theorem ([18], [19], [34]), when adapted to our setting, says that a sufficiently low dimensional image of the possibly high dimension random set $G(K\\cap B_n)$ is an approximately deterministic round ball with radius $\\sqrt{\\delta (K)}$ .", "Its precise form reads as follows: there exists some universal constant $c>0$ such that for any $\\varepsilon \\in (0,1/2)$ , if $k$ is an integer with $1\\le k \\le c\\varepsilon ^2 \\delta (K)$ , then with probability at least $1-\\exp (-c\\varepsilon ^2 \\delta (K))$ , $(1-\\varepsilon )\\cdot B_k\\big (\\sqrt{\\delta (K)}\\big ) \\subset \\Pi _{m\\rightarrow k} \\big (G K_n\\big )\\subset (1+\\varepsilon )\\cdot B_k\\big (\\sqrt{\\delta (K)}\\big ).$ Here $K_n\\equiv K\\cap B_n$ , and $\\Pi _{m\\rightarrow k}:\\mathbb {R}^m\\rightarrow \\mathbb {R}^k$ is the natural projection map onto the first $k$ -coordinatesNote that while we may simply write $\\Pi _{m\\rightarrow k}(G K_n)$ as $G_{k,n}K_n$ where $G_{k,n} \\in \\mathbb {R}^{k\\times n}$ contains i.i.d.", "$\\mathcal {N}(0,1)$ .", "Here we opt to the form $\\Pi _{m\\rightarrow k}(G K_n)$ in (REF ), as it connects naturally to the `local' version we examine in Theorem REF ., whereas $B_k(r)$ denotes an $\\ell _2$ ball of $\\mathbb {R}^k$ with radius $r$ .", "For more general versions/other variations of (REF ), the readers are referred to e.g., [38], [1], and [48].", "Below we provide a `local' version of (REF ) by looking at low-dimensional images of the random set $L\\cap GK_n$ , where $L$ is a fixed subspace.", "In particular, the following theorem shows that the shape of $\\Pi _{m\\rightarrow k}(L\\cap GK_n)$ is again approximately a deterministic round ball, but with a possible shrinkage in the radius.", "Theorem 1.6 Suppose that $K \\subset \\mathbb {R}^n$ is a non-trivial closed convex cone and $L\\equiv \\lbrace x \\in \\mathbb {R}^m: x|_{(\\ell :m]}=0\\rbrace $ for some integer $\\ell $ with $\\max \\lbrace 1, (m-\\delta (K))_+\\rbrace \\le \\ell \\le m$ .", "Further assume that there exists some $\\tau \\in (0,1/2)$ such that $(m-\\ell )\\le (1-\\tau )\\cdot \\delta (K)$ .", "Then there exists some small constant $c=c(\\tau )>0$ with the following statement holding true.", "For any $\\varepsilon \\in (0,1/2)$ , if $k$ is an integer with $1\\le k\\le \\min \\lbrace \\ell , c\\cdot {L}\\big (\\varepsilon ^2\\delta (K)\\big )\\rbrace $ , then with probability at least $1-\\exp \\big (-c \\varepsilon ^2 \\delta (K)\\big )$ , $(1-\\varepsilon )\\cdot B_k(\\sqrt{\\delta (K)-m+\\ell }) \\subset \\Pi _{m\\rightarrow k} \\big (L\\cap G K_n\\big )\\subset (1+\\varepsilon )\\cdot B_k(\\sqrt{\\delta (K)-m+\\ell }).$ Here $K_n$ , $\\Pi _{m\\rightarrow k}:\\mathbb {R}^m\\rightarrow \\mathbb {R}^k$ are as above, and ${L}(x)\\equiv x/\\log \\big (e \\vee (1/x)\\big )$ .", "Clearly, the standard form of Gaussian Dvoretzky-Milman Theorem (REF ) can be recovered using the above theorem by setting $L=\\mathbb {R}^m$ (with a logarithmically worse dependence of the dimension $k$ with respect to $\\varepsilon $ ).", "From a different perspective, Theorem REF above also provides further understanding to the stochastic geometry of $L\\cap GK_n$ beyond the scope of Theorem REF , at least when $L$ is a subspace.", "In particular, in the most interesting regime $\\ell +\\delta (K)\\gg m$ , Theorem REF -(2) only proves that $L\\cap GK_n 
\\ne \\lbrace 0\\rbrace $ is a non-trivial random set with high probability.", "Here Theorem REF provides a substantially refined, almost deterministic low dimensional description of the stochastic geometry of $L\\cap G K_n$ : with high probability, $\\Pi _{m\\rightarrow k}(L\\cap GK_n)\\approx B_k(\\sqrt{\\delta (K)-m+\\ell })$ .", "The proof of Theorem REF is based on the following improved two-sided estimate for (REF )-(REF ) that exploits the additional subspace structure of $L$ : $\\mathsf {h}_{L\\cap GK_n}(x)\\stackrel{\\operatorname{\\mathbb {P}}}{\\approx } \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}.$ In particular, we prove in Propositions REF and REF a precise version of (REF ) with optimal Gaussian tail estimates for each and every $x\\in L\\cap \\partial B_m$ .", "The claim of Theorem REF then follows by a standard $\\varepsilon $ -net argument." ], [ "Organization", "The rest of the paper is organized as follows.", "Section presents technical preliminaries on convex geometry and Gaussian process tools.", "The approximate kinematic formulae in Theorem REF and its direct consequence Corollary REF are then proved in Section .", "Proofs for the applications of Theorem REF to various problems in Section REF are detailed in Section -." ], [ "Some further notation", "For any pair of positive integers $m\\le n$ , let $[m:n]\\equiv \\lbrace m,\\ldots ,n\\rbrace $ , and $(m:n]\\equiv [m:n]\\setminus \\lbrace m\\rbrace $ , $[m:n)\\equiv [m:n]\\setminus \\lbrace n\\rbrace $ .", "We often write $[n]$ for $[1:n]$ .", "For $a,b \\in \\mathbb {R}$ , $a\\vee b\\equiv \\max \\lbrace a,b\\rbrace $ and $a\\wedge b\\equiv \\min \\lbrace a,b\\rbrace $ .", "For $a \\in \\mathbb {R}$ , let $a_+\\equiv a\\vee 0$ and $a_- \\equiv (-a)\\vee 0$ .", "For $x \\in \\mathbb {R}^n$ , let $\\Vert x\\Vert _{p}=\\Vert x\\Vert _{\\ell _p(\\mathbb {R}^n)}$ denote its $p$ -norm $(1\\le p\\le \\infty )$ with $\\Vert x\\Vert _{2}$ abbreviated as $\\Vert x\\Vert _{}$ .", "Let $B_n(r;x)\\equiv \\lbrace z \\in \\mathbb {R}^n: \\Vert z-x\\Vert _{}\\le r\\rbrace $ and recall $B_n(r)= B_n(r;0), B_n= B_n(1)$ .", "For $x,y \\in \\mathbb {R}^n$ , we write $\\langle x,y\\rangle \\equiv \\sum _{i=1}^n x_iy_i$ .", "For a matrix $A \\in \\mathbb {R}^{m\\times n}$ and a measurable set $T$ , we follow the notation (REF ) to write $AT\\equiv \\lbrace At: t \\in T\\rbrace \\subset \\mathbb {R}^m$ .", "For any subspace $L\\subset \\mathbb {R}^n$ , let $\\operatorname{\\mathsf {P}}_L: \\mathbb {R}^n\\rightarrow \\mathbb {R}^n$ be the orthogonal projection onto $L$ and $\\operatorname{\\mathsf {P}}_L^\\perp \\equiv \\mathrm {Id}-\\operatorname{\\mathsf {P}}_L$ .", "We use $C_{x}$ to denote a generic constant that depends only on $x$ , whose numeric value may change from line to line unless otherwise specified.", "$a\\lesssim _{x} b$ and $a\\gtrsim _x b$ mean $a\\le C_x b$ and $a\\ge C_x b$ respectively, and $a\\asymp _x b$ means $a\\lesssim _{x} b$ and $a\\gtrsim _x b$ ." 
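Returning to the Gaussian Dvoretzky-Milman statement (REF ) of Application III, the following minimal Python sketch is a numerical point of reference for the normalization $\\sqrt{\\delta (K)}$ ; it is an illustration only and plays no role in the arguments. It treats the easiest case $K=\\mathbb {R}^n$ (so that $K_n=B_n$ and $\\delta (K)=n$ ), where the support function of $\\Pi _{m\\rightarrow k}(GK_n)$ in a unit direction $x \\in \\partial B_k$ has the closed form $\\Vert G^\\top \\tilde{x}\\Vert _{}$ , with $\\tilde{x}$ the zero-padding of $x$ to $\\mathbb {R}^m$ ; the chosen dimensions and the number of sampled directions are arbitrary.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, k, n_dirs = 2000, 500, 20, 200   # hypothetical sizes with k << delta(K) = n
G = rng.standard_normal((m, n))

# Support function of Pi_{m->k}(G B_n) at a unit direction x in R^k:
# sup_{||mu|| <= 1} <x, (G mu)[:k]> = ||G^T x_tilde||, where x_tilde pads x with zeros.
vals = []
for _ in range(n_dirs):
    x = rng.standard_normal(k)
    x /= np.linalg.norm(x)
    x_tilde = np.zeros(m)
    x_tilde[:k] = x
    vals.append(np.linalg.norm(G.T @ x_tilde))
vals = np.array(vals)
print(vals.min() / np.sqrt(n), vals.max() / np.sqrt(n))   # both ratios close to 1
\\end{verbatim}

For a general convex cone $K$ the support function no longer has a closed form and such a check would require a convex solver; the sketch is only meant to make the radius $\\sqrt{\\delta (K)}$ in (REF ) concrete.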
], [ "Some basic convex geometry", "For any closed convex cone $K\\subset \\mathbb {R}^n$ , its polar cone is defined as $K^\\circ \\equiv \\left\\lbrace v\\in \\mathbb {R}^n: \\langle v,\\mu \\rangle \\le 0, \\text{ for all }\\mu \\in K\\right\\rbrace .$ The following lemma provides a useful characterization of a closed convex cone $K$ via its polar cone $K^\\circ $ .", "Lemma 2.1 Let $K$ be a closed convex cone.", "The following hold: $&\\mu \\in K \\Leftrightarrow \\sup _{v \\in K^\\circ } \\langle v,\\mu \\rangle =0\\Leftrightarrow \\inf _{v \\in -K^\\circ } \\langle v,\\mu \\rangle =0,\\nonumber \\\\& \\mu \\notin K \\Leftrightarrow \\sup _{v \\in K^\\circ } \\langle v,\\mu \\rangle =\\infty \\Leftrightarrow \\inf _{v \\in -K^\\circ } \\langle v,\\mu \\rangle =-\\infty .$ Suppose $\\mu \\in K$ .", "By definition of $K^\\circ $ , $\\sup _{v \\in K^\\circ }\\langle v,\\mu \\rangle \\le 0$ with equality achieved for $v=0$ .", "This proves the direction $\\mu \\in K\\Rightarrow \\sup _{v \\in K^\\circ } \\langle v,\\mu \\rangle =0$ .", "For the other direction, as for any $\\mu \\in \\mathbb {R}^n$ , $\\sup _{v \\in K^\\circ } \\langle v,\\mu \\rangle \\in \\lbrace 0,\\infty \\rbrace $ , it remains to prove that $\\mu \\notin K \\Rightarrow \\sup _{v \\in K^\\circ } \\langle v,\\mu \\rangle = \\infty $ .", "To see this, suppose $\\mu \\in K$ is such that $\\sup _{v \\in K^\\circ } \\langle v,\\mu \\rangle $ is finite.", "This means $\\sup _{v \\in K^\\circ } \\langle v,\\mu \\rangle =0$ , or equivalently, $\\langle v,\\mu \\rangle \\le 0$ for all $v \\in K^\\circ $ .", "Consequently, $\\mu \\in (K^\\circ )^\\circ =K$ , where the last identity follows by e.g.", "[39].", "Another important property of the polar cone is the following orthogonal decomposition known as Moreau's theorem [39].", "Lemma 2.2 For any $v\\in \\mathbb {R}^n$ , we have the orthogonal decomposition $v = \\Pi _K(v) + \\Pi _{K^\\circ }(v) \\, \\hbox{ with }\\, \\langle \\Pi _K(v),\\Pi _{K^\\circ }(v)\\rangle = 0.$ The notion of statistical dimension defined in (REF ) is intrinsically related to the so-called Gaussian width.", "A formal definition is given as follows.", "For a compact convex set $K_0\\subset \\mathbb {R}^n$ , we define its Gaussian width $\\operatorname{\\mathfrak {w}}(K_0)$ by $\\operatorname{\\mathfrak {w}}(K_0)\\equiv \\operatorname{\\mathbb {E}}\\sup _{\\mu \\in K_0}\\langle g,\\mu \\rangle ,\\quad g\\sim \\mathcal {N}(0,I_n).$ We shall frequently use the following properties of the statistical dimension (REF ) and Gaussian width (REF ) in the proofs ahead.", "Proposition 2.3 Let $K\\subset \\mathbb {R}^n$ be a closed convex cone.", "$\\delta (K)=\\operatorname{\\mathbb {E}}\\big (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\big )^2$ , where $g\\sim \\mathcal {N}(0,I_n)$ .", "$\\delta (K)+\\delta (K^\\circ )=n$ .", "For nontrivial $K\\ne \\lbrace 0\\rbrace $ , $\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n) \\le \\operatorname{\\mathfrak {w}}(K\\cap B_n)\\le \\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)+1$ .", "$\\operatorname{\\mathfrak {w}}(K\\cap B_n)\\le \\sqrt{\\delta (K)}\\le \\operatorname{\\mathfrak {w}}(K\\cap B_n)+1 $ .", "(1)-(2).", "The claims follow from [4].", "(3).", "Note that $\\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle &\\le \\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle = \\sup _{0\\le r\\le 1} \\sup _{\\mu \\in K, \\Vert \\mu \\Vert _{}=r}\\langle g,\\mu \\rangle \\\\&= \\sup _{0\\le r\\le 1} \\bigg (r\\cdot \\sup _{\\mu \\in 
K\\cap \\partial B_n}\\langle g,\\mu \\rangle \\bigg ) = 0 \\vee \\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle .$", "On the other hand, using the 1-Lipschitz property of $g\\mapsto \\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle $ , we obtain by the Gaussian-Poincaré inequality that $\\operatorname{Var}\\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle \\bigg )= \\operatorname{\\mathbb {E}}\\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\operatorname{\\mathfrak {w}}^2(K\\cap \\partial B_n)\\le 1.$", "Consequently, using the above two inequalities and the fact that $\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)\\ge 0$ , $\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)&\\le \\operatorname{\\mathfrak {w}}(K\\cap B_n)\\le \\operatorname{\\mathbb {E}}\\bigg [0 \\vee \\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle \\bigg ]\\\\&\\le \\operatorname{\\mathbb {E}}^{1/2}\\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle \\bigg )^2 \\le \\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)+1.$", "(4).", "We may conclude the claim by using the representation in (1) and the Gaussian-Poincaré inequality similarly to the proof of (3).", "Details are omitted." ], [ "Gaussian process tools", "The following version of the convex Gaussian min-max theorem, proved using Gordon's min-max theorem [23], [24], is taken from [35].", "Theorem 2.4 (Convex Gaussian Min-Max Theorem) Suppose $D_u \\subset \\mathbb {R}^{n_1+n_2}, D_v \\subset \\mathbb {R}^{m_1+m_2}$ are compact sets, and $Q: D_u\\times D_v \\rightarrow \\mathbb {R}$ is continuous.", "Let $G=(G_{ij})_{i \\in [n_1],j\\in [m_1]}$ with $G_{ij}$ 's i.i.d.", "$\\mathcal {N}(0,1)$ , and $g \\sim \\mathcal {N}(0,I_{n_1})$ , $h \\sim \\mathcal {N}(0,I_{m_1})$ be independent Gaussian vectors.", "For $u \\in \\mathbb {R}^{n_1+n_2}, v \\in \\mathbb {R}^{m_1+m_2}$ , write $u_1\\equiv u_{[n_1]}\\in \\mathbb {R}^{n_1}, v_1\\equiv v_{[m_1]} \\in \\mathbb {R}^{m_1}$ .", "Define $\\Phi ^{\\textrm {p}} (G)& = \\max _{u \\in D_u}\\min _{v \\in D_v} \\Big ( u_1^\\top G v_1 + Q(u,v)\\Big ), \\nonumber \\\\\\Phi ^{\\textrm {a}}(g,h)& = \\max _{u \\in D_u}\\min _{v \\in D_v} \\Big (\\Vert v_1\\Vert _{} g^\\top u_1 + \\Vert u_1\\Vert _{} h^\\top v_1+ Q(u,v)\\Big ).$", "Then the following hold.", "For all $t \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\big (\\Phi ^{\\textrm {p}} (G)\\ge t\\big )\\le 2 \\operatorname{\\mathbb {P}}\\big (\\Phi ^{\\textrm {a}}(g,h)\\ge t\\big ).$", "If $(u,v)\\mapsto u_1^\\top G v_1+ Q(u,v)$ satisfies the conditions of Sion's min-max theorem for the pair $(D_u,D_v)$ a.s. 
(for instance, $D_u,D_v$ are convex, and $Q$ is concave-convex), then for any $t \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\big (\\Phi ^{\\textrm {p}} (G)\\le t\\big )\\le 2 \\operatorname{\\mathbb {P}}\\big (\\Phi ^{\\textrm {a}}(g,h)\\le t\\big ).$ Clearly, $\\ge $ (resp.", "$\\le $ ) in (1) (resp.", "(2)) can be replaced with $>$ (resp $<$ ).", "In the proofs below, we shall assume without loss of generality that $G,g,h$ are independent Gaussian matrix/vectors defined on the same probability space.", "The following Gaussian concentration inequality will be used frequently; its statement is taken from e.g., [33], or [21].", "Theorem 2.5 Let $g\\sim \\mathcal {N}(0,I_n)$ and $F:\\mathbb {R}^n\\rightarrow \\mathbb {R}$ be a 1-Lipschitz map.", "Then $\\operatorname{\\mathbb {P}}\\big (\\vert F(g)-\\mathrm {Med}(F(g))\\vert \\ge t\\big )\\le 2e^{-t^2/2},\\quad t\\ge 0.$ Here $\\mathrm {Med}(F(g))$ denotes the median of $F(g)$ .", "We will mostly use the following form of the Gaussian concentration inequality in the proofs below.", "Proposition 2.6 There exists some universal constant $C>0$ such that for any compact set $K_0\\subset \\mathbb {R}^n$ and all $t\\ge 1$ , with probability at least $1-e^{-t}$ , $\\big \\vert \\sup _{\\mu \\in K_0}\\langle g,\\mu \\rangle - \\operatorname{\\mathbb {E}}\\sup _{\\mu \\in K_0}\\langle g,\\mu \\rangle \\big \\vert \\le C \\cdot \\sup _{\\mu \\in K_0}\\Vert \\mu \\Vert _{}\\cdot \\sqrt{t}.$ By rescaling, we may assume without loss of generality that $K_0\\subset B_n$ .", "Let $F(g)\\equiv \\sup _{\\mu \\in K_0}\\langle g,\\mu \\rangle $ .", "It is easy to see that $\\vert F(g)-F(g^{\\prime })\\vert \\le \\Vert g-g^{\\prime }\\Vert _{}$ holds for any pair $g,g^{\\prime } \\in \\mathbb {R}^n$ , so $F$ is 1-Lipschitz.", "Using the Gaussian concentration inequality in Theorem REF , for any $t>0$ , with probability at least $1-2e^{-t/2}$ we have $\\vert F(g)-\\mathrm {Med}(F(g))\\vert \\le \\sqrt{t}.$ Integrating the tail, we obtain $\\vert \\operatorname{\\mathbb {E}}F(g)-\\mathrm {Med}(F(g))\\vert \\le C$ for some universal constant $C>0$ .", "Combining the above two displays, for any $t>0$ , with probability at least $1-2e^{-t/2}$ we have $\\vert F(g)- \\operatorname{\\mathbb {E}}F(g)\\vert \\le \\sqrt{t}+C.$ Now adjusting the constants to conclude." 
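As a quick numerical sanity check of Proposition 2.3-(1)(2) and of the $O(1)$ concentration in Proposition 2.6, one may take $K$ to be the nonnegative orthant $\\mathbb {R}_{\\ge 0}^n$ , for which $\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle =\\Vert g_+\\Vert _{}$ (the norm of the coordinatewise positive part of $g$ ), $K^\\circ =-K$ and $\\delta (K)=n/2$ . The following Python sketch is a minimal illustration under these closed forms; the dimension and the number of Monte Carlo repetitions are arbitrary choices.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 400, 5000
g = rng.standard_normal((reps, n))

# For the orthant K, sup over K cap B_n of <g, mu> is ||g_+||;
# for the polar cone K^polar = -K it is ||(-g)_+||.
sup_K = np.linalg.norm(np.clip(g, 0.0, None), axis=1)
sup_Kpolar = np.linalg.norm(np.clip(-g, 0.0, None), axis=1)

print(np.mean(sup_K ** 2), n / 2)                          # Proposition 2.3-(1): ~ delta(K) = n/2
print(np.mean(sup_K ** 2) + np.mean(sup_Kpolar ** 2), n)   # Proposition 2.3-(2): ~ n
print(np.std(sup_K))                                       # O(1) fluctuations, uniformly in n
\\end{verbatim}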
], [ "A continuity lemma", "Lemma 2.7 Let $f: \\mathbb {R}^n\\rightarrow \\mathbb {R}$ be a continuous function.", "Suppose $\\lbrace S_k: k\\in \\mathbb {N}\\rbrace $ is a sequence of non-increasing compact sets in $\\mathbb {R}^n$ .", "Then with $S_\\infty \\equiv \\cap _{k\\in \\mathbb {N}} S_k$ , we have $\\sup _{x \\in S_k} f(x)\\downarrow \\sup _{x \\in S_\\infty } f(x)$ .", "Suppose $\\lbrace T_k: k\\in \\mathbb {N}\\rbrace $ is a sequence of non-decreasing compact sets in $\\mathbb {R}^n$ such that $T_k\\subset T$ for some compact set $T\\subset \\mathbb {R}^n$ for all $k \\in \\mathbb {N}$ .", "Then with $T_\\infty \\equiv \\mathrm {cl}\\big (\\cup _{k\\in \\mathbb {N}} T_k\\big )$ , we have $\\sup _{x \\in T_k} f(x)\\uparrow \\sup _{x \\in T_\\infty } f(x)$ .", "(1).", "As $f$ is continuous and $S_k$ is compact, we may find some $x_k \\in S_k$ such that $f(x_k)=\\sup _{x \\in S_k}f(x)$ .", "As the sequence $\\lbrace x_k: k\\in \\mathbb {N}\\rbrace $ is contained in some compact set, say, $S_1$ , we may assume without loss generality that $x_k \\rightarrow x_\\infty $ for some $x_\\infty \\in S_1$ .", "As all subsequential limits of $\\lbrace S_k\\rbrace $ are contained in $S_\\infty $ due to the monotonicity of $S_k$ , we have $x_\\infty \\in S_\\infty $ .", "This means $\\sup _{x \\in S_k} f(x)=f(x_k)\\rightarrow f(x_\\infty )\\le \\sup _{x \\in S_\\infty } f(x)$ , i.e., $\\operatornamewithlimits{\\overline{lim}}_k \\sup _{x \\in S_k} f(x)\\le \\sup _{x \\in S_\\infty } f(x)$ .", "The other direction is trivial.", "(2).", "For any $y_\\infty \\in \\cup _{k\\in \\mathbb {N}} T_k$ , there exists some $k \\in \\mathbb {N}$ such that $y_\\infty \\in T_k$ .", "This means $\\sup _{y \\in T_k} f(y)\\ge f(y_\\infty )$ .", "As $\\lbrace T_k\\rbrace $ is non-decreasing, we have $\\operatornamewithlimits{\\underline{lim}}_k \\sup _{y \\in T_k} f(y)\\ge f(y_\\infty )$ .", "Maximizing over $y_\\infty \\in \\cup _{k\\in \\mathbb {N}} T_k$ and using the continuity of $f$ , we have $\\operatornamewithlimits{\\underline{lim}}_k \\sup _{y \\in T_k} f(y)\\ge \\sup _{y_\\infty \\in \\cup _{k\\in \\mathbb {N}} T_k} f(y_\\infty )=\\sup _{y \\in T_\\infty } f(y)$ .", "The other direction is again trivial." 
], [ "Proof of Theorem ", "The following proposition shows that for `small' values of $\\delta (K)$ , the support function of $L\\cap G(K\\cap B_n)$ will be trivial uniformly in all directions with high probability.", "Proposition 3.1 Suppose that $K \\subset \\mathbb {R}^n$ and $L\\subset \\mathbb {R}^m$ are closed convex cones with $K\\ne \\lbrace 0\\rbrace , L\\notin \\lbrace \\lbrace 0\\rbrace ,\\mathbb {R}^m\\rbrace $ .", "Fix $x \\in L\\cap \\partial B_m$ .", "Then there exists some universal constant $C>0$ such that for any $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\Big (\\sup _{x \\in L\\cap \\partial B_m}\\mathsf {h}_{L\\cap G K_n}(x)\\ne 0\\Big )\\le \\mathbf {1}\\Big (\\sqrt{\\delta (K)}> \\sqrt{\\delta (L^\\circ )}-C\\sqrt{t}\\Big )+e^{-t}.$ Here $K_n\\equiv K\\cap B_n$ .", "We shall drop the subscript in $\\mathsf {h}_{L\\cap G K_n}$ for notational convenience in the proof.", "Let $E(R)\\equiv \\lbrace \\Vert G\\Vert _{\\operatorname{op}}\\le R\\rbrace $ .", "Note that $\\mathsf {h}(x) = \\sup _{\\mu \\in K_n, G\\mu \\in L}\\langle x,G\\mu \\rangle =\\sup _{v \\in L, \\mu \\in K_n} \\inf _{w \\in \\mathbb {R}^m} \\Big \\lbrace \\langle x,v\\rangle +\\langle w,G\\mu -v\\rangle \\Big \\rbrace ,$ and on the event $E(R)$ , the supremum above over $v \\in L$ can be restricted to $v \\in L\\cap B_m(R)$ uniformly for all $x \\in L\\cap \\partial B_m$ .", "This means on the event $E(R)$ , $\\sup _{x \\in L\\cap \\partial B_m} \\mathsf {h}(x) = \\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, \\\\v \\in L \\cap B_m(R), \\mu \\in K_n\\end{array}} \\inf _{w \\in \\mathbb {R}^m} \\Big \\lbrace \\langle x,v\\rangle +\\langle w,G\\mu -v\\rangle \\Big \\rbrace ,$ By the one-sided Gaussian min-max theorem (cf.", "Theorem REF -(1)), for any $z \\in \\mathbb {R}$ and $R,R_1>0$ , we have $&\\operatorname{\\mathbb {P}}\\bigg (\\sup _{x \\in L\\cap \\partial B_m} \\mathsf {h}(x) >z\\bigg )-\\operatorname{\\mathbb {P}}(E(R)^c)\\\\&\\le \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, \\\\v \\in L \\cap B_m(R), \\mu \\in K_n\\end{array}} \\inf _{w \\in B_m(R_1)} \\Big \\lbrace \\langle x,v\\rangle +\\langle w,G\\mu -v\\rangle \\Big \\rbrace >z\\bigg )\\nonumber \\\\&\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, \\\\v \\in L \\cap B_m(R), \\mu \\in K_n\\end{array}} \\inf _{w \\in B_m(R_1)} \\Big \\lbrace \\langle x,v\\rangle -\\langle w,v\\rangle +\\Vert \\mu \\Vert _{}\\langle h,w\\rangle +\\Vert w\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace >z\\bigg )\\nonumber .$ On the other hand, $&\\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, \\\\v \\in L \\cap B_m(R), \\mu \\in K_n\\end{array}} \\inf _{w \\in B_m(R_1)} \\Big \\lbrace \\langle x,v\\rangle -\\langle w,v\\rangle +\\Vert \\mu \\Vert _{}\\langle h,w\\rangle +\\Vert w\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\& = \\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, \\\\v \\in L \\cap B_m(R), \\mu \\in K_n\\end{array}} \\inf _{0\\le \\beta \\le R_1} \\Big \\lbrace \\beta \\cdot \\Big (\\langle g,\\mu \\rangle -\\big \\Vert \\Vert \\mu \\Vert _{}h-v \\big \\Vert _{}\\Big )+\\langle x,v\\rangle \\Big \\rbrace .$ As the above max-min problem must take non-negative cost optimum, and $\\langle x,v\\rangle \\le R$ for $x \\in L\\cap \\partial B_m,v \\in L \\cap B_m(R)$ , it follows that any pair of $(\\mu ,v) $ such that $\\langle g,\\mu \\rangle -\\Vert \\, \\Vert \\mu \\Vert _{}h-v \\Vert _{}<-R/R_1$ is not a feasible 
maximizer of (REF ).", "This means that the supremum of (REF ) can be further restricted as follows: $(\\ref {ineq:support_fcn_proj_sup_upper_2})&=\\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, \\\\v \\in L \\cap B_m(R), \\mu \\in K_n,\\\\ \\langle g,\\mu \\rangle -\\Vert \\,\\Vert \\mu \\Vert _{}h-v \\Vert _{}\\ge -R/R_1\\end{array}} \\inf _{0\\le \\beta \\le R_1} \\Big \\lbrace \\beta \\cdot \\Big (\\langle g,\\mu \\rangle -\\big \\Vert \\Vert \\mu \\Vert _{}h-v \\big \\Vert _{}\\Big )+\\langle x,v\\rangle \\Big \\rbrace \\nonumber \\\\& \\stackrel{(\\ast )}{\\le } \\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, \\\\v \\in L \\cap B_m(R), \\mu \\in K_n\\end{array}} \\langle x,v\\rangle \\,\\, \\hbox{ subject to } \\langle g,\\mu \\rangle -\\big \\Vert \\Vert \\mu \\Vert _{}h-v \\big \\Vert _{}\\ge -R/R_1.$ The inequality $(\\ast )$ in the above display follows by taking $\\beta =0$ in the infimum.", "Using Moreau's theorem (cf.", "Lemma REF ), we may write $h=\\Pi _L(h)+\\Pi _{L^\\circ }(h)$ with $\\langle \\Pi _L(h),\\Pi _{L^\\circ }(h)\\rangle =0$ , so $\\big \\Vert \\Vert \\mu \\Vert _{}h-v \\big \\Vert _{}^2&= \\big \\Vert \\Vert \\mu \\Vert _{}\\Pi _L(h)-v+ \\Vert \\mu \\Vert _{}\\Pi _{L^\\circ }(h) \\big \\Vert _{}^2\\\\& = \\big \\Vert \\Vert \\mu \\Vert _{}\\Pi _L(h)-v\\big \\Vert _{}^2+ \\Vert \\mu \\Vert _{}^2\\cdot \\Vert \\Pi _{L^\\circ }(h)\\Vert _{}^2+2\\langle \\Vert \\mu \\Vert _{}\\Pi _L(h)-v,\\Vert \\mu \\Vert _{}\\Pi _{L^\\circ }(h) \\rangle \\\\&\\stackrel{(\\ast \\ast )}{\\ge } \\big \\Vert \\Vert \\mu \\Vert _{}\\Pi _L(h)-v\\big \\Vert _{}^2+ \\Vert \\mu \\Vert _{}^2\\cdot \\Vert \\Pi _{L^\\circ }(h)\\Vert _{}^2,$ where in the inequality $(\\ast \\ast )$ we used that $\\langle -v,\\Pi _{L^\\circ }(h)\\rangle \\ge 0$ .", "Combining the above inequality with (REF )-(REF ), we then have $\\operatorname{\\mathbb {P}}\\bigg (\\sup _{x \\in L\\cap \\partial B_m} \\mathsf {h}(x) >z\\bigg )\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, v \\in {Q}(R,R_1) \\end{array} } \\langle x,v\\rangle >z\\bigg )+\\operatorname{\\mathbb {P}}(E(R)^c),$ where ${Q}(R,R_1)&\\equiv {Q}(R,R_1;g,h)\\equiv \\bigg \\lbrace v\\in L\\cap B_m(R):\\exists \\mu \\in K_n,\\\\&\\qquad \\qquad \\hbox{ s.t.", "}\\langle g,\\mu \\rangle \\ge \\sqrt{\\big \\Vert \\Vert \\mu \\Vert _{}\\Pi _L(h)-v\\big \\Vert _{}^2+ \\Vert \\mu \\Vert _{}^2\\Vert \\Pi _{L^\\circ }(h)\\Vert _{}^2}-\\frac{R}{R_1}\\bigg \\rbrace .$ Clearly ${Q}(R,R_1)$ is compact.", "Note that $R_1 \\mapsto {Q}(R,R_1)$ is non-increasing as $R_1\\uparrow \\infty $ , so with $\\overline{{Q}}(R)\\equiv \\cap _{R_1>0}{Q}(R,R_1)\\subset {Q}(R,\\infty )$ , by using Lemma REF we have $\\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, v \\in {Q}(R,R_1) \\end{array} } \\langle x,v\\rangle \\downarrow \\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, v \\in \\overline{{Q}}(R) \\end{array} } \\langle x,v\\rangle \\le \\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, v \\in {Q}(R,\\infty ) \\end{array} } \\langle x,v\\rangle .$ Now taking the limit $R_1\\uparrow \\infty $ on the right hand side of (REF ), we have for all $z\\in \\mathbb {R}$ and $R>0$ , $&\\operatorname{\\mathbb {P}}\\bigg (\\sup _{x \\in L\\cap \\partial B_m} \\mathsf {h}(x) >z\\bigg )\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\begin{array}{c}x \\in L\\cap \\partial B_m, v \\in {Q}(R,\\infty ) \\end{array} } \\langle x,v\\rangle \\ge z\\bigg )+\\operatorname{\\mathbb {P}}(E(R)^c).$ By Gaussian concentration 
in Proposition REF , there exists some universal constant $C_0>0$ such that for all $t\\ge 1$ , on an event $E_0(t)$ with probability at least $1-e^{-t}$ , $\\big \\vert \\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle -\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)\\big \\vert \\vee \\big \\vert \\Vert \\Pi _{L^\\circ }(h)\\Vert _{}- \\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)\\big \\vert \\le C_0\\sqrt{t}.$", "Here, for the concentration claim concerning $\\Vert \\Pi _{L^\\circ }(h)\\Vert _{}$ , we used (i) the simple fact that $\\Vert \\Pi _{L^\\circ }(h)\\Vert _{}=\\sup _{v \\in L^\\circ \\cap B_m}\\langle h,v\\rangle $ (see, e.g., [4] for a proof of this fact), and (ii) Proposition REF -(3) to replace $\\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)$ with $\\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)$ using the condition that $L^\\circ $ is non-trivial.", "We shall now show that on the event $E_0(t)$ defined in (REF ) above, $\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)<\\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)-2C_0\\sqrt{t}\\,\\, \\Rightarrow \\,\\, {Q}(R,\\infty )=\\lbrace 0\\rbrace .$", "To see this, if there exists some $v \\in {Q}(R,\\infty )$ with $v\\ne 0$ , then there must exist some $\\mu \\in K_n\\setminus \\lbrace 0\\rbrace $ such that $\\langle g,\\mu \\rangle \\ge \\sqrt{\\big \\Vert \\Vert \\mu \\Vert _{}\\Pi _L(h)-v\\big \\Vert _{}^2+ \\Vert \\mu \\Vert _{}^2\\Vert \\Pi _{L^\\circ }(h)\\Vert _{}^2}\\ge \\Vert \\mu \\Vert _{} \\Vert \\Pi _{L^\\circ }(h)\\Vert _{}.$", "This means that $\\Big \\lbrace {Q}(R,\\infty )\\setminus \\lbrace 0\\rbrace \\ne \\emptyset \\Big \\rbrace \\cap E_0(t)&\\subset \\Big \\lbrace \\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle \\ge \\Vert \\Pi _{L^\\circ }(h)\\Vert _{}\\Big \\rbrace \\cap E_0(t)\\\\&\\subset \\Big \\lbrace \\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)\\ge \\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)-2C_0\\sqrt{t}\\Big \\rbrace \\cap E_0(t),$ proving the claim (REF ).", "Now using (REF ), for any $z>0$ , we have $\\operatorname{\\mathbb {P}}\\bigg (\\sup _{x \\in L\\cap \\partial B_m} \\mathsf {h}(x) >z\\bigg )\\le \\mathbf {1}\\Big (\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)\\ge \\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)-2C_0\\sqrt{t}\\Big )+2e^{-t}+\\operatorname{\\mathbb {P}}(E(R)^c).$", "Finally taking $z\\downarrow 0$ and $R\\uparrow \\infty $ to conclude, upon noting that (i) $\\sup _{x \\in L\\cap \\partial B_m} \\mathsf {h}(x) \\ne 0$ is equivalent to $\\sup _{x \\in L\\cap \\partial B_m} \\mathsf {h}(x) > 0$ , and (ii) the quantities $\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n), \\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)$ can be replaced by $\\sqrt{\\delta (K)}$ and $\\sqrt{\\delta (L^\\circ )}$ for non-trivial $K,L^\\circ $ via an application of Proposition REF -(3)(4).", "Below we shall give a different proof for a `pointwise' version of Proposition REF : Under the same setup as in Proposition REF , we will show that there exists some universal constant $C>0$ such that for any $x \\in L\\cap \\partial B_m$ and $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\Big (\\mathsf {h}_{L\\cap G K_n}(x)\\ne 0\\Big )\\le \\mathbf {1}\\Big (\\sqrt{\\delta (K)}> \\sqrt{\\delta (L^\\circ )}-C\\sqrt{t}\\Big )+e^{-t}.$", "While the conclusion (REF ) per se is weaker, the proof below exploits a different max-min representation of the support function $\\mathsf {h}_{L\\cap G K_n}$ from the one used in (REF ).", "In 
particular, using Lemma REF , we may write $\\mathsf {h}_{L\\cap G K_n}(x) &= \\sup _{\\mu \\in K_n, G\\mu \\in L}\\langle x,G\\mu \\rangle = \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace \\langle x,G\\mu \\rangle -\\langle v,G\\mu \\rangle \\Big \\rbrace .$ This representation (REF ) will be more convenient to work with in the `large $\\delta (K)$ regime' in Proposition REF below, as well as in the proofs of Theorems REF and REF in Sections and ahead.", "We again write $\\mathsf {h}\\equiv \\mathsf {h}_{L\\cap G K_n}$ for notational convenience in the proof.", "By the one-sided Gaussian min-max theorem (cf.", "Theorem REF -(1)), for any $z \\in \\mathbb {R}$ and $R>0$ , $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)>z\\big )&\\le \\operatorname{\\mathbb {P}}\\bigg ( \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R)} \\langle x-v,G\\mu \\rangle >z \\bigg )\\nonumber \\\\&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R)} \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace >z \\bigg ).$ Using that $\\lbrace v \\in L^\\circ : \\Vert v-x\\Vert _{}\\le R-1\\rbrace \\subset L^\\circ \\cap B_m(R)$ , for any $R>2$ , with $L^\\circ (\\beta ;x)\\equiv \\lbrace (v-x)/\\Vert v-x\\Vert _{}:v \\in L^\\circ , \\Vert v-x\\Vert _{}=\\beta \\rbrace \\subset \\partial B_m,$ we have $& \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R)} \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\& \\le \\sup _{\\mu \\in K_n} \\inf _{1\\le \\beta \\le R-1} \\beta \\bigg \\lbrace -\\Vert \\mu \\Vert _{}\\sup _{w \\in L^\\circ (\\beta ;x)} \\langle h,w\\rangle +\\langle g,\\mu \\rangle \\bigg \\rbrace .$ Now taking $\\beta =R-1$ in the infimum, and using the cone property of $K$ , we have $& \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R)} \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\&\\le (R-1)\\cdot \\sup _{\\mu \\in K_n}\\bigg (\\langle g,\\mu \\rangle - \\Vert \\mu \\Vert _{}\\sup _{w \\in L^\\circ (R-1;x)} \\langle h,w\\rangle \\bigg )\\nonumber \\\\&=(R-1)\\cdot \\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle -\\sup _{w \\in L^\\circ (R-1;x)} \\langle h,w\\rangle \\bigg )_+.$ By Gaussian concentration in Proposition REF , for $t\\ge 1$ , on an event $E_R(t)$ with probability $1-e^{-t}$ , $&\\big \\vert \\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle -\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)\\big \\vert \\nonumber \\\\&\\qquad \\vee \\big \\vert \\sup _{w \\in L^\\circ (R-1;x)} \\langle h,w\\rangle -\\operatorname{\\mathfrak {w}}\\big (L^\\circ (R-1;x)\\big )\\big \\vert \\le C\\sqrt{t}.$ Combined with (REF ), on the event $E_R(t)$ we have, $& \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R)} \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\&\\le (R-1) \\cdot \\Big (\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)- \\operatorname{\\mathfrak {w}}\\big (L^\\circ (R-1;x)\\big )+C\\sqrt{t}\\Big )_+.$ On the other hand, as for $\\beta >1$ , $&\\big \\vert \\sup _{w \\in L^\\circ (\\beta ;x)} \\langle h,w\\rangle - \\sup _{v \\in L^\\circ \\cap \\partial B_m} \\langle h,v\\rangle \\big \\vert \\le \\Vert h\\Vert _{}\\sup _{v 
\\in L^\\circ , \\Vert v\\Vert _{}\\ge \\beta -1}\\bigg \\Vert \\frac{v-x}{\\Vert v-x\\Vert _{}}-\\frac{v}{\\Vert v\\Vert _{}} \\bigg \\Vert _{}\\le \\frac{2\\Vert h\\Vert _{}}{\\beta -1},$ we have almost surely $\\lim _{\\beta \\uparrow \\infty } \\sup _{w \\in L^\\circ (\\beta ;x)} \\langle h,w\\rangle = \\sup _{v \\in L^\\circ \\cap \\partial B_m} \\langle h,v\\rangle $ , and therefore by integrability of the suprema of Gaussian processes, $\\lim _{\\beta \\uparrow \\infty } \\operatorname{\\mathfrak {w}}\\big (L^\\circ (\\beta ;x) \\big )= \\operatorname{\\mathfrak {w}}\\big (L^\\circ \\cap \\partial B_m\\big ).$ This means that for $t\\ge 1$ , we may find some large enough deterministic $R_t>0$ such that for all $R\\ge R_t$ , it holds that $\\vert \\operatorname{\\mathfrak {w}}\\big (L^\\circ (R-1;x) \\big )-\\operatorname{\\mathfrak {w}}\\big (L^\\circ \\cap \\partial B_m\\big )\\vert \\le \\sqrt{t}.$ Combined with (REF ), for $R\\ge R_t$ , on the event $E_R(t)$ , $& \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R)} \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\&\\le (R-1) \\cdot \\Big (\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)- \\operatorname{\\mathfrak {w}}\\big (L^\\circ \\cap \\partial B_m\\big )+C\\sqrt{t}\\Big )_+.$ Now using (REF ) with $z=0$ , for any $t\\ge 1$ and choosing $R\\ge R_t$ , $&\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)>z=0\\big )\\\\&\\le \\mathbf {1}\\bigg ((R-1) \\cdot \\Big (\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)- \\operatorname{\\mathfrak {w}}\\big (L^\\circ \\cap \\partial B_m\\big )+C\\sqrt{t}\\Big )_+>0\\bigg )+2\\operatorname{\\mathbb {P}}(E_R(t)^c)\\\\& \\le \\mathbf {1}\\Big (\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)- \\operatorname{\\mathfrak {w}}\\big (L^\\circ \\cap \\partial B_m\\big )+C\\sqrt{t}>0\\Big )+2e^{-t}.$ Finally using $\\mathsf {h}(x)>0\\Leftrightarrow \\mathsf {h}(x)\\ne 0$ , and Proposition REF -(3)(4) to conclude via adjusting constants." 
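Both arguments above reduce the `small $\\delta (K)$ ' regime to a comparison of the two scalars $\\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle $ and $\\Vert \\Pi _{L^\\circ }(h)\\Vert _{}$ , which concentrate around $\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)\\approx \\sqrt{\\delta (K)}$ and $\\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)\\approx \\sqrt{\\delta (L^\\circ )}$ respectively. The following minimal Python sketch (illustrative only; the sizes are arbitrary, and $K$ , $L$ are taken to be the nonnegative orthant and a coordinate subspace so that both quantities have closed forms) makes this comparison concrete.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, ell, reps = 200, 150, 100, 2000   # delta(K) = n/2 = 100, delta(L^polar) = m - ell = 50

# K = nonnegative orthant in R^n: sup over K cap bd(B_n) of <g, mu> equals ||g_+||
#     (whenever g has at least one positive coordinate, which holds for all samples here).
# L = span of the first ell coordinates of R^m: ||Pi_{L^polar}(h)|| = ||h[ell:]||.
g = rng.standard_normal((reps, n))
h = rng.standard_normal((reps, m))
lhs = np.linalg.norm(np.clip(g, 0.0, None), axis=1)
rhs = np.linalg.norm(h[:, ell:], axis=1)

print(lhs.mean(), np.sqrt(n / 2))        # ~ sqrt(delta(K))
print(rhs.mean(), np.sqrt(m - ell))      # ~ sqrt(delta(L^polar))
print(np.mean(lhs >= rhs))               # essentially 0 or 1 away from delta(K) = delta(L^polar)
\\end{verbatim}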
], [ "Proof of Theorem ", "The following proposition shows that for `large' values of $\\delta (K)$ , the support function of $L\\cap G(K\\cap B_n)$ will be non-trivial with high probability.", "Proposition 3.2 Suppose that $K \\subset \\mathbb {R}^n$ and $L\\subset \\mathbb {R}^m$ are closed convex cones with $K\\ne \\lbrace 0\\rbrace , L\\notin \\lbrace \\lbrace 0\\rbrace ,\\mathbb {R}^m\\rbrace $ .", "Fix $x \\in L\\cap \\partial B_m$ .", "Then there exists some universal constant $C>0$ such that for any $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\Big (\\mathsf {h}_{L\\cap G K_n}(x)= 0\\Big )\\le \\mathbf {1}\\Big (\\sqrt{\\delta (K)}< \\sqrt{\\delta (L^\\circ )}+C\\sqrt{t}\\Big )+e^{-t}.$ Here $K_n\\equiv K\\cap B_n$ .", "We shall again drop the subscript in $\\mathsf {h}_{L\\cap G K_n}$ in the proof.", "Recall the representation of $\\mathsf {h}$ in (REF ).", "We shall define a surrogate function of $\\mathsf {h}$ as follows: Let for $\\varepsilon >0$ $\\mathsf {h}_{\\varepsilon }(x)&\\equiv \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\bigg \\lbrace \\langle x,G\\mu \\rangle -\\langle v,G\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace \\\\&\\stackrel{(\\ast )}{=}\\sup _{\\mu \\in K_n}\\bigg \\lbrace \\langle x,G\\mu \\rangle -\\frac{ 1}{2\\varepsilon }\\bigg (\\sup _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\bigg )^2\\bigg \\rbrace .$ Here the last equality $(\\ast )$ follows as $&\\inf _{v \\in L^\\circ }\\bigg \\lbrace -\\langle v,G\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace = -\\sup _{\\beta \\ge 0} \\bigg \\lbrace \\sup _{v \\in L^\\circ ,\\Vert v\\Vert _{}=\\beta }\\langle v,G\\mu \\rangle -\\frac{\\varepsilon }{2}\\cdot \\beta ^2\\bigg \\rbrace \\\\& = - \\max \\bigg \\lbrace 0, \\sup _{\\beta >0} \\bigg (\\beta \\sup _{v \\in L^\\circ \\cap \\partial B_m}\\langle v,G\\mu \\rangle - \\frac{\\varepsilon }{2}\\cdot \\beta ^2\\bigg ) \\bigg \\rbrace \\\\&= - \\frac{1}{2\\varepsilon } \\bigg (0\\vee \\sup _{v \\in L^\\circ \\cap \\partial B_m}\\langle v,G\\mu \\rangle \\bigg )^2=-\\frac{ 1}{2\\varepsilon }\\bigg (\\sup _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\bigg )^2.$ The main purpose of $\\mathsf {h}_\\varepsilon $ , as will be clear in Step 2 below, is to induce automatic localization over $\\inf _{v \\in L^\\circ }$ that facilitates applications of the Gaussian min-max theorem.", "By definition of $\\mathsf {h}_\\varepsilon $ , clearly $\\mathsf {h}(x)\\le \\mathsf {h}_{\\varepsilon }(x)$ .", "(Step 1).", "We shall first prove $\\lim _{\\varepsilon \\downarrow 0} \\mathsf {h}_\\varepsilon (x)=\\mathsf {h}(x).$ Let $E_0\\equiv \\lbrace \\Vert G\\Vert _{\\operatorname{op}}>0\\rbrace $ .", "Then $\\operatorname{\\mathbb {P}}(E_0)=1$ .", "Note that for any $\\mu \\in K_n$ such that $\\big (\\sup _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\big )^2>4\\Vert G\\Vert _{\\operatorname{op}}\\varepsilon $ , we have the simple estimate $\\langle x,G\\mu \\rangle -\\frac{1}{2\\varepsilon }\\bigg (\\sup \\limits _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\bigg )^2<-\\Vert G\\Vert _{\\operatorname{op}},$ so on the event $E_0$ , such $\\mu $ 's are not feasible maximizers in the definition of $\\mathsf {h}_\\varepsilon $ , as $\\mathsf {h}_\\varepsilon (x)\\ge 0$ .", "This means on the event $E_0$ , $\\mathsf {h}(x)\\le \\mathsf {h}_{\\varepsilon }(x) &= \\sup _{\\mu \\in K_n, (\\sup \\limits _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle )^2 \\le 4\\Vert G\\Vert 
_{\\operatorname{op}}\\varepsilon }\\bigg \\lbrace \\langle x,G\\mu \\rangle -\\frac{ 1}{2\\varepsilon }\\bigg (\\sup _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\bigg )^2\\bigg \\rbrace \\nonumber \\\\&\\le \\sup _{\\mu \\in K_n, (\\sup \\limits _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle )^2 \\le 4\\Vert G\\Vert _{\\operatorname{op}}\\varepsilon } \\langle x,G\\mu \\rangle .$ Note that $S_\\varepsilon \\equiv \\bigg \\lbrace \\mu \\in K_n: \\bigg (\\sup _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\bigg )^2\\le 4\\Vert G\\Vert _{\\operatorname{op}}\\varepsilon \\bigg \\rbrace $ is a sequence of non-increasing sets as $\\varepsilon \\downarrow 0$ , and $&\\cap _{\\varepsilon >0}S_\\varepsilon \\subset \\bigg \\lbrace \\mu \\in K_n: \\bigg (\\sup _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\bigg )^2=0\\bigg \\rbrace = \\big \\lbrace \\mu \\in K_n: G\\mu \\in L\\big \\rbrace ,$ so using (REF ) and Lemma REF , we have on the event $E_0$ , $\\mathsf {h}(x)\\le \\operatornamewithlimits{\\underline{lim}}_{\\varepsilon \\downarrow 0} \\mathsf {h}_\\varepsilon (x)\\le \\operatornamewithlimits{\\overline{lim}}_{\\varepsilon \\downarrow 0} \\mathsf {h}_\\varepsilon (x) \\le \\sup _{\\mu \\in K_n: G\\mu \\in L}\\langle x,G\\mu \\rangle = \\mathsf {h}(x).$ On $E_0^c$ , $\\Vert G\\Vert _{\\operatorname{op}}=0$ so $G=0$ and $\\mathsf {h}(x)=\\mathsf {h}_\\varepsilon (x)=0$ are trivial.", "The claim (REF ) is proven.", "(Step 2).", "Next we shall use the surrogate function $\\mathsf {h}_\\varepsilon $ and the claim (REF ) in Step 1 to prove that for any $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)\\le z\\big )\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\le z\\bigg ).$ For $R>0$ , define the event $E_1(R)\\equiv \\big \\lbrace \\Vert G\\Vert _{\\operatorname{op}}\\le R\\big \\rbrace \\cap \\big \\lbrace \\Vert g\\Vert _{}+\\Vert h\\Vert _{}\\le R\\big \\rbrace .$ On the event $E_1(R)$ we may restrict the minimum over $v \\in L^\\circ $ to $v \\in L^\\circ \\cap B_m(R/\\varepsilon )$ , i.e., on the event $E_1(R)$ , $\\mathsf {h}_{\\varepsilon }(x)&\\equiv \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x-v,G\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace .$ By the convex Gaussian min-max theorem (cf.", "Theorem REF -(2)), for any $z_0>z$ , $&\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}_{\\varepsilon }(x)< z_0\\big )-\\operatorname{\\mathbb {P}}(E_1(R)^c)\\\\&\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2 \\bigg \\rbrace < z_0\\bigg )\\\\&\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace < z_0\\bigg ).$ Now taking $R\\uparrow \\infty $ and $\\varepsilon \\downarrow 0$ , and finally $z_0\\downarrow z$ proves the claim (REF ) upon using (REF ).", "(Step 3).", "In this step we prove the following key lower bound: $&\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle 
h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\& \\ge \\frac{1}{2} \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}- \\vert \\langle h,x\\rangle \\vert .$ First note that $& \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\&\\ge \\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace - \\vert \\langle h,x\\rangle \\vert \\nonumber \\\\& = \\sup _{\\mu \\in K_n} \\inf _{\\beta \\ge 0} \\inf _{v \\in L^\\circ ,\\Vert v\\Vert _{}=\\beta } \\bigg \\lbrace - \\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\sqrt{\\beta ^2-2\\langle v,x\\rangle +1}\\cdot \\langle g,\\mu \\rangle \\bigg \\rbrace - \\vert \\langle h,x\\rangle \\vert \\nonumber \\\\& = \\sup _{\\mu \\in K_n} \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0} \\bigg \\lbrace - \\beta \\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\sqrt{\\beta ^2+2\\beta \\langle -v,x\\rangle +1}\\cdot \\langle g,\\mu \\rangle \\bigg \\rbrace - \\vert \\langle h,x\\rangle \\vert \\nonumber \\\\& \\equiv \\sup _{\\mu \\in K_n} \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v) - \\vert \\langle h,x\\rangle \\vert .$ We claim that any $\\mu \\in K_n$ such that $\\langle g,\\mu \\rangle <0$ is not a feasible solution to $\\sup _{\\mu \\in K_n} \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v)$ .", "To see this, for any such $\\mu \\in K_n$ , we have $\\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v)< \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0} \\big \\lbrace -\\beta \\Vert \\mu \\Vert _{}\\langle h,v\\rangle \\big \\rbrace \\le 0,$ whereas the cost optimum $\\sup _{\\mu \\in K_n} \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v)\\ge 0$ due to $0 \\in K_n$ .", "This proves that $\\sup _{\\mu \\in K_n} \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v) = \\sup _{\\mu \\in K_n,\\langle g,\\mu \\rangle \\ge 0} \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v).$ As the derivative of $\\beta \\mapsto \\mathsf {P}(\\beta ;\\mu ,v)$ is given by $\\mathsf {P}^{\\prime }(\\beta ;\\mu ,v)& = -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle + \\langle g,\\mu \\rangle \\cdot \\frac{\\beta +\\langle -v,x\\rangle }{ \\sqrt{\\beta ^2+2\\beta \\langle -v,x\\rangle +1} },$ for any $\\mu \\in K_n$ with $\\langle g,\\mu \\rangle \\ge 0$ , the map $\\beta \\mapsto \\mathsf {P}^{\\prime }(\\beta ;\\mu ,v)$ is non-decreasing.", "Furthermore, for such $\\mu $ 's and any $v \\in L^\\circ \\cap \\partial B_m, x \\in L\\cap \\partial B_m$ , $\\mathsf {P}^{\\prime }(0;\\mu ,v)& = -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle + \\langle g,\\mu \\rangle \\cdot \\langle -v,x\\rangle ,\\\\\\mathsf {P}^{\\prime }(\\infty ;\\mu ,v)& = -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle + \\langle g,\\mu \\rangle .$ Fix $\\mu \\in K_n$ with $\\langle g,\\mu \\rangle \\ge 0$ and $\\inf _{v \\in L^\\circ \\cap B_m} \\mathsf {P}^{\\prime }(\\infty ;\\mu ,v) = -\\Vert \\mu 
\\Vert _{}\\sup _{v \\in L^\\circ \\cap B_m}\\langle h,v\\rangle +\\langle g,\\mu \\rangle \\ge 0.", "$ We consider two cases below.", "Case 1.", "Suppose $v \\in L^\\circ \\cap \\partial B_m$ is such that $\\mathsf {P}^{\\prime }(0;\\mu ,v)\\ge 0$ .", "Then $\\inf _{\\begin{array}{c} v \\in L^\\circ \\cap \\partial B_m,\\\\ \\mathsf {P}^{\\prime }(0;\\mu ,v)\\ge 0\\end{array}} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v) = \\inf _{\\begin{array}{c} v \\in L^\\circ \\cap \\partial B_m,\\\\ \\mathsf {P}^{\\prime }(0;\\mu ,v)\\ge 0\\end{array}} \\mathsf {P}(0;\\mu ,v) = \\langle g,\\mu \\rangle .$ Case 2.", "Suppose $v \\in L^\\circ \\cap \\partial B_m$ is such that $\\mathsf {P}^{\\prime }(0;\\mu ,v)\\le 0$ and $\\mathsf {P}^{\\prime }(\\infty ;\\mu ,v)\\ge 0$ .", "Then $\\langle h,v\\rangle \\ge 0$ and $&\\inf _{\\begin{array}{c} v \\in L^\\circ \\cap \\partial B_m,\\\\ \\mathsf {P}^{\\prime }(0;\\mu ,v)\\le 0, \\mathsf {P}^{\\prime }(\\infty ;\\mu ,v)\\ge 0 \\end{array}} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v) \\nonumber \\\\& =\\inf _{\\begin{array}{c} v \\in L^\\circ \\cap \\partial B_m,\\\\ \\mathsf {P}^{\\prime }(0;\\mu ,v)\\le 0, \\mathsf {P}^{\\prime }(\\infty ;\\mu ,v)\\ge 0 \\end{array}} \\bigg \\lbrace \\sqrt{\\big (1-\\langle -v,x\\rangle ^2\\big )\\big (\\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\langle h,v\\rangle ^2\\big )} +\\Vert \\mu \\Vert _{}\\langle h,v\\rangle \\langle -v,x\\rangle \\bigg \\rbrace \\nonumber \\\\&\\ge \\inf _{\\begin{array}{c} v \\in L^\\circ \\cap \\partial B_m,\\\\ \\mathsf {P}^{\\prime }(0;\\mu ,v)\\le 0, \\mathsf {P}^{\\prime }(\\infty ;\\mu ,v)\\ge 0 \\end{array}} \\bigg \\lbrace \\sqrt{\\big (1-\\langle -v,x\\rangle ^2\\big )\\big (\\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\langle h,v\\rangle ^2\\big )} +\\langle g,\\mu \\rangle \\langle -v,x\\rangle ^2\\bigg \\rbrace \\nonumber \\\\&\\ge \\inf _{\\begin{array}{c} v \\in L^\\circ \\cap \\partial B_m,\\\\ \\mathsf {P}^{\\prime }(0;\\mu ,v)\\le 0, \\mathsf {P}^{\\prime }(\\infty ;\\mu ,v)\\ge 0 \\end{array}}\\inf _{\\alpha \\in [0,1]} \\bigg \\lbrace \\sqrt{\\big (1-\\alpha \\big )\\big (\\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\langle h,v\\rangle ^2\\big )} +\\langle g,\\mu \\rangle \\alpha \\bigg \\rbrace \\nonumber \\\\&\\stackrel{(\\ast )}{\\ge } \\frac{1}{2} \\inf _{v \\in L^\\circ \\cap \\partial B_m, \\langle h,v\\rangle \\ge 0} \\min \\bigg \\lbrace \\sqrt{\\big (\\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\langle h,v\\rangle ^2\\big )_+}, \\langle g,\\mu \\rangle \\bigg \\rbrace \\nonumber \\\\& = \\frac{1}{2} \\cdot \\bigg \\lbrace \\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}.$ Here the first identity follows by Lemma REF below.", "The inequality in $(\\ast )$ follows by the following simple lower bound: for any $M_1,M_2\\ge 0$ , $&\\inf _{\\alpha \\in [0,1]} \\big \\lbrace \\sqrt{1-\\alpha } M_1+\\alpha M_2\\big \\rbrace \\\\& = \\min \\bigg \\lbrace \\inf _{\\alpha \\in [0,1/2]} \\big (\\sqrt{1-\\alpha } M_1+\\alpha M_2\\big ), \\inf _{\\alpha \\in [1/2,1]} \\big (\\sqrt{1-\\alpha } M_1+\\alpha M_2\\big ) \\bigg \\rbrace \\\\&\\ge \\min \\big \\lbrace M_1/\\sqrt{2}, M_2/2\\big \\rbrace \\ge (M_1\\wedge M_2)/2.$ The last identity in (REF ) uses the fact that $\\sup _{v \\in L^\\circ \\cap \\partial B_m, \\langle h,v\\rangle \\ge 0} \\langle h,v\\rangle ^2 &= \\bigg (\\sup _{v \\in L^\\circ \\cap \\partial B_m} \\big (0\\vee 
\\langle h,v\\rangle \\big )\\bigg )^2 = \\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2.$ Combining (REF )-(REF ), we have $&\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\nonumber \\\\&\\ge \\sup _{\\begin{array}{c}\\mu \\in K_n,\\langle g,\\mu \\rangle \\ge 0,\\\\ \\inf _{v \\in L^\\circ \\cap B_m} \\mathsf {P}^{\\prime }(\\infty ;\\mu ,v)\\ge 0\\end{array}} \\inf _{v \\in L^\\circ \\cap \\partial B_m} \\inf _{\\beta \\ge 0}\\mathsf {P}(\\beta ;\\mu ,v)- \\vert \\langle h,x\\rangle \\vert \\nonumber \\\\& \\ge \\sup _{\\begin{array}{c}\\mu \\in K_n,\\langle g,\\mu \\rangle \\ge 0,\\\\ \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{}\\sup \\limits _{v \\in L^\\circ \\cap B_m}\\langle h,v\\rangle \\end{array}} \\min \\bigg \\lbrace \\langle g,\\mu \\rangle ,\\frac{1}{2} \\bigg [\\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg ]_+^{1/2}\\bigg \\rbrace - \\vert \\langle h,x\\rangle \\vert \\nonumber \\\\&\\ge \\frac{1}{2} \\bigg (\\sup _{\\begin{array}{c}\\mu \\in K_n, \\langle g,\\mu \\rangle \\ge 0 \\end{array} }\\bigg \\lbrace \\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+\\bigg )^{1/2}- \\vert \\langle h,x\\rangle \\vert \\nonumber \\\\& = \\frac{1}{2} \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}- \\vert \\langle h,x\\rangle \\vert .$ Here the last equality follows as for any $M\\ge 0$ , $&\\sup _{\\begin{array}{c}\\mu \\in K_n, \\langle g,\\mu \\rangle \\ge 0 \\end{array} }\\big \\lbrace \\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\cdot M\\big \\rbrace _+= \\sup _{0<\\beta \\le 1} \\beta ^2\\cdot \\bigg \\lbrace \\sup _{\\mu \\in K\\cap \\partial B_n,\\langle g,\\mu \\rangle \\ge 0} \\langle g,\\mu \\rangle ^2-M\\bigg \\rbrace _+\\\\& = \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n} \\big (0\\vee \\langle g,\\mu \\rangle \\big )\\bigg )^2-M\\bigg \\rbrace _+ = \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-M\\bigg \\rbrace _+.$ This proves the claimed inequality (REF ).", "(Step 4).", "We now give a probabilistic lower bound for the right hand side of (REF ).", "By Gaussian concentration as in Proposition REF , there exists some universal constant $C_0>0$ such that for $t\\ge 1$ , on an event $E_0(t)$ with probability at least $1-e^{-t}$ , $\\big \\vert \\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle -\\operatorname{\\mathfrak {w}}(K\\cap B_n)\\big \\vert \\vee \\big \\vert \\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle -\\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)\\big \\vert \\le C_0\\sqrt{t}.$ Consequently, on the event $E_0(t)$ , $&\\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}\\\\& = \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle -\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )_+^{1/2}\\cdot \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle +\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )_+^{1/2}\\\\& \\ge \\Big 
(\\operatorname{\\mathfrak {w}}(K\\cap B_n)-\\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)-C_0\\sqrt{t}\\Big )_+^{1/2}\\cdot \\Big (\\operatorname{\\mathfrak {w}}(K\\cap B_n)+\\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)-C_0\\sqrt{t}\\Big )_+^{1/2}.$ So if $\\operatorname{\\mathfrak {w}}(K\\cap B_n)\\ge \\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)+2C_0\\sqrt{t}$ , on the event $E_0(t)$ , $&\\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}\\nonumber \\\\&\\ge \\big (C_0\\sqrt{t}\\big )^{1/2}\\cdot \\big (\\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)+C_0\\sqrt{t}\\big )_+^{1/2} \\ge C_0\\sqrt{t}.$ Using the claimed inequality (REF ), there exists some universal constant $C>0$ such that if $\\operatorname{\\mathfrak {w}}(K\\cap B_n)\\ge \\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)+C\\sqrt{t}$ , on the event $E_0(t)$ , $&\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\ge C^{-1}\\sqrt{t} - \\vert \\langle h,x\\rangle \\vert .$ Combined with the claimed inequality (REF ) for $z=0$ , for any $t\\ge 1$ , if $\\operatorname{\\mathfrak {w}}(K\\cap B_n)\\ge \\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)+C\\sqrt{t}$ , $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)\\le 0\\big )&\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\le 0\\bigg )\\\\&\\le 2\\operatorname{\\mathbb {P}}\\big (\\vert \\langle h,x\\rangle \\vert >\\sqrt{t}/C\\big )+2\\operatorname{\\mathbb {P}}(E_0(t)^c)\\le C e^{-t/C}.$ The claim now follows by adjusting constants and using Proposition REF -(4).", "The proof of Proposition REF above makes use of the following lemma.", "Lemma 3.3 Let $a=(a_1,a_2,a_3)^\\top \\in \\mathbb {R}_{\\ge 0}^3$ with $a_2 \\in [0,1]$ , and let $\\mathsf {P}_a: \\mathbb {R}_{\\ge 0} \\rightarrow \\mathbb {R}$ be defined by $\\mathsf {P}_{a}(\\beta ) \\equiv - a_1\\cdot \\beta +a_3\\cdot \\sqrt{\\beta ^2+2a_2 \\beta +1}.$ Then its derivative $\\mathsf {P}_a^{\\prime }(\\beta )& = -a_1+a_3\\cdot \\frac{\\beta +a_2}{ \\sqrt{\\beta ^2+2a_2\\beta +1}}$ is non-decreasing on $[0,\\infty )$ , and $\\inf _{\\beta \\ge 0} \\mathsf {P}_a(\\beta )={\\left\\lbrace \\begin{array}{ll}a_3, & a_1< a_2a_3,\\\\\\sqrt{(a_3^2-a_1^2)(1-a_2^2)}+a_1a_2, & a_2a_3\\le a_1\\le a_3,\\\\-\\infty , & a_1>a_3.\\end{array}\\right.", "}$ The derivative $\\mathsf {P}_a^{\\prime }$ may be computed directly.", "As $a_2 \\in [0,1]$ , $\\mathsf {P}_a^{\\prime }$ is non-decreasing, and we have $\\mathsf {P}_a^{\\prime }(0) = a_2a_3-a_1, \\, \\lim _{\\beta \\uparrow \\infty }\\mathsf {P}_a^{\\prime }(\\beta ) = a_3-a_1.$ Case 1.", "Suppose $\\mathsf {P}_a^{\\prime }(0)\\ge 0$ ; equivalently $a_2a_3\\ge a_1$ .", "Then $\\mathsf {P}_a^{\\prime }\\ge 0$ globally on $[0,\\infty )$ and therefore $\\mathsf {P}_a$ is non-decreasing.", "This means $\\inf _{\\beta \\ge 0} \\mathsf {P}_a(\\beta ) = \\mathsf {P}_a(0) = a_3.$ Case 2.", "Suppose $\\mathsf {P}_a^{\\prime }(0)< 0$ and $\\mathsf {P}_a^{\\prime }(\\infty )> 0$ ; equivalently $a_2a_3<a_1<a_3$ .", "To solve the equation $\\mathsf {P}_a^{\\prime }(\\beta _\\ast )=0$ over $\\beta _\\ast \\ge 0$ , it is equivalently to solve $&\\beta _\\ast ^2+2a_2 
\\beta _\\ast +\\frac{a_2^2a_3^2-a_1^2}{a_3^2-a_1^2} = 0, \\beta _\\ast \\ge 0\\, \\Leftrightarrow \\, \\beta _\\ast = -a_2 + \\sqrt{ \\frac{a_1^2(1-a_2^2)}{a_3^2-a_1^2} }.$ Consequently, with some calculations, $\\inf _{\\beta \\ge 0} \\mathsf {P}_a(\\beta ) = \\mathsf {P}_a(\\beta _\\ast )=\\sqrt{(a_3^2-a_1^2)(1-a_2^2)}+a_1a_2.$ Case 3.", "Suppose $\\mathsf {P}_a^{\\prime }(\\infty )\\le 0$ ; equivalently $a_1\\ge a_3$ .", "Then $\\mathsf {P}_a^{\\prime }\\le 0$ globally on $[0,\\infty )$ so $\\mathsf {P}_a$ is non-increasing on $[0,\\infty )$ .", "This means $\\inf _{\\beta \\ge 0} \\mathsf {P}_a(\\beta ) &= \\lim _{\\beta \\uparrow \\infty } \\mathsf {P}_a(\\beta ) \\\\&= \\lim _{\\beta \\uparrow \\infty } \\beta \\bigg (-a_1+ a_3\\sqrt{1+\\frac{2a_2}{\\beta }+\\frac{1}{\\beta ^2} }\\bigg )={\\left\\lbrace \\begin{array}{ll}a_1a_2, & a_1=a_3,\\\\-\\infty , & a_1>a_3.\\end{array}\\right.", "}$ The claim follows by combining the three cases above." ], [ "Completion of Theorem ", "When $K\\ne \\lbrace 0\\rbrace , L\\notin \\lbrace \\lbrace 0\\rbrace ,\\mathbb {R}^m\\rbrace $ , the claimed kinematic formulae follow by combining Propositions REF and REF .", "If $K\\ne \\lbrace 0\\rbrace $ and $L=\\mathbb {R}^m$ , then $\\delta (L)=m$ and therefore claim (2) holds trivially." ], [ "Proof of Corollary ", "Lemma 3.4 Fix an integer $\\ell \\in [n]$ .", "Let $L\\subset \\mathbb {R}^n$ be any fixed subspace with $\\dim (L)=\\ell $ .", "Then $O_nL\\stackrel{d}{=}G_n L \\stackrel{d}{=}\\operatorname{null}(G_{n-\\ell ,n})$ , where $O_n$ is distributed according to the Haar measure on $\\mathrm {O}(n)$ , and the matrices $G_n\\in \\mathbb {R}^{n\\times n}$ and $G_{n-\\ell ,n}\\in \\mathbb {R}^{(n-\\ell )\\times n}$ both contain i.i.d.", "$\\mathcal {N}(0,1)$ entries.", "The key fact to note is that there is a unique probability measure on the Grassmannian ${G}_\\ell (\\mathbb {R}^n)$ (with the usual metric induced by the Hausdorff distance between unit balls of subspaces) that is invariant under the action of the orthogonal group $\\mathrm {O}(n)$ , cf.", "[20].", "It is easy to see that the distributions of $O_n L$ , $G_nL$ , and $\\operatorname{null}(G_{n-\\ell ,n})$ are all invariant under the action of $\\mathrm {O}(n)$ .", "Using the lemma above, we may realize $O_n L\\stackrel{d}{=}G_n L$ where $G_n\\in \\mathbb {R}^{n\\times n}$ contains i.i.d.", "$\\mathcal {N}(0,1)$ entries.", "Now apply Theorem REF to conclude."
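Remark (an illustration of ours, not part of the original argument). Lemma 3.4 asserts an equality in distribution between three constructions of a random l-dimensional subspace of R^n, so any functional of the subspace must have the same law under all three constructions. The following Python sketch (dimensions, sample sizes, seeds, and helper names are our arbitrary choices) compares the empirical mean and variance of the squared projection of a fixed unit vector onto the random subspace under each construction.

```python
# Numerical sanity check (illustrative only) of Lemma 3.4: O_n L, G_n L and
# null(G_{n-l,n}) should all induce the same distribution of l-dimensional
# subspaces, hence the same law for ||P_V e_1||^2, the squared projection of a
# fixed unit vector e_1 onto the random subspace V.
import numpy as np

rng = np.random.default_rng(0)
n, ell, reps = 8, 3, 20000
L = np.eye(n)[:, :ell]          # a fixed l-dimensional subspace (first l coordinates)
e1 = np.eye(n)[:, 0]            # a fixed unit vector

def proj_sq_norm(basis, u):
    """Squared norm of the orthogonal projection of u onto span(basis)."""
    Q, _ = np.linalg.qr(basis)  # orthonormal basis of the column span
    return float(np.sum((Q.T @ u) ** 2))

def haar_orthogonal(n, rng):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix."""
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    return Q * np.sign(np.diag(R))  # column sign correction gives the exact Haar law

samples = {"O_n L": [], "G_n L": [], "null(G_{n-l,n})": []}
for _ in range(reps):
    samples["O_n L"].append(proj_sq_norm(haar_orthogonal(n, rng) @ L, e1))
    samples["G_n L"].append(proj_sq_norm(rng.standard_normal((n, n)) @ L, e1))
    G = rng.standard_normal((n - ell, n))
    _, _, Vt = np.linalg.svd(G)             # last l right singular vectors span null(G)
    samples["null(G_{n-l,n})"].append(proj_sq_norm(Vt[n - ell:].T, e1))

for name, vals in samples.items():
    print(f"{name:>16s}: mean = {np.mean(vals):.4f}, var = {np.var(vals):.4f}")
# All three empirical means should be close to l/n = 0.375 and the variances should agree.
```

The projection functional is only a convenient witness; any other measurable functional of the subspace would serve equally well as a check of the distributional identity.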
], [ "Proof of Theorem ", "The key to the proof of Theorem REF is the following.", "Proposition 4.1 Suppose that $X_1,\\ldots ,X_m$ are i.i.d.", "$\\mathcal {N}(0,I_n)$ and $\\beta _0=0$ .", "Then the following statements hold.", "$\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ exists}\\big )\\le \\operatorname{\\mathbb {P}}\\big (GK\\cap \\mathbb {R}_{> 0}^m = \\emptyset \\big )$ .", "There exists some universal constant $c>0$ such that $&\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ does not exist}\\big )\\\\&\\le \\operatorname{\\mathbb {P}}\\big (GK\\cap \\mathbb {R}_{\\ge 0}^m \\ne \\lbrace 0\\rbrace \\big )+2e^{-c(\\sqrt{m}-\\sqrt{\\delta (K)})^2}.$ We shall specify a particular construction of $G$ in the proof.", "Let $G \\in \\mathbb {R}^{m\\times n}$ be the matrix whose rows are given by $\\lbrace X_i^\\top : i \\in [m]\\rbrace \\subset \\mathbb {R}^n$ .", "Then the entries of $G$ are distributed as i.i.d.", "$\\mathcal {N}(0,1)$ due to the normality assumption on $X_i$ 's.", "We further let $V\\equiv GK \\subset \\mathbb {R}^m$ .", "Then $V$ is a closed convex cone in $\\mathbb {R}^m$ (which is random due to the randomness of $G$ ).", "Using that under the global null $\\beta _0=0$ , the distributions of $Y_i$ 's are independent of $X_i$ with $\\operatorname{\\mathbb {P}}(Y_i=\\pm 1)=1/2$ so $\\lbrace Y_i\\cdot X_i^\\top \\beta : \\beta \\in K\\rbrace \\stackrel{d}{=}\\lbrace X_i^\\top \\beta : \\beta \\in K\\rbrace $ .", "Now it easily follows that $\\max _{\\beta \\in K}\\ell (\\beta ) \\stackrel{d}{=}\\min _{\\beta \\in K} \\sum _{i=1}^m \\log \\Big [1+\\exp \\big (-X_i^\\top \\beta \\big )\\Big ] = \\min _{v \\in V} \\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big ),$ where recall $\\ell (\\cdot )$ is the likelihood function defined in (REF ).", "With these definitions and observations, we may prove the claims (1)-(2).", "(1).", "Suppose $V\\cap \\mathbb {R}_{> 0}^m = GK\\cap \\mathbb {R}_{> 0}^m \\ne \\emptyset $ .", "Then there exists some $ \\bar{v} \\in V$ such that $\\bar{v} \\in \\mathbb {R}_{> 0}^m$ .", "This means $\\min _{v \\in V} \\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big )&\\le \\min _{\\beta \\ge 1} \\sum _{i=1}^m \\log \\big (1+e^{-\\beta \\bar{v}_i}\\big )\\le \\operatornamewithlimits{\\underline{lim}}_{\\beta \\uparrow \\infty } \\sum _{i=1}^m \\log \\big (1+e^{-\\beta \\bar{v}_i}\\big ) =0.$ As the left hand side of the above display is non-negative, we necessarily have $\\min _{v \\in V} \\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big )=0$ .", "So any minimizer of $v \\mapsto \\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big )$ must be reached at $\\infty $ , and the cone constrained MLE does not exist.", "Summarizing the above arguments, we have proven $\\big \\lbrace GK\\cap \\mathbb {R}_{> 0}^m \\ne \\emptyset \\big \\rbrace \\subset \\big \\lbrace \\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ does not exist}\\big \\rbrace $ and therefore the claim.", "(2).", "Suppose $V\\cap \\mathbb {R}_{\\ge 0}^m = GK\\cap \\mathbb {R}_{\\ge 0}^m = \\lbrace 0\\rbrace $ .", "By the closedness of $\\mathbb {R}_{\\ge 0}^m$ , the quantity $\\varepsilon _0\\equiv \\inf _{v \\in V, \\Vert v\\Vert _{}=1} \\operatorname{dist}(v, \\mathbb {R}_{\\ge 0}^m) >0$ is strictly positive.", "As $\\operatorname{dist}^2(v, \\mathbb {R}_{\\ge 0}^m) = \\min _{w \\in \\mathbb {R}_{\\ge 0}^m} \\Vert v-w\\Vert _{}^2 = \\Vert v_-\\Vert _{}^2$ , by homogeneity of $V$ , for all $v \\in V$ , we have $\\Vert v_-\\Vert 
_{}\\ge \\Vert v\\Vert _{}\\varepsilon _0$ .", "This means for any $R>0$ , $\\lbrace v \\in V, \\Vert v\\Vert _{}\\ge R\\rbrace \\subset \\lbrace v \\in \\mathbb {R}^m, \\Vert v_-\\Vert _{}\\ge R\\varepsilon _0\\rbrace $ and therefore $\\min _{v \\in V, \\Vert v\\Vert _{}\\ge R} \\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big )&\\ge \\min _{v \\in \\mathbb {R}^m, \\Vert v_-\\Vert _{}\\ge R\\varepsilon _0 }\\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big )\\nonumber \\\\&\\ge \\log \\big (1+ e^{R\\varepsilon _0}\\big )\\ge R\\varepsilon _0.$ On the other hand, as $0\\in V$ , we always have $\\min _{v \\in V}\\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big )\\le m (\\log 2).$ Combining (REF ) and (REF ), if $R>0$ is chosen such that $R>m(\\log 2)/\\varepsilon _0$ , then the set $\\lbrace v\\in V, \\Vert v\\Vert _{}\\ge R\\rbrace $ is not feasible in the minimization problem $v\\mapsto \\sum _{i=1}^m \\log \\big (1+e^{-v_i}\\big ), v \\in V$ .", "So on the event $E(t)\\equiv \\bigg \\lbrace \\inf _{\\mu \\in K\\cap \\partial B_n} \\Vert G\\mu \\Vert _{}\\ge \\big (\\sqrt{m}-\\sqrt{\\delta (K)}-C\\sqrt{t}\\big )_+\\bigg \\rbrace ,\\quad t\\ge 1,$ the set $\\lbrace \\beta \\in K: \\Vert \\beta \\Vert _{}\\ge R/\\big (\\sqrt{m}-\\sqrt{\\delta (K)}-C\\sqrt{t}\\big )_+ \\rbrace $ is not feasible in the minimization problem $\\beta \\mapsto \\sum _{i=1}^m \\log \\big (1+e^{-X_i^\\top \\beta }\\big )$ , so the cone constrained MLE $\\widehat{\\beta }_K$ exists provided that $\\sqrt{m}-\\sqrt{\\delta (K)}-C\\sqrt{t}>0$ .", "Summarizing the arguments above, we have shown $\\big \\lbrace GK\\cap \\mathbb {R}_{\\ge 0}^m = \\lbrace 0\\rbrace \\big \\rbrace \\cap E(t)\\subset \\big \\lbrace \\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ exists}\\big \\rbrace ,$ provided $t\\ge 1$ is chosen such that $\\sqrt{m}-\\sqrt{\\delta (K)}-C\\sqrt{t}>0$ .", "Using Lemma REF below, the above display further implies $&\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ does not exist}\\big )\\\\&\\le \\operatorname{\\mathbb {P}}\\big (GK\\cap \\mathbb {R}_{\\ge 0}^m \\ne \\lbrace 0\\rbrace \\big )+e^{-t},$ provided $t\\ge 1$ is chosen such that $\\sqrt{m}-\\sqrt{\\delta (K)}-C\\sqrt{t}>0$ .", "Now taking $t=c \\big (\\sqrt{m}-\\sqrt{\\delta (K)}\\big )^2$ for a small enough absolute constant $c>0$ to conclude when $\\sqrt{m}-\\sqrt{\\delta (K)}\\ge C$ .", "The case $\\sqrt{m}-\\sqrt{\\delta (K)}< C$ follows by simply adjusting constants.", "The proof of the above Proposition REF relies on the following result that gives tight bounds on certain smallest cone-constrained eigenvalues, which may be of independent interest.", "Lemma 4.2 Let $K\\subset \\mathbb {R}^n$ be a non-trivial closed convex cone.", "There exists some universal constant $C>0$ such that for all $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\bigg (\\inf _{\\mu \\in K\\cap \\partial B_n} \\Vert G\\mu \\Vert _{}\\ge \\big (\\sqrt{m}-\\sqrt{\\delta (K)}-C\\sqrt{t}\\big )_+\\bigg )\\ge 1-e^{-t}.$ As $\\inf _{\\mu \\in K\\cap \\partial B_n} \\Vert G\\mu \\Vert _{} = \\inf _{\\mu \\in K\\cap \\partial B_n} \\sup _{v \\in B_m} \\langle v,G\\mu \\rangle $ , by using the one-sided Gaussian min-max theorem (cf.", "Theorem REF -(1)), we have for any $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\bigg (\\inf _{\\mu \\in K\\cap \\partial B_n} \\Vert G\\mu \\Vert _{}\\le z\\bigg )&=\\operatorname{\\mathbb {P}}\\bigg (\\inf _{\\mu \\in K\\cap \\partial B_n} \\sup _{v \\in B_m} \\langle v,G\\mu \\rangle \\le z\\bigg )\\\\&\\le 
2\\operatorname{\\mathbb {P}}\\bigg (\\inf _{\\mu \\in K\\cap \\partial B_n} \\sup _{v \\in B_m}\\Big \\lbrace \\Vert \\mu \\Vert _{}\\langle h,v\\rangle -\\Vert v\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\le z\\bigg ).$ On the other hand, note that $&\\inf _{\\mu \\in K\\cap \\partial B_n} \\sup _{v \\in B_m}\\Big \\lbrace \\Vert \\mu \\Vert _{}\\langle h,v\\rangle -\\Vert v\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace = \\inf _{\\mu \\in K\\cap \\partial B_n} \\sup _{0\\le \\beta \\le 1} \\beta \\big (\\Vert h\\Vert _{}-\\langle g,\\mu \\rangle \\big )\\\\& = \\inf _{\\mu \\in K\\cap \\partial B_n}\\big (\\Vert h\\Vert _{}-\\langle g,\\mu \\rangle \\big )_+ = \\bigg \\lbrace \\Vert h\\Vert _{}-\\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle \\bigg \\rbrace _+.$ Combining the above two displays, we have for any $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\bigg (\\inf _{\\mu \\in K\\cap \\partial B_n} \\Vert G\\mu \\Vert _{}\\le z\\bigg )\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\bigg \\lbrace \\Vert h\\Vert _{}-\\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle \\bigg \\rbrace _+\\le z\\bigg ).$ By Gaussian concentration in Proposition REF , there exists some universal constant $C_0>0$ such that for any $t\\ge 1$ , on an event $E(t)$ with probability at least $1-e^{-t}$ , $\\big \\vert \\Vert h\\Vert _{}-\\sqrt{m}\\big \\vert \\vee \\big \\vert \\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle -\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n) \\big \\vert \\le C_0\\sqrt{t}.$ Consequently, with $z(t)\\equiv \\big (\\sqrt{m}-\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)-4C_0\\sqrt{t}\\big )_+$ , using (REF ), it holds for any $t\\ge 1$ that $\\operatorname{\\mathbb {P}}\\bigg (\\inf _{\\mu \\in K\\cap \\partial B_n} \\Vert G\\mu \\Vert _{}\\le z(t)\\bigg )\\le 2\\operatorname{\\mathbb {P}}(E(t)^c)\\le 2e^{-t}.$ The claim now follows by adjusting constants using Proposition REF -(3)(4).", "Now we may prove Theorem REF .", "(1).", "For $\\alpha \\in [0,\\pi /2)$ , let $L_+(\\alpha )\\equiv \\Big \\lbrace v \\in \\mathbb {R}^m: \\min _{i \\in [m]} v_i\\ge \\sin (\\alpha )\\cdot \\Vert v\\Vert _{}\\Big \\rbrace .$ Then $\\lbrace L_+(\\alpha ): \\alpha \\in [0,\\pi /2]\\rbrace $ is a non-decreasing sequence of closed convex cones in $\\mathbb {R}^m$ as $\\alpha \\downarrow 0$ , with $L_+(0)=\\mathbb {R}_{\\ge 0}^m$ being the non-negative orthant cone.", "Consequently, $\\lbrace L_k\\equiv L_+(1/k)\\cap B_m: k \\in \\mathbb {N}\\rbrace $ is a non-decreasing sequence of compact sets.", "It is easy to verify that $\\mathrm {cl}(\\cup _{k \\in \\mathbb {N}} L_k)=\\mathbb {R}_{\\ge 0}^m\\cap B_m$ .", "Now using Lemma REF and integrability of the suprema of Gaussian processes, we have $\\delta (L_+(1/k)) = \\operatorname{\\mathbb {E}}\\bigg (\\sup _{v \\in L_k}\\langle h,v\\rangle \\bigg )^2\\uparrow \\operatorname{\\mathbb {E}}\\bigg (\\sup _{v \\in \\mathbb {R}_{\\ge 0}^m\\cap B_m}\\langle h,v\\rangle \\bigg )^2 = \\delta (\\mathbb {R}_{\\ge 0}^m)=\\frac{m}{2}.$ The last identity $\\delta (\\mathbb {R}_{\\ge 0}^m)=m/2$ follows from, e.g., [4].", "On the other hand, as $\\big \\lbrace GK\\cap \\mathbb {R}_{> 0}^m = \\emptyset \\big \\rbrace \\subset \\big \\lbrace GK\\cap L_+(1/k) = \\lbrace 0\\rbrace \\big \\rbrace $ holds for any $k \\in \\mathbb {N}$ , by Proposition REF -(1), $&\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ exists}\\big )\\nonumber \\\\&\\le 
\\operatorname{\\mathbb {P}}\\big (GK\\cap \\mathbb {R}_{> 0}^m = \\emptyset \\big )\\le \\operatorname{\\mathbb {P}}\\big (GK\\cap L_+(1/k) = \\lbrace 0\\rbrace \\big ).$ Now using Theorem REF -(2), there exists some universal constant $C_0>0$ such that for all $t\\ge 1$ , $&\\operatorname{\\mathbb {P}}\\big (GK\\cap L_+(1/k) = \\lbrace 0\\rbrace \\big )\\le \\mathbf {1}\\Big (\\sqrt{\\delta (K)}<\\sqrt{m-\\delta (L_+(1/k))}+C_0\\sqrt{t}\\Big )+e^{-t}.$ Now under the assumed condition, the convergence in (REF ) entails that $\\sqrt{\\delta (K)}\\ge \\sqrt{m/2}+2C_0\\sqrt{t}\\ge \\sqrt{m-\\delta (L_+(1/k))}+C_0\\sqrt{t}$ for all $k$ large enough (that may depend on $m,t$ ), so combined with (REF ) and (REF ) we have $\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ exists}\\big )\\le e^{-t},$ as desired.", "(2).", "Using Proposition REF -(2) and Theorem REF -(1) and the fact that $\\delta (\\mathbb {R}_{\\ge 0}^m)=m/2$ , there exist some universal constants $C_1,c>0$ such that for all $t\\ge 1$ , $&\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ does not exist}\\big )\\\\&\\le \\mathbf {1}\\Big (\\sqrt{\\delta (K)}>\\sqrt{m/2}-C_1\\sqrt{t}\\Big )+e^{-t}+2e^{-c(\\sqrt{m}-\\sqrt{\\delta (K)})^2}.$ Now under the condition that $\\sqrt{\\delta (K)}\\le \\sqrt{m/2}-C_1\\sqrt{t}$ , the above display can be further simplified as $\\operatorname{\\mathbb {P}}\\big (\\hbox{The cone constrained MLE $\\widehat{\\beta }_K$ does not exist}\\big )\\le e^{-t}+2e^{-(c/4)m}.$ The second term on the right hand side of the above display can be assimilated into the first term upon adjusting constants, as the effective range of $t$ is $1\\le t\\le m/(2C_1^2)$ .", "This completes the proof." 
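Remark (an illustration of ours, not part of the proof). In the simplest unconstrained case K = R^n one has delta(K) = n and GK = col(G), so the separation event "GK cap R^m_{>0} is nonempty" from Proposition 4.1 amounts to the existence of some beta with G beta > 0 componentwise, which can be tested as an LP feasibility problem. The sketch below (sample sizes, the grid of n, the solver choice, and function names are our assumptions) estimates this probability and exhibits the switch predicted by the theorem around delta(K) = m/2, i.e. n = m/2.

```python
# Simulation (ours) of the phase transition for the simplest cone K = R^n, where
# delta(K) = n and GK = col(G): the separation event of Proposition 4.1,
# GK cap R^m_{>0} nonempty, holds iff some beta satisfies G beta > 0 componentwise.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

def separation_event(G):
    """Feasibility of {beta : G beta >= 1}, i.e. col(G) meets the open orthant."""
    m, n = G.shape
    res = linprog(c=np.zeros(n), A_ub=-G, b_ub=-np.ones(m),
                  bounds=(None, None), method="highs")
    return res.status == 0     # status 0: feasible; status 2: infeasible

m, reps = 200, 100
for n in [40, 70, 90, 100, 110, 130, 160]:
    p_sep = np.mean([separation_event(rng.standard_normal((m, n))) for _ in range(reps)])
    print(f"n = {n:3d}, n/m = {n/m:.2f}: P(separation) ~ {p_sep:.2f}; "
          f"so the MLE exists with probability at most ~ {1 - p_sep:.2f}")
# The separation probability should jump from ~0 to ~1 near n = m/2, matching the
# delta(K) versus m/2 threshold in the theorem above (up to the C*sqrt(t) slack).
```

We deliberately test the separation event by LP feasibility rather than attempting to fit the logistic MLE numerically, since non-existence of the MLE only manifests itself as slow divergence of iterative solvers and is awkward to detect directly.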
], [ "Proofs of Theorems ", "The following proposition will be used crucially in the proofs of both Theorems REF and REF .", "Recall the notation $G^{-1}L$ defined in (REF ).", "Proposition 5.1 Suppose $K\\subset B_n$ is a compact convex set with $0 \\in K$ , and $L\\subset \\mathbb {R}^m$ is a closed convex cone.", "Then for any $x \\in \\partial B_n, z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\Big ( \\mathsf {h}_{K\\cap G^{-1}L}(x)\\ge z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle \\ge z\\bigg ),\\nonumber \\\\\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}_{K\\cap G^{-1}L}(x)< z\\big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle < z\\bigg ).$ The first inequality also holds for its uniform version in $x \\in \\partial B_n$ , i.e., for any $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\Big ( \\sup _{x \\in \\partial B_n}\\mathsf {h}_{K\\cap G^{-1}L }(x)\\ge z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{x \\in \\partial B_n, \\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle \\ge z\\bigg ).$ We shall write $\\mathsf {h}\\equiv \\mathsf {h}_{K\\cap G^{-1}L }$ for notational convenience in the proof.", "The method of proof is similar in spirit to that of Proposition REF .", "First, by Lemma REF , we may rewrite $\\mathsf {h}(x)$ as $\\mathsf {h}(x)& = \\sup _{\\mu \\in K}\\big \\lbrace \\langle x,\\mu \\rangle : G\\mu \\in L\\big \\rbrace = \\sup _{\\mu \\in K}\\inf _{v\\in L^\\circ } \\big \\lbrace \\langle x,\\mu \\rangle -\\langle v,G\\mu \\rangle \\big \\rbrace .$ Now let for $\\varepsilon >0$ $\\mathsf {h}_{\\varepsilon }(x)&\\equiv \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ } \\bigg \\lbrace \\langle x,\\mu \\rangle -\\langle v,G\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace =\\sup _{\\mu \\in K}\\bigg \\lbrace \\langle x,\\mu \\rangle -\\frac{ 1}{2\\varepsilon }\\bigg (\\sup _{v \\in L^\\circ \\cap B_m}\\langle v,G\\mu \\rangle \\bigg )^2\\bigg \\rbrace .$ On the event $E(R)\\equiv \\big \\lbrace \\Vert G\\Vert _{\\operatorname{op}}\\le R\\big \\rbrace \\cap \\big \\lbrace \\Vert g\\Vert _{}+\\Vert h\\Vert _{}\\le R\\big \\rbrace ,$ we may restrict the range of the minimum over $v \\in L^\\circ $ to $v \\in B_m(R/\\varepsilon )$ (uniformly in $x\\in \\partial B_n$ ).", "In other words, on the event $E(R)$ , $\\mathsf {h}_{\\varepsilon }(x)&\\equiv \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\langle v,G\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace .$ Clearly $\\mathsf {h}(x)\\le \\mathsf {h}_{\\varepsilon }(x)$ for any $\\varepsilon >0$ .", "This means for any $z \\in \\mathbb {R}$ , $&\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)\\ge z\\big )\\le \\operatorname{\\mathbb {P}}\\big (\\mathsf {h}_\\varepsilon (x)\\ge z\\big )\\\\&\\le \\operatorname{\\mathbb {P}}\\bigg ( \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\langle v,G\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg 
\\rbrace \\ge z \\bigg ) + \\operatorname{\\mathbb {P}}\\big (E(R)^c\\big ).$ By the one-sided Gaussian min-max theorem (cf.", "Theorem REF -(1)), we have $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)\\ge z\\big )&\\le 2\\operatorname{\\mathbb {P}}\\bigg ( \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\Vert v\\Vert _{}\\langle g,\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace \\ge z \\bigg ) \\nonumber \\\\&\\qquad + \\operatorname{\\mathbb {P}}\\big (E(R)^c\\big ).$ On the other hand, as $0 \\in K$ , on the event $E(R)$ , $0&\\le \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\Vert v\\Vert _{}\\langle g,\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace \\nonumber \\\\& = \\sup _{\\mu \\in K}\\inf _{\\beta \\in [0,R/\\varepsilon ]} \\bigg \\lbrace \\langle x,\\mu \\rangle +\\beta \\Big (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\sup _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\Big )+\\frac{\\varepsilon }{2}\\beta ^2\\bigg \\rbrace \\nonumber \\\\& = \\sup _{\\mu \\in K} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\frac{1 }{2\\varepsilon } \\bigg (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\sup _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\bigg )_-^2\\bigg \\rbrace .$ Now as for any $\\mu \\in K$ such that $\\big (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\big )_-^2>2\\varepsilon $ , the right hand side of the above display is $<0$ , so such $\\mu $ 's are not feasible.", "This means that on the event $E(R)$ , $& \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\Vert v\\Vert _{}\\langle g,\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace \\nonumber \\\\& \\le \\sup _{\\begin{array}{c}\\mu \\in K, \\big (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\big )_-^2\\le 2\\varepsilon \\end{array} } \\langle x,\\mu \\rangle .$ Combining (REF )-(REF ), we have $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)\\ge z\\big )&\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\begin{array}{c}\\mu \\in K, \\big (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\big )_-^2\\le 2\\varepsilon \\end{array} } \\langle x,\\mu \\rangle \\ge z\\bigg ) + 3\\operatorname{\\mathbb {P}}(E(R)^c).$ Since the set $S_\\varepsilon \\equiv \\bigg \\lbrace \\mu \\in K, \\bigg (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\bigg )_-^2\\le 2\\varepsilon \\bigg \\rbrace $ is non-increasing as $\\varepsilon \\downarrow 0$ , and $&\\cap _{\\varepsilon >0}S_\\varepsilon \\subset \\bigg \\lbrace \\mu \\in K, \\bigg (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\bigg )_-^2=0\\bigg \\rbrace ,$ we have by Lemma REF $\\sup _{\\begin{array}{c}\\mu \\in K, \\big (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\big )_-^2\\le 
2\\varepsilon \\end{array} } \\langle x,\\mu \\rangle \\downarrow \\sup _{\\mu \\in \\cap _{\\varepsilon >0} S_\\varepsilon }\\langle x,\\mu \\rangle \\le \\sup _{\\begin{array}{c}\\mu \\in K, \\big (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\big )_-^2=0\\end{array} } \\langle x,\\mu \\rangle $ as $\\varepsilon \\downarrow 0$ .", "Now taking $\\varepsilon \\downarrow 0$ followed by $R\\uparrow \\infty $ on the right hand side of (REF ), we have for any $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)\\ge z\\big )\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle \\ge z\\bigg ).$ The uniform version (REF ) follows by working with the additional $\\sup _{x \\in \\partial B_n}$ in the above arguments and a slight modification of the definition of $S_\\varepsilon $ .", "For the other direction, using (REF ) and the convex Gaussian min-max theorem (cf.", "Theorem REF -(2)), we have for any $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}_\\varepsilon (x)< z\\big )&\\le 2\\operatorname{\\mathbb {P}}\\bigg ( \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\Vert v\\Vert _{}\\langle g,\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace < z \\bigg ) \\nonumber \\\\&\\qquad + \\operatorname{\\mathbb {P}}\\big (E(R)^c\\big ).$ Combining the above display and the simple lower bound $& \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ \\cap B_m(R/\\varepsilon )} \\bigg \\lbrace \\langle x,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\Vert v\\Vert _{}\\langle g,\\mu \\rangle +\\frac{\\varepsilon }{2}\\Vert v\\Vert _{}^2\\bigg \\rbrace \\nonumber \\\\& \\ge \\sup _{\\mu \\in K}\\inf _{v \\in L^\\circ }\\big \\lbrace \\langle x,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\langle h,v\\rangle +\\Vert v\\Vert _{}\\langle g,\\mu \\rangle \\big \\rbrace \\nonumber \\\\& = \\sup _{\\mu \\in K}\\inf _{\\beta \\ge 0}\\bigg \\lbrace \\langle x,\\mu \\rangle +\\beta \\bigg (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\bigg )\\bigg \\rbrace \\nonumber \\\\& = \\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle ,$ we have $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}_\\varepsilon (x)< z\\big )&\\le 2\\operatorname{\\mathbb {P}}\\bigg ( \\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle < z \\bigg ) + \\operatorname{\\mathbb {P}}\\big (E(R)^c\\big ).$ Now noting that using the same method of proof as for the claim (REF ) in Step 1 in the proof of Proposition REF , we have $ \\mathsf {h}_\\varepsilon (x)\\downarrow \\mathsf {h}(x)$ as $\\varepsilon \\downarrow 0$ .", "So by taking $\\varepsilon \\downarrow 0$ followed by $R \\uparrow \\infty $ in (REF ) we obtain $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)< z\\big )\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in 
L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle < z\\bigg ).$ Combining (REF ) and (REF ) to conclude.", "For any $x \\in \\partial B_n$ , recall $K_x= K\\cap \\lbrace \\mu \\in \\mathbb {R}^n: \\langle x,\\mu \\rangle \\ge 0\\rbrace $ defined in (REF ), and let for any further $a>0$ $K(x;a)&\\equiv \\mathrm {cl}\\big (\\big \\lbrace \\mu /\\Vert \\mu \\Vert _{}: \\langle x,\\mu \\rangle =a, \\mu \\in K\\setminus \\lbrace 0\\rbrace \\big \\rbrace \\big )\\subset \\partial B_n.$ Lemma 5.2 For any $x \\in \\partial B_n$ with $x \\notin K^\\circ $ , we have $K(x;a)=K_x\\cap \\partial B_n$ for any $a>0$ .", "The fact that $K(x;a_1)=K(x;a_2)$ for any $a_1,a_2>0$ follows from the cone property of $K$ , so we shall focus on the case $K(x;1)$ with $a=1$ without loss of generality.", "By definition of $K(x;1)$ we have $K(x;1)\\subset K_x\\cap \\partial B_n$ , so we only need to prove the converse direction.", "Upon using a suitable orthonormal transformation, we may assume without loss of generality that $x = e_1$ .", "This means that $K_x= K\\cap \\lbrace v \\in \\mathbb {R}^n: v_1\\ge 0\\rbrace ,\\quad K(x;1)=\\mathrm {cl}\\big (\\big \\lbrace \\mu /\\Vert \\mu \\Vert _{}: \\mu _1=1, \\mu \\in K\\setminus \\lbrace 0\\rbrace \\big \\rbrace \\big ).$ Take any $v \\in K_x\\cap \\partial B_n$ .", "By the assumption $x \\notin K^\\circ $ , there exists some $\\mu \\in K$ such that $\\mu _1=\\langle x,\\mu \\rangle >0$ .", "This means $K\\cap \\lbrace v \\in \\mathbb {R}^n: v_1>0\\rbrace \\ne \\emptyset $ .", "So for $v \\in K_x \\cap \\partial B_n\\subset K\\cap \\lbrace v \\in \\mathbb {R}^n: v_1\\ge 0\\rbrace $ , we may find some sequence $\\lbrace v^{(\\ell )}\\rbrace _{\\ell \\in \\mathbb {N}}\\subset K\\cap \\lbrace v \\in \\mathbb {R}^n: v_1>0\\rbrace $ such that $v^{(\\ell )} \\rightarrow v$ .", "This convergence implies the norm convergence $\\Vert v^{(\\ell )}\\Vert _{}\\rightarrow \\Vert v\\Vert _{}=1$ , and consequently $v^{(\\ell )}/\\Vert v^{(\\ell )}\\Vert _{}\\rightarrow v$ .", "Now by defining $\\mu ^{(\\ell )}\\equiv v^{(\\ell )}/v^{(\\ell )}_1$ , we have $\\mu ^{(\\ell )}_1=1,\\mu ^{(\\ell )}\\in K\\setminus \\lbrace 0\\rbrace $ and $\\mu ^{(\\ell )}/\\Vert \\mu ^{(\\ell )}\\Vert _{}= v^{(\\ell )}/\\Vert v^{(\\ell )}\\Vert _{}\\rightarrow v$ .", "This verifies that $v \\in K(x;1)$ , as desired.", "With all the preparations, we may prove Theorem REF .", "For any $R>0$ , let $K_n(R)\\equiv K\\cap B_n(R)$ .", "Using (REF ) in Proposition REF , we have for any $x \\in \\partial B_n$ and $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\Big (\\max _{\\mu \\in K_n(R): G\\mu \\in L}\\langle x,\\mu \\rangle \\ge z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n(R), \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}} \\langle x,\\mu \\rangle \\ge z\\bigg ),\\nonumber \\\\\\operatorname{\\mathbb {P}}\\Big (\\max _{\\mu \\in K_n(R): G\\mu \\in L }\\langle x,\\mu \\rangle < z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n(R), \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}} \\langle x,\\mu \\rangle < z\\bigg ).$ Note that $\\max _{\\mu \\in K_n(R): G\\mu \\in L}\\langle x,\\mu \\rangle $ and $\\sup _{\\mu \\in K_n(R), \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}} \\langle x,\\mu \\rangle $ either take value 0 for all $R>0$ , or 
blow up to $\\infty $ as $R\\uparrow \\infty $ .", "Further note that $\\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}} \\langle x,\\mu \\rangle & = 0\\vee \\sup \\bigg \\lbrace a>0: \\sup _{\\mu \\in K, \\langle x,\\mu \\rangle =a} \\big (\\langle g,\\mu \\rangle -\\Vert \\mu \\Vert _{}\\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}\\big )\\ge 0\\bigg \\rbrace \\\\& = 0 \\vee \\sup \\bigg \\lbrace a>0: \\sup _{\\mu \\in K(x;a)}\\langle g,\\mu \\rangle \\ge \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}\\bigg \\rbrace \\\\&= \\sup _{\\mu \\in K} \\langle x,\\mu \\rangle \\cdot \\mathbf {1}\\big (\\sup _{\\mu \\in K_x\\cap \\partial B_n}\\langle g,\\mu \\rangle \\ge \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}\\big ),$ where in the last equality we used Lemma REF , and we interpret $\\infty \\cdot 0=0$ .", "Now taking limit as $R\\uparrow \\infty $ for free in (REF ), we have for any $x \\in \\partial B_n$ and $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\Big (\\max _{\\mu \\in K: G\\mu \\in L}\\langle x,\\mu \\rangle \\ge z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K} \\langle x,\\mu \\rangle \\cdot \\mathbf {1}\\big (\\sup _{\\mu \\in K_x\\cap \\partial B_n}\\langle g,\\mu \\rangle \\ge \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}\\big )\\ge z\\bigg ),\\nonumber \\\\\\operatorname{\\mathbb {P}}\\Big (\\max _{\\mu \\in K: G\\mu \\in L}\\langle x,\\mu \\rangle < z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K} \\langle x,\\mu \\rangle \\cdot \\mathbf {1}\\big (\\sup _{\\mu \\in K_x\\cap \\partial B_n}\\langle g,\\mu \\rangle \\ge \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}\\big )< z\\bigg ).$ Similar arguments using (REF ) in Proposition REF leads the one-sided inequality for the uniform version: for any $z \\in \\mathbb {R}$ , $\\operatorname{\\mathbb {P}}\\Big (\\max _{\\mu \\in K: G\\mu \\in L}\\Vert \\mu \\Vert _{}\\ge z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\Vert \\operatorname{\\mathsf {P}}_L^\\perp h\\Vert _{}} \\Vert \\mu \\Vert _{}\\ge z\\bigg ).$ (Homogeneous case $b=0$).", "In this case $L=\\lbrace 0\\rbrace $ so $\\operatorname{\\mathsf {P}}_L^\\perp h=h$ .", "Clearly for $x \\in K^\\circ $ , $\\max _{\\mu \\in K: G\\mu =0}\\langle x,\\mu \\rangle =0$ regardless of the relationship between $m$ and $K$ .", "Now we consider the case $x \\notin K^\\circ $ .", "By Lemma REF , $\\sup _{\\mu \\in K}\\langle x,\\mu \\rangle =\\infty $ , so by taking $z=\\varepsilon >0$ in the two displays in (REF ), $\\operatorname{\\mathbb {P}}\\Big ( \\max _{\\mu \\in K: G\\mu =0}\\langle x,\\mu \\rangle \\ge \\varepsilon \\Big )&\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_x\\cap \\partial B_n}\\langle g,\\mu \\rangle \\ge \\Vert h\\Vert _{}\\bigg ),\\nonumber \\\\\\operatorname{\\mathbb {P}}\\Big ( \\max _{\\mu \\in K: G\\mu =0}\\langle x,\\mu \\rangle <\\varepsilon \\Big )&\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_x\\cap \\partial B_n}\\langle g,\\mu \\rangle < \\Vert h\\Vert _{}\\bigg ).$ As $x\\notin K^\\circ $ and $K$ is non-trivial, $K_x$ is also non-trivial.", "By Gaussian concentration in Proposition REF and Proposition REF -(3)(4), there exists some universal constant $C>0$ such that for any $t\\ge 1$ , $\\big \\vert \\sup _{\\mu \\in K_x\\cap \\partial B_n}\\langle g,\\mu 
\\rangle - \\sqrt{\\delta (K_x)}\\big \\vert \\vee \\big \\vert \\Vert h\\Vert _{}-\\sqrt{m}\\big \\vert \\le C\\sqrt{t}.$ Combining the above two displays (REF )-(REF ) and noting that $\\max _{\\mu \\in K: G\\mu =0}\\langle x,\\mu \\rangle \\in \\lbrace 0,\\infty \\rbrace $ , for $x \\notin K^\\circ $ and any $ t\\ge 1$ , $\\sqrt{m}\\ge \\sqrt{\\delta (K_x)}+C\\sqrt{t} \\,\\Rightarrow \\, \\operatorname{\\mathbb {P}}\\Big ( \\max _{\\mu \\in K: G\\mu =0}\\langle x,\\mu \\rangle =0\\Big )\\ge 1-e^{-t},\\nonumber \\\\\\sqrt{m}\\le \\sqrt{\\delta (K_x)}-C\\sqrt{t}\\,\\Rightarrow \\, \\operatorname{\\mathbb {P}}\\Big ( \\max _{\\mu \\in K: G\\mu =0}\\langle x,\\mu \\rangle =\\infty \\Big )\\ge 1-e^{-t}.$ The claim now follows by noting that the case $x\\in K^\\circ $ can be merged into the first inequality above.", "(In-homogeneous case $b\\ne 0$).", "In this case we take $L\\equiv L_b\\equiv \\lbrace tb:t\\ge 0\\rbrace $ .", "As $\\sup _{\\mu \\in K, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\Vert \\operatorname{\\mathsf {P}}_{L_b}^\\perp h\\Vert _{}} \\Vert \\mu \\Vert _{} ={\\left\\lbrace \\begin{array}{ll}0,& \\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle < \\Vert \\operatorname{\\mathsf {P}}_{L_b}^\\perp h\\Vert _{}\\\\\\infty , & \\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle \\ge \\Vert \\operatorname{\\mathsf {P}}_{L_b}^\\perp h\\Vert _{}\\end{array}\\right.", "},$ by using (REF ) it follows that for any $\\varepsilon >0$ , $\\operatorname{\\mathbb {P}}\\Big (\\sup _{\\mu \\in K: G\\mu \\in L_b}\\Vert \\mu \\Vert _{}\\ge \\varepsilon \\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle \\ge \\Vert \\operatorname{\\mathsf {P}}_{L_b}^\\perp h\\Vert _{}\\bigg ).$ Taking $\\varepsilon \\downarrow 0$ and using similar Gaussian concentration as in (REF ), there exists some universal constant $C>0$ such that for any $t\\ge 1$ , $\\sqrt{m}\\ge \\sqrt{\\delta (K)}+C\\sqrt{t}\\,\\Rightarrow \\, \\operatorname{\\mathbb {P}}\\Big (\\sup _{\\mu \\in K: G\\mu \\in L_b}\\Vert \\mu \\Vert _{}=0\\Big )\\ge 1-e^{-t}.$ On the other hand, for $K\\ne \\mathbb {R}^n$ , we may find some $x \\in \\partial B_n$ such that $K_x=K$ , so the second inequality of (REF ) entails $\\sqrt{m}\\le \\sqrt{\\delta (K)}-C\\sqrt{t}\\,\\Rightarrow \\, \\operatorname{\\mathbb {P}}\\Big (\\sup _{\\mu \\in K: G\\mu \\in L_b}\\Vert \\mu \\Vert _{}=\\infty \\Big )\\ge 1-e^{-t}.$ If $K=\\mathbb {R}^n$ , the above display also holds as $G \\mathbb {R}^n \\cap L_b \\ne \\lbrace 0\\rbrace $ with the prescribed probability by Theorem REF .", "Now combining (REF )-(REF ), we have the following: If $\\sqrt{m}\\ge \\sqrt{\\delta (K)}+C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\Big (\\big \\lbrace \\mu \\in K: G\\mu \\in L_b\\big \\rbrace =\\lbrace 0\\rbrace \\Big )\\ge 1-e^{-t}.$ In this case, the CP (REF ) is infeasible.", "If $\\sqrt{\\delta (K_x)}+C\\sqrt{t}\\le \\sqrt{m}\\le \\sqrt{\\delta (K)}-C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\Big (\\big \\lbrace \\mu \\in K: G\\mu \\in L_b\\big \\rbrace \\ne \\lbrace 0\\rbrace , \\max _{\\mu \\in K: G\\mu \\in L_b}\\langle x,\\mu \\rangle =0\\Big )\\ge 1-e^{-t}.$ In this case, the CP (REF ) is feasible with a non-positive cost optimum.", "If $\\sqrt{m}\\le \\sqrt{\\delta (K_x)}-C\\sqrt{t}$ , then $\\operatorname{\\mathbb {P}}\\Big ( G K_x \\cap L_b\\ne \\lbrace 0\\rbrace , \\max _{\\mu \\in K: G\\mu =0}\\langle x,\\mu \\rangle =\\infty \\Big )\\ge 1-e^{-t}.$ In this 
case, the CP (REF ) is feasible with $\\max _{\\mu \\in K: G\\mu =b}\\langle x,\\mu \\rangle =\\infty $ .", "The claim follows.", "Next we prove Theorem REF .", "(1).", "For the upper estimate, we take the generic $K$ in Proposition REF as $K_n\\equiv K\\cap B_n$ .", "Note that $\\sup _{x \\in \\partial B_n, \\mu \\in K_n, \\langle g,\\mu \\rangle \\ge \\Vert \\mu \\Vert _{} \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle } \\langle x,\\mu \\rangle = \\mathbf {1}\\Big (\\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle \\ge \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\Big ),$ so an application of (REF ) yields that for any $\\varepsilon >0$ , $\\operatorname{\\mathbb {P}}\\Big ( \\sup _{x \\in \\partial B_n}\\mathsf {h}_{K_n\\cap G^{-1}L }(x)\\ge \\varepsilon \\Big )\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle \\ge \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\bigg ).$ Taking $\\varepsilon \\downarrow 0$ , we have $\\operatorname{\\mathbb {P}}\\Big ( \\sup _{x \\in \\partial B_n}\\mathsf {h}_{K_n\\cap G^{-1}L }(x)>0\\Big )\\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle \\ge \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\bigg ).$ By Gaussian concentration in Proposition REF , there exists some universal constant $C_0>0$ such that for $t\\ge 1$ , with probability at least $1-e^{-t}$ , $\\big \\vert \\sup _{\\mu \\in K\\cap \\partial B_n}\\langle g,\\mu \\rangle -\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)\\big \\vert \\vee \\big \\vert \\sup _{v \\in L^\\circ \\cap \\partial B_m} \\langle h,v\\rangle -\\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)\\big \\vert \\le C_0\\sqrt{t}.$ Combined with (REF ), we have for any $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\Big ( \\sup _{x \\in \\partial B_n}\\mathsf {h}_{K_n\\cap G^{-1}L }(x)>0\\Big )\\le \\mathbf {1}\\Big (\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)-\\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)+C\\sqrt{t}>0\\Big )+2e^{-t}.$ The claim now follows by Proposition REF -(3)(4).", "(2).", "For the lower estimate, we first assume $K\\ne \\mathbb {R}^n$ .", "Then we may find some $x \\in \\partial B_n$ such that $K_x=K$ and $\\sup _{\\mu \\in K}\\langle x,\\mu \\rangle =\\infty $ .", "A straightforward modification of (REF ) shows that for this choice of $x$ and any $z >0$ , $\\operatorname{\\mathbb {P}}\\Big (\\mathsf {h}_{K\\cap G^{-1}L }(x)< z\\Big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K} \\langle x,\\mu \\rangle \\cdot \\mathbf {1}\\Big (\\sup _{\\mu \\in K_x\\cap \\partial B_n}\\langle g,\\mu \\rangle \\ge \\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\Big )< z\\bigg )\\\\& \\le 2\\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K\\cap \\partial B_n} \\langle g,\\mu \\rangle < \\sup \\limits _{v \\in L^\\circ \\cap \\partial B_m}\\langle h,v\\rangle \\bigg ).$ By a similar concentration argument as above, we have for any $z>0$ and $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\Big (\\mathsf {h}_{K\\cap G^{-1}L }(x)< z\\Big )\\le \\mathbf {1}\\Big (\\operatorname{\\mathfrak {w}}(K\\cap \\partial B_n)-\\operatorname{\\mathfrak {w}}(L^\\circ \\cap \\partial B_m)-C\\sqrt{t}<0\\Big )+2e^{-t}.$ The claim follows for $K\\ne \\mathbb {R}^n$ .", "The claim for $K=\\mathbb {R}^n$ follows simply by noting that $n-1\\le 
\\delta (K_x)\\le n$ ." ], [ "Proof of Theorem ", "We first give an upper estimate for $\\mathsf {h}_{L\\cap G K_n}(x)$ .", "Proposition 6.1 Suppose that $K \\subset \\mathbb {R}^n$ is a closed convex cone and $L\\subset \\mathbb {R}^m$ is a subspace with $K\\ne \\lbrace 0\\rbrace , L\\ne \\lbrace 0\\rbrace $ .", "Fix $x \\in L\\cap \\partial B_m$ .", "Then there exists some universal constant $C>0$ such that for any $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\Big (\\mathsf {h}^2_{L\\cap G K_n}(x)>\\Big \\lbrace \\delta (K)-\\delta (L^\\circ )+C\\sqrt{t}\\cdot \\Big (\\sqrt{\\delta (K)}\\vee \\sqrt{\\delta (L^\\circ )}\\Big )+C t\\Big \\rbrace _+\\Big )\\le e^{-t}.$ Here $K_n\\equiv K\\cap B_n$ .", "We shall write $\\mathsf {h}\\equiv \\mathsf {h}_{L\\cap G K_n}$ for notational convenience in the proof.", "Using the same arguments as in the proof of (REF ) (a pointwise version of Proposition REF ) up to (REF ), for any $z \\in \\mathbb {R}$ and $R>0$ , $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)>z\\big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\sup _{\\mu \\in K_n} \\inf _{1\\le \\beta \\le R} \\beta \\bigg \\lbrace \\langle g,\\mu \\rangle - \\Vert \\mu \\Vert _{}\\sup _{w \\in L^\\circ (\\beta ;x)} \\langle h,w\\rangle \\bigg \\rbrace >z \\bigg ),$ where recall $L^\\circ (\\beta ;x)=\\lbrace (v-x)/\\Vert v-x\\Vert _{}:v \\in L^\\circ , \\Vert v-x\\Vert _{}=\\beta \\rbrace $ is defined in (REF ).", "For subspace $L$ and $x \\in L\\cap \\partial B_m$ , $L^\\circ (\\beta ;x)$ can be computed exactly: With $\\ell ^\\circ \\equiv \\dim (L^\\circ )+1\\le m$ , there exists some orthogonal matrix $O_x \\in \\mathbb {R}^{m\\times m}$ such that $L^\\circ (\\beta ;x)=O_x\\big \\lbrace v \\in \\mathbb {R}^m: \\Vert v\\Vert _{}=1, v|_{(\\ell ^\\circ :m]}=0, v_1 = 1/\\beta \\big \\rbrace .$ Consequently, with $h^x\\equiv O_x^\\top h \\sim \\mathcal {N}(0,I_m)$ , $&\\sup _{\\mu \\in K_n} \\inf _{1\\le \\beta \\le R} \\beta \\bigg \\lbrace \\langle g,\\mu \\rangle - \\Vert \\mu \\Vert _{}\\sup _{w \\in L^\\circ (\\beta ;x)} \\langle h,w\\rangle \\bigg \\rbrace \\nonumber \\\\&=\\sup _{\\mu \\in K_n} \\inf _{1\\le \\beta \\le R} \\beta \\bigg \\lbrace \\langle g,\\mu \\rangle - \\Vert \\mu \\Vert _{}\\cdot \\bigg (\\frac{1}{\\beta }h_1^x + \\sqrt{1-\\frac{1}{\\beta ^2}} \\cdot \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{}\\bigg ) \\bigg \\rbrace \\nonumber \\\\&\\le \\sup _{\\mu \\in K,\\langle \\mu ,g\\rangle \\ge 0} \\inf _{1\\le \\beta \\le R} \\Big \\lbrace \\beta \\langle g,\\mu \\rangle - \\sqrt{\\beta ^2-1}\\cdot \\Vert \\mu \\Vert _{} \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{} \\Big \\rbrace +\\vert h_1^x\\vert .$ By Lemma REF below, the first term above can be further bounded from above by $&\\bigg \\lbrace \\sup _{\\mu \\in K_n,\\langle \\mu ,g\\rangle \\ge 0}\\Big (\\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{}^2\\Big )_+ \\bigg \\rbrace ^{1/2}+ \\frac{C \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{} }{R}\\\\& = \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K_n}\\langle g,\\mu \\rangle \\bigg )^2- \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{}^2\\bigg \\rbrace _+^{1/2}+ \\frac{C \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{} }{R}.$ Combined with (REF ), we have $&\\sup _{\\mu \\in K_n} \\inf _{1\\le \\beta \\le R} \\beta \\bigg \\lbrace \\langle g,\\mu \\rangle - \\Vert \\mu \\Vert _{}\\sup _{w \\in L^\\circ (\\beta ;x)} \\langle h,w\\rangle \\bigg \\rbrace \\nonumber \\\\&\\le \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K_n}\\langle g,\\mu \\rangle \\bigg 
)^2- \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{}^2\\bigg \\rbrace _+^{1/2} + \\frac{C \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{} }{R}+\\vert h_1^x\\vert .$ Using (REF ) and the above display, we have $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)>z\\big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K_n}\\langle g,\\mu \\rangle \\bigg )^2- \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{}^2\\bigg \\rbrace _+^{1/2} + \\frac{C \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{} }{R}+\\vert h_1^x\\vert >z \\bigg ).$ Taking $R\\uparrow \\infty $ , we obtain the estimate $\\operatorname{\\mathbb {P}}\\big (\\mathsf {h}(x)>z\\big )&\\le 2 \\operatorname{\\mathbb {P}}\\bigg (\\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K_n}\\langle g,\\mu \\rangle \\bigg )^2- \\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2} +\\vert h_1^x\\vert \\ge z \\bigg ).$ Using Gaussian concentration as in Proposition REF , there exists some universal constant $C_0>0$ such that for any $t\\ge 1$ , with probability at least $1-e^{-t}$ , $&\\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K\\cap B_n}\\langle g,\\mu \\rangle \\bigg )^2-\\bigg (\\sup _{v \\in L^\\circ \\cap B_m} \\langle h,v\\rangle \\bigg )^2\\bigg \\rbrace _+^{1/2}\\\\& \\le \\Big (\\operatorname{\\mathfrak {w}}(K\\cap B_n)-\\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)+C_0\\sqrt{t}\\Big )_+^{1/2}\\cdot \\Big (\\operatorname{\\mathfrak {w}}(K\\cap B_n)+\\operatorname{\\mathfrak {w}}(L^\\circ \\cap B_m)+C_0\\sqrt{t}\\Big )_+^{1/2}\\\\& \\le \\Big \\lbrace \\delta (K)-\\delta (L^\\circ )+C_0\\sqrt{t}\\cdot \\Big (\\sqrt{\\delta (K)}\\vee \\sqrt{\\delta (L^\\circ )}\\Big )+C_0 t\\Big \\rbrace _+^{1/2}.$ Consequently the right hand side of (REF ) can be bounded by $\\mathbf {1}\\bigg (\\Big \\lbrace \\delta (K)-\\delta (L^\\circ )+C\\sqrt{t}\\cdot \\Big (\\sqrt{\\delta (K)}\\vee \\sqrt{\\delta (L^\\circ )}\\Big )+C t\\Big \\rbrace _+^{1/2}\\ge z \\bigg )+ Ce^{-t/C}.$ The proof is complete by possibly adjusting constants.", "The proof of Proposition REF above makes use of the following result.", "Lemma 6.2 Let $a=(a_1,a_2)^\\top \\in \\mathbb {R}_{\\ge 0}^2$ , and let $\\mathsf {Q}_a:\\mathbb {R}_{\\ge 1}\\rightarrow \\mathbb {R}$ be defined by $\\mathsf {Q}_a(\\beta )\\equiv a_1\\beta -a_2\\sqrt{\\beta ^2-1},\\, \\beta \\ge 1.$ Then there exists some universal constant $C>0$ such that for any $R>1$ , $\\bigg (\\inf _{1\\le \\beta \\le R}\\mathsf {Q}_a(\\beta )-\\sqrt{(a_1^2-a_2^2)_+}\\bigg )_+\\le \\frac{Ca_2}{R}.$ Moreover, $\\inf _{\\beta \\ge 1}\\mathsf {Q}_a(\\beta )={\\left\\lbrace \\begin{array}{ll}\\sqrt{a_1^2-a_2^2},& a_1\\ge a_2,\\\\-\\infty , & a_1<a_2.\\end{array}\\right.", "}$ The derivative of $\\mathsf {Q}_a$ is easily computed as $\\mathsf {Q}_a^{\\prime }(\\beta )= a_1-a_2\\frac{\\beta }{\\sqrt{\\beta ^2-1}},\\, \\beta >1.$ So $\\mathsf {Q}_a^{\\prime }$ is non-decreasing with $\\lim _{\\beta \\downarrow 1} \\mathsf {Q}_a^{\\prime }(\\beta )=-\\infty ,\\quad \\mathsf {Q}_a^{\\prime }(R)=a_1-a_2\\frac{R}{\\sqrt{R^2-1}},\\quad \\lim _{\\beta \\uparrow \\infty } \\mathsf {Q}_a^{\\prime }(\\beta )=a_1-a_2.$ Case 1.", "Suppose $\\mathsf {Q}_a^{\\prime }(R)\\ge 0$ .", "Then the global infimum of $\\mathsf {Q}_a$ is attained at some $\\beta _\\ast \\in [1,R]$ that solves $\\mathsf {Q}_a^{\\prime }(\\beta _\\ast )=0$ , i.e., $\\beta _\\ast =\\sqrt{a_1^2/(a_1^2-a_2^2)}$ .", "In other words, $\\inf _{1\\le \\beta \\le R} \\mathsf {Q}_a^{\\prime }(\\beta ) = \\mathsf {Q}_a^{\\prime 
}(\\beta _\\ast )=\\sqrt{a_1^2-a_2^2}.$ Case 2.", "Suppose $\\mathsf {Q}_a^{\\prime }(R)<0$ .", "Then $\\mathsf {Q}_a^{\\prime }$ is globally non-positive on $[1,R]$ , and therefore for any $R>1$ , as $a_1<a_2(1+O(1/R^2))$ , $\\inf _{1\\le \\beta \\le R} \\mathsf {Q}_a(\\beta ) = \\mathsf {Q}_a(R) =R\\bigg (a_1-a_2 \\sqrt{1-\\frac{1}{R^2}}\\bigg )\\le \\frac{C a_2}{R}.$ In this case, as $\\sqrt{(a_1^2-a_2^2)_+}\\le Ca_2/R$ , $\\bigg (\\inf _{1\\le \\beta \\le R}\\mathsf {Q}_a(\\beta )-\\sqrt{(a_1^2-a_2^2)_+}\\bigg )_+\\le \\frac{Ca_2}{R}.$ Combining the two cases concludes the proof for the claim for finite $R>1$ .", "The case for $R=\\infty $ can be argued similarly.", "In particular, for $a_1>a_2$ , the global minimum is computed exactly in the same way as in Case 1 above.", "For the case $a_1\\le a_2$ , $\\inf _{\\beta \\ge 1}\\mathsf {Q}_a(\\beta )&= \\lim _{\\beta \\uparrow \\infty } \\mathsf {Q}_a(\\beta ) = \\lim _{\\beta \\uparrow \\infty } \\beta \\cdot \\Big (a_1-a_2\\sqrt{1-\\beta ^{-2}}\\Big )={\\left\\lbrace \\begin{array}{ll}0,& a_1=a_2;\\\\-\\infty , & a_1<a_2.\\end{array}\\right.", "}$ The claim follows by noting that the case $a_1=a_2$ can be assimilated into the first case.", "We next give a lower estimate for $\\mathsf {h}_{L\\cap G K_n}(x)$ .", "Proposition 6.3 Suppose that $K \\subset \\mathbb {R}^n$ is a closed convex cone and $L\\subset \\mathbb {R}^m$ is a subspace with $K\\ne \\lbrace 0\\rbrace , L\\ne \\lbrace 0\\rbrace $ .", "Fix $x \\in L\\cap \\partial B_m$ .", "Then there exists some universal constant $C>0$ such that for any $t\\ge 1$ , $\\operatorname{\\mathbb {P}}\\Big (\\mathsf {h}^2_{L\\cap G K_n}(x)<\\Big \\lbrace \\delta (K)-\\delta (L^\\circ )-C\\sqrt{t}\\cdot \\Big (\\sqrt{\\delta (K)}\\vee \\sqrt{\\delta (L^\\circ )}\\Big )-C t\\Big \\rbrace _+\\Big )\\le e^{-t}.$ Here $K_n\\equiv K\\cap B_n$ .", "We will use the estimate in (REF ) to compute the quantity in the right hand side therein: Recall $L^\\circ (\\beta ;x)$ and $h^x$ as in the proof of Proposition REF .", "We have $&\\sup _{\\mu \\in K_n}\\inf _{v \\in L^\\circ } \\Big \\lbrace -\\Vert \\mu \\Vert _{}\\langle h,v-x\\rangle +\\Vert v-x\\Vert _{}\\langle g,\\mu \\rangle \\Big \\rbrace \\\\& =\\sup _{\\mu \\in K_n} \\bigg [ \\inf _{\\beta \\ge 1} \\beta \\bigg \\lbrace \\langle g,\\mu \\rangle - \\Vert \\mu \\Vert _{}\\sup _{w \\in L^\\circ (\\beta ;x)} \\langle h,w\\rangle \\bigg \\rbrace \\bigg ]\\\\&\\ge \\sup _{\\mu \\in K,\\langle \\mu ,g\\rangle \\ge 0} \\inf _{\\beta \\ge 1} \\Big \\lbrace \\beta \\langle g,\\mu \\rangle - \\sqrt{\\beta ^2-1}\\cdot \\Vert \\mu \\Vert _{} \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{} \\Big \\rbrace -\\vert h_1^x\\vert .$ Here the last inequality follows from similar arguments as in (REF ) in the proof of Proposition REF .", "By Lemma REF , the first term above equals $&\\bigg \\lbrace \\sup _{\\mu \\in K_n,\\langle \\mu ,g\\rangle \\ge 0}\\Big (\\langle g,\\mu \\rangle ^2-\\Vert \\mu \\Vert _{}^2\\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{}^2\\Big )_+ \\bigg \\rbrace ^{1/2} = \\bigg \\lbrace \\bigg (\\sup _{\\mu \\in K_n}\\langle g,\\mu \\rangle \\bigg )^2- \\Vert h_{[2:\\ell ^\\circ ]}^x \\Vert _{}^2\\bigg \\rbrace _+^{1/2}.$ The remaining proof follows from the same lines as in Proposition REF .", "Now we are in a good position to prove Theorem REF .", "We write $S_k\\equiv \\Pi _{m\\rightarrow k} L$ .", "By Propositions REF and REF , for any $x \\in S_k\\cap \\partial B_k$ , there exists some $C_0=C_0(\\tau )>0$ such that for any $t\\ge 1$ , with probability 
at least $1-e^{-t}$ , we have $\\big \\vert \\mathsf {h}_{L\\cap G K_n}(x)- \\sqrt{\\delta (K)-\\delta (L^\\circ )}\\big \\vert \\le C_0\\sqrt{t}.$ Now choosing $t\\equiv \\varepsilon ^2(\\delta (K)-\\delta (L^\\circ ))/(4C_0^2)\\ge c \\varepsilon ^2 \\delta (K)$ , for any $x \\in S_k\\cap \\partial B_k$ with probability at least $1-\\exp (-c\\varepsilon ^2\\delta (K))$ , $\\bigg (1-\\frac{\\varepsilon }{2}\\bigg )\\sqrt{\\delta (K)-\\delta (L^\\circ )}\\le \\mathsf {h}_{L\\cap G K_n}(x)\\le \\bigg (1+\\frac{\\varepsilon }{2}\\bigg ) \\sqrt{\\delta (K)-\\delta (L^\\circ )}.$ Let $T_\\varepsilon $ be a minimal $(\\varepsilon /2)\\sqrt{\\delta (K)-\\delta (L^\\circ )}$ net of $S_k\\cap \\partial B_k$ under the Euclidean metric.", "Then an easy volume estimate shows that the log cardinality of $T_\\varepsilon $ can be bounded by $\\log T_\\varepsilon \\lesssim k \\log \\bigg (\\frac{1}{\\varepsilon \\sqrt{\\delta (K)-\\delta (L^\\circ )}}\\bigg )\\lesssim _\\tau k \\log \\bigg (\\frac{1}{\\varepsilon ^2\\delta (K)}\\bigg ),$ where the last inequality follows from the assumption that $\\delta (K)-\\delta (L^\\circ )\\gtrsim _\\tau \\delta (K)\\gtrsim 1$ .", "As $\\mathsf {h}_{L\\cap G K_n}$ is 1-Lipschitz, we have $\\sup _{x \\in S_k\\cap \\partial B_k}\\inf _{x^{\\prime } \\in T_\\varepsilon }\\big \\vert \\mathsf {h}_{L\\cap G K_n}(x)-\\mathsf {h}_{L\\cap G K_n}(x^{\\prime }) \\big \\vert \\le \\frac{\\varepsilon }{2}\\sqrt{\\delta (K)-\\delta (L^\\circ )}.$ Combining (REF ) and (REF ) using a union bound, we have $(1-\\varepsilon )\\sqrt{\\delta (K)-\\delta (L^\\circ )}\\le \\mathsf {h}_{L\\cap G K_n}(x)\\le (1+\\varepsilon ) \\sqrt{\\delta (K)-\\delta (L^\\circ )},\\quad \\forall x \\in S_k\\cap \\partial B_k,$ with probability at least $1- \\exp \\bigg [-c\\varepsilon ^2\\delta (K)+C k \\log \\bigg (\\frac{1}{\\varepsilon ^2\\delta (K)}\\bigg )\\bigg ].$ The claim follows from the assumption on the dimension $k$ ." ] ]
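Aside (our own numerical check, appended after the proofs; the grid and parameter pairs are arbitrary choices). The closed-form infimum in Lemma 6.2 can be confirmed directly by brute force:

```python
# Grid check (illustrative) of Lemma 6.2: inf_{beta >= 1} { a1*beta - a2*sqrt(beta^2-1) }
# equals sqrt(a1^2 - a2^2) when a1 >= a2, and is -infinity when a1 < a2.
import numpy as np

betas = np.linspace(1.0, 201.0, 2_000_001)          # dense grid on [1, 201]
for a1, a2 in [(1.0, 0.3), (2.0, 1.9), (1.0, 1.0), (0.5, 0.8)]:
    grid_inf = np.min(a1 * betas - a2 * np.sqrt(betas**2 - 1.0))
    closed_form = np.sqrt(a1**2 - a2**2) if a1 >= a2 else -np.inf
    print(f"a1={a1}, a2={a2}: grid inf ~ {grid_inf:.4f}, closed form = {closed_form:.4f}")
# For a1 > a2 the two values agree up to grid error; for a1 = a2 the grid value decays
# towards 0 as the grid is extended; for a1 < a2 it keeps decreasing towards -infinity.
```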
2212.05545
[ [ "PromptCAL: Contrastive Affinity Learning via Auxiliary Prompts for\n Generalized Novel Category Discovery" ], [ "Abstract Although existing semi-supervised learning models achieve remarkable success in learning with unannotated in-distribution data, they mostly fail to learn on unlabeled data sampled from novel semantic classes due to their closed-set assumption.", "In this work, we target a pragmatic but under-explored Generalized Novel Category Discovery (GNCD) setting.", "The GNCD setting aims to categorize unlabeled training data coming from known and novel classes by leveraging the information of partially labeled known classes.", "We propose a two-stage Contrastive Affinity Learning method with auxiliary visual Prompts, dubbed PromptCAL, to address this challenging problem.", "Our approach discovers reliable pairwise sample affinities to learn better semantic clustering of both known and novel classes for the class token and visual prompts.", "First, we propose a discriminative prompt regularization loss to reinforce semantic discriminativeness of prompt-adapted pre-trained vision transformer for refined affinity relationships.", "Besides, we propose a contrastive affinity learning stage to calibrate semantic representations based on our iterative semi-supervised affinity graph generation method for semantically-enhanced prompt supervision.", "Extensive experimental evaluation demonstrates that our PromptCAL method is more effective in discovering novel classes even with limited annotations and surpasses the current state-of-the-art on generic and fine-grained benchmarks (with nearly $11\\%$ gain on CUB-200, and $9\\%$ on ImageNet-100) on overall accuracy." ], [ "Introduction", "The deep neural networks have demonstrated favorable performance in the Semi-Supervised Learning (SSL) setting [42], [51], [48], [15], [28].", "Some recent works can even achieve comparable performance to their fully-supervised counterparts using few annotations for image recognition [40], [48], [3].", "However, these approaches heavily rely on the closed-world assumption that unlabeled data share the same underlying class label space as the labeled data [49], [10].", "In many realistic scenarios, this assumption does not hold true because of the dynamic nature of real-world tasks where novel classes can appear in addition to known classes.", "Figure: PromptCAL Overview.", "In contrast to previous method based on semi-supervised contrastive learning, PromptCAL constructs affinity graph on-the-fly to guide representation learning of the class token and prompts.", "Meanwhile, our prompt-adapted backbone can be tuned to enhance semantic discriminativeness.PromptCAL can discover reliable affinities from a memory bank, especially for novel classes.Therefore, PromptCAL is better task-aligned and discriminative to novel semantic information.In contrast to SSL, the Novel Category Discovery (NCD) problem was introduced by [12] to relax the closed-world assumption of SSL, which assumes the unlabeled data contain novel classes.", "Recently, the nascent Generalized Novel Category Discovery (GNCD) problem, first proposed in [43], [4], extends NCD and assumes the unlabeled data can contain both known and novel classes, which is more pragmatic and challenging.", "To be more specific, GNCD intends to categorize images from known and novel classes given predefined categories in the training set comprising labeled-knowns, unlabeled-knowns, and unlabeled-novels.", "Our work focuses on GNCD problem.", "The key challenge of 
GNCD is to discriminate among novel classes when only the ground truths of known classes are accessible in the training set.", "Recent studies show that self-supervised pre-trained representations are conducive to discovering novel semantics [5], [43], [4], [55], [11].", "A typical work on GNCD [43] takes advantage of the large-scale pre-trained visual transformer (ViT) [38], and learns robust clusters for known and novel classes through semi-supervised contrastive learning on downstream datasets.", "However, we discover that the remarkable potential of the pre-trained ViT is actually suppressed by this practice, due to the class collision [54] induced by abundant false negatives in the contrastive loss, i.e., considering different unlabeled images from the same or similar semantic class as false negatives.", "As supported by empirical studies, abundant false negatives in contrastive training can deteriorate the compactness and purity of semantic clustering [16], [20], [54], [5].", "Based on empirical investigation, we show that this issue is particularly severe in category discovery.", "Furthermore, although the existing commonly adopted practice [43], [4] of freezing most parts of the pre-trained backbone can alleviate overfitting on known classes, it constrains the flexibility and adaptability of backbones [18].", "Lack of adaptability inhibits models from learning discriminative semantic information on downstream datasets.", "To address the above limitations and to learn better semantically discriminative representations, we propose the Prompt-based Contrastive Affinity Learning (PromptCAL) framework to tackle the GNCD problem.", "To be specific, our approach aims to discover semantic clusters in unlabeled data by simultaneous semantic prompt learning based on our Discriminative Prompt Regularization (DPR) loss and representation calibration based on our Contrastive Affinity Learning (CAL) process.", "Firstly, CAL discovers abundant reliable pseudo positives for the DPR loss and contrastive loss based on generated affinity graphs.", "These semantic-aware pseudo labels further enhance the semantic discriminativeness of DPR supervision.", "Secondly, DPR regularizes semantic representations of ensembled prompts, which facilitates the discovery of more accurate pseudo labels at the next step of CAL.", "Therefore, as model and prompt representations are iteratively enhanced, we can obtain higher-quality pseudo positives for further self-training as well as acquire better semantic clustering.", "Our PromptCAL achieves State-of-The-Art (SoTA) performance in extensive experimental evaluation on six benchmarks.", "Specifically, PromptCAL remarkably surpasses the previous SoTA by more than $10\\%$ clustering accuracy on the fine-grained CUB-200 and StanfordCars datasets; it also significantly outperforms previous SoTAs by nearly $4\\%$ on ImageNet-100 and $7\\%$ on CIFAR-100.", "Interestingly, we identify that both DPR-supervised prompts and unsupervised prompts of PromptCAL can learn semantic discriminativeness, which advances the flexibility of the pre-trained backbone.", "Furthermore, PromptCAL still achieves the best performance in challenging low-labeling and few-class setups.", "Contributions.", "Our major contributions are: (1) We propose a two-stage framework for the generalized novel category discovery problem, in which semantic prompt tuning and contrastive affinity learning mutually reinforce and benefit each other during the learning process.", "(2) We propose two synergistic learning objectives,
contrastive affinity loss and discriminative prompt regularization loss, based on our semi-supervised adapted affinity graphs to enhance semantic discriminativeness.", "(3) We comprehensively evaluate and validate our method on three generic (i.e., CIFAR-10, CIFAR-100, and ImageNet-100) and three fine-grained benchmarks (i.e., CUB-200, Aircraft, and StanfordCars), achieving state-of-the-art performance, thereby showing its effectiveness.", "(4) We further showcase the generalization ability of PromptCAL and its effectiveness in more challenging low-labeling and few-class setups." ], [ "Related Work", "Category discovery.", "Novel Category Discovery (NCD), first formulated by DTC [12], aims to categorize the unlabeled novel classes by transferring the knowledge from labeled known classes [12], [11], [53], [56], [47], [55], [9].", "The challenging NCD differs from SSL [42], [34] in that the unlabeled data are sampled from a distinct underlying semantic distribution.", "DTC [12] proposes to jointly warm up network weights and cluster prototypes based on the DEC [47] method on unlabeled data, and then fit an annealing-sharpened distribution.", "RankStats [11] and RS+ [53] propose to utilize ranking statistics to generate pseudo positives among unlabeled novel classes.", "OpenMix [56] transfers semantic knowledge by MixUp augmentation [52] between known and novel classes as well as between reliable novel anchors and other novel examples.", "NCL [55] proposes a neighborhood contrastive loss and a hard-negative generation process by mixing [52] novel and known classes.", "UNO [9] first formulates the NCD problem as classification based on dynamic class assignments by the Sinkhorn-Knopp algorithm [22].", "The Generalized Novel Category Discovery (GNCD) problem, first proposed in [43], further extends NCD under a more realistic assumption that unlabeled data can be sampled from both novel and known classes.", "Specifically, the model learns to categorize unlabeled training data containing known and novel classes based on the knowledge of labeled known classes.", "Besides, a concurrent work, ORCA [4], proposes an uncertainty adaptive margin loss to reduce the intra-class variances between known and novel classes.", "GCD [43] addresses this challenging problem via semi-supervised contrastive learning on a large-scale pre-trained visual transformer (ViT) followed by constrained KMeans [1], [2].", "However, GCD still possesses limitations: first, the frozen backbone lacks adaptability to downstream tasks; besides, abundant false negatives will degenerate the semantic representation [16], [7], [54], [20].", "To fully unleash the potential of the pre-trained ViT, we address these two critical issues via our proposed prompt-based contrastive affinity learning.", "Positive Mining in Neighborhoods.", "Some recent works in self-supervised learning discovered that mining positives to antagonize the side effect of abundant false negatives in the sample-wise contrastive loss is essential to the downstream performance [20], [8], [16], [54], [36].", "FNC [16] comprehensively analyzes the adverse effect of false negatives on contrastive learning SoTAs and performs positive mining based on ensembled similarities on local patch pairs.", "LA [58] proposes to learn better representations through soft clusters in neighborhoods at different scales.", "NNCLR [8], NCL [55], and WCL [54] conduct positive mining based on K-Nearest Neighbors (KNN) as pseudo positives to improve contrastive learning.", "We find one work in SSL [17] also
adopts a graph diffusion algorithm to propagate pseudo labels.", "However, there are major differences between their work and ours: first, features in our context are prone to open-set noise [10], which makes our setting more challenging than SSL; second, we conduct an efficient online diffusion per iteration via a graph subsampling strategy, while they conduct diffusion per epoch on the entire dataset; third, we compute affinity propagation on a consensus affinity graph with prior knowledge, while they conduct propagation on a naive KNN graph.", "Our work extends consensus KNN [33].", "Originally, it utilizes non-learnable SIFT [30] features, whereas we exploit deep features and can train end-to-end.", "We also incorporate additional constraint knowledge into the graphs.", "Visual prompt learning.", "Prompt learning originates from the field of Natural Language Processing (NLP) [29].", "Visual prompt learning (VPT) [18] tunes embedded visual prompts with a frozen pre-trained ViT backbone supervised by downstream objectives, which achieves better transfer.", "However, based on our experimental analysis, VPT [18] does not exhibit significant benefits, especially on fine-grained datasets.", "Our objective acts as prompt regularization and a weaker semantic supervision signal, which differs in design and learning goal from prompt ensembling [37], [19] and prompt composition [13] in NLP [29].", "Figure: Overview of our PromptCAL framework. Our prompt-adapted backbone outputs a class embedding and an ensembled prompt embedding.", "(a) In warmup training, we conduct semi-supervised contrastive clustering (Semi-sup.", "Contrastive Loss) on the projected features of the class token and ensembled prompt, respectively.", "(b) In the contrastive affinity learning stage, at each iteration, we forward the student and the EMA (exponentially moving averaged) teacher with different augmented views of images.", "Output teacher embeddings are enqueued into their corresponding token-specific memory. We iteratively compute the semi-supervised contrastive loss on the current batch and our contrastive affinity loss between student embeddings and memory embeddings, with pseudo-labels from the affinity graph dynamically generated by SemiAG.", "(c) We generate affinity graphs for the class embedding and prompt embedding, respectively, via affinity propagation on their corresponding consensus KNN graphs."
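To make the prompt-adaptation mechanism discussed above concrete, the following is a minimal PyTorch-style sketch of VPT-Deep-style prompt prepending (learnable prompt tokens inserted before a transformer block, with the block itself kept frozen). The class name, the toy block, and the shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PromptedBlock(nn.Module):
    """Wraps one (frozen) transformer block and prepends learnable prompt tokens.

    Schematic sketch: `block` stands in for a pre-trained ViT block, and
    `num_prompts`/`dim` mirror the setup described later in the paper
    (5 prompts, ViT-B/16 width 768). Not the authors' code.
    """
    def __init__(self, block: nn.Module, num_prompts: int = 5, dim: int = 768):
        super().__init__()
        self.block = block
        # Learnable prompt tokens, randomly initialized.
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1 + N_patches, dim); keep the [CLS] token first.
        B = x.shape[0]
        prompts = self.prompts.expand(B, -1, -1)
        x = torch.cat([x[:, :1], prompts, x[:, 1:]], dim=1)  # [CLS] | [P] | patches
        return self.block(x)

# Toy usage: a standard encoder layer stands in for a pre-trained ViT block.
if __name__ == "__main__":
    vit_block = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
    prompted = PromptedBlock(vit_block, num_prompts=5, dim=768)
    tokens = torch.randn(2, 1 + 196, 768)   # [CLS] + 14x14 patch tokens
    out = prompted(tokens)
    print(out.shape)                        # torch.Size([2, 202, 768])
```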
], [ "Method", "The challenging aspect of GNCD in comparison to SSL is clustering novel semantics under both semantic shifts and missing annotations [49], [42].", "However, existing methods [43], [11], [53], [55] cannot reliably discover and employ semantic affinities on pre-trained representations.", "Meanwhile, recent SoTAs [43], [4] lack suitable strategies to adapt the pre-trained backbone to learn discriminative semantic information without overfitting on known classes.", "To this end, we propose PromptCAL, which consists of two synergistic learning objectives: discriminative prompt regularization (DPR) and contrastive affinity learning (CAL).", "The whole framework is displayed in Fig.", "REF .", "Specifically, in the first stage, we learn warm-up representation (in Sec.", "REF ) for further tuning.", "Our DPR loss which is applied to both stages for prompt regularization is also explained.", "In the second stage, we discover reliable pseudo positives on generated affinity embedding graphs based on semi-supervised affinity generation (SemiAG) mechanism (in Sec.", "REF ).", "Next, we propose our contrastive affinity loss (in Sec.", "REF ) on pseudo labels generated by online SemiAG supported by embedding memories.", "Lastly, we also present a detailed PromptCAL training algorithm in Appendix ." ], [ "Preliminaries", "Before introducing our method, we formulate the GNCD problem and present some preliminaries.", "Problem Definition.", "Our GNCD setting follows [43].", "Specifically, we assume that the training dataset $\\mathcal {D}=\\mathcal {D}_l \\bigcup \\mathcal {D}_u$ comprises two subsets: a labeled set $\\mathcal {D}_l=\\lbrace x_i, y_i\\rbrace _{i=1}^{N_1} \\subset \\mathcal {X}_l \\times \\mathcal {Y}_l$ with its label space $\\mathcal {Y}_l=\\mathcal {C}_{kwn}$ , and an unlabeled set $\\mathcal {D}_u=\\lbrace x_i\\rbrace _{i=1}^{N_2} \\subset \\mathcal {X}_u$ with its underlying label space $\\mathcal {Y}_u=\\mathcal {C}=\\mathcal {C}_{kwn} \\bigcup \\mathcal {C}_{new}$ .", "Here, $\\mathcal {C}$ , $\\mathcal {C}_{kwn}$ , and $\\mathcal {C}_{new}$ denote the label set for All, Known, and New classes, respectively.", "Following [43], we assume the knowledge of $|\\mathcal {C}|$ .", "Architecture.", "We take a self-supervised pre-trained ViT as our backbone [38].", "We denote our visual prompt-adapted ViT backbone [18] as $f(\\cdot | \\theta , \\theta _{\\text{P}})$ parameterized by prompts $\\theta _{\\text{P}}$ and last block weights $\\theta $ .", "In each mini-batch $\\mathcal {B}$ , there are two augmented views for each sample.", "Given a sample vector $\\mathbf {x} \\in \\mathcal {B}$ , we can extract its embedding $\\mathbf {h}=f(\\mathbf {x} | \\theta , \\theta _{\\text{P}}) \\in \\mathcal {H}$ and project $\\mathbf {h}$ into feature vector $\\mathbf {z}=g(\\mathbf {h} | \\theta _{\\text{H}}) \\in \\mathcal {Z}$ through a projection head $g(\\cdot | \\theta _{\\text{H}})$ with parameters $\\theta _\\text{H}$ .", "Here, $\\mathcal {H}, \\mathcal {Z}$ denote embedding and feature spaces.", "Contrastive Loss.", "To simplify notations of PromptCAL, we extend the definition of the standard supervised contrastive loss [21] as follows.", "Given a $l_2$ -normalized query vector $\\mathbf {t}_q$ and a set of $l_2$ -noramlized key vectors $\\mathbf {T}_k$ (which can be from the embedding or feature space), we define: $\\begin{aligned}& L_{\\text{con}}(\\mathbf {t}_q, \\mathbf {T}_k; \\tau , \\mathcal {P}, \\mathcal {A}) \\\\& = - \\frac{1}{|\\mathcal {P}(\\mathbf {t}_q)|} \\sum 
_{\\mathbf {t}_k^+ \\in \\mathcal {P}(\\mathbf {t}_q)}{\\log \\frac{\\exp (\\frac{\\mathbf {t}_q \\cdot \\mathbf {t}_k^+}{\\tau })}{ \\sum _{\\mathbf {t_a} \\in \\mathcal {A}( \\mathbf {t}_q ) }\\exp (\\frac{\\mathbf {t}_q \\cdot \\mathbf {t_a}}{\\tau })}} \\end{aligned}$ where $\\tau $ is the temperature parameter of the contrastive loss, and $\\cdot $ denotes the cosine similarity operation.", "Here, $\\mathcal {P}(\\mathbf {t}_q)$ and $\\mathcal {A}(\\mathbf {t}_q)$ represent the positive set and anchor set of the query $\\mathbf {t}_q$ , respectively, which are subsets of $\\mathbf {T}_k$ ." ], [ "Warm-up Phase with Discriminative Prompt Regularization", "Discriminative Prompt Regularization.", "Although computation overheads are largely reduced by only tuning the last block [43], it restricts the backbone from better learning semantic representations and adapting to diverse downstream datasets.", "Counterintuitively, we discover that naively adapting the backbone with visual prompts [18] overfits small datasets (refer to ablations on CUB-200 [44] in Sec.", "REF ).", "Motivated by [26], [45], we propose a discriminative prompt regularization loss to regularize and force prompts to learn semantically discriminative features with a task-related auxiliary loss.", "We investigate the superiority of DPR supervision on our prompt-adapted backbone in the ablation study (Sec.", "REF ) and Appendix .", "We assign input prompts at the last ViT block as [P] tokens (short for prompt), the outputs of which are ensembled and supervised by a task-related clustering loss in both training stages.", "All the remaining prompts are learned without supervision, which provides the backbone with extra flexibility.", "Concretely, we average the $l_2$-normalized output embeddings of all [P] tokens into an ensembled embedding $\\mathbf {h}_{\\text{P}}$ (the same shape as the class embedding), and forward it through the projection head to obtain $\\mathbf {z}_{\\text{P}}$ .", "Finally, we define the DPR task-related loss function on $\\mathbf {h}_{\\text{P}}$ /$\\mathbf {z}_{\\text{P}}$ to have the same form as the loss defined on $\\mathbf {h}$ /$\\mathbf {z}$ , but with a smaller weight $\\gamma $ .", "Warm-up Training.", "Since randomly initialized prompts are not ready for contrastive affinity learning, we apply warm-up training to prepare the class token and prompts with dataset-specific representations.", "The overall training objective in this stage is formulated as: $L_1(\\mathbf {x}) = L_{\\text{semi}}^{\\text{CLS}}(\\mathbf {z}) + \\gamma L_{\\text{semi}}^{\\text{P}}(\\mathbf {z}_{\\text{P}}) $ where $L_{\\text{semi}}^{\\text{CLS}}$ and $L_{\\text{semi}}^{\\text{P}}$ represent the semi-supervised contrastive loss (SemiCL) on [CLS] and its DPR counterpart on [P], respectively.", "Here, $\\gamma $ is the DPR loss weight.", "Further, based on the extended contrastive loss (Eq.", "REF ), the SemiCL on the [CLS] feature $\\mathbf {z} \\in \\mathbf {Z}_{\\mathcal {B}}$ is written as: $\\begin{aligned}L_{\\text{semi}}^{\\text{CLS}}(\\mathbf {z}) &= (1- \\alpha ) L_{\\text{con}}\\Big (\\mathbf {z}, \\mathbf {Z}_{\\mathcal {B}}; \\tau , \\mathcal {P}_{\\text{self}}, \\mathcal {A}_{\\text{self}} \\Big ) \\\\&+ \\alpha L_{\\text{con}}\\Big (\\mathbf {z}, \\mathbf {Z}_{\\mathcal {B}_l}; \\tau , \\mathcal {P}_{\\text{sup}}, \\mathcal {A}_{\\text{sup}}\\Big ) \\mathbb {I}\\Big (\\mathbf {z} \\in \\mathbf {Z}_{\\mathcal {B}_l} \\Big )\\end{aligned}$ where $\\mathbb {I}$ is an indicator function.", "The first and second terms denote the self-supervised and
supervised contrastive losses on projected features of the entire batch $\\mathbf {Z_{\\mathcal {B}}}$ and of only the labeled samples $\\mathbf {Z_{\\mathcal {B}_l}}$ , respectively.", "Following [14], [21], we define $\\mathcal {P}_{\\text{self}}(\\mathbf {z})$ as the augmented counterpart of $\\mathbf {z}$ in $\\mathbf {Z_{\\mathcal {B}}}$ , and define $\\mathcal {P}_{\\text{sup}}(\\mathbf {z})$ as all other features in $\\mathbf {Z_{\\mathcal {B}_l}}$ that share the same class label as $\\mathbf {z}$ .", "Besides, we have $\\mathcal {A}_{\\text{sup}}(\\mathbf {z})=\\mathbf {Z_{\\mathcal {B}_l}}-\\lbrace \\mathbf {z}\\rbrace $ and $\\mathcal {A}_{\\text{self}}(\\mathbf {z})=\\mathbf {Z_{\\mathcal {B}}}-\\lbrace \\mathbf {z}\\rbrace $ .", "Similar to Eq.", "REF , we can define the DPR loss function $L_{\\text{semi}}^{\\text{P}}$ on the ensembled prompt feature $\\mathbf {z}_{\\text{P}}$ in the overall loss (Eq.", "REF ).", "Figure: An intuitive toy example for SemiAG, which sequentially requires three operations. The relative pairwise distances are proportional to cosine distances in the embedding space. Each of the four graphs denotes the result obtained at each step after binarization with thresholds. Each operation can either remove false positives or retrieve ground-truth positives for the query embedding (dark green). Firstly, only reliable neighbors are retrieved as positives based on consensus information; secondly, more positives are retrieved by affinity propagation on the entire graph; and thirdly, pairwise constraints in the label information of labeled data (SemiPriori) are incorporated for affinity calibration." ], [ "Semi-supervised Affinity Generation", "Once the warmed-up semantic representations for the class token and prompts are obtained, abundant positive samples can be discovered by reliable pseudo-labeling methods for enhanced clustering and supervision signals at the next iteration.", "However, pseudo-labeling techniques in recent works (e.g., naive nearest neighbors, or pair-wise predictions as positives [55], [4], [9], [11], [20]) are not robust enough to semantic shifts [32].", "To address this issue, we propose a semi-supervised affinity generation method under the assumption that consensus local neighbors share the same semantics.", "Specifically, we first construct a consensus affinity graph in $\\mathcal {H}$ based on neighborhood statistics [33].", "Then, we conduct affinity propagation on the entire graph to calibrate affinities.", "Lastly, we incorporate the semi-supervised prior (SemiPriori) from $\\mathcal {D}_l$ into the graph.", "We explain these steps below.", "An illustrative example is presented in Fig.", "REF .", "The workflow of SemiAG operations is presented in Fig.", "REF (c).", "Consensus KNN graph.", "Given an embedding graph $\\mathbf {G}_{\\mathcal {H}}=(\\mathcal {V}, \\mathcal {E})$ whose node set $\\mathcal {V}=\\lbrace \\mathbf {h}_i\\rbrace _{i = 1}^{N_G}$ contains $N_G$ embeddings and edge set is $\\mathcal {E}=\\lbrace e_{i,j}= \\mathbf {h}_i \\cdot \\mathbf {h}_j\\rbrace _{i,j = 1}^{N_G}$ , we build a consensus graph $\\mathbf {G}_c=(g_{i,j})_{i,j = 1}^{N_G}$ on $\\mathcal {V}$ via consensus statistics.", "Each edge $g_{i,j}$ of $\\mathbf {G}_c$ is defined as: $g_{i,j}=\\left\\lbrace \\begin{aligned}&|\\lbrace \\mathbf {h}_c| \\mathbf {h}_i, \\mathbf {h}_j \\in \\mathcal {O}_K(\\mathbf {h}_c), \\forall \\mathbf {h}_c \\in \\mathcal {V} \\rbrace | & i\\ne j \\\\&0 & i=j ,\\end{aligned}\\right.$ where $\\mathcal {O}_{K}(\\mathbf {h}_c) = \\texttt {argtopK}_{\\mathbf
{h}_j}(\\lbrace \\mathbf {h}_j \\cdot \\mathbf {h}_c | \\mathbf {h}_j \\in \\mathcal {V} \\rbrace )$ denotes the $K$ -neighborhood of $\\mathbf {h}_c \\in \\mathcal {V}$ .", "Then, we convert it into $\\mathbf {\\tilde{G}}_c$ by row normalization.", "However, the consensus graph has a defect: the neighborhood consensus condition is rigorous and only considers local information, which means that abundant potential positives remain unretrieved.", "Affinity propagation with SemiPriori.", "To overcome this issue, we leverage the graph diffusion algorithm [50] on the probabilistic matrix $\\mathbf {\\tilde{G}}_c$ to propagate local affinities along multi-hop paths to characterize higher-order structural information and avoid degenerated solutions.", "Specifically, we apply the TPG diffusion algorithm [50], which iteratively computes the diffused graph $\\mathbf {\\tilde{G}}_d$ as: $\\mathbf {\\tilde{G}}_d^{(t+1)} = \\mathbf {\\tilde{G}}_c \\mathbf {\\tilde{G}}_d^{(t)} \\mathbf {\\tilde{G}}_c^T + \\mathbf {I}, \\quad t=0, \\ldots , \\eta -1$ where $\\mathbf {I}$ is the identity matrix, and $\\eta $ is the total number of diffusion steps.", "$\\mathbf {\\tilde{G}}_d^{(t)}$ denotes the $t$ -th step diffused graph and $\\mathbf {\\tilde{G}}_d^{(0)}=\\mathbf {\\tilde{G}}_c$ .", "We denote the final diffused graph as $\\mathbf {\\tilde{G}}_d$ .", "In Appendix , we provide more detailed descriptions.", "However, the consensus graph and affinity propagation neglect abundant prior information in the labeled data.", "To address this issue, we incorporate SemiPriori, i.e., we add sample-wise class labels as pairwise constraints to $\\mathbf {\\tilde{G}}_d$ .", "We set the edge to 1 if two nodes have the same label (i.e., $y_i = y_j$ ) and prune the edge if $y_i \\ne y_j$ .", "Meanwhile, we sparsify $\\mathbf {\\tilde{G}}_d$ with a pre-defined quantile $q$ ; then, the generated binarized affinity graph $\\mathbf {G}_b$ is given as: $\\mathbf {G}_b(i,j) = \\left\\lbrace \\begin{aligned}& 0 & & y_i \\ne y_j \\\\& 1 & & (y_i = y_j) \\vee \\Big (\\mathbf {\\tilde{G}}_d(i,j)>q\\Big ) \\\\\\end{aligned}\\right.$ On the binarized affinity graph $\\mathbf {G}_b$ , positive/negative pairs are regarded as reliable pseudo positives/negatives in the noisy embedding space for further contrastive affinity learning (in Sec.", "REF ).", "Therefore, pseudo-labels are computed for both labeled and unlabeled data, while those of labeled data are calibrated by SemiPriori.", "Note that we compute two binarized graphs for [CLS] and [P] embeddings, respectively."
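To make the three SemiAG steps concrete, here is a minimal NumPy sketch of the pipeline (consensus KNN graph, TPG-style diffusion, and SemiPriori binarization). The function names, toy data, and the exact thresholding details are illustrative assumptions under the equations above, not the authors' implementation.

```python
import numpy as np

def consensus_knn_graph(H: np.ndarray, K: int) -> np.ndarray:
    """Consensus KNN graph: edge (i, j) counts how many nodes c contain both i and j
    in their K-neighborhood (cosine similarity on l2-normalized embeddings)."""
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    sim = H @ H.T
    n = len(H)
    topk = np.argsort(-sim, axis=1)[:, :K]                  # K-neighborhood of each node
    member = np.zeros((n, n), dtype=float)
    member[np.arange(n)[:, None], topk] = 1.0               # member[c, j] = 1 if j in O_K(c)
    G_c = member.T @ member                                 # co-membership counts
    np.fill_diagonal(G_c, 0.0)
    return G_c / np.clip(G_c.sum(axis=1, keepdims=True), 1e-12, None)  # row normalization

def diffuse(G_c: np.ndarray, eta: int = 1) -> np.ndarray:
    """TPG-style diffusion: G <- G_c G G_c^T + I, repeated eta times (eta=1 in the paper)."""
    G_d = G_c.copy()
    for _ in range(eta):
        G_d = G_c @ G_d @ G_c.T + np.eye(len(G_c))
    return G_d

def binarize_with_semipriori(G_d, labels, quantile=0.8):
    """SemiPriori: same-label pairs -> 1, different-label pairs -> 0; otherwise
    threshold the diffused affinity at a quantile of its nonzero off-diagonal values."""
    off = G_d[~np.eye(len(G_d), dtype=bool)]
    q = np.quantile(off[off > 0], quantile)
    G_b = (G_d > q).astype(int)
    known = labels >= 0                                      # -1 marks unlabeled samples
    same = (labels[:, None] == labels[None, :]) & known[:, None] & known[None, :]
    diff = (labels[:, None] != labels[None, :]) & known[:, None] & known[None, :]
    G_b[same] = 1
    G_b[diff] = 0
    np.fill_diagonal(G_b, 0)
    return G_b

# Toy usage: 6 embeddings, the first 3 labeled (classes 0, 0, 1), the rest unlabeled.
H = np.random.RandomState(0).randn(6, 8)
labels = np.array([0, 0, 1, -1, -1, -1])
print(binarize_with_semipriori(diffuse(consensus_knn_graph(H, K=3)), labels))
```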
], [ "Contrastive Affinity Learning Phase", "In this section, given reliable pseudo positives identified from an embedding graph, we introduce two critical components for the second phase learning: online graph sampling strategy and our proposed CAL loss.", "The overall framework of contrastive affinity learning is illustrated in Fig.", "REF (b).", "Graph sampling with memory.", "One practical issue arises (in Sec.", "REF ): SemiAG on mini-batches is not effective due to sampling insufficiency; while conducting SemiAG offline on the entire dataset is time-consuming and memory inefficiency [17].", "To strike a balance between the graph size and computation resources, inspired by [27], we dynamically construct a sub-graph $\\mathbf {G}_{\\mathcal {H}}^\\prime $ sub-sampled from the entire graph $\\mathbf {G}_{\\mathcal {H}}$ supported by an extra embedding memory bank $\\mathcal {M}$ and an exponentially moving averaged (EMA) teacher ($f_{\\text{T}}, g_{\\text{T}}$ ), like MoCo [14].", "Specifically, for each input batch, the EMA teacher produces stable embeddings, which are enqueued to the fixed-size first-in-first-out memory.", "The sub-graph $\\mathbf {G}_{\\mathcal {H}}^\\prime $ is then constructed by the embeddings in the memory and teacher embeddings in the current batch.", "We denote its node set as $\\mathcal {V}(\\mathbf {G}_{\\mathcal {H}}^\\prime ) = \\mathcal {M}\\bigcup \\lbrace \\mathbf {h}_{\\text{T}}=f_{\\text{T}}(\\mathbf {x})| \\mathbf {x}\\in \\mathcal {B}\\rbrace $ .", "In this way, we can apply the same SemiAG operation (in Sec.", "REF ) to the sub-graph on the fly with adjustable memory sizes.", "Note that we maintain another memory for SemiAG on prompts, since we retain DPR loss in contrastive affinity learning phase.", "Contrastive affinity loss.", "The target of CAL loss is to gradually calibrate the semantic representation by learning from generated affinity constraints in graphs.", "Given the sub-graph $\\mathbf {G}_{\\mathcal {H}}^\\prime $ and its corresponding binarized graph $\\mathbf {G}_b^\\prime $ by SemiAG (in Sec.", "REF ), we formulate CAL loss with [CLS] embedding $\\mathbf {h}_i$ as a query, embeddings in sub-graph node set $\\mathcal {V}(\\mathbf {G}_{\\mathcal {H}}^\\prime )$ as keys: $\\begin{aligned}L_{\\text{CAL}}^{\\text{CLS}}(\\mathbf {h}_i, \\mathbf {G}_b^\\prime ) = L_{\\text{con}}(\\mathbf {h}_i, \\mathcal {V}(\\mathbf {G}_{\\mathcal {H}}^\\prime ), \\tau _a, \\mathcal {P}_{a}, \\mathcal {A}_{a}) \\end{aligned}$ where $\\tau _a$ is a hyper-parameter, and the positive set is defined as $\\mathcal {P}_{a}(\\mathbf {h}_i)=\\lbrace \\mathbf {h}_{\\text{T}, j} | \\mathbf {G}_b^\\prime (i,j)=1, \\forall \\mathbf {h}_{\\text{T}, j \\ne i} \\in \\mathcal {V}(\\mathbf {G}_{\\mathcal {H}}^\\prime )\\rbrace \\cup \\lbrace \\mathbf {h}_{\\text{T}, i}^\\prime \\rbrace $ where $\\mathbf {h}_{\\text{T}, i}^\\prime $ is $\\mathbf {h}_i$ augmented counterpart.", "Note that $\\mathcal {P}_{a}$ is always non-empty.", "Since the whole $\\mathcal {V}(\\mathbf {G}_b^\\prime )$ is too large, we define the anchor set $\\mathcal {A}_a(\\mathbf {h}_i)$ as the union of $\\mathcal {P}_{a}(\\mathbf {h}_i)$ and $N_{\\text{neg}}$ randomly sampled pseudo-negatives for each query.", "For $L_{\\text{CAL}}^{\\text{CLS}}$ loss, we also define its corresponding DPR counterpart of CAL loss as $L_{\\text{CAL}}^{\\text{P}}$ .", "Overall optimization objective.", "At CAL stage, we also preserve SemiCL loss in feature space to retain the model capability of instance-wise 
discrimination.", "To further increase the consistency between the teacher and student, we adapt supervised and self-supervised term of SemiCL (Eq.", "REF ) as: $\\begin{aligned}L_{\\text{self}}^{\\text{CLS}}(\\mathbf {z}) & = L_{\\text{con}}\\Big (\\mathbf {z}, \\mathbf {Z}_{\\mathcal {B}, T}; \\tau , \\mathcal {P}_{\\text{self}}, \\mathcal {A}_{\\text{self}} \\Big ) \\\\L_{\\text{sup}}^{\\text{CLS}}(\\mathbf {z}) & = L_{\\text{con}}\\Big (\\mathbf {z}, \\mathbf {Z}_{\\mathcal {B}_l, T}; \\tau , \\mathcal {P}_{\\text{sup}}, \\mathcal {A}_{\\text{sup}}\\Big ) \\mathbb {I}\\Big (\\mathbf {z} \\in \\mathbf {Z}_{\\mathcal {B}_l}\\Big )\\end{aligned}$ Here, we use student feature $\\mathbf {z}$ as a query and teacher features $\\mathbf {Z}_{\\mathcal {B}, T}, \\mathbf {Z}_{\\mathcal {B}_l, T}$ as keys to strengthen consistencies.", "The positive and negative sets follow the same definition as in Eq.", "(REF ) but are defined in the teacher feature space.", "Then, the overall loss for [CLS] token at CAL stage is formulated as: $L_2^{\\text{CLS}} = (1 - \\alpha )L_{\\text{sup}}^{\\text{CLS}} + \\alpha \\Big (\\beta L_{\\text{CAL}}^{\\text{CLS}} + (1 - \\beta )L_{\\text{self}}^{\\text{CLS}} \\Big )$ where $\\beta $ is an adjustable weight.", "Its corresponding DPR counterpart can be similarly defined, denoted as $L_2^{\\text{P}}$ .", "Finally, since we also adopt DPR at CAL stage, the overall optimization objective is formulated as: $L_2 = L_2^{\\text{CLS}} + \\gamma L_2^{\\text{P}}$ During the inference, the [CLS] embeddings are adopted as final predictions." ], [ "Datasets", "We evaluate PromptCAL on three generic datasets (, CIFAR-10/100 [24] and ImageNet-100 [25]) and three fine-grained datasets (, CUB-200 [44], StandfordCars [23], and Aircraft [31]).", "A summary of datasets is listed in Appendix .", "For each dataset, we first subsample $|\\mathcal {C}_{kwn}|$ known classes from all classes.", "Then, a pre-defined ratio of images for known classes are sampled to form the labeled set $\\mathcal {D}_l$ .", "We set labeling ratio to $50\\%$ for all datasets unless otherwise specified.", "All unsampled images constitute $\\mathcal {D}_u$ .", "In practice, we adopt the same dataset split of $\\mathcal {D}_l$ and $\\mathcal {D}_u$ as in [43].", "(See Table REF in Appendix  for more details on known class numbers and labeling ratios for all dataset).", "Besides, we adopt fewer $|\\mathcal {C}_{kwn}|$ and smaller labeling ratios in more challenging setups for ablation study (Sec.", "REF )." ], [ "Evaluation Protocol", "We follow GCD [43] evaluation protocol in all experiments unless otherwise specified.", "Specifically, we perform SemiKMeans clustering [43] on the predicted embeddings.", "Then, all clusters are mapped through the optimal assignment solved by Hungarian algorithm [46] to their ground-truth classes.", "The accuracy scores for All, Known, and New classes are reported.", "The predicted embeddings from the student class token are evaluated during inference." ], [ "Implementation Details", "Following GCD [43], we use ViT-B/16 pre-trained DINO [6] on ImageNet-1K [25] as our backbone for evaluation.", "For all experiments, we fix the batch size to 128 and use the same data augmentation strategies as [43].", "We present complete implementation details in Appendix ." 
], [ "Main Results", "Evaluation on generic datasets.", "We evaluate both stages of PromptCAL on three generic datasets (, CIFAR-10/100 [24], and ImageNet-100 [25]).", "Table REF shows that our PromptCAL consistently and significantly surpasses previous SoTAs, , ViT-adapted ORCA [4], our baseline GCD [43], and adapted NCD SoTA methods (UNO+ [9] and RankStats+ [11]) in terms of overall accuracy on all three datasets.", "Specifically, PromptCAL surpasses GCD by $6.4\\%$ on CIFAR-10, $8.2\\%$ on CIFAR-100, and $9.0\\%$ ImageNet-100 on All classes; it also remarkably outperforms ORCA by $7\\%$ on CIFAR-100 and $3.9\\%$ on ImageNet-100.", "Besides, in contrast to ORCA and UNO+ which suffer from severe overfitting on Known classes, PromptCAL manifests substantial advantages over other methods on New classes (about $10\\%$ improvements on three datasets).", "By comparing the 1st stage (PromptCAL-1st) with the 2nd stage (PromptCAL-2nd), we observe major performance boosts, especially on New classes.", "In addition, we also notice that both stages of our PromptCAL have significant contributions to the final performance on generic datasets.", "Specifically, PromptCAL-1st improves $5.6\\%$ and $3.0\\%$ over GCD on CIFAR-10/100, respectively; while the PromptCAL-2nd further improves by $5.2\\%$ and $9.0\\%$ on CIFAR-100 and ImageNet-100, respectively.", "Therefore, above results validate the advantages and effectiveness of our two-stage PromptCAL in category discovery.", "Table: Evaluation on three generic datasets.", "Accuracy scores are reported.", "†denotes adapted methods.", "Both stages of PromptCAL are evaluated.Table: Evaluation on three fine-grained datasets.", "Accuracy scores are reported.", "†denotes adapted methods.", "Both stages of PromptCAL are evaluated.Evaluation on fine-grained datasets.", "We also report results on fine-grained datasets to demonstrate the PromptCAL effectiveness in Table REF .", "Apparently, the low performance of KMeans illustrates the challenging nature of fine-grained category discovery caused by larger intra-class and lower inter-class variations.", "Notice that ORCA performance degrades substantially on three fine-grained datasets.", "In contrast, our PromptCAL consistently exceeds NCD SoTA and ORCA, and outperforms GCD by $\\sim $  $11\\%$ on All classes on CUB-200 and StanfordCars and $\\sim $  $7\\%$ on Aircraft.", "Different from results in Table REF , the results on fine-grained datasets show that the major performance gain of PromptCAL originates from the 2nd CAL stage.", "Noticeably, PromptCAL-1st performance even drops compared with GCD on CUB-200 and Aircraft datasets; while, PromptCAL-2nd achieves remarkable and consistent improvements, especially on New classes." 
], [ "Ablation and analysis", "In this section, we conduct extensive ablation experiments to reveal and investigate contributions of each component.", "Next, we present in-depth analysis on the effectiveness of SemiAG and discuss the effect of visual prompts in PromptCAL.", "Further, we explore how PromptCAL performs in more challenging and real-world scenarios with lower-labeling and fewer-classes.", "Finally, we present additional ablation results in Appendix , and additional qualitative results in Appendix .", "Table: Ablation study on effectiveness of SemiAG in CAL stage on CUB-200  dataset.", "Here, cKNN: consensus KNN graph; AP: affinity propagation; SemiPriori: semi-supervised prior knowledge; SemiCL: semi-supervised contrastive loss in projected feature space on [CLS] and [P].", "Scores reported in clustering accuracy.Each proposed component favorably contributes to the overall performance.Table: The t-SNE  visualization of ViT embeddings on CIFAR-10 test set.", "(a) is [CLS] embeddings from naive VPT model; (b) denotes our PromptCAL [CLS] embeddings; (c) denotes our PromptCAL ensembled [P] embeddings; (d) represents embeddings of an arbitraty PromptCAL unsupervised prompt.All figures share the same axis scale.", "The complete visualization is presented in Appendix .Effectiveness of contrastive affinity learning.", "As mentioned in Sec.", "REF , SemiAG dominates the large improvements of PromptCAL.", "First, we conduct ablation experiments on SemiAG in CAL stage, in Table REF .", "The 1st row denotes the performance of using naive KNN with SemiPriori for pseudo labeling at CAL stage; while, the last row represents our full SemiAG setup.", "The 2nd, 3rd, and 4th row represent PromptCAL without affinity propagation (Sec.", "REF ), semi-supervised prior knowledge (Sec.", "REF ), and semi-supervised contrastive loss, respectively.", "From the results, we can observe that incorporating each component has a clear contribution: (a) Naive KNN severely overfits Known and performs poorer (with nearly $2.8\\%$ and $7.0\\%$ accuracy drops on All and New classes, respectively) than SemiAG, due to its susceptibility to noisy neighborhoods.", "(b) Affinity propagation is the most consequential component (improving by $8.3\\%$ on All and $13\\%$ on New), which proves the importance of counteracting adverse effects of false negatives in contrastive loss by retrieving more reliable positives.", "(c) Retaining SemiCL is beneficial, which, we guess, is because it can push away noisy pseudo positive samples and, thus, prevent overfitting and degenerated solutions.", "(d) SemiPriori further benefits overall performance by about $5.6\\%$ on All and $7\\%$ on New, which manifests the importance of incorporating the prior knowledge to calibrate pseudo labels.", "We empirically analyze that these components are closely associated with memory precision and recall (see Appendix ), which better explains the results.", "Table: Ablation study on effectiveness of prompt-related components on CUB-200 dataset.Here, Prompt: prompt-adapted backbone; L semi P L_{\\text{semi}}^{\\text{P}}: semi-supervised contrastive loss on [P] prompts; L CAL P L_{\\text{CAL}}^{\\text{P}}: CAL loss on [P]; CAL stage: second-stage training.Scores reported in clustering accuracy.Each component favorably contributes to the overall performance gain.Table: Ablation study on few-annotation GNCD on CIFAR-100  dataset.", "Digits following 'C' and 'L' stand for percentages of known classes and labeling ratios.", "†denotes adapted methods.", 
"Scores reported in accuracy.Role of discriminative prompt regularization.", "Table REF presents the ablation results for prompt-related components of PromptCAL.", "The 1st and 2nd rows denote the GCD baseline and our warmed-up PromptCAL-1st.", "We note that visual prompts make no significant difference to the performance.", "However, we argue that it is due to lack of semantic discriminative supervision.", "Specifically, by observing PromptCAL without semantic discrimination supervision (3rd row) underperforms PromptCAL without sample discrimination supervision (4th row), we can infer that semantic discriminativeness is more critical than sample-wise discriminativeness.", "Generally, lack of semantic discriminativeness will cause severe overfitting on Known classes.", "Furthermore, semantic prompt tuning is beneficial for discovering novel classes, since PromptCAL surpasses its counterpart without any prompt-related component (5th row) on New by $2.6\\%$ .", "To summarize, semantically-aware DPR plays a positive and auxiliary role in facilitating semantic discriminativeness especially in categorizing novel classes.", "In fact, we conclude from additional ablations in Appendix  that the gains of prompts are more significant on larger datasets.", "To vividly illustrate this point, we present the t-SNE [41] visualization results in Fig.", "REF (see complete results in Appendix REF ).", "Here, we directly draw conclusions that (a) naive VPT causes overclustering problem and lacks semantically discriminativeness; (b) our proposed DPR supervision increases semantic discriminativeness of supervised and unsupervised prompts, which further enhance semantic signals of DPR loss and enables DPR and CAL to synergistically improve the overall performance.", "We present more discussions on this in Appendix REF .", "Towards few-annotation GNCD.", "We further evaluate our PromptCAL against other SoTA methods on more challenging few-annotation setups on CIFAR-100 dataset, , fewer known classes and lower labeling ratios.", "We consider three setups in Table REF : (1) C50-L10: $50\\%$ classes are known in which $10\\%$ samples are labeled; (2) C25-L50: $25\\%$ classes are known in which $50\\%$ samples are labeled; (3) C10-L50: $10\\%$ classes are known in which $50\\%$ samples are labeled.", "Since the few-annotation can incur more open-set noises, we set $K=5$ for PromptCAL to increase robustness to noisy pseudo-labels.", "From results in Table REF , we conclude that PromptCAL is robust to both low-labeling and few-class scenarios, outcompeting all SoTAs with large margins.", "Practically, it is more demanding for models to infer novel semantic clustering when fewer classes are known under semantic shifts.", "This explains the lower performance of all models in setup (3) than in setup (1).", "Compared with GCD and ORCA, our PromptCAL can learn semantically robust representation and consistently achieve high performance in all setups.", "ORCA (ViT) [4] achieves stronger performance than GCD; while, our PromptCAL can still outperform ORCA with clear margins in all setups.", "For example, PromptCAL surpasses ORCA (ViT) by $\\sim $  $8\\%$ on All accuracy in C50-L10 and C25-L50.", "We again observe that PromptCAL-2nd learning contributes most to the overall performance, which again proves our proposed method can effectively calibrate the learned representation with remarkable gains on New classes.", "This capability best suits GNCD problem." 
], [ "Conclusion", "In this paper, we propose a two-stage framework, PromptCAL, to tackle challenging GNCD problem.", "After the warm-up stage of semi-supervised contrastive learning, we iteratively and simultaneously conduct contrastive affinity learning and discriminative prompt regularization to calibrate semantic representations.", "Specifically, at each iteration, we leverage discovered pseudo affinities on generated affinity graphs to guide optimization of the class token and to reinforce the semantic discriminativeness of prompts and our prompt-adapted ViT backbone.", "Extensive experiments on multiple generic and fine-grained benchmarks showcase that PromptCAL achieves state-of-the-art performance.", "Additional evidences illustrates that our discriminative prompt regularization and contrastive affinity learning objectives achieve a synergistic effect.", "Moreover, PromptCAL exhibits remarkable gains on few-class and low-label settings for categorizing novel classes.", "[" ], [ "] In this appendix, we further provide detailed descriptions on the following contents: Additional details on our SemiAG method in Appendix .", "Dataset profiles in Appendix .", "The complete implementation details in Appendix .", "Additional experimental results in Appendix .", "Training algorithm of PromptCAL in Appendix .", "Qualitative and visualization results in Appendix .", "Efficiency analysis in Appendix .", "Broader impact and limitations in Appendix .", "License for experimental datasets in Appendix ." ], [ "Additional details on SemiAG", "In this section, we present an extended description of TPG [50] affinity propagation algorithm that underlies our SemiAG method.", "Suppose we have a graph $G=(V, \\mathbf {E})$ with a node set $V$ and an edge set $\\mathbf {E}$ .", "In our context, $V$ is a set of $N$ embeddings and $\\mathbf {E} \\in \\mathbf {R}^{N \\times N}$ represents the pairwise affinity matrix.", "TPG runs a graph diffusion process on a tensor product graph $\\mathcal {G}=(V \\times V, \\mathcal {E})$ defined on $G$ , where $\\mathcal {E}=\\mathbf {E} \\otimes \\mathbf {E}$ represents a 4-dim tensor.", "In particular, for $i,j,k,l=1...,N$ , the tensor element $\\mathcal {E}_{i,j,k,l}=\\mathbf {E}_{i,j}\\mathbf {E}_{k,l} \\in \\mathbf {R}^{NN \\times NN}$ .", "In other words, the tensor graph $\\mathcal {G}$ can be intuitively considered as a higher-order graph through cartesian product between $G$ and itself.", "Then the graph diffusion process on $\\mathcal {G}$ is formulated as: $\\mathcal {E}^{(t)} = \\sum _{i=0}^{t}{\\mathcal {E}^i} $ where $\\mathcal {E}^{(t)}$ denotes the $t$ -th step affinity matrix and $\\mathcal {E}^i$ is $i$ -power of $\\mathcal {E}$ .", "Theoretically, if the row-sum of $\\mathcal {E}$ is less than one, $\\mathcal {E}^{(t)}$ will converge to a nontrivial solution.", "To make computation tractable on large-scale data, TPG [50] proposes an iterative equation without multiplication on tensors which theoretically guarantees the same converged solution, which is formulated as: $\\mathbf {Q}^{(t+1)} = \\mathbf {E} \\mathbf {Q}^{(t)} \\mathbf {E}^T + \\mathbf {I} $ where $\\mathbf {I}$ denotes an identity matrix, $\\mathbf {E}$ is the affinity matrix, and $\\mathbf {Q}^{(0)}=\\mathbf {E}$ .", "In our work, we calibrates the affinity graph with only first-order structural information and, thus, set the diffusion step $\\eta =1$ since: firstly, online diffusion till convergence at each iteration will incur great computation overheads; besides, we find larger 
diffusion steps will include noisy false positives, which significantly degrades the overall performance (see the ablation in Sec.", "REF  for negative impacts of low memory precision).", "Based on our further observation that the row-wise sum constraint has a negligible effect on the final performance, we exclude the row-wise sum threshold in TPG as another hyperparameter." ], [ "Dataset details", "We evaluate PromptCAL on six benchmarks, i.e., CIFAR-10 [24], CIFAR-100 [24], ImageNet-100 [25], CUB-200 [44], StanfordCars [23], and Aircraft [31].", "The profiles of the six benchmark datasets are displayed in Table REF .", "Our dataset splits follow GCD [43].", "Table: The dataset profiles of six benchmarks for evaluation." ], [ "Implementation details", "Architecture and optimization.", "Following [43], we use a 12-layer base vision transformer [38] with a patch size of 16 (ViT-B/16) as our backbone in all experiments.", "The backbone weights are initialized with pre-trained DINO [6] on the ImageNet-1K [25] dataset.", "The first 11 blocks of the backbone are frozen as in [43].", "For our PromptCAL, we further adapt the pre-trained ViT [38] by prepending 5 prompts before each block (in the VPT-Deep scheme [18]).", "We only supervise the first 2 of the 5 prompts at the last block with the DPR loss, and all remaining prompts are unsupervised and thus automatically learned.", "In practice, this ViT backbone can be of any architecture and pre-trained with any self-supervised learning method on large-scale datasets.", "Initially, we adopt two separate, randomly initialized DINO [6] projection heads for [CLS] and [P] to avoid negative influences.", "In both stages, we fix the batch size to 128 on all datasets; besides, we optimize PromptCAL with standard SGD with a momentum of $0.9$ , a weight decay of $5 \\times 10^{-5}$ , and an initial learning rate of $0.1$ .", "For all datasets, we train PromptCAL for 200 epochs in the first stage; in the second stage, we train PromptCAL for 70 epochs on the CIFAR-10/100 and ImageNet-100 datasets, and for 100 epochs on the CUB-200, StanfordCars, and Aircraft datasets.", "Warmup training.", "In the 1st stage training of PromptCAL, we adopt an unsupervised $L_2$ distillation loss on ImageNet-1K [25] with a loss weight of $\\max \\big (0, 0.5 \\times (1-\\frac{E}{5})\\big )$ .", "Here, $E$ denotes the epoch number.", "We add this loss to counteract potential adverse effects of randomly initialized visual prompts on the class token.", "Contrastive affinity learning.", "In the 2nd stage training of PromptCAL, model parameters (prompt-adapted backbone with two heads) are initialized from the best warmed-up checkpoint of the 1st stage.", "For SemiAG parameters, we fix the neighborhood size $K=|\\mathcal {M}|/(4|\\mathcal {C}|)$ for all datasets unless otherwise specified.", "We fix the sizes of both memories to $|\\mathcal {M}|=|\\mathcal {M}_{\\text{P}}|=4096$ and set $N_{neg}=1024$ in all experiments.", "Furthermore, since most edges of the binarized affinity graph $\\mathbf {G}_b^\\prime $ are of small values, we first compute the mean value of the non-zero affinities; then, we fix the threshold $q$ to the $80\\%$ quantile of affinities above this value for all fine-grained datasets, and $50\\%$ for all generic datasets.", "We fix the diffusion step to $\\eta =1$ .", "For loss parameters, we fix $\\alpha =0.35$ and $\\tau =1.0$ following GCD [43].", "Besides, we also fix $\\beta =0.6$ and the temperature $\\tau _a=0.07$ for all datasets.", "Our teacher model is initialized
by the student weights at the beginning, and we conduct momentum updates with a momentum of $0.999$ at each iteration.", "During inference, the [CLS] representation of the student model is used for prediction.", "Validation scheme.", "Following the GCD [43] setup, we assume access to a small validation set, in which only samples from known classes are labeled.", "In the first stage, we keep the best checkpoint with the highest clustering accuracy on Known on the validation set.", "In the second stage, we keep the best checkpoint with the highest clustering quality on the validation set for evaluation.", "We define clustering quality as the average of the clustering accuracy on Known classes and the unsupervised Silhouette score [35] on New.", "Note that there is no information leakage, since the Silhouette score does not need ground-truth labels.", "Other baselines.", "For GCD [43], since our dataset splits are consistent with theirs, we report their official scores for the main comparisons.", "In our ablations, we reproduce its results based on their official code.", "For ORCA [4], we adapt their backbone from ResNet to the same pre-trained DINO and obtain results based on their official code.", "For our baseline (PromptCAL w/o prompt), we remove all the prompts and the DPR loss on them; besides, we keep the warmup training stage for a fair comparison.", "Other parameters follow the standard setups.", "Figure: The t-SNE visualization of ViT embeddings on the CIFAR-10 test set for GCD, the naive VPT model, and the PromptCAL 1st and 2nd stages. Here, [CLS], [P], and [P]$^*$ denote embeddings from the ViT class token, the ensembled prompts supervised by the DPR loss, and the unsupervised prompts, respectively. The embedding clustering shows that DPR reinforces the semantic discriminativeness of [P], and of [P]$^*$ despite no explicit supervision.", "(e) shows the class name each color denotes. All figures share the same axis scale." ], [ "Inductive category discovery", "In contrast to the transductive category discovery evaluation protocol of GCD [43], we also conduct ablation experiments on the inductive category discovery protocol proposed in ORCA [4].", "In other words, besides achieving high performance on category discovery on the training data (transductive protocol), we also expect models to learn general rules that apply to unseen test sets (inductive protocol).", "Therefore, we conduct experiments under this inductive evaluation protocol on three benchmarks (the CUB-200 [44], CIFAR-100 [24], and ImageNet-100 [25] datasets).", "In this experiment, we hold out $10\\%$ of the (labeled and unlabeled) training data as the validation set for GCD and PromptCAL.", "From the results in Table REF , we can conclude that our PromptCAL achieves the best performance on all three datasets, which manifests its good generalization capability.", "Meanwhile, we observe that PromptCAL boosts performance on New by significant margins."
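The validation criterion above (the average of the clustering accuracy on Known and the Silhouette score on New) can be sketched as follows. The helper names are ours; whether the Silhouette score is rescaled before averaging is not specified in the text, so the raw values are averaged here as an assumption.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def clustering_quality(acc_known: float,
                       new_embeddings: np.ndarray,
                       new_cluster_ids: np.ndarray) -> float:
    """Model-selection score on the validation set: average of the clustering
    accuracy on Known classes and the unsupervised Silhouette score of the
    predicted clusters on New samples (no ground-truth labels required)."""
    sil = silhouette_score(new_embeddings, new_cluster_ids)  # in [-1, 1]
    return 0.5 * (acc_known + sil)

# Toy usage with two well-separated predicted clusters of random embeddings.
rng = np.random.RandomState(0)
emb = np.vstack([rng.randn(20, 8) + 3.0, rng.randn(20, 8) - 3.0])
pred = np.array([0] * 20 + [1] * 20)
print(clustering_quality(acc_known=0.8, new_embeddings=emb, new_cluster_ids=pred))
```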
], [ "Additional ablation on SemiAG and DPR", "To further validate the effectiveness of our SemiAG, we conduct ablation on different positive mining methods integrated into our online contrastive learning framework with CAL.", "Besides, we supplement more ablation results on larger datasets (, CIFAR-100 and ImageNet-100 datasets) to showcase that learning with semantically discriminative prompts can achieve notable improvements across various datasets.", "The experiment results are presented in Table REF .", "Firstly, we notice that SemiAG significantly outperforms other positive mining methods, , naive KNN with SemiPriori (KNN w/ S.P.)", "and Ranking Statistics (R.S.)", "[11].", "The results unveil that both KNN with SemiPriori and RankingStats fail to reliably uncover the substantial semantic information in embedding spaces, which proves that our SemiAG method is the most effective in this open-set setting.", "On the other hand, removing either DPR loss or entire prompt-related components in PromptCAL causes noticeable performance drop, , nearly $3\\%$ and $2\\%$ drops on All on CIFAR-100 dataset after removing prompts and DPR loss.", "Moreover, removing either component also leads to severe overfitting on Known classes." ], [ "Visualization on embeddings", "To inspect the learned semantic discriminativeness of PromptCAL, we visualize embeddings by t-SNE [41] algorithm in Fig.", "REF .", "Firstly, by comparing (a-d), we can conclude that PromptCAL can effectively learn better semantic clustering, witnessed by higher purity, larger inter-class separation, and high compactness.", "Notice in (b) that naive VPT model suffer from degraded clustering performance compared with (a) baseline, which again proves that lack of semantic supervision is a critical issue (see ablation in main body) in prompt tuning.", "Interestingly, though not supervised, automatically learned prompts [P]$^*$ in (i) and (j) can still learn robust semantically meaningful representation, benefiting from DPR on [P].", "Meanwhile, DPR loss reinforce this effect in (g) and (h).", "Furthermore, we also observe that [P] supervised by CAL loss (h) can learn better semantic clustering than those supervised by SemiCL (g), and better benefit [P]$^*$ (j).", "Thanks to better semantic information supplied by CAL loss, [CLS] of PromptCAL-2nd learns more compact and better-separated clusters compared with that of PromptCAL-1st.", "To summarize the above, we can conclude that the second stage enhances the prompts potential using CAL loss, which further enables prompts and CAL to synergistically improve the overall performance.", "Figure: Ablation study on the CAL loss weight β\\beta on StanfordCars  dataset.Table: Further ablation study on CUB-200 , CIFAR-100 , and ImageNet-100  datasets.", "We investigate four setups: the first is PromptCAL removing all prompt related components; the second is PromptCAL without DPR loss; the third is replacing SemiAG with naive KNN incorporated with SemiPriori; the last one is replacing our SemiAG with RankingStats  pseudo labeling.Table: Ablation study on the neighborhood size KK on the CIFAR-100  and Aircraft  datasets." 
], [ "Analysis on memory precision and recall", "To provide essential reasons for superiority of our SemiAG, we visualize dynamic curves of memory precision and recall of different pseudo-labeling strategies at 2nd stage (Table REF ).", "We argue that both precision and recall matter in CAL stage.", "We can observe that SemiAG and SemiAG w/o SemiPriori has balanced precision and recall; while, KNN and SemiAG w/o AP suffer from either low precision or low recall.", "Moreover, SemiAG has higher precision and recall than SemiPriori due to priori constraint.", "High memory precision and recall can counteract the class collision problem with reliable retrieved pseudo positives, therefore facilitating the semantic representations." ], [ "Sensitivity analysis on hyper-parameters.", "We conduct ablation experiments on critical hyper-parameters of PromptCAL, which includes: (1) CAL loss weight $\\beta $ ; (2) neighborhood size $K$ ; (3) different pretraining methods; (4) number of auxiliary prompts.", "CAL loss weight.", "We sample $\\beta $ values from $0.2$ to $1.0$ at an interval of $0.2$ and run experiments on StanfordCars dataset.", "The results are visualized in Fig.", "REF .", "We observe that decreased weights of contrastive affinity learning will cause model suffer from low performance on New.", "We argue that, although different datasets exhibit different trends, the model performance is fairly robust within the modest value range (from $0.4$ to $0.8$ ).", "Neighborhood size.", "We select $K=5,10,15,20$ for ablations on two datasets (CIFAR-100 and Aircraft, both with 100 All classes).", "Results in Table REF display that PromptCAL is robust to small $K$ ; while, its performance degrades largely as the neighborhood expands.", "We guess it is because false positive has severer negative effects than false negatives.", "Pretraining.", "We argue that PromptCAL can take advantage of the property of the high KNN precision of ViT, which are pre-trained in various schemes.", "In Table REF , we replace DINO [6] pre-trained ViT with iBoT [57] pre-trained ViT as our backbone in CIFAR-100 experiments The KNN precision of DINO and iBoT on ImageNet-1K dataset are $76.1\\%$ and $77.1\\%$ , respectively [57].. We can show that PromptCAL further improves as iBoT possesses higher KNN precision [57].", "It manifests that our PromptCAL performance is likely to correlate with better initial representations.", "Number of supervised prompts.", "We varies the number of supervised prompts to observe sensitivity of performance this parameter.", "Table REF showcases the results under different setups.", "We can observe that leaving some unsupervised prompt to learn can provide extra flexibility to the backbone and thus achieves the best performance, especially on New.", "In general, PromptCAL is robust to different numbers of supervised prompts." 
], [ "Additional results on Herbarium dataset", "We also present evaluation results on the challenging Herbarium2019 [39] dataset, which consists of 683 classes and 34k images in total.", "Our dataset split follows [43].", "Specifically, we set labeling ratio to $50\\%$ and known class number to 341.", "We compare PromptCAL with other SoTAs on this dataset.", "Considering larger class numbers, we enlarge the memory size to $2\\times 10^{4}$ and $N_{\\text{neg}}=5000$ , accordingly.", "We set $K=|\\mathcal {M}|/(4|\\mathcal {C}|) \\approx 7$ in this case.", "Other parameters follow the setup on fine-grained datasets.", "Table REF display the results, which demonstrates our PromptCAL also excels at discovering categories on large vocabulary fine-grained datasets, especially on New classes.", "Table: Additional experiments on the Herbarium2019  dataset.Table: Ablation study on pretraining methods on CIFAR-100  dataset.Table: Evaluation in the inductive GCD setting on three benchmarks.", "The results are reported in accuracy scores on the test set.", "Here, we also adopt the task-informed evaluation protocol in , , , Known * ^* and New * ^* are evaluated by separate clustering and Hungarian assignment.Table: Ablation study on prompt numbers of our prompt-adapted ViT backbone.", "Evaluation conducted on CUB-200  dataset." ], [ "Training algorithm of PromptCAL", "Given a training dataset $\\mathcal {D}$ , we describe our entire training algorithm of PromptCAL in Algo. .", "Before PromptCAL training, we adapt the ImageNet pre-trained ViT backbone $f(\\cdot | \\theta )$ with prompts into $f(\\cdot | \\theta , \\theta _{\\text{P}})$ , and randomly initialize two identity heads $g(\\cdot | \\theta _{\\text{H}})$ and $g_{\\text{P}}(\\cdot | \\theta _{\\text{P}, \\text{H}})$ for [CLS] and [P], respectively.", "In the 1st stage, we sample a batch of images $\\mathbf {X}$ with their corresponding labels $\\mathbf {Y}$ at each iteration.", "Note that ground-truth labels of unlabeled images are masked in $\\mathbf {Y}$ .", "We obtain [CLS] and [P] projected features ($\\mathbf {Z}, \\overline{\\mathbf {Z}}_{\\text{P}}$ ) by forwarding $\\mathbf {X}$ through backbone and two heads.", "Next, we compute SemiCL loss (Eq.", "REF ) on the features based on the class labels and label-or-not information in $\\mathbf {Y}$ .", "All tunable parameters ($\\theta $ , $\\theta _{\\text{P}}$ , $\\theta _{\\text{H}}$ , $\\theta _{\\text{P}, \\text{H}}$ ) are updated.", "Before the 2nd stage training, we initialize two empty embedding memory bank $\\mathcal {M}, \\mathcal {M}_{\\text{P}}$ for [CLS] and [P], respectively.", "Besides, we initialize the teacher model with the student weights.", "During the training, for each sampled batch ($\\mathbf {X}$ , $\\mathbf {Y}$ ), we first obtain student embeddings of [CLS] and ensembled [P] ($\\mathbf {H}, \\overline{\\mathbf {H}}_{\\text{P}}$ ), and corresponding student features ($\\mathbf {Z}$ , $\\overline{\\mathbf {Z}}_{\\text{P}}$ ) by forwarding images to the student.", "Meanwhile, we acquire the teacher embeddings and features ($\\mathbf {H}_T$ , $\\overline{\\mathbf {H}}_{\\text{P}, T}$ , $\\mathbf {Z}_T$ , $\\overline{\\mathbf {Z}}_{\\text{P}, T}$ ) from the teacher, correspondingly.", "Further, we construct a sub-graph for a token (line 14 for the class token and line 18 for ensembled prompts) based on its teacher embeddings of the current batch and all embeddings in its corresponding memory.", "Given the sub-graph, we sequentially perform three operations of SemiAG to 
obtain the calibrated binarized affinity graph (line 15 and 19).", "For each student embedding, we utilize its teacher embedding counterpart as a query on the affinity graph to acquire its pseudo positive set and pseudo anchor set with randomly sampled pseudo negatives (line 16 and 20).", "With these pseudo positive and anchor sets, we compute CAL loss on embeddings of each token (line 17 and 21) by Eq.", "REF .", "Along with CAL loss, we also compute SemiCL loss on the projected features; here, we utilize student embeddings as queries and teacher embeddings as keys in the contrastive loss (Eq.", "REF and Eq.", "REF ).", "In other words, for each student embedding, we construct its positive and anchor sets with teacher embeddings and then compute the semi-supervised contrastive loss.", "Next, we obtain the total loss for the [CLS] token by combining its SemiCL and CAL loss functions (Eq.", "REF ).", "After adding our DPR counterpart loss on ensembled prompts, we finally get the total loss at this stage (Eq.", "REF ).", "At each iteration, all tunable parameters of the student are updated.", "Lastly, we update the two memories with teacher embeddings of their corresponding tokens and update the momentum teacher model with the updated student model.", "Note that for inference, we adopt embeddings from the [CLS] token of the student model $f(\\cdot | \\theta , \\theta _{\\text{P}})$ for final predictions.", "Figure: Confusion matrix of PromptCAL on ImageNet-100 test set.", "The labels on the x-axis and y-axis denote the class indices of our generated split.", "The first 50 classes are Known, and the last 50 classes are New." ], [ "Qualitative results", "In this section, we present qualitative results on the categorization confusion matrix, attention map visualization, and KNN retrieval.", "Confusion matrix on ImageNet-100.", "We present confusion matrices for GCD [43] and our PromptCAL on both Known and New classes on the ImageNet-100 dataset in Fig.", "REF .", "We can observe that our PromptCAL can learn more robust clusters on New classes, while preserving high accuracy on Known.", "Moreover, our PromptCAL is less susceptible to confusion between Known and New.", "Attention map visualization.", "We visualize and compare the attention maps of [CLS] tokens of DINO [6], GCD [43], PromptCAL-1st, and PromptCAL-2nd in Fig.", "REF .", "We summarize the following observations: (1) DINO attends to instance-discriminative regions, e.g., the licence plate, and may overfit to surrounding objects, while PromptCAL pays more attention to class-specific features, e.g., car lights for cars, and feather textures for birds.", "(2) Although both GCD and PromptCAL can attend to semantically meaningful regions, PromptCAL-2nd focuses on multiple semantically discriminative regions, e.g., car lights and textures, feathers and wings.", "(3) After CAL training, attention maps of PromptCAL-2nd, in contrast to those of PromptCAL-1st, are remarkably refined.", "Nearest-neighbor query.", "In Fig.", "REF , we visualize the 8 predicted nearest neighbors, from GCD [43] and our PromptCAL, of 20 randomly selected query images, which are labeled as correct (green) or incorrect (red).", "Specifically, we first randomly sample a subset from ImageNet-1K, and conduct KNN search (with cosine distance) for given random queries in the [CLS] embedding space.", "We can observe that PromptCAL generally exhibits higher retrieval precision (e.g., for “n02006656” in the 3rd row, “02018207” in the 5th row, “n02027492” in the 8th row).", "To summarize, our PromptCAL learns more semantically 
calibrated local structures.", "We also notice that both GCD and PromptCAL fail on “n01695060” in the 11th row, which, we guess, is due to the confusing view angle of the query image and high visual similarities between lizards of different species." ], [ "Efficiency analysis", "Compared with the raw ViT backbone (GCD [43]), our PromptCAL only adds negligible computation overheads during inference, since the only overheads originate from the visual prompts.", "In Table REF , we quantitatively list inference time per image, throughput, and FLOPs for PromptCAL.", "It can be observed that our PromptCAL achieves inference efficiency comparable to the raw ViT backbone.", "Table: Comparison on inference time, throughput, and FLOPs based on ViT-B/16 backbone." ], [ "Broader impact and limitations", "It should be noted that although our method achieves state-of-the-art performance on the generalized novel category discovery problem, the performance gap between the fully supervised counterpart and our method still exists.", "Besides, in the real world, the data can be more complicated and uncurated.", "For instance, realistic data may follow long-tail distributions, human annotation may incur noise, and the vocabulary may be huge.", "We leave these for future research." ], [ "License for experimental datasets", "All datasets used in our experiments are permitted for research use.", "CIFAR-100 and CIFAR-10 [24] are released under the MIT license for research use.", "ImageNet-100, a subset of ImageNet [25], is also available for research purposes.", "Besides, CUB-200 [44], Aircraft [31], and StanfordCars [23] are also permitted for research purposes.", "Herbarium19 [39] is released for non-commercial purposes.", "Training dataset $\\mathcal {D}=\\mathcal {D}_u \\cup \\mathcal {D}_l$ , an ImageNet pre-trained ViT backbone $f(\\cdot | \\theta )$ , and a randomly-initialized [CLS] projection head $g(\\cdot | \\theta _{\\text{H}})$ .", "Trained prompt-adapted model $f(\\cdot | \\theta , \\theta _{\\text{P}})$ .", "Initialize prompt-adapted backbone with random prompts into $f(\\cdot | \\theta , \\theta _{\\text{P}})$ .", "Randomly initialize prompt projection head $g_{\\text{P}}(\\cdot | \\theta _{\\text{P},\\text{H}})$ from $g$ .", "Stage 1: Warm-up Training each epoch $e$ =1...$E_1$ each batch $(\\mathbf {X}, \\mathbf {Y}) \\in \\mathcal {D}$ $\\mathbf {Z}, \\overline{\\mathbf {Z}}_{\\text{P}} = \\text{Forward}(\\mathbf {X}, f, g, g_{\\text{P}})$ forward backbone and heads Compute overall SemiCL loss $L_1$ by Eq.", "(REF ) on $\\mathbf {Z}, \\overline{\\mathbf {Z}}_{\\text{P}}$ .", "Back-propagation and optimize $\\theta , \\theta _{\\text{P}}, \\theta _{\\text{H}}, \\theta _{\\text{P}, \\text{H}}$ .", "Stage 2: Contrastive Affinity Learning Initialize memory $\\mathcal {M}, \\mathcal {M}_{\\text{P}}$.", "Initialize teacher $f_T, g_T, g_{\\text{P}, T}$ from the student model.", "each epoch $e$ =1...$E_2$ each batch $(\\mathbf {X}, \\mathbf {Y}) \\in \\mathcal {D}$ Forward $\\mathbf {H}, \\overline{\\mathbf {H}}_{\\text{P}}, \\mathbf {Z}, \\overline{\\mathbf {Z}}_{\\text{P}} = \\text{Forward}(\\mathbf {X}, f, g, g_{\\text{P}})$ forward student $\\mathbf {H}_T, \\overline{\\mathbf {H}}_{\\text{P}, T}, \\mathbf {Z}_T, \\overline{\\mathbf {Z}}_{\\text{P}, T} = \\text{Forward}(\\mathbf {X}, f_T, g_T, g_{\\text{P}, T})$ forward teacher SemiAG for [CLS] Concatenate embedding $E \\leftarrow [\\mathbf {H}_T; \\mathcal {M}]$ for [CLS] token and construct sub-graph $\\mathbf {G}_{\\mathcal {H}}^\\prime $ .", "Compute binarized 
affinity graph $\\mathbf {G}_b^\\prime $ from $\\mathbf {G}_{\\mathcal {H}}^\\prime $ by applying SemiAG in Eq.", "(REF ) (REF ) (REF ) sequentially.", "Obtain pseudo positives $\\mathcal {P}_a$ and pseudo anchors $\\mathcal {A}_a$ from $\\mathbf {G}_b^\\prime $ .", "Compute CAL loss $L_{\\text{CAL}}^{\\text{CLS}}$ for [CLS] with $\\mathcal {P}_a$ and $\\mathcal {A}_a$ on $\\mathbf {H}$ by Eq.", "(REF ).", "SemiAG for [P], similar process to [CLS] Concatenate embedding $E_{\\text{P}} \\leftarrow [\\overline{\\mathbf {H}}_{\\text{P}, T}; \\mathcal {M}_{\\text{P}}]$ for [P] token and construct sub-graph $\\mathbf {G}_{\\text{P}, \\mathcal {H}}^\\prime $ .", "Compute $\\mathbf {G}_{\\text{P}, b}^\\prime $ from $\\mathbf {G}_{\\text{P}, \\mathcal {H}}^\\prime $ by applying Eq.", "(REF ) (REF ) (REF ) sequentially.", "Obtain pseudo labels $\\mathcal {P}_{\\text{P}, a}$ and $\\mathcal {A}_{\\text{P}, a}$ from $\\mathbf {G}_{\\text{P}, b}^\\prime $ .", "Compute CAL loss $L_{\\text{CAL}}^{\\text{P}}$ for [P] with $\\mathcal {P}_{\\text{P}, a}$ and $\\mathcal {A}_{\\text{P}, a}$ on $\\overline{\\mathbf {H}}_{\\text{P}, T}$ by Eq.", "(REF ).", "SemiCL loss Compute $L_{\\text{sup}}^{\\text{CLS}}, L_{\\text{self}}^{\\text{CLS}}$ for [CLS] and $L_{\\text{sup}}^{\\text{P}}, L_{\\text{self}}^{\\text{P}}$ for [P] on $\\mathbf {Z}$ and $\\mathbf {Z}_T$ by Eq.", "(REF ).", "Compute total loss Compute [CLS] total loss $L_2^{\\text{CLS}}$ with $L_{\\text{sup}}^{\\text{CLS}}, L_{\\text{self}}^{\\text{CLS}}, L_{\\text{CAL}}^{\\text{CLS}}$ by Eq.", "(REF ).", "Compute overall total loss $L_2$ with $L_2^{\\text{CLS}}$ and its DPR counterpart $L_2^{\\text{P}}$ by Eq.", "(REF ).", "Back propagation Back-propagation and optimize student $\\theta , \\theta _{\\text{P}}, \\theta _{\\text{H}}, \\theta _{\\text{P}, \\text{H}}$ .", "$\\mathcal {M} \\leftarrow \\text{Enqueue}(\\mathcal {M}, \\mathbf {H}_T)$ , $\\mathcal {M}_{\\text{P}} \\leftarrow \\text{Enqueue}(\\mathcal {M}_{\\text{P}}, \\overline{\\mathbf {H}}_{\\text{P}, T})$ update memories Update momentum teacher with current student.", "$f(\\cdot | \\theta , \\theta _{\\text{P}})$ PromptCAL training algorithm.", "Figure: Attention map visualization of class tokens for comparison on StandfordCars (left) and CUB-200 (right) datasets.", "The columns from left to right refer to attention maps of DINO , GCD , our first stage PromptCAL, and our second stage PromptCAL.", "In the first row, attended areas are marked in red in each images; the second row display the complete attention maps corresponding to the first row images (yellow regions denote high attention values).Figure: Visualization of retrieved 8-NN for 20 randomly selected query images (with blue borders).The correct/incorrect predictions are marked with green/red borders.The predictions on the left come from GCD, and the right is from PromptCAL.The first column contains ImageNet synsetIDs, category name, and Known/New for each query.", "Better view with zoom in." ] ]
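To make the bookkeeping of the second stage concrete, a minimal sketch of two of its steps, the FIFO embedding-memory update and the momentum (EMA) teacher update, is given below in PyTorch. It is only an illustration under our own assumptions (function names, momentum value, and toy linear backbones standing in for the prompt-adapted ViT); it is not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of two stage-2 bookkeeping steps
# described in the algorithm above: the FIFO embedding-memory update and the momentum
# (EMA) teacher update. The function names, the momentum value, and the toy linear
# "backbones" are our own illustrative assumptions.
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


def update_memory(memory: deque, teacher_emb: torch.Tensor) -> None:
    """Enqueue L2-normalized teacher embeddings; deque(maxlen=...) evicts the oldest."""
    for e in F.normalize(teacher_emb.detach(), dim=-1):
        memory.append(e)


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, m: float = 0.999) -> None:
    """theta_teacher <- m * theta_teacher + (1 - m) * theta_student, parameter-wise."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(m).add_(p_s.data, alpha=1.0 - m)


# Toy usage with stand-in linear backbones and a memory holding at most 8 embeddings.
student, teacher = nn.Linear(16, 4), nn.Linear(16, 4)
teacher.load_state_dict(student.state_dict())  # teacher starts as a copy of the student
memory: deque = deque(maxlen=8)

batch = torch.randn(5, 16)
with torch.no_grad():
    update_memory(memory, teacher(batch))      # "update memories" step, schematically
ema_update(teacher, student)                   # "update momentum teacher" step
```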
2212.05590
[ [ "Dynamical fluctuations in the Riesz gas" ], [ "Abstract We consider an infinite system of particles on a line performing identical Brownian motions and interacting through the $|x-y|^{-s}$ Riesz potential, causing the over-damped motion of particles.", "We investigate fluctuations of the integrated current and the position of a tagged particle.", "We show that for $0 < s < 1$, the standard deviations of both quantities grow as $t^{\\frac{s}{2(1+s)}}$.", "When $s>1$, the interactions are effectively short-ranged, and the universal sub-diffusive $t^\\frac{1}{4}$ growth emerges with only amplitude depending on the exponent.", "We also show that the two-time correlations of the tagged-particle position have the same form as for fractional Brownian motion." ], [ "Introduction", "Systems of diffusive particles interacting via short-ranged interactions have been actively investigated in the past few decades.", "Among the most popular research subjects is the emergence of the hydrodynamic behavior and large deviations in such systems [1], [2], [3], [4], [5].", "The equilibrium properties of systems with long-ranged interactions have also been studied [6], [7], [8], and in a few cases, their dynamical properties have been explored [9], [10], [11], [12].", "In this work, we consider particles on an infinite line interacting via a long-ranged Riesz potential [13] $V_{s}(x,y) = g\\,s^{-1} |x-y|^{-s}$ The $s^{-1}$ pre-factor in (REF ) is convenient to have as the derivative of the potential that drives the particles.", "We assume that both the exponent $s$ and the coupling constant $g$ are positive: $s>0$ and $g>0$ .", "The potential is thus repulsive, and for any $s>0$ , it is sufficiently strong so that particles cannot collide—diffusion cannot overwhelm the repulsion.", "The order of the particles never changes, so we have a single-file system.", "The motion caused by the Riesz potential (REF ) is assumed to be over-damped.", "Particles also undergo independent Brownian motions.", "This system can be thought of as a gas at a finite temperature, with (identical) diffusion coefficients proportional to the temperature.", "Riesz gases with particles undergoing deterministic motion have been studied for a long time.", "Applications to astrophysics [14] where particles are interpreted as stars or galaxies, as well as applications to plasma physics [15] are natural.", "Riesz gases also appear in the context of crystallization and packing problems [16], [8], [17], Ginzburg-Landau vortices [18], and random matrices [19], [20].", "Specific Riesz gases, most frequently Coulomb gases, appear in concrete applications.", "However, Riesz gases with $s \\le 1$ can be experimentally engineered in cold atom systems [21], [22], and they are potentially interesting in a view of applications to quantum computers.", "The zero-temperature dynamics of the Riesz gas demonstrates interesting properties, such as signatures of chaos [23].", "In mathematics, Riesz gases are also subject to intense studies (see [24], [25] for recent reviews).", "Some Riesz gases have received special attention.", "In the $s \\rightarrow 0$ limit, the interaction is logarithmic, and such Riesz gases are known as log gases [19].", "In one dimension, a log gas of Brownian motions is a Dyson gas [26]: it was investigated by Dyson in the context of random matrices with eigenvalues playing the role of particles.", "In two dimensions, the log gas describes a genuine two-dimensional Coulomb interaction [25], [27] and in the over-damped case with 
particles additionally performing two-dimensional Brownian motions, the gas is called the Ginibre gas [28], [29], [30].", "The Coulomb gas in $d$ dimensions has the exponent $s=d-2$ .", "The Calogero gas with $s = 2$ is mostly studied in one dimension [31], [32], [33], albeit it makes sense in arbitrary dimension.", "Re-writing the Riesz potential as $(D/|x-y|)^s$ shows that the gas of hard spheres with diameter $D$ emerges in the $s \\rightarrow \\infty $ limit.", "In one dimension, equilibrium properties of a Riesz gas in a confining potential have been studied via the Coulomb-gas approach [34], [35], [36], [37], [38] originally developed for the Dyson gas [39], [40].", "The equilibrium behavior changes qualitatively when the exponent passes through the threshold value $s=1$ corresponding to particles confined to a line but interacting through the (three-dimensional) Coulomb potential [41], [42], [34].", "For $s>1$ , the gas is effectively short-ranged; for $0<s<1$ , the gas is long-ranged and the free energy functional is non-local.", "The goal of the present work is to investigate dynamical properties of one-dimensional stochastic Riesz gases.", "Among a few studies of the dynamics of Riesz gases, we mention [43], [44], [45], [46], [23], [29].", "Still, little is known about the dynamics; the equilibrium properties of systems with a finite number of particles in a confining (usually harmonic) potential remain the most popular research area.", "Another feature of our work is reliance on the macroscopic fluctuation theory (MFT) [47], [48], [49].", "The MFT is a powerful deterministic framework derived from fluctuating hydrodynamics in the vanishing-noise limit.", "The MFT is widely applied to diffusive lattice gases with a single scalar field [4], [5].", "Extensions of the MFT to several interacting stochastic fields and to stochastic field theories with higher derivatives are also actively explored [50], [51].", "We show that, similarly to the equilibrium properties, the MFT suitable for one-dimensional stochastic Riesz gases undergoes a qualitative change when the exponent passes through the threshold value $s=1$ .", "In one dimension, the MFT allows the investigation of the statistics of quantities like the total current across the origin [52], [53] and, for single-file systems, the total displacement of a tagged particle [54], [55].", "For systems with short-ranged interactions, the variance of both these quantities grows as $t^{\\frac{1}{2}}$ for long times [56], [57], [54], [55].", "An amusing subtlety of diffusive systems often present in one dimension concerns the initial conditions, viz., their ever-lasting nature [58].", "If initial conditions are deterministic (also known as quenched), fluctuations are often different from fluctuations in random (also known as annealed) initial conditions.", "This is particularly striking for large deviations that can be much more probable (albeit still highly rare) in the annealed case.", "In more than one dimension, fluctuations in deterministic and annealed settings are often identical in the leading order [59], [60].", "We now state the main results of this paper.", "Starting with stochastic hydrodynamics of the one-dimensional Riesz gas, we develop a deterministic reformulation analogous to the MFT of stochastic diffusive lattice gases.", "When $0<s<1$ , the governing equations contain non-local terms, which did not appear in the original MFT equations; when $s>1$ , we recover the usual MFT equations with a density-dependent diffusion coefficient 
which we derive.", "Using the relevant MFT, we then probe the asymptotic behavior of the variance of the integrated current $Q$ and of the position $X$ of a tagged particle.", "These asymptotic behaviors are obtained by applying a perturbation approach [53] to our governing equations.", "We find $\\langle Q^2 \\rangle \\sim \\langle X^2 \\rangle \\sim t^{2 \\gamma }$ with exponent $\\gamma ={\\left\\lbrace \\begin{array}{ll}\\frac{1}{2}\\frac{s}{s+1} & 0<s<1\\\\\\frac{1}{4} & s>1\\end{array}\\right.", "}$ We limit ourselves to the uniform state.", "In this situation $\\langle Q\\rangle =\\langle X \\rangle =0$ and the second moments in (REF ) are the variances.", "For the marginal case of $s=1$ (which is physically important as it corresponds to particles confined to a line interacting through three-dimensional Coulomb potential), we argue that $\\langle Q^2 \\rangle \\sim \\langle X^2 \\rangle \\sim \\sqrt{t/\\ln t}$ .", "Note that $Q$ and $X$ grow sub-diffusively with an $s-$ dependent exponent when $0<s<1$ and with the universal exponent $\\frac{1}{4}$ as soon as $s>1$ .", "This value $\\frac{1}{4}$ is the same as that for short-ranged diffusive systems with forbidden overtaking such as simple exclusion processes [57], [61].", "We also determine the two-time correlation function of the tagged particle position.", "The form of this function depends on the setting: $\\langle X(t_1) X(t_2) \\rangle _\\text{ann} & \\sim \\left[t_1^{2\\gamma } + t_2^{2\\gamma } - |t_1-t_2|^{2\\gamma }\\right]\\\\\\langle X(t_1) X(t_2) \\rangle _\\text{det} & \\sim \\left[(t_1+t_2)^{2\\gamma } - |t_1-t_2|^{2\\gamma }\\right]$ The two-time correlation function in the annealed case is exactly the same as for fractional Brownian motion with exponent $\\gamma $ .", "The rest of the paper is organized as follows.", "In Sec.", ", we define the Riesz gas.", "In particular, we use dimensional analysis to show that the behavior depends on two dimensionless numbers, the Riesz exponent $s$ and a Péclet number which is essentially the ratio of typical interaction to noise.", "In Secs.", "–, we focus on the genuinely long-ranged Riesz gas with the exponent in the range $0<s<1$ .", "In Sec.", ", we use a path-integral formulation of the process and minimize an effective action to derive deterministic governing equations and boundary conditions, that play the role of the MFT equations for our problem.", "In Sec.", ", we employ a perturbation approach and solve the governing equations in leading order.", "This allows us to determine the variances of the integrated current and the position of the tagged particle.", "Section  is devoted to two-time correlations.", "In Sec.", ", we discuss our findings and outline possible future developments.", "In Appendix , we briefly consider effectively short-ranged Riesz gas ($s>1$ ).", "Details of derivations of the results of Sec.", "are relegated to Appendix ." 
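As a purely numerical illustration of the announced two-time correlations, the following snippet samples Gaussian paths whose covariance has the annealed form above, up to an overall amplitude, i.e., fractional Brownian motion with Hurst parameter s/(2(1+s)); the value of s, the time grid, and the unit amplitude are our own arbitrary choices rather than quantities fixed by the analysis.

```python
# Purely numerical illustration (our own, not from the paper): sample Gaussian paths whose
# covariance is the annealed two-time correlation quoted above, up to an overall amplitude,
#   C(t1, t2) = t1**(2*g) + t2**(2*g) - abs(t1 - t2)**(2*g),  with g = s / (2*(1 + s)),
# i.e., fractional Brownian motion with Hurst parameter g. The value of s, the time grid,
# and the unit amplitude are arbitrary choices.
import numpy as np

s = 0.5
g = 0.5 * s / (1.0 + s)               # predicted growth exponent for 0 < s < 1

t = np.linspace(0.01, 1.0, 200)       # avoid t = 0, where the covariance degenerates
T1, T2 = np.meshgrid(t, t, indexing="ij")
C = T1**(2 * g) + T2**(2 * g) - np.abs(T1 - T2)**(2 * g)

# C must be positive semi-definite; a tiny jitter keeps the Cholesky factorization stable.
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))
paths = L @ np.random.default_rng(0).standard_normal((len(t), 5))   # five sample paths

# The diagonal C(t, t) = 2 t**(2*g), so the variance grows as t**(2*g), as stated above.
print("paths shape:", paths.shape, " C(t,t)/t^(2g) =", C[-1, -1] / t[-1]**(2 * g))
```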
], [ "The Riesz gas", "We consider the Riesz gas with particles on the line interacting through the potential (REF ).", "Particles are also undergoing independent Brownian motions with diffusion coefficient $D$ .", "In the over-damped limit, the particle positions $x_i$ evolve according to coupled stochastic differential equations $\\dot{x}_i = g \\sum _{j \\ne i} \\frac{x_i-x_j}{|x_i-x_j|^{2+s}} + \\eta _i$ The noise contributions $\\eta _i$ are Gaussian with zero-mean and correlations $\\langle \\eta _i(t) \\eta _j(t^{\\prime }) \\rangle = 2 D \\delta _{ij} \\delta (t-t^{\\prime })$ To ensure that the gas does not freely expand, we imagine that the number of particles is very large but finite, and the particles are confined by a very shallow potential.", "The density of particles in a very large region around the origin being essentially uniform, we denote it by $\\rho $ .", "When $s>0$ , the system is characterized by a single dimensionless parameter $G = \\frac{g \\rho ^s}{D}$ This parameter $G$ measures the relative strength of interactions versus noise.", "Since $g \\rho ^{s+1}$ is a typical velocity of a particle caused by an adjacent particle and $\\rho ^{-1}$ is a typical distance between adjacent particles, $G$ plays a role of a Péclet number for the Riesz gas.", "The coupling constant $g$ and the diffusion coefficient $D$ have independent dimensions for $s>0$ .", "One can use them to construct the units of length and time: $\\left(\\frac{g}{D}\\right)^\\frac{1}{s}$ and $\\frac{1}{D}\\left(\\frac{g}{D}\\right)^\\frac{2}{s}$ .", "Measuring length and time in terms of these units we can effectively set the coupling constant and diffusion coefficient to unity and take $g=1=D$ in the following.", "Then, the problem only depends on the dimensionless density $\\rho $ , i.e., on $G^{1/s}$ in terms of the original variables.", "Note that when $s=0$ , i.e., for the Dyson gas, the coupling constant and diffusion coefficient have the same dimensions and can not be set to 1 independently.", "The coarse-grained density field of the particles satisfies the continuity equation $\\partial _t q + \\partial _x J = 0$ where $q = q(x,t)$ is the density and $J= J(x,t)$ is the local current.", "The current $J$ contains the standard diffusion term, $-D\\partial _x q= -\\partial _x q$ , plus another deterministic contribution $J_{{\\rm Riesz}}$ arising from the Riesz potential and a stochastic component due to the noise.", "Thus, we write $J = J_{{\\rm Riesz}} - \\partial _x q + \\sqrt{2 q}\\, \\eta $ The noise $\\eta =\\eta (x,t)$ satisfies $\\langle \\eta (x,t)\\rangle =0, \\quad \\langle \\eta (x,t) \\eta (x^{\\prime },t^{\\prime }) \\rangle = \\delta (x-x^{\\prime }) \\delta (t-t^{\\prime })$ The amplitude of the noise, $\\sqrt{2 q}$ , reflects the Brownian nature of the point particles [2], [3], [4], [5].", "The Riesz contribution $J_{{\\rm Riesz}}$ reads $J_{{\\rm Riesz}} ={\\left\\lbrace \\begin{array}{ll}q \\mathcal {H}_{s}[q] &0<s<1 \\\\- (1+s) \\zeta (s) q^{s} \\partial _x q & s>1\\end{array}\\right.", "}$ as we show below.", "In the $s>1$ range, this Riesz current (REF ) contains the zeta function $\\zeta (s)$ (see Appendix  for a derivation).", "In the $0<s<1$ range, the Riesz current is expressed using a modified Hilbert transform $\\mathcal {H}_{s}[q] = \\int d y\\, \\frac{x-y}{|x-y|^{2+s}} \\, q(y)$ which reduces to the Hilbert transform in the $s\\rightarrow 0$ limit.", "Hereinafter, spatial integrals over the entire line will be denoted $\\int $ , e.g., $\\int dy\\equiv \\int _{-\\infty 
}^{\\infty } dy$ in (REF ).", "We also define the potential $\\mathcal {V}_{s}[q]$ at $x$ due to the density profile $q$ : $\\mathcal {V}_{s}[q] = \\frac{1}{s} \\int dy\\, \\frac{q(y)}{|x-y|^{s}}$ This potential satisfies $ \\mathcal {H}_{s}[q] = -\\partial _x \\mathcal {V}_{s}[q]$ .", "The Riesz current (REF ) can be derived from the general formula (expressing that the deterministic motion of particles is over-damped) $J_{{\\rm Riesz}}= - q\\,\\frac{\\partial }{\\partial x}\\left(\\frac{\\delta }{\\delta q} \\mathcal {E}[q]\\right)$ where $\\mathcal {E}$ is the interaction energy: $\\mathcal {E}[q] = \\frac{1}{2 s} \\int \\int dx\\,dy\\,\\,\\frac{q(x) q(y)}{|x-y|^s}$ when $0<s<1$ , and $\\mathcal {E}[q] = \\frac{\\zeta (s)}{s}\\int dx\\, q^{1+s}$ when $s>1$ .", "The total deterministic current can be derived from $J_{{\\rm Riesz}} - \\partial _x q = - q\\,\\frac{\\partial }{\\partial x}\\left(\\frac{\\delta }{\\delta q} \\mathcal {F}[q]\\right)$ with free energy $\\mathcal {F}[q] = \\mathcal {E}[q] + \\int dx\\, q \\ln q$ that in addition to the interaction energy contains an entropic contribution.", "In the present work, we are primarily interested in Riesz gases with a long-ranged potential ($0<s<1$ ).", "For $s > 1$ , the potential becomes short-ranged and a well-understood single-file behavior will emerge (as discussed in Sec.", ")." ], [ "Hydrodynamics of the Riesz gas", "When $0<s<1$ , the fluctuating hydrodynamics of one-dimensional stochastic Riesz gases is governed by the stochastic partial differential equation $\\partial _t q = - \\partial _x\\Big ( q \\mathcal {H}_{s}[q] - \\partial _x q + \\sqrt{2 q}\\,\\eta \\Big )$ with $\\mathcal {H}_{s}[q]$ given by (REF ).", "Equation (REF ) resembles the governing equation of fluctuating hydrodynamics of diffusive lattice gases [2].", "Analytical tools available to investigate the statistical properties of lattice gases can be adapted to the present case to probe dynamical fluctuations in the Riesz gas.", "Namely, we shall develop a deterministic reformulation of fluctuating hydrodynamics analogous to the MFT of diffusive lattice gases [47], [48], [49]." 
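For readers who wish to see the microscopic dynamics of the preceding section in action, here is a naive Euler-Maruyama sketch with g = D = 1. It is only an illustration under our own choices (particle number, time step, equispaced initial condition, free boundaries): a finite cloud slowly spreads, unlike the infinite uniform gas considered in the text, and the scheme does not enforce the single-file ordering, so the time step must be kept small.

```python
# Naive Euler-Maruyama sketch (our own illustration, not a method used in the paper) of the
# over-damped Riesz dynamics with g = D = 1:
#   dx_i = sum_{j != i} (x_i - x_j) / |x_i - x_j|**(2 + s) dt + sqrt(2) dW_i .
# The particle number, time step, equispaced initial condition and number of steps are
# arbitrary; the scheme does not enforce the no-crossing property, so dt must stay small.
import numpy as np

rng = np.random.default_rng(1)
s, N, dt, steps = 0.5, 21, 1e-4, 2000

x = np.arange(N, dtype=float)          # initially equispaced, unit density
tagged = N // 2                        # follow the middle (tagged) particle
x_tagged_0 = x[tagged]

for _ in range(steps):
    diff = x[:, None] - x[None, :]     # matrix of x_i - x_j
    np.fill_diagonal(diff, 1.0)        # placeholder to avoid 0/0 on the diagonal
    contrib = diff / np.abs(diff) ** (2 + s)
    np.fill_diagonal(contrib, 0.0)     # remove the (excluded) self-interaction
    force = contrib.sum(axis=1)
    x += force * dt + np.sqrt(2.0 * dt) * rng.standard_normal(N)

print("tagged-particle displacement:", x[tagged] - x_tagged_0)
print("ordering preserved:", bool(np.all(np.diff(x) > 0)))
```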
], [ "Path-integral formalism", "The solution of the stochastic equation (REF ) can be expressed via a path integral.", "One writes the Gaussian measure for the white noise and integrates over it.", "This procedure known as the Martin-Siggia-Rose method [62] is standard; details can be found, for example, in the closely related context of the macroscopic fluctuation theory [63], [52], [55].", "The probability of transition from an initial configuration at $t =0$ to a final configuration at $t = T$ can be written as a functional integral after integrating out the white noise $\\eta (x,t)$ : $P(q(x,T)|q(x,0)) = \\int \\int \\int {\\mathcal {D}}J\\, {\\mathcal {D}}q\\, {\\mathcal {D}}p\\, e^{-\\mathcal {S}}$ where $\\mathcal {S}=\\int _0^T dt\\int dx\\,\\left[\\frac{(J - q \\mathcal {H}_{s}[q]+ \\partial _x q )^2}{4 q} + p (\\partial _t q + \\partial _x J)\\right]$ The second term in the integrand ensures that $q$ and $J$ obey the continuity equation (REF ), with $p=p(x,t)$ playing the role of the Lagrange multiplier.", "Evaluating the quadratic integral over $J$ yields $P(q(x,T)|q(x,0)) = \\int \\int {\\mathcal {D}}q\\, {\\mathcal {D}}p\\,\\,e^{- \\int _0^T \\int dt\\, dx\\, S(q,p) }$ with action $S(q,p) = p \\partial _t q - q (\\partial _x p)^2 - q (\\partial _x p) \\mathcal {H}_{s}[q]\\,\\partial _x p + \\partial _x p\\, \\partial _x q$ The form of (REF ) is remarkably similar to the MFT action [5].", "The new feature is the presence of $\\mathcal {H}_{s}$ accounting for the long-ranged interactions." ], [ "The cumulant generating function of an observable", "Take an arbitrary observable ${\\mathcal {O}}(\\lbrace q(x,t),p(x,t)\\rbrace )$ .", "Its characteristic function can be written as $\\langle e^{\\lambda {\\mathcal {O}}} \\rangle = \\int \\int {\\mathcal {D}}q\\, {\\mathcal {D}}p\\,\\, e^{\\lambda {\\mathcal {O}}- \\int _0^T \\int dt\\, dx\\, S(q,p) }\\,P[q(x,0)]$ where $P[q(x,0)]$ , the probability of the initial profile $q(x,0)$ , represents how the system is prepared at $t=0$ .", "For the deterministic initial condition with uniform profile $\\rho $ , we merely take $P[q(x,0)] = \\delta \\left( q(x,0) - \\rho \\right)$ If the system is prepared with the equilibrium distribution of density profiles (annealed case), we take $P[q(x,0)] = \\exp (-\\mathcal {F}[q(x,0)])$ with free energy defined by (REF ) and (REF ).", "The cumulant generating function (CGF) is the logarithm of the characteristic function.", "The CGF encodes all cumulants $\\langle {\\mathcal {O}}^n \\rangle _c$ of the observable ${\\mathcal {O}}$ : $\\mu (\\lambda ) = \\ln {\\langle {\\rm e}^{\\lambda {\\mathcal {O}}} \\rangle } = \\sum _n \\frac{1}{n!", "}\\langle {\\mathcal {O}}^n \\rangle _c$ In this work, we analyze two observables.", "The first is the integrated current $Q_T$ that has flown through the origin during the time interval $(0,T)$ .", "It is given by $Q(T) = \\int _{0}^{\\infty } dx\\,[q(x,T) - q(x,0)]$ We note that $Q(T)$ depends only on the final and the initial density profiles.", "Another observable is the position $X(T)$ of a tagged particle (or tracer) at time $T$ ; without loss of generality we set $X(0)=0$ .", "The dynamics of the tracer is identical to the other particles, its tag allows us to focus on the same particle and thus study a self-diffusion phenomenon.", "In a single-file motion [64], [65], [66], [67], [68], [69] particles cannot overtake each other, so the number of particles to the right of the tracer remains constant.", "We schematically write $\\int _{X(T)}^\\infty dx\\, q(x,T) = 
\\int _0^\\infty dx\\,q(x,0)$ which in conjunction with (REF ) gives [54], [70], [71] $\\int _0^{X(T)} dx \\, q(x,T) = Q(T)$ This useful relation implies that the statistics of $X(T)$ and $Q(T)$ are closely related.", "Below we first derive analytical results for the statistics of the current and then translate them to the statistics of the position of the tagged particle." ], [ "Governing equations and boundary conditions", "The action $S$ and the integrated current $Q(T)$ grow with time and in the large-time limit, the path integral (REF ) will be dominated by its saddle point [4], [5].", "The corresponding optimal `trajectory' $\\lbrace q(x,t),~p(x,t)\\rbrace $ is found by varying the action with respect to $q$ and $p$ .", "For $0< t < T$ , the Euler-Lagrange equations read $(\\partial _t - \\partial _x^2) q &=& -\\partial _x \\left( {2} q \\partial _x p + q \\mathcal {H}_{s}[q] \\right) \\\\(\\partial _t + \\partial _x^2) p &=& - (\\partial _x p)^2 -\\mathcal {H}_{s}[q] \\partial _x p + \\mathcal {H}_{s}[q \\partial _x p] $ These equations differ from the equations of the macroscopic fluctuation theory [4], [5] only by terms with $\\mathcal {H}_{s}$ .", "Similar equations have also appeared in the study of the large $N$ limit of Harish-Chandra-Itzykson-Zuber integrals [72], [46].", "The governing equations (REF ) are usually universal, i.e., independent of the observable [see, however, Eq.", "(REF )], while the boundary conditions do depend on the observable.", "In the one-dimensional case when the observable is the integrated current, the saddle-point relations at initial and final times involve a contribution from $Q(T)$ [52], [53], [54].", "Hence, the boundary condition at $t =T$ reads $p(x,T) = \\lambda \\frac{\\delta Q(T)}{\\delta q(x,T)} = \\lambda \\theta (x)$ The initial condition at $t=0$ depends on whether the initial preparation of the system is annealed or deterministic.", "We consider a system starting from a uniform density $\\rho $ , and hence, for the deterministic initial condition, we have $q(x,0)|_{\\text{det}} = \\rho $ In the annealed case, the system starts at equilibrium and density fluctuations are allowed.", "The free energy $\\mathcal {F}$ is defined by (REF ) and (REF ), and the initial condition for $p(x,0)$ reads $p(x,0)|_{\\text{ann}} = -\\lambda \\frac{\\delta Q(T)}{\\delta q(x,0)} + \\frac{\\delta \\mathcal {F}}{\\delta q}\\bigg \\vert _{q(x,0)}$ Thus we must solve Eqs.", "(REF ) subject to (REF ) at the final time $T$ and the initial condition (REF ) in the deterministic case, or (REF ) in the annealed case.", "The function $\\mu (\\lambda )$ is determined by substituting the solution $\\lbrace q(x,t),~p(x,t)\\rbrace $ in the path integral (REF ).", "This latter calculation can be significantly simplified [73], [74], [75], [76] by noting that $\\frac{d \\mu }{d \\lambda } = \\frac{\\langle {\\mathcal {O}} {\\rm e}^{\\lambda {\\mathcal {O}}}\\rangle }{ \\langle {\\rm e}^{\\lambda {\\mathcal {O}} } \\rangle }$ Therefore the CGF is obtained by evaluating the value of the observable ${\\mathcal {O}}$ for the optimal solution $q(x,t),~p(x,t)$ .", "In the present case, we have $\\mu ^{\\prime }(\\lambda ) = Q(T)$ with $Q(T)$ evaluated on the solution $q(x,t)$ of the governing equations (REF ) with appropriate boundary conditions.", "Handling a pair of non-linear, non-local coupled partial differential equations (REF ) is mathematically daunting.", "These equations do not admit an analytical solution.", "Fortunately, a perturbative calculation based on 
expansion in $\\lambda $ leads to exact results for the variance $\\left\\langle Q^2 \\right\\rangle $ as we show in the next section.", "To compute higher cumulants like $\\left\\langle Q^4 \\right\\rangle _c$ , one should be able to determine higher orders in a perturbative expansion.", "At present, this task seems analytically intractable." ], [ "Perturbative solution", "We follow the strategy developed in [53] relying on the obvious fact that for $\\lambda =0$ , the solution follows the noiseless evolution, which in the present case is very simple: $q(x,t)=\\rho $ and $p(x,t)=0$ at all times.", "The expansion of $(q,p)$ in $\\lambda $ generates the cumulants of the current.", "A calculation at the lowest order enables one to determine the variance of $Q(T)$ both in the deterministic and the annealed ensembles.", "Up to the first order in $\\lambda $ we have $q = \\rho + \\lambda q_1 + O(\\lambda ^2), ~~~~ p = \\lambda p_1 + O(\\lambda ^2)$ Plugging these expansions into Eqs.", "(REF ) we obtain $(\\partial _t - \\partial _x^2) q_1 &=& - \\rho \\partial _x \\left( \\mathcal {H}_{s}[q_1] + {2}\\partial _x p_1 \\right)\\\\(\\partial _t + \\partial _x^2) p_1 &=& \\rho \\mathcal {H}_{s}[\\partial _x p_1]$ at first order.", "(We have taken into account an obvious relation $\\mathcal {H}_{s}[\\rho ] = 0$ .)", "The boundary condition (REF ) reads $p_1(x,T) = \\theta (x) $ The initial conditions (REF )–(REF ) become, at first order, $&q_1(x,0)|_{\\text{det}} = 0 \\\\&p_1(x,0)|_{\\text{ann}} = \\theta (x) +\\frac{\\delta \\mathcal {F}}{\\delta q}\\bigg \\vert _{q_1(x,0)}$" ], [ "Variance of the integrated current: Deterministic case", "The first-order equations () can be solved via Fourier transform $\\widehat{f}(k) = \\int dx\\, {\\rm e}^{-{\\rm i}\\,k x} f(x) dx$ (Recall the short notation $\\int dx\\equiv \\int _{-\\infty }^\\infty dx$ for the spatial integrals over entire line.)", "A very useful identity $\\widehat{\\mathcal {H}_{s}[f]}(k) = - \\frac{{\\rm i}\\, \\sqrt{\\pi }}{2^s}\\,\\frac{\\Gamma \\left(\\frac{1-s}{2}\\right)}{\\Gamma \\left(1 +\\frac{s}{2}\\right)}\\,\\,k\\, |k|^{s-1} \\widehat{f}(k)$ immediately follows from the general formula [77], [78] $\\int dx\\, {\\rm e}^{-{\\rm i}\\, k x} |x|^{\\lambda }= \\frac{2^{\\lambda +1}\\sqrt{\\pi }}{|k|^{\\lambda +1}} \\frac{\\Gamma \\left( \\frac{1+\\lambda }{2}\\right) }{\\Gamma \\left(-\\frac{\\lambda }{2}\\right)}$ Equation () becomes $\\partial _t \\widehat{p}_1 = \\omega (k) \\widehat{p}_1$ with dispersion relation $& \\omega (k) = k^2 + A_s |k|^{s+1} \\\\& A_s = \\frac{ \\rho \\sqrt{\\pi }}{2^s}\\frac{\\Gamma \\left(\\frac{1-s}{2}\\right)}{\\Gamma \\left(1 +\\frac{s}{2}\\right)}$ The boundary condition (REF ) gives $\\widehat{p_1}(k,T) = \\frac{1}{ {\\rm i}\\, k}$ , and hence (REF ) leads to $\\widehat{p}_1(k,t) = \\frac{1}{{\\rm i}\\,k}\\, e^{- \\omega (k) (T-t)}$ The equation for $q_1$ is solved along similar lines.", "The Fourier transform of (REF ) is $[\\partial _t + \\omega (k)]\\widehat{q}_1 = 2 \\rho k^2 \\widehat{p}_1$ The initial condition (REF ) gives $\\widehat{q}_1(k,0) =0$ .", "Solving (REF ) with $ \\widehat{p}_1$ given by (REF ) we obtain $\\widehat{q}_1(k,t) = \\rho k\\,\\frac{e^{-\\omega (k)(T-t)} - e^{-\\omega (k)(T+t)}}{{\\rm i}\\,\\omega (k)}$ Using (REF ) we find the cumulant generating function at lowest order in $\\lambda $ : $\\mu ^{\\prime }(\\lambda ) = \\lambda \\int _0^\\infty dx\\,\\left[q_1(x,T) - q_1(x,0)\\right]$ In Fourier space, this gives $\\mu ^{\\prime }(\\lambda ) & = & {\\rm i}\\, \\lambda 
\\int _{-\\infty }^\\infty \\frac{\\widehat{q}_1(k,T)-\\widehat{q}_1(k,0)}{k} \\frac{dk}{2 \\pi } \\nonumber \\\\& = &\\lambda \\rho \\int _{-\\infty }^\\infty \\frac{1 - {\\rm e}^{-2 \\omega (k)T}}{\\omega (k)} \\frac{dk}{2 \\pi }$ The asymptotic $T \\rightarrow \\infty $ behavior of the above integral is dominated by the $|k|^{1+s}$ term as the diffusive part $k^2$ in $\\omega (k)$ becomes irrelevant (as readily seen by redefining $\\kappa := k T^{\\frac{1}{s+1}}$ ).", "Thus, in the large time limit, we find $\\int _{-\\infty }^\\infty \\frac{1 - {\\rm e}^{-2 \\omega (k)T}}{\\omega (k)} \\frac{dk}{2 \\pi }\\rightarrow \\frac{T^{\\frac{s}{s+1}}}{\\pi } \\int _{0}^\\infty d\\kappa \\,\\frac{1 - {\\rm e}^{-2 A_s \\kappa ^{s+1}}}{ A_s \\kappa ^{s+1}}$ The second integral can be computed [79] leading to $\\left\\langle Q^2 \\right\\rangle _{{\\rm det}} = W_s (2 T)^{\\frac{s}{s+1}}, \\qquad W_s = \\frac{\\rho \\, \\Gamma \\big (\\frac{1}{s+1}\\big )}{\\pi s A_s^{\\frac{1}{s+1}}}$ where we have taken into account that, by definition of the CGF in (REF ), the first order term in $\\mu ^{\\prime }(\\lambda )$ represents the variance of the current.", "Recalling an explicit formula () for $A_s$ , we write (REF ) as $\\langle Q^2 \\rangle _{{\\rm det}} = (\\rho T)^\\frac{s}{s+1} U_s$ with amplitude $U_s = \\frac{\\Gamma \\big (\\frac{1}{s+1}\\big )}{s }\\left[\\frac{4^s\\,\\Gamma \\left(1 +\\frac{s}{2}\\right)}{\\pi ^{s+3/2}\\,\\Gamma \\left(\\frac{1-s}{2}\\right)}\\right]^\\frac{1}{s+1}$ depending only on the Riesz exponent $s$ .", "The variance of the current across the origin increases as $T^{s/(s+1)}$ , i.e., slower than for single-file diffusive systems [57], [61], [55] where the exponent is $1/2$ .", "The exponent $\\frac{s}{s+1}$ approaches $\\frac{1}{2}$ when $s \\uparrow 1$ , albeit the amplitude $U_s$ vanishes in this limit, $U_s\\rightarrow \\sqrt{(1-s)/\\pi }$ .", "This indicates that precisely at $s=1$ , the growth of the variance might be slower than $\\sqrt{T}$ , possibly with a logarithmic correction (see Sec. ).", "Formulae (REF )–(REF ) are also singular when $s\\downarrow 0$ , although the hydrodynamic equations remain well-defined.", "This may be an indication that the function $\\mu (\\lambda )$ is itself singular for $s \\rightarrow 0$ and that the perturbative scheme breaks down in this limit.", "When $s<1$ , the diffusive contribution is subdominant in the long time limit.", "This could have been anticipated by observing that the second order derivatives in Eqs.", "(REF ) are negligible in the scaling limit compared to the Hilbert operator $\\mathcal {H}_{s}$ .", "Physically, this means that, in the limit we consider, the Riesz current is dominated by the advection term coming from the interactions, rather than by the diffusive flux of entropic origin." 
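The asymptotic formula for the deterministic variance is easy to test against the exact first-order integral it was extracted from; the short script below does so for one arbitrary choice of s, rho and T (our choice, not values singled out by the paper), and the two numbers agree to within a few percent.

```python
# Numerical check (our own, for one arbitrary parameter choice) of the large-T asymptotics
# derived above: the exact first-order result
#   <Q^2>_det = rho * Integral dk/(2 pi) [1 - exp(-2 omega(k) T)] / omega(k),
# with omega(k) = k**2 + A_s |k|**(s+1), compared against W_s (2T)**(s/(s+1)).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, rho, T = 0.5, 1.0, 1.0e4            # illustrative values, not singled out by the paper
A_s = rho * np.sqrt(np.pi) / 2**s * gamma((1 - s) / 2) / gamma(1 + s / 2)

def omega(k):
    return k**2 + A_s * np.abs(k)**(s + 1)

def integrand(k):
    # factor 1/pi = 2 * 1/(2 pi), using that the integrand is even in k
    return rho * (1.0 - np.exp(-2.0 * omega(k) * T)) / omega(k) / np.pi

exact = quad(integrand, 1e-12, 1.0, limit=200)[0] + quad(integrand, 1.0, np.inf)[0]

W_s = rho * gamma(1.0 / (s + 1)) / (np.pi * s * A_s**(1.0 / (s + 1)))
asymptotic = W_s * (2.0 * T)**(s / (s + 1))

print(f"exact = {exact:.3f}, asymptotic = {asymptotic:.3f}")  # close at this large T
```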
], [ "Variance of the integrated current: Annealed case", "In the annealed case, the boundary condition at the final time $T$ is the same as in the deterministic case, so Eq.", "(REF ) still holds.", "To implement the initial condition (), we need an appropriate expression for the free energy.", "For $0<s<1$ , the expression (REF ) is schematic, e.g., it diverges.", "To avoid the divergence, we subtract $\\rho $ from $q(x)$ and $q(y)$ in (REF ), and also subtract a constant from the entropic contribution so that it vanishes at infinity.", "This gives $\\mathcal {F}[q] &=& \\int dx\\, q(x)\\, \\ln \\frac{q(x)}{\\rho } \\nonumber \\\\&+& \\int \\int dx dy\\,\\,\\frac{(q(x)-\\rho ) (q(y)-\\rho )}{2s\\, |x-y|^s}$ Plugging (REF ) into () together with expansion (REF ) we obtain $p_1(x,0) = \\theta (x) + \\frac{1}{s} \\int dy\\,\\frac{ q_1(y,0)}{|x-y|^{s}} + \\frac{q_1(x,0)}{\\rho }$ in the first order.", "Performing the Fourier transform of this relation and using Eqs.", "(REF ), (REF ), and (REF ), we derive the initial value $\\widehat{q}_1(k,0)$ in the annealed case $\\widehat{q}_1(k,0) = \\frac{ \\rho k}{{\\rm i}\\, \\omega (k) }\\left( {\\rm e}^{- \\omega (k)T} -1 \\right)$ Equation (REF ) is still valid, but now we have to solve it subject to the initial condition (REF ).", "The solution reads $\\widehat{q}_1(k,t) = \\rho k\\,\\frac{e^{-\\omega (k)(T-t)} - e^{-\\omega (k)t}}{{\\rm i}\\, \\omega (k)} $ To establish the cumulant generating function in the lowest order we proceed as before and find $\\mu ^{\\prime }(\\lambda ) & = & {\\rm i}\\, \\lambda \\int _{-\\infty }^\\infty \\frac{\\widehat{q}_1(k,T)-\\widehat{q}_1(k,0)}{k} \\frac{dk}{2 \\pi } \\nonumber \\\\& = &\\lambda \\rho \\int _{-\\infty }^\\infty \\frac{1 - e^{-\\omega (k)T}}{\\omega (k)} \\frac{dk}{ \\pi }$ The long time behavior is extracted similarly to the deterministic case (cf.", "equation (REF )).", "We finally arrive at a simple relation between the variances in the annealed and deterministic cases: $\\langle Q^2 \\rangle _{\\rm {ann}} &=& 2^\\frac{1}{1+s}\\, \\langle Q^2 \\rangle _{\\rm {det}}$ As expected, the variance in the annealed case is enhanced compared to the deterministic setting because of the non-zero initial fluctuations.", "This result is another example of the ever-lasting influence of initial conditions [58].", "The ratio, $2^{\\frac{1}{1+s}}$ , approaches as $s \\rightarrow 1$ the value $\\sqrt{2}$ found in hard-core single-file systems such as the symmetric exclusion process [53]." 
], [ "Variance of the tagged particle position", "The tagged particle position can be determined by the fact that particles in the Riesz gas can not overtake one another.", "The motion of the tracer displaces the particles ahead of it and drives a current through the system.", "At first order in $\\lambda $ , we have from (REF ), $X(T) = \\frac{\\lambda }{\\rho } \\int _0^{\\infty } dx\\, [q_1(x,T) - q_1(x,0)]$ which differs from the integrated current only by a $1/\\rho $ factor.", "This simple relation is valid only at the first order.", "At higher orders in $\\lambda $ , the number of particles in the vicinity of the tracer is random and its statistics must also be taken into account [52], [55], [71], [80].", "Using (REF ) we find $\\langle X^2 \\rangle = \\frac{1}{\\rho ^2}\\, \\langle Q^2 \\rangle $ in the leading order.", "This is valid both for the deterministic and annealed cases.", "Therefore $\\langle X^2 \\rangle _{\\rm {ann}} =2^{\\frac{1}{1+s}}\\, \\langle X^2 \\rangle _{\\rm {det}}$ The tagged particle variance scales as a fractional Brownian motion (fBM) with Hurst exponent $\\gamma = \\frac{1}{2}\\frac{s}{1+s}$ [81], [82].", "The vanishing of the exponent as $s \\rightarrow 0$ is consistent with the logarithmic mean square displacement of a tracer in Dyson's model of interacting Brownian particles established by Spohn [43], [44].", "Note that for the symmetric exclusion process, a lattice gas with hard-core interacting particles (that heuristically corresponds to $s \\rightarrow \\infty $ ), the statistical identity between the tracer process and the fBM with exponent $\\frac{1}{4}$ has been proved in [83]." ], [ "Two-time correlations", "In this section, we investigate the two-time correlations of the process.", "The results provide further indication that a tracer in the Riesz gas behaves as a fractional Brownian motion.", "The saddle-point method used to determine quadratic fluctuations can be extended to unequal time correlations by introducing a source term in the optimal equations.", "This will allow us to calculate two-time correlations of the integrated current and the tracer's position." 
], [ "Generating functional", "To derive current-current correlations at different times, we introduce the generating functional (see [4], [84] for a detailed presentation of the formalism): $Z[\\lambda (t)] &=& \\left\\langle {\\rm exp}\\left[\\int _0^T dt\\, \\lambda (t) Q(t)\\right] \\right\\rangle \\nonumber \\\\&=& \\left\\langle e^{\\int _0^T dt\\int dx\\lambda (t) \\theta (x) (q(x,t)- q(x,0))} \\right\\rangle $ Two-time correlation function of the current at times $t_1,t_2 < T$ can be found by taking functional derivatives of this generating functional $\\mu [\\lambda (t)] = \\ln Z[\\lambda (t)]$ For example, recalling that $\\langle Q(t) \\rangle = 0$ for all $t$ , we can expand the generating functional at lowest order with respect to the source-function $\\lambda (t)$ : $\\mu [\\lambda (t)] = \\int _0^T dt \\int _0^T dt^{\\prime } \\lambda (t) \\lambda (t^{\\prime })C(t,t^{\\prime }) + \\ldots $ where $C(t,t^{\\prime }) = \\langle Q(t) Q(t^{\\prime }) \\rangle _c$ is the two-time correlation function.", "Higher order terms generate multiple-time correlation functions.", "Writing the average as a functional integral as in equation (REF ), we observe that the bulk action $S(q,p)$ given in (REF ) is tilted by the source term, $S(q,p) \\rightarrow S_{\\lambda (t)}(q,p)$ , with $S_{\\lambda (t)}(q,p) = S(q,p) - \\lambda (t) \\theta (x) (q(x,t)- q(x,0))$ Taking the functional derivative of $\\mu [\\lambda (t)]$ leads to $\\frac{\\delta \\mu [\\lambda (t)] }{\\delta \\lambda (t_1)} = \\frac{\\langle Q(t_1) \\exp (\\int _0^T \\lambda (t) Q(t)) \\rangle }{\\langle \\exp (\\int _0^T \\lambda (t) Q(t)) \\rangle } = \\langle Q(t_1) \\rangle _{[\\lambda ]}$ where the final average is over the tilted action $S_{\\lambda (t)}$ .", "Comparing with (REF ), we deduce that the two-time correlations are given by $C(t_1,t_2) = \\frac{\\delta \\langle Q(t_1) \\rangle _{[\\lambda ]} }{\\delta \\lambda (t_2)}\\Biggr |_{\\lambda \\equiv 0}$ Thus, it suffices to determine the average with respect to the tilted action to access the correlations.", "As in the previous sections, we determine the generating functional by writing the Euler-Lagrange equations and solving them perturbatively at the lowest order." 
], [ "The Euler-Lagrange equations with a source", "In presence of the source term, the tilted action $S_{\\lambda (t)}$ affects only the equation for $p$ $(\\partial _t + \\partial _x^2) p &=& - \\lambda (t) \\theta (x) - (\\partial _x p)^2 \\nonumber \\\\&-& \\mathcal {H}_{s}[q] \\partial _x p + \\mathcal {H}_{s}[q \\partial _x p]$ while $q$ satisfies the same equation (REF ) as before.", "At final time, we have $p(x,T) = 0$ The initial conditions depend on the setting: $&q(x,0)|_{\\text{det}} = \\rho \\\\&p(x,0)|_{\\text{ann}} =\\theta (x)\\int _0^T dt \\lambda (t) + \\frac{\\delta \\mathcal {F}}{\\delta q}\\bigg \\vert _{q(x,0)}$ We write $q = \\rho + q_1$ and $p = p_1$ , where $q_1$ and $p_1$ are linear functionals of $\\lambda (t)$ .", "Performing a perturbative expansion we obtain $(\\partial _t - \\partial _x^2)q_1 &=& - \\rho \\partial _x( \\mathcal {H}_{s}[q_1] + {2} \\partial _x p_1) \\\\(\\partial _t + \\partial _x^2)p_1 &=& \\rho \\mathcal {H}_{s}[\\partial _x p_1] -\\lambda (t) \\theta (x) $ in the first order.", "The Fourier transform of $p_1$ reads $\\widehat{p}_1(k, t)= \\frac{1}{{\\rm i}\\,k} \\int _t^T d\\tau \\lambda (\\tau ) {\\rm e}^{\\omega (k) (t-\\tau )}$ with $\\omega (k)$ defined in Eqs.", "(REF )–().", "We begin with the deterministic setting.", "In this case $q_1(x,0)=0$ and the solution of (REF ) reads $\\widehat{q}_1(k, t) = - 2 {\\rm i}\\,\\rho k\\int _0^t d\\tau \\int _\\tau ^T dt_2 \\lambda (t_2){\\rm e}^{\\omega (k)(2 \\tau - t_2 -t)}$ To determine the two-time correlation function we write $\\langle Q(t_1) \\rangle &=&\\int _0^\\infty dx \\left( {q}_1(x,t_1)- {q}_1(x,0) \\right) \\nonumber \\\\&=& {\\rm i}\\int _{-\\infty }^\\infty \\frac{\\widehat{q}_1(k,t_1) - \\widehat{q}_1(k,0) }{k}\\frac{dk}{2 \\pi } \\nonumber \\\\&=& \\int _0^T dt_2 \\, C(t_1,t_2)\\lambda (t_2)$ In the last step we have used (REF ) at the first order.", "Substituting $\\widehat{q}_1$ and exchanging the order of the integrals (see Appendix  for details), we determine the two-time correlation function.", "In the large time limit, this correlation function has a neat form $C_{{\\rm det}}(t_1,t_2) = W_s \\left\\lbrace (t_1 + t_2)^{\\frac{s}{1+s}} - |t_1 - t_2|^{\\frac{s}{1+s}} \\right\\rbrace $ with $W_s$ defined in Eq.", "(REF ).", "In the annealed setting, the initial condition () leads to $\\widehat{q}_1(k,0) = \\frac{ \\rho k}{{\\rm i}\\, \\omega (k) } \\int _0^T d\\tau \\lambda (\\tau ) ({\\rm e}^{-\\omega (k)\\tau } - 1)$ in the first order.", "Using this we determine $\\widehat{q}_1(k,t)$ and use Eq.", "(REF ) to calculate the generating functional at lowest order (see Appendix ).", "In the large time limit, the two-time correlation function has again a neat form $C_{{\\rm ann}}(t_1,t_2)= W_s\\left(t_1^{\\frac{s}{1+s}} + t_2^{\\frac{s}{1+s}} - |t_1-t_2|^\\frac{s}{1+s} \\right)$ As was explained in [84], [85], the results in the annealed case can be deduced from the deterministic case by letting the system evolve up to a time $t_0$ and then measuring the current $Q_{+}(t) = Q(t_0 +t) - Q(t_0)$ that has flown from that time on.", "Then, if we calculate, with the help of (REF ), the deterministic correlations for $Q_{+}$ at times $t_1,t_2$ , and assume that $ t_0 \\gg t_1,t_2$ , we obtain (REF ).", "Indeed, by shifting the time by the large duration $t_0$ , the system is effectively put into equilibrium.", "For the tracer position, the two-time correlations are obtained by multiplying the two-time current-current correlation function by the factor $\\rho ^{-2}$ .", "We observe 
that these expressions are the same as those for a fractional Brownian motion with Hurst exponent $H=\\frac{s}{2(1+s)}$ [81].", "This further suggests that the tagged particle behaves as a fractional Brownian process." ], [ "Discussion", "In this work, we have studied current and tracer fluctuations in the one-dimensional Riesz gas.", "Our main focus has been on the genuinely long-range interaction regime, $0<s<1$ , for which the collective dynamics differs significantly from that of usual single-file systems.", "We have derived integro-differential MFT-type equations, with non-local terms, that in principle allow one to probe large deviations in one-dimensional stochastic Riesz gases.", "We have investigated fluctuations of the integrated current $Q$ and the position $X$ of the tagged particle in the one-dimensional stochastic Riesz gas.", "By applying a perturbation approach to the governing MFT-type equations we have determined $\\langle Q^2 \\rangle $ and $\\langle X^2 \\rangle $ .", "The calculation of higher cumulants, e.g., $\\langle Q^4 \\rangle $ and $\\langle X^4 \\rangle $ , remains an analytical challenge.", "Before presenting some extensions and discussing open issues, we restate some of our main results in terms of the original variables so that the dependence on the coupling constant $g$ and diffusion constant $D$ becomes visible.", "In the genuinely long-range regime, $0<s<1$ , the variance of the position of the tagged particle grows as $\\langle X^2 \\rangle _{{\\rm det}} = U_s G^{-\\frac{1}{s+1}} \\rho ^{-2}(\\rho ^2 D T)^\\frac{s}{s+1}$ in the deterministic case; in the annealed case, the variance is larger by a factor $2^\\frac{1}{s+1}$ .", "The growth law (REF ) depends on the dimensionless parameter $G=g\\rho ^s/D$ measuring the relative strength of interactions versus noise; the amplitude $U_s$ is given by (REF )." ], [ "The short-range regime $s>1$", "Now, we briefly explain how the short-range regime $s>1$ can be dealt with by borrowing known results for single-file diffusion derived from the standard MFT formalism [54], [55].", "When $s >1$ , we observe from (REF ), (REF )–(REF ) that the deterministic current has the form $J=-D(q)\\partial _x q, \\quad D(q) = 1+(1+s) \\zeta (s) q^s$ We also observe from the stochastic component of the current, Eq.", "(REF ), that the mobility is given by $\\sigma (q)=2q.$ Therefore, for $s>1$ , the governing equations are bona fide MFT equations and we can use the following general formula [55] for the self-diffusion of a tracer in single-file hydrodynamics characterized by the diffusion coefficient $D(q)$ and the mobility $\\sigma (q)$ : $\\langle X^2 \\rangle |_\\text{ann} = \\sqrt{2}\\,\\langle X^2 \\rangle |_\\text{det} = \\frac{\\sigma (\\rho )}{\\rho ^2}\\sqrt{\\frac{T}{\\pi D(\\rho )}}$ By specializing to the short-range Riesz gas, we deduce $\\langle X^2 \\rangle |_\\text{det} = \\frac{1}{\\sqrt{1+(1+s)\\zeta (s)G}}\\,\\sqrt{\\frac{2 D T}{\\pi \\,\\rho ^2}}$ In the annealed case, the variance is $\\sqrt{2}$ times larger.", "When the relative strength of interaction vanishes, i.e., $G\\rightarrow 0$ , we recover the well-known behavior for Brownian particles undergoing single-file diffusion.", "The $T^{1/2}$ temporal growth remains the same, independent of $G$ , but the amplitude decays as $G$ increases." 
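In the effectively short-ranged regime the only non-trivial dependence left is the amplitude multiplying the square root of T; the snippet below, an illustration with arbitrarily chosen values of s and the Peclet number G, evaluates this amplitude from the formula above and recovers the non-interacting single-file value as G goes to 0.

```python
# Evaluation (our own illustration) of the deterministic s > 1 amplitude implied above,
#   <X^2>_det = sqrt(2 D T / (pi rho^2)) / sqrt(1 + (1 + s) zeta(s) G),
# as a function of the Riesz exponent s and the Peclet number G = g rho**s / D.
# The chosen s and G values are arbitrary; G -> 0 recovers the amplitude for
# non-interacting Brownian particles in single file (deterministic setting).
import numpy as np
from scipy.special import zeta

def amplitude(s: float, G: float, D: float = 1.0, rho: float = 1.0) -> float:
    """Prefactor of sqrt(T) in <X^2>_det for s > 1."""
    return np.sqrt(2.0 * D / (np.pi * rho**2)) / np.sqrt(1.0 + (1.0 + s) * zeta(s) * G)

for s in (1.5, 2.0, 3.0):
    for G in (0.0, 0.1, 1.0, 10.0):
        print(f"s = {s:3.1f}  G = {G:5.1f}  <X^2>_det / sqrt(T) = {amplitude(s, G):.4f}")
```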
], [ "The case $s=1$", "For the marginal case $s=1$ , that separates long-range and short-range regimes, the Riesz gas corresponds to a physically relevant system of particles interacting through the three-dimensional Coulomb potential and confined to a one-dimensional line.", "This case deserves a separate careful investigation and, here, we shall present heuristic arguments allowing us to conjecture a plausible asymptotic behavior for the tracer's fluctuations.", "When $ s \\rightarrow 1$ , the $s-$ dependent term in (REF ) diverges as $1+(1+s)\\zeta (s)\\,\\frac{g\\rho ^s}{D} \\rightarrow 2\\,\\frac{g\\rho }{D}\\,\\frac{1}{s-1}$ where we have used the asymptotic $\\zeta (s)\\simeq (s-1)^{-1}$ of the zeta function $\\zeta (s)$ near $s=1$ .", "The characteristic dimensionless diffusive length scale is $\\ell \\sim \\sqrt{\\rho ^2 DT}$ , and as long as $\\ell ^{-1}\\sim \\ell ^{-s}$ , there is no difference between the Riesz gas with exponent $s$ and the Coulomb gas with $s=1$ .", "Therefore $(\\rho ^2 DT)^{s-1}\\sim 1$ , from which we deduce $(s-1)^{-1}\\sim \\ln (\\rho ^2 DT)$ , and (REF ) becomes $1+(1+s)\\zeta (s)\\,\\frac{g\\rho ^s}{D} \\rightarrow \\, \\ln (\\rho ^2 DT), \\qquad G=\\frac{g\\rho }{D}$ Plugging this into (REF ) yields $\\langle X^2 \\rangle \\sim \\frac{1}{\\sqrt{G}}\\,\\,\\sqrt{\\frac{D T}{\\rho ^2 \\, \\ln (\\rho ^2 DT)}}$ The dependence on the Péclet number $G$ is natural in view of the behavior in $s<1$ and $s>1$ regimes, (REF ) and (REF ).", "Re-writing (REF ) as $\\langle X^2 \\rangle \\sim \\rho ^{-3/2}\\,\\,\\sqrt{\\frac{D^2 T}{g\\,\\ln (\\rho ^2 DT)}}$ emphasizes the $\\rho ^{-3/2}$ dependence on the density." ], [ "Higher dimensions", "The most interesting challenge is to extend our analysis to stochastic Riesz gases in higher dimensions ($d \\ge 2$ ).", "As the single-file phenomenon is absent for $d >1$ , the problem seems simpler at first sight.", "Also, the MFT framework admits a straightforward extension (with non-local terms if $s <d$ ).", "However, our approach based on the relation between the position and the current is no longer applicable and new ideas are required.", "We anticipate that the tagged particle behaves diffusively in the short-range $s>d$ regime: $\\langle {\\bf R}^2 \\rangle = 2d\\,F(G) DT, \\qquad s>d$ Thus $F(G) D$ is the self-diffusion coefficient, and the challenge is to compute $F(G)$ as a function of the Péclet number $G=g\\rho ^{s/d}/D$ .", "The derivation of the exact formula for $F(G)$ looks like an unattainable goal.", "(Indeed, for the simple exclusion process, the self-diffusion coefficient is unknown already on the square lattice.)", "Perhaps, one can probe the asymptotic behavior of $F(G)$ in the large $G$ limit.", "In the case of vanishingly small $G$ , we have non-interacting Brownian particles, so $F(0)=1$ .", "In the long-range regime, $s<d$ , a sub-diffusive behavior is expected, $\\langle {\\bf R}^2 \\rangle \\sim T^{\\beta (s,d)}$ , with an unknown exponent $\\beta (s,d)<1$ when $s<d$ .", "In two dimensions, the Riesz gases with $s=0, 1, 2$ are particularly interesting.", "We expect sub-diffusive behaviors for the Ginibre gas ($s=0$ ) and the Coulomb gas ($s=1$ ).", "The Calogero gas (usually studied [31], [32], [33] in one dimension) is marginal in two dimensions, so similar to (REF ) logarithmic corrections are plausible, such as $\\langle {\\bf R}^2 \\rangle \\sim DT/\\ln (\\rho DT)$ .", "The self-diffusion phenomenon in the Riesz gases in dimensions $d\\ge 2$ appears intractable with available tools.", "The 
MFT framework for high-dimensional Riesz gases could be applied, however, to more tractable problems such as void formation [59]." ], [ "Interaction energy when $s \\ge 1$", "Despite the simplicity and beauty of Eq.", "(REF ), its derivation is long and far from rigorous.", "We refer to [34] for a derivation of a similar result in the case of harmonically confined particles.", "Here we limit ourselves to a few non-rigorous arguments in favor of (REF ).", "First, we notice that Eq.", "(REF ) giving the interaction energy in the $s<1$ regime is a natural continuous version of the exact formula $\\mathcal {E} = (2s)^{-1} \\sum _{i\\ne j} \\frac{1}{|x_i-x_j|^s}$ (Every $i\\ne j$ in (REF ) appears twice, hence the factor $1/2$ .)", "A singularity at $x=y$ in the integral in Eq.", "(REF ) is integrable if $s<1$ , so we can use the integral representation (REF ) of the sum (REF ).", "The integral still diverges in an infinite system, but we sweep this issue under the carpet.", "When $s>1$ , the singularity at $x=y$ in the integral in Eq.", "(REF ) leads to divergence.", "We thus return to summation and compute the energy per particle by fixing $i$ and summing over all $j\\ne i$ , or equivalently over $n=j-i$ $\\mathfrak {e} = (2s)^{-1}\\sum _{n\\ne 0}\\frac{1}{|n/q(x)|^s}=s^{-1}\\zeta (s) q^{s}$ We have relied on a crucial assumption that the spatial distribution of particles is locally equidistant; see [34] for justification.", "This assumption has allowed us to write $x_j-x_i=n/q(x)$ in (REF ), where $x$ means $x_i$ .", "The total energy is $\\mathcal {E}[q] = \\int dx\\,q\\mathfrak {e}= s^{-1}\\zeta (s) \\int dx\\,q^{s+1}$ with extra $q$ in the first integral since $\\sum _i\\rightarrow \\int dx\\,q(x)$ .", "Equation (REF ) is the announced Eq.", "(REF ).", "When $s=1$ , the sum in (REF ) diverges.", "It seems reasonable to use the diffusive scale $\\sqrt{T}$ as an upper cutoff in the sum.", "Thus (REF ) gives $\\mathfrak {e} = \\sum _{n=1}^{\\sqrt{T}}\\frac{q(x)}{n}=q(x)\\, \\ln \\sqrt{T} = \\tfrac{1}{2}q \\ln T$ Therefore $\\mathcal {E}[q] = \\int dx\\,q\\mathfrak {e} = \\tfrac{1}{2}\\ln T \\int dx\\,q^2$ and in the long-time limit $D(q) = q\\ln T$ If one trusts these heuristic arguments, one gets $\\langle X^2 \\rangle |_\\text{ann} = \\sqrt{2}\\,\\langle X^2 \\rangle |_\\text{det} = \\rho ^{-3/2}\\,\\sqrt{\\frac{4T}{\\pi \\ln T}}$ In Sec.", ", we presented heuristic arguments leading to Eq.", "(REF ) for the variance.", "This result is consistent with Eq.", "(REF ).", "The only difference is that Eq.", "(REF ) is agnostic to the numerical pre-factor.", "(If in the estimate (REF ) we take the scale $T^{1/4}$ of a typical displacement of a tagged particle as an upper cutoff, this would enhance (REF ) by a factor $\\sqrt{2}$ .)", "A more careful treatment of the $s = 1$ model is a worthwhile endeavor [86]." ], [ "Derivation of Eqs. 
(", "In this appendix, we fill some missing steps in the calculations of the two-time correlation functions.", "For the deterministic case, we take $\\widehat{q}_1$ given by (REF ) and substitute it into the second line in (REF ).", "This gives $\\langle Q(t_1) \\rangle = {\\rm i}\\int _{-\\infty }^\\infty \\frac{\\widehat{q}_1(k,t_1) - \\widehat{q}_1(k,0) }{k}\\frac{dk}{2 \\pi } = \\frac{ \\rho }{\\pi }\\int _{-\\infty }^\\infty dk\\int _0^{t_1} d\\tau \\int _\\tau ^T dt_2 \\lambda (t_2){\\rm e}^{\\omega (k)(2 \\tau - t_2 -t_1)}$ We split the integral over $t_2$ as $\\int _\\tau ^T = \\int _\\tau ^{t_1} + \\int _{t_1}^T$ , exchange the order of the integrals over $\\tau $ and $t_2$ and evaluate the integrals over $\\tau $ .", "This gives $\\langle Q(t_1) \\rangle &=& \\frac{ \\rho }{\\pi }\\int _{-\\infty }^\\infty dk\\left[\\int _0^{t_1}dt_2 \\lambda (t_2) \\int _0^{t_2} d\\tau \\, {\\rm e}^{\\omega (k)(2 \\tau - t_2 -t_1)}+ \\int _{t_1}^T dt_2 \\lambda (t_2)\\int _0^{t_1} d\\tau {\\rm e}^{\\omega (k)(2 \\tau - t_2 -t_1)} \\right]\\nonumber \\\\&=& \\frac{ \\rho }{\\pi }\\int _{-\\infty }^\\infty dk\\left[ \\int _0^{t_1} dt_2 \\, \\lambda (t_2)\\,\\frac{{\\rm e}^{\\omega (k)(t_2 -t_1)} - {\\rm e}^{-\\omega (k)(t_2 +t_1) }}{2 \\omega (k)} + \\int _{t_1}^T dt_2 \\, \\lambda (t_2)\\,\\frac{{\\rm e}^{\\omega (k)(t_1 -t_2)} - {\\rm e}^{-\\omega (k)(t_2 +t_1) }}{2 \\omega (k)}\\right]\\nonumber \\\\&=& \\frac{ \\rho }{\\pi } \\int _{0}^T dt_2 \\, \\lambda (t_2)\\int _{-\\infty }^\\infty dk\\,\\frac{{\\rm e}^{-\\omega (k)|t_1 -t_2|} - {\\rm e}^{-\\omega (k)(t_2 +t_1) }}{2 \\omega (k)}$ The last integral over $k$ represents a two-time correlation function [cf.", "Eq.", "(REF )].", "Subtracting 1 from the first term in the numerator and adding 1 to the second term, we analyze the asymptotic behavior of these two integrals using the same method as in deriving the asymptotic (REF ).", "It suffices to identify $2T \\rightarrow |t_1 -t_2|$ in one integral and $2T \\rightarrow t_2 +t_1$ in the other.", "This completes the derivation of Eq.", "(REF ).", "In the annealed case, taking into account $\\widehat{q}_1(k,0)$ given by (REF ), the solution of (REF ) becomes: $\\widehat{q}_1(k,t) = \\frac{ \\rho k}{{\\rm i}\\, \\omega (k) } {\\rm e}^{-\\omega (k)t} \\int _0^T d\\tau \\, \\lambda (\\tau ) ({\\rm e}^{-\\omega (k)\\tau } - 1)- 2 {\\rm i}\\,\\rho k\\int _0^t d\\tau \\int _\\tau ^T dt_2\\, \\lambda (t_2){\\rm e}^{\\omega (k)(2 \\tau - t_2 -t)}$ The second term on the right-hand side is the same as in the deterministic case.", "We only need to evaluate the contribution of the first term.", "After a bit of algebra we arrive at an integral $\\rho \\int _0^T dt_2\\, \\lambda (t_2) \\int _{-\\infty }^\\infty \\frac{dk}{2 \\pi }\\left( \\frac{1 - {\\rm e}^{-\\omega (k)t_1} }{ \\omega (k) }+ \\frac{1 - {\\rm e}^{-\\omega (k)t_2} }{ \\omega (k) }- \\frac{1 - {\\rm e}^{-\\omega (k)(t_1 +t_2)} }{ \\omega (k) } \\right)$ When $t_1$ and $t_2$ are large, we use again the same calculation as in deriving the asymptotic (REF ).", "We find that the $k$ -integral in (REF ) behaves as $W_s\\left(t_1^{\\frac{s}{1+s}} + t_2^{\\frac{s}{1+s}} - |t_1-t_2|^\\frac{s}{1+s} \\right)$ Adding this contribution to the asymptotic of the second term on the right-hand side of (REF ), which is just the two-time correlation function in the deterministic case, we arrive at the announced Eq.", "(REF ).", "Acknowledgments.", "We are thankful to H. Spohn for an inspiring discussion and to S. 
Mallick for a careful reading of the manuscript.", "The work of K.M.", "has been supported by the project RETENU ANR-20-CE40-0005-01 of the French National Research Agency (ANR)." ] ]
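As a quick numerical illustration of the lattice-sum argument above (not part of the original derivation), the snippet below checks that the truncated sum (2s)^{-1} Σ_{n≠0} |n/q|^{-s} approaches ζ(s) q^s / s for a locally equidistant configuration with spacing 1/q; the values of s, q, and the cutoff are arbitrary choices, and convergence is slow as s approaches 1.

```python
import numpy as np
from scipy.special import zeta

def energy_per_particle_sum(s, q, n_max=200_000):
    """Truncated lattice sum (2s)^{-1} * sum_{n != 0} |n/q|^{-s},
    assuming a locally equidistant configuration x_j - x_i = n/q."""
    n = np.arange(1, n_max + 1)
    return np.sum((n / q) ** (-s)) / s   # the two signs of n cancel the factor 1/2

def energy_per_particle_formula(s, q):
    """Closed form e = zeta(s) * q^s / s quoted in the text (valid for s > 1)."""
    return zeta(s) * q ** s / s

for s in (1.5, 2.0, 3.0):
    q = 1.7  # arbitrary local density
    print(s, energy_per_particle_sum(s, q), energy_per_particle_formula(s, q))
```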
2212.05583
[ [ "Detecting Code Injections in Noisy Environments Through EM Signal\n Analysis and SVD Denoising" ], [ "Abstract The penetration of embedded devices in networks that support critical applications has rendered them a lucrative target for attackers and evildoers.", "However, traditional protection mechanisms may not be supported due to the memory and computational limitations of these systems.", "Recently, the analysis of electromagnetic (EM) emanations has gathered the interest of the research community.", "Thus, analogous protection systems have emerged as a viable solution e.g., for providing external, non-intrusive control-flow attestation for resource-constrained devices.", "Unfortunately, the majority of current work fails to account for the implications of real-life factors, predominantly the impact of environmental noise.", "In this work, we introduce a framework that integrates singular value decomposition (SVD) along with outlier detection for discovering malicious modifications of embedded software even under variable conditions of noise.", "Our proposed framework achieves high detection accuracy i.e., above 93\\% AUC score for unknown attacks, even for extreme noise conditions i.e., -10 SNR.", "To the best of our knowledge, this is the first time this realistic limiting factor, i.e., environmental noise, is successfully addressed in the context of EM-based anomaly detection for embedded devices." ], [ "Introduction", "Today, embedded devices have become an indispensable component of a wide range of heterogeneous applications.", "Such devices are deployed to support even mission-critical tasks of Industrial Control Systems (ICS) within Critical Infrastructures (CI).", "Therefore, it does not come as a surprise that embedded devices have become targets of cyberattacks.", "Nonetheless, due to their limited on-chip processing capabilities and reliance in proprietary OS and toolchains, the installation of traditional means of protection such as anti-malware or Host-based Intrusion Detection Systems (H-IDS) is deemed impractical.", "Because of this, the applicability of external means of control-flow attestation has attracted the interest of the research community.", "While several alternative solutions have been considered, the monitoring of systems based on side-channel analysis has gained traction.", "Among the different types of side-channels (power [9] acoustic [1], thermal [7]), the analysis of electromagnetic (EM) signals is preferable because it offers high bandwidth and enables the monitoring of the Central Processing Unit (CPU) activity at fast sampling rates [10],[5].", "Previous work in the area [11] [10],[5]has shown that it is possible to achieve high accuracy for detecting significant violations in the execution flow of a monitored program under controlled laboratory environments.", "However, minimal code injections of just a few instructions may be more stealthy and challenging to detect especially in noisy environments.", "In this work, we propose a framework for reliably identifying code injection attacks of arbitrary lengths of code even under the influence of variable levels of environmental noise.", "At the core of the framework lies a non-destructive, noise-reduction technique that is based on the Singular Value Decomposition (SVD) method.", "The analysis process is not based on supervised Machine Learning (ML), but rather it is based on an outlier detection strategy at the core of which lies the well-known Local Outlier Factor (LOF) [3] method.", "This design 
decision renders the system capable of detecting even unseen/unknown attacks, i.e.", "the injection of variable lengths of code and alternative types of instructions that have not been analyzed before." ], [ "Technical Background & Definitions", "The execution of instructions by the CPU of a given device results in changes in the flow of electric current inside the CPU's circuitry.", "This change produces a magnetic field that interacts with the electric field, resulting in an EM field.", "Moreover, the components of the printed circuit board act as antennas.", "Thus, the board unintentionally transmits EM signals that are highly correlated with the instructions running at the CPU.", "These emanations can be captured by placing a probe near the source of the signal [5].", "It is generally accepted that it is possible to identify (at bare minimum) the active execution state of a program by analyzing such analog signals.", "Figure: Comparison between a clean signal (red) and the noisy version of the same signal (blue) for various SNR levels. Code injection attacks start by exploring the target software for pre-existing vulnerabilities that allow an attacker to inject malicious code.", "During the exploitation step, the instructions that are injected change the original logic of the program or forge a new (malicious) execution path, thus altering the original control flow.", "Attacks based on buffer overflows are examples of such malicious activity.", "The EM signals emitted by the CPU are amplitude modulated according to the corresponding instruction, with a carrier signal whose base frequency is that of the clock of the monitored CPU [8].", "Therefore, different instructions can theoretically be distinguished by observing the amplitude of the signal across time.", "While previous research in the field has shown that it is possible to detect as little as a one-instruction injection with an average AUC rate of 98.7% in low-noise environments [11], a decrease in the detection rate should be expected when the noise level increases.", "Figure REF displays the same signal at different noise levels.", "It is clear that important signal artifacts get concealed behind noise for the low SNR conditions.", "At the same time, the utilization of noise filtering might affect the anomaly detection process due to the destructive nature of the corresponding methods.", "In this respect, previous work in the area [4], [11] shows that SVD denoising is superior to other noise elimination techniques.", "Briefly, SVD decomposes a 2-D matrix into three components (matrices $U$ , $\\Sigma $ , $V$ ).", "$\\Sigma $ is a diagonal matrix where the values along the main diagonal correspond to the singular values of the 2-D matrix.", "In the context of noisy signals, we assume that the 2-D matrix is the composition of a clean signal and noise.", "Then each of $U$ , $\\Sigma $ , $V$ can still be expressed as the composition of two signal subspaces: the clean signal and the noise.", "In order to denoise the signal, all singular values corresponding to noise must be set to zero, thus creating a new approximation of the singular matrix, $\\Sigma _{new}$ .", "Figure REF (right) illustrates this transformation.", "After this, a denoised version of the signal $X_s$ can be obtained as the matrix product of $U$ , $\\Sigma _{new}$ , and $V^T$ .", "More specifically, this can be expressed as in Equation REF .", "$\\begin{split}X_s & = U \\Sigma _{new} V^T = U \\begin{bmatrix} \\sigma _s & 0 \\\\ 0 & \\sigma _n = 0\\end{bmatrix} V^T\\end{split}$ where 
$X_s$ is the clean signal, $U$ is the left singular matrix, $\\sigma _s$ are the singular values that correspond to the signal, $\\sigma _n$ are the singular values that correspond to noise and $V^T$ is the right singular matrix transposed.", "One challenge with this approach is to accurately pinpoint the number of singular values corresponding to the signal.", "In other words, identifying the point in the $\\Sigma $ matrix that accurately partitions the signal and the noise subspaces.", "That point is referred to as Cutting Point.", "Hassanpour et al.", "[6] identified this point as the one where the slope of the curve of the corresponding singular values changes drastically.", "Figure: Position of the cutting point where the slope of the curve of the corresponding singular values changes drastically.", "Noise reduction can be achieved by zeroing out singular values after that point." ], [ "Proposed Framework", "Our proposed framework specifies practices for: (a) obtaining the EM signals from the subject device, (b) performing signal pre-processing to reduce noise while maintaining important characteristics that correspond to anomalies, (c) fingerprinting the morphology of EM signals corresponding to normal operations (training), and finally (d) quantifying the level of disparity of these signals versus the ones obtained on the field towards recognizing anomalous operations.", "Signal Acquisition: To obtain the signals, the antenna must be placed in close proximity to the target device.", "The distance depends mainly on the size of the CPU chip and the enclosure/chassis of the device.", "The collection of a large number of samples that correspond to each potential execution path of instructions (e.g., same loop) is needed to achieve efficient detection.", "From empirical data, the number of signal-observations of the same operation, should be at least the same as the number of samples in each signal-observation during the fingerprinting phase.", "During the deployment phase, this number should be double.", "According to Vedros et al.", "[11] high sampling rates, i.e., around x16 the CPU clock speed (with higher sampling rates providing diminishing returns), are ideal for anomaly detection purposes, while at the same time they provide robustness against noise and can reliably detect even small code injections.", "Noise Reduction & Pre-processing: In this work, we apply a modification of the SVD denoising as a preprocessing step to reduce the noise level in the captured signals.", "The applied adaptations revolve around two axes.", "On the one hand, the method is applied to multiple examples (as opposed to a single) describing the same phenomenon, i.e., the same execution path.", "This significantly speeds up the denoising process and alleviates the need for bringing individual observations to their Hankel Matrix representation as done in [6], thus reserving memory.", "On the other hand, the signals are denoised by considering a higher-than-usual values for cutting points.", "Although this decision may appear counter-intuitive, the reader should keep in mind that anomaly detection and not noise elimination is the main objective.", "Therefore, since the only characteristic that distinguishes different instructions is the difference in the amplitude of the signal, we must take special care in preserving those differences during the denoising process.", "[1] Anomaly Detection Algorithm Detectbenign dataset $X$ , test observation $q$ $status_q \\leftarrow 0$ $\\forall i \\in X$ $s_{x_i} 
\\leftarrow LOF(X, x_i)$ $S_i \\leftarrow s_{x_i}$ $s_q \\leftarrow LOF(X, q)$ $\\forall s_{xi} \\in S$ $s_{x_i} \\ge s_q$ $n \\leftarrow n + 1$ $p_q \\leftarrow \\frac{n+1}{|x|+1}$ $\\tau \\leftarrow 1 - confidence$ $p_{max} <= \\tau $ $status_q \\leftarrow 1$ $status_q$ Anomaly Detection: Our EM signal anomaly detection strategy is a modification of the transduction and hypothesis testing algorithm introduced by Barbara et al.", "in [2].", "The basic assumption of this method is that benign instances are easy to obtain in real-life conditions.", "Meanwhile, malicious cases, while bearing differences compared to benign observations, are highly unpredictable regarding the exact location, the morphology, and the extent of the disparity.", "Thus, the anomalous cases that can be observed in real-life may potentially be infinite.", "The latter is the primary reason why we chose to approach the problem at hand as an outlier detection problem rather than a classification problem.", "Transduction is carried out by placing an unknown signal in a known sample distribution of data and then carrying out hypothesis testing; it determines whether that instance is a good fit.", "The process described above is given in Algorithm .", "In further detail, the proposed approach requires that the following steps are executed: Step 1: Collect a population of normal signals.", "Here, we assume that during the fingerprinting step all examples correspond to a benign operation and the device is not infected.", "This will be the baseline of signals.", "Step 2: Apply the LOF algorithm for each example in the baseline to quantify the strangeness (i.e., the level of unfitness) of that signal with the rest.", "The reader may recall that LOF has been proven to have superior discrimination power over simpler methods like K-Nearest Neighbors (KNN) due to the fact that it takes into consideration the density of the neighboring points relative to the density of the point in consideration that is why it became the method of choice.", "This will be the baseline strangeness distribution.", "Step 3: During the deployment phase, obtain a signal we wish to test, and compute its unfitness score with respect to the baseline constructed in Step 1 by using LOF.", "Then, transduce that value against the baseline strangeness distribution that was constructed in the previous step.", "Step 4: As a result of the previous step, one obtains a fraction of the number of signals whose unfitness score is greater or equal to the unfitness score of the test point to the total signals considered.", "This fraction can be considered a p-value for a statistical hypothesis test, where the null hypothesis is this point belongs to the baseline distribution (i.e., it can be considered normal), while the alternative hypothesis is this point does not fit the distribution, (i.e.", "it needs to be deemed an anomaly).", "Figure: A graphical representation of described tank filling scenario." 
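To make the transduction step above concrete, here is a minimal sketch of the anomaly detection algorithm using scikit-learn's LocalOutlierFactor; the feature representation of the (denoised) signal observations, the number of neighbors, and the confidence level are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def detect(baseline, q, confidence=0.95, n_neighbors=20):
    """Transduction + hypothesis test over LOF strangeness scores.
    baseline : (N, d) array of benign signal observations (fingerprinting phase)
    q        : (d,) array, the observation to test (deployment phase)
    Returns 1 if q is flagged as an anomaly, 0 otherwise."""
    lof = LocalOutlierFactor(n_neighbors=n_neighbors, novelty=True).fit(baseline)
    s_baseline = -lof.negative_outlier_factor_       # strangeness of each benign signal
    s_q = -lof.score_samples(q.reshape(1, -1))[0]    # strangeness of the test signal
    n = np.sum(s_baseline >= s_q)                    # benign signals at least as strange as q
    p_value = (n + 1) / (len(baseline) + 1)          # transductive p-value
    return int(p_value <= 1.0 - confidence)

# Toy usage with synthetic "signals": an amplitude shift stands in for an injected instruction.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(200, 64))
print(detect(baseline, rng.normal(0.0, 1.0, size=64)),   # clean observation -> 0
      detect(baseline, rng.normal(2.0, 1.0, size=64)))   # strongly shifted observation -> 1
```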
], [ "Experimental Setup", "To evaluate the proposed framework, we created an experimental setup consisting exclusively of low-cost, off-the-shelf components.", "A simple control process emulating a tank filling system was used as the software-to-be-monitored for malicious modifications (Figure REF ).", "The control logic was implemented in the AVR assembly language and installed in an Arduino Mega with an ATmega2560 CPU clocked at 16MHz.", "The choice of language was made to have better control over the actual instructions being executed at the CPU.", "The two adversarial cases considered in this work include the injection of ADD and JMP instructions, respectively.", "An ADD instruction consumes one CPU cycle, while a JMP instruction takes three cycles.", "Naturally, the injection of these malicious instructions causes a displacement of one and three cycles.", "Observations: By comparing the amplitude of the signals we can verify that there is a difference between the normal (expected CLR) and malicious instructions (ADD and JMP) (Figure REF ).", "Figure: Zoom-in at the critical section where the injection occurred.", "The reader should compare the difference amplitude for each instruction that was injected vs the normal instruction.Data Gathering: We used a near-field probe placed directly on top of the device's CPU.", "This was done to obtain EM readings that are virtually noise-free.", "Later on, synthetic random noise having a Gaussian distribution was added to each signal.", "We considered noise levels of 10 SNR, 5 SNR, 0 SNR, -5 SNR, and -10 SNR.", "Evaluation: To evaluate the detection accuracy of the proposed system, we used the ten-fold cross-validation method.", "More specifically, 90% of the normal dataset was withheld and used for training, and 10% of the remaining normal dataset, along with the same amount of anomalous signals, was used for testing purposes in each of the folds.", "The average from the folds was calculated and reported.", "The predictive accuracy was evaluated using the rea Under Curve (AUC) score of the corresponding receiver operator characteristic (ROC) curve." ], [ "Experimental Evaluation", "Four sets of experiments were conducted to evaluate the efficiency of our framework.", "The reader should keep in mind that all experiments consider the injection of a minimum number of instructions, i.e., one.", "Therefore, all results presented in this section define a lower bound (minimum) of predictive accuracy." ], [ "Considering No Pre-processing for Noise Reduction", "As a first experiment, we wanted to identify the performance of the anomaly detection algorithm when no noise elimination is applied.", "Table REF contains the results (AUC scores) of the experiments.", "The results indicate that the AUC score is near-perfect when the system is deployed in a relatively clean environment (i.e., 10 to 5 SNR).", "However, the predictive accuracy rapidly drops to very poor levels (below 70%) when considering SNR below 5dB.", "Conclusions: A noise reduction step is necessary, especially when the system is expected to operate at SNR levels below 5 dB." 
], [ "Considering SVD with Traditional Cutting Points for Noise Reduction", "As a next experiment, we introduced an additional pre-processing step aiming at noise reduction.", "For this experiment we assumed that the noise levels remain the same between fingerprinting and deployment phases.", "In this experiment, the parameter cutting point is chosen according to traditional criteria i.e., where the slope of the curve in the corresponding singular values graph changes drastically [6].", "Notice that based on the chosen criteria in all of our experiments the value of the cutting point was dictated to be 1 (i.e., only on singular value is retained).", "The results of this experiment are given in Table REF .", "In parentheses the value of parameters (a) cutting point used in training phase, (b) cutting point used in deployment phase, and (c) number of neighbors.", "Based on the results, the application of SVD with traditional cutting points drastically degrades the anomaly detection accuracy for all cases.", "Apparently, such an aggressive noise reduction procedure eliminates important characteristics that are indicative of the anomalies.", "Conclusion: The application of SVD for noise reduction as a pre-processing step when using traditional cutting points, has a negative impact on the anomaly detection process.", "Therefore, new criteria for choosing optimal cutting points should be identified." ], [ "Statistically Inferring Cutting Points for Anomaly Detection", "In the next set of experiments, we use a brute force approach to find the optimal cutting points across all considered noise environments.", "By increasing the value of the cutting point parameter, we are able to achieve a significant increase in predictive accuracy.", "Nevertheless, a dichotomy exists.", "Discarding a lot of singular values may lead to the elimination of important characteristics indicative to anomalies, while retaining too many may result in retaining noise that may conceal or distort these artifacts-of-interest.", "Detailed results for each experiment, along with the chosen hyperparameters, are given in Table REF .", "An interesting observation is that the values for the cutting points and the number of neighbors are the same, regardless of the types of injections.", "This indicates that the identified parameters for a given type of injection at a specific level of noise are potentially transferable across different types of injections.", "By plotting those values (Figure REF ), we can statistically derive an Equation REF that estimates the value of the optimal cutting point based on the active noise level.", "More specifically, the formula is given as: $\\widehat{\\mathcal {T}} = 9.7915 \\cdot e^{0.0916 \\cdot n}$ where $n$ is the noise level in the training and deployment environments; $\\widehat{\\mathcal {T}}$ is the value for the cutting points that need to be applied to reduce noise in the training and deployment environments.", "This is a number of singular values that will be retained during SVD denoising.", "Figure: Statistical analyses of cutting points based on the noise level.Conclusions: (a) SVD denoising with cutting points that lead to the retention of more singular values provides high predictive accuracy even for minimal code injections and for extremely high levels of environmental noise; (b) It is possible to statistically infer a formula for identifying (near-optimal) cutting points for anomaly detection, regardless of the type of injection; (c) The cutting points identified through the 
formula achieved an appropriate level of denoising regardless of the type of injection." ], [ "Evaluating Considering Variable Noise Levels", "The subsequent set of experiments aims to quantify the predictive accuracy of our approach assuming that the noise levels have increased from the fingerprinting to the testing phase.", "The results of this experiment are provided in Table REF .", "In parentheses are the hyperparameters used to obtain the results, namely the cutting point during the fingerprinting phase, the cutting point during the deployment phase, and the number of neighbors.", "The reader may notice that different cutting points were used for the two phases.", "All cutting points were derived automatically from Equation REF without further analysis.", "Utilizing the new strategy for inferring cutting points dramatically increases the anomaly detection accuracy to levels higher than 95% (AUC score) for the majority of the cases.", "The only exception to this rule is the case of having an extremely noisy deployment environment, i.e., -10 SNR.", "In this case, the best condition is to train the baseline in an equally noisy environment, which yields AUC scores of 92.20% and 93.07%, respectively.", "For the cases where the noise levels differ, an average AUC of 86.89% for ADD and 88.73% for JMP is expected.", "These results constitute a dramatic improvement over applying SVD noise reduction with conventional cutting points.", "Conclusions: The proposed framework can achieve highly accurate anomaly detection even if the noise levels drastically increase from the fingerprinting to the deployment stage." ], [ "Conclusion", "EM-based anomaly detection systems may be particularly advantageous in the realm of embedded devices because they are able to detect modifications in software, such as the injection of foreign instructions, remotely and without burdening the target system.", "However, the majority of works in the area do not consider the impact of environmental noise.", "In this work, we introduced an EM-based anomaly detection framework that is robust against environmental noise.", "Through experiments, we proved that the proposed system yields highly accurate predictions, i.e., a 94-95% AUC score, even in environments of extremely high noise levels, by utilizing the SVD technique to achieve noise reduction.", "Among the most important contributions of this work is an equation that can infer values for the highly sensitive denoising hyperparameters (i.e., cutting points).", "Acknowledgments. Prepared as part of the Laboratory Directed Research and Development program under DOE Idaho Operations Office Contract DE-AC07-05ID14517.", "We also acknowledge the Resilient Control and Instrumentation Systems (ReCIS) program." ] ]
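For reference, the fitted cutting-point relation reported in the preceding analysis, 𝒯̂ = 9.7915·e^{0.0916·n} with n the SNR in dB, can be evaluated directly; rounding to the nearest integer is our assumption, since the number of retained singular values must be an integer.

```python
import math

def inferred_cutting_point(snr_db):
    """Number of singular values to retain, from the fitted relation
    T_hat = 9.7915 * exp(0.0916 * n); rounding is our assumption."""
    return round(9.7915 * math.exp(0.0916 * snr_db))

for snr_db in (10, 5, 0, -5, -10):          # the SNR levels used in the experiments
    print(snr_db, inferred_cutting_point(snr_db))
```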
2212.05643
[ [ "On a sum of a multiplicative function linked to the divisor function\n over the set of integers B-multiple of 5" ], [ "Abstract Let $d(n)$ and $d^{\\ast}(n)$ be the numbers of divisors and the numbers of unitary divisors of the integer $n\\geq1$.", "In this paper, we prove that \\[ \\underset{n\\in\\mathcal{B}}{\\underset{n\\leq x}{\\sum}}\\frac{d(n)}{d^{\\ast}% (n)}=\\frac{16\\pi% %TCIMACRO{\\U{b2}}% %BeginExpansion {{}^2}% %EndExpansion }{123}\\underset{p}{\\prod}(1-\\frac{1}{2p% %TCIMACRO{\\U{b2}}% %BeginExpansion {{}^2}% %EndExpansion }+\\frac{1}{2p^{3}})x+\\mathcal{O}\\left( x^{\\frac{\\ln8}{\\ln10}+\\varepsilon }\\right) ,~\\left( x\\geqslant1,~\\varepsilon>0\\right) , \\] where $\\mathcal{B}$ is the set which contains any integer that is not a multiple of $5,$ but some permutations of its digits is a multiple of $5.$" ], [ " @softinputwarning @testin @oriescapechar @prmezera @prdimen @pagfile @pkgfile @metafile @referencesfile @endtoks @usepackagetoks" ] ]
2212.05549
[ [ "Cross-Modal Learning with 3D Deformable Attention for Action Recognition" ], [ "Abstract An important challenge in vision-based action recognition is the embedding of spatiotemporal features with two or more heterogeneous modalities into a single feature.", "In this study, we propose a new 3D deformable transformer for action recognition with adaptive spatiotemporal receptive fields and a cross-modal learning scheme.", "The 3D deformable transformer consists of three attention modules: 3D deformability, local joint stride, and temporal stride attention.", "The two cross-modal tokens are input into the 3D deformable attention module to create a cross-attention token with a reflected spatiotemporal correlation.", "Local joint stride attention is applied to spatially combine attention and pose tokens.", "Temporal stride attention temporally reduces the number of input tokens in the attention module and supports temporal expression learning without the simultaneous use of all tokens.", "The deformable transformer iterates L times and combines the last cross-modal token for classification.", "The proposed 3D deformable transformer was tested on the NTU60, NTU120, FineGYM, and Penn Action datasets, and showed results better than or similar to pre-trained state-of-the-art methods even without a pre-training process.", "In addition, by visualizing important joints and correlations during action recognition through spatial joint and temporal stride attention, the possibility of achieving an explainable potential for action recognition is presented." ], [ "Introduction", "Spatiotemporal feature learning is a crucial part of action recognition, which aims to fuse not only the spatial features of each frame but also the temporal correlation between input sequences.", "Previous studies on action recognition [19], [6], [5], [42], [9], [47] investigated the application of 3D convolutional kernels with an additional temporal space beyond the 2D spatial feature space.", "Since then, 3D convolutional neural networks (CNN) have achieved a promising performance and have eventually become the de facto standard for various action recognition tasks using sequential data.", "Vision transformers (ViTs) for action recognition, which have peaked in popularity, have recently been used to explore a 3D token embedding to fuse the temporal space within a single token.", "However, ViTs-based action recognition methods [1], [34] are limited in that they can only conduct spatiotemporal feature learning within restricted receptive fields.", "To avoid this problem, several studies [15], [56], [46] have been conducted to allow more flexible receptive fields for deep learning models.", "Deformable CNN leverage dynamic kernels to capture the intense object regions.", "First, they determine the deformable coordinates using embedded features.", "The kernel is then applied to the features extracted from the deformable coordinates.", "Deformable ViTs [46], [56] encourage the use of an existing attention module to learn deformable features.", "The query tokens are projected onto the coordinates to obtain deformable regions from the key and value tokens.", "The deformed value tokens are then applied to the attention map, which is generated through a scaled dot product of the input query and deformed key tokens.", "These methods suggest a new approach that can overcome the limitations of existing standardized feature learning.", "However, despite some impressive results, these studies are still limited in that they are only 
compatible with the spatial dimensions.", "Therefore, as a primary challenge, there is a need for the development of novel and deformable ViTs that can learn spatiotemporal features from image sequences.", "Another challenge is the efficient application of multimodal input features to an action recognition model.", "Action recognition is classified into three categories based on the feature type.", "The first is a video-based approach [55], [4], [45], [29], [20], [43], [33], which has traditionally been used for action recognition.", "This approach is limited by a degraded performance caused by noise, such as varying object sizes, occlusions, or different camera angles.", "The second is a skeleton-based approach [50], [25], [12], [13], [11], which mainly converts poses into graphs for recognizing actions through a graph neural network (GNN).", "Although this approach is robust against noise, its performance is highly dependent on the pose extraction method.", "To overcome the shortcomings of the previous two approaches, the third method aims to simultaneously fuse heterogeneous domain features using multimodal or cross-modal learning.", "With this approach, video and skeleton features are jointly trained simultaneously.", "However, because most related studies use a separate model composed of a GNN + CNN or CNN + CNN for each modality, there is a limit in constructing an effective single model.", "To alleviate the drawbacks stated above, we propose the use of transformer with 3D deformable attention for dynamically utilizing the spatiotemporal features for action recognition.", "In this way, the proposed model applies flexible cross-modal learning, which handles the skeletons and video frames in a single transformer model.", "The skeletons are projected onto sequential joint tokens, and each joint token contains an activation at every joint coordinate.", "To provide effective cross-modal learning between each modality, the proposed method adopts a cross-modal token that takes the role of mutually exchanging contextual information.", "Therefore, the proposed model is capable of achieving a boosted performance without an auxiliary submodel for the cross-modalities.", "Figure REF shows a comparison between the previous full attention and the proposed 3D deformable attention.", "In the case of the full attention shown in Fig.", "REF (a), all tokens within a spatiotemporal space are covered against a specific query token.", "By contrast, our proposed 3D deformable attention scheme, shown in Fig.", "REF (b), considers only tokens with high relevance within the entire spatiotemporal space.", "The main contributions of this study are as follows: We propose the first 3D deformable attention that adaptively considers the spatiotemporal correlation within a transformer as shown in Fig.", "REF (b), breaking away from previous studies that consider all tokens against a specific query in a complete sequence.", "We propose a cross-modal learning scheme based on complementary cross-modal tokens.", "Each cross-modal token delivers contextual information between the different modalities.", "This approach can support a simple yet effective cross-modal learning within a single-transformer model structure.", "We present qualitative evidence for 3D deformable attention with visual explanations and prove that the proposed model outperforms several previous state-of-the-art (SoTA) methods." 
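To make the first contribution more concrete, the following is a minimal PyTorch sketch of the kind of 3D deformable sampling proposed here: offsets are predicted by a small Conv3D head and key/value tokens are gathered at the deformed reference points of the T x H x W token volume. The kernel sizes, the pooling to a coarse reference grid, and the output grid size are illustrative assumptions and do not reproduce the authors' exact 3DTS module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Deformable3DSampling(nn.Module):
    """Predict (t, y, x) offsets with a Conv3D head and sample tokens from the
    spatiotemporal token volume at the deformed reference points."""
    def __init__(self, dim, out_size=(4, 7, 7)):
        super().__init__()
        self.out_size = out_size
        self.offset_net = nn.Sequential(
            nn.Conv3d(dim, dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv3d(dim, 3, kernel_size=3, padding=1),   # per-point (t, y, x) offset
        )

    def forward(self, tokens):                              # tokens: (B, C, T, H, W)
        B = tokens.shape[0]
        coarse = F.adaptive_avg_pool3d(tokens, self.out_size)
        offsets = self.offset_net(coarse).tanh()            # offsets in [-1, 1]
        t, h, w = self.out_size
        zs, ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, t), torch.linspace(-1, 1, h),
            torch.linspace(-1, 1, w), indexing="ij")
        ref = torch.stack((xs, ys, zs), dim=-1).to(tokens)   # regular reference grid (x, y, z)
        grid = ref.unsqueeze(0).expand(B, -1, -1, -1, -1) \
             + offsets.permute(0, 2, 3, 4, 1).flip(-1)       # reorder offsets to (x, y, z)
        deformed = F.grid_sample(tokens, grid.clamp(-1, 1), align_corners=True)
        return deformed                                      # (B, C, t, h, w) deformed tokens

x = torch.randn(2, 64, 8, 14, 14)            # RGB tokens: batch, C, T, H/8, W/8
print(Deformable3DSampling(64)(x).shape)     # torch.Size([2, 64, 4, 7, 7])
```

In the full model, the sampled tokens would be projected with the key and value weights and attended to by the query tokens; this sketch only covers the deformable sampling itself.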
], [ "Related Works", "Spatiotemporal learning for action recognition.", "Early studies in this area focused primarily on employing a 3D CNN, which is an extension of a 2D CNN.", "This has become a central remedy in vision-based action recognition in recent years.", "PoseC3D [19] combines 3D volumetric heat maps from the skeletons and frames of the input video.", "SlowFast [21] makes a significant contribution to the field by providing a frame-fusion scheme between different frame rates.", "There are also related methods [22], [42], [9], [47], [44], [4], [20], [43], [24] that explore the use of a 3D CNN architecture for action recognition.", "STDA [24] applies a 3D deformable CNN that captures substantial intense regions for spatiotemporal learning.", "Over the last few years, focus has shifted toward skeleton-based action recognition with respect to the emergence of a GNN.", "ST-GCN [50] has become a baseline adopting separate spatial and temporal representation modules for spatiotemporal modeling.", "In addition, ViTs have attracted considerable attention owing to their superior performance in sequential tasks.", "STAR [1] applies cross-attention for the fusing of temporal correlations between spatial representations.", "ViViT [2] embeds an input video with a 3D tokenizer to compose the spatiotemporal features in a single token.", "Other studies [7], [1], [31] have adopted a temporal stride to capture the diversity between different time steps.", "However, the concept of a 3D deformation, despite its excellent performance, cannot be applied to the attention of ViTs owing to various structural constraints.", "Cross-modal learning for action recognition.", "Most current action recognition methods use various modalities with video frames and skeletons.", "Several methods [17], [6], [5], [16] employ a graph convolutional network (GCN) to handle a raw skeleton input and a CNN for the video frames.", "VPN [17] applies GCN subnetworks to support the CNN.", "The footage of a GCN networks is linearly combined with the CNN feature maps.", "MMNet [6] introduced a multimodal network with two GCN subnetworks and a CNN.", "Each subnetwork embeds the features separately, and these features are then summed at the end of the network.", "Other studies [14], [48], [3], [1], [19], [39] transformed graphical skeletons into the heat maps.", "PoseC3D [19] uses dual 3D CNN branches for video frames and 3D volumetric heat maps.", "STAR [1] proposed joint tokens generated by combining CNN feature maps with 2D joint heatmaps.", "To fuse the two modalities, they concatenated multiclass tokens by combining different modal tokens.", "Despite the improved performance of cross-modal learning, video frames and skeleton modalities are merely integrated, thus neglecting a careful design.", "We propose an effective feature fusion method called a cross-modal token.", "To exchange contextual information, each token is dispatched to another modality.", "Transformer with deformable attention.", "The idea of a 2D deformable CNN for the learning of deformable features was applied to the attention module of a ViT, achieving an excellent performance in various applications, including image classification.", "Deformable DETR [56] was applied to object detection and demonstrated its ability to accurately detect objects of various sizes.", "A deformable attention transformer (DAT) [46] with an improved numerical stability and a robust performance was recently proposed.", "In terms of action recognition, a 3D deformable CNN [24], [26] 
for spatiotemporal learning showed a better performance than a 2D deformable CNN but was not applied to a transformer owing to the structural constraints of an attention optimized for spatial feature embedding.", "Therefore, in this study, we propose a new 3D deformable transformer capable of fusing cross-modal features using cross-modal tokens.", "The proposed deformable method enables 3D deformable feature embedding based on spatial joint stride and temporal stride attention.", "The remainder of this paper is organized as follows.", "Section provides a detailed explanation of the proposed approach.", "Section provides an experimental analysis of several benchmarks as well as visual descriptions.", "Finally, Section provides some concluding remarks regarding this research." ], [ "Approach", "We propose a 3D deformable transformer for action recognition with adaptive spatiotemporal receptive fields and a cross-modal learning scheme.", "The overall architecture of the proposed model is shown in Fig.", "REF and is described in detail in the following sections." ], [ "Cross-modal learning", "In action recognition, cross-modal learning has been the mainstream, leveraging various modalities such as video frames and skeletons.", "Several successful studies [19], [6], [5], [17], [23], [16] have employed subnetworks that handle different domain features.", "However, these designs eventually increase the redundancy and complexity owing to the domain-specific subnetworks.", "We propose simple yet effective cross-modal learning for mutually exchanging contextual information.", "Our cross-modal learning method consists of a backbone [44], which provides intermediate feature maps and sequential tasks.", "When the image has a height $H$ , width $W$ , temporal dimension $T$ , and feature dimension $C$ , the backbone network feeds visual feature maps $\\mathrm {F}_a \\in \\mathbb {R}^{C \\times T \\times \\frac{H}{2} \\times \\frac{W}{2}}$ and $\\mathrm {F}_b \\in \\mathbb {R}^{4C \\times T \\times \\frac{H}{8} \\times \\frac{W}{8}}$ , extracted from intermediate layers.", "In the case of $\\mathrm {F}_b$ , we consider it as the RGB modality input for visual representation learning, whereas the local level feature map, $\\mathrm {F}_a$ , is regarded as the pose modality input by combining it with the skeletons.", "To fuse both modalities, we apply the following concepts: Pose modality.", "To design a cross-modal learning scheme with an alleviated redundancy, we propose visual feature-oriented pose tokens combined with joint heatmaps, such as [19], [1].", "First, the sequential skeletons are decomposed into single-joint units.", "Each joint is then recomposed into a joint heatmap $\\mathcal {H} \\in \\mathbb {R}^{T \\times R \\times \\frac{H}{2} \\times \\frac{W}{2}}$ by projecting the joints toward an empty voxel at the corresponding coordinates $(x_{t,r},y_{t,r})$ .", "Here, $R$ is the number of joints, and the spatial dimensions follow the feature map size of $\\mathrm {F}_a$ .", "Finally, pose tokens $\\mathrm {P}$ are generated with the joint-tokenizer shown in Fig.", "REF (a) as the multiplication of $\\mathrm {F}_a$ and the Gaussian blur output at time $t$ , as follows: $\\mathrm {P}_t = ||_r \\sum ^{\\frac{H}{2}}_{j} \\sum ^{\\frac{W}{2}}_{i} \\mathrm {F}_{a,t}(i,j)\\mathcal {H}_{t,r}(i,j)$ $\\mathcal {H}_{t,r}(i,j) = e^{-\\frac{(i-x_{t,r})^2+(j-y_{t,r})^2}{2\\sigma ^2}}$ where $\\mathrm {P} \\in \\mathbb {R}^{C \\times T \\times R}$ consists of $R$ pose tokens for every skeleton sequence with $C$ feature dimensions.", "$||$ 
indicates concatenation.", "To meet the feature dimensions with RGB modality, $\\mathrm {F}_b$ , linear projection is applied to pose tokens, resulting in $\\mathrm {P} \\in \\mathbb {R}^{4C \\times T \\times R}$ as shown in Fig.", "REF (b).", "RGB modality.", "With RGB modality, the extracted visual feature map $\\mathrm {F}_b$ is regarded as RGB tokens $\\mathrm {Z} \\in \\mathbb {R}^{4C \\times T \\times \\frac{H}{8} \\times \\frac{W}{8}}$ and is fused with the position, as shown in Fig.", "REF (c).", "Figure: Illustration of proposed 3D deformable attention for adaptive spatiotemporal learning.", "The input RGB token 𝐗\\mathbf {X} is embedded as a query token to find an offset in the 3D token search (3DTS) module.", "The deformable token is multiplied with key (𝐖 k \\mathbf {W}_k) and value (𝐖 v \\textbf {W}_v) weights.", "It is then fed to the multi-head self-attention (MSA) to interact with the deformed key and value tokens.", "Using 3DTS, an offset vector from a 3D conv block finds the deformed tokens by moving the reference points scattered on the input RGB tokens Z\\mathrm {Z}." ], [ "3D deformable transformer", "Cross-modal tokens.", "An intuitive method is to concatenate all tokens from both modalities, considering the characteristics of each token, and then combining information through the transformer stacks.", "However, to combine different modalities in a single transformer, a deliberate design is required, and the modalities must be cooperative and complementary.", "Similarly, in [1], the authors employ multi-class tokens for cross-modal learning.", "Despite a simple yet effective approach, on par with other transformers, it is aimed only at an information fusion for all tokens without considering the intrinsic properties and complementarities of various modalities.", "Therefore, we propose a cross-modal token that effectively combines the different modalities within the transformer.", "The cross-modal token $\\mathbf {M} \\in \\mathbb {R}^{4C \\times T \\times 3}$ is a set of three trainable tokens: CLS, RGB and pose modal tokens.", "In previous studies [18], [41], CLS token were used as the final embedding fusing information by interacting with other tokens.", "We consider the CLS token $\\mathrm {M}_{CLS} \\in \\mathbb {R}^{4C \\times T \\times 1}$ as a `modality-head' compiling the remaining two modal tokens, which are dispatched to mutual modalities to trade their domain knowledge.", "The first $\\mathrm {M}_{RGB}$ and $\\mathrm {M}_{CLS}$ tokens are fed to the 3D deformable attention, as shown in Fig.", "REF (d).", "Then, the output RGB and CLS modal tokens, $\\mathrm {M}_{RGB}$ and $\\mathrm {M}_{CLS}$ of a 3D deformable attention (Fig.", "REF (e)), reflect information from their own domains through separated transformer blocks cooperating with the dispatched CLS tokens, as shown in Fig.", "REF (f).", "Hereafter, we introduce the 3D deformable attention shown in Fig.", "REF (e), which is the core of the proposed transformer.", "3D deformable attention.", "Although transformers have recently become a new standard in vision tasks, relatively few studies have been conducted on action recognition tasks.", "Because the nature of a transformer considers long-term relations between the input tokens, it may lead to an exponentially increasing computational complexity with the time steps.", "In addition, to solve the problem of static transformers, a DAT [46] that flexibly selects the key and value positions in a self-attention has been proposed; however, it is unsuitable for 
an action recognition that has to deal with cross-modalities and spatiotemporal features.", "To alleviate the complexity while maintaining the nature of the transformer, inspired by [46], we propose the use of 3D deformable attention for action recognition, as shown in Fig.", "REF (e).", "3D deformable attention can adaptively capture spatiotemporal features on the RGB modality.", "The 3D deformable attention module consists of a 3D token search (3DTS) and multi-head self-attention (MSA) with a feed-forward network (FFN), as shown in Fig.", "REF .", "First, the input of the module, RGB token $\\mathrm {Z}$ , is embedded to only query tokens with $\\mathbf {W}_q$ , and is then fed to the 3DTS, which contains a two-layered Conv3D with kernel $k$ .", "After the first Conv3D, layer normalization (LN) and GELU non-linearity are applied.", "The last Conv3D generates offsets that contain flow fields against the reference points.", "The reference points are defined as being regularly scattered within a 3D space.", "The offsets guide the reference points to find discriminative token coordinates in the spatiotemporal tokens $\\mathrm {Z}$ .", "3D deformable tokens $\\tilde{\\mathrm {Z}}$ are configured by selecting the tokens from the adjusted coordinates taken from the offsets.", "$\\tilde{\\mathrm {Z}} = \\mathrm {3DTS}(\\mathrm {Z}\\mathbf {W}_q, \\mathrm {Z} ; \\omega )$ where $\\mathrm {Z} \\in \\mathbb {R}^{4C \\times T \\times \\frac{H}{8} \\times \\frac{W}{8}}$ and $\\tilde{\\mathrm {Z}} \\in \\mathbb {R}^{4C \\times \\tilde{T} \\times \\tilde{H} \\times \\tilde{W}}$ are the input and selected RGB tokens respectively.", "The size of $\\tilde{T}$ , $\\tilde{H}$ and $\\tilde{W}$ are determined based on the kernel size $k$ .", "In our case, we set the $k$ as 7 without padding for sparsely extracting deformable tokens and increasing the efficiency.", "In addition, $\\mathbf {W}\\in \\mathbb {R}^{4C \\times 4C}$ and $\\omega $ are trainable weight and model parameters for the 3DTS module, respectively.", "It should be noted that while query tokens are composed in the same manner as the transformer, the key and value tokens are composed of selected tokens from the 3DTS.", "These tokens are then embedded into the key and value tokens using $\\mathbf {W}_k$ and $\\mathbf {W}_v$ , respectively.", "Herein, we aim to make the $\\mathrm {M}_{RGB}$ token faithfully learn the RGB modality features, and $\\mathrm {M}_{CLS}$ trades domain knowledge between the RGB and pose modalities.", "To fuse cross-modal tokens with the RGB modality, three tokens, $\\mathrm {M}_{RGB}$ , $\\mathrm {M}_{CLS}$ and spatiotemporal feature tokens $\\mathrm {Z}$ , are concatenated to token $\\mathbf {X}$ .", "$\\mathbf {X} = [\\mathrm {Z}||\\mathrm {M}_{RGB}||\\mathrm {M}_{CLS}]$ where $\\mathrm {M}_{RGB}$ and $\\mathrm {M}_{CLS}$ are obtained from a portion of the proposed cross-modal tokens representing the RGB modality and modality head, respectively.", "Similarly, the selected deformable tokens $\\tilde{\\mathrm {Z}}$ are coupled with two cross-modal tokens to produce $\\tilde{\\mathbf {X}}$ $\\tilde{\\mathbf {X}} = [\\tilde{\\mathrm {Z}}||\\mathrm {M}_{RGB}||\\mathrm {M}_{CLS}]$ Table: Accuracy comparisons with state-of-the-art approaches on NTU60 and NTU120.", "P and R denote pose and RGB modalities, respectively.", "†\\dag indicates estimated pose.Then, $\\mathbf {X}$ is multiplied with query weight $\\mathbf {W}_q$ and, $\\tilde{\\mathbf {X}}$ is multiplied with key and value weights, $\\mathbf {W}_k$ and $\\mathbf 
{W}_v$ , respectively.", "Those recomposed tokens are fed into multi-head self-attention as a query, key and value for each.", "$\\mathbf {X} = \\mathbf {X} + \\mathrm {MSA}(\\mathbf {X\\mathbf {W}}_q, \\mathbf {\\tilde{X}\\mathbf {W}}_k, \\mathbf {\\tilde{X}\\mathbf {W}}_v)$ The output $\\mathbf {X}$ of 3D deformable attention is finally obtained by applying a LN and FFN.", "$\\mathbf {X} = \\mathbf {X} + \\mathrm {FFN}(\\mathrm {LN(\\mathbf {X})})$ We visualized the attention scores of the selected tokens from the proposed 3D deformable attention shown in Fig.", "REF .", "As indicated in Fig.", "REF , our proposed 3DTS faithfully identifies the fundamental intense regions with adaptive receptive fields against entire sequences.", "Figure: Proposed stride attention modules.", "(a) Joint stride attention, where a series of joint tokens are grouped into query, key, and value including all temporal dimensions with a stride window.", "(b) Temporal stride attention where tokens are bundled with temporal stride windows to fuse the changes with the timesteps.Local joint stride attention.", "In action recognition, there are often multiple people appearing in a scene; therefore, the number of joint tokens increases with the number of people.", "To reduce the computational complexity, we concatenate the joints of multiple people into a series of joint tokens, as depicted in Fig.", "REF (b).", "Although this approach is an efficient way to process multiple people in the same scene simultaneously without significantly increasing the complexity, it still results in a problem in that the size of the joint token increases exponentially as the number of people increases.", "To avoid this problem, we configure the query, key, and value tokens using a sliding window on the joint tokens, as shown in Fig.", "REF (a).", "All tokens in each sliding window are flattened and then concatenated with $\\mathrm {M}_{pose}$ and $\\mathrm {M}_{CLS}^{^{\\prime }}$ dispatched from a 3D deformable attention (Fig.", "REF (f)) to apply a scaled-dot product.", "This is more efficient than calculating all tokens at once and maintaining the relations with each other.", "The output of the joint stride attention is the pose token $\\mathbf {O}$ .", "The calculated RGB tokens $\\mathbf {X}$ and pose tokens $\\mathbf {O}$ are fed to the temporal stride attention module, as shown in Fig.", "REF (h).", "Before this step, to fuse contextual information from each modality, $\\mathrm {M}_{CLS}^{^{\\prime }}$ memorized from the 3D deformable attention and $\\mathrm {M}_{CLS}^*$ calculated from the joint stride attention are projected together into a new single $\\mathrm {M}_{CLS}$ as shown in Fig.", "REF (g).", "Subsequently, the temporal stride attention module learns the correlations between temporal changes against tokens concatenated with cross-modal tokens, RGB tokens $\\mathbf {X}$ and pose tokens $\\mathbf {O}$ , as shown in Fig.", "REF (h).", "Temporal stride attention.", "Several limitations of an attention module exist when the transformer handles the input tokens.", "In general, the attention module covers all input tokens with scaled-dot products.", "Thus, the complexity of the attention module is highly dependent on the number of input tokens.", "In the case of sequential data, this problem is more serious because the input tokens grow with the sizes of the temporal dimensions.", "Ahn [1] divided temporal dimensions into two groups that contain regularly interleaved tokens.", "Despite the halved temporal dimensions, the 
complexity was only slightly reduced, and the temporal correlations of the neighborhood were decoupled.", "Unlike Ahn [1], we propose a temporal stride with mitigated complexity and an enhanced temporal correlation for cross-attention.", "When building the input query, key, and value tokens, the temporal dimension is split into regularly strided, overlapping windows to couple various sequential relationships with reduced complexity.", "As shown in Fig.", "REF (b), we first set a local time window for a given stride.", "This window traverses all tokens and specifies the query, key, and value tokens.", "It not only reduces the number of input tokens of the attention module but also supports temporal representation learning without using all tokens at once.", "All of the deformable transformer blocks described above are repeated $L$ times, as shown in Fig.", "REF .", "To produce the final logits, we concatenate only the cross-modal tokens along the channel dimension and then feed them to the classification head, as depicted in Fig.", "REF (i)." ], [ "Experiments", "Datasets.", "We conducted experiments using several representative benchmark datasets: FineGYM [36], NTU60 [35], NTU120 [27], and Penn Action [53].", "FineGYM contains 29K videos with 99 fine-grained action labels collected from gymnastics video clips.", "NTU60 and NTU120 are representative multimodal datasets that are used for human action recognition.", "NTU60 consists of 57K videos of 60 action labels collected in a controlled environment.", "NTU120, which is a superset of NTU60, contains 114K videos of 120 action labels.", "The NTU datasets use three types of validation protocols following the action subjects and camera settings, i.e., cross-subject (XSub), cross-view (XView), and cross-setup (XSet).", "In addition, we validated our proposed model with a smaller dataset, Penn Action, which contains 2K videos for 15 action labels.", "Settings.", "We adopted an AdamW optimizer for 90 epochs with a cosine scheduler and a 5-epoch warm-up.", "A batch consisted of randomly cropped videos with a pixel resolution of $224\\times 224$ for training and center-cropped videos for testing.", "Training and testing were conducted using four NVIDIA Tesla V100 32GB GPUs with APEX." ], [ "Comparison with state-of-the-art approaches", "NTU60-XSub & -XView.", "In Table REF , we compare the top-1 accuracy with various SoTA action recognition methods for the NTU60 dataset.", "Compared with the best result of PoseC3D [19], the proposed method shows a slightly degraded performance for both protocols because our model is trained without pre-training, unlike PoseC3D.", "However, when compared with the TSMF and STAR models trained under fair conditions, our model showed a 0.5%-2.5% higher performance, respectively.", "Among the GNN-based methods, InfoGCN ranks remarkably high even when it is trained with a single modality; nonetheless, our model shows a higher performance of 94.3% and 97.9%, respectively.", "NTU120-XSub & -XSet.", "We provide benchmark results against the NTU120 dataset, which has double the number of videos and action labels compared to NTU60.", "As shown in Table REF , the GNN-based SoTA method, InfoGCN, achieved accuracies of 89.8% and 91.2% on the two protocols.", "The multimodal method PoseC3D achieved accuracies of 95.3% and 96.4% with pre-training.", "Unlike PoseC3D, the scratch-trained multimodal method, STAR, showed accuracies of 90.3% and 92.7%.", "Finally, our proposed method showed a promising performance under similar conditions, i.e., multimodal and scratch training, with accuracies of 90.5% and 91.4%, respectively.", "FineGYM.", "The FineGYM dataset is harsher than the other datasets because its clips contain dynamic motions and camera movements from sports games.", "This dataset has rarely been used for multimodal action recognition because it does not contain ground-truth skeletons.", "We used the estimated skeletons from HRNet [38] to apply it to cross-modal learning.", "Table REF shows the results of the comparison experiments with other SoTA methods for FineGYM.", "In the case of a single modality, ST-GCN showed relatively low accuracy because it used the estimated skeletons, as does our approach.", "TQN achieved 90.6% with the RGB single modality.", "Cross-modality-based methods, including PoseC3D and our approach, showed a relatively high performance compared to other methods.", "Pretrained PoseC3D demonstrated an SoTA performance of 95.6%; however, our method showed promising accuracy despite not applying a pretraining step.", "Penn Action.", "We also validated our method on a smaller dataset with clearly recorded videos, as shown in Table REF .", "In the GNN regime using the single-pose modality, Pr-VIPE and UNIK achieved a high performance of 97.5% and 97.9%, respectively.", "Among cross-modality-based methods, Multitask CNN and STAR show the best performances among the CNN and transformer approaches at 98.6% and 98.7%, respectively.", "Our proposed method outperforms the above SoTA methods by a margin of 0.3%-5.6%.", "This result indicates that the proposed method can achieve a good performance regardless of whether small or large datasets are applied.", "Table: Accuracy comparisons with other SoTA approaches on FineGYM. Table: Accuracy comparisons with other SoTA approaches on PennAction. Figure: (a) Ablation study with different strides on various window (wnd) sizes on PennAction.", "(b) Ablation study with different numbers of input frames during the training phase on PennAction. Figure: Ablation study with different numbers of frames during the test phase against a model trained with 12 frames. Figure: Visualized 3D deformable attentions.", "The proposed 3D deformable attention found discriminative tokens with 
], [ "Comparison with state-of-the-art approaches", "NTU60-XSub & -XView.", "In Table REF , we compare the top-1 accuracy with various SoTA action recognition methods for the NTU60 dataset.", "Compared with the best result of PoseC3D [19], the proposed method shows a slightly degraded performance for both protocols because our model is trained without a pre-training, unlike PoseC3D.", "However, when compared with the TSMF and STAR models trained under fair conditions, our model showed a 0.5% - 2.5% higher performance, respectively.", "Among the GNN-based methods, InfoGCN ranks remarkably even when it is trained with a single modality; nonetheless, our model shows higher performance of 94.3% and 97.9%, respectively.", "NTU120-XSub & -XSet.", "We provide benchmark results against the NTU120 dataset, which has double the number of videos and action labels compared to NTU60.", "With this result in Table REF , the GNN-based SoTA method, InfoGCN, achieved accuracy of 89.8% and 91.2% for both protocols.", "In the case of the multimodal trained method, PoseC3D, it achieved accuracy of 95.3% and 96.4% with a pre-training.", "Unlike PoseC3D, the scratch-trained multimodal method, STAR, showed accuracy of 90.3% and 92.7%.", "Finally, our proposed method showed a promising performance under similar conditions, , multimodal and scratch training, with accuracy of 90.5% and 91.4%, respectively.", "FineGYM.", "In the case of the FineGYM dataset, which is harsher than other datasets because clips have dynamic motions and camera movements from sports games.", "This dataset has rarely been used for multimodal action recognition, because it does not contain ground-truth skeletons.", "We used the estimated skeletons from HRNet [38] to apply it to cross-modal learning.", "Table REF shows the results of the comparison experiments with other SoTA methods for the FineGYM.", "In the case of a single modality, ST-GCN showed relatively low accuracy because it used the estimated skeletons as our approach.", "The TQN achieved 90.6% with the RGB single modality.", "Cross-modality-based methods, including PoseC3D and our approach, showed a relatively high performance compared to other methods.", "Pretrained PoseC3D demonstrated an SoTA performance of 95.6%; however, our method showed promising accuracy despite not applying a pretraining step.", "Penn Action.", "We validated our method by applying smaller datasets with clearly recorded videos as shown in Table REF .", "In the GNN regime using the single-pose modality, Pr-VIPE and UNIK achieved a high performance of 97.5% and 97.9%, respectively.", "Among cross-modality-based methods, Multitask CNN and STAR show the best performances among the CNN and transformer approaches at 98.6% and 98.7%, respectively.", "Our proposed method outperforms above SoTAs by a large margin of 0.3%–5.6%.", "This result indicates that the proposed method can achieve a good performance regardless of whether small or large datasets are applied.", "Table: Accuracy comparisons with other SoTA approaches on FineGYM.Table: Accuracy comparisons with other SoTA approaches on PennAction.Figure: (a) Ablation study with different stride on various window (wnd) sizes on PennAction.", "(b) Ablation study with different numbers of input frames during the training phase on PennAction.Figure: Ablation study with different numbers of frames during test phase against model trained with 12 frames.Figure: Visualized 3D deformable attentions.", "The proposed 3D deformable attention found discriminative tokens with 
strong attention across entire frame sequences.", "In particular, strong attention is found only at noticeable changes in the action.Figure: Visualized joint stride attention.", "The proposed 3D deformable attention activates the attention score of each joint differently depending on the magnitude of the “pitching” action in each frame (shown in color)." ], [ "Ablation studies", "Temporal stride.", "To address the complexity of an attention module, which grows with the number of tokens, we propose a local window cross attention using a temporal stride.", "In this experiment, to evaluate the validity of the proposed approach, we observe the changes in performance obtained by applying various strides to a fixed-size window on the PennAction dataset.", "As shown in Fig. REF (a), the best performance is obtained when the stride is about half the size of the window, regardless of the window size.", "This is because the overlap between temporal tokens preserves temporal correlations while keeping the computation efficient.", "Therefore, the proposed method faithfully maintains the correlations even when tokens are divided into local windows.", "Number of input frames.", "One of the important factors to determine when learning from sequential data is the number of input frames.", "If the number of frames is sufficiently large, better feature representations can be learned, although the computational cost increases significantly.", "We therefore empirically determined the optimal number of input frames using the PennAction dataset, as illustrated in Fig. REF (b).", "In our model, the best performance was achieved using 12 input frames.", "When the number of frames is smaller than 12, the performance drops slightly, and there is a limit to learning the continuity of the actions.", "On-the-fly frames in the test phase.", "The proposed method showed good performance on several benchmarks using the suggested modules for capturing temporal changes.", "We provide evaluation results obtained by varying the number of input frames for a model trained with 12 frames.", "The results verify the spatiotemporal feature learning capability of the proposed method.", "In Fig. REF , the proposed method shows uniform performance for various numbers of input frames.", "Therefore, the proposed method is robust in learning spatiotemporal features, even when the test frames are sparse." 
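As an illustration of how such a strided local window could be realized in practice, the following minimal PyTorch-style sketch groups the key/value tokens of a sequence into overlapping temporal windows (window size 4, stride 2, i.e., half the window) before applying cross attention. All class and parameter names, the unfold-based grouping, and the averaging of overlapping windows are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LocalWindowCrossAttention(nn.Module):
    """Illustrative sketch of cross attention restricted to overlapping temporal windows."""
    def __init__(self, dim, num_heads=4, wnd=4, stride=2):
        super().__init__()
        self.wnd, self.stride = wnd, stride
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, q_tokens, kv_tokens):
        # q_tokens, kv_tokens: (batch, time, dim)
        B, T, D = kv_tokens.shape
        # Group key/value tokens into overlapping windows: (B, n_windows, wnd, D).
        kv_win = kv_tokens.unfold(1, self.wnd, self.stride).permute(0, 1, 3, 2)
        outputs = []
        for w in range(kv_win.shape[1]):
            # Each window attends to only wnd tokens, so the cost per window is
            # O(T * wnd) instead of the O(T^2) of full cross attention.
            out, _ = self.attn(q_tokens, kv_win[:, w], kv_win[:, w])
            outputs.append(out)
        # Merge the overlapping windows by simple averaging (one of several possible choices).
        return torch.stack(outputs, dim=1).mean(dim=1)

# Example: 12 frames of 256-dimensional tokens, window 4, stride 2 (half the window size).
x = torch.randn(2, 12, 256)
print(LocalWindowCrossAttention(256, wnd=4, stride=2)(x, x).shape)  # torch.Size([2, 12, 256])
```

With a stride of half the window, every token appears in two windows, which is one simple way to keep correlations across window boundaries while avoiding full attention over all tokens.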
], [ "Qualitative analysis", "Visualization of 3D deformable attention.", "We propose a 3D deformable attention mechanism supporting an adaptive receptive field for action recognition.", "To demonstrate its effectiveness, we provide qualitative evaluations that visualize the attention values of selected tokens for different video sequences.", "As shown in Fig. REF , the proposed method finds salient regions through a 3D token search over the entire sequence.", "Activation is relatively low in static scenes, whereas high activation occurs at large transitions within the sequence.", "In particular, activations appear finely localized on the joints involved in the actual action rather than spread coarsely.", "This confirms that the proposed method not only finds the tokens in the entire sequence that are actually required for recognition but also assigns strong activation to those tokens.", "Visualization of joint stride attention.", "To make the proposed model independent of the number of input joints, we propose a local joint stride attention that groups joint tokens with a sliding window.", "This approach improves the efficiency of attention, but we need to verify whether the overlapping token groups still contribute to the attention module without full attention.", "In this experiment, we charted the attention level of each joint across the frames of the sequence, as shown in Fig. REF .", "In practical terms, the joints that move the most when the ball is pitched in the sample video are the head, right hand, and right leg.", "The chart shows that the attention was largely activated according to this action flow.", "From this experiment, it is clear that the proposed local joint attention maintains the correlations across all joint tokens while retaining efficiency." ], [ "Conclusion", "ViTs have become mainstream in various vision tasks, achieving overwhelming performance; however, they have been used relatively little in action recognition.", "Therefore, we first proposed a 3D deformable attention consisting of strided window cross attention for better spatiotemporal feature learning, as well as a cross-modal framework for action recognition.", "The proposed method achieved SoTA performance on representative action recognition datasets.", "Based on the results of the qualitative experiments, we can confirm that our proposed method has a strong spatiotemporal feature learning capability for action recognition." ] ]
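For readers who want a concrete picture of the mechanism whose attention maps are visualized above, the sketch below shows one way a 3D deformable attention could be written: sampling offsets and weights are predicted from each query, and features are gathered from the (time, height, width) volume by trilinear interpolation. The class, tensor shapes, number of sampling points, and offset scaling are hypothetical choices for exposition only, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Deformable3DAttention(nn.Module):
    """Sketch of deformable sampling over a (time, height, width) feature volume."""
    def __init__(self, dim, num_points=4):
        super().__init__()
        self.num_points = num_points
        self.offset_head = nn.Linear(dim, 3 * num_points)   # per-query (t, y, x) offsets
        self.weight_head = nn.Linear(dim, num_points)        # attention weight per sample

    def forward(self, queries, ref_points, feat_volume):
        # queries: (B, Q, C); ref_points: (B, Q, 3) in [-1, 1]; feat_volume: (B, C, T, H, W)
        B, Q, C = queries.shape
        offsets = self.offset_head(queries).view(B, Q, self.num_points, 3).tanh() * 0.1
        weights = self.weight_head(queries).softmax(dim=-1)               # (B, Q, P)
        # Sampling locations around each reference point, kept inside [-1, 1].
        loc = (ref_points.unsqueeze(2) + offsets).clamp(-1, 1)            # (B, Q, P, 3)
        # grid_sample expects a grid shaped (B, D_out, H_out, W_out, 3) in (x, y, z) order.
        grid = loc.flip(-1).view(B, Q, self.num_points, 1, 3)
        sampled = F.grid_sample(feat_volume, grid, align_corners=True)    # (B, C, Q, P, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 3, 1)                 # (B, Q, P, C)
        # Weighted sum of the sampled features: the "adaptive receptive field".
        return (weights.unsqueeze(-1) * sampled).sum(dim=2)               # (B, Q, C)

feat = torch.randn(2, 64, 12, 8, 8)                    # 12 frames of 8x8 feature maps
q = torch.randn(2, 10, 64)
ref = torch.zeros(2, 10, 3)
print(Deformable3DAttention(64)(q, ref, feat).shape)   # torch.Size([2, 10, 64])
```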
2212.05638
[ [ "A field redefinition invariant Lagrange multiplier formalism" ], [ "Abstract In this paper, we propose a field redefinition invariant Lagrange multiplier (LM) formalism in which new ghost-like fields, analogous to Lee-Yang ghosts, are introduced.", "These ghost fields are required to restore the field redefinition invariance of the standard path integral of the LM theory and, at the same time, to cancel the additional contributions due to the LM fields.", "We argue that the extra degrees of freedom in the standard LM formalism, coming from the LM fields, should cancel against the degrees of freedom of the ghost fields.", "Hence, in the field redefinition invariant formalism the doubling of degrees of freedom, associated with the LM fields, is absent." ], [ "Introduction", "In recent years, there has been increasing interest in the study of higher-derivative theories, especially in the context of gravity, mainly due to the application of such models in cosmology (for a review we refer to Refs. [1]).", "Other examples of higher-derivative theories include noncommutative theories [4] and the Pais-Uhlenbeck oscillator [6].", "The higher-derivative gravity models were conceived, a long time ago, to obtain theories that could tame the ultraviolet divergences that arise in the quantization of the Einstein-Hilbert action [8], [11].", "Based on this idea, Stelle [13] obtained a renormalizable quantum gravity theory by introducing quadratic terms in the Einstein-Hilbert action.", "Recently, a simpler alternative has been proposed in a series of papers [14], [15], [16], [17], [18], [19]; it restricts quantum effects to one-loop order and has General Relativity as the classical limit.", "This is done by using Lagrange multiplier (LM) fields to restrict the quantum path integral to field configurations that satisfy the Euler-Lagrange equations.", "The standard LM theory, as we shall call it, yields twice the usual one-loop contributions, while the tree-level effects are kept unaltered.", "The degrees of freedom are also doubled.", "Although it is clear that the doubling of one-loop contributions and degrees of freedom is caused by the presence of the LM fields, its interpretation is still open.", "On the other hand, these extra degrees of freedom appear to be associated with the propagation of ghosts (the kinetic term of the LM fields has the wrong sign), so that the additional one-loop contributions in the standard LM theory may be considered unphysical [14].", "This is also a general downside of non-degenerate higher-derivative theories, which include Stelle's gravity mentioned before; they are plagued by Ostrogradsky ghosts [20], [22], [23].", "The presence of ghosts could break unitarity due to negative norm states [25], [26], although one can argue that ghosts can be harmless in some cases [28], [29], [31], [34].", "In particular, the unitarity of the standard LM theory has been studied briefly in Ref. [18] through indefinite metric quantization [35].", "In this paper, we propose a modification of the standard LM theory to circumvent these issues in a simpler manner.", "To this end, we notice that the path integral of the standard LM theory lacks field redefinition invariance, which is an expected property of any quantum path integral [37], [39].", "It is the field redefinition invariance that guarantees, for instance, the validity of the equivalence theorem of the $S$ -matrix within the framework of path integral quantization 
[40], [42].", "Hence, we introduce a determinant factor in the measure of the path integral of the standard LM theory to restore this invariance.", "We shall refer to this field redefinition invariant LM formalism as the modified LM theory.", "We show that, by choosing an appropriate determinant factor to obtain a field redefinition invariant LM theory, the doubling of one-loop contributions is absent.", "Upon exponentiating this determinant factor, ghost-like fields arise.", "These ghost fields are responsible for the cancellation of the additional one-loop contributions due to the LM fields at the perturbative level.", "However, the main feature of the standard LM theory, namely the restriction of the loop expansion to one-loop order, is not altered by their presence.", "A similar modification of the measure of the path integral is required to retain general covariance in the worldline formalism, which leads to the introduction of the Lee-Yang ghost fields [43].", "In this regard, the ghost fields of the modified LM theory are analogous to Lee-Yang ghost fields.", "We also argue that the LM fields' degrees of freedom cancel against the degrees of freedom of the ghost fields of the modified LM theory, which is similar to the cancellation of the unphysical degrees of freedom of gauge fields by Faddeev-Popov ghosts in gauge theories [25], [45].", "To rigorously derive this result, it is necessary to investigate the classical Hamiltonian dynamics of the modified LM theory.", "This is outside the scope of this paper, which aims to study the quantum LM theory; nevertheless, we briefly discuss the possible nature of the classical modified LM theory at the end of this paper.", "This paper is organized as follows.", "We review the standard LM theory in Section .", "In Section , we investigate the field redefinition invariance of the LM theory, concluding that the path integral of the standard LM theory is not invariant under it.", "Then, we propose a field redefinition invariant LM theory.", "In Section , the modified LM theory is studied in more detail.", "We show that the doubling of the one-loop contribution (and degrees of freedom) is absent.", "In Section , we present a discussion of our proposal and related topics such as unitarity.", "In Appendix , we extend the modified LM theory to fermionic systems and suggest a general formulation for systems with both bosonic and fermionic fields.", "In Appendix , we provide a diagrammatic analysis to illustrate some of the perturbative properties of the LM theory.", "Finally, in Appendix , we make explicit some of the symmetries that the modified LM theory possesses." 
], [ "Standard LM theory", "In this section, we briefly review the standard LM theory [15], [16], [17].", "Consider the action $S[\\phi ] = \\int \\mathop {d x} \\mathcal {L} ( \\phi ),$ where $ \\mathcal {L} ( \\phi ) $ is the Lagrangian of a set of bosonic fields $ \\phi _{i} $ (represented simply by $ \\phi $ ).", "The path integral quantization procedure yieldsThe Lagrangian in Eq.", "(REF ) is not singular, it is also assumed that the system is not constrained.", "$Z[j] = \\int \\mathop {\\mathcal {D} \\phi } \\exp \\frac{i}{\\hbar } \\int \\mathop {dx} \\left( \\mathcal {L} ( \\phi ) + j \\phi \\right), \\quad (j \\phi \\equiv j_i \\phi _i).$ The standard LM theory is obtained using a Lagrange multiplier field $\\lambda $ to restrict the path integral in Eq.", "(REF ) to field configurations of $\\phi $ that satisfy the classical equations of motion.", "The path integral quantization of the action of Eq.", "(REF ) in the framework of the LM theory becomes [16] $Z_{\\text{LM}} [0] = \\int \\mathop {\\mathcal {D} \\phi }\\mathop {\\mathcal {D} \\lambda }\\exp \\frac{i}{\\hbar } \\int \\mathop {dx} \\left( \\mathcal {L}(\\phi ) + \\lambda \\frac{\\delta S[\\phi ]}{\\delta \\phi }\\right).$ The functional integral over $\\lambda $ in Eq.", "(REF ) results in the functional $\\delta $ -function, thus $Z_{\\text{LM}} [0] =\\int \\mathop {\\mathcal {D} \\phi } \\delta \\left( \\frac{\\delta S [\\phi ]}{\\delta \\phi } \\right)\\exp \\frac{i}{\\hbar } S [ \\phi ].$ Using the functional analog of $\\int \\mathop {dx} \\delta (g(x)) f(x) = \\sum _{\\bar{x}} |g^\\prime (\\bar{x})|^{-1} f(\\bar{x}),$ reduces Eq.", "(REF ) to $Z_{\\text{LM}} [0]= \\sum _{\\bar{\\phi }(x)} \\det \\left( \\mathcal {L}^{\\prime \\prime }(\\bar{\\phi })\\right)^{-1}\\exp \\frac{i}{\\hbar } \\int \\mathop {dx} \\mathcal {L}(\\bar{\\phi }).$ In Eq.", "(REF ), $\\bar{x}$ is a solution to $g(\\bar{x}) = 0$ while $\\bar{\\phi }(x)$ , in Eq.", "(REF ), satisfies $\\left.\\frac{\\delta S[ \\phi ]}{\\delta \\phi } \\right|_{\\phi = \\bar{\\phi }(x)} =0.$ The exponential in Eq.", "(REF ) leads to tree diagrams while the determinant yields one-loop contributions, which are twice the contributions obtained with the path integral in Eq.", "(REF ).", "We can see it quantitatively as follows.", "Comparing the one-loop approximation for the generating functional in Eq.", "(REF ), $Z|_{\\text{1loop}} = \\det \\left( \\mathcal {L}^{\\prime \\prime }({\\phi })\\right)^{-1/2},$ with the exact form obtained in the standard LM theory (in Eq.", "(REF )) we see that the LM theory results in the square of determinant in Eq.", "(REF ).", "Now, using the connected generating functional $ W[j] = -i \\hbar \\ln Z[j] $ , we have that (sources are omitted) $\\begin{split}W|_{\\text{1loop}} &= - i \\hbar \\ln Z|_{\\text{1loop}}\\\\ &= \\frac{i \\hbar }{2} \\mathop {\\rm Tr} \\ln {\\mathcal {L}}_{\\text{}}^{\\prime \\prime } ( {\\phi } ) = \\frac{1}{2} W_{\\text{LM} } |_{\\text{1loop}},\\end{split}$ where $ W_{\\text{LM}} = -i \\hbar \\ln Z_{\\text{LM}} [0] $ .", "Hence, $W_{\\text{LM}}|_{\\text{1loop}} = 2 W|_{\\text{1loop}},$ which shows that in the standard LM theory the one-loop contributions are doubled.", "The factor of 2 in Eq.", "(REF ) comes from extra contributions due to the LM field $ \\lambda $ in Eq.", "(REF ).", "In Appendix , we provide a diagrammatic analysis of the Standard LM theory.", "It is verified that the perturbative expansion is restricted to one-loop order and the one-loop contribution doubled." 
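A minimal finite-dimensional illustration of this doubling (our own toy example, not taken from the original works): for a single quadratic mode $ S(\phi ) = \frac{1}{2} a \phi ^{2} $ with $ a>0 $ , whose only stationary point is $ \bar{\phi }=0 $ , the one-loop approximation reduces to $ Z|_{\text{1loop}} = \det (\mathcal {L}^{\prime \prime })^{-1/2} = a^{-1/2} $ , while the exact LM result reduces to $ Z_{\text{LM}}[0] = \det (\mathcal {L}^{\prime \prime })^{-1} = a^{-1} $ ; hence $ W|_{\text{1loop}} = \frac{i\hbar }{2} \ln a $ and $ W_{\text{LM}}|_{\text{1loop}} = i\hbar \ln a = 2\, W|_{\text{1loop}} $ , reproducing the factor of 2 quoted above.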
], [ "Field redefinitions in quantum path integrals", "We start this section by reviewing the invariance of the quantum path integral under field redefinitions.", "Later we examine the behavior of the generating functional of the standard LM theory under field redefinitions.", "One can redefine the field $ \phi $ so that the generating functional in Eq.", "(REF ) remains invariant.", "Under the local redefinition of the fields $\phi \rightarrow \phi ^{\prime } = F[ \phi ],$ where $ F [ \phi ] $ is an invertible functional, the generating functional in Eq.", "(REF ) reads $Z[j] = \int \mathop {\mathcal {D} \phi ^{\prime }} \det \left( \frac{\delta \phi }{\delta \phi ^{\prime }}\right) \exp \frac{i}{\hbar } \int \mathop {dx} \left( \mathcal {L} ( F[\phi ] ) + j F[\phi ] \right),$ where $\det \left( \frac{\delta F \left[ \phi \right] }{\delta \phi }\right)^{-1} = \det \frac{\delta \phi }{\delta \phi ^{\prime }}\equiv \det {J}$ is the Jacobian determinant of the redefinition (REF ).", "Thus, the quantum path integral is invariant under field redefinitions as long as its measure is supplemented by the corresponding Jacobian determinant and the source term is properly altered [39].", "For non-linear field redefinitions, it may also be necessary to introduce an extra term in the action [46], which is not relevant to this paper.", "Note that in this paper we only consider quantum path integrals.", "Thus, from now on, we use “path integral” to refer to the quantum path integral.", "The classical path integral, first introduced in [47], [48] as the functional formulation of the Koopman-von Neumann theory [50] (the operational version of classical mechanics; for a review, see Ref. [49]), does not behave in the same manner under field redefinitions.", "For the classical path integral the Jacobian determinant is absent.", "In the next section, we see that the behavior of the path integral of the standard LM theory under field redefinitions differs from both the classical and the quantum one.", "Hence, to restore the quantum behavior, we propose to introduce a new term in the measure of the path integral of the LM theory." 
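The invariance property reviewed at the beginning of this section has an elementary finite-dimensional analogue, namely the change-of-variables formula. The short numerical check below (with an arbitrary integrand and an arbitrary invertible redefinition of our own choosing, written in Euclidean form for convergence) illustrates that the integral is unchanged only when the Jacobian factor accompanies the measure.

```python
import numpy as np
from scipy.integrate import quad

# Toy analogue of the field redefinition: one ordinary integration variable.
f = lambda x: np.exp(-0.5 * x**2 - 0.1 * x**4)   # plays the role of the weight exp(iS/hbar)

# Redefinition written through its inverse, x = G(y), so that dx = G'(y) dy.
G  = lambda y: y + 0.2 * y**3
dG = lambda y: 1.0 + 0.6 * y**2                  # Jacobian of the redefinition

I_original    = quad(f, -np.inf, np.inf)[0]
I_with_jac    = quad(lambda y: f(G(y)) * dG(y), -np.inf, np.inf)[0]   # measure corrected
I_without_jac = quad(lambda y: f(G(y)), -np.inf, np.inf)[0]           # Jacobian dropped

print(I_original, I_with_jac)   # equal: the integral is invariant when the Jacobian is kept
print(I_without_jac)            # different: omitting the Jacobian breaks the invariance
```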
], [ "Field redefinitions in the standard Lagrange multiplier theory", "Let us use the generating functional in Eq.", "(REF ) so that we can see transparently how the generating functional of the standard LM theory transforms under field redefinitions.", "We also provide an alternative derivation using the generating functional in its standard form, that is Eq.", "(REF ).", "Under the field redefinition (REF ) the measure of integration in Eq.", "(REF ) transforms as $\\mathop {\\mathcal {D} \\phi } \\delta \\left( \\frac{\\delta S[\\phi ] }{\\delta \\phi } \\right) \\rightarrow \\mathop {\\mathcal {D} \\phi ^{\\prime }}\\det \\left(\\frac{\\delta \\phi }{\\delta \\phi ^{\\prime }} \\right)\\delta \\left( \\frac{\\delta \\phi ^{\\prime } }{\\delta \\phi }\\frac{\\delta S[ \\phi ^{\\prime }]}{\\delta \\phi ^{\\prime }} \\right) = \\mathcal {D} \\phi ^{\\prime } \\det {J}^2 \\delta \\left( \\frac{\\delta S[\\phi ^{\\prime }]}{\\delta \\phi ^{\\prime }} \\right),$ where we have used the following property of the delta function: $\\begin{split}\\delta \\left( \\frac{\\delta \\phi ^{\\prime } }{\\delta \\phi } \\frac{\\delta S[ \\phi ^{\\prime }] }{\\delta \\phi ^{\\prime }} \\right) &={\\left\\Vert \\frac{\\delta \\phi ^{\\prime } }{\\delta \\phi }\\right\\Vert ^{-1}}{\\delta \\left( \\frac{\\delta S[ \\phi ^{\\prime } ] }{\\delta \\phi ^{\\prime }}\\right)}\\\\ &= \\det {J} \\delta \\left( \\frac{\\delta S[\\phi ^{\\prime }] }{\\delta \\phi ^{\\prime }} \\right)\\end{split}$ (since the transformation in Eq.", "(REF ) is invertible.)", "Thus, we have that $Z_{\\text{LM}}[0] \\rightarrow Z_{\\text{LM}} ^{\\prime }[0] = \\int \\mathop {\\mathcal {D} \\phi ^{\\prime }} \\det {J}^2\\delta \\left(\\frac{\\delta S [ \\phi ^{\\prime }]}{\\delta \\phi ^{\\prime }} \\right)\\exp \\frac{i}{\\hbar } S[ \\phi ^{\\prime }].$ This is not in agreement with Eq.", "(REF ), which in this case should be equal to $\\int \\mathop {\\mathcal {D} \\phi ^{\\prime }}\\det {J} \\delta \\left(\\frac{\\delta S [ \\phi ^{\\prime }]}{\\delta \\phi ^{\\prime }} \\right)\\exp \\frac{i}{\\hbar } S[ \\phi ^{\\prime }],$ by the extra factor of $ \\det J$ .", "Therefore, we conclude that the standard LM theory is not invariant under field redefinitions.", "One can argue that the extra factor in Eq.", "(REF ) is expected since in the LM theory any redefinition of the field $ \\phi $ must be accompanied by a redefinition of the respective LM field $ \\lambda $ .", "This is necessary to preserve the form of the LM theory.", "Indeed, the field redefinition (REF ) in the generating functional (REF ) must be supplemented by an appropriate redefinition of the LM field $ \\lambda $ : $\\lambda \\rightarrow \\lambda ^{\\prime } = \\lambda \\frac{\\delta \\phi ^{\\prime }}{\\delta \\phi },$ thus $S[ \\phi ] + \\lambda \\frac{\\delta S[\\phi ]}{\\delta \\phi } \\rightarrow S[\\phi ^{\\prime }] + \\lambda ^{\\prime } \\frac{\\delta S[\\phi ^{\\prime } ]}{\\delta \\phi ^{\\prime }}.$ (The special case of gauge invariance is treated in section V of Ref. 
[17].)", "It is this companion redefinition that contributes to the extra Jacobian factor in Eq.", "(REF ) since $\mathop {\mathcal {D} \lambda } \rightarrow \mathop {\mathcal {D} \lambda ^{\prime }} \det \frac{\delta \lambda }{\delta \lambda ^{\prime }} = \mathop {\mathcal {D} \lambda ^{\prime }} \det {J}.$", "This is another approach to deriving the result in Eq.", "(REF ).", "Although the extra factor in Eq.", "(REF ) is necessary for the form invariance of the action of the standard LM theory, the behavior of the measure of its path integral under field redefinitions diverges from the expected one (see Eq.", "(REF )), which indicates an inconsistency, since the extra Jacobian factor does not appear under redefinitions of the LM field $ \lambda $ .", "If we independently redefine the field $ \phi $ and its LM field $ {\lambda } $ , leading to the respective Jacobian determinants $ {J}_{\phi } $ , $ {J}_{\lambda } $ , then the measure of integration in Eq.", "(REF ) transforms as $\mathop {\mathcal {D} \phi }\mathop {\mathcal {D} \lambda }\rightarrow \mathop {\mathcal {D} \phi ^{\prime }}\mathop {\mathcal {D} \lambda ^{\prime }}\det {J}^{2}_{\phi } {J}_{\lambda }$ which shows the discrepancy between the factors of the Jacobian determinant for $ \phi $ and its LM field $ \lambda $ .", "Apparently, imposing $ {J}_{\lambda } = {J}_{ \phi }^{-1} $ would lead to a field redefinition invariant path integral for the standard LM theory, since $ \det {J}_{\phi }^{2} {J}_{\lambda } = \det {J}_{\phi } $ .", "But these field redefinitions of the LM field $ \lambda $ are not acceptable, since they would generally break the form of the LM theory shown in Eq.", "(REF ).", "Note that this does not occur for the companion redefinition of the LM field in Eq.", "(REF ): the linearity in the LM field $ \lambda $ is kept unaltered." 
], [ "Field redefinition invariant LM theory", "In the previous section, we concluded that the standard LM theory is not invariant under field redefinitions.", "In this section, we show that the invariance under field redefinitions can be restored if we modify the measure of integration of the standard LM theory.", "Let us consider the following measure of integration for the LM theory, where we introduce a term $\\Delta [ \\phi ]$ , $\\mathop {\\mathcal {D} \\phi } \\Delta [ \\phi ] \\delta \\left( \\frac{\\delta S [\\phi ] }{\\delta \\phi } \\right)$ (compare with the usual in Eq.", "(REF )).", "Under the field redefinitions in Eq.", "(REF ) the modified measure transforms as $\\mathop {\\mathcal {D} \\phi } \\Delta [ \\phi ] \\delta \\left( \\frac{\\delta S [\\phi ] }{\\delta \\phi } \\right) \\rightarrow \\mathop {\\mathcal {D} \\phi ^{\\prime }} \\det {J}^2 \\Delta ^{\\prime }[ \\phi ^{\\prime }] \\delta \\left( \\frac{\\delta S [\\phi ^{\\prime }] }{\\delta \\phi ^{\\prime }} \\right).$ Thus, provided that $\\Delta ^{\\prime }[\\phi ^{\\prime }]= \\det {J}^{-1} \\Delta [\\phi ^{\\prime }]$ the path integral for the LM theory is invariant under field redefinitions.", "Our choice for $ \\Delta [\\phi ]$ reads $\\Delta [ \\phi ]= \\det \\left( \\frac{\\delta ^{2} S[ \\phi ] }{\\delta \\phi \\delta \\phi }\\right)^{+1/2},$ which is the Pfaffian of the Hessian of the action (REF ).", "It should be the absolute value of the Hessian, but we assume that the Hessian of the action (REF ) is positiveThe determinants of Eq.", "(REF ) in the Hamiltonian formulation are known to be always positive [48], [49].", "This assumption also let us avoid some subtleties that can appear due to phases [52].", "so that the absolute value can be omitted in Eq.", "(REF ).", "In Appendix  we extend our proposal to fermionic systems and suggest a generalization of the Eq.", "(REF ).", "The determinant, $ \\Delta [ \\phi ]$ , leads to non-trivial contributions to the perturbation expansion of the LM theory.", "In the next section, we see that these contributions are responsible for canceling extra contributions that arise with the introduction of the LM field $ \\lambda $ .", "Now, let us show that $ \\Delta [ \\phi ] $ in Eq.", "(REF ) satisfies the transformation law in Eq.", "(REF ).", "The functional derivative transforms as $\\frac{\\delta }{\\delta \\phi } = \\frac{\\delta \\phi ^{\\prime } }{\\delta \\phi } \\frac{\\delta }{\\delta \\phi ^{\\prime }} = {J}^{-1} \\frac{\\delta }{\\delta \\phi ^{\\prime }}$ thus $\\frac{\\delta ^{2} }{\\delta \\phi \\delta \\phi }=J^{-2} \\left(\\frac{\\delta ^{2} }{\\delta \\phi ^{\\prime } \\delta \\phi ^{\\prime }} - \\frac{\\delta J }{\\delta \\phi ^{\\prime } } J^{-1}\\frac{\\delta }{\\delta \\phi ^{\\prime }}\\right).$ Therefore, $\\Delta ^{\\prime }[\\phi ^{\\prime }]= \\det J^{-1} \\Delta [ \\phi ^{\\prime }] \\det K^{+1/2}$ with $K = 1 - \\left(\\frac{\\delta ^{2} S [ \\phi ^{\\prime } ] }{\\delta \\phi ^{\\prime } \\delta \\phi ^{\\prime }}\\right)^{-1} \\frac{\\delta J}{\\delta \\phi ^{\\prime } } J^{-1}\\frac{\\delta S [ \\phi ^{\\prime } ] }{\\delta \\phi ^{\\prime }}.$ The factor $ \\det K^{+1/2} $ does not contribute to the measure of integration of the LM theory, given that the delta function in Eq.", "(REF ) imposes that $\\det K = \\det 1$ .", "Using Eq.", "(REF ), the path integral of the modified LM theory reads $\\mathcal {Z}_{\\text{LM}}[0] = \\int \\mathop {\\mathcal {D} \\phi } \\mathop {\\mathcal {D} \\lambda } \\det \\left( \\frac{\\delta ^{2} S 
[\\phi ]}{\\delta \\phi \\delta \\phi }\\right)^{+1/2} \\exp \\frac{i}{\\hbar } S_{\\text{LM}} [\\phi ],$ where $S_{ \\text{LM} } [\\phi ]=\\int \\mathop {dx}\\left( \\mathcal {L} ( \\phi ) + \\lambda \\frac{\\delta S [\\phi ] }{\\delta \\phi } \\right)$ is the action of the standard LM theory.", "The path integral in Eq.", "(REF ) (invariant under field redefinitions) is our proposal for the quantum LM theory.", "We can write the determinant in Eq.", "(REF ) locally with the introduction of new fields.", "For this, we first rewrite it as $\\begin{split}\\det \\left( \\frac{\\delta ^{2} S [\\phi ] }{\\delta \\phi \\delta \\phi }\\right)^{+1/2} =\\det \\left(\\frac{\\delta ^{2} S_{} [ \\phi ] }{\\delta \\phi \\delta \\phi }\\right)\\det \\left(\\frac{\\delta ^{2} S [ \\phi ] }{\\delta \\phi \\delta \\phi }\\right)^{-1/2}\\end{split}$ and then exponentiate the determinants in Eq.", "(REF ), which results in $\\Delta [ \\phi ]= \\int \\mathop {\\mathcal {D}{\\bar{\\theta }}} \\mathop {\\mathcal {D }\\theta } \\mathop {\\mathcal {D} \\chi } \\exp \\frac{i}{\\hbar } \\int \\mathop {d^{}x} \\left( \\bar{\\theta } \\frac{\\delta ^{2 }S [\\phi ] }{\\delta \\phi \\delta \\phi } \\theta + \\frac{1}{2} \\chi \\frac{\\delta ^{2} S [\\phi ] }{\\delta \\phi \\delta \\phi } \\chi \\right),$ where $ \\bar{\\theta } , \\theta $ and $\\chi $ are scalar fermionic and bosonic fields, respectively.", "An analogous procedure is done in the worldline formalism to obtain a covariant path integral.", "The term $ \\Delta [g] \\sim \\sqrt{g} $ analogous to Eq.", "(REF ) is required to the measure of the path integral retain general covariance so that the Lee-Yang ghost fields are introduced [43], .", "These new fields (analogous to Lee-Yang ghost fields) are ghost-like.", "The LM field $ \\lambda $ leads to negative norm states [14], [18] as well as the ghost fields $ \\theta $ , $ \\bar{\\theta } $ that violate the spin-statistic theorem.", "The ghost fields $ {\\theta } $ , $ {\\bar{\\theta }} $ are similar to Faddeev-Popov ghosts [25], while the field $ \\chi $ plays the role of a third ghostA third ghost (the Nielsen-Kallosh ghost [53], ) appears in gauge theories for non-singular gauges with functional dependence [55].", "It is similar to our case given that the ghost-like fields in the action Eq.", "(REF ) depend on a functional, the Hessian of the action $S[ \\phi ]$ .", "Gauge theories and gravity with high derivative gauges are examples of models that need a third ghost [56], , ..", "Although in principle, these two kinds of ghosts (Faddeev-Popov and the Lee-Yang-like $ \\theta $ , $ \\bar{\\theta } $ , and $ \\chi $ ghost fields) are introduced for different reasons, it can be said that both are necessary for an appropriate path integral quantization.", "From Eq.", "(REF ) we have the ghost fields action $S_{\\text{gh}}[ \\phi ] =\\int \\mathop {dx}\\left( \\bar{\\theta } \\frac{\\delta ^{2 }S [\\phi ] }{\\delta \\phi \\delta \\phi } \\theta + \\frac{1}{2} \\chi \\frac{\\delta ^{2} S [\\phi ] }{\\delta \\phi \\delta \\phi } \\chi \\right)$ that allows us to define an effective action for the modified LM theory as $\\begin{split}S_{\\text{eff}} [\\phi ] &= S_{\\text{LM}}[ \\phi ]+ S_{\\text{gh}}[ \\phi ] \\\\&= \\int \\mathop {dx}\\left( \\mathcal {L} ( \\phi ) + \\lambda \\frac{\\delta S[\\phi ] }{\\delta \\phi } + \\bar{\\theta } \\frac{\\delta ^{2 }S [\\phi ] }{\\delta \\phi \\delta \\phi } \\theta + \\frac{1}{2} \\chi \\frac{\\delta ^{2} S [\\phi ] }{\\delta \\phi \\delta \\phi } \\chi 
\\right).\\end{split}$ Substituting Eqs.", "(REF )—(REF ) into Eq.", "(REF ) yields $\\mathcal {Z}_{\\text{LM}}[0] = \\int \\mathop {\\mathcal {D} \\phi } \\mathop {\\mathcal {D} \\lambda } \\mathop {\\mathcal {D}\\bar{\\theta }} \\mathop {\\mathcal {D}\\theta } \\mathop {\\mathcal {D} \\chi }\\exp \\frac{i}{\\hbar } S_{\\text{eff} } [\\phi ],$ which is the proper path integral for the LM theory.", "The action of the modified LM theory in Eq.", "(REF ) is invariant under the ghost number ($ \\mathop {gh} $ ) symmetry This symmetry is related to the ghost charge, which is conserved by the Noether theorem.", "In Appendix  we also show that the ghost sector is invariant under (anti)BRST-like symmetry due to its natural supersymmetric structure.", "presented in Appendix .", "It implies that the ghost number of the effective action should vanish $ \\mathop {gh} S_{\\text{eff}} [\\phi ] = 0$ , while $ \\mathop {gh} [ \\theta ]= -\\mathop {gh} [ \\bar{\\theta } ]=1$ and $ \\mathop {gh} [\\chi ]=0$ , which agree with ref. [55].", "This shows another similarity between the ghost fields $ \\theta $ , $ \\bar{\\theta } $ , and Faddeev-Popov ghosts.", "By this similarity, let us suppose that the ghost fields $ \\theta $ and $ \\bar{\\theta } $ serve as negative degrees of freedom in the modified LM theory.", "If we denote the degrees of freedom of a field $ I$ by $ N_I$ and the total degrees of freedom of the fermionic ghost fields by $ N_{\\text{gh}} =N_{\\theta } + N_{\\bar{\\theta }}$ , then the total degrees of freedom introduced in the modified LM theory reads $N_{\\lambda } - N_{ \\text{gh}} + N_{\\chi } = 0,$ since $ N_{\\lambda } = N_{\\chi } = N_{ \\text{gh}}/2 = N_{\\phi } $ .", "Naively, the modified LM theory has a total of $N_{\\phi } +N_{\\lambda } - N_{ \\text{gh}} + N_{\\chi } = N_{\\phi }$ degrees of freedom, which coincides with the degrees of freedom present in the theory described by the action (REF ) (this is also consistent with the free energy density at finite temperature [59].)", "Therefore, the extra degrees of freedom due to the LM field would cancel against the ghost fields of the modified LM theory, while the standard LM theory has twice $ N_{\\phi } + N_{\\lambda } = 2 N_{\\phi } $ .", "To count rigorously the degrees of freedom of the modified LM theory further investigation is required.", "In the next section, we follow [17] to show that the generating functional of the modified LM theory in Eq.", "(REF ) does not lead to contributions beyond one-loop order that is a main characteristic of the standard LM theory.", "Moreover, now that we have introduced ghost fields in the standard LM theory the doubling of one-loop contributions is absent." 
], [ "Modified LM theory", "To obtain a field redefinition invariant path integral for the LM theory we introduce the factor $ \\Delta [\\phi ] $ , defined in Eq.", "(REF ), in the measure of integration of the standard LM theory.", "In this section, we shall treat it in more detail showing not only that the additional term $\\Delta [ \\phi ] $ does not lead to contributions beyond one-loop order, but also that the ghost contributions from Eq.", "(REF ) cancel the extra contribution due to the LM field $ \\lambda $ .", "The generating functional of the modified LM theory, for the action $ S [ \\phi ]$ in Eq.", "(REF ), is $\\begin{split}& \\mathcal {Z}_{\\text{LM}} [0] =\\int \\mathop {\\mathcal {D} \\phi } \\mathop {\\mathcal {D} \\lambda } \\mathop {\\mathcal {D} \\bar{\\theta } } \\mathop {\\mathcal {D} \\theta } \\mathop {\\mathcal {D} \\chi } \\exp { \\frac{i}{\\hbar } }\\\\& \\times {\\int \\mathop {d x} \\left( {\\mathcal {L}}_{\\text{}}^{} ( \\phi ) + \\lambda \\frac{\\delta S[ \\phi ] }{\\delta \\phi } + \\bar{\\theta } \\frac{\\delta ^{2} S[ \\phi ] }{\\delta \\phi \\delta \\phi }\\theta + \\frac{1}{2} \\chi \\frac{\\delta ^{2} S[\\phi ]}{\\delta \\phi \\delta \\phi } \\chi \\right) }.\\end{split}$ It can be written conveniently (integrating out the fields $ \\theta $ , $ \\bar{\\theta } $ , $ \\chi $ and $ \\lambda $ ) as $\\begin{split}\\mathcal {Z}_{\\text{LM}}[0] = & \\int \\mathop {\\mathcal {D} \\phi } \\det \\left( \\mathcal {L}^{\\prime \\prime }( \\phi )\\right)\\det \\left( \\mathcal {L}^{\\prime \\prime }( \\phi )\\right)^{-1/2}\\mathop {\\delta } \\left( \\mathcal {L}^{\\prime } ( \\phi )\\right)\\\\ &\\times \\exp \\frac{i}{\\hbar } \\int \\mathop {d x} \\mathcal {L} ( \\phi ).\\end{split}$ Using the functional analog of the Eq.", "(REF ), we obtain $\\mathcal {Z}_{\\text{LM}} [0]= \\sum _{\\bar{\\phi }(x)} \\det \\left( \\mathcal {L}^{\\prime \\prime }(\\bar{\\phi })\\right)^{-1/2}\\exp \\frac{i}{ \\hbar }\\int \\mathop {d x} \\mathcal {L}(\\bar{\\phi }).$ The generating functional of the modified LM theory, at one-loop, now coincides with Eq.", "(REF ).", "The field redefinition invariant formulation of the LM theory leads to the same tree-level and one-loop contributions obtained with the generating functional in Eq.", "(REF ), while higher-loop order contributions vanish.", "It shows that our choice in Eq.", "(REF ) led to a modified path integral for the LM theory (in Eq.", "(REF )) that is invariant under field redefinitions, but that kept restricting quantum corrections to one-loop order.", "From Eqs.", "(REF ) and (REF ), we have that $\\begin{split}\\mathcal {W}_{\\text{LM}} |_{\\text{1loop}} &\\equiv - i \\hbar \\ln \\mathcal {Z}|_{\\text{1loop}} \\\\ &= - i \\hbar \\ln Z|_{\\text{1loop}} = W |_{\\text{1loop}}.\\end{split}$ Comparing with Eq.", "(REF ) we see that the factor of 2 that appears in the standard LM theory is now absent.", "Hence, considering an appropriate, invariant under field redefinitions, path integral for the LM theory we remove the doubling of one-loop contributions while preserving its main feature, namely the truncated perturbative expansion.", "In Appendix  we provide a diagrammatic analysis of the modified LM theory to illustrate the results obtained here.", "It is shown that the ghost contributions are responsible for canceling the extra contributions due to the LM field and, as in the Standard LM theory, the perturbative expansion is restricted to one-loop order." 
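This cancellation can also be checked numerically in a zero-dimensional toy model, where the functional $\delta $ -function becomes an ordinary one; the single-variable Euclidean action, the parameter values, and the narrow-Gaussian representation of the $\delta $ -function used below are our own illustrative choices, not part of the original formalism.

```python
import numpy as np
from scipy.integrate import quad

# Zero-dimensional Euclidean toy: S(x) = a x^2/2 + g x^4/4, single stationary point x = 0.
a, g = 2.0, 0.5
S   = lambda x: 0.5 * a * x**2 + 0.25 * g * x**4
dS  = lambda x: a * x + g * x**3          # S'(x)
d2S = lambda x: a + 3.0 * g * x**2        # S''(x)

# Narrow Gaussian standing in for delta(S'(x)).
eps = 1e-3
delta = lambda u: np.exp(-0.5 * (u / eps)**2) / (eps * np.sqrt(2.0 * np.pi))

# Standard LM: integral of delta(S') exp(-S)  ->  (S''(0))^{-1} at the stationary point.
Z_std = quad(lambda x: delta(dS(x)) * np.exp(-S(x)),
             -1.0, 1.0, points=[0.0], limit=200)[0]

# Modified LM: extra measure factor (S'')^{+1/2}  ->  (S''(0))^{-1/2} overall.
Z_mod = quad(lambda x: np.sqrt(d2S(x)) * delta(dS(x)) * np.exp(-S(x)),
             -1.0, 1.0, points=[0.0], limit=200)[0]

print(Z_std, 1.0 / d2S(0.0))            # ~ 0.5    : the squared ("doubled") one-loop factor
print(Z_mod, d2S(0.0) ** -0.5)          # ~ 0.707  : the usual one-loop factor a^{-1/2}
print(np.log(Z_std) / np.log(Z_mod))    # ~ 2      : the doubling removed by the modified measure
```

In this toy setting the determinant factors reduce to powers of $ S^{\prime \prime }(0) = a $ , so the standard LM result is the square of the usual one-loop factor, while the modified measure restores it, mirroring the comparison made above.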
], [ "Discussion", "In this paper, we have found that the path integral of the standard LM theory is not invariant under field redefinitions.", "To restore this invariance, we proposed a modification of the path integral quantization of the LM theory, introducing a new determinant factor in the measure of the path integral of the standard LM theory.", "The field redefinition invariant LM formalism turned out to circumvent issues that would otherwise arise in the standard LM theory.", "We have shown that in this modified LM theory new ghost fields, analogous to Lee-Yang ghosts, arise.", "However, these ghosts do not spoil the main feature of the LM theory, namely the restriction of the loop expansion to one-loop order.", "Moreover, it is the ghost fields that are responsible for the cancellation of the extra contributions due to the LM fields, which is one of the main drawbacks of the standard LM theory.", "We also suggested that the ghost fields' degrees of freedom should cancel the degrees of freedom coming from the LM fields, arguing that the degrees of freedom of fields with non-vanishing ghost number should be counted as negative.", "This was justified by the parallel that we have shown between them and Faddeev-Popov ghosts.", "Note, however, that this only holds rigorously for Faddeev-Popov ghosts, since there is a direct correspondence between them and constraints [60].", "This is a shortcoming of our approach, which was based solely on path integral quantization.", "A related issue is how to treat singular actions in our formalism, in particular the Einstein-Hilbert action.", "Our formalism can be applied to degenerate theories by replacing the original singular action with an effective (non-singular) action obtained through the generalized Faddeev-Senjanovic (FS) procedure [25], [61].", "This is work in progress, and we expect to report on it in the near future; in particular, we want to provide a consistent field redefinition invariant LM theory for quantum gravity, which is the main application of the LM formalism.", "Unitarity is another issue that requires attention in the LM theory.", "The standard LM theory is known to yield an unbounded Hamiltonian, since the presence of ghost-like LM fields (the kinetic term has the wrong sign) results in instabilities that could lead to a lack of unitarity [14], [29].", "In Ref. [18], this is resolved using indefinite metric quantization.", "In particular, it has been verified that the standard LM theory is consistent with the unitarity condition using Cutkosky's cutting rules [62].", "This analysis is not altered by the presence of the ghost fields of the modified LM theory; therefore, both approaches satisfy this unitarity condition.", "In our approach, the doubled one-loop quantum corrections (and degrees of freedom) are necessarily unphysical and cannot be disregarded.", "However, we have shown that the ghost fields $ \theta $ , $ \bar{\theta } $ and $ \chi $ are responsible for the cancellation of these unphysical one-loop quantum effects coming from the ghost-like LM field $ \lambda $ , just as Faddeev-Popov ghosts cancel unphysical longitudinal contributions in gauge theories.", "Moreover, the unphysical states due to the LM field $ \lambda $ should cancel against the unphysical states coming from the ghost fields $ \theta $ , $ \bar{\theta } $ , while the third ghost $ \chi $ does not lead to unphysical states, as we have discussed in section .", "Therefore, the modified LM theory should be unitary.", "To further clarify these issues, the study of the 
Hamiltonian formalism for the modified LM theory is crucial.", "The similarity between the measure (REF ) and the Senjanovic measure [61] $\mathop {\mathcal {D} p} \mathop {\mathcal {D} q}\left| \det \Vert \left\lbrace \alpha _{i} ,\, \alpha _{j} \right\rbrace \Vert \right|^{1/2} \delta ( \alpha _{k}),$ where $ \alpha _{k} $ are second-class constraints, once the canonical momenta $p$ are integrated out, can be a clue for investigating it.", "If this is the case, the modified LM theory is a degenerate (higher-derivative) theory.", "The analogy between Faddeev-Popov ghosts and the ghosts $ \theta $ , $ \bar{\theta } $ of the modified LM theory would be confirmed, and we could rigorously state that they count as negative degrees of freedom.", "Moreover, since the phase space of the LM theory would be reduced (it is constrained), it should be free of Ostrogradsky ghosts [63], which agrees with our results.", "Besides that, it is clear that there is also a similarity between the classical path integral [47] $\begin{split}Z_{\text{CPI}}[0] = &\int \mathop {\mathcal {D} \phi } \mathop {\mathcal {D} \lambda } \mathop {\mathcal {D} \bar{\theta } } \mathop {\mathcal {D} \theta }\\ & \times \exp {i \int \mathop {d x} \left( \lambda \frac{\delta S[ \phi ] }{\delta \phi } + \bar{\theta } \frac{\delta ^{2} S[ \phi ] }{\delta \phi \delta \phi }\theta \right) }\end{split}$ and the path integral of the LM theory, in particular of the modified LM theory in Eq.", "(REF ).", "The classical path integral in Eq.", "(REF ) restricts the field $ \phi $ to its classical field configurations, while in the LM theory in Eq.", "(REF ) the field $ \phi $ is restricted to satisfy its classical equations of motion.", "At first sight, these constraints appear to be equivalent, but the classical path integral is invariant under several novel symmetries, such as the (anti)BRST-like symmetry [48], [47], supersymmetry [65], and universal local symmetries [67], that are broken by the weight $ \exp \left( iS [ \phi ]/\hbar \right) $ in the quantum path integral in Eq.", "(REF ).", "Therefore, despite the similarities between the generating functionals (REF ) and (REF ), they cannot be equivalent.", "In particular, the properties of non-superposition and non-interference, found in the classical theory, correspond to the universal local symmetries of the classical path integral [67].", "Considering that the path integral of the LM theory is not invariant under them, it must contain richer physics, which we hope to explore in future work.", "Discussions with J. Frenkel and D. G. C. McKeon were illuminating.", "We would also like to thank Pedro F. H. Bairrão for a careful reading of the manuscript and J. P. Edwards for an enlightening comment.", "We thank CNPq (Brazil) for financial support." 
], [ "Modified LM theory with Grassmann fields", "In the case of fermionic systems described by anticommuting fields (Grassmann fields) the measure of integration changes under field redefinitions in the inverse way of the bosonic case: $\\mathop {\\mathcal {D} \\psi } \\mathop {\\mathcal {D} \\bar{\\psi }} \\rightarrow \\det {J}^{-1} \\mathop {\\mathcal {D} \\psi ^{\\prime }} \\mathop {\\mathcal {D} \\bar{\\psi }},$ where the Jacobian determinant is defined analogously with the bosonic case (in Eq.", "(REF )) as $\\det {J}^{-1} = \\det \\frac{\\partial \\psi ^{\\prime }}{\\partial \\psi }.$ Thus, in the LM formalism we have that $\\mathop {\\mathcal {D} \\psi } \\mathop {\\mathcal {D} \\bar{\\psi }} \\mathop {\\delta } \\left( \\frac{\\delta S }{\\delta \\psi }\\right) \\mathop {\\delta } \\left( \\frac{\\delta S }{\\delta \\bar{\\psi }}\\right) \\rightarrow \\det {J}^{-2}\\mathop {\\mathcal {D} \\psi ^{\\prime }} \\mathop {\\mathcal {D} \\bar{\\psi }} \\mathop {\\delta } \\left( \\frac{\\delta S }{\\delta \\psi ^{\\prime }}\\right) \\mathop {\\delta } \\left( \\frac{\\delta S }{\\delta \\bar{\\psi }}\\right).$ To restore the usual field redefinition property, as in Eq.", "(REF ), we introduce in the measure the term $ \\Delta [ \\psi , \\bar{\\psi } ]$ that must transform as $\\Delta [ \\psi , \\bar{\\psi } ] \\rightarrow \\Delta ^{\\prime }[ \\psi ^{\\prime }, \\bar{\\psi } ]=\\det {J} \\Delta [ \\psi ^{\\prime } , \\bar{\\psi } ].$ We can easily generalize Eq.", "(REF ) to a fermionic system as $\\Delta [ \\psi , \\bar{\\psi } ] = \\det \\left(\\frac{\\delta ^{2} S }{\\delta \\psi \\delta \\bar{\\psi }}\\right)^{-1}.$ It satisfies Eq.", "(REF ), since Eq.", "(REF ) also is valid for left derivatives: $\\frac{\\delta }{\\delta \\psi } = \\frac{\\delta \\psi ^{\\prime } }{\\delta \\psi } \\frac{\\delta }{\\delta \\psi ^{\\prime }} = {J}^{-1} \\frac{\\delta }{\\delta \\psi ^{\\prime }},$ therefore, $\\Delta ^{\\prime } [ \\psi ^{\\prime }, \\bar{\\psi } ] = \\det \\left( {J}^{-1} \\frac{\\delta ^{2} S }{\\delta \\psi ^{\\prime } \\delta \\bar{\\psi }} \\right)^{-1}= \\det {J} \\Delta [ \\psi ^{\\prime } , \\bar{\\psi } ].$ As in the bosonic case, the contribution of $ \\Delta [ \\psi , \\bar{\\psi } ]$ cancels half of the one-loop contribution from the standard LM formalism.", "For instance, let us consider a free fermionic system.", "The one-loop contribution is equal to $ \\det M = \\Delta _{0}^{-1} =\\det \\left( \\delta ^{2} S_0/\\delta \\psi \\delta \\bar{\\psi }\\right) $ , in the LM formalism we have the square of it: $ \\det M^{2} $ (the doubling).", "Instead, in the modified LM theory, we obtain $ \\Delta _0 \\det M^{2}= \\det M^{-1} \\det M^{2} = \\det M $ as in the original theory.", "Moreover, the contributions beyond one-loop order are suppressed as in the standard LM formalism.", "In this fermionic system, the measure of integration in the framework of the modified LM formalism is obtained by the substitution $\\mathop {\\mathcal {D} \\psi } \\mathop {\\mathcal {D} \\bar{\\psi }}[\\text{theory}]{\\text{LM}}\\mathop {\\mathcal {D} \\psi } \\mathop {\\mathcal {D} \\bar{\\psi }}\\det \\left( \\frac{\\delta ^{2} S[ \\psi , \\bar{\\psi } ] }{\\delta \\psi \\delta \\bar{\\psi }}\\right)^{-1}\\delta \\left( \\frac{\\delta S }{\\delta \\psi }\\right) \\mathop {\\delta } \\left( \\frac{\\delta S }{\\delta \\bar{\\psi }}\\right).$ This is mostly interesting for gauge theories since Faddeev-Popov ghosts are Grassmann fields.", "The Eqs.", "(REF ) and (REF ) can be written in a single 
expression using the concept of superfields.", "Let $ \\Psi _{i} $ denote a set of fields with even and odd Grassmann numbers, bosonic and fermionic fields respectively, that define the superfield $ \\Psi $ .", "The measure of integration of the generating functional in the modified LM theory for the superfield $ \\Psi $ reads $\\mathop {\\mathcal {D} \\Psi }[\\text{theory}]{\\text{LM}}\\mathop {\\mathcal {D} \\Psi } \\mathop {\\mathrm {SDet}} \\left( \\frac{\\delta ^{2} S[ \\Psi ] }{\\delta \\Psi \\delta \\Psi }\\right)^{+1/2} \\mathop {\\delta } \\left( \\frac{\\delta S [ \\Psi ] }{\\delta \\Psi }\\right),$ where $ \\mathop {\\mathrm {SDet}} M $ denote the superdeterminant of the supermatrix $M$ ." ], [ "Standard LM theory", "Following the procedure of Refs.", "[16], [17], we provide a diagrammatic analysis.", "Assuming that the Lagrangian (REF ) has the polynomial form (field indices are explicit) $\\mathcal {L} ( \\phi ) =\\frac{1}{2!}", "a_{i j}^{(2)} \\phi _{i} \\phi _{j} + \\frac{1}{3!}", "a_{ijk}^{(3)} \\phi _{i} \\phi _{j} \\phi _{k} +\\frac{1}{4!}", "a_{ijkl}^{(4)} \\phi _{i} {\\phi }_{j}^{} \\phi _{k} \\phi _{l} + \\cdots ,$ we have that $\\begin{split}\\mathcal {L} + \\lambda \\frac{\\partial \\mathcal {L}}{\\partial \\phi } =\\frac{1}{2!}", "& a_{i j}^{(2)} \\phi _{i} \\phi _{j} + \\frac{1}{3!}", "a_{ijk}^{(3)} \\phi _{i} \\phi _{j} \\phi _{k} + \\\\ + &\\frac{1}{4!}", "a_{ijkl}^{(4)} \\phi _{i} {\\phi }_{j}^{} \\phi _{k} \\phi _{l} + \\cdots + a_{i j}^{(2)} \\phi _{i} \\lambda _{j} +\\\\ & + \\frac{1}{2!}", "a_{ijk}^{(3)} \\phi _{i} \\phi _{j} \\lambda _{k} +\\frac{1}{3!}", "a_{ijkl}^{(4)} \\phi _{i} {\\phi }_{j}^{} \\phi _{k} \\lambda _{l} + \\cdots ,\\end{split}$ and the Feynman rules can be obtained straightforwardly.", "As we can see in Eq.", "(REF ), along with the vertices that come from the original Lagrangian $ \\mathcal {L} ( \\phi ) $ we have also similar vertices in which one field $ \\phi $ is replaced by an LM field $ \\lambda $ .", "We obtain the propagators inverting the matrix of the bilinear terms in $ \\phi $ and $ \\lambda $ : $\\begin{pmatrix}a_{ij} & a_{ij} \\\\a_{ij} & 0\\end{pmatrix}^{-1} =\\begin{pmatrix}0 & a_{ij}^{-1} \\\\a_{ij}^{-1}& - a_{ij}^{-1}\\end{pmatrix},$ which reveals that there is no propagator $ \\left\\langle \\phi _{i} \\phi _{j} \\right\\rangle $ for the field $ \\phi $ .", "Instead there are the mixed propagators $ \\langle \\phi _{i} \\lambda _{j} \\rangle = \\left\\langle \\lambda _{i} \\phi _{j} \\right\\rangle =-i a_{ij}^{-1} $ , and the propagator $ \\left\\langle \\lambda _{i} \\lambda _{j}\\right\\rangle = i a_{ij}^{-1}$ of the field $ \\lambda $ that gets a negative sign.", "By these rules, we cannot draw any diagram beyond one-loop order.", "We can derive it as follows.", "To draw a one-loop diagram with $ \\lambda $ in an external leg requires at least one internal field $ \\phi $ line since there is no vertex with more than one LM field $ \\lambda $ .", "There is no propagator for the field $ \\phi $ , therefore it is not possible to draw these diagrams.", "We can only draw one-loop diagrams with $ \\phi $ in the external lines, as we see in Fig.", "REF , and only mixed propagators can appear in the internal lines.", "These one-loop diagrams cannot be iterated to construct any diagram of higher-loop order (see Fig.", "REF ), since there is no propagator for the field $ \\phi $ .", "Other diagrams of higher order also cannot be drawn for similar, see Fig.", "REF .", "Figure: One-loop contributions to 4-point amplitude (φ i φ 
j φ k φ l ) ( \\phi _{i} \\phi _{j} \\phi _{k} \\phi _{l} ) in the LM theory.", "Solid and doubled lines represent respectively the fields φ \\phi and λ \\lambda .Figure: Diagrams higher than one loop order cannot be drawn joining one-loop diagrams in the LM theory.", "In these examples crossed lines represent forbidden topologies.Figure: Higher-loop order diagrams cannot be drawn at all in the LM theory.", "Crosses denote forbidden topologies.Comparing the diagrams of the LM theory, see Fig.", "REF , to the diagrams in Fig.", "REF from the original theory described by the Lagrangian in Eq.", "(REF ), we see that the one-loop contributions are doubled in the LM theory described by the Lagrangian in Eq.", "(REF ).", "Figure: One-loop contributions to 4-point amplitude (φ i φ j φ k φ l ) ( \\phi _{i} \\phi _{j} \\phi _{k} \\phi _{l} ) in the theory described by the Lagrangian ℒ(φ) \\mathcal {L} ( \\phi ) in Eq.", "().This doubling appears in the diagrams as a relative combinatorial factor of 2, which is consistent with Eq.", "(REF ).", "For instance, while the symmetry factor of the diagram in Fig.", "REF (a) is 2, the diagram in Fig.", "REF (b) has a symmetry factor of 1.", "Thus, the LM theory contribution shown in Fig.", "REF (b) results in twice the usual one-loop contribution shown in Fig.", "REF (a).", "Figure: Diagrams (a) and (b) are respectively the one-loop contributions to 2-point amplitude (φ i φ j )( \\phi _{i} \\phi _{j} ) in the theory described by the Lagrangians in the Eqs.", "() and () (LM theory)." ], [ "Modified LM theory", "In the modified LM theory, we have additional terms due to the ghosts $ \\theta $ , $ \\bar{\\theta } $ , and $ \\chi $ .", "Substituting Eq.", "(REF ) in Eq.", "(REF ) one reads from the ghost action that $\\begin{split}\\mathcal {L}_{\\text{gh}} ( \\phi ) &= \\bar{\\theta }_{i}\\left( a^{(2)}_{ij} + a^{(3)}_{ijk} \\phi _{k} + \\frac{1}{2!}", "a^{(4)}_{ijkl} \\phi _{k} \\phi _{l} + \\cdots \\right)\\theta _{j} + \\\\& +\\frac{1}{2} \\chi \\left( a^{(2)}_{ij} + a^{(3)}_{ijk} \\phi _{k} + \\frac{1}{2!}", "a^{(4)}_{ijkl} \\phi _{k} \\phi _{l} + \\cdots \\right) \\chi .\\end{split}$ This shows that the propagators of the ghosts are equal to $ i a_{ij} $ coinciding with the mixed propagators.", "It is important to remark that the LM field $ \\lambda $ does not interact with ghosts, otherwise diagrams of order higher than one loop would arise.", "Now we can proceed with the diagrammatic analysis of the modified LM theory.", "First, since ghosts can only appear in closed loops the tree-level contributions are kept unaltered.", "We can proceed to consider loop diagrams.", "Let us review the results derived from the diagrammatic analysis done in the previous section and check if they remain valid in the presence of the ghost fields introduced in the modified LM theory.", "The conclusions are: One-loop diagrams with the LM field $ \\lambda $ in an external leg are not allowed.", "We can only draw one-loop diagrams with $ \\phi $ in the external legs.", "Diagrams with more than one loop are not allowed.", "The ghost fields do not interact with the LM field $ \\lambda $ , hence the first conclusion remains true.", "It is also not possible to draw any one-loop diagram with external ghosts legs, see Fig.", "REF (d), since we must have at least one internal $ \\phi $ propagator, which is forbidden, see Fig.", "REF (c).", "Figure: Diagrams higher than one loop order cannot be drawn in the modified LM theory (a, b).", "Ghosts cannot appear in external lines (c, d).", 
"(crosses denote forbidden topologies.", ")The last conclusion follows from the same reasoning used in section , see Fig.", "REF (a, b).", "Thus, the ghosts in the modified LM theory do not spoil the main characteristic of the LM theory, namely the restriction of the loop expansion to one-loop order, whereas matter fields require further attention [18].", "To complete our diagrammatic analysis note that the diagrams in the Figs.", "REF , REF , REF and REF differ only by overall factors.", "The diagrams in Fig.", "REF (a) have a symmetry factor of 1 (with the minus sign from the fermionic loop these diagrams have an overall factor of $-1$ ), while diagrams in Fig.", "REF (b) have a symmetry factor of 2 as the diagrams in Fig.", "REF .", "Therefore, for these contributions, the ghost diagrams add up leading to an overall factor of $ -1/2$ .", "Figure: Contributions coming from ghosts to 4-point amplitude (φ i φ j φ k φ l ) ( \\phi _{i} \\phi _{j} \\phi _{k} \\phi _{l} ) in the modified LM theory.", "Pointed and dashed lines represent respectively the ghost fields θ ¯ \\bar{\\theta }, θ \\theta , and χ\\chi .In the modified LM theory the total one-loop contribution is then obtained by summing the diagrams from the standard LM theory in Fig REF (overall factor of 1) with the ghosts diagrams in Fig REF (overall factor of $-1/2$ ) resulting in the usual one-loop contribution (overall factor of $1/2$ ) coming from the action in (REF ).", "For instance, see Fig.", "REF .", "Figure: One-loop contribution in the modified LM theory to 4-point amplitude (φ i φ j φ k φ l )( \\phi _{i} \\phi _{j} \\phi _{k} \\phi _{l} ), which is identical to the original theory.", "The overall factors are indicated.This show that ghost contributions in the modified LM theory are canceling the extra one-loop contributions of the standard LM theory, which is consistent with Eq.", "(REF )." ], [ "Ghost number symmetry", "The effective action of the modified LM theory in Eq.", "(REF ) is invariant under the transformation $\\delta \\bar{\\theta } = - \\sigma \\bar{\\theta } , \\quad \\delta \\theta = \\sigma \\theta , \\quad \\delta \\phi = \\delta \\lambda = \\delta \\chi =0;$ where $ \\sigma $ is a commuting parameter.", "This symmetry of the LM theory is related to the conservation of the ghost charge $Q_{\\text{gh}}$ [47].", "The ghost charge is directly related to the ghost number $ \\mathop {gh}$ mentioned in section .", "It can be shown that $ Q_{\\text{gh}} \\theta = \\sigma $ , $ Q_{\\text{gh}} \\bar{\\theta }=- \\sigma $ while $ Q_{\\text{gh}} \\phi = Q_{\\text{gh}} \\lambda = Q_{\\text{gh}} \\chi =0$ , that is, $ Q_{\\text{gh}} \\equiv \\sigma \\mathop {gh} $ ." 
], [ "(Anti)BRST-like symmetry", "The ghost sector in Eq.", "(REF ) is invariant under the BRST-like symmetry $\\delta \\theta = \\chi \\epsilon , \\quad \\delta \\chi = \\epsilon \\bar{\\theta },\\quad \\delta \\phi = \\delta \\lambda = \\delta \\bar{\\theta } =0,$ and the anti-BRST symmetry $\\bar{\\delta } \\bar{\\theta } = \\chi \\bar{\\epsilon }, \\quad \\bar{\\delta } \\chi = \\theta \\bar{\\epsilon },\\quad \\bar{\\delta } \\phi = \\bar{\\delta } \\lambda = \\bar{\\delta } \\theta =0,$ where ($ \\bar{\\epsilon } $ ) $ \\epsilon $ is an anticommuting parameter of the (anti)BRST symmetry.", "The idempotency of the BRST operator can be shown straightforwardly: $\\delta ^{2} \\chi = \\delta \\left( \\epsilon \\bar{\\theta }\\right) = \\epsilon \\delta \\bar{\\theta } =0$ and $\\delta ^{2} \\theta = \\delta \\left( \\chi \\epsilon \\right) =- \\epsilon ^{2} \\theta =0.$" ] ]
2212.05629
[ [ "Elliptically symmetric distributions for directional data of arbitrary\n dimension" ], [ "Abstract We formulate a class of angular Gaussian distributions that allows different degrees of isotropy for directional random variables of arbitrary dimension.", "Through a series of novel reparameterization, this distribution family is indexed by parameters with meaningful statistical interpretations that can range over the entire real space of an adequate dimension.", "The new parameterization greatly simplifies maximum likelihood estimation of all model parameters, which in turn leads to theoretically sound and numerically stable inference procedures to infer key features of the distribution.", "Byproducts from the likelihood-based inference are used to develop graphical and numerical diagnostic tools for assessing goodness of fit of this distribution in a data application.", "Simulation study and application to data from a hydrogeology study are used to demonstrate implementation and performance of the inference procedures and diagnostics methods." ], [ "Introduction", "Directional data are ubiquitous in oceanography with wave directions as an example, in meteorology where wind directions are directional data of interest, and in biology where protein backbone structures are directional data researchers study.", "These exemplify directional data of dimension no higher than three.", "Other examples of low dimensional direction data include migratory movements of animals, and measurements on a periodic scale, such as weekdays and hours.", "Directional data of higher dimensions arise in bioinformatics and hydrogeology, among many other fields of research.", "For example, gene expression data associated with a large number of genes for each experimental unit are often standardized to preserve directional characteristics when studying the fluctuation of gene expressions over cell cycles [7].", "By transforming the original gene expression data on a high dimensional Euclidean space to a unit hypersphere, one ignores absolute expression levels and can obtain better clustering of genes that are functionally related [3].", "Another example of directional data with dimension usually higher than three is compositional data [21], [1].", "For instance, microbiome data are often summarized as the composition of bacterial taxa so that one can focus on the microbial relative abundances as opposed to absolute abundances in microbiome analysis [26].", "A compositional data point is a vector with non-negative components that sum to one, hence a component-wise square-root transformation of this vector yields a vector on a unit hypersphere [24].", "Each of the above examples of directional data can be viewed as realizations of a random variable supported on a unit-radius $d$ -dimensional spherical space defined by $\\mathbb {S}^{d-1} = \\lbrace \\mathbf {y}\\in \\mathbb {R}^d : \\Vert \\mathbf {y}\\Vert = 1 \\rbrace $ , for $d\\ge 2$ , where $\\Vert \\mathbf {y}\\Vert $ is the $L_2$ -norm of $\\mathbf {y}$ .", "[13] provided a brief survey of statistical methods for analyzing circular data, i.e., directional data with $d=2$ .", "Two general strategies for constructing a circular distribution are highlighted in this review paper: one uses a “wrapped\" circular version of a random variable supported on $\\mathbb {R}$ to formulate a circular distribution; the other deduces a circular distribution via projecting a univariate random variable on $\\mathbb {R}$ or a bivariate random variable on $\\mathbb {R}^2$ onto 
the circle.", "Both strategies have been generalized and used to formulate directional distributions on $\\mathbb {S}^{d-1}$ for $d>2$ .", "With the Gaussian distribution playing an important role in statistics, it is not surprising that directional distributions originating from a Gaussian distribution have been most studied and adopted in practice, including the so-called wrapped normal distribution and projected normal distribution, with more attention on the latter in recent literature.", "In particular, [22] used a projected multivariate normal distribution to construct a regression model for a circular response and linear predictors, and employed the maximum likelihood method to infer unknown parameters.", "[29] incorporated projected normal distributions to develop Bayesian hierarchical models for analyzing circular data.", "[10] proposed Bayesian inferential method for directional data of arbitrary dimension, again modelled by projected normal distributions.", "Projected normal distributions are also referred to as angular Gaussian distributions.", "Different angular Gaussian distributions are created by imposing different constraints on the parameter space associated with a multivariate Gaussian distribution in order to resolve the non-identifiability issue that arises when the support of a random variable changes from a Euclidean space to a spherical space.", "[19] imposed constraints on the mean vector and variance-covariance matrix of a Gaussian distribution so that the resultant angular Gaussian distribution is identifiable and, more interestingly, elliptically symmetric.", "The authors thus coined their proposed distribution as the elliptically symmetric angular Gaussian distribution, ESAG for short.", "[20] further developed regression models for directional data assuming an ESAG distribution for the response given covariates.", "Both works on ESAG focus on directional data with $d\\le 3$ .", "More recently, [25] proposed a new directional distribution, called scaled von Mises-Fisher distribution, using grouped transformations of the von Mises-Fisher distribution to achieve elliptical symmetry.", "The authors used this new distribution to model archeomagnetic data that can be converted to directional data with $d=3$ .", "The feature of elliptical symmetry of a distribution makes capturing certain anisotropic pattern of directional data possible.", "An added benefit of ESAG is that the normalization constant in its probability density function is much easier to compute compared to many existing directional distributions, such as the Kent distribution [12].", "This makes maximum likelihood estimation under the ESAG model for directional data more straightforward.", "To incorporate the constraints imposed on the mean vector and variance-covariance matrix of a Gaussian distribution when formulating ESAG, [19] designed a parameterization of ESAG when $d=3$ , which allows one to bypass the complicated problem of optimization with constraints when finding the maximum likelihood estimators of the induced parameters.", "But their parameterization cannot be easily generalized to cases with $d>3$ .", "This limits the use of ESAG in applications where directional data of higher dimension are observed.", "The first contribution of our study presented in this paper is a novel parameterization of ESAG that yields a mathematically sophisticated model for directional data of arbitrary dimension.", "This new parameterization of ESAG for $d\\ge 3$ is presented in Section .", "Under the new 
parameterization, maximum likelihood estimation translates to a routine numerical problem of optimization without constraints, as we describe in Section .", "A legitimate concern in any parametric modelling is potential violations of certain model assumptions in a given application.", "To address this concern, we propose model diagnostic methods that exploit directional residuals in Section , which constitutes a second major contribution of our study.", "Operating characteristics of the proposed model diagnostic methods are demonstrated in a simulation study in Section .", "In Section , we entertain data from hydrogeological research, where we fit ESAG to compositional data from different geographic locations.", "Section  summarizes the contributions of the study and outlines the follow-up research agenda." ], [ "Constraints on parameters", "Let $\mathbf {X}$ be a $d$ -dimensional Gaussian variable with mean $\mbox{$\mu $}$ and variance-covariance $\mathbf {V}$ , i.e., $\mathbf {X}\sim N_d (\mbox{$\mu $}, \mathbf {V})$ .", "Then the normalized variable, $\mathbf {Y}=\mathbf {X}/\Vert \mathbf {X}\Vert $ , follows an angular Gaussian distribution, $\mbox{AG}(\mbox{$\mu $}, \mathbf {V})$ , supported on $\mathbb {S}^{d-1}$ .", "Parameters in $\mbox{$\mu $}$ and $\mathbf {V}$ associated with $\mbox{AG}(\mbox{$\mu $}, \mathbf {V})$ are not identifiable because $\mathbf {X}/\Vert \mathbf {X}\Vert $ and $c\mathbf {X}/\Vert c\mathbf {X}\Vert $ are equal for $c>0$ , and thus they follow the same angular distribution, even though $\mathbf {X}$ and $c\mathbf {X}$ have a different mean and/or variance-covariance when $c\ne 1$ .", "To construct an identifiable angular Gaussian distribution, [19] imposed the following two sets of constraints on $\mbox{$\mu $}$ and $\mathbf {V}$ , where $\mbox{det}(\cdot )$ refers to the determinant of a matrix, $\mathbf {V}\mbox{$\mu $}= \mbox{$\mu $}$ and $\mbox{det}(\mathbf {V}) =1$ , leading to the ESAG distribution, with the probability density function given by $f(\mathbf {y}|\mbox{$\mu $},\mathbf {V})=\frac{(2\pi )^{-(d-1)/2}}{ (\mathbf {y}^{ \mathrm {\scriptscriptstyle T} }\mathbf {V}^{-1} \mathbf {y})^{d/2}}\exp \left[ \frac{1}{2}\left\lbrace \frac{(\mathbf {y}^{ \mathrm {\scriptscriptstyle T} }\mbox{$\mu $})^2}{\mathbf {y}^{ \mathrm {\scriptscriptstyle T} }\mathbf {V}^{-1}\mathbf {y}} -\mbox{$\mu $}^{ \mathrm {\scriptscriptstyle T} }\mbox{$\mu $}\right\rbrace \right] M_{d-1}\left\lbrace \frac{\mathbf {y}^{ \mathrm {\scriptscriptstyle T} }\mbox{$\mu $}}{(\mathbf {y}^{ \mathrm {\scriptscriptstyle T} }\mathbf {V}^{-1}\mathbf {y})^{1/2}}\right\rbrace ,$ where $M_{d-1}(t)=(2\pi )^{-1/2}\int _{0}^{\infty }x^{d-1} \exp \lbrace -(x-t)^2/2 \rbrace dx$ .", "Henceforth, we say that $\mathbf {Y}$ follows a $(d-1)$ -dimensional ESAG, or $\mathbf {Y}\sim \mbox{ESAG}_{d-1}(\mbox{$\mu $}, \mathbf {V})$ , if $\mathbf {Y}$ follows a distribution specified by the density in (REF ) with constraints in (REF ) and ().", "Figure REF presents four random samples scattered on 3-dimensional spheres, generated from $\mbox{ESAG}_2(\mbox{$\mu $}, \, \mathbf {V})$ with the following parameter specifications, where $\mathbf {1}_d$ is a vector of $d$ ones and $\mathbf {I}_d$ is the $d$ -dimensional identity matrix: (a) $\mbox{$\mu $}\ =\ 2\times \mathbf {1}_3, \, \ \ \mathbf {V}\ =\ \mathbf {I}_3$ ; (c) $\mbox{$\mu $}\ =\ 2\times \mathbf {1}_3, \\ \mathbf 
{V}= \\begin{bmatrix}1.57 & -0.08 & -0.50 \\\\-0.08 & 0.74 & 0.34 \\\\-0.50 & 0.34 & 1.16\\end{bmatrix}$ ; (b) $\\mbox{$\\mu $}\\ =\\ 4\\times \\mathbf {1}_3, \\,\\ \\ \\mathbf {V}\\ =\\ \\mathbf {I}_3$ ; (d) $\\mbox{$\\mu $}\\ =\\ 2\\times \\mathbf {1}_3, \\\\ \\mathbf {V}=\\begin{bmatrix}0.74 & -0.08 & 0.34 \\\\-0.08 & 1.57 & -0.50 \\\\0.34 & -0.50 & 1.16\\end{bmatrix}$ .", "Figure: Four random samples from ESAG 2 (μ,𝐕)\\mbox{ESAG}_2(\\mbox{$\\mu $}, \\, \\mathbf {V}) with μ\\mbox{$\\mu $} and 𝐕\\mathbf {V} specified by (a)–(d) in Section .Comparing the four data clouds depicted in Figure REF , one can see that a larger $\\Vert \\mbox{$\\mu $}\\Vert $ leads to less variability in a random sample (e.g., contrasting (a) with (b)); and $\\mathbf {V}$ also influences the orientation of the data cloud (e.g., comparing (a), (c), and (d)).", "Because the dimension of the parameter space associated with $N_d(\\mbox{$\\mu $}, \\mathbf {V})$ is $d(d+3)/2$ , and there are $d+1$ constraints imposed by (REF ) and (), there are at most $p=(d-1)(d+2)/2$ identifiable parameters for $\\mbox{ESAG}_{d-1}(\\mbox{$\\mu $}, \\mathbf {V})$ .", "Let $\\mbox{$\\Omega $}$ be the $p\\times 1$ parameter vector that specifies $\\mbox{ESAG}_{d-1}(\\mbox{$\\mu $}, \\mathbf {V})$ .", "To facilitate likelihood-based inference, it is desirable to formulate $\\mbox{$\\Omega $}$ so that the parameter space is $\\mathbb {R}^p$ .", "For this purpose, we define $\\mbox{$\\Omega $}=(\\mbox{$\\mu $}^{ \\mathrm {\\scriptscriptstyle T} }, \\mbox{$\\gamma $}^{ \\mathrm {\\scriptscriptstyle T} })^{ \\mathrm {\\scriptscriptstyle T} }$ , where, clearly, $\\mbox{$\\mu $}\\in \\mathbb {R}^d$ , and thus $\\mbox{$\\gamma $}\\in \\mathbb {R}^{(d-2)(d+1)/2}$ includes parameters needed to specify $\\mathbf {V}$ that satisfies (REF ) and () after $\\mbox{$\\mu $}$ is given.", "The parameterization leading to $\\mbox{$\\gamma $}$ starts from the spectral decomposition of $\\mathbf {V}$ , $\\mathbf {V}= \\sum _{j=1}^{d} \\lambda _j \\mathbf {v}_j\\mathbf {v}_j^{ \\mathrm {\\scriptscriptstyle T} }, $ where $\\lambda _1,...,\\lambda _d\\in (0,\\, +\\infty )\\triangleq \\mathbb {R}_+$ are eigenvalues of $\\mathbf {V}$ , and $\\mathbf {v}_1,...,\\mathbf {v}_d$ are the corresponding orthonormal eigenvectors.", "According to (REF ), one of the eigenvalues of $\\mathbf {V}$ is equal to 1, with $\\mbox{$\\mu $}$ being the corresponding (non-zero) eigenvector.", "Without loss of generality, we set $\\lambda _d = 1$ and $\\mathbf {v}_d = \\mbox{$\\mu $}/\\Vert \\mbox{$\\mu $}\\Vert $ .", "It follows that $\\lambda _1 = 1/\\prod _{j=2}^{d-1}\\lambda _j$ since $\\mbox{det}(\\mathbf {V})=\\prod _{j=1}^d \\lambda _j=1$ by ().", "To this end, once $\\mbox{$\\mu $}$ is given, one needs to formulate $\\mbox{$\\gamma $}$ so that it can be mapped to $ \\lambda _2,...,\\lambda _{d-1}$ and $\\mathbf {v}_1,...,\\mathbf {v}_{d-1}$ , through which $\\mathbf {V}$ is determined via (REF ).", "In what follows, we present the derivations leading to such mapping in two steps." 
], [ "Constructing eigenvectors", "We first define an orthonormal basis of $\\mathbb {R}^d$ as a function of $\\mbox{$\\mu $}=(\\mu _1, \\ldots , \\mu _d)^{ \\mathrm {\\scriptscriptstyle T} }$ , denoted by $(\\tilde{\\mathbf {v}}_1, \\ldots , \\tilde{\\mathbf {v}}_d)$ , with $\\tilde{\\mathbf {v}}_j=\\mathbf {u}_j/\\Vert \\mathbf {u}_j\\Vert $ , for $j=1, \\ldots , d$ , and $\\begin{aligned}\\mathbf {u}_j & =\\left\\lbrace \\begin{array}{ll}(-\\mu _2,\\, \\mu _1,\\,0,...,0)^{ \\mathrm {\\scriptscriptstyle T} }, & \\mbox{for $j=1$,}\\\\(\\mu _1\\mu _{j+1},\\, ...,\\, \\mu _j\\mu _{j+1},\\,- \\sum _{k=1}^{j} \\mu _k^2, \\, 0, \\ldots , 0)^{ \\mathrm {\\scriptscriptstyle T} }, & \\mbox{for $j=2,...,d-1$,} \\\\\\mbox{$\\mu $}& \\mbox{for $j=d$}.\\end{array}\\right.\\end{aligned}$ If (REF ) yields $\\mathbf {u}_j=\\mathbf {0}_d$ , for $j\\in \\lbrace 1, \\ldots , d-1\\rbrace $ , then we set $\\mathbf {u}_j=\\mathbf {e}_j$ , i.e., the unit vector with 1 at the $j$ -th entry.", "By (REF ), $\\tilde{\\mathbf {v}}_d=\\mathbf {v}_d$ .", "We next relate $\\lbrace \\tilde{\\mathbf {v}}_j\\rbrace _{j=1}^{d-1}$ to $\\lbrace \\mathbf {v}_j\\rbrace _{j=1}^{d-1}$ via a $(d-1)$ -dimensional rotation matrix $\\mathcal {R}_{d-1}$ formulated following the strategy proposed by [17], which [24] exploited to parameterize the Kent distribution for modeling compositional data.", "Following this strategy, for $d>3$ , we write $\\mathcal {R}_{d-1}$ as a product of $(d-2)(d-1)/2$ plane rotation matrices that are functions of longitude angles, $ \\theta _1,..,\\theta _{d-2}\\in [-\\pi , \\pi )$ , and latitude angles, $\\phi _1,..,\\phi _{(d-2)(d-3)/2}\\in [0,\\pi ]$ .", "Here, a $(d-1)$ -dimensional plane rotation matrix $R_{jk}^*(\\cdot )$ comes from replacing the $(j,j)$ , $(j,k)$ , $(k,j)$ , and $(k,k)$ entries of $\\mathbf {I}_{d-1}$ by $\\cos (\\cdot )$ , $-\\sin (\\cdot )$ , $\\sin (\\cdot )$ , and $\\cos (\\cdot )$ , respectively.", "More specifically, we define $(\\mathbf {v}_1,...,\\mathbf {v}_{d-1})=(\\tilde{\\mathbf {v}}_1,...,\\tilde{\\mathbf {v}}_{d-1})\\mathcal {R}_{d-1}$ , where $\\mathcal {R}_{d-1} =\\left[\\prod _{m=1}^{d-3}\\left\\lbrace R_{12}^*(\\theta _{d-m-1})\\prod _{j=1}^{d-m-2}R_{j+1,j+2}^*(\\phi _{1-j+(d-m-1)(d-m-2)/2}) \\right\\rbrace \\right]R_{12}^*(\\theta _1).$ The rotation matrix in (REF ) depends on $(d-2)(d-1)/2$ angles that we refer to as orientation parameters in the sequel.", "Putting the orientation parameters along with the eigenvaules, we have the collection of parameters needed to specify $\\mathbf {V}$ after $\\mbox{$\\mu $}$ is given in $(\\lambda _2, \\ldots , \\lambda _{d-1}, \\, \\theta _1, \\ldots , \\theta _{d-2}, \\, \\phi _1, \\ldots , \\phi _{(d-2)(d-3)/2})$ .", "We next turn to defining $\\mbox{$\\gamma $}\\in \\mathbb {R}^{(d-2)(d+1)/2}$ that can be mapped to this collection of parameters via groups of transformations." 
], [ "Grouped spherical transformations", "Following setting $\\lambda _d=1$ , we now let $\\lambda _1 \\le ...\\le \\lambda _{d-1}$ , and write $\\lambda _j=(r_{j-1}+1)\\lambda _{j-1}$ , where $r_{j-1}\\ge 0$ , for $j=2, \\ldots , d-1$ .", "Because $\\prod _{j=1}^{d-1} \\lambda _j=1$ by (), the first $d-1$ eigenvalues can be expressed in terms of $r_1, \\ldots , r_{d-2}$ as follows, $\\lambda _1 = \\left\\lbrace \\prod _{j=1}^{d-2} (r_j+1)^{d-(j+1)}\\right\\rbrace ^{-1/(d-1)} \\textrm { and }\\lambda _j = \\lambda _1\\prod _{k=1}^{j-1}(r_k+1), \\mbox{ for $j=2,...,d-1$.", "}$ We call $r_1, \\ldots , r_{d-2}$ radial parameters for a reason to become clear momentarily.", "In what follows, we define transformations mapping $\\mbox{$\\gamma $}$ to radial and orientation parameters in $\\widetilde{\\mbox{$\\Omega $}}=(r_1, \\ldots , r_{d-2}, \\, \\theta _1, \\ldots , \\theta _{d-2}, \\, \\phi _1, \\ldots , \\phi _{(d-2)(d-3)/2})^{ \\mathrm {\\scriptscriptstyle T} }$ after partitioning these parameters into $d-2$ groups motivated by the following observations.", "As the dimension of $\\mathbf {Y}$ increases from $k$ to $k+1$ , where $k \\ge 3$ , we need one additional radial parameter to account for the additional eigenvalue of $\\mathbf {V}$ , along with, by (REF ), one additional longitude angle and $k-2$ additional latitude angles, yielding a total of $k$ additional parameters needed to specify $\\mathbf {V}$ when one increases the dimension of $\\mathbf {Y}$ by one from $k$ .", "This collection of additional parameters can be viewed as the parameters needed to specify a $(k-1)$ -sphere under a spherical coordinate system [16], in terms of both parameter counts and parameter interpretations.", "A spherical coordinate system for characterizing $(k-1)$ -spheres of arbitrary radius consists of one radial coordinate ranging over $[0, \\, +\\infty )$ , where a radial parameter falls, one angular coordinate ranging over $[-\\pi , \\, \\pi )$ , which a longitude angle is within, and another $k-2$ angular coordinates, each ranging over $[0, \\, \\pi ]$ , which a latitude angle belongs to.", "These $k$ radial and orientation parameters can then link to $k$ parameters in $\\mathbb {R}^k$ using the connection between the spherical coordinate system in the $(k-1)$ -dimensional spherical space and the Cartesian coordinate system in the $k$ -dimensional Euclidean space [4].", "This is the connection that relates $\\mbox{$\\gamma $}$ to $\\widetilde{\\mbox{$\\Omega $}}$ after partitioning $\\widetilde{\\mbox{$\\Omega $}}$ in a way that we demonstrate in a concrete example next.", "Suppose that $\\mathbf {Y}\\sim \\mbox{ESAG}_4(\\mbox{$\\mu $}, \\, \\mathbf {V})$ and thus $d=5$ .", "After $\\mbox{$\\mu $}$ is specified, we need radial and orientation parameters in $\\widetilde{\\mbox{$\\Omega $}}=(r_1, r_2, r_3, \\, \\theta _1, \\theta _2, \\theta _3, \\, \\phi _1, \\phi _2, \\phi _3)^{ \\mathrm {\\scriptscriptstyle T} }$ to specify $\\mathbf {V}$ .", "We divide $\\widetilde{\\mbox{$\\Omega $}}$ into $3(=d-2)$ groups of parameters as follows: $(r_1, \\, \\theta _1)$ , which are the only radial and orientation parameters needed to specify an ESAG resulting from normalizing a bivariate Gaussian random variable, i.e., $k=2$ ; $(r_2, \\, \\theta _2, \\, \\phi _1)$ , which includes the three additional radial and orientation parameters as we move from a 2-dimensional ESAG to a 3-dimensional ESAG, i.e., the dimension of the random variable changes from $k=3$ to $k+1=4$ ; $(r_3, \\, \\theta _3, \\, \\phi _2, \\, 
\phi _3)$ , which contains the four additional radial and orientation parameters as the dimension of the ESAG random variable increases from $k=4$ to $k+1=5$ .", "In general, for $d\ge 3$ , we divide $(d-2)(d+1)/2$ parameters in $\widetilde{\mbox{$\Omega $}}$ into $d-2$ groups, with the first group being $(r_1, \,\theta _1)$ , and, for $j=2,...,d-2$ , the $j$ -th group being $(r_j, \, \theta _j, \, \tilde{\mbox{$\phi $}}_j)$ , where $\tilde{\mbox{$\phi $}}_j = (\tilde{\phi }_{j,1},...,\tilde{\phi }_{j,j-1})^{ \mathrm {\scriptscriptstyle T} }$ .", "In other words, $\tilde{\phi }_{j,k}$ , for $j=2, \ldots , d-2$ and $k=1, \ldots , j-1$ , are the original latitude angles assigned to the $j$ -th group.", "We then formulate each group of parameters in $\widetilde{\mbox{$\Omega $}}$ using a group of new parameters in the corresponding Euclidean space by invoking the connection between a spherical coordinate system and the corresponding Cartesian coordinate system [4].", "To adapt to the grouping for $\widetilde{\mbox{$\Omega $}}$ , we also define $\mbox{$\gamma $}$ as $d-2$ groups of parameters, $\mbox{$\gamma $}= (\tilde{\mbox{$\gamma $}}_1^{ \mathrm {\scriptscriptstyle T} },...,\tilde{\mbox{$\gamma $}}_{d-2}^{ \mathrm {\scriptscriptstyle T} })^{ \mathrm {\scriptscriptstyle T} }$ , where $\tilde{\mbox{$\gamma $}}_j = (\gamma _{j,1},...,\gamma _{j,j+1})^{ \mathrm {\scriptscriptstyle T} }\in \mathbb {R}^{j+1}$ , for $ j=1,...,d-2$ .", "Then the transformations that map $\mbox{$\gamma $}$ to $\widetilde{\mbox{$\Omega $}}$ are given by $r_1 = \Vert \tilde{\mbox{$\gamma $}}_1\Vert , \, \,\theta _1 = \mbox{atan2}(\gamma _{1,2},\gamma _{1,1}),$ and, for $j=2, \ldots , d-2$ , $r_j = \Vert \tilde{\mbox{$\gamma $}}_{j}\Vert $ , $\begin{aligned}\theta _j & = {\left\lbrace \begin{array}{ll}0, & \text{if $\gamma _{j,j}^2+\gamma _{j,j+1}^2 = 0$,}\\\displaystyle {\mbox{arccos}\frac{\gamma _{j,j}}{\sqrt{\gamma _{j,j}^2+\gamma _{j,j+1}^2}}}, & \text{if $\gamma _{j,j+1} \ge 0$ and $\gamma _{j,j}^2+\gamma _{j,j+1}^2 \ne 0$,}\\\displaystyle {-\mbox{arccos}\frac{\gamma _{j,j}}{\sqrt{\gamma _{j,j}^2+\gamma _{j,j+1}^2}}}, & \text{if $\gamma _{j,j+1} < 0$}, \end{array}\right.}\\\tilde{\phi }_{j,k} & = {\left\lbrace \begin{array}{ll}0, & \text{if $\sum _{\ell =k}^{j+1} \gamma _{j,\ell }^2 = 0$, for $k=1, ..., j-1$ },\\\displaystyle {\mbox{arccos} \frac{\gamma _{j,k} }{\sqrt{\sum _{\ell =k}^{j+1} \gamma _{j,\ell }^2 }}}, & \text{otherwise, for $k=1, ..., j-1$.}\end{array}\right.}\end{aligned}$ This completes the parameterization of $\mbox{ESAG}_{d-1}(\mbox{$\mu $}, \, \mathbf {V})$ so that all identifiable parameters in $\mbox{$\Omega $}=(\mbox{$\mu $}^{ \mathrm {\scriptscriptstyle T} }, \, \mbox{$\gamma $}^{ \mathrm {\scriptscriptstyle T} })^{ \mathrm {\scriptscriptstyle T} }$ range over the entire real line.", "Having the parameter space be $\mathbb {R}^p$ greatly simplifies the implementation of maximum likelihood estimation for $\mbox{$\Omega $}$ ."
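The transformations above can be sketched in a few lines of R. The sketch below is illustrative only: $\mbox{$\gamma $}$ is supplied as a list of its $d-2$ groups, gamma_to_angles is an assumed name, and the output collects the radial parameters, the longitude and latitude angles, and the implied eigenvalues $\lambda _1, \ldots , \lambda _{d-1}$ (recall $\lambda _d=1$).

```r
## Minimal sketch (illustrative): map gamma (as a list of groups) to radial
## parameters, angles, and the first d-1 eigenvalues of V.
gamma_to_angles <- function(gamma_groups) {
  d <- length(gamma_groups) + 2
  r <- sapply(gamma_groups, function(g) sqrt(sum(g^2)))
  theta <- numeric(d - 2)
  phi <- vector("list", d - 2)
  theta[1] <- atan2(gamma_groups[[1]][2], gamma_groups[[1]][1])
  for (j in seq_len(d - 2)[-1]) {
    g  <- gamma_groups[[j]]
    s2 <- g[j]^2 + g[j + 1]^2
    theta[j] <- if (s2 == 0) 0 else
      if (g[j + 1] >= 0) acos(g[j] / sqrt(s2)) else -acos(g[j] / sqrt(s2))
    phi[[j]] <- sapply(1:(j - 1), function(k) {
      s <- sum(g[k:(j + 1)]^2)
      if (s == 0) 0 else acos(g[k] / sqrt(s))
    })
  }
  ## lambda_1 = {prod (r_j + 1)^(d - j - 1)}^(-1/(d-1)); lambda_j by recursion
  lam1 <- prod((r + 1)^(d - seq_along(r) - 1))^(-1 / (d - 1))
  lambda <- c(lam1, lam1 * cumprod(r + 1))     # lambda_1, ..., lambda_{d-1}
  list(r = r, theta = theta, phi = phi, lambda = lambda)
}
```

Combining this mapping with the basis and rotation construction of the previous subsection assembles $\mathbf {V}$ from $(\mbox{$\mu $}, \mbox{$\gamma $})$ , after which the log-likelihood can be handed to an unconstrained optimizer, e.g., R's optim() with method = "BFGS".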
], [ "Maximum likelihood estimation", "Using the parameterization of ESAG developed in Section , one can easily derive the likelihood function of a sample from ESAG, following which one can maximize the logarithm of it with respect to $\\mbox{$\\Omega $}$ over $\\mathbb {R}^p$ to obtain the maximum likelihood estimator (MLE) of $\\mbox{$\\Omega $}$ .", "Straightforward as it appears, some cautions should be given in this likelihood-based inference procedure, in part due to the nature of $\\mbox{$\\gamma $}$ ." ], [ "Interpretations of parameters", "According to Section REF , $\\mbox{$\\gamma $}=\\mathbf {0}$ implies $\\lambda _j=1$ , for $j=1,...,d$ , and thus $\\mathbf {V}=\\mathbf {I}_d$ , leading to an isotropic hyperspherical distribution [14].", "If $\\mathbf {Y}\\sim \\mbox{ESAG}_{d-1}(\\mbox{$\\mu $},\\, \\mathbf {I}_d)$ , then, for any orthogonal matrix $\\mathbf {P}$ such that $\\mathbf {P}\\mbox{$\\mu $}=\\mbox{$\\mu $}$ , we have $\\mathbf {P}\\mathbf {Y}\\sim \\mbox{ESAG}_{d-1}(\\mbox{$\\mu $},\\, \\mathbf {I}_d)$ , i.e., $\\mathbf {P}\\mathbf {Y}=\\mathbf {Y}$ in distribution, or, $\\mathbf {P}\\mathbf {Y}\\stackrel{\\mathcal {L}}{=}\\mathbf {Y}$ in short.", "In addition, if $\\tilde{\\mbox{$\\gamma $}}_j=\\mathbf {0}$ , then $r_j=0$ , and thus $\\lambda _{j+1}=(r_j+1)\\lambda _j=\\lambda _j$ , in which case we say that the distribution is isotropic in the subspace spanned by $\\lbrace \\mathbf {v}_j,\\mathbf {v}_{j+1}\\rbrace $ , or partially isotropic.", "That is, given any orthogonal matrix $\\mathbf {P}$ such that $\\mathbf {P}\\mbox{$\\mu $}=\\mbox{$\\mu $}$ and $\\mathbf {P}\\mathbf {v}_k=\\mathbf {v}_k$ , for $k\\ne j, j+1$ , we have $\\mathbf {P}\\mathbf {Y}\\stackrel{\\mathcal {L}}{=}\\mathbf {Y}$ .", "Practically speaking, this means that rotating data from an isotropic (a partially isotropic) ESAG via certain orthogonal matrix that rotates the mean direction to itself (and rotates certain eigenvectors of $\\mathbf {V}$ to themselves) does not change the distribution of the data.", "From the modelling point of view, any level of isotropy of ESAG implies a reduced model.", "Hence, testing whether or not a data set can be modelled by a reduced, thus more parsimonious, ESAG amounts to testing hypotheses regarding parameters in $\\mbox{$\\gamma $}$ .", "For example, testing $\\mathbf {V}=\\mathbf {I}_d$ is equivalent to testing $\\mbox{$\\gamma $}=\\mathbf {0}$ .", "A note of caution one should bear in mind when obtaining the MLE of $\\mbox{$\\Omega $}$ is that, even though the mapping from $\\mbox{$\\gamma $}$ to $(\\lambda _2,\\, \\ldots , \\, \\lambda _{d-1}, \\, \\mathbf {v}_1^{ \\mathrm {\\scriptscriptstyle T} }, \\, \\ldots , \\, \\mathbf {v}_{d-1}^{ \\mathrm {\\scriptscriptstyle T} })$ is bijective, the mapping from the latter to the former is not a bijection because, as one can see in (REF ), if $\\mathbf {v}_j$ is an eigenvector of $\\mathbf {V}$ corresponding to the eigenvalue $\\lambda _j$ , then so is $-\\mathbf {v}_j$ .", "This suggests that there exist $\\mbox{$\\gamma $}\\ne \\mbox{$\\gamma $}^{\\prime }$ yet both $\\mbox{$\\gamma $}$ and $\\mbox{$\\gamma $}^{\\prime }$ map to the same $\\mathbf {V}$ given $\\mbox{$\\mu $}$ .", "When this happens, we say that $\\mbox{$\\gamma $}$ and $\\mbox{$\\gamma $}^{\\prime }$ are equivalent.", "We show in Appendix A that, if $\\mbox{$\\gamma $}$ and $\\mbox{$\\gamma $}^{\\prime }$ are equivalent, then $\\Vert \\tilde{\\mbox{$\\gamma $}}_j\\Vert = \\Vert \\tilde{\\mbox{$\\gamma $}}_j^{\\prime }\\Vert $ , for 
$j=1,...,d-2$ , which in turn suggests that the interpretations of $\\mbox{$\\gamma $}$ and $\\mbox{$\\gamma $}^{\\prime }$ relevant to isotropy of ESAG are the same.", "A theoretical implication of the existence of equivalent $\\mbox{$\\gamma $}$ and $\\mbox{$\\gamma $}^{\\prime }$ is that, although one cannot claim consistency of the MLE of $\\mbox{$\\gamma $}$ (since the MLE may consistently estimate $\\mbox{$\\gamma $}$ or $\\mbox{$\\gamma $}^{\\prime }$ ), the consistency of the MLE of $\\mathbf {V}$ is guaranteed by the invariance property of MLE [5].", "A numerical implication of this is that maximum likelihood estimation of $\\mbox{$\\Omega $}$ tends to be very forgiving in terms of the starting value for $\\mbox{$\\Omega $}$ , especially when the focal point of inference lies in $\\mbox{$\\mu $}$ and $\\mathbf {V}$ .", "We provide empirical evidence of these implications in a simulation experiment next." ], [ "Empirical evidence", "Using the proposed parameterization, we generate a random sample of size $n=1000$ from $\\mbox{ESAG}_3(\\mbox{$\\mu $}, \\mathbf {V})$ , where $\\mbox{$\\mu $}=(2,\\,-2,\\,-1,\\,-3)^{ \\mathrm {\\scriptscriptstyle T} }$ , and $\\mathbf {V}$ is determined via $\\mbox{$\\mu $}$ and $\\mbox{$\\gamma $}=(\\gamma _{1,1}, \\, \\gamma _{1,2}, \\, \\gamma _{2,1}, \\, \\gamma _{2,2}, \\, \\gamma _{2,3})^{ \\mathrm {\\scriptscriptstyle T} }=(-2,\\,5,\\, 3, \\,5, \\,-8)^{ \\mathrm {\\scriptscriptstyle T} }$ .", "We then maximize the log-likelihood function of this random sample to find the MLE of $\\mbox{$\\Omega $}$ , denoted by $\\hat{\\mbox{$\\Omega $}}$ , using two different starting values of $\\mbox{$\\Omega $}$ : one coincides with the truth, the other is given by $\\mbox{$\\mu $}_0 = \\mathbf {1}_4$ and $\\mbox{$\\gamma $}_0 = \\mathbf {0}$ .", "This produces two estimates of $\\mbox{$\\Omega $}$ .", "We repeat this experiment 100 times.", "In all 100 Monte Carlo replicates, we employ the Broyden-Fletcher-Goldfarb-Shanno algorithm [8] to find a maximizer of the log-likelihood function.", "In fact, we find that most commonly used optimization algorithms work well in maximizing the objective function despite the choice of starting values, partly thanks to the fact that transformations involved in the parameterization derivations in Section  are mostly smooth and simple enough.", "Figure REF presents graphical summaries of 100 realizations of a subset of $\\hat{\\mbox{$\\Omega $}}=(\\hat{\\mbox{$\\mu $}}^{ \\mathrm {\\scriptscriptstyle T} }, \\hat{\\mbox{$\\gamma $}}^{ \\mathrm {\\scriptscriptstyle T} })^{ \\mathrm {\\scriptscriptstyle T} }$ , $(\\hat{\\mu }_2, \\, \\hat{\\gamma }_{1,1}, \\, \\hat{\\gamma }_{2,1})$ , corresponding to each choice of starting value.", "In particular, for each parameter, a kernel density estimate based on 100 realizations of its MLE is depicted in Figure REF .", "The top panels of Figure REF , which present results from using the truth of $\\mbox{$\\Omega $}$ to start the optimization algorithm, provide empirical evidence suggesting that the usual asymptotic properties of an MLE, including consistency and asymptotic normality, are expected to hold for $\\hat{\\mbox{$\\Omega $}}$ when one uses a starting value in a neighborhood of the truth.", "The bottom panels of Figure REF , which show results from using a starting value that has little resemblance with the truth, indicate that $\\hat{\\mbox{$\\mu $}}$ still behaves like a regular MLE that is consistent and asymptotically normally distributed, but $\\hat{\\mbox{$\\gamma $}}$ 
appears to follow a bimodal distribution.", "The two modes of the distribution of $\hat{\mbox{$\gamma $}}$ are expected to be the true value of $\mbox{$\gamma $}$ and another value $\mbox{$\gamma $}^{\prime }$ that is equivalent to $\mbox{$\gamma $}$ .", "Figure: Estimated distributions of estimators for selected parameters in $\mbox{$\Omega $}$ based on 100 realizations of each parameter estimator when the true parameter values are used as the starting value (upper panels) and when $\mbox{$\mu $}_0$ and $\mbox{$\gamma $}_0$ not equal to the truth are used as starting values (lower panels) in search of a maximizer of the log-likelihood.", "Vertical lines mark the true values of the corresponding parameters.", "Despite the potential bimodality of $\hat{\mbox{$\gamma $}}$ when a less carefully chosen starting value of $\mbox{$\Omega $}$ is used to find $\hat{\mbox{$\Omega $}}$ , the resultant estimate of $\mathbf {V}$ , $\hat{\mathbf {V}}$ , is similar, if not identical, to the estimate one obtains when using the truth as the starting value.", "Figure REF shows boxplots of the Frobenius norm of $\mathbf {V}-\hat{\mathbf {V}}$ corresponding to 100 realizations of $\hat{\mathbf {V}}$ resulting from each choice of the starting value.", "From there one can see that $\hat{\mathbf {V}}$ is virtually unaffected by the choice of starting values.", "Although the robustness of $\hat{\mbox{$\mu $}}$ and $\hat{\mathbf {V}}$ to the choice of starting value is reassuring, one should not treat $\hat{\mbox{$\gamma $}}$ as a conventional MLE due to its behavior observed in Figure REF .", "Consequently, the usual Fisher information matrix or the sandwich variance does not serve well for estimating the variance of $\hat{\mbox{$\Omega $}}$ .", "We thus recommend the use of the bootstrap for the uncertainty assessment of $\hat{\mbox{$\Omega $}}$ .", "Figure: Boxplots of the Frobenius norm of $\mathbf {V}-\hat{\mathbf {V}}$ as the sample size $n$ varies when the true parameter values are used as the starting value (in the left panel) and when $\mbox{$\mu $}_0$ and $\mbox{$\gamma $}_0$ not equal to the truth are used as starting values (in the right panel) in search of a maximizer of the log-likelihood." ], [ "Model diagnostics", "Even though the ESAG family accommodates certain anisotropic features of a distribution and thus offers some flexibility in modelling, it remains fully parametric and thus is subject to model misspecification in a given application.", "In this section, we develop residual-based model diagnostic tools that data analysts can use to assess whether or not an ESAG distribution provides an adequate fit for their directional data, either as a marginal distribution or as a conditional distribution of the directional response given covariates $\mathbf {W}$ as in a regression setting."
], [ "Residuals", "Denote by $\\lbrace \\mathbf {Y}_i\\rbrace _{i=1}^n$ the observed directional data of size $n$ , where $\\mathbf {Y}_1, \\ldots , \\mathbf {Y}_n$ are independent with $\\mathbf {Y}_i\\sim \\mbox{ESAG}_{d-1}(\\mbox{$\\mu $}_i, \\, \\mathbf {V}_i)$ , for $i=1, \\ldots , n$ .", "The subscript $i$ attached to the mean and variance-covariance can be dropped if one aims to assess the goodness of fit (GOF) for the observed data using an ESAG as the marginal distribution.", "Otherwise the subscript implies covariate-dependent model parameters in ESAG as in a regression model for $\\mathbf {Y}$ .", "In a non-regression or regression setting, after one obtains the MLE of all unknown parameters in the model, one has the MLEs $\\hat{\\mbox{$\\mu $}}_i$ and $\\hat{\\mathbf {V}}_i$ , following which a prediction can be made by $\\hat{\\mathbf {Y}}_i = \\hat{\\mbox{$\\mu $}}_i/\\Vert \\hat{\\mbox{$\\mu $}}_i\\Vert $ , for $i=1, \\ldots , n$ .", "Similar to a directional residual defined in [11], we define residuals as $\\hat{\\mathbf {r}}_i = \\left(\\mathbf {I}_d-\\hat{\\mathbf {Y}}_i \\hat{\\mathbf {Y}}_i^{ \\mathrm {\\scriptscriptstyle T} }\\right)\\mathbf {Y}_i, \\mbox{ for $i=1, \\ldots , n$}.$ In (REF ), $\\hat{\\mathbf {Y}}_i\\hat{\\mathbf {Y}}^{ \\mathrm {\\scriptscriptstyle T} }_i$ can be viewed as the projection onto the space spanned by $\\hat{\\mbox{$\\mu $}}_i$ , and thus $\\mathbf {I}_d-\\hat{\\mathbf {Y}}_i \\hat{\\mathbf {Y}}_i^{ \\mathrm {\\scriptscriptstyle T} }$ is the projection onto the space orthogonal to the space spanned by $\\hat{\\mbox{$\\mu $}}_i$ .", "Equivalently, by the orthogonality of eigenvectors of $\\hat{\\mathbf {V}}_i$ , $\\mathbf {I}_d-\\hat{\\mathbf {Y}}_i \\hat{\\mathbf {Y}}_i^{ \\mathrm {\\scriptscriptstyle T} }$ is the projection onto the space spanned by the $d-1$ eigenvectors of $\\hat{\\mathbf {V}}_i$ that are orthogonal to $\\hat{\\mbox{$\\mu $}}_i$ , denote by $\\lbrace \\hat{\\mathbf {v}}_{i,j} \\rbrace _{j=1}^{d-1}$ .", "Hence (REF ) can be re-expressed as $\\hat{\\mathbf {r}}_i=\\hat{\\mathbf {P}}_{-d} \\hat{\\mathbf {P}}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Y}_i$ , where $\\hat{\\mathbf {P}}_{-d}=[\\hat{\\mathbf {v}}_{i,1} \\mid ...\\mid \\hat{\\mathbf {v}}_{i, d-1}]$ , that is, $\\hat{\\mathbf {P}}_{-d}$ is the $d\\times (d-1)$ matrix with the $j$ -th column being $\\hat{\\mathbf {v}}_{i,j}$ , for $j=1,\\ldots , d-1$ .", "The potential dependence $\\hat{\\mathbf {P}}_{-d}$ on covariates via the subscript $i$ is suppressed for simplicity.", "For model diagnostic purposes, we use the following quadratic form of residuals, $\\hat{Q}_i = \\hat{\\mathbf {r}}_i^{ \\mathrm {\\scriptscriptstyle T} }\\hat{\\mathbf {V}}_i^{-1} \\hat{\\mathbf {r}}_i, \\mbox{ for $i=1, \\ldots , n$}.$ Note that $\\hat{\\mathbf {r}}_i=\\hat{\\mathbf {P}}_{-d} \\hat{\\mathbf {P}}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Y}_i$ converges to $\\mathbf {r}_i=\\mathbf {P}_{-d} \\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Y}_i$ in distribution, where $\\mathbf {P}_{-d}$ results from excluding the $d$ -th column of the $d \\times d$ matrix $\\mathbf {P}=[\\mathbf {v}_1 \\mid ... 
\\mid \\mathbf {v}_{d-1} \\mid \\mathbf {v}_{d}]$ , and $\\mathbf {P}_{-d}\\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }= \\mathbf {I}_d - \\mbox{$\\mu $}_i\\mbox{$\\mu $}_i^{ \\mathrm {\\scriptscriptstyle T} }/\\Vert \\mbox{$\\mu $}_i\\Vert ^2$ .", "Additionally, $\\hat{\\mathbf {V}}_i$ converges to $\\mathbf {V}_i$ in probability as $n\\rightarrow \\infty $ .", "Thus, (REF ) converges to $ Q_i=\\mathbf {r}_i^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {V}^{-1}_i \\mathbf {r}_i$ in distribution as $n\\rightarrow \\infty $ .", "In what follows, we investigate the distribution of $Q_i$ to gain insight on the asymptotic distribution of (REF ).", "The subscript $i$ as the data point index is suppressed in this investigation.", "For $\\mathbf {Y}\\sim \\mbox{ESAG}_{d-1}(\\mbox{$\\mu $}, \\, \\mathbf {V})$ , the random variable can be expressed as $\\mathbf {Y}= \\mathbf {X}/\\Vert \\mathbf {X}\\Vert = (\\mathbf {V}^{1/2}\\mathbf {Z}+\\mbox{$\\mu $})/\\Vert \\mathbf {X}\\Vert $ , where $\\mathbf {Z}\\sim N_d(\\mathbf {0}, \\, \\mathbf {I}_d)$ .", "Hence, $\\mathbf {r}= \\mathbf {P}_{-d}\\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Y}= {\\mathbf {P}_{-d}} \\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {V}^{1/2}\\mathbf {Z}/\\Vert \\mathbf {X}\\Vert $ , following which we show in Appendix B that $Q = \\mathbf {r}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {V}^{-1} \\mathbf {r}=\\frac{\\Vert \\mathbf {U}_{-d}\\Vert ^2}{\\Vert \\mathbf {X}\\Vert ^2}, $ where $\\mathbf {U}_{-d}$ results from replacing the $d$ -th entry of $\\mathbf {U}=\\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Z}$ with zero.", "Since $\\mathbf {P}$ is an orthogonal matrix, $\\mathbf {U}=\\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Z}\\sim N_d(\\mathbf {0}, \\, \\mathbf {I}_d)$ , and thus $\\Vert \\mathbf {U}_{-d}\\Vert ^2 \\sim \\chi _{d-1}^2 $ .", "Now we see that $Q$ relates to the quotient of norms of Gaussian vectors, the distribution of which was studied in [15], following which one can derive the distribution of $Q$ analytically.", "One then can see that $Q$ is not a pivotal quantity and its distribution is not of a form familiar or easy enough for direct use for model diagnosis.", "We next construct a transformation of $Q$ aiming at attaining an approximate pivotal quantity for the purpose of model diagnostics." 
], [ "Graphical model diagnostic", "Diagnostics methods proposed by [20] and [25] build upon the finding that, if $\\mathbf {Y}=(Y_1, \\ldots , Y_d)^{ \\mathrm {\\scriptscriptstyle T} }\\sim \\mbox{ESAG}_{d-1}(\\mbox{$\\mu $}, \\, \\mathbf {V})$ , then $\\Vert \\mbox{$\\mu $}\\Vert (Y_1, \\ldots , Y_{d-1})^{ \\mathrm {\\scriptscriptstyle T} }$ converges in distribution to $N_{d-1}(\\mathbf {0}, \\, \\sum _{j=1}^{d-1} \\lambda _j^{-1} \\mathbf {v}_j \\mathbf {v}_j^{ \\mathrm {\\scriptscriptstyle T} })$ as $\\Vert \\mbox{$\\mu $}\\Vert \\rightarrow \\infty $ [19].", "Following this finding, one also has that $T_0= \\Vert \\mbox{$\\mu $}\\Vert ^2 Q =(\\Vert \\mbox{$\\mu $}\\Vert ^2/\\Vert \\mathbf {X}\\Vert ^2) \\Vert \\mathbf {U}_{-d}\\Vert ^2$ converges in distribution to $\\chi _{d-1}^2$ for ESAG, and thus is a pivot in limit as $\\Vert \\mbox{$\\mu $}\\Vert \\rightarrow \\infty $ (instead of $n\\rightarrow \\infty $ ).", "One may thus assess adequacy of a posited ESAG model for a data set by checking if $\\lbrace \\hat{T}_{0,i}\\rbrace _{i=1}^n=\\lbrace \\Vert \\hat{\\mbox{$\\mu $}}_i\\Vert ^2\\hat{Q}_i\\rbrace _{i=1}^n$ approximately come from $\\chi ^2_{d-1}$ .", "As seen in Figure REF , a larger $\\Vert \\mbox{$\\mu $}\\Vert $ implies that the distribution has a higher concentration and thus less variability in data.", "This diagnostic strategy based on $T_0$ is thus intuitively well motivated since, with $\\Vert \\mbox{$\\mu $}\\Vert $ large, $\\Vert \\mbox{$\\mu $}\\Vert ^2/\\Vert \\mathbf {X}\\Vert ^2$ is expected to be close to one, making $T_0$ close to $\\Vert \\mathbf {U}_{-d}\\Vert ^2\\sim \\chi _{d-1}^2$ .", "However, empirical evidence from our extensive simulation study suggest that a practically unreasonably large $\\Vert \\mbox{$\\mu $}\\Vert $ is needed to make $\\chi _{d-1}^2$ a reasonably good approximation of the distribution of $T_0$ .", "Consequently, this strategy based on $T_0$ is of little practical value since data observed in most applications can rarely have low enough variability to make this approximation satisfactory.", "Motivated by the fact that $E(\\Vert \\mathbf {X}\\Vert ^2)=\\Vert \\mbox{$\\mu $}\\Vert ^2+\\sum _{j=1}^d \\lambda _j$ [23], we propose the following random quantity for diagnostics purposes, $T_1 & = \\left(\\Vert \\mbox{$\\mu $}\\Vert ^2 + \\sum _{j=1}^{d}\\lambda _j\\right) Q, $ which follows $\\chi ^2_{d-1}$ approximately when $\\Vert \\mbox{$\\mu $}\\Vert $ is large, with the approximation improves much faster than that for $T_0$ as $\\Vert \\mbox{$\\mu $}\\Vert $ increases, and thus is more like a pivot than $T_0$ is.", "Figure REF presents kernel density estimates of the distributions of $T_0$ and $T_1$ based on random samples of these random quantities, each of size 500, generated based on Monte Carlo replicates from $\\mbox{ESAG}_3(\\mbox{$\\mu $}, \\, \\mathbf {V})$ .", "More specifically, we set $\\Vert \\mbox{$\\mu $}\\Vert =4.24$ , which is not large enough to make the $\\chi ^2$ - approximation for $T_0$ satisfactory, and $\\sum _{j=1}^{d}\\lambda _j=11.1$ .", "As one can see in this figure, the variability of $T_0$ is way too low to make $\\chi ^2_{d-1}$ approximate its distribution well, and $T_1$ greatly improves over $T_0$ in its proximity to $\\chi ^2_{d-1}$ .", "In general, $T_1$ only requires a moderate $\\Vert \\mbox{$\\mu $}\\Vert $ to make the $\\chi ^2$ -approximation practically useful.", "Figure: Kernel density estimates of T 0 T_0 (dashed line) and T 1 T_1 (dotted line) comparing with the density of χ 3 2 \\chi 
^2_3$ (solid line).", "Following maximum likelihood estimation of all unknown parameters, one can exploit an empirical version of $T_1$ , $\lbrace \hat{T}_{1,i}\rbrace _{i=1}^n$ , where $\hat{T}_{1,i}=(\Vert \hat{\mbox{$\mu $}}_i\Vert ^2 + \sum _{j=1}^{d}\hat{\lambda }_{i,j}) \hat{Q}_i$ , for $i=1, \ldots , n$ , and check if $\lbrace \hat{T}_{1,i}\rbrace _{i=1}^n$ can be reasonably well modeled by $\chi _{d-1}^2$ .", "This can be a graphical check via a quantile-quantile (QQ) plot, for example, to see whether there exists any clear signal of this sample deviating from $\chi _{d-1}^2$ .", "Such a graphical check is easy to implement following parameter estimation, and can provide visual warning signs when ESAG is a grossly inadequate model for the observed data $\lbrace \mathbf {Y}_i\rbrace _{i=1}^n$ .", "Certainly, in a given application, the quality of the $\chi ^2$ -approximation for $T_1$ is unknown, with the true distribution of $T_1$ yet to be estimated.", "We next propose a bootstrap procedure to facilitate a quantitative test for model misspecification, which leads to another graphical diagnostic tool as a byproduct that does not rely on a $\chi ^2$ -approximation for $T_1$ ." ], [ "Goodness of fit test", "Consider testing the null hypothesis that $\mathbf {Y}$ follows an ESAG.", "Although $T_1$ defined in (REF ) approximately follows $\chi ^2_{d-1}$ under the null hypothesis, a testing procedure based on $T_1$ that does not acknowledge its exact null distribution can lead to misleading conclusions, e.g., an inflated Type I error for the test.", "Instead of estimating the exact null distribution of $T_1$ , we use a random sample of $T_1$ induced from an ESAG as a reference sample, and quantify the dissimilarity between this reference sample and the observed empirical version of $T_1$ , $\lbrace \hat{T}_{1,i}\rbrace _{i=1}^n$ .", "One may use a nonparametric test for testing if two data sets come from the same distribution, such as the Kolmogorov–Smirnov (KS) test [6] and the Cramér-von Mises test [2], to compare $\lbrace \hat{T}_{1,i}\rbrace _{i=1}^n$ and the reference sample induced from an ESAG.", "We employ the KS test in all simulation studies presented in this article.", "A smaller $p$ -value from the test indicates a larger distance between the underlying distribution of $\lbrace \hat{T}_{1,i}\rbrace _{i=1}^n$ and that of the reference sample, with the latter approximately representing what one expects for $T_1$ under the null hypothesis.", "Here, the ultimate test statistic for testing the null hypothesis is a $p$ -value from the KS test.", "Denote this test statistic as $\mbox{KS}_p$ .", "Even when data are from an ESAG, it is analytically unclear what $\mbox{KS}_p$ should be because the ESAG from which the reference sample is induced is not exactly the true ESAG (as to be seen next).", "We thus use a parametric bootstrap to estimate the null distribution of $\mbox{KS}_p$ to obtain an approximate $p$ -value to compare with a preset nominal level, such as 0.05, according to which we reject or fail to reject the null at the chosen nominal level.", "The following presents a detailed algorithm for this hypothesis testing procedure.", "Goodness-of-Fit Test Procedure.", "Step 1: compare the observed empirical version of $T_1$ with a reference sample (Steps 2–7).", "Step 2: given data $\lbrace \mathbf {Y}_i\rbrace _{i=1}^n$ for a non-regression setting or $\lbrace (\mathbf {Y}_i,\mathbf {W}_i)\rbrace _{i=1}^n$ for a regression setting, find the MLE $\hat{\mbox{$\mu $}}_i$ and 
$\\hat{\\mbox{$\\gamma $}}_i$ , for $i=1, \\ldots , n$ , assuming an ESAG model for $\\mathbf {Y}_i$ or $\\mathbf {Y}_i$ conditioning on $\\mathbf {W}_i$ .", "Compute $\\hat{\\mathbf {V}}_i$ and $\\lbrace \\hat{\\lambda }_{i,j}\\rbrace _{j=1}^{d-1}$ based on $\\hat{\\mbox{$\\mu $}}_i$ and $\\hat{\\mbox{$\\gamma $}}_i$ , for $i=1, \\ldots , n$ .", "Compute $\\hat{T}_{1,i}= (\\Vert \\hat{\\mbox{$\\mu $}}_i\\Vert ^2+\\sum _{j=1}^{d-1}\\hat{\\lambda }_{i,j})\\hat{Q}_i$ , for $i = 1 ,\\ldots , n$ .", "Generate $\\lbrace \\tilde{\\mathbf {Y}}_i\\rbrace _{i=1}^n$ , where $\\tilde{\\mathbf {Y}}_i \\sim \\mbox{ESAG}(\\hat{\\mbox{$\\mu $}}_i,\\hat{\\mathbf {V}}_i)$ , for $i = 1, ..., n$ .", "Compute $\\tilde{T}_{1,i}= (\\Vert \\hat{\\mbox{$\\mu $}}_i\\Vert ^2+\\sum _{j=1}^{d-1}\\hat{\\lambda }_{i,j})\\tilde{Q}_i$ , where $\\tilde{Q}_i =\\tilde{r}_i ^{ \\mathrm {\\scriptscriptstyle T} }\\hat{\\mathbf {V}}_i^{-1}\\tilde{r}_i$ and $\\tilde{r}_i=\\hat{\\mathbf {P}}_{-d}\\hat{\\mathbf {P}}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\tilde{\\mathbf {Y}}_i$ , for $i = 1 ,... , n $ .", "Use the KS test to test if $\\lbrace \\hat{T}_{1,i}\\rbrace _{i=1}^n$ and $\\lbrace \\tilde{T}_{1,i}\\rbrace _{i=1}^n$ arise from the same distribution.", "Denote by $\\mbox{KS}_p$ the resultant $p$ -value of the KS test.", "Bootstrap procedure to estimate the null distribution of $\\mbox{KS}_p$ Set $B$ = number of bootstraps Initiate $s = 0$ $b$ in $1,...,B$ Generate the $b$ -th bootstrap sample $\\lbrace \\mathbf {Y}_i^{(b)}\\rbrace _{i=1}^n$ , where $\\mathbf {Y}_i^{(b)}\\sim \\mbox{ESAG}(\\hat{\\mbox{$\\mu $}}_i, \\, \\hat{\\mathbf {V}}_i)$ for $i = 1, ..., n$ .", "Repeat steps 2–7 using data $\\lbrace \\mathbf {Y}_i^{(b)}\\rbrace _{i=1}^n$ for a non-regression setting or $\\lbrace (\\mathbf {Y}_i^{(b)}, \\mathbf {W}_i)\\rbrace _{i=1}^n$ for a regression setting.", "Denote the $p$ -value of the KS test as $\\mbox{KS}_p^{(b)}$ .", "if $\\mbox{KS}_p^{(b)} < \\mbox{KS}_p$ then $s = s + 1$ Define an estimated $p$ -value for this GOF test as $s/B$ .", "Several remarks are in order for this algorithm.", "First, in Step 5, $\\mbox{ESAG}(\\hat{\\mbox{$\\mu $}}_i, \\, \\hat{\\mathbf {V}}_i)$ , from which we induce a data point $\\tilde{T}_{1,i}$ in the reference sample $\\lbrace \\tilde{T}_{1,i}\\rbrace _{i=1}^n$ , can be viewed as the member of the ESAG family that is closest to the distribution that characterizes the true data generating process producing $\\mathbf {Y}_i$ , where the closeness between two distributions is quantified by the Kullback-Leibler divergence [30].", "Hence, $\\tilde{\\mathbf {Y}}_i$ generated from $\\mbox{ESAG}(\\hat{\\mbox{$\\mu $}}_i, \\, \\hat{\\mathbf {V}}_i)$ at this step is expected to resemble $\\mathbf {Y}_i$ if the null hypothesis is true, with $\\hat{\\mbox{$\\mu $}}_i$ and $\\hat{\\mathbf {V}}_i$ consistently estimating $\\mbox{$\\mu $}_i$ and $\\mathbf {V}_i$ , respectively.", "Second, in Step 6, $\\tilde{T}_{1,i}$ is constructed in a way that closely mimics $T_1$ instead of $\\hat{T}_{1,i}$ .", "In particular, just like $T_1$ where all population parameters are used in its construction, such as $\\mbox{$\\mu $}$ , $\\lbrace \\lambda _j\\rbrace _{j=1}^d$ , as well as $\\mathbf {V}$ and $\\mathbf {P}_{-d}$ that $Q$ depends on, computing $\\tilde{T}_{1,i}$ (following steps 2–5) requires no parameter estimation although it depends on $\\hat{\\mbox{$\\mu $}}_i$ , $\\lbrace \\hat{\\lambda }_{i,j}\\rbrace _{j=1}^d$ , $\\hat{\\mathbf {V}}_i$ and $\\hat{\\mathbf {P}}_{-d}$ , which are viewed as 
population parameters associated with $\tilde{\mathbf {Y}}_i$ .", "One may certainly construct in Step 6 a random quantity closely mimicking $\hat{T}_{1,i}$ instead, but that would involve another round of parameter estimation based on $\lbrace \tilde{\mathbf {Y}}_i\rbrace _{i=1}^n$ and thus is computationally unattractive.", "Third, we acknowledge that, even under the null hypothesis, $\mbox{ESAG}(\hat{\mbox{$\mu $}}_i, \, \hat{\mathbf {V}}_i)$ is not the true distribution of $\mathbf {Y}_i$ , with MLEs in place of the true model parameters.", "Hence, even when the null hypothesis is true, $\lbrace \hat{T}_{1,i} \rbrace _{i=1}^n$ do not come from the same distribution as that of the reference sample $\lbrace \tilde{T}_{1,i}\rbrace _{i=1}^n$ , but the two distributions are expected to be closer than when the null hypothesis is severely violated.", "The bootstrap procedure is designed to estimate the null distribution of the distance between these two distributions that is quantified by $\mbox{KS}_p$ , with a smaller value of $\mbox{KS}_p$ indicating a larger distance and thus stronger evidence against the null.", "As will be seen in the upcoming simulation study, this bootstrap procedure is capable of approximating the null distribution of $\mbox{KS}_p$ well enough to yield an empirical size of the test that closely matches any given nominal level.", "In the absence of model misspecification, the distribution of $\lbrace \tilde{T}_{1,i}\rbrace _{i=1}^n$ approximates the distribution of $T_1$ , with the accuracy of the approximation depending less on $\Vert \mbox{$\mu $}\Vert $ than that of the $\chi ^2$ -approximation does.", "Therefore, a more reliable graphical diagnostic device than the aforementioned QQ plot using $\chi ^2_{d-1}$ as a reference distribution is a QQ plot based on $\lbrace \hat{T}_{1,i}\rbrace _{i=1}^n$ and $\lbrace \tilde{T}_{1,i}\rbrace _{i=1}^n$ , as we demonstrate in the upcoming empirical study."
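A condensed R sketch of the testing procedure in a non-regression setting is given below. It is illustrative only: fit_esag, T1, and resag are assumed helper functions returning, respectively, the MLEs $(\hat{\mbox{$\mu $}}, \hat{\mathbf {V}}, \lbrace \hat{\lambda }_j\rbrace )$ for a data matrix, the vector of $T_1$ values for a data matrix under a given fit, and draws from a fitted ESAG; none of them is the authors' implementation.

```r
## Minimal sketch (illustrative) of the GOF test with a parametric bootstrap.
esag_gof <- function(Y, B = 200) {
  n <- nrow(Y)
  fit <- fit_esag(Y)                                 # Steps 2-3 (assumed helper)
  KSp <- ks.test(T1(Y, fit),                         # observed hat T_1 values
                 T1(resag(n, fit$mu, fit$V), fit))$p.value   # reference sample
  KSp_boot <- replicate(B, {
    Yb <- resag(n, fit$mu, fit$V)                    # bootstrap data (Step 10)
    fb <- fit_esag(Yb)                               # repeat Steps 2-7 on Yb
    ks.test(T1(Yb, fb), T1(resag(n, fb$mu, fb$V), fb))$p.value
  })
  mean(KSp_boot < KSp)                               # estimated p-value (Step 11)
}
```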
], [ "Design of simulation", "To demonstrate operating characteristics of the diagnostics methods proposed in Section , we apply them to data $\\lbrace \\mathbf {Y}_i\\rbrace _{i=1}^n$ generated according to four data generating processes specified as follows: (M1) An ESAG model, $\\mbox{ESAG}_3(\\mbox{$\\mu $}, \\mathbf {V})$ , with $\\mbox{$\\mu $}=(2, \\, -2, \\, 3, \\,-3)^{ \\mathrm {\\scriptscriptstyle T} }$ and $\\mathbf {V}$ defined via $\\mbox{$\\mu $}$ and $\\mbox{$\\gamma $}=(2,\\,3, \\,5, \\,8, \\,2)^{ \\mathrm {\\scriptscriptstyle T} }$ .", "(M2) A mixture of ESAG and angular Cauchy, with a mixing proportion of $1-\\alpha $ on $\\mbox{ESAG}_3(\\mbox{$\\mu $}, \\mathbf {V})$ specified in (M1), where a random vector from an angular Cauchy is generated by normalizing a random vector from a multivariate Cauchy with mean $\\mbox{$\\mu $}$ .", "This creates a scenario where $(1-\\alpha )\\times 100\\%$ of the data arise from EAG but the rest of the data deviate from ESAG, where $\\alpha \\in \\lbrace 0.05, \\, 0.1, \\, 0.2\\rbrace $ .", "(M3) An angular Gaussian distribution, $\\mbox{AG}(\\mbox{$\\mu $}, \\, \\tilde{\\mathbf {V}})$ , where $\\det (\\tilde{\\mathbf {V}})=\\alpha \\ne 1$ , which creates a scenario where the constraint in () is violated.", "More specifically, when formulating (M1), one has the eigenvalues $\\lbrace \\lambda _j\\rbrace _{j=1}^{d-1}$ and the corresponding eigenvectors $\\lbrace \\mathbf {v}_j\\rbrace _{j=1}^{d-1}$ of $\\mathbf {V}$ , besides $\\lambda _d=1$ and $\\mathbf {v}_d=\\mbox{$\\mu $}/\\Vert \\mbox{$\\mu $}\\Vert $ .", "Using these quantities from (M1), we define $\\tilde{\\mathbf {V}}=\\sum _{j=1}^d \\tilde{\\lambda }_j\\mathbf {v}_j\\mathbf {v}_j^{ \\mathrm {\\scriptscriptstyle T} }$ , where $\\tilde{\\lambda }_j=\\alpha ^{1/(d-1)}\\lambda _j$ , for $j=1, \\ldots , d-1$ , and $\\tilde{\\lambda }_d=1$ , with $\\alpha \\in \\lbrace 0.05, 0.1, 5, 10\\rbrace $ .", "Because $\\tilde{\\mathbf {V}}\\mbox{$\\mu $}=\\mbox{$\\mu $}$ , the constraint in (REF ) for ESAG is satisfied for this angular Gaussian distribution.", "(M4) Similar to (M3) but $\\tilde{\\lambda }_j=\\alpha ^{-1/(d-1)}\\lambda _j$ , for $j=1, \\ldots , d-1$ , and $\\tilde{\\lambda }_d=\\alpha \\in \\lbrace 0.1, 0.5, 2.5, 5\\rbrace $ .", "This leads to $\\tilde{\\mathbf {V}}\\mbox{$\\mu $}=\\alpha \\mbox{$\\mu $}$ and thus violates constraint (REF ).", "Because now $\\mbox{det}(\\tilde{\\mathbf {V}})=1$ , the constraint in () for ESAG is satisfied for this angular Gaussian distribution.", "We generate random samples of size $n\\in \\lbrace 250, 500, 1000\\rbrace $ following each data generating process.", "The proportions of data sets across 300 Monte Carlo replicates for which the GOF test rejects the null hypothesis at various significance levels are recorded for each simulation setting.", "This rejection rate estimates the size of the test under (M1), and sheds light on how sensitive the proposed diagnostic methods are to various forms and severity of deviations from ESAG exhibited in (M2)–(M4).", "We set $B=200$ in the bootstrap algorithm." 
], [ "Simulation results", "Under (M1), Figure REF shows the rejection rate versus the nominal level when the null hypothesis stating that $\\mathbf {Y}\\sim \\mbox{ESAG}$ is true.", "This figure suggests that the null distribution of the test statistic $\\mbox{KS}_p$ is approximated well enough over a wide range of nominal levels based on merely $B=200$ bootstrap samples, especially at the lower tail so that the size of the test is close to a low nominal level such as 0.05.", "Figure: Rejection rates of the GOF test versus nominal levels under (M1) when n=250n=250 (dashed line), 500 (dotted line), and 1000 (dash-dotted line).", "The solid line is the 45 ∘ 45^\\circ reference line.Table: Rejection rates of the GOF test under (M2)–(M4) at nominal level 0.05Table REF presents rejection rates of the GOF test at nominal level 0.05 under the remaining three data generating processes (M2)–(M4).", "Under (M2), when $\\alpha \\times 100\\%$ of the observed data are not from ESAG, the power of the test steadily increases as $\\alpha $ increases.", "A larger sample size also boosts the power of detecting violation of the null.", "Under (M3), when data are from $\\mbox{AG}(\\mbox{$\\mu $}, \\, \\tilde{\\mathbf {V}})$ that does not satisfy constraint () due to $\\det (\\tilde{\\mathbf {V}})=\\alpha (\\ne 1)$ , one can see from Table REF that, depending on the severity of the violation of () that is controlled by the deviation of $\\alpha $ from 1, the proposed test has a moderate power to detect this particular violation of ESAG, with a higher power at a larger sample size.", "Under (M4), when data are from $\\mbox{AG}(\\mbox{$\\mu $},\\tilde{\\mathbf {V}})$ with constraint (REF ) violated due to $\\tilde{\\mathbf {V}}\\mbox{$\\mu $}=\\alpha \\mbox{$\\mu $}$ , one can see from Table REF that, as $\\alpha $ deviates from 1 from either directions, the proposed test possesses moderate to high power to detect violation of the null hypothesis, with the power increasing quickly as $n$ grows larger.", "This can also serve as evidence for that, between the two constraints of ESAG in (REF ) and (), violating the first constraint leads to an angular Gaussian deviating from ESAG more.", "Besides the quantitative GOF test that performs satisfactorily according to the above empirical evidence, one can also inspect the QQ plot based on $\\lbrace \\hat{T}_{1,i}\\rbrace _{i=1}^n$ and the bootstrap sample $\\lbrace \\tilde{T}_{1,i}\\rbrace _{i=1}^n$ to graphically check ESAG assumptions.", "Figure REF shows a collection of such plots based on a randomly chosen Monte Carlo replicate from each of the four considered data generating processes.", "As evidenced in Figure REF , violation of the ESAG assumptions as designed in (M2)–(M4) causes a QQ plot deviating from a straight-line pattern, a pattern more or less observed in the absence of model misspecification as in (M1).", "To create such QQ plots does not require the full $B$ -round bootstrap procedure in the above algorithm, and provides a convenient graphical check on the goodness of fit.", "Figure: QQ plots based on {T ^ 1,i } i=1 n \\lbrace \\hat{T}_{1,i}\\rbrace _{i=1}^n and the bootstrap sample {T ˜ 1,i } i=1 n \\lbrace \\tilde{T}_{1,i}\\rbrace _{i=1}^n under (M1) (top-left panel), (M2) with α=0.2\\alpha =0.2 (top-right panel), (M3) with α=0.05\\alpha =0.05 (bottom-left panel), and (M4) with α=2.5\\alpha =2.5 (bottom-right panel), respectively.", "Solid lines are 45 ∘ 45^\\circ reference lines." 
], [ "Application to hydrochemical data", "In this section, we analyze the hydrochemical data containing 14 molarities measured monthly at different stations along the Llobregat River and its tributaries in northeastern Spain between the summer of 1997 and the spring of 1999 [18].", "The complete data are available in the R package, compositions [28].", "For illustration purposes, we focus on the compositional data recording relative abundance of two major ions, $\\mbox{K}^+$ and $\\mbox{Na}^+$ , and two minor ions, $\\mbox{Ca}^{2+}$ and $\\mbox{Mg}^{2+}$ .", "Taking the square-root transformation of the compostional data gives directional data with $d=4$ .", "The four considered ions are mostly from potash mine tailing, which is one of the major sources of anthropogenic pollution in the Llobregat Basin [27].", "We first assume that the composition of $(\\mbox{K}^+, \\mbox{ Na}^+, \\mbox{ Ca}^{2+}, \\mbox{ Mg}^{2+})$ in tributaries of Anoia, one of the two main tributaries of the Llobregat River, follows an ESAG distribution.", "Using 67 records collected from stations placed along tributaries of Anoia, we obtain the estimated mean and variance-covariance of the compositional vector given by $\\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny A}} = \\begin{bmatrix}1.99 \\\\5.74 \\\\7.95 \\\\4.59\\end{bmatrix},\\hspace{14.22636pt}\\hat{\\mathbf {V}}_{\\hbox{\\tiny A}}= \\begin{bmatrix}0.93 & 1.15 & -0.76 & -0.09 \\\\1.15 & 2.77 & -1.41 & -0.27 \\\\-0.76 & -1.41 & 1.99 & 0.38 \\\\-0.09 & -0.27 & 0.38 & 0.73\\end{bmatrix}.$ The GOF test yields an estimated $p$ -value of 0.66, suggesting that the estimated ESAG distribution may provide an adequate fit for the data.", "The QQ plot in Figure REF (see the left panel) may indicate some disagreement in the upper tail when it comes to the distribution of $\\hat{T}_1$ and its bootstrap counterpart induced from an ESAG distribution, but otherwise mostly resemble each other in distribution.", "Transforming the estimated mean $\\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny A}}$ back to the composition of four considered ions, we estimate the mean composition of $(\\mbox{K}^+, \\mbox{ Na}^+, \\mbox{ Ca}^{2+}, \\mbox{ Mg}^{2+})$ to be (0.03, 0.27, 0.52, 0.18).", "We repeat the above exercise for another compositional data of size 43 collected from stations placed along tributaries of the lower Llobregat course, and find the estimated mean vector and variance-covariance matrix to be $\\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny L}} = \\begin{bmatrix}3.27 \\\\8.56 \\\\9.01 \\\\5.78\\end{bmatrix},\\hspace{14.22636pt}\\hat{\\mathbf {V}}_{\\hbox{\\tiny L}}=\\begin{bmatrix}0.63 & 1.50 & -0.71 & -0.90 \\\\1.50 & 5.36 & -2.66 & -3.17 \\\\-0.71 & -2.66 & 2.43 & 2.10 \\\\-0.90 & -3.17 & 2.10 & 2.91\\end{bmatrix}.$ The estimated $p$ -value from the GOF test is 0.55 in this case.", "This, along with the QQ plot in Figure REF (see the middle panel), also implies that the inferred ESAG distribution fits the data reasonably well.", "According to the estimated mean direction $\\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny L}}$ , the estimated the mean composition of $(\\mbox{K}^+, \\mbox{ Na}^+, \\mbox{ Ca}^{2+}, \\mbox{ Mg}^{2+})$ is (0.05, 0.37, 0.41, 0.17), which shares some similarity with the estimated mean composition associated with Anoia tributaries in that $\\mbox{Ca}^+$ and $\\mbox{Na}^+$ are the two dominating components among the four, and $\\mbox{K}^+$ is the minority.", "The two estimated variance-covariance matrices, $\\hat{\\mathbf {V}}_{\\hbox{\\tiny A}}$ and $\\hat{\\mathbf 
{V}}_{\\hbox{\\tiny L}}$ , also share some implications in common: the two major ions, $\\mbox{K}^+$ and $\\mbox{Na}^+$ , are positively correlated, so are the two minor ions, $\\mbox{Ca}^{2+}$ and $\\mbox{Mg}^{2+}$ ; but a major ion is negatively correlated with a minor ion in composition.", "Diagonal entries of $\\hat{\\mathbf {V}}_{\\hbox{\\tiny A}}$ and $\\hat{\\mathbf {V}}_{\\hbox{\\tiny L}}$ should not be interpreted or compared here in the same way as if data were not directional because the variability of ESAG($\\mbox{$\\mu $}$ , $\\mathbf {V}$ ) depends on both $\\mbox{$\\mu $}$ and $\\mathbf {V}$ .", "For the compositional vector as a whole, with $\\Vert \\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny A}}\\Vert \\approx 11.00<\\Vert \\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny L}}\\Vert \\approx 14.10$ , we have data evidence suggesting that the compositional data from Anoia tributaries are less concentrated around its mean direction, and thus more variable, than those from tributaries of the lower Llobregat course.", "When zooming in on one component at a time in the compositional vector, one can compare variability between two ESAG distributions base on $\\mathbf {V}/\\Vert \\mbox{$\\mu $}\\Vert ^2$ .", "For instance, even though $\\hat{\\mathbf {V}}_{\\hbox{\\tiny A}}[3,3]=1.99<\\hat{\\mathbf {V}}_{\\hbox{\\tiny L}}[3,3]=2.43$ , we would not jump to the conclusion that the composition of $\\mbox{Ca}^{2+}$ is less variable in Anoia tributaries than that in the other set of locations.", "Instead, because $\\hat{\\mathbf {V}}_{\\hbox{\\tiny A}}[3,3]/\\Vert \\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny A}}^2\\Vert = 0.18>\\hat{\\mathbf {V}}_{\\hbox{\\tiny L}}[3,3]/\\Vert \\hat{\\mbox{$\\mu $}}_{\\hbox{\\tiny L}}^2\\Vert = 0.17$ , we conclude that the composition of $\\mbox{Ca}^{2+}$ is similar in variability between the two sets of locations, but tributaries of Anoia may be subject to slightly higher variability in this regard.", "This conclusion is also consistent with the comparison of the sample standard deviation of the composition of $\\mbox{Ca}^{2+}$ between the two data sets.", "Moreover, estimates for the other set of parameters of ESAG arising in the new parameterization, $\\mbox{$\\gamma $}$ , also provide statistically interesting insights on the underlying distributions.", "Denote by $\\hat{\\mbox{$\\gamma $}}_{\\hbox{\\tiny A}}$ the estimate based on data from Anoia tributaries, and by $\\hat{\\mbox{$\\gamma $}}_{\\hbox{\\tiny L}}$ the estimate based on data from tributaries of the lower Llobregat course.", "We find that $\\Vert \\hat{\\mbox{$\\gamma $}}_{\\hbox{\\tiny A}}\\Vert =6.24 <\\Vert \\hat{\\mbox{$\\gamma $}}_{\\hbox{\\tiny L}}\\Vert =17.03$ , indicating that neither of the two ESAG distributions is isotropic, with the second ESAG deviating from isotropy further.", "To check partial isotropy, we look into the estimated eigenvalues associated with $\\hat{\\mathbf {V}}_{\\hbox{\\tiny A}}$ and $\\hat{\\mathbf {V}}_{\\hbox{\\tiny L}}$ .", "With one eigenvalue fixed at 1, the three estimated eigenvalues associated with $\\hat{\\mathbf {V}}_{\\hbox{\\tiny A}}$ are 0.37 (0.05), 0.62 (0.10), and 4.44 (0.64), with the estimated standard errors in parentheses obtained based on 300 bootstrap data sets, each of the same size as the raw data sampled from the raw data with replacement.", "Similarly, we have the three estimated eigenvalues associated with $\\hat{\\mathbf {V}}_{\\hbox{\\tiny L}}$ given by 0.19 (0.04), 0.54 (0.29), and 9.61 (1.84).", "Taking the estimated standard errors into 
consideration, with the large discrepancy between the estimated (and fixed) eigenvalues, neither of the two data sets provides sufficient evidence indicating partial isotropy.", "Lastly, we fit the ESAG model to the 110 records that combine the above two data sets and obtain an estimated $p$ -value of 0.02 from the GOF test, with the corresponding QQ plot clearly deviating from a straight line (see the right panel in Figure REF ).", "We thus conclude that an ESAG distribution is inadequate for modeling the data that mix compositional data from Anoia tributaries and those from tributaries of the lower Llobregat course.", "This lack of fit is not surprising because Anoia mostly passes through vineyards and industrialized zones, whereas the lower Llobregat course also flows through densely populated areas with high demand for water besides agricultural and industrial areas.", "This explains the vastly different patterns and sources of anthropogenic and geological pollution between Anoia and the lower Llobregat course [9], which create substantial heterogeneity in the mixed compositional data that an ESAG model is unlikely to capture.", "Figure: QQ plots from the GOF test applied to compositional data from tributaries of Anoia (left panel), those from tributaries of the lower Llobregat course (middle panel), and the data that combine the previous two data sets (right panel)." ], [ "Discussion", "Given the wide range of applications where directional data are of scientific interest and typically of dimension higher than three, an important first step towards sound statistical analysis of such data is the formulation of a directional distribution of arbitrary dimension.", "We adopt the initial formulation of the ESAG distribution proposed by [19], and take it to the next level via a sequence of reparameterizations leading to a distribution family indexed by parameters ranging over the entire real space.", "The resultant parametric family for directional data avoids pitfalls from which many existing directional distributions suffer: unlike the Kent distribution, for instance, there is no hard-to-compute normalization constant in the density function, and it is easy to simulate data from an ESAG of any dimension.", "More importantly, the proposed parameterization of ESAG lends itself to straightforward maximum likelihood inference procedures that are numerically stable and less dependent on \"good\" starting values for parameter estimation.", "New parameters introduced along the way of reparameterization have statistically meaningful interpretations, which facilitate formulating hypothesis tests in which one compares a reduced ESAG model, such as an isotropic or a partially isotropic model, with a saturated ESAG model.", "In summary, the proposed ESAG family of arbitrary dimension sets the stage for carrying out a full range of likelihood-based inference for directional data, including parameter estimation, uncertainty assessment, and hypothesis testing.", "To ease the concerns of model misspecification when assuming a parametric family in a given application, we develop graphical and quantitative diagnostic methods that utilize directional residuals.", "Maximum likelihood estimation and the proposed diagnostic methods for ESAG can be easily implemented using the R code developed and maintained by the first author, which is available upon request.", "An immediate follow-up step is to consider regression models for directional data, which is well motivated by the lack of fit of a marginal ESAG 
distribution for the mixed compositional data entertained in Section .", "We conjecture that, conditioning on covariates relating to geological features of the considered tributaries and covariates reflecting human activities in the regions these tributaries run through, the mixed compositional data can be better modeled by an ESAG distribution with covariate-dependent $\mbox{$\mu $}$ and $\mbox{$\gamma $}$ .", "With $\mbox{$\mu $}$ and $\mbox{$\gamma $}$ ranging over the entire real space of adequate dimensions, the proposed ESAG family is well prepared for regression analysis of directional data without using complicated link functions to introduce dependence of model parameters on covariates $\mathbf {W}$ .", "For example, one may consider a fully parametric regression model as simple as $\mathbf {Y}|\mathbf {W}\sim \mbox{ESAG}(\mbox{$\mu $}(\mathbf {W}), \, \mathbf {V}(\mathbf {W}))$ , where $\mbox{$\mu $}(\mathbf {W})$ is a linear function of covariates $\mathbf {W}$ , and $\mathbf {V}(\mathbf {W})$ is determined by $\mbox{$\mu $}(\mathbf {W})$ and $\mbox{$\gamma $}(\mathbf {W})$ , with the latter also a linear function of covariates.", "More flexible dependence structures of $\mbox{$\mu $}$ and $\mbox{$\gamma $}$ on covariates are also worthy of consideration in the follow-up research along the line of regression analysis.", "Once we enter the realm of regression models, the dimension of the parameter space grows much more quickly with $d$ than it does in the covariate-free setting.", "Upon completion of the study presented in this article, we have embarked on the exciting journey of developing scalable inference procedures suitable for settings with a high-dimensional parameter space, following the strategies of frequentist penalized maximum likelihood estimation and Bayesian shrinkage estimation via hierarchical modeling." 
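To make the conjectured regression model concrete, the following sketch shows how the covariate-dependent parameters could enter a likelihood. It is only a schematic under stated assumptions: `esag_logpdf` stands for a user-supplied ESAG log-density under the proposed parameterization, and the coefficient matrices `B_mu` and `B_gamma` are names introduced here for illustration rather than objects defined in the paper.

```python
def regression_nll(params, Y, W, dim_mu, dim_gamma, esag_logpdf):
    """Negative log-likelihood of the sketched regression model
    Y_i | W_i ~ ESAG(mu(W_i), V(gamma(W_i))).

    Both predictors are linear in the covariate vector W, with no link
    function needed because mu and gamma range over the whole real space:
        mu(W)    = B_mu    @ W
        gamma(W) = B_gamma @ W
    `params` is a flat numpy array stacking the entries of B_mu and B_gamma;
    Y and W are arrays of unit-vector responses and covariate vectors."""
    q = W.shape[1]
    B_mu = params[: dim_mu * q].reshape(dim_mu, q)
    B_gamma = params[dim_mu * q:].reshape(dim_gamma, q)
    return -sum(esag_logpdf(y, B_mu @ w, B_gamma @ w) for y, w in zip(Y, W))

# The coefficient matrices could then be estimated with a generic optimizer,
# e.g. scipy.optimize.minimize(regression_nll, init, args=(Y, W, d, p, esag_logpdf)).
```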
], [ "Appendix A: Implication of $\\mbox{$\\gamma $}$ and {{formula:788cf513-66a8-4eb7-91b3-3da3bd21f681}} being equivalent", "Under the proposed parameterization of $\\mbox{ESAG}_{d-1}(\\mbox{$\\mu $}, \\mathbf {V})$ , $\\mathbf {V}$ is determined by $\\mbox{$\\gamma $}$ after $\\mbox{$\\mu $}$ is specified.", "We thus write $\\mathbf {V}$ as $\\mathbf {V}(\\mbox{$\\gamma $})$ in this appendix, and view quantities related to $\\mathbf {V}$ as functions of $\\mbox{$\\gamma $}$ , such as the eigenvalues of $\\mathbf {V}$ and the radial parameters in (REF ).", "If $\\mbox{$\\gamma $}$ and $\\mbox{$\\gamma $}^{\\prime }$ are equivalent, then $\\mathbf {V}(\\mbox{$\\gamma $})=\\mathbf {V}(\\mbox{$\\gamma $}^{\\prime })$ , and thus $\\mathbf {V}(\\mbox{$\\gamma $})$ and $\\mathbf {V}(\\mbox{$\\gamma $}^{\\prime })$ share the same eigenvalues.", "By (REF ), $\\lbrace \\lambda _j(\\mbox{$\\gamma $})=\\lambda _j(\\mbox{$\\gamma $}^{\\prime })\\rbrace _{j=1}^{d-1}$ implies that $\\lbrace r_j(\\mbox{$\\gamma $})=r_j(\\mbox{$\\gamma $}^{\\prime })\\rbrace _{j=1}^{d-2}$ .", "Lastly, from Section REF , $r_j=\\Vert \\tilde{\\mbox{$\\gamma $}}_j\\Vert $ , for $j=1, \\ldots , d-2$ .", "Therefore, if $\\mbox{$\\gamma $}$ and $\\mbox{$\\gamma $}^{\\prime }$ are equivalent, $\\Vert \\tilde{\\mbox{$\\gamma $}}_j\\Vert =r_j(\\mbox{$\\gamma $})=r_j(\\mbox{$\\gamma $}^{\\prime })=\\Vert \\tilde{\\mbox{$\\gamma $}}_j^{\\prime }\\Vert $ , for $j=1, \\ldots , d-2$ .", "By the spectral decomposition theorem, $\\mathbf {V}^{\\alpha }=\\mathbf {P}\\mathbf {D}^{\\alpha }\\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }$ , where $\\mathbf {D}^{\\alpha } = \\mbox{diag}(\\lambda _1^\\alpha ,...,\\lambda _d^\\alpha )$ and $\\mathbf {P}=[\\mathbf {v}_1 \\mid ... \\mid \\mathbf {v}_{d}]$ .", "Using this decomposition with $\\alpha =-1$ and 1/2, we have $Q & = \\mathbf {r}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {V}^{-1} \\mathbf {r}\\\\& = \\frac{\\mathbf {Z}^{ \\mathrm {\\scriptscriptstyle T} }}{\\Vert \\mathbf {X}\\Vert } \\mathbf {V}^{1/2}\\mathbf {P}_{-d}\\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\times \\mathbf {V}^{-1} \\times \\mathbf {P}_{-d}\\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {V}^{1/2}\\frac{\\mathbf {Z}}{\\Vert \\mathbf {X}\\Vert } \\\\& = \\frac{\\mathbf {Z}^{ \\mathrm {\\scriptscriptstyle T} }}{\\Vert \\mathbf {X}\\Vert ^2} \\mathbf {P}\\mathbf {D}^{1/2} \\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {P}_{-d}\\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\times \\mathbf {P}\\mathbf {D}^{-1} \\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\times \\mathbf {P}_{-d}\\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {P}\\mathbf {D}^{1/2} \\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Z},$ where $\\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {P}_{-d} =\\begin{bmatrix}\\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\\\\\mathbf {v}_d^{ \\mathrm {\\scriptscriptstyle T} }\\end{bmatrix}\\mathbf {P}_{-d}=\\begin{bmatrix}\\mathbf {I}_{d-1} \\\\\\mathbf {0}^{ \\mathrm {\\scriptscriptstyle T} }\\end{bmatrix},$ and thus $\\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {P}_{-d} \\mathbf {P}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {P}=\\begin{bmatrix}\\mathbf {I}_{d-1} & \\mathbf {0}\\\\\\mathbf {0}^{ \\mathrm {\\scriptscriptstyle T} }& 0\\end{bmatrix}\\triangleq \\tilde{\\mathbf {I}}_d.$ It follows that $Q & =\\frac{1}{\\Vert \\mathbf {X}\\Vert ^2} \\mathbf {Z}^{ \\mathrm {\\scriptscriptstyle T} 
}\\mathbf {P}\\mathbf {D}^{1/2} \\tilde{\\mathbf {I}}_d \\mathbf {D}^{-1} \\tilde{\\mathbf {I}}_d \\mathbf {D}^{1/2} \\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Z}\\\\& =\\frac{1}{\\Vert \\mathbf {X}\\Vert ^2} \\mathbf {Z}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {P}\\tilde{\\mathbf {I}}_d \\mathbf {D}^{1/2}\\mathbf {D}^{-1}\\mathbf {D}^{1/2} \\tilde{\\mathbf {I}}_d \\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Z}\\\\& = \\frac{1}{\\Vert \\mathbf {X}\\Vert ^2} \\mathbf {U}^{ \\mathrm {\\scriptscriptstyle T} }\\tilde{\\mathbf {I}}_d \\tilde{\\mathbf {I}}_d\\mathbf {U}, \\mbox{ where $\\mathbf {U}=\\mathbf {P}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {Z}$,}\\\\& = \\frac{1}{\\Vert \\mathbf {X}\\Vert ^2} \\mathbf {U}_{-d}^{ \\mathrm {\\scriptscriptstyle T} }\\mathbf {U}_{-d}, \\mbox{ where $\\mathbf {U}_{-d}= \\tilde{\\mathbf {I}}_d \\mathbf {U}$,}$ which gives (REF )." ] ]
2212.05634
[ [ "Transductive Linear Probing: A Novel Framework for Few-Shot Node\n Classification" ], [ "Abstract Few-shot node classification is tasked to provide accurate predictions for nodes from novel classes with only few representative labeled nodes.", "This problem has drawn tremendous attention for its projection to prevailing real-world applications, such as product categorization for newly added commodity categories on an E-commerce platform with scarce records or diagnoses for rare diseases on a patient similarity graph.", "To tackle such challenging label scarcity issues in the non-Euclidean graph domain, meta-learning has become a successful and predominant paradigm.", "More recently, inspired by the development of graph self-supervised learning, transferring pretrained node embeddings for few-shot node classification could be a promising alternative to meta-learning but remains unexposed.", "In this work, we empirically demonstrate the potential of an alternative framework, \\textit{Transductive Linear Probing}, that transfers pretrained node embeddings, which are learned from graph contrastive learning methods.", "We further extend the setting of few-shot node classification from standard fully supervised to a more realistic self-supervised setting, where meta-learning methods cannot be easily deployed due to the shortage of supervision from training classes.", "Surprisingly, even without any ground-truth labels, transductive linear probing with self-supervised graph contrastive pretraining can outperform the state-of-the-art fully supervised meta-learning based methods under the same protocol.", "We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs." 
], [ "Introduction", "Graph Neural Networks (GNNs) [1], [2], [3], [4] are a family of neural network models designed for graph-structured data.", "In this work, we concentrate on GNNs for the node classification task, where GNNs recurrently aggregate neighborhoods to simultaneously preserve graph structure information and learn node representations.", "However, most GNN models focus on the (semi-)supervised learning setting, assuming access to abundant labels [5], [6].", "This assumption could be practically infeasible due to the high cost of data collection and labeling, especially for large graphs.", "Moreover, recent works have manifested that directly training GNNs with limited nodes can result in severe performance degradation [7], [8], [9].", "Such a challenge has led to a proliferation of studies [10], [11], [12], [6] that try to learn fast-adaptable GNNs with extremely scarce known labels, i.e., Few-Shot Node Classification (FSNC) tasks.", "Particularly, in FSNC, there exist two disjoint label spaces: base classes are assumed to contain substantial labeled nodes while target novel classes only contain few available labeled nodes.", "If the target FSNC task contains $N$ novel classes with $K$ labeled nodes in each class, the problem is denoted as an $N$ -way $K$ -shot node classification task.", "Here the $K$ labeled nodes are termed as a support set, and the unlabeled nodes are termed as a query set for evaluation.", "Currently, meta-learning has become a prevailing and successful paradigm to tackle such a shortage of labels on graphs.", "Inspired by the way humans learn unseen classes with few samples via utilizing previously learned prior knowledge, a typical meta-learning based framework will randomly sample a number of episodes, or meta-tasks, to emulate the target $N$ -way $K$ -shot setting [7].", "Based on this principle, various models [7], [8], [9], [10], [11], [12], [13] have been proposed, which makes meta-learning a plausible default choice for FSNC tasks.", "On the other hand, despite the remarkable breakthroughs that have been made, meta-learning based methods still have several limitations.", "First, relying on different arbitrarily sampled meta-tasks to extract transferable meta-knowledge, meta-learning based frameworks suffer from the piecemeal knowledge issue [14].", "That being said, a small portion of the nodes and classes are selected per episode for training, which leads to an undesired loss of generalizability of the learned GNNs regarding nodes from unseen novel classes.", "Second, the feasibility for sampling meta-tasks is based on the assumption that there exist sufficient base classes where substantial labeled nodes are accessible.", "However, this assumption can be easily overturned for real-world graphs where the number of base classes can be limited, or the labels of nodes in base classes can be inaccessible.", "In a nutshell, these two concerns motivate us to design an alternative blackframework for meta-learning to cover more realistic scenarios.", "Inspired by [15], [16], we postulate that the key to solving FSNC is to learn a generalizable GNN encoder.", "We validate this postulation by a motivating example in Section REF .", "Then, without the episodic emulation, the proposed novel framework, Transductive Linear Probing (TLP), directly transfers pretrained node embeddings for nodes in novel classes learned from Graph Contrastive Learning (GCL) methods [17], [18], [19], [20], [21], [22], [23], and fine-tunes a separate linear classifier with the 
support set to predict labels for unlabeled nodes.", "GCL methods are proven to learn generalizable node embeddings by maximizing the representation consistency under different augmented views [24], [17], [18], [23].", "If the representations of nodes in novel classes are discriminative enough, probing them with a simple linear classifier should provide decent accuracy.", "Based on this intuition, we propose two instantiations of the TLP framework in this paper: TLP with the self-supervised form of GCL methods and TLP with the supervised GCL counterparts.", "We evaluate TLP by transferring node embeddings from various GCL methods to the linear classifier and compare TLP with meta-learning based methods under the same evaluation protocol.", "Moreover, we examine the effect of supervision during GCL pretraining for target FSNC tasks to further analyze what role labels from base classes play in TLP.", "Throughout this paper, we aim to shed new light on the few-shot node classification problem through the lens of empirical evaluations of both the \"old\" meta-learning paradigm and the \"new\" transductive linear probing framework.", "The summary of our contributions is as follows: New Framework We are the first to break with convention and precedent to propose a new framework, transductive linear probing, as a competitive alternative to meta-learning for FSNC tasks.", "Comprehensive Study We perform a comprehensive review of the current literature and conduct a large-scale study on six widely-used real-world datasets that cover different scenarios in FSNC: (1) a sufficient number of base classes with substantial labeled nodes in each class, (2) a sufficient number of base classes with no labeled nodes in each class, (3) a limited number of base classes with substantial labeled nodes in each class, and (4) a limited number of base classes with no labeled nodes in each class.", "We evaluate all the compared methods under the same protocol.", "Findings We demonstrate that despite the recent advances in few-shot node classification, meta-learning based methods struggle to outperform TLP methods.", "Moreover, the TLP-based methods with self-supervised GCL can outperform their supervised counterparts and those meta-learning based methods even if all the labels from base classes are inaccessible.", "This signifies that without label information, self-supervised GCL can focus more on node-level structural information, which results in better node representations.", "However, TLP also inherits a scalability limitation from GCL due to its large memory consumption, which makes it hard to deploy on extremely large graphs.", "Based on those observations, we identify improving adaptability and scalability as the promising directions for meta-learning based and TLP-based methods, respectively.", "Our implementations for the experiments are released at https://github.com/Zhen-Tan-dmml/TLP-FSNC.git.", "We hope to facilitate the sharing of insights and accelerate the progress on the goal of learning from scarcely labeled instances on graphs." 
], [ "Problem Statement", "Formally, given an attributed network $\\mathcal {G} = (\\mathcal {V}, \\mathcal {E}, \\mathbf {X}) = (\\mathbf {A}, \\mathbf {X})$ , where $\\mathcal {V}$ denotes the set of nodes $\\lbrace v_1, v_2, ..., v_n\\rbrace $ , $\\mathcal {E}$ denotes the set of edges $\\lbrace e_1, e_2, ..., e_m\\rbrace $ , $\\mathbf {X} = [\\mathbf {x}_1;\\mathbf {x}_2; ...;\\mathbf {x}_n] \\in \\mathbb {R}^{n\\times d}$ denotes all the node features, and $\\mathbf {A} = \\lbrace 0, 1\\rbrace ^{n\\times n}$ is the adjacency matrix representing the network structure.", "Specifically, $\\mathbf {A}_{j,k} = 1$ indicates that there is an edge between node $v_j$ and node $v_k$ ; otherwise, $\\mathbf {A}_{j,k} = 0$ .", "The few-shot node classification problem assumes that there exist a series of target node classification tasks, $\\mathcal {T} = \\lbrace \\mathcal {T}_i\\rbrace ^{I}_{i=1}$ , where $\\mathcal {T}_i$ denotes the given dataset of a task, and $I$ denotes the number of such tasks.", "We term the classes of nodes available during training as base classes (i.e., $\\mathbb {C}_{base}$ ) and the classes of nodes during target test phase as novel classes (i.e., $\\mathbb {C}_{novel}$ ) and $\\mathbb {C}_{base} \\cap \\mathbb {C}_{novel} = \\varnothing $ .", "Notably, under different settings, labels of nodes for training (i.e., $\\mathbb {C}_{base}$ ) may or may not be available during training.", "Conventionally, there are few labeled nodes for novel classes $\\mathbb {C}_{novel}$ during the test phase.", "The problem of few-shot node classification is defined as follows: Definition 1 Few-shot Node Classification: Given an attributed graph $\\mathcal {G} = (\\mathbf {A}, \\mathbf {X})$ with a divided node label space $\\mathbb {C} = \\lbrace \\mathbb {C}_{base}, \\mathbb {C}_{novel}\\rbrace $ , we only have few-shot labeled nodes (support set $\\mathbb {S}$ ) for $\\mathbb {C}_{novel}$ .", "The task $\\mathcal {T}$ is to predict the labels for unlabeled nodes (query set $\\mathbb {Q}$ ) from $\\mathbb {C}_{novel}$ .", "If the support set in each target (test) task has $N$ novel classes with $K$ labeled nodes, then we term this task an $N$ -way $K$ -shot node classification task.", "The goal of few-shot node classification is to learn an encoder that can transfer the topological and semantic knowledge learned from substantial data in base classes ($\\mathbb {C}_{base}$ ) and generate discriminative embeddings for nodes from novel classes ($\\mathbb {C}_{novel}$ ) with limited labeled nodes." 
], [ "Episodic Meta-learning for Few-shot Node Classification.", "Episodic meta-learning is a proven effective paradigm for few-shot learning tasks [25], [26], [27], [28], [29], [30], [31], [32].", "The main idea is to train the neural networks in a way that emulates the evaluation conditions.", "This is hypothesized to be beneficial for the prediction performance on test tasks [25], [26], [33], [27].", "Based on this philosophy, many recent works in few-shot node classification [34], [8], [11], [10], [35], [36], [37], [12], [38], [39] successfully transfer the idea to the graph domain.", "It works as follows: during the training phase, it generates a number of meta-train tasks (or episodes) $\\mathcal {T}_{tr}$ from $\\mathbb {C}_{base}$ to emulate the test tasks, following their $N$ -way $K$ -shot node classification specifications: $\\mathcal {T}_{tr} &= \\lbrace \\mathcal {T}_t\\rbrace _{t=1}^T = \\lbrace \\mathcal {T}_1, \\mathcal {T}_2, ..., \\mathcal {T}_T\\rbrace , \\\\\\mathcal {T}_t &= \\lbrace \\mathcal {S}_t, \\mathcal {Q}_t\\rbrace , \\\\\\mathcal {S}_t &= \\lbrace (v_1, y_1), (v_2, y_2), ..., (v_{N\\times K}, y_{N\\times K})\\rbrace , \\\\\\mathcal {Q}_t &= \\lbrace (v_1, y_1), (v_2, y_2), ..., (v_{N\\times K}, y_{N\\times K})\\rbrace .$ For a typical meta-learning based method, in each episode, $K$ labeled nodes are randomly sampled from $N$ base classes, forming a support set, to train the GNN model while emulating the $N$ -way $K$ -shot node classification in the test phase.", "Then GNN predicts labels for an emulated query set of nodes randomly sampled from the same classes as the support set.", "The Cross-Entropy Loss ($L_{CE}$ ) is calculated to optimize the GNN encoder $g_\\theta $ and the classifier $f_\\psi $ in an end-to-end fashion: $\\theta , \\psi = \\arg \\min _{\\theta , \\psi } L_{CE}(\\mathcal {T}_{t};\\theta , \\psi ).$ Based on this, Meta-GNN [34] combines MAML [31] with GNNs to achieve optimization for different meta-tasks.", "GPN [8] applies ProtoNet [30] and computes node importance for a transferable metric function.", "G-Meta [10] aims to establish a local subgraph for each node to achieve fast adaptations to new meta-tasks.", "RALE [35] obtains relative and absolute node embeddings based on node positions on graphs to model node dependencies in each meta-task.", "An exhaustive survey is beyond the scope of this paper; see [13] for an overview.", "However, all those methods are evaluated on different datasets with each own evaluation protocol, which fragments the practical knowledge on how meta-learning performs with a few labeled nodes and makes it hard to explicitly compare their superiority or inferiority.", "To bridge this gap, in this paper, we conduct extensive experiments to compare new advances and prior works for FSNC tasks uniformly and comprehensively." 
], [ "A Motivating Example and Preliminary Analysis", "More recently, related works in the image domain demonstrate that the reason for the fast adaptation lies in feature reuse rather than those complicated mate-learning algorithms [15], [16].", "In other words, with a carefully pretrained encoder, decent performance can be obtained through directly fine-tuning a simple classifier on the target task.", "However, few studies have been done on the graph domain due to its important difference from images that nodes in a graph are not i.i.d.", "Their interactive relationships are reflected by both the topological and semantic information.", "To validate such hypothesis on graphs, based on [16], we construct an Intransigent GNN model, namely I-GNN, that simply does not adapt to new tasks.", "We decouple the training procedure to two separate phases.", "In the first phase, a GNN encoder $g_\\theta $ with a linear classifier $f_{\\phi }$ as the classifier is simply pretrained on all base classes $\\mathbb {C}_{base}$ with vanilla supervision through $L_{CE}$ : $\\begin{aligned}\\mathcal {T}_{tr}^\\prime &= \\cup \\lbrace \\mathcal {T}_t\\rbrace _{t=1}^T = \\cup \\lbrace \\mathcal {T}_1, \\mathcal {T}_2, ..., \\mathcal {T}_T\\rbrace , \\\\\\theta , \\phi &= \\arg \\min _{\\theta , \\phi } L_{CE}(\\mathcal {T}_{tr}^\\prime ;\\theta , \\phi ) + \\mathcal {R}(\\theta ),\\end{aligned}$ where $\\mathcal {R}(\\theta )$ is a weight-decay regularization term: $\\mathcal {R}(\\theta ) = \\Vert \\theta \\Vert ^2 /2$ .", "Then, we freeze the parameter of the GNN encoder $g_\\theta $ and discard the classifier $f_\\phi $ .", "When fine-tuning on a target few-shot node classification task $\\mathcal {T}_i = \\lbrace \\mathcal {S}_i, \\mathcal {Q}_i\\rbrace $ , the embeddings of all nodes from $\\mathcal {T}_i$ are directly transferred from the pretrained GNN encoder $g_\\theta $ .", "Then another linear classifier $f_\\psi $ is involved and tuned with few-shot labeled nodes from the support set $\\mathcal {S}_i$ to predict labels of nodes in the query set $\\mathcal {Q}_i$ : $\\psi = \\arg \\min _\\psi L_{CE}(\\mathcal {S}_i;\\theta ,\\psi ).$" ], [ "Results and Analysis of the Intransigent GNN model I-GNN", "We demonstrate the performance of the intransigent model and compare it with those meta-learning based models in Table REF , REF .", "Under the same evaluation protocol (defined in Section REF ), the simple intransigent model I-GNN has very competitive performance with meta-learning based methods.", "On datasets (e.g., CiteSeer) where the number of base classes $|\\mathbb {C}_{base}|$ is limited, I-GNN consistently outperforms meta-learning based methods in terms of accuracy.", "This motivating example concludes that transferring node embeddings from the vanilla supervised training method I-GNN could be an alternative to meta-learning.", "Moreover, we take one step further and postulate that if more transferable node embeddings are obtained during pretraining, the performance on target FSNC tasks could be improved even more.", "Figure: The framework of TLP with supervised GCL: (a) Supervised GCL framework.", "(b) Fine-tuning on few-shot labeled nodes from novel classes with support and query sets.", "Colors indicate different classes (e.g., Neural Networks, SVM, Fair ML, Explainable AI).", "Specially, white nodes mean labels of those nodes are unavailable.", "Labels of all nodes in base classes are available.", "Different types of nodes indicate if nodes are from base classes or novel classes.", "The 
counterpart of TLP with self-supervised GCL is very similar to this, and a figure is included in Appendix ." ], [ "Transductive Linear Probing for Few-shot Node Classification.", "Inspired by the motivating example above, we generalize it to a new framework, Transductive Linear Probing (TLP), for few-shot node classification.", "The only difference between TLP and I-GNN is that the pretraining method can be an arbitrary strategy rather than the vanilla supervised learning.", "It can even be a self-supervised training method that does not have any requirements on the base classes.", "In this way, the second line of Eq.", "(REF ) can be generalized to: $\theta = \arg \min _{\theta } L_{pretrain}(\mathcal {T}_{tr}^\prime ;\theta ),$ where $L_{pretrain}$ is an arbitrary loss function to pretrain the GNN encoder $g_\theta $ .", "Then following Eq.", "(REF ), we can exploit a linear classifier to probe the transferred embeddings of nodes from novel classes, and perform the final node classification.", "In this paper, we thoroughly investigate Graph Contrastive Learning (GCL) as the pretraining strategy for TLP for two reasons: (1) GCL [17], [19], [20], [40], [41], [23] is a proven effective way to learn generalizable node representations in either a supervised or self-supervised manner.", "By maximizing the consistency over differently transformed positive and negative examples (termed views), GCL forces the GNNs to be aware of the semantic and topological knowledge as well as the injected perturbations on graphs.", "Trained on the global structures, GCL should be capable of addressing the piecemeal knowledge issue in meta-learning to increase the generalizability of the learned GNNs.", "Also, [42] summarizes the characteristics of GCL frameworks and empirically demonstrates the transferability of the learned representations.", "(2) GCL has no requirement for the base classes, which means GCL can be deployed even when the number of base classes is limited, or the nodes in base classes are unlabeled.", "The effectiveness of GCL highly relies on the contrastive loss function.", "There are two categories of contrastive loss functions for graphs: (1) Supervised Contrastive Loss ($L_{SupCon}$ ) [43], [44].", "(2) Self-supervised Contrastive Loss: Information Noise Contrastive Estimation ($L_{InfoNCE}$ ) [19], [20], [22] and Jensen-Shannon Divergence ($L_{JSD}$ ) [17], [18].", "We also consider a special GCL method, BGRL [21], which does not explicitly require negative examples.", "The framework for TLP with a representative supervised GCL method is provided in Fig.", "REF .", "From another perspective, our work is the first to focus on the extrapolation ability of GCL methods, especially under more extreme few-shot settings without labels for nodes in base classes." 
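A minimal sketch of the TLP pipeline for a single few-shot task is given below. It assumes `pretrained_gnn(graph)` returns frozen node embeddings and that the support/query pairs come from a task sampler as sketched earlier; logistic regression is used as the linear classifier, as stated later in the implementation details, but the helper names here are ours.

```python
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def tlp_predict(pretrained_gnn, graph, support, query):
    """Transductive linear probing for one few-shot task: freeze the
    pretrained GNN encoder, transfer node embeddings, fit a linear
    classifier on the N*K support nodes, and predict the query nodes."""
    emb = pretrained_gnn(graph).cpu().numpy()          # frozen embeddings
    s_nodes, s_labels = zip(*support)
    clf = LogisticRegression(max_iter=1000).fit(emb[list(s_nodes)], s_labels)
    q_nodes = [v for v, _ in query]
    preds = clf.predict(emb[q_nodes])
    acc = (preds == [c for _, c in query]).mean()
    return preds, acc
```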
], [ "Experimental Settings", "We conduct systematic experiments to compare the performance of meta-learning and TLP methods (with self-supervised and supervised GCL) on the few-shot node classification task.", "For meta-learning, we evaluate ProtoNet [30], MAML [31], Meta-GNN [34], G-Meta [10], GPN [8], AMM-GNN [9], and TENT [12].", "For TLP methods with both self-supervised and supervised forms, we evaluate MVGRL [17], GraphCL [18], GRACE [19], MERIT [20], and SUGRL [22].", "Moreover, BGRL [45] and I-GNN [16] are exclusively used for TLP methods with self-supervised GCL or supervised GCL, respectively.", "The detailed descriptions of these models can be found in Appendix .", "For comprehensive studies, we benchmark those methods on six prevalent real-world graph datasets: CoraFull [46], ogbn-arxiv [47], Coauthor-CS [48], Amazon-Computer [48], Cora [49], and CiteSeer [49].", "Specifically, each dataset is a connected graph and consists of multiple node classes for training and evaluation.", "A more detailed description of those datasets is provided in Appendix with their statistics and class split policies in Table REF in Appendix ." ], [ "Evaluation Protocol", "In this section, we specify the evaluation protocol used to compare both meta-learning based methods and TLP based methods.", "For an attributed graph dataset $\\mathcal {G} = (\\mathbf {A}, \\mathbf {X})$ with a divided node label space $\\mathbb {C} = \\lbrace \\mathbb {C}_{base}, \\mathbb {C}_{novel}$ (or $\\mathbb {C}_{test}$ )$\\rbrace $ , we split $\\mathbb {C}_{base}$ into $\\mathbb {C}_{train}$ and $\\mathbb {C}_{dev}$ (The split policy for each datasets are listed in Table REF ).", "For evaluation, given a GNN encoder $g_\\theta $ , a classifier $f_\\psi $ , the validation epoch interval $V$ , the number of sampled meta-tasks for evaluation $I$ , the epoch patience $P$ , the maximum epoch number $E$ , the experiment repeated times $R$ , and the $N$ -way, $K$ -shot, $M$ -query setting specification, the final FSNC accuracy $\\mathcal {A}$ and the confident interval $\\mathcal {I}$ (two mainly-concerned metrics) are calculated according to Algorithm REF given below.", "The default values of all those parameters are given in Table REF in Appendix .", "[] Unified Evaluation Protocol for Few-shot Node Classification [1] Graph $\\mathcal {G}$ , $\\mathbb {C}_{train}$ , $\\mathbb {C}_{dev}$ , $\\mathbb {C}_{test}$ ; GNN $g_\\theta $ , classifier $f_\\psi $ ; parameters $V$ , $I$ , $P$ , $E$ , $R$ , $N$ , $K$ , $M$ Trained models $g_\\theta $ and $f_\\psi $ , accuracy $\\mathcal {A}$ , confident interval $\\mathcal {I}$ .", "// Repeat experiment for $R$ times $r=1,2,\\cdots ,R$ $p\\leftarrow 1$ , $t\\leftarrow 1$ , $s_{best}\\leftarrow 0$ ; $t \\le E$ Optimize $g_\\theta $ based on the specific training strategy (i.e., meta-learning and TLP); // Training $t\\mod {V} =0$ Sample $I$ meta-tasks from $\\mathbb {C}_{dev}$ on $\\mathcal {G}$ ;// Validation Calculate the obtained few-shot node classification accuracy $s$ ; $s>s_{best}$ $s_{best}\\leftarrow s$ , $p\\leftarrow 0$ ; $p\\leftarrow p+1$ ; $p=P$ break; // Early Break Sample $I$ meta-tasks from $\\mathbb {C}_{test}$ on $\\mathcal {G}$ ;// Test Calculate the obtained classification accuracy $s_{test}$ ; $s_{r}\\leftarrow s_{test}$ , $r \\leftarrow r + 1$ ; Calculate averaged accuracy $\\mathcal {A}$ and confident interval $\\mathcal {I}$ based on $\\lbrace s_1,s_2,\\cdots ,s_r\\rbrace $ ; Table: The overall few-shot node classification results of meta-learning methods and TLP 
with various GCL methods under different settings.", "Accuracy ($\uparrow $ ) and confidence interval ($\downarrow $ ) are in $\%$ .", "The best and second best results are bold and underlined, respectively.", "OOM denotes out of memory." ], [ "Comparison", "Table REF presents the performance comparison of all methods on the few-shot node classification task.", "Specifically, we give results under four different few-shot settings to exhibit a more comprehensive comparison: 5-way 1-shot, 5-way 5-shot, 2-way 1-shot, and 2-way 5-shot.", "More results are given in Appendix .", "We choose the average classification accuracy and the 95% confidence interval over $R$ repetitions as the evaluation metrics.", "From Table REF , we make the following observations: TLP methods consistently outperform meta-learning methods, which indicates the importance of transferring comprehensive node representations in FSNC tasks.", "In TLP methods, the model is forced to extract node-level structural information, while the meta-learning methods mainly focus on label information.", "As a result, TLP methods can transfer better node representations and exhibit superior performance on meta-test tasks.", "Even without using any label information from base classes, TLP with self-supervised GCL methods can mostly outperform TLP with supervised GCL methods.", "This signifies that directly injecting supervision can potentially hinder the generalizability for TLP, which is further investigated in the following sections.", "Increasing the number of shots $K$ (i.e., the number of labeled nodes in the support set) has a more significant effect on the performance of both forms of TLP methods than on meta-learning methods.", "This is due to the fact that with the additional support nodes, TLP with GCL can provide more informative node representations to learn a more powerful classifier.", "Instead, the meta-learning methods are based on the extracted label information and thus cannot benefit from additional node-level information.", "Most TLP methods encounter the OOM (out of memory) problem when applied to the ogbn-arxiv dataset.", "This is due to the fact that the contrastive strategy in TLP methods consumes more memory than traditional supervised learning.", "Thus, the scalability problem is not negligible for TLP with GCL methods.", "BGRL [45] exhibits less competitive performance compared with other TLP methods with self-supervised GCL.", "The result indicates that negative samples are important for self-supervised GCL in FSNC, which can help the model exploit node-level information.", "Nevertheless, without the requirement of negative samples, BGRL can be parallelized more easily to handle the OOM problem." 
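The reported numbers above are the mean accuracy and a 95% confidence interval over the $R$ repetitions of the evaluation protocol. The exact interval formula is not spelled out in the text, so the snippet below shows one standard way (a t-based interval) such numbers could be computed; it is an assumption, not necessarily the authors' exact computation.

```python
import numpy as np
from scipy import stats

def summarize_runs(accs, confidence=0.95):
    """Mean few-shot accuracy over R runs and the half-width of its
    confidence interval (reported as 'accuracy +/- interval')."""
    accs = np.asarray(accs, dtype=float)
    half = stats.sem(accs) * stats.t.ppf((1 + confidence) / 2, len(accs) - 1)
    return accs.mean(), half
```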
], [ "Further Analysis", "To explicitly compare the results between meta-learning and TLP and between two forms of TLP, we provide further results of all methods on various $N$ -way $K$ -shot settings in Fig.", "REF and Fig.", "REF .", "From the results, we can obtain the following observations: blackWhen a larger values of $N$ is presented, the performance drop is less significant on TLP based methods compared to meta-learning based methods.", "The performance of all methods degrades as $N$ increases (i.e., more classes in each meta-task).", "With a larger $N$ , the variety of classes in each meta-task can result in a more complex class distribution and thus increase the classification difficulties.", "Nevertheless, the performance drop is less significant on TLP with both forms of GCL methods.", "This is because the utilized GCL methods focus more on node-level structural patterns, which incorporate more potentially useful information for classification.", "As a result, TLP is more capable of alleviating the problem of difficult classification caused by a larger $N$ .", "As shown in Fig.", "REF , the performance improvement of TLP with self-supervised GCL methods over meta-learning methods on CiteSeer is generally more impressive than other datasets.", "The main reason is that CiteSeer bears a significantly smaller class set (2/2/2 classes for $\\mathbb {C}_{train}$ /$\\mathbb {C}_{dev}$ /$\\mathbb {C}_{test}$ ).", "In consequence, the meta-learning methods cannot effectively leverage the supervision information during training.", "Nevertheless, TLP with self-supervised GCL can extract useful structural information for better generalization performance.", "Figure: NN-way KK-shot results on CoraFull, meta-learning and TLP.", "TLP Methods with *\\ast are based on supervised GCL methods and I-GNN.Figure: 2-way KK-shot results on CiteSeer and Amazon-Computer, meta-learning and two forms of TLP.", "TLP Methods with *\\ast are based on supervised GCL methods and I-GNN." 
], [ "Effect of Supervision Information in Base Classes", "In this section, we further investigate the effectiveness of the supervised information in TLP with supervised GCL methods.", "Specifically, we leverage a combined loss black$L_{JointCon}=\\lambda L_{SelfCon} + (1-\\lambda )L_{SupCon}$ , where $L_{SelfCon}$ indicates a self-supervised GCL loss, either $L_{JSD}$ or $L_{InfoNCE}$ according to the models, and $L_{JointCon}$ is a mixture of supervised GCL loss and self-supervised GCL loss.", "In this way, we can gradually adjust the value of $\\lambda $ to inject different levels of supervision signals into GCL and then observe the performance fluctuation.", "Note that due to the unstable training curve brought by the joint loss $L_{JointCon}$ , we increase the epoch patience number from $P$ to $2P$ to ensure convergence.", "The results on Cora dataset (we observe similar results on other datasets) with different values of $\\lambda $ are provided in Fig.", "REF .", "From the results, we can obtain the following observations: In general, the classification performance increases with a larger value of $\\lambda $.", "In other words, directly injecting supervision information into GCL for TLP will usually reduce the performance on few-shot node classification tasks.", "Nevertheless, carefully injecting supervision information can slightly increase the accuracy by choosing a suitable value of $\\lambda $ .", "On the other hand, the results also verify that the TLP framework can still achieve considerable performance without any explicit restrictions for base classes.", "Even with a relatively small value of $\\lambda $ (e.g., 0.1), the performance improvement over TLP with totally supervised GCL (i.e., $\\lambda =0.0$ ) is still significant.", "That being said, the contrastive strategy that leverages graph structures can provide better performance by providing comprehensive node representations.", "Figure: Results on dataset Cora (2-way)" ], [ "Evaluating Learned Node Representations on Novel Classes", "In this section, we further validate the quality of the learned node representations from different training strategies.", "Particularly, we leverage two prevalent clustering evaluation metrics: normalized mutual information (NMI) and adjusted random index (ARI), on learned node representations clustered based on K-Means.", "We evaluate the representations learned from two datasets CoraFull and CiteSeer for a fair comparison.", "The results are presented in Table REF in Appendix REF .", "Based on the results, we can obtain the following observations: The meta-learning methods typically exhibit inferior NMI and ARI scores compared with both forms of TLP.", "This is because meta-learning methods are dedicated for extracting supervision information from node samples and thus cannot fully utilize node-level structural information.", "In general, TLP with self-supervised GCL methods can result in larger values of both NMI and ARI scores than TLP with supervised GCL.", "This is due to the fact that the self-supervised GCL model focuses more on extracting structural information without the interruption of label information.", "As a result, the learned node representations are more comprehensive and thus exhibit superior clustering performance.", "The difference of NMI and ARI scores between meta-learning and TLP is more significant on CiteSeer than CoraFull.", "This phenomenon potentially results from the fact that CiteSeer consists of fundamentally fewer classes than CoraFull.", "In consequence, 
for CiteSeer, the meta-learning methods will largely rely on label information instead of node-level structural information for classification.", "Figure: The t-SNE visualization results.", "Figs. (a)-(f) are for dataset CoraFull (5-way).", "Figs. (g)-(h) are for dataset CiteSeer (2-way).", "TLP methods with $\ast $ are based on supervised GCL methods." ], [ "Visualization", "To provide an explicit comparison of different baselines, we visualize the learned node representations from CoraFull and CiteSeer via the t-SNE algorithm, where colors denote different classes.", "It is noteworthy that for clarity, we randomly select five classes from $\mathbb {C}_{test}$ for the visualization.", "The results are provided in Fig.", "REF (more results are included in Fig.", "REF ).", "Specifically, we discover that: TLP with self-supervised GCL generally outperforms TLP with supervised GCL.", "This is because without learning label information, TLP with self-supervised GCL can concentrate on node representation patterns, which are easier to transfer to target unseen novel classes.", "The learned node representations are less discriminative for meta-learning on CiteSeer compared with CoraFull.", "This is because CiteSeer contains fewer classes, which means the node representations learned by meta-learning methods will be less informative, since they are only required to classify nodes from a small class set." ], [ "Conclusion, Limitations, and Outlook", "In this paper, we propose TLP as an alternative framework to meta-learning for FSNC tasks.", "First, we provide a motivating example, a vanilla intransigent GNN model, to validate our postulation that a generalizable GNN encoder is the key to FSNC tasks.", "Then, we provide a formal definition for TLP, which transfers node embeddings from GCL pretraining as an alternative to the prevailing meta-learning paradigm.", "We conduct comprehensive experiments and compare various meta-learning based and TLP-based methods under the same protocol.", "Our rigorous empirical study reveals several interesting findings on the strengths and weaknesses of the two approaches and identifies that adaptability and scalability are the promising directions for meta-learning based and TLP-based methods, respectively.", "However, due to limited space, several limitations of our work need to be acknowledged.", "Limited design considerations.", "Even though an exhaustive survey on FSNC or GCL is out of the scope of this work, we do not provide a more fine-grained comparison on model details, such as different GNN encoders or various transformations during GCL pretraining.", "Also, we only consider methods applied on a single graph, which are currently the mainstream of research on FSNC.", "There are more recent works (e.g., [50]) studying FSNC across multiple graphs.", "Lack of theoretical justifications.", "Our findings are based on empirical studies, which cannot disclose the underlying mathematical mechanisms of those methods, such as performance guarantees for transferring node embeddings from different GCL methods.", "How to address these limitations is left as future work.", "Note that the observations drawn from the experiments here are not conclusive.", "We only cover existing methods in this work and hope it inspires the development of meta-learning based FSNC methods that can outperform TLP based methods, or better ways to utilize labels in TLP methods.", "In broader terms, this work lies at the confluence of graph few-shot learning and graph 
contrastive learning.", "We hope this work can facilitate the sharing of insights for both communities.", "On the one hand, we hope our work provides a necessary yardstick to measure progress across the FSNC field.", "On the other hand, our work should have exhibited several practical guidelines for future research in both vigorous fields.", "For example, the meta-learning community can get inspired by GCL to learn more transferable graph patterns.", "Also, few-shot TLP can serve as a new metric to evaluate the extrapolation ability of GCL methods." ], [ "Acknowledgements", "This work is supported by the National Science Foundation under grants IIS-2006844, IIS-2144209, IIS-2223769, IIS-2229461, CNS-2154962, and BCS-2228534, the Army Research Office (ARO) W911NF2110030, the Office of Naval Research N00014-21-1-4002, the JP Morgan Chase Faculty Research Award, the Cisco Faculty Research Award, and the Jefferson Lab subcontract JSA-22-D0311." ], [ "Default Values of Parameters in Evaluation Protocol", "In this section, we provide the default values of parameters used in our experiments.", "The details are provided in Table REF .", "It is noteworthy that the parameters are consistent for all models in both meta-learning and TLP methods.", "For the experiments that utilize a joint loss of TLP with self-supervised GCL and supervised GCL, we increase the patience number from $P$ to $2P$ to ensure convergence.", "Table: Default Values of Parameters in Evaluation Protocol for Experiments" ], [ "Description of Baselines", "In this section, we provide further details about the baselines used in our experiments.", "Meta-learning based methods: ProtoNet [30]: ProtoNet learns a prototype for each class in meta-tasks by averaging the embeddings of samples in this class.", "Then it conducts classification on query instances based on their distances to prototypes.", "MAML [31]: MAML first optimizes model parameters according to the gradients calculated on the support instances for several steps.", "Then it meta-updates parameters based on the loss of query instances calculated with the parameters updated on support instances.", "Meta-GNN [34]: Meta-GNN combines GNNs with the MAML strategy to apply meta-learning on graph-structured data.", "Specifically, Meta-GNN learns node embeddings with GNNs, while updating and meta-updating the GNN parameters based on the MAML strategy.", "G-Meta [10]: G-Meta extracts a subgraph for each node to learn the node representation with GNNs.", "Then it conducts the classification on query nodes based on the MAML strategy to update and meta-update the parameters of GNNs.", "GPN [8]: GPN proposes to learn node importance for each node in meta-tasks to select more beneficial nodes for classification.", "Then GPN utilizes ProtoNet to learn node prototypes via averaging node embeddings in a weighted manner.", "AMM-GNN [9]: AMM-GNN proposes to extend MAML with an attribute matching mechanism.", "Specifically, the node embeddings will be adjusted according to the embeddings of nodes in the entire meta-task in an adaptive manner.", "TENT [12]: TENT reduces the variance among different meta-tasks for better generalization performance.", "In particular, TENT learns node and class representations by conducting node-level and class-level adaptations.", "It also incorporates task-level adaptations that maximizes the mutual information between the support set and the query set.", "Transductive Linear Probing with different Pretraining methods: I-GNN [16]: I-GNN learns a GNN encoder with 
a classifier that is trained on all base classes $\mathbb {C}_{base}$ with the vanilla Cross-Entropy loss $L_{CE}$ .", "Then for each meta-test task, the GNN will be frozen and a new classifier is learned based on the support set for classification.", "MVGRL [17]: MVGRL learns node and graph level representations by contrasting the representations of two structural views of graphs, which include first-order neighbors and a graph diffusion.", "It utilizes a Jensen-Shannon Divergence based contrastive loss $L_{JSD}$ .", "GraphCL [18]: GraphCL proposes to leverage combinations of different transformations in GCL to equip GNNs with generalizability, transferability, and robustness without sophisticated architectures.", "It also uses $L_{JSD}$ as the objective.", "GRACE [19]: GRACE proposes a hybrid scheme for generating different graph views on both structure and attribute levels.", "GRACE further provides theoretical justifications behind the motivation.", "It proposes a variant of Information Noise Contrastive Estimation $L_{InfoNCE}$ as the contrastive loss.", "MERIT [20]: MERIT employs two different objectives named cross-view and cross-network contrastiveness to further maximize the agreement between node representations across different views and networks.", "It uses $L_{InfoNCE}$ similar to that in GRACE as the loss function.", "SUGRL [22]: SUGRL proposes to simultaneously enlarge inter-class variation and reduce intra-class variation.", "The experimental results show promising improvements in generalization error with SUGRL.", "It also uses $L_{InfoNCE}$ similar to that in GRACE as the loss function.", "BGRL [45]: BGRL leverages the concept of BYOL [51] and applies it to graph-structured data by enforcing the agreement between positive views without any explicit design of negative views.", "Specifically, it uses the Mean Squared Error $L_{MSE}$ between positive views as the final loss." 
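As a reference for the self-supervised losses mentioned above, the snippet below sketches a generic InfoNCE-style objective between two augmented views of the same set of nodes. It illustrates the loss family used by GRACE, MERIT, and SUGRL rather than the exact loss of any single method.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Generic InfoNCE-style contrastive loss: z1 and z2 are [n, dim] node
    embeddings from two augmented views; row i of each view is the positive
    pair and all other rows act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # cosine similarities / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```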
], [ "Description of Benchmark Datasets", "In this section, we provide the detailed descriptions of the benchmark datasets used in our experiments.", "All the datasets are public and available on both PyTorch-Geometric [52] and DGL [53].", "CoraFull [46] is a citation network that extends the prevalent small cora network.", "Specifically, it is achieved from the entire citation network, where nodes are papers, and edges denote the citation relations.", "The classes of nodes are obtained based on the paper topic.", "For this dataset, we use 40/15/15 node classes for $\\mathbb {C}_{train}$ /$\\mathbb {C}_{dev}$ /$\\mathbb {C}_{test}$ .", "ogbn-arxiv [47] is a directed citation network that consists of CS papers from MAG [54].", "Here nodes represent CS arXiv papers, and edges denote the citation relations.", "The classes of nodes are assigned based on the 40 subject areas of CS papers in arXiv.", "For this dataset, we use 20/10/10 node classes for $\\mathbb {C}_{train}$ /$\\mathbb {C}_{dev}$ /$\\mathbb {C}_{test}$ .", "Coauthor-CS [48] is a co-authorship graph based on the Microsoft Academic Graph from the KDD Cup 2016 challenge.", "Here, nodes are authors, and are connected by an edge if they co-authored a paper; node features represent paper keywords for each author’s papers, and class labels indicate most active fields of study for each author.", "For this dataset, we use 5/5/5 node classes for $\\mathbb {C}_{train}$ /$\\mathbb {C}_{dev}$ /$\\mathbb {C}_{test}$ .", "Amazon-Computer [48] includes segments of the Amazon co-purchase graph [55], where nodes represent goods, edges indicate that two goods are frequently bought together, node features are bag-of-words encoded product reviews, and class labels are given by the product category.", "For this dataset, we use 4/3/3 node classes for $\\mathbb {C}_{train}$ /$\\mathbb {C}_{dev}$ /$\\mathbb {C}_{test}$ .", "Cora [49] is a citation network dataset where nodes mean paper and edges mean citation relationships.", "Each node has a predefined feature with 1,433 dimensions.", "The dataset is designed for the node classification task.", "The task is to predict the category of certain paper.", "For this dataset, we use 3/2/2 node classes for $\\mathbb {C}_{train}$ /$\\mathbb {C}_{dev}$ /$\\mathbb {C}_{test}$ .", "CiteSeer [49] is also a citation network dataset where nodes mean scientific publications and edges mean citation relationships.", "Each node has a predefined feature with 3,703 dimensions.", "The dataset is designed for the node classification task.", "The task is to predict the category of certain publication.", "For this dataset, we use 2/2/2 node classes for $\\mathbb {C}_{train}$ /$\\mathbb {C}_{dev}$ /$\\mathbb {C}_{test}$ ." 
], [ "Implementation Details", "In this section, we introduce the implementation details for all methods compared in our experiments.", "Specifically, for the encoders used in TLP methods, we follow the settings in the original papers of the corresponding models to ensure consistency, and we choose Logistic Regression as the linear classifier for the final classification.", "For encoders in meta-learning methods, we utilize the original designs for papers using GNNs.", "For papers without using GNNs (i.e., ProtoNet [30] and MAML [31]), we use a two-layer GCN [1] as the encoder with a hidden size of 16.", "We utilize the Adam optimizer [56] for all experiments with a learning rate of 0.001.", "To effectively initialize the GNNs in our experiments, we leverage the Xavier initialization [57].", "For meta-learning methods using the MAML framework, we set the number of meta-update steps as 20 with a meta-learning rate of 0.05.", "To ensure more stable convergence in meta-learning methods, we set the weight decay rate as $10^{-4}$ .", "We set the dropout rate as 0.5 for better generalization performance.", "The evaluation protocol parameters are provided in Table REF .", "All experiments are implemented using PyTorch [58].", "We run all experiments on a single 80GB Nvidia A100 GPU." ], [ "Visualization", " In this section, we provide additional visualization results for more meta-learning and TLP methods on CoraFull dataset in Fig.", "REF .", "Figure: The t-SNE visualization results of meta-learning and TLP methods on CoraFull.", "TLP methods with ** are based on supervised GCL methods." ], [ "Node Representation Evaluation", "In this section, we provide the detailed node representation evaluations on two datasets CoraFull and CiterSeer based on NMI and ARI scores in Table REF .", "Table: The overall NMI (↑\\uparrow ) and ARI (↑\\uparrow ) results of meta-learning and TLP methods on two datasets" ], [ "Main Results for the Other Three Datasets or Other Settings", "In this section, we further provide results for the other three datasets used in our experiments: Coauthor-CS, Amazon-Computer, and Cora, and 2-way classification results on CoraFull, ogbn-arxiv, and Coauthor-CS: Table: The overall few-shot node classification results of meta-learning methods and TLP with different GCL methods under different settings.", "Accuracy (↑\\uparrow ) and confidence interval (↓\\downarrow ) are in %\\%.", "The best and second best results are bold and underlined, respectively.Table: The overall few-shot node classification results of meta-learning methods and TLP with different GCL methods under different settings.", "Accuracy (↑\\uparrow ) and confidence interval (↓\\downarrow ) are in %\\%.", "The best and second best results are bold and underlined, respectively." ] ]
2212.05606
[ [ "Physics-informed data-driven prediction of Jet A-1 spray characteristics\n using time-resolved flame chemiluminescence and sparse Mie scattering" ], [ "Abstract A time-lag and linear regression-based framework is developed and its performance is assessed for predicting temporally resolved spray number of droplets using flame chemiluminescence and a sparse number of droplets data.", "Separate pressure, interferometric laser imaging for droplet sizing, shadowgraphy, and flame chemiluminescence are performed for the spray characterization.", "Simultaneous 10 kHz flame chemiluminescence and 0.2 Hz Mie scattering measurements are performed for the purposes of the framework development and the number of droplets prediction.", "Both methane and/or Jet A-1 are used in the experiments.", "Three conditions corresponding to perfectly premixed methane and air, Jet A-1 spray, and Jet A-1 spray in premixed methane and air flames are examined.", "For all test conditions, the fuels and air flow rates are adjusted to produce a fixed power of 10 kW.", "The results show that the frequency of the spatially averaged flame chemiluminescence as well as the number and the mass of the droplets (for both reacting and non-reacting conditions) oscillations frequencies match; however, these frequencies do not match that of the pressure fluctuations.", "This suggests that the flame chemiluminescence dynamics is driven by the fuel injection system.", "For signals with matching frequency content, a data-driven framework is developed for predicting an objective signal (the spray number of droplets) using an input signal (the flame chemiluminescence).", "The performance of the developed framework is assessed for tested spray conditions and the predicted number of droplets agrees well with those measured.", "For gas turbine engine combustion research, the developed framework is of importance, as it facilitates understanding the time-resolved spray characteristics for instances that the spray data is available sparsely." 
], [ "Introduction", "The modus operandi of existing civil aviation engines is turbulent spray combustion.", "Spray flames feature complex interactions between turbulence, liquid fuel atomization and transport, as well as combustion chemistry.", "Such complex interactions allow for the presence of several pathways for thermoacoustic coupling [1], [2], [3], [4], which are detrimental to the engine operation and can lead to poor combustion emissions and, sometimes, system failure [5].", "Despite several thermoacoustic-related investigations have been performed in the past decades and many review papers have been published (see for example [3], [5], [4], [6], [7], [8]), our understanding of the coupling between sprays and their flames remains to be further developed.", "This is challenging, as numerous thermo-fluidic parameters are required to be measured with large spatio-temporal resolutions.", "This challenge is further accentuated for multi-nozzle configurations [9], [10], [11] and/or at high-pressure conditions [12], [13], [14].", "Acknowledging the importance of the technical efforts made to address the above experimental challenges, data-driven approaches may be developed and implemented to predict information that is difficult to acquire or missing from the experiments.", "The present study is motivated by the need for developing and assessing a data-driven framework that allows for predicting the spray flames characteristics.", "For perfectly premixed flames, the pressure oscillations inside the combustion chamber can lead to the oscillations of the injected fuel and air mixture, which is usually followed by vortex shedding or deformation of a helical vortex structure inside the combustion chamber [15], [16], [17].", "This is accompanied by periodic variations of the flame surface density, and as a result, periodic heat release rate oscillations [18].", "For technically-premixed flames (or partially premixed flames), the thermoacoustic coupling may occur due to the periodic spatial and temporal variations of the fuel-air equivalence ratio [19], [20], [21], [22].", "In addition to the above thermo-fluidic pathways, the structural oscillations can also create a feedback mechanism for thermoacoustic coupling of perfectly and technically premixed flames, as discussed in [23], [24], [25].", "The thermoacoustic coupling can be quantified using the Rayleigh gain, which requires time/phase resolved information related to the pressure and the heat release rate oscillations [26], [27], [28], [29].", "Of importance is measurement of the heat release rate; and, for perfectly premixed and atmospheric flames, experimental observables such as the planar laser-induced fluorescence of $\\mathrm {OH} \\times \\mathrm {CH_2O}$ , $\\mathrm {H} \\times \\mathrm {CH_2O}$ , $\\mathrm {HCO}$ , and $\\mathrm {CH}$ may be used for quantifying the heat release rate [30], [31], [32], [33], [34], [35], [36].", "For atmospheric Jet A-1 spray flames subjected to self-excited thermoacoustic oscillations, Apeloig et al.", "[37] performed high-speed simultaneous pressure and planar laser-induced fluorescence of both Kerosene and OH.", "They [37] utilized an air-blast injector that created a liquid film, which was atomized and carried into the combustion chamber.", "Their results showed that the air flow rate fluctuations alter the trajectory of the atomized droplets, and this alteration created a pulsation in the spray.", "For a high-pressure combustor, Kheirkhah et al.", "[13] employed high-speed and times-resolved 
pressure, flame chemiluminescence, and Stereoscopic-Particle Image Velocimetry.", "Their results [13] showed that the spray velocity features fluctuations at dominant frequencies similar to those of the pressure and heat release rate.", "Recently, Passarelli et al.", "[38] studied self-excited oscillations of Jet A-1 at high pressure.", "Their results showed that, while the flame chemiluminescence and Jet A-1 spray may feature large amplitude fluctuations at a matching frequency, the dominant frequency of pressure oscillations may be different from those of the spray and flame chemiluminescence.", "Although past experimental studies related to the thermoacoustics of spray flames are of significant importance, as they elucidate the underlying coupling mechanisms in combustors operating with liquid fuels, the spray is often studied qualitatively.", "For example, the spatially integrated fuel droplets Mie scattering signal is used [13] to understand the spray dynamics.", "Although such qualitative characterization of the spray is important for understanding the thermoacoustic coupling in liquid fueled combustors, alternative experimental observables, such as the number of droplets and/or the liquid fuel mass inside an illuminated volume/plane, may be used to study the spray characteristics quantitatively.", "The objective of the present study is to quantify the temporal variation of the spray number of droplets inside a plane using a data-driven framework.", "Specifically, we aim to utilize time-resolved flame chemiluminescence and sparse Mie scattering data along with a data-driven framework (which is developed here) to predict the variation of the spray number of droplets with time.", "In the following, the experimental methodology, the spray flame characteristics, the prediction framework, and the results are presented in sections –, respectively.", "The conclusions are summarized in section ." ], [ "Experimental methodology", "Details of the utilized experimental setup, diagnostics, and data reduction are elaborated in this section." 
], [ "Experimental setup", "The experimental setup refers to the fuel and air delivery system as well as the utilized burner.", "Air was provided by an Atlas Copco compressor; and, the air flow rate was controlled using an Alicat 5000 MCRH.", "A gaseous fuel (methane) and a liquid fuel (Jet A-1) were utilized in the present study.", "Grade 2.0 methane (99$\\%$ chemical purity) was provided from a pressurized bottle; and, its flow rate was controlled using a Brooks SLA5853.", "As demonstrated in Fig.", "REF , both air and methane were fed into a mixing chamber (see the green cylinder, item 1).", "The utilized Jet A-1 density at standard pressure and temperature conditions was measured and equals 812.0 $\\mathrm {kg/m^3}$ .", "The minimum flash point, the freezing point, and the distillation end point of the utilized Jet A-1 are 38, -47, and 300$^\\mathrm {o}\\mathrm {C}$ , respectively.", "The maximum aromatic and sulfur concentrations of the utilized Jet A-1 are 25% by volume and 0.3% by mass, respectively.", "The above temperatures and concentrations were provided by the fuel producer.", "Grade 5.0 (99.999% chemical purity) nitrogen was purged into a pressurized vessel that carried Jet A-1, using the bottle shown as item 2 in Fig.", "REF .", "During each experiment, the pressure of nitrogen in the fuel vessel was fixed using an Alicat dual-valve pressure controller (see item 3 in Fig.", "REF ).", "The spray flow rate was calibrated, with details of the calibration procedure provided in Appendix A.", "Figure: The layout of the utilized experimental setup and diagnostics.", "Items 1–3 are a mixing chamber, a nitrogen bottle, and a pressure controller, respectively.", "Items 4 and 5 are the flow conditioning equipment and the gas turbine model combustor, respectively.", "Items 6–8 are a camera, an intensifier, and a UV lens equipped with a bandpass filter, respectively.", "Items 9, 10-12, 13, and 14 are a laser, sheet forming optics, a camera, and a lens, respectively.", "Item 15 is a high-precision rotational stage.The mixture of methane and air was provided at the bottom to the flow conditioning equipment, see item 4 in Fig.", "REF .", "For clarity, the flow conditioning equipment and the combustor details are presented in Fig.", "REF .", "As shown in the figure, the mixture of methane and air enters a diffuser section (with an area ratio of 4:1).", "This is followed by a settling chamber, which is equipped with 5 equally spaced mesh screens.", "The details of the diffuser section and the settling chamber are identical to those used in [39], [40], [41].", "Downstream of the settling chamber, a gas turbine model combustor, which includes a plenum and a combustion chamber are installed, as shown in Fig.", "REF .", "The plenum and the chamber are originally designed by Turbomeca and are similar to those presented in Weigand et al.", "[42] for gaseous fuels combustion.", "In Weigand et al.", "[42], the plenum carries a conical bluff-body; however, in the present study and to accommodate for the spray injection, similar to Wang et al.", "[43], the conical bluff-body was replaced by a Delavan pressure swirl atomizer (see Fig.", "REF ).", "The combustion chamber is equipped with 4 fused silica windows for optical accessibility.", "A Cartesian coordinate system was used in the present study.", "The origin of the coordinate system is at the exit plane of the spray injector and the injector centerline.", "The $y$ –axis of the coordinate system coincides with the chamber centerline; and, the $x$ 
–axis is normal to the $y$ –axis and the combustion chamber side walls.", "Figure: The burner, which is composed of a diffuser section, a settling chamber, a plenum, and a combustion chamber.", "The inset on the right-hand-side presents the combustor, which includes the plenum and the combustion chamber.", "The Mie scattering, Shadowgraphy, and ILIDS fields of view are shown by the blue, yellow, and green squares, respectively.", "The flame chemiluminescence field of view is shown by the red circle." ], [ "Diagnostics", "Simultaneous plenum pressure, high-speed flame chemiluminescence, and low-speed Mie scattering data were collected for the purposes of developing and testing the framework of the present study.", "For the above simultaneous measurements, the acquisition frequency of pressure, flame chemiluminescence, and Mie scattering measurements are 100000, 10000, and 0.2 Hz respectively.", "For each test condition (elaborated later in this section), 20 datasets of pressure, flame chemiluminescence, and Mie scattering were collected every 5 s, with the details of the acquisition timing shown in Fig.", "REF .", "In the figure, $n_\\mathrm {Mie}$ , $n_\\mathrm {CL}$ , and $n_\\mathrm {p}$ correspond to the number of the collected Mie scattering image, flame chemiluminescence image, and pressure data, respectively.", "As shown by either of the red dashed lines in the figure, the Mie scattering image (see the light and dark green signals) is simultaneously acquired with one chemiluminescence image (see the light and dark blue signals) and one pressure data (see the black signal) for each dataset.", "Additionally, 250 chemiluminescence images before as well as 250 chemiluminescence images after the above simultaneously collected chemiluminescence image is acquired, see Fig.", "REF .", "Also, 4 pressure data before and 5 pressure data after the above simultaneously collected pressure data was acquired.", "It is important to highlight that, for each dataset, the flame chemiluminescence and pressure measurements are time-resolved; however, the Mie scattering measurement is not.", "In addition to the above simultaneous measurements, separate pressure, flame chemiluminescence, shadowgraphy, and Interferometric Laser Imaging for Droplet Sizing (ILIDS) were also performed to characterize the spray flames.", "The data acquisition frequency for the separate pressure, chemiluminescence, shadowgraphy, and ILIDS were 100000, 10000, 10000, and 5 Hz, with the data collection durations of 100, 0.5, 0.5, and 100 s, respectively.", "For the purposes of spectral analysis, the separate flame chemiluminescence and shadowgraphy measurements were repeated 6 times.", "Further details regarding the above diagnostics and the reduction of the collected data are presented in the following.", "Figure: The timing between the synchronized Mie scattering, flame chemiluminescence, and pressure measurements.", "The light and dark green lines present the laser excitation and camera exposure timing, respectively, for the Mie scattering measurements.", "The light and dark blue lines present the timing signal for the intensifier gate and the chemiluminescence camera exposure, respectively.", "The black lines represent the timing for the pressure measurements." 
], [ "Pressure measurements", "A pressure transducer (Model 106B52 from PCB Piezotronics) was installed flush mount with the plenum wall, see Fig.", "REF .", "A National Instruments PCIe 6361 data acquisition system was used to collect the voltage generated from the pressure transducer.", "The transducer features a sensitivity of 716.3 mV/kPa, which was used to convert the collected voltage to pressure data." ], [ "Flame chemiluminescence", "The hardware for acquiring the flame chemiluminescence includes a high-speed camera (Photron Fastcam Nova S12, item 6 in Fig.", "REF ) and a high-speed image intensifier (Invisible vision UVi 1850B, item 7 in Fig.", "REF ).", "A UV Nikon lens (focal length of 105 mm and aperture number of 1.4) was equipped with a bandpass filter (center wavelength of 310 nm and bandpass width of 20 nm) and was mounted on the intensifier.", "The lens and the bandpass filter are shown as item 8 in Fig.", "REF .", "Although the center wavelength of the bandpass filter is close to the $\\mathrm {OH^*}$ emission wavelength, the collected signal from items 6–8 has contributions from emissions of $\\mathrm {CO_2^*}$ as well as $\\mathrm {C_2^*}$ .", "Passarelli et al.", "[38] indicated while the chemiluminescence signal collected near the $\\mathrm {CH^*}$ band featured significant contributions from broadband emissions (such as $\\mathrm {C_2^*}$ ), the chemiluminescence signal collected near the $\\mathrm {OH^*}$ emission band (similar to the present study) did not feature significant contributions from the broadband emissions in their investigation.", "Nonetheless, in the present study, the collected chemiluminescence signal is not deemed as an accurate indicator of the heat release rate, may only qualitatively relate to the exothermic processes inside the combustion chamber, and may be used for understanding the flame dynamics, similar to those in [44], [13].", "For both separate and simultaneous flame chemiluminescence measurements reported here, the camera exposure time and the intensifier gate were set to 25 and 20 $\\mu $ s, respectively, as shown in Fig.", "REF .", "The intensifier gain was set to 40% for all measurements.", "The raw chemiluminescence images were subtracted by the mean of 10000 background images.", "The background subtracted chemiluminescence images were normalized by a Whitefield, which was collected using an NL-360ARC Neewer LED lamp.", "Then, the images were denoised using an $11 \\times 11$ median-based filter.", "The pixel size for the chemiluminescence images was 68.4 $\\mu $ m. However, the effective spatial resolution was determined using the USAF 1951 target plate; and, this resolution was 198.4 $\\mu $ m as discussed in Appendix B.", "The flame chemiluminescence field of view was a circle, with a diameter of 75.5 mm.", "Its center was positioned at $x = 0$ and $y=38.0$  mm, see Fig.", "REF .", "Further details regarding the flame chemiluminescence hardware and the data reduction procedure can be found in [45], [25]." 
], [ "Mie scattering", "The Mie scattering hardware includes an Nd:YAG laser (Lab-Series-170 laser from Spectra Physics, item 9 in Fig.", "REF ), sheet forming optics (10–12), as well as a camera and its collection optics (items 13 and 14).", "The beam produced by the laser has a wavelength of 1064 nm, which was converted by a second harmonic generator to a 532 nm wavelength and an 8 mm in diameter beam.", "The beam energy was measured using model QE25LP-S-MB-QED-INT-D0 from Gentec Electro-Optics.", "At maximum power, the mean and standard deviation of the 532 nm laser beam energy were about 457.2 and 5.6 mJ per pulse.", "In order to avoid saturation and maximize the quality of the collected Mie scattering image, the laser was operated at 1% of its maximum power.", "The sheet forming optics included a plano-concave cylindrical lens (item 10 in Fig.", "REF with a focal length of -100 mm, a plano-convex cylindrical lens with a focal length of 500 mm (item 11), and a plano-convex cylindrical lens with a focal length of 1000 mm (item 12).", "The above optics facilitated the generation of a collimated laser sheet, with its centerline positioned at $y = 40$  mm.", "The camera is the Andor's Zyla 5.5 sCMOS, which was equipped with a Macro Sigma lens.", "The lens focal length is 105 mm, and the lens aperture number was set to 2.8.", "Finally, a bandpass filter with a center wavelength and Full Width at Half Maximum (FWHM) of 532 and 20 nm was mounted on the camera lens.", "The Mie scattering field of view spans the width of the combustion chamber, and its vertical extent is limited between $y = 20$ and 60 mm.", "The lower extent of the Mie scattering field of view was selected to minimize reflections from the spray injector in the Mie scattering images.", "The effective resolution of the Mie scattering images was 28.8 $\\mu $ m as discussed in Appendix B.", "The Mie scattering images were pre-processed to obtain the spray number of droplets.", "First, 500 images were collected and averaged (referred to as the background image) when the laser was turned off but the spray was lit.", "A raw Mie scattering image (corresponding to the test condition of J100M0) subtracted by the above background is shown in Fig.", "REF (a).", "After the background subtraction, the results were binarized (procedure 1), see Fig.", "REF (b).", "Analysis of the shadowgraphy images (discussed later) suggests that the spray droplets are rather spherical and relatively small.", "Thus, structures that are not circular and/or relatively large are not droplets and were removed from the Mie scattering data.", "Two separate procedures, see (2a) and (2b) in Fig.", "REF , were followed to identify large as well as small and non-spherical structures, respectively.", "A labeling algorithm in MATLAB was used to identify the former type of the structures, which are shown by the yellow color in Fig.", "REF (c).", "As for the latter type of the structures, first, an equivalent diameter which equals the mean of the structure width and height was obtained.", "Then, the area of a circle with the above equivalent diameter was calculated.", "For structures with their shape close to a circle, the calculated area is close to the area of the structure.", "However, the areas of irregular (non-droplet) structures are significantly smaller than the area of the equivalent circle.", "For example, for an irregular structure which is 2 pixels wide and 48 pixels high, the area is $2\\times 48= 96~\\mathrm {pixels}^2$ , but the area of the equivalent 
circle is $(\\pi /4)[(2+48)/2]^2 = 491~\\mathrm {pixels}^2$ , which is significantly larger than $96~\\mathrm {pixels}^2$ .", "The above criterion was used to identify the small and non-droplet structures, with a sample shown in Fig.", "REF (d).", "The large-scale and irregular structures were combined, procedure (3), and shown in Fig.", "REF (e).", "This image was then subtracted from the binarized Mie scattering image, procedure (4), with the results shown in Fig.", "REF (f).", "Although the results in Fig.", "REF (f) allow for identifying the majority of the droplets, the utilized algorithm automatically removes many droplets that reside inside the large-scale structures.", "In order to avoid loss of these droplets in the pre-processing of the Mie scattering images, they were further treated.", "First, the local maxima in Fig.", "REF (a) were obtained, a mean-based filtering algorithm was applied to Fig.", "REF (a) around the local maxima, and the resultant image was binarized (procedure 5), see Fig.", "REF (g).", "Then, the resultant image was multiplied by the mask in Fig.", "REF (c), procedure 6, to identify the small droplets inside the large-scale structures.", "The corresponding image is shown in Fig.", "REF (h).", "Finally, Fig.", "REF (h) was added to Fig.", "REF (f), see procedure 7, with the final reduced Mie scattering image of the droplets shown in Fig.", "REF (i).", "The inset of Fig.", "REF (i) presents a sample of the identified droplets.", "Procedures (1–7) were applied to all Mie scattering images and a labeling algorithm in MATLAB was used to calculate the number of the droplets, referred to as $n$ , in the processed Mie scattering images.", "Figure: (a) Representative raw Mie scattering image.", "(b) The binarized image in (a).", "(c) is the large scale features in (b).", "(d) small scale and elongated/non-circular structures in (b).", "(e) is the summation of (c) and (d).", "(f) is (e) subtracted from (b).", "(g) is the mean-based filtered around the local maxima of the image shown in (a) and then binarized.", "(h) is the multiplication of (c) and (g) to generate droplets binarized images in the large soot structures.", "(i) is the summation of (h) and (f)." ], [ "Shadowgraphy imaging", "The hardware for the shadowgraphy imaging includes the high-speed camera (item 6 in Fig.", "REF ) which is equipped with a Macro Sigma lens (with a focal length of 105 mm and aperture number of 2.8), an NL-360ARC Neewer LED lamp, and a Semrock FF01-433/530-25 dual bandpass filter.", "This filter was selected following the recommendations of Bennewitz et al.", "[46], [47] and is shown [45] to improve the quality of the shadowgraphy images for sooty droplets.", "A representative shadowgraphy image corresponding to the test condition of J100M0 is presented in Fig.", "REF (a).", "The dark regions in the inset of the figure are the shadow of the droplets.", "The pixel size for the shadowgraphy experiments was 68.0 $\\mu $ m. 
The effective spatial resolution was determined using the USAF 1951 target plate and was 99.2 $\\mu $ m (see Appendix B).", "In the present study, two challenges exist for identifying the droplets using the shadowgraphy technique.", "First, due to the presence of the turbulent flames and soot, the detected light intensity can significantly vary in the spray region.", "Specifically, some regions near the droplets may feature a relatively small intensity gradient; however, some regions that entail the ridges of the flame and/or soot structures may feature relatively large local intensity gradients.", "Both of these posed challenges for identifying the droplets, and the corresponding regions were treated using separate strategies.", "The second challenge is that the spray is relatively dense near the injector, which leads to overlapping droplet shadows.", "Figure: (a) Raw shadowgraphy image.", "(b) is the mask for identifying regions with large local variation of the light intensity, see the black colored region.", "(c) is the ratio of (a) to its mean-based filtered image.", "(d) is the processed image in (a) to reduce the effect of background with large intensity variation.", "(e) is the results in (d) filtered to exclude large as well as small non-droplet features and then multiplied by the mask in (b).", "(f) and (g) are the identified droplets inside the small intensity varying regions and the droplets closest to the injector, respectively.", "(h) is the logical sum of the results in (e–g).", "To help address the first challenge, regions with large variations in the background light intensity were first identified by calculating the absolute value of the difference between the pixel intensity and the local mean.", "Then, a mean-based filter and a threshold were applied to the resultant image, creating a mask shown in Fig.", "REF (b).", "To identify the droplets in the large-intensity-variation regions (see the black color in Fig.", "REF (b)), the absolute value of the raw image gradient was obtained.", "Then, the ratio of the absolute intensity gradient and the absolute mean local gradient was calculated.", "This was followed by applying a mean-based filter to the above ratio.", "This ratio was multiplied by the ratio of the raw image to its mean-based filter, with the latter ratio and the resultant image shown in Figs.", "REF (c) and (d), respectively.", "The results in Fig.", "REF (d) were then filtered by the filter developed in section REF (see procedures (2a) and (2b) in Fig.", "REF ) to identify and remove large aspect ratio structures, which are not droplets.", "The resultant image includes droplets in both small and large-intensity-variation regions.", "Since the detection of the droplets in the large-intensity-variation regions is of interest in this step, the resultant image was then multiplied by the mask in Fig.", "REF (b), with black being unity and white being zero.", "The product is shown in Fig.", "REF (e).", "In the above procedure, a relatively large threshold is applied to Fig.", "REF (d) in order to avoid false detection of large and circular soot structures.", "As a result of this, those identified in the above procedure do not include the majority of the droplets.", "Indeed, many of the droplets reside in the white region of the mask shown in Fig.", "REF (b).", "In order to identify these droplets, the ratio of the pixel intensity in the raw image to that of the local mean (which was calculated in the previous step and shown in Fig.", "REF (c)) was utilized.", 
"The resulting image was thresholded and multiplied by the mask shown in Fig.", "REF (b), with black being zero and white being unity.", "The product is shown in Fig.", "REF (f).", "A rectangular region, which is highlighted by the pink lines in Fig.", "REF (a) was considered to identify the droplets at the vicinity of the nozzle.", "Droplets within this region were identified by, first, subtracting the pixel intensities in Fig.", "REF (d) from their local mean values; and, then thresholding the resultant image.", "To ensure the overlapping droplets are not removed in the above procedure, the resultant image was not filtered by the aspect ratio filters (which were developed and discussed in processes (2a and 2b) in section REF .", "Finally, the identified droplets within the rectangular pink mask highlighted in Fig.", "REF (a) were considered, with the resultant image shown in Fig.", "REF (g).", "The results in Figs.", "REF (e), (f), and (g) were added and repeating droplets were removed (to avoid double counting) and the final image is shown in Fig.", "REF (h).", "This is the final processed shadowgraphy image from the raw image (shown in Fig.", "REF (a)) that contains all identified droplets.", "Using the post-processed shadowgraphy images, the number of droplets ($n_\\mathrm {S}$ ) as well as individual droplet diameter $d_\\mathrm {S} = \\sqrt{4A_\\mathrm {S}/\\pi }$ , with $A_\\mathrm {S}$ being individual droplet area, were obtained." ], [ "Interferometric Laser Imaging for Droplet Sizing", "Separate Interferometric Laser Imaging for Droplet Sizing was employed in the present study to characterize the spray.", "This technique has been used for measuring the droplet diameter for both reacting and non-reacting flows in the past [48], [49], [50].", "In this technique, the interference of reflected and first-order refracted rays scattered from a spherical droplet illuminated by a laser source is used to measure the droplet diameter.", "The utilized hardware is identical to those used for the Mie scattering, except, a high-precision rotational stage (see item 15 in Fig.", "REF ) was also used to adjust the angular position of the camera with respect to the laser sheet.", "The collected ILIDS images are analyzed to obtain the spray droplet diameters using [48], [49], [50].", "$d=\\dfrac{2\\lambda N}{\\alpha }\\left[\\cos (\\frac{\\theta }{2})+\\dfrac{m\\sin (\\theta /2)}{\\sqrt{m^{2}-2m\\cos (\\theta /2)+1}}\\right]^{-1}.$ In Eq.", "(REF ), $\\lambda $ is the wavelength of the incident laser light, which is 532 nm.", "$m$ is Jet A-1 index of refraction, which is taken from [51] and is 1.44.", "$\\theta $ is the angle between the normal to the camera lens and the laser sheet.", "This angle was set to $70^\\mathrm {o}$ to maximize the quality of the collected fringe patterns, following the recommendations of Sahu et al. 
[52].", "The collection angle, $\\alpha $ , depends on the lens diameter ($d_\\mathrm {l}$ , which is 60 mm) and the working distance ($L$ , which is the distance between the plane of the camera lens and the laser sheet).", "Specifically, this angle is calculated using $\\alpha =2 \\arctan [d_\\mathrm {l}/(2L)]$ .", "In the present study, $L = 240$  mm is fixed, and as a result, the collection angle is $14.3^{\\mathrm {o}}$ ($0.25~\\mathrm {rad}$ ).", "In Eq.", "(REF ), $N$ is the number of the fringe patterns, which is obtained using the following data reduction procedure.", "First, while the laser was turned off, 500 background images were collected and subtracted from the raw ILIDS images.", "Figure.", "REF (a) presents a representative ILIDS image subtracted by the background.", "Then, the raw image was binarized, see Fig.", "REF (b).", "The background subtracted image (Fig.", "REF (a)) was multiplied by the binarized image in Fig.", "REF (b).", "Then, a disk-shaped convolution function in MATLAB was applied to the obtained results, which allowed to maximize the intensity at the center of the droplets, similar to the procedure used in [49].", "The obtained convoluted image is shown in Fig.", "REF (c).", "The centers of the droplets were then obtained, with the corresponding results shown in Fig.", "REF (d).", "A representative droplet with its identified center is shown in Fig.", "REF (e).", "Variation of the light intensity along the direction normal to the fringe patterns (shown in Fig.", "REF (f)) was obtained and the Fast Fourier Transform of the intensity was used to identify the number of the fringe patterns.", "Figure REF (g) presents the centers of the identified droplets in Fig.", "REF (a) as well as the overlaid red circles with diameters scaled to reflect the size of the identified droplets.", "As discussed in Appendix B, the effective resolution of the ILIDS images was 21.4 $\\mu $ m, which allowed for detecting droplets with diameters $5 \\lesssim d \\lesssim 107~\\mathrm {\\mu }$ m. Figure: (a) Representative raw ILIDS image.", "(b) Binarized image corresponding to (a).", "(c) Convoluted image of the results in (a) multiplied by (b).", "(d) Location of the droplets centers.", "(e) and (f) are sample 2D light intensity variation and that along the direction normal to the fringe patterns, respectively.", "(g) Droplets centers and their corresponding size (the red circle diameter highlighted by 30μ30~\\mathrm {\\mu }m is to scale the diameter of the droplets identified in the figure)." 
], [ "Test Conditions", "Three conditions were tested and tabulated in Table REF .", "For all test conditions, both the total generated power and the global fuel-air equivalence ratio were kept constant at 10 kW and 0.6, respectively.", "In the table, J0M100 and J100M0 pertain to test conditions for which the fuel was either methane or Jet A-1, respectively.", "For J40M60, the mass flow rates of methane and Jet A-1 were set to produce 40$\\%$ (4 kW) and 60$\\%$ (6 kW) of the total power from Jet A-1 and methane, respectively.", "In the above calculation, the lower heating values of methane and Jet A-1 were used and set to 49853 and 43200 kJ/kg, respectively.", "The set pressure inside the liquid fuel reservoir ($P_\\mathrm {l}$ ) and the fuels ($\\dot{m}_\\mathrm {CH4}$ and $\\dot{m}_\\mathrm {Jet~ A-1}$ ) and air ($\\dot{m}_\\mathrm {air}$ ) mass flow rates are provided in the second to fifth columns of the table.", "Several combinations of the gaseous and liquid fuels flow rates were considered for measurements, however, the combination of powers generated by gas and liquid fuels for the test condition of J40M60 was selected since the corresponding flow rate of Jet A-1 (5.6 grams per minute) was the minimum liquid flow rate required to form the spray.", "Table: Test conditions.", "The total power (10 kW) and the global fuel-air equivalence ratio (0.6) were fixed for all test conditions.", "The units of P l P_\\mathrm {l} and p rms ' p^\\prime _\\mathrm {rms} are kPa and Pa, respectively.", "The unit of m ˙ CH 4\\dot{m}_\\mathrm {CH4}, m ˙ Jet A-1\\dot{m}_\\mathrm {Jet~A-1}, and m ˙ air \\dot{m}_\\mathrm {air} is grams per minute.Addressing the objective of the present study requires understanding the characteristics of the plenum pressure, flame chemiluminescence, and spray.", "These characteristics are studied in this section.", "As presented in the last column of Table REF , changing the test condition from J0M100 to J40M60 and J100M0 increases the root-mean-square (RMS) of the pressure fluctuations from 28.5 to 46.3 and 83.5, respectively.", "Figure REF presents the power spectrum densities ($PSD$ ) of the plenum pressure fluctuations for all test conditions.", "The results in the figure are stepped by a factor of 10 for clarity.", "As can be seen, the plenum pressure features large amplitude oscillations near $300 \\lesssim f \\lesssim 700$  Hz for all test conditions.", "This frequency band is highlighted by the gray shaded area in Fig.", "REF .", "Figure: The power spectrum density of the plenum pressure oscillations.", "The results are stepped by a factor of 10 for clarity.The time averaged flame chemiluminescence ($\\overline{CL}$ ) for the test conditions J0M100, J40M60, and J100M0 are presented in Figs.", "REF (a–c), respectively.", "For fully premixed flames (J0M100), the maximum mean flame chemiluminescence is positioned at $y \\approx 40$  mm.", "Adding the Jet A-1 spray and decreasing the methane flow rate displaces the mean flame chemiluminescence closer to the nozzle exit plane.", "Also, the maximum $\\overline{CL}$ increases by a factor of about 5 and 10 changing the test condition from J0M100 to J40M60 and J100M0, respectively.", "The broadband luminosity images were also collected (not shown here), and it was observed that the spray flames of test conditions J100M0 and J40M60 feature large soot emissions, especially close to the spray injector.", "Thus, the large values of $\\overline{CL}$ for J40M60 and J100M0 (compared to J0M100) as well as the smaller vertical 
position of the maximum $\\overline{CL}$ may be due to the pronounced soot formation close to the injector.", "Nonetheless, for the analyses presented here, the flame chemiluminescence is not used as a quantitative marker of the heat release rate.", "Instead, the flame chemiluminescence is used for the purposes of training and predicting the number of spray droplets data.", "Figure: Time-averaged flame chemiluminescence for (a) J0M100, (b) J40M60, and (c) J100M0.In the present study, the number of droplets inside a laser-illuminated plane ($n$ ) is studied using the Mie scattering technique.", "Additionally, using the shadowgraphy technique, the number and diameter of the droplets inside a focusing volume are investigated.", "As discussed in the following, both techniques are necessary and provide complementary understanding of the spray characteristics and/or dynamics.", "Analysis of the Mie scattering images suggests that the mean (standard deviation) of the number of droplets calculated for all frames and for the test conditions J40M60 and J100M0 are 28 (23) and 204 (45), respectively.", "Figures REF (a) and (b) present the mean of the number of droplets obtained from the Mie scattering images for the test conditions J40M60 and J100M0, respectively.", "The contours are presented in a logarithmic scale for improved clarity of presentation.", "As can be seen, for J40M60 and J100M0, the spray extends to about 60 mm downstream of the nozzle.", "J100M0 features a conical spray; however, for J40M60, the spray is mostly injected along the centerline.", "Although the spray for the test condition of J40M60 features a non-conical shape, the number of droplets can be estimated for this condition and the shape of the spray does not negatively impact such estimation.", "Figure: The mean number of droplets, n ¯(x,y)\\overline{n}(x,y) obtained from the Mie scattering measurements.", "(a) and (b) correspond to test conditions J40M60 and J100M0, respectively.The Probability Density Function (PDF) of the droplets diameters obtained from the ILIDS and shadowgraphy techniques are presented in Figs.", "REF (a) and (b), respectively.", "Generally, the values of the PDF reported in Fig.", "REF (a) are smaller than those in Fig.", "REF (b).", "This is because the shadowgraphy technique allows for detecting the droplets inside a relatively large volume (compared to ILIDS); and as a result, a large number of droplets (hence larger PDF values) are reported in Fig.", "REF (b) compared to those in Fig.", "REF (a).", "As discussed in section  as well as in Appendix B, the range of the detectable droplets diameter for ILIDS is $5~\\mathrm {\\mu m} \\lesssim d \\lesssim 107~\\mathrm {\\mu m}$ .", "For a droplet to be detected by the shadowgraphy technique, its area should be at least twice the area of an effective pixel resolution.", "Thus, the minimum detectable droplet area is $2\\times 99.2^2~(\\mu \\mathrm {m})^2= 19681~(\\mu \\mathrm {m})^2$ .", "This leads to a minimum detectable diameter of $\\sqrt{4A_\\mathrm {min}/\\pi }=158~\\mathrm {\\mu m}$ for the shadowgraphy technique.", "With the exception of $107~\\mathrm {\\mu m} \\lesssim d \\lesssim 158~\\mathrm {\\mu m}$ , and for their corresponding detectable droplet diameter range, the combination of the ILIDS and the shadowgraphy techniques allows for understanding the distribution of the droplet diameters in the present study.", "The PDF of the droplet diameter for the test conditions J40M60 and J100M0 are shown by the blue circular and red triangular data 
symbols, respectively, in Fig.", "REF .", "For droplets smaller than 107 $\\mu $ m, the most probable droplet diameter is about 22 $\\mu $ m for J100M0.", "Since the injection pressure for the test condition of J40M60 is about five times smaller than that for J100M0 (see Appendix A), the total number of the generated droplets averaged over all collected Mie scattering images is significantly smaller for J40M60 than for J100M0 (150 versus 2902); as a result, the PDF does not allow for discerning a most probable diameter for J40M60 within $5~\\mathrm {\\mu m} \\lesssim d \\lesssim 107~\\mathrm {\\mu m}$ .", "The results in Fig.", "REF (b) suggest that the PDF of the droplet diameter decreases with increasing diameter for the test condition of J100M0.", "However, for $158~\\mathrm {\\mu m} \\lesssim d \\lesssim 1000~\\mathrm {\\mu m}$ , J40M60 features a most probable droplet diameter of about 411 $\\mathrm {\\mu m}$ .", "In essence, the results presented in Fig.", "REF suggest that changing the test condition from J100M0 to J40M60 changes the most probable spray droplet diameter from about 22 to 411 $\\mathrm {\\mu m}$ .", "Figure: The probability density functions of the droplet diameter estimated using (a) the ILIDS and (b) the shadowgraphy techniques.", "Although the probability density functions generated from the ILIDS and shadowgraphy techniques provide complementary information related to the droplet characteristics, the temporal variation of such characteristics can only be inferred from the shadowgraphy technique in the present study, as the utilized laser features a maximum repetition rate of 10 Hz, which is relatively small.", "Acknowledging the minimum detectable droplet diameter from the shadowgraphy technique (158 $\\mu $ m) and using the measured density of Jet A-1 at the laboratory temperature ($\\rho _\\mathrm {l}=812.0~\\mathrm {kg/m^3}$ ), the total mass of the droplets within the focusing volume of the shadowgraphy technique, $m_\\mathrm {f}$ , can be estimated using $m_\\mathrm {f}(t) = \\rho _\\mathrm {l}\\sum _{i = 1}^{i = n_\\mathrm {S}(t)} \\frac{\\pi }{6} d^3(i),$ where $n_\\mathrm {S}(t)$ is the total detected number of droplets at time $t$ using the shadowgraphy technique.", "Figures REF (a) and (b) present the Joint Probability Density Functions (JPDF) of the total detected droplets mass versus the total number of droplets for non-reacting and reacting conditions, respectively.", "The results suggest that the mean mass of the resolved droplets for the test condition J40M60 decreases from 2.2 to 2.1 mg changing from non-reacting to reacting conditions; and, $m_\\mathrm {f}$ decreases from 3.3 to 0.9 mg changing from non-reacting to reacting conditions for J100M0.", "In addition to the droplets' evaporation (due to combustion), the above decrease is due to the change in the shape of the spray after it is lit.", "Nonetheless, the results pertaining to both non-reacting and reacting conditions suggest that there exists a positive correlation between $m_\\mathrm {f}$ and the number of droplets ($n_\\mathrm {S}$ ), both obtained from the shadowgraphy technique for droplets larger than 158 $\\mu $ m. 
Given the above positive correlation, it is speculated that provided a dominant instability exists in the temporal variation of $n_\\mathrm {S}$ , such instability may also be present in the temporal variation of the mass of the liquid fuel inside the combustor.", "Figure: The joint probability density function of the fuel mass (m f m_\\mathrm {f}) versus the number of droplets (n S n_\\mathrm {S}), both estimated based on the shadowgraphy technique.", "(a) and (b) correspond to the non-reacting and reacting conditions, respectively.In order to assess the above speculation, the power spectrum densities of the number, $PSD(n_\\mathrm {S})$ , and mass, $PSD(m_\\mathrm {f})$ , of the droplets normalized by the corresponding maxima were calculated.", "Additionally and for comparison purposes, the normalized power spectrum density of the spatially averaged flame chemiluminescence was estimated and is presented by the solid black lines in Fig.", "REF .", "The results in Figs.", "REF (a), (b and d), and (c and e) correspond to the test conditions J0M100, J40M60, and J100M0, respectively.", "In the figures, the $PSD^*$ of $n_\\mathrm {S}$ and for non-reacting and reacting conditions are shown by the light green dashed and red dashed lines, respectively, for both spray test conditions.", "The $PSD^*$ of $m_\\mathrm {f}$ and for non-reacting and reacting conditions are shown by the dark dotted-dashed green and orange dashed lines, respectively.", "The results in Fig.", "REF show that the flame chemiluminescence spectra features a dominant peak at $f \\approx 100$  Hz for perfectly premixed methane-air flames; however, adding the spray and reducing the methane flow rate removes this dominant frequency in the power spectrum densities.", "For the non-reacting and reacting sprays, both $n_\\mathrm {S}$ and $m_\\mathrm {f}$ feature large amplitude oscillations for $f \\lesssim 10$  Hz and $f\\approx 10-40$  Hz corresponding to J40M60 and J100M0, respectively.", "Similar to the normalized power spectrum densities of the number of droplets and mass of the droplets, the flame chemiluminescence $PSD^*$ also features relatively large amplitude oscillations near the above frequencies for the corresponding spray-relevant test conditions, see the gray regions in the figure.", "Figure: The normalized power spectrum densities of the spatially averaged flame chemiluminescence (solid black lines), the number of droplets for non-reacting spray (dotted-dashed light green lines), the number of droplets for reacting spray (dotted-dashed red lines), the mass of the droplets for non-reacting spray (dotted-dashed dark green lines), and the mass of the droplets for reacting spray (dashed orange lines).", "The results in (a), (b and d), and (c and e) correspond to test conditions J0M100, J40M60, and J100M0, respectively.Comparison of the results presented in Figs.", "REF  and REF show that, for the fully premixed condition (J0M100), the pressure and flame chemiluminescence feature dominant frequencies that do not match.", "Similar to the fully-premixed flames, for the spray test conditions, the pressure spectra do not follow the PSDs of the flame chemiluminescence, spray number of droplets, and droplets mass.", "However, for the spray test conditions, the chemiluminescence signal features strong oscillations at frequencies that match those of the number of droplets and the droplets mass obtained from the shadowgraphy technique for both non-reacting and reacting conditions.", "In essence, acknowledging the limitations of the 
$\\mathrm {OH^*}$ chemiluminescence and shadowgraphy techniques, the flame chemiluminescence features an instability that also exists in the fuel number of droplets and their mass for both non-reacting and reacting sprays.", "This suggests that the flame chemiluminescence oscillations are driven by the fuel injection and that there exists a coupling between the amount of the spray in the combustor and the flame chemiluminescence.", "The above analysis was performed using the shadowgraphy technique, which suffers from low imaging resolution (about 99 $\\mu $ m) but provides temporally resolved information regarding the number of droplets and their mass in the measurement volume.", "Compared to this technique, the Mie scattering technique features an improved spatial resolution (about 29 $\\mu $ m), which allows for estimating the number of droplets with a wide range of diameters inside the plane of measurements.", "The Mie scattering technique used in this study, however, suffers from low temporal resolution, since the utilized laser featured a maximum frequency of 10 Hz.", "In the following section, we will utilize the findings discussed above to develop a framework, which will be used in section  to predict the time-resolved variation of the number of droplets from the Mie scattering technique." ], [ "The data-driven framework", "In this section, a framework is developed to predict the temporally resolved spray number of droplets from time-resolved chemiluminescence and sparse Mie scattering data.", "The framework is developed for two generic signals, $g_1(t)$ and $g_2(t)$ , which correspond to the time-resolved and spatially averaged flame chemiluminescence as well as the number of droplets (obtained from the sparse Mie scattering measurements).", "In section , it was discussed that the mass and the number of the droplets obtained from the shadowgraphy technique feature large amplitude oscillations at frequencies matching those of the chemiluminescence oscillations.", "In this section, it is assumed such characteristic can be extended to the number of droplets identified by the Mie scattering technique.", "Specifically, it is assumed that $g_1(t)$ and $g_2(t)$ feature large amplitude oscillations at a matching frequency.", "The validity of this assumption for prediction of $g_2(t)$ is later assessed in section .", "$g_1(t)$ and $g_2(t)$ were formulated as $g_1(t) = \\sin (2\\pi f t)+B_{g_1} \\mathcal {R}_1(t),$ $g_2(t) = \\sin (2\\pi f t)+B_{g_2} \\mathcal {R}_2(t).$ In Eqs.", "(REF ) and (REF ), $f$ is the matching frequency at which both $g_1(t)$ and $g_2(t)$ feature relatively large amplitude oscillations.", "This frequency was set to 31.8 Hz and corresponds to the test condition of J100M0.", "In Eqs.", "(REF ) and (REF ), $\\mathcal {R}_1$ and $\\mathcal {R}_2$ are random functions that vary between -1 and 1, with amplitudes of $B_{g_1}$ and $B_{g_2}$ , respectively.", "These functions are included in the formulations of $g_1(t)$ and $g_2(t)$ to simulate deviations from perfect sinusoidal oscillations of $g_1(t)$ and $g_2(t)$ , aiming to reflect the characteristics of the experimentally measured data.", "In Eqs.", "(REF ) and (REF ), a relatively large range of variation (0–2) was considered for $B_{g_1}$ and $B_{g_2}$ to study the performance of the developed framework.", "The power spectrum densities of $g_1$ (solid curves) and $g_2$ (dotted curves) for $B_{g_1}=B_{g_2}=0,~0.5,~1,~1.5$ , and 2 were calculated and presented in Fig.", "REF .", "The time period utilized to obtain 
these power spectrum densities was considered to be 0.5 s, similar to the duration of the data collection for the separate chemiluminescence measurements.", "The results in Fig.", "REF are stepped by $10^5$ for clarity purposes.", "As can be seen, for all tested background amplitudes, the PSDs of $g_1$ and $g_2$ feature relatively large amplitudes at the matching frequency.", "Figure: The power spectrum densities of g 1 g_1 (solid curves) and g 2 g_2 (dotted curves) for several background values of the random fluctuations.", "For the analyses presented in this section, two numbers of training datasets ($p = 19$ and 199) were considered.", "$p = 19$ corresponds to the simultaneous experimental measurements performed in this study, with the acquisition timing shown in Fig.", "REF .", "For $p=19$ , 20 datasets were collected: 19 datasets are used for the training and one dataset is used for the prediction.", "A number of datasets one order of magnitude larger is also considered to study the prediction accuracy of the framework: $p= 199$ for training and one dataset for prediction.", "The time duration for generating $g_1(t)$ and $g_2(t)$ corresponds to that shown in Fig.", "REF and equals $p\\Delta t_\\mathrm {M}+\\Delta t_\\mathrm {CL}$ , which is the summation of the time periods $p\\Delta t_\\mathrm {M}$ (time period between the $p+1$ datasets), $\\Delta t_\\mathrm {CL}/2$ at the beginning of the data collection, and $\\Delta t_\\mathrm {CL}/2$ at the end of the data collection, with $\\Delta t_\\mathrm {M}=5$  s and $\\Delta t_\\mathrm {CL} = 50~\\mathrm {ms}$ (see Fig.", "REF ).", "Thus, for prediction purposes, the time durations for generating the $g_1(t)$ and $g_2(t)$ signals are 95.05 and 995.05 s for $p = 19$ and 199, respectively.", "Following the simultaneous measurements performed in this study (see Fig.", "REF ), the values of $g_2(t)$ at $t_0+(i-1) \\Delta t_\\mathrm {M}$ are used for prediction purposes, similar to the Mie scattering data, which are experimentally available at discrete times.", "$t_0$ is a reference time, which was set as half of the duration of the chemiluminescence datasets ($t_0 = \\Delta t_\\mathrm {CL}/2 = 25~\\mathrm {ms}$ ).", "$i$ corresponds to the $i^\\mathrm {th}$ dataset and changes from 1 to $p+1$ .", "Since the chemiluminescence data is mimicked by $g_1(t)$ , its variation is known at $t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau $ , with $\\tau $ being a time-lag and $-25~\\mathrm {ms} \\le \\tau \\le 25~\\mathrm {ms}$ , see Fig.", "REF .", "Substituting $t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau $ and $t_0+(i-1) \\Delta t_\\mathrm {M}$ in Eqs.", "(REF ) and (REF ) for time, it is obtained that $g_1(t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau ) = \\sin \\left(2\\pi f (t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau ) \\right)+B_{g_1} \\mathcal {R}_1\\left(t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau \\right),$ $g_2(t_0+(i-1) \\Delta t_\\mathrm {M}) = \\sin \\left(2\\pi f (t_0+(i-1) \\Delta t_\\mathrm {M}) \\right)+B_{g_2} \\mathcal {R}_2\\left(t_0+(i-1) \\Delta t_\\mathrm {M} \\right).$ Since $\\Delta t_\\mathrm {M}$ and $t_0$ are fixed in the present study, $g_1(t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau )$ and $g_2(t_0+(i-1) \\Delta t_\\mathrm {M})$ are presented as $g_1(i,\\tau )$ and $g_2(i)$ , respectively.", "Thus, Eqs.", "(REF ) and (REF ) can be formulated as $g_1(i,\\tau ) = \\sin \\left(2\\pi f (t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau ) \\right)+B_{g_1} \\mathcal {R}_1\\left(t_0+(i-1) \\Delta t_\\mathrm {M}+\\tau \\right),$ $g_2(i) = \\sin \\left(2\\pi f (t_0+(i-1) \\Delta 
t_\\mathrm {M}) \\right)+B_{g_2} \\mathcal {R}_2\\left(t_0+(i-1) \\Delta t_\\mathrm {M} \\right).$ The time-lag in Eq.", "(REF ) is a key parameter for the prediction framework developed here.", "First, for a given $\\tau $ , the values of $g_2(i)$ versus $g_1(i,\\tau )$ (with $i \\in [1,2,\\cdots ,p]$ ) were calculated.", "Then, for each value of $\\tau $ , least-squares fits to the above variations were obtained, with the corresponding linear fit shown in Fig.", "REF (a).", "This fit relates the values of $g_1$ measured at $t+\\tau $ to $g_2$ measured at $t$ .", "The 3D time-lag based formulation is given by $L(g_1(t+\\tau ),\\tau ) = a(\\tau )+b(\\tau )g_1(t+\\tau ).$ In Eq.", "(REF ), $a(\\tau )$ and $b(\\tau )$ are the intercept and the slope of the linear fit at a given time-lag.", "In Fig.", "REF (a), the 3D surface, $L(g_1,\\tau )$ , as well as the contours of $L(g_1,\\tau )$ versus $\\tau $ are colored based on the values of $L(g_1,\\tau )$ .", "Please note that, in the analyses, $\\tau $ was varied in steps of $0.1$  ms to generate $L(g_1,\\tau )$ ; however, only for clarity of presentation, $\\tau $ was varied in steps of 1 ms to produce the 3D surface shown in Fig.", "REF (a).", "Also, for the results presented in Fig.", "REF , $B_{g_1} = B_{g_2} = 0.5$ and $p =19$ .", "Variations of $g_2(i)$ versus $g_1(i,\\tau )$ for $\\tau = -T/2$ , 0, and $T/2$ are presented in Fig.", "REF (b–d), respectively.", "Here, $T$ is one period of oscillations and $T/2 = 1/(2f) = 15.6$  ms.", "The results in Fig.", "REF (c) suggest that the slope of the linear fit at $\\tau = 0$ is positive; however, it becomes negative at $\\tau = -15.6$ and 15.6 ms.", "In fact, the slope of the linear fit varies as a cosine-shaped function, which is due to the time-lag between $g_1(t+\\tau )$ and $g_2(t)$ and the sinusoidal functions in Eqs.", "(REF ) and (REF ).", "Figure: (a) The 3D variation of L(g 1 ,τ)L(g_1,\\tau ).", "(b–d) are g 2 (i)g_2(i) versus g 1 (i,-T/2)g_1(i,-T/2), g 1 (i,0)g_1(i,0), and g 1 (i,T/2)g_1(i,T/2), respectively.", "The circular data points in (b–d) correspond to p=19p =19 datasets and the dotted-dashed lines are the linear fits.", "After estimating $L(g_1,\\tau )$ (which is a time-lag-dependent linear fit for the estimation of $g_2(i)$ ), the root-mean-square-error ($RMSE$ ) was calculated to quantify the accuracy in predicting $g_2(i)$ using $g_1(i,\\tau )$ at $t = t_0 + (i-1) \\Delta t_\\mathrm {M} + \\tau $ .", "$RMSE$ is given by $RMSE(\\tau ,p) = \\sqrt{\\frac{\\sum _{i = 1}^{i = p} \\left[g_2(i)-L(g_1,\\tau )\\right]^2}{p}}.$ Figure REF presents the variation of $RMSE$ versus $\\tau $ for $p = 19$ (first row) and $p = 199$ (second row).", "The first to fifth columns pertain to $B_{g_1} = B_{g_2} = 0$ , 0.5, 1, 1.5, and 2.0, respectively.", "The time-lag within $-1/(2f) < \\tau < 1/(2f)$ at which $RMSE$ is minimized is referred to as $\\tau ^*$ .", "This is the time-lag at which $L(g_1,\\tau ^*)$ provides the most accurate prediction of $g_2$ .", "Thus, the following equation can be used for approximating $g_2$ .", "$g_2(t) \\approx L(g_1,\\tau ^*) = a(\\tau ^*)+b(\\tau ^*)g_1(t+\\tau ^*),$ where $a(\\tau ^*)$ and $b(\\tau ^*)$ are the intercept and the slope of the linear fit at $\\tau =\\tau ^*$ .", "In Fig.", "REF , $\\tau = \\pm 1/(2f)$ are shown by the solid blue lines for clarity.", "$\\tau ^*$ for all examined background values and for both $p=19$ and $p=199$ are obtained and shown by the red dashed lines in Fig.", "REF .", "For $g_1$ and $g_2$ used in Eqs.", "(REF ) and (REF ), the true value 
of $\\tau ^*$ is zero.", "Figure: Variation of RMSE(τ,p)RMSE(\\tau ,p) for 19 (first row) and 199 (second row) training datasets, respectively.", "The first to fifth columns pertain to B g 1 =B g 2 =0B_{g_1} = B_{g_2} = 0, 0.5, 1, 1.5, and 2.0, respectively.", "The black dotted-dashed curves are the right-hand-side of Eq.", "(S.8) from Appendix C. The red dashed line corresponds to the time-lag at which RMSERMSE is minimized.", "The blue dashed lines highlight τ=±1/(2f)\\tau = \\pm 1/(2f).The results in Fig.", "REF show that, for $p=19$ , the correct value of $\\tau ^*$ is predicted for $B_{g_1}=B_{g_2}=0$ and 0.5.", "However, for $B_{g_1}=B_{g_2} \\ge 1$ , the time-lag that leads to the minimum value of $RMSE$ is estimated to be non-zero for $p=19$ .", "It can be seen that increasing the number of training datasets from $p = 19$ to $p=199$ increases the background value for which $\\tau ^*$ is correctly estimated from 0.5 to 1.", "However, for $p = 199$ , the correct value of $\\tau ^*$ could not be obtained for $B_{g_1}=B_{g_2} = 1.5$ and 2.0.", "For $p =\\infty $ , Eqs.", "(REF , REF , and REF ) were utilized and an analytical formulation for $RMSE$ versus $\\tau $ was obtained, see Appendix C. The variations of $RMSE(\\tau ,\\infty )$ are shown by the dotted-dashed black curves in Fig.", "REF .", "As can be seen, independent of the background value, the correct value of the time-lag can be estimated as the number of training datasets approaches infinity.", "It is also obtained that the estimated $RMSE$ for $p=19$ (which is relevant to the experiments) is close to that for $p=\\infty $ provided $B_{g_1} = B_{g_2} \\le 0.5$ .", "Though not presented here, the performance of the framework for periodic functions with non-zero time-lag was examined, and it was obtained that the developed framework can allow for estimating the true value of $\\tau ^*$ .", "Using $\\tau ^*$ , $g_1(t)$ , and Eq.", "(REF ), the predicted temporal variation of $g_2(t)$ , which is $L(g_1,\\tau ^*)$ was obtained.", "Figure REF presents the temporal variations of actual $g_2(t)$ and predicted $g_2(t)$ in solid green and dashed black lines, respectively.", "The first and second rows pertain to relatively small ($p = 19$ ) and large ($p = 199$ ) numbers of training datasets.", "The results in the first to fifth columns pertain to increasing values of the background signal amplitude.", "$g_1(t)$ and $g_2(t)$ were generated for a relatively long period of time, and for presentation purposes, the results in Fig.", "REF are only shown for the time period of 50 ms.", "Similar results are obtained for other time durations.", "Overlaid on the figure are the RMS of the predicted signal subtracted from the actual signal.", "As can be seen in Fig.", "REF, for a given value of the background, increasing $p$ decreases the RMS of the differences between actual $g_2(t)$ and the predicted $g_2(t)$ .", "It can also be seen that, for $B_{g_1}=B_{g_2}=0$ and 0.5, the prediction of the framework is close to the actual signal.", "It is important to highlight that, for the large background amplitude values ($B_{g_1}=B_{g_2} \\ge 1$ ) and for the smaller value of the training dataset, the predicted signal is time lagged, leading to incorrect prediction of $g_2(t)$ , compare the black and green variations in Fig.", "REF (c–e).", "This is due to incorrect estimation of $\\tau ^*$ , as shown in Fig.", "REF (c–e) for large background values.", "Figure: The actual (solid green) versus predicted (dashed black) values of g 2 (t)g_2(t).", "The 
first and second rows pertain to 19 and 199 training datasets, respectively.", "The first to fifth columns are for amplitudes of the random background signals equal to 0, 0.5, 1, 1.5, and 2.0, respectively.", "The results discussed above suggest that the presence of a large-amplitude background signal can lead to an incorrect prediction of $\tau ^*$ and, as a result, an incorrect prediction of $g_2$ .", "In order to study the effect of inaccurately estimating $\tau ^*$ on the prediction of $g_2$ , all possible combinations of $p$ training datasets from the $p+1$ available datasets were considered to predict $g_2(i)$ using Eq.", "(REF ).", "That is, in addition to using datasets $[1, 2, \cdots , p]$ for predicting $g_2(i)$ for the $(p+1)^\mathrm {th}$ dataset, the remaining combinations, for example the datasets $[1, 2, \cdots , p-1, p+1]$ , were considered to predict $g_2(i)$ for the $p^\mathrm {th}$ (excluded) dataset.", "Predicted versus actual values of $g_2(i)$ for $i=1$ to $p+1$ are presented in Fig.", "REF .", "Also presented in the figure is the RMS of the difference between the predicted and actual results.", "As can be seen, increasing $p$ from 19 to 199 decreases the $RMS$ for $B_{g_1}=B_{g_2} \le 1$ .", "For $B_{g_1}=B_{g_2} \ge 1.5$ , however, the estimated $RMS$ values for both $p = 19$ and $p = 199$ are relatively large.", "Overall, the linear-regression training model utilized in the present study suggests that, for signals with characteristics similar to those discussed in this section, $p=19$ datasets are sufficient for predicting the twentieth dataset at random background amplitudes of up to about 50% of the oscillation amplitude, which is a relatively large background.", "The model developed here is used in the next section for prediction purposes.", "Figure: Predicted versus actual values of $g_2(i)$ for $p = 19$ (first row) and $p=199$ (second row).", "The first to fifth columns pertain to $B_{g_1} = B_{g_2} = 0$ , 0.5, 1.0, 1.5, and 2.0, respectively."
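To make the sequence of operations just described concrete — fit a least-squares line relating $g_1(i,\tau )$ to $g_2(i)$ for each candidate time-lag, evaluate $RMSE(\tau ,p)$ , select the minimizing $\tau ^*$ within $\pm 1/(2f)$ , and predict $g_2(t)\approx a(\tau ^*)+b(\tau ^*)g_1(t+\tau ^*)$ — a minimal Python sketch is given below. The oscillation frequency (from $T/2 = 15.6$  ms), the 0.1 ms lag step, $t_0 = 0.025$  s, $\Delta t_\mathrm {M} = 5$  s, and $p = 19$ follow values quoted in the text; the uniform $[-1,1]$ model for the backgrounds $\mathcal {R}_1$ and $\mathcal {R}_2$ (consistent with the $B^2/3$ variance terms of Appendix C) and all function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Minimal sketch of the time-lag linear-regression framework described above.
# Signal model and helper names are assumptions made for illustration only.
rng = np.random.default_rng(0)
f = 1.0 / (2 * 15.6e-3)       # oscillation frequency from T/2 = 15.6 ms
t0, dt_M, p = 0.025, 5.0, 19  # first sampling time, sampling separation, datasets

def g1(t, B=0.5):             # input signal: sinusoid + uniform background
    return np.sin(2 * np.pi * f * t) + B * rng.uniform(-1, 1, np.shape(t))

def g2(t, B=0.5):             # objective signal, sampled sparsely at t0 + (i-1)*dt_M
    return np.sin(2 * np.pi * f * t) + B * rng.uniform(-1, 1, np.shape(t))

t_train = t0 + np.arange(p) * dt_M            # measurement times of g2(i)
y = g2(t_train)                               # sparse objective samples

taus = np.arange(-1 / (2 * f), 1 / (2 * f), 1e-4)   # candidate time-lags
a, b, rmse = (np.empty_like(taus) for _ in range(3))
for k, tau in enumerate(taus):
    x = g1(t_train + tau)                     # g1(i, tau)
    b[k], a[k] = np.polyfit(x, y, 1)          # slope b(tau) and intercept a(tau)
    rmse[k] = np.sqrt(np.mean((y - (a[k] + b[k] * x)) ** 2))

k_star = int(np.argmin(rmse))                 # tau* minimizing RMSE(tau, p)
tau_star = taus[k_star]

def predict_g2(t):
    """Predicted objective signal: g2(t) ~ a(tau*) + b(tau*) * g1(t + tau*)."""
    return a[k_star] + b[k_star] * g1(t + tau_star)
```

Since the two synthetic signals above have zero true lag, $\tau ^*$ returned by this sketch should be close to zero for small background amplitudes, mirroring the behaviour reported for $B_{g_1}=B_{g_2}\le 0.5$ .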
], [ "Results", "The framework developed in section  is employed for the time-resolved prediction of the spray number of droplets within the plane of illumination using the time-resolved and spatially averaged flame chemiluminescence ($\\overline{\\overline{CL}}$ ) and sparse Mie scattering data.", "Following section , $g_1(t) = \\overline{\\overline{CL}}(t)$ and $g_2(t) = n(t)$ .", "First, the 3D variations of $L(g_1,\\tau )$ were obtained and presented in Figs.", "REF (a) and (b) for the test conditions J40M60 and J100M0, respectively.", "Please note that 19 training datasets are used to perform this analysis and the 20$^\\mathrm {th}$ dataset is utilized for the prediction.", "For J100M0, the 3D surface presented in Fig.", "REF (b) and the contours of $L(g_1,\\tau )$ feature a nearly periodic variation, which is similar to that shown in Fig.", "REF .", "However, for J40M60, the results in Fig.", "REF (a) do not feature the periodic variations.", "This is because the matching frequency band of the spatially averaged flame chemiluminescence and spray is smaller than 10 Hz, which corresponds to a time period larger than 100 ms, and as a result, one complete cycle of oscillation cannot be observed in the 3D presentation.", "Nonetheless, as will be shown and discussed later, the above framework can allow for predicting the number of droplets for both test conditions J40M60 and J100M0.", "Figure: L(CL ¯ ¯,τ)L(\\overline{\\overline{CL}},\\tau ) for (a) J40M60 and (b) J100M0, respectively.Figure REF (a) and (b) present the variation of $RMSE$ obtained from Eq.", "(REF ) versus $\\tau $ for J40M60 and J100M0, respectively.", "$\\tau ^*$ for both conditions are identified by the red dashed lines.", "Specifically, $\\tau ^*$ equals 13.5 ms and -13.9 ms for J40M60 and J100M0, respectively.", "Following the framework discussed in section , the estimated values of $\\tau ^*$ in Figs.", "REF (a) and (b) were used to obtain the time-resolved variation of $n(t)$ for the 20$^\\mathrm {th}$ dataset from Eq.", "(REF ).", "These variations are presented in Figs.", "REF (c) and (d) for J40M60 and J100M0, respectively.", "In these figures, $t=0$  ms is set as the time at which the Mie scattering data ($n$ ) was collected for the 20$^\\mathrm {th}$ dataset.", "From Eq.", "(REF ), the argument of $g_1$ , which is $t+\\tau ^*$ , varies between -25 ms and 25 ms.", "Thus, the argument of $g_2$ , which is $t$ , varies between $-25-\\tau ^*$  ms and $25-\\tau ^*$  ms. 
Also overlaid on Figs.", "REF (c) and (d), using the green circular data points, are the actual values of $n$ .", "As can be seen, the framework developed in section  can accurately predict the actual value of $n$ at $t=0$  ms for both test conditions.", "Figure: (a) and (b) are variations of $RMSE(\tau ,p=19)$ for J40M60 and J100M0, respectively.", "The time-lag at which $RMSE$ is minimized is shown in (a) and (b) by the red dashed lines.", "(c) and (d) are the predicted (black dashed lines) variation and the measured (green circular data symbol) value of the spray number of droplets for the $20^\mathrm {th}$ dataset of test conditions J40M60 and J100M0, respectively.", "In order to assess the accuracy of the predictions for the number of droplets, all 20 collected datasets were used, and the predicted values of $n$ versus the corresponding measured values are presented by the blue circular (for J40M60) and red triangular (for J100M0) data symbols in Fig.", "REF .", "The procedure followed to obtain the results in Fig.", "REF is identical to that used for obtaining those in Fig.", "REF .", "The corresponding values of the RMS of the difference between the measured and predicted data are also presented in the figure.", "The RMS values are about 13 and 51 droplets for J40M60 and J100M0, respectively.", "In section , the number of droplets and the mass of the droplets were estimated using the shadowgraphy technique, and it was shown that the PSDs of these parameters feature large-amplitude oscillations at frequencies close to those of the flame chemiluminescence.", "Thus, it was assumed that the number of droplets measured inside a plane using the Mie scattering data features frequencies matching those of the flame chemiluminescence.", "Comparison of the actual and predicted numbers of fuel droplets (see Fig.", "REF ) suggests that the above assumption, along with the developed framework, allows for the prediction of the number of fuel droplets.", "In essence, the discussion presented here shows that, using the knowledge of the coupling between the spray and the flame chemiluminescence along with the framework developed here, the time-resolved variation of the spray can be obtained from the corresponding sparsely measured data.", "In our future work, the framework developed and tested above will be used to provide an improved understanding of the dynamics of the 2D measured flame structure and its interaction with the fuel droplets.", "Figure: Predicted versus measured number of droplets, $n(i)$."
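The assessment over all 20 datasets — each dataset is excluded in turn, the time-lag regression is trained on the remaining 19, and the held-out droplet count is compared against its prediction — can be summarized by a short leave-one-out loop. In the sketch below, the chemiluminescence traces (on a $\pm 25$  ms window, as in the text) and the droplet counts are synthetic placeholders, so only the evaluation procedure, not the data or the resulting RMS values, reflects the experiments.

```python
import numpy as np

# Leave-one-out check of the time-lag regression, mirroring the procedure used
# above to compare predicted and measured droplet counts over the 20 datasets.
# The synthetic traces and counts below are placeholders, not measured data.
rng = np.random.default_rng(1)
f, n_sets = 1.0 / (2 * 15.6e-3), 20
t_grid = np.linspace(-0.025, 0.025, 501)        # +/- 25 ms of CL data per dataset

phases = rng.uniform(0, 2 * np.pi, n_sets)
cl = np.sin(2 * np.pi * f * t_grid[None, :] + phases[:, None])   # CL(t) per dataset
n_meas = 200.0 + 80.0 * np.sin(phases)          # droplet count at t = 0 per dataset

taus = np.arange(-1 / (2 * f), 1 / (2 * f), 1e-4)

def fit_and_predict(train_idx, test_idx):
    best = (np.inf, 0.0, 0.0, 0.0)              # (rmse, a, b, tau)
    for tau in taus:
        x = np.array([np.interp(tau, t_grid, cl[k]) for k in train_idx])
        y = n_meas[train_idx]
        b, a = np.polyfit(x, y, 1)
        err = np.sqrt(np.mean((y - (a + b * x)) ** 2))
        if err < best[0]:
            best = (err, a, b, tau)
    _, a, b, tau_star = best
    return a + b * np.interp(tau_star, t_grid, cl[test_idx])

preds = np.array([fit_and_predict([k for k in range(n_sets) if k != j], j)
                  for j in range(n_sets)])
print("leave-one-out RMS =", round(float(np.sqrt(np.mean((preds - n_meas) ** 2))), 1))
```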
], [ "Conclusions", "The temporal variation of Jet A-1 spray number of droplets inside a plane was predicted using the time-resolved flame chemiluminescence, sparse Mie scattering, as well as a physics-informed data-driven framework, which was developed in this study.", "The spray flames were characterized using separate flame chemiluminescence, interferometric laser imaging for droplet sizing, shadowgraphy, plenum pressure measurements.", "Also, simultaneous flame chemiluminescence, Mie scattering, and pressure data were collected for 20 datasets for the purposes of the above framework development and for prediction.", "For the simultaneous measurements, the acquisition frequency of the pressure, flame chemiluminescence, and Mie scattering were set to 100000, 10000, and 0.2 Hz, respectively.", "A gas turbine model combustor was retrofitted to operate with both Jet A-1 and a mixture of methane and air.", "The combustor was operated at a fixed power of 10 kW.", "The flow rates of methane, Jet A-1, and air were adjusted to generate test conditions corresponding to: (i) methane and air perfectly premixed flames, (ii) Jet A-1 spray flames, and (iii) dual fuel flames with 40% and 60% of the power generated by Jet A-1 and methane, respectively.", "For all test conditions, the plenum pressure features broadband oscillations near 300 to 700 Hz, which do not match those of the flame chemiluminescence.", "Specifically, for perfectly premixed flames, the spatially averaged flame chemiluminescence features a dominant frequency at about 100 Hz, which reduces to $\\lesssim $  10 Hz and 10–40 Hz for the dual fuel and Jet A-1 flames, respectively.", "Analysis of both non-reacting and reacting shadowgraphy images confirmed that the number and mass of the droplets oscillations feature dominant frequencies that match those of the flame chemiluminescence both for dual fuel and Jet A-1 spray flames.", "This suggested that the flame chemiluminescence is driven by fuel injection instability.", "A time-lag and linear regression-based analysis was used to develop a framework that allows to predict time-resolved variation of an objective signal using a time-resolved input signal and sparse information from the objective signal.", "In the present study, the objective and input signals are the number of spray droplets measured inside a plane from the Mie scattering and the spatially averaged flame chemiluminescence, respectively.", "The framework was developed assuming the objective and input signals feature relatively large amplitude oscillations at a matching frequency.", "In the developed framework, first, linear fits were used to obtain relations between the flame chemiluminescence and spray number of droplets for several time-lags.", "Then, the time-lag at which the error minimizes was obtained.", "Finally, the time-lag and the slope of the fits were used to predict the temporal variation of the spray number of droplets.", "Of importance for the development of the framework is the number of training datasets, whose influence on the accuracy of predictions was assessed.", "The developed framework was then used to predict the time-resolved variation of the number of spray droplets inside a plane.", "It was shown that the number of droplets predicted by the developed framework matches relatively well the measured number of droplets.", "The framework developed and assessed in the present study allows for utilizing temporally resolved chemiluminescence data and sparse number of droplets to predict the temporal 
variation of the spray number of droplets.", "Such information is of importance for spray characterization and understanding the coupling between the spray and the flame.", "For experiments that the high-speed spray data is not available, the developed framework in this study can be used for predicting the missing information." ], [ "Acknowledgments", "The authors are grateful for the financial support from the Natural Sciences Engineering Research Council of Canada and Zentek through the Alliance grant ALLRP 567111-21 as well as MITACS and Machinery Analytics through grant IT25776.", "The authors acknowledge KF Aerospace for providing Jet A-1.", "For the Jet A-1 flow rate calibration, the combustion chamber shown in Fig.", "REF was removed, the burner was flipped vertically, and it was connected to a sealed container, whose weight was monitored.", "Care was taken not to alter the fuel delivery system during this procedure.", "Then, the dual valve pressure-controller (item 3 in Fig.", "REF ) was used to set the nitrogen pressure to several values ranging from about 0 to 345 kPa.", "For each pressure, the injector operated for 300 s and the weight of the collected Jet A-1 was measured.", "Using the measured density of Jet A-1 (812.0 $\\mathrm {kg/m^3}$ ), the volume of the collected liquid was obtained (in liters) and divided by the duration (300 s) to calculate the fuel volume flow rate ($\\dot{Q}$ ).", "Figure REF presents the variation of $\\dot{Q}$ versus the vessel pressure.", "J40M60 and J100M0 test conditions are also overlaid by the solid blue circle and red triangular symbols, respectively.", "It was obtained that the vessel gauge pressures of 18.4 and 107.1 kPa lead to $\\dot{Q} = 6.8$ and 17.1 cubic centimeters per minute (5.6 and 13.9 $\\mathrm {gr/min}$ as tabulated in Table REF ), respectively.", "Figure: The relation between the volumetric flow rate of Jet A-1 and the liquid fuel vessel pressure.Similar to [53], [54], the USAF 1951 target plate was used to determine the spatial resolution of the optical diagnostics used in this study.", "Figure REF shows the image of the USAF plate captured by the flame chemiluminescence, Mie scattering, shadowgraphy, and ILIDS diagnostics.", "The pixel resolution for the flame chemiluminescence, Mie scattering, shadowgraphy, and ILIDS measurements were 68.4, 28.8, 68.0, and 21.4 $\\mathrm {\\mu }$ m, respectively.", "These values were converted to the number of line pairs per millimeter and are shown by the red, blue, yellow, and green dotted-dashed lines in Fig.", "REF .", "Following Refs.", "[53], [54], the image contrast was defined as $C=\\frac{I_\\mathrm {max}-I_\\mathrm {min}}{I_\\mathrm {max}+I_\\mathrm {min}},$ where $I_\\mathrm {max}$ and $I_\\mathrm {min}$ are the maximum and minimum light intensities acquired across each group of the USAF 1951 lines.", "$C$ was calculated for each group of the USAF 1951 lines using the chemiluminescence, Mie scattering, shadowgraphy, and ILIDS imaging systems, with the corresponding results shown in Fig.", "REF by the red square, blue circle, yellow triangular, and green diamond shape data symbols, respectively.", "Similar to Refs.", "[53], [54], $C=0.2$ was used for determining the effective spatial resolution of the diagnostics.", "These effective resolutions are shown by the dashed lines in the figure.", "As shown in Fig.", "REF , the effective spatial resolutions are 198.4 $\\mathrm {\\mu }$ m and 99.2 $\\mathrm {\\mu }$ m for the flame chemiluminescence and shadowgraphy imaging systems, 
respectively.", "However, for the Mie scattering and ILIDS, $C$ is larger than 0.2; and, the effective resolutions of these imaging techniques are taken to be identical to the corresponding pixel resolutions (28.8 $\\mathrm {\\mu }$ m for the Mie scattering and 21.4 $\\mathrm {\\mu }$ m for the ILIDS).", "Figure: USAF 1951 images acquired using (a) chemiluminescence, (b) Mie scattering, (c) shadowgraphy, and (d) ILIDS diagnostics.Figure: Contrast and resolution for the utilized optical diagnostics.For the ILIDS measurements, the above argument can be used to identify the range of the detectable droplets diameter.", "In our ILIDS measurements, the identified fringe patterns feature a diameter of about 2 mm, with a representative pattern shown in Fig.", "REF .", "Thus, the maximum number of fringes that can be resolved in the ILIDS experiments is $2~\\mathrm {mm}/21.4~\\mu \\mathrm {m} \\approx 45$ .", "Using Eq.", "(REF ) along with the minimum number of resolved fringes (which is 2) as well as the maximum number of fringes, ILIDS allows for resolving droplets with diameter $4.8 \\le d \\le 107.2 ~\\mathrm {\\mu }$ m. This means that the droplets with diameter larger than 107.2 cannot be detected using the ILIDS technique.", "For discrete time steps, Eq.", "( REF) can be written as $L(g_1(i,\\tau ),\\tau ) = a(\\tau )+b(\\tau )g_1(i,\\tau ).$ Using Eq.", "( REF) and the Right-Hand-Side (RHS) of Eq.", "(REF ), $RMSE^2$ can be calculated from $RMSE^2(\\tau ,p)=\\frac{1}{p}\\sum _{i=1}^{i=p}[g_2(i)-a(\\tau )-b(\\tau )g_1(i,\\tau )]^2.$ Then, substituting $g_1(i,\\tau )$ and $g_2(i)$ from Eqs.", "(REF and REF ) in Eq.", "(REF ), it is obtained that $RMSE^2(\\tau ,p) = \\frac{1}{p} \\sum _{i=1}^{i=p} [\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M})) +B_{g_2}\\mathcal {R}_2(t_0+(i-1)\\Delta t_\\mathrm {M}) -a(\\tau )\\\\ -b(\\tau )\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau ))-b(\\tau )B_{g_1}\\mathcal {R}_1(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau )]^2.$ Expanding the second order polynomial in the above equation, it can be shown that $RMSE^2(\\tau ,p) = \\underbrace{\\frac{\\sum _{i=1}^{i = p} \\sin ^2(2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}))}{p}}_{Term~1} + \\underbrace{\\frac{\\sum _{i=1}^{i = p} B_{g_2}^2\\mathcal {R}_2^2(t_0+(i-1)\\Delta t_\\mathrm {M})}{p}}_{Term~2} + \\\\ \\underbrace{\\frac{\\sum _{i=1}^{i = p} a^2(\\tau )}{p}}_{Term~3} + \\underbrace{\\frac{\\sum _{i=1}^{i = p} b^2(\\tau )\\sin ^2\\left[2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau )\\right]}{p}}_{Term~4} + \\underbrace{\\frac{\\sum _{i=1}^{i = p} b^2(\\tau )B^2_{g_1}{\\mathcal {R}}_1^2(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau )}{p}}_{Term~5}+ \\\\ \\underbrace{\\frac{\\sum _{i=1}^{i = p} 2B_{g_2}\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M})) \\mathcal {R}_2(t_0+(i-1)\\Delta t_\\mathrm {M})}{p}}_{Term~6} - \\underbrace{\\frac{\\sum _{i=1}^{i = p} 2a(\\tau )\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}))}{p}}_{Term~7}+ \\\\ -\\underbrace{\\frac{\\sum _{i=1}^{i = p} 2b(\\tau )\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}))\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau ))}{p}}_{Term~8} + \\\\ -\\underbrace{\\frac{\\sum _{i=1}^{i = p} 2b(\\tau )B_{g_1}\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}))\\mathcal {R}_1(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau )}{p}}_{Term~9}+ \\\\ -\\underbrace{\\frac{\\sum _{i=1}^{i = p} 2a(\\tau )B_{g_2}\\mathcal {R}_2(t_0+(i-1)\\Delta t_\\mathrm {M})}{p}}_{Term~10}+ \\\\ -\\underbrace{\\frac{\\sum _{i=1}^{i = p} 2b(\\tau )B_{g_2}\\mathcal {R}_2(t_0+(i-1)\\Delta t_\\mathrm {M})\\sin 
(2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau ))}{p}}_{Term~11}+ \\\\ -\\underbrace{\\frac{\\sum _{i=1}^{i = p} 2b(\\tau )B_{g_1}B_{g_2}\\mathcal {R}_2(t_0+(i-1)\\Delta t_\\mathrm {M})\\mathcal {R}_1(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau )}{p}}_{Term~12}+ \\\\ \\underbrace{\\frac{\\sum _{i=1}^{i = p} 2a(\\tau )b(\\tau )\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau ))}{p}}_{Term~13}+ \\underbrace{\\frac{\\sum _{i=1}^{i = p} 2a(\\tau )b(\\tau )B_{g_1}\\mathcal {R}_1(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau )}{p}}_{Term~14}+ \\\\ \\underbrace{\\frac{\\sum _{i=1}^{i = p} 2b^2(\\tau )B_{g_1}\\sin (2\\pi f(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau ))\\mathcal {R}_1(t_0+(i-1)\\Delta t_\\mathrm {M}+\\tau )}{p}}_{Term~15}.$ In the present study, $t_0=0.025$  s and $\\Delta t_\\mathrm {M}=5$  s for both test conditions.", "Using these and for $p \\rightarrow \\infty $ , it can be shown that Terms 1–5 in Eq.", "( REF) equal $1/2$ , $B_{g_2}^2/3$ , $a^2(\\tau )$ , $b^2(\\tau )/2$ , and $b^2(\\tau ) B_{g_1}^2/3$ , respectively.", "Also, Terms 6, 7, and 9–15 are zero for $p \\rightarrow \\infty $ .", "Using trigonometric relations, it can be shown that Term 8 reduces to $-b(\\tau )\\cos (2 \\pi f\\tau )$ .", "As a result, for an infinitely large number of training datasets, it can be obtained that $RMSE^2(\\tau ,\\infty ) = \\left(\\frac{1}{2}+\\frac{B^2_{g_1}}{3}\\right)b^2(\\tau )-b(\\tau )\\cos (2 \\pi f \\tau )+a^2(\\tau )+\\frac{B^2_{g_2}}{3}+\\frac{1}{2}.$ Analyses presented in section  showed that, for $B_{g_1}$ and $B_{g_2}$ smaller than or equal to unity, $a(\\tau )$ is close to zero and nearly independent of $\\tau $ .", "This can be seen for the results presented in Fig.", "REF (a), with $B_{g_1} = B_{g_2} = 0.5$ .", "Also, by definition, the slope of the best fit at a given value of $\\tau $ , i.e.", "$b(\\tau )$ , is estimated so that $RMSE$ is minimized.", "Thus, $b(\\tau )$ can be obtained using $\\partial RMSE/\\partial b = 0$ , which leads to $b(\\tau ) = \\frac{\\cos (2\\pi f \\tau )}{1+\\frac{2B^2_{g_1}}{3}}.$ Substituting $b(\\tau )$ from Eq.", "(REF ) into Eq.", "(REF ), a closed form for $RMSE(\\tau ,\\infty )$ is obtained and is given by $RMSE(\\tau ,\\infty ) = \\sqrt{\\frac{1}{2}+\\frac{B^2_{g_2}}{3}-\\frac{\\cos ^2(2\\pi f \\tau )}{2+\\frac{4B^2_{g_1}}{3}}}.$ The predictions of Eq.", "(REF ) are presented in Fig.", "REF using the black dotted-dashed curves." ] ]
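The closed-form result for $RMSE(\tau ,\infty )$ and the accompanying expression for $b(\tau )$ can be checked numerically. The sketch below draws a large sample with uniformly distributed phases (standing in for the equidistribution of the sampling times implicit in the $p\rightarrow \infty $ limit) and independent uniform $[-1,1]$ backgrounds (an assumption consistent with the $B^2/3$ terms above), fits the least-squares line, and compares the empirical error with the closed form; it is a verification sketch, not code from the study.

```python
import numpy as np

# Monte-Carlo check of the closed-form RMSE(tau, infinity) derived above,
# assuming equidistributed phases and uniform [-1, 1] background noise.
rng = np.random.default_rng(0)
f, B1, B2, p = 1.0 / (2 * 15.6e-3), 1.0, 1.0, 200_000

def empirical_rmse(tau):
    theta = rng.uniform(0.0, 2 * np.pi, p)              # sampled phases
    y = np.sin(theta) + B2 * rng.uniform(-1, 1, p)      # g2 samples
    x = np.sin(theta + 2 * np.pi * f * tau) + B1 * rng.uniform(-1, 1, p)
    b, a = np.polyfit(x, y, 1)                          # least-squares fit
    return np.sqrt(np.mean((y - (a + b * x)) ** 2))

def closed_form_rmse(tau):
    return np.sqrt(0.5 + B2 ** 2 / 3
                   - np.cos(2 * np.pi * f * tau) ** 2 / (2 + 4 * B1 ** 2 / 3))

for tau in (0.0, 0.004, 0.0078, 0.0156):                # lags in seconds
    print(f"tau = {1e3 * tau:5.1f} ms:  Monte Carlo {empirical_rmse(tau):.4f}"
          f",  closed form {closed_form_rmse(tau):.4f}")
```

The same sample can also be used to confirm that the fitted slope approaches $\cos (2\pi f\tau )/(1+2B_{g_1}^2/3)$ , as derived above.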
2212.05543
[ [ "Maximal first Betti number rigidity for open manifolds of nonnegative\n Ricci curvature" ], [ "Abstract Let $M$ be an open Riemannian $n$-manifold with nonnegative Ricci curvature.", "We prove that if the first Betti number of $M$ equals $n-1$, then $M$ is flat." ], [ "Introduction", "Let $M$ be a complete $n$ -dimensional Riemannian manifold with nonnegative $\\text{Ricci}$ curvature.", "If $M$ is compact, a classical result in Riemannian geometry says that the first Betti number $b_1(M) \\le n$ and “=\" holds if and only if $M$ is a flat torus ([2], [9]).", "If $M$ is open (i.e.", "complete and not compact), it has been known that $b_1(M)\\le n-1$ ([3], [1]).", "In this paper, we will prove the following rigidity result: Theorem 1 If $M$ is an open Riemannian $n$ -manifold with $\\text{Ric}_M\\ge 0$ , then $b_1(M)=n-1$ if and only if $M$ is flat with a soul $T^{n-1}$ , a torus of dimension $n-1$ .", "Remark 1 Note that $M$ in Theorem REF has only two possible diffeomorphism types, $\\mathbb {R}\\times T^{n-1}$ or $\\mathbb {M}^2\\times T^{n-2}$ , where $\\mathbb {M}^2$ is an open Möbius band.", "In the former case, $M$ is isometric to $\\mathbb {R}\\times T^{n-1}$ (see (2) of Proposition REF ).", "Let's briefly explain a reason for $b_1(M)\\le n-1$ .", "Let $\\pi :(\\tilde{M},\\tilde{p})\\rightarrow (M,p)$ be the Riemannian universal covering.", "Let $G$ be a finitely generated subgroup of $\\pi _{1}(M,p)$ .", "Given a finite set of symmetric generators $S=\\lbrace g_1,\\cdots ,g_k\\rbrace $ of $G$ , the word length $|g|$ of an element $g\\in G$ is defined as $ |g|=\\min \\lbrace l\\,|\\, g=g_{i_1}g_{i_2}\\cdots g_{i_l}\\rbrace $ .", "Put $U(r)=\\lbrace g\\in G\\,|\\, |g|\\le r\\rbrace $ and let $\\#(U(r))$ be the number of elements in $U(r)$ .", "By the packing argument used in Milnor [6] and the use of the Dirichlet fundamental domain, Anderson [1] concluded that $\\#(U(r))\\cdot \\text{Vol}(B_r(p))\\le \\text{Vol}(B_{cr}(\\tilde{p}))$ for some constant $c>0$ independent of $r$ .", "Since $\\lim \\limits _{r\\rightarrow \\infty }\\frac{\\text{Vol}(B_r(\\tilde{p}))}{r^n}\\le \\omega _n$ ( where $\\omega _n$ is the volume of the unit ball in $\\mathbb {R}^{n}$ ) by Bishop-Gromov volume comparison and $\\liminf \\limits _{r\\rightarrow \\infty } \\frac{\\text{Vol}(B_r(p))}{r}>0$ by Yau [10], one concludes that every finitely generated subgroup of $\\pi _{1}(M,p)$ has polynomial growth of order $\\le n-1$ , thus $b_1(M)\\le n-1$ (see Proposition REF ).", "Note that Cheeger and Gromoll also showed that $b_1(M)\\le n-1$ as an application of their splitting theorem ( Theorem 4 in [3]).", "The approach to the main results of this paper is based on the proof of Theorem 4 in [3].", "Let $M$ be an open $n$ -manifold with $\\text{Ric}_M\\ge 0$ .", "By Theorem REF , $b_1(M)=n-1$ implies that $M$ has linear volume growth and $\\tilde{M}$ has Euclidean volume growth.", "In this paper, we classified such manifolds: Theorem 2 (part of Theorem REF ) Let $M$ be an open $n$ -manifold with $\\text{Ric}_M\\ge 0$ and let $\\tilde{M}$ be its Riemannian universal cover.", "Then $M$ has linear volume growth and $\\tilde{M}$ has Euclidean volume growth if and only if $M$ is flat with an $n-1$ dimensional soul.", "We use the following notion of orbit growth in the proof of Theorem REF and Theorem REF : Definition 1 Denote by $\\#(A)$ the number of elements in a set $A$ .", "Let $(X,d)$ be a metric space and let $\\text{Isom}(X)$ be its isometry group.", "Let $\\Gamma $ be a subgroup of $\\text{Isom}(X)$ .", 
"For every $ x\\in X$ , put $ D^{\\Gamma }(x,r)=\\lbrace g\\in \\Gamma :d(x,g(x))\\le r\\rbrace $ .", "Given $p\\in \\mathbb {R}_+$ , we say $\\Gamma $ has polynomial orbit growth related to $x$ of order $\\ge p \\,\\,(\\le p)$ , if and only if $\\liminf \\limits _{r\\rightarrow \\infty } \\frac{\\#(D^{\\Gamma }(x,r))}{r^p}>0\\,\\,\\,( \\limsup \\limits _{r\\rightarrow \\infty } \\frac{\\#(D^{\\Gamma }(x,r))}{r^p}<\\infty ).$ We say $\\Gamma $ has polynomial orbit growth related to $x$ of order $> p\\,\\,(<p)$ , if and only if $\\lim _{r\\rightarrow \\infty } \\frac{\\#(D^{\\Gamma }(x,r))}{r^p}=\\infty \\,\\,(=0).$ One may verify without difficulty that the polynomial orbit growth properties defined above do not depend on the choice of the base point $x$ .", "We will prove that either the conditions of Theorem REF or Theorem REF imply that the deck transformation group $\\Gamma $ of the Riemannian universal covering $\\pi :\\tilde{M}\\rightarrow M$ has orbit growth of order $\\ge n-1$ .", "But we also have: Theorem 3 Let $M$ be an open $n$ -manifold with $\\text{Ric}_M\\ge 0$ .", "Let $\\pi :\\tilde{M}\\rightarrow M$ be the Riemannian universal covering with deck transformation group $\\Gamma $ .", "If $M$ is not flat, then $\\Gamma $ has polynomial orbit growth of order $< n-1$ .", "So the manifolds in Theorem REF and Theorem REF can only be flat.", "Theorem REF is obtained by improving the volume estimate in the proof of Theorem 4 of Cheeger-Gromoll [3].", "The proof of Theorem REF is given in Section 2.", "Given Theorem REF , we now prove Theorem REF : [Proof of Theorem REF ] The condition $b_1(M)=n-1$ implies that $\\pi _{1}(M)$ has a finitely generated subgroup of polynomial growth of order $\\ge n-1$ (see Proposition REF ).", "One then concludes that the deck transformation group of the Riemannian universal covering of $M$ has polynomial orbit growth of order $\\ge n-1$ (Proposition REF ).", "By Theorem REF , $M$ can only be flat.", "The soul theorem and the classical Bochner rigidity then imply that a soul of $M$ is $T^{n-1}$ .", "To prove Theorem REF , we need the following Theorem: Theorem 4 Let $N$ be a complete Riemannian $m$ -manifold.", "Let $\\pi :(\\bar{N},\\bar{x}_0)\\rightarrow (N,x_0)$ be a normal covering with deck transformation group $G$ .", "Then for every $r>0$ we have: $ \\#(D^G(\\bar{x}_0,2r))\\cdot \\text{Vol}(B_r(x_0))&\\ge \\text{Vol}(B_r(\\bar{x}_0)),\\\\ \\#(D^G(\\bar{x}_0,r))\\cdot \\text{Vol}(B_r(x_0))&\\le \\text{Vol}(B_{2r}(\\bar{x}_0)).$ Remark 2 Inequality() is the orbit version of Anderson's inequality(REF ).", "The author observed that the inverse inequality(REF ) also holds for orbits.", "Note that the similar result does not hold for the word length, i.e.", "the inequality like $\\#(U(cr))\\cdot \\text{Vol}(B_{cr}(x_0))\\ge \\text{Vol}(B_{r}(\\bar{x}_0))$ is false in general.", "This is the main reason why we use the notion of polynomial orbit growth instead of polynomial growth of a (finitely generated) group.", "For example, we may consider $N=\\mathbb {R}\\times \\mathbb {S}^1$ with warped product metric $g_N=dr^2+\\phi (r)ds^2 $ and its universal cover, where $ds^2$ is the canonical metric on $\\mathbb {S}^1$ , $\\phi (r)=1$ for $|r|\\le 1$ and $\\phi (r)=r^{-2}$ for $|r|>2$ .", "A simple estimate shows that inequality(REF ) does not hold for every $c>0$ .", "Let $M$ be an open $n$ -manifold with $\\text{Ric}_M\\ge 0$ .", "Let $\\Gamma $ be the deck transformation group of its Riemannian universal covering.", "It is clear from the inequality() 
that $\\Gamma $ has polynomial orbit growth of order $\\le n-1$ .", "Definition 2 We say $\\Gamma $ has maximal orbit growth, if and only if $\\Gamma $ has polynomial orbit growth of order $\\ge n-1$ , i.e.", "$\\liminf \\limits _{r\\rightarrow \\infty } \\frac{\\#(D^{\\Gamma }(\\tilde{x},r))}{r^{n-1}}>0 \\text{ for some $\\tilde{x}\\in \\tilde{M}$}.$ By Theorem REF , if $\\Gamma $ has maximal orbit growth, then $M$ is flat.", "In the following Theorem REF , we give several equivalent characterizations of the maximal orbit growth condition and classify such manifolds.", "It is well known from the work of Milnor [6] and Gromov [4] that if $\\pi _1(M)$ is finitely generated, then it is almost nilpotent.", "The nilpotency rank of a finitely generated almost nilpotent group $G$ is defined in 2.4.1 of [7] and is denoted by $\\text{rank}(G)$ , which equals the polycyclic rank of a finite index subgroup of $G$ .", "Similar to the first Betti number, we have $\\text{rank}(\\pi _{1}(M))\\le n-1$ if $\\pi _{1}(M)$ is finitely generated (see Proposition REF ).", "Theorem 5 Let $M$ be an open $n$ -manifold with $\\text{Ric}_M\\ge 0$ .", "Let $\\pi :\\tilde{M}\\rightarrow M$ be the Riemannian universal covering with deck transformation group $\\Gamma $ .", "Then we have: (a) $M$ is flat with an $n-1$ dimensional soul.", "if and only if any one of the following conditions holds: (b) There is a finitely generated subgroup G of $\\pi _{1}(M)$ such that $\\text{rank}(G)= n-1$ .", "(c) $\\Gamma $ fails to have polynomial orbit growth of order $< n-1$ .", "That is, there exists a sequence $r_i\\rightarrow \\infty $ such that $\\lim _{i\\rightarrow \\infty } \\frac{\\#(D^{\\Gamma }(\\tilde{x},r_i))}{r_i^{n-1}}>0 \\text{ for some $\\tilde{x}\\in \\tilde{M}$} .$ (d) $\\Gamma $ has maximal orbit growth.", "(e) $M$ has linear volume growth and $\\tilde{M}$ has Euclidean volume growth.", "$(a)\\Rightarrow (b)$ : The condition shows that $M$ has the fundamental group of a compact flat $(n-1)$ -manifold.", "It follows from the Bieberbach theorem that $\\mathbb {Z}^{n-1}$ is a finite index subgroup of $\\pi _{1}(M)$ .", "$(b)\\Rightarrow (c)$ : By Proposition REF , $G$ has polynomial growth of order $\\ge n-1$ .", "By Proposition REF , $G$ has polynomial orbit growth of order $\\ge n-1$ .", "This implies $(d)$ holds, hence $(c)$ holds.", "$(c)\\Rightarrow (d)$ : If $(c)$ holds, Theorem REF claims that $M$ is flat.", "This implies that $\\Gamma $ has polynomial orbit growth of order $\\ge k$ and $\\le k$ , where $k$ is the dimension of a soul of $M$ (see (1) of Proposition REF ).", "Condition (c) forces $k=n-1$ .", "Hence (d) follows.", "$(d)\\Leftrightarrow (e)$ : If (d) holds, inequality() of Theorem REF , together with the fact that $M$ has at least linear volume growth show that $\\tilde{M}$ has Euclidean volume growth.", "Using inequality() again, we find that $M$ has linear volume growth.", "If (e) holds, inequality(REF ) of Theorem REF shows that $\\Gamma $ has maximal orbit growth.", "$(d)\\Rightarrow (a)$ : Theorem REF shows that $M$ is flat.", "Since $M$ has maximal orbit growth, (1) of Proposition REF asserts that every soul of $M$ has dimension $n-1$ ." 
], [ "Proof of Theorem ", "We prove Theorem REF in this section.", "Let $N$ be a complete Riemannian manifold.", "For $p\\in N,r>0$ , set $D_{r}(p)=&\\lbrace x\\in N\\,|\\, d(x,p)\\le r \\rbrace ,\\\\C_{r}(p)=&\\lbrace x\\in N\\,|\\, \\text{if $q\\in N$ and $d(q,x)>r$, then $d(p,x)+d(x,q)-d(p,q)>0$}\\rbrace , \\\\N_r(p)=&D_r(p)\\cup C_{r}(p).$ Note that $C_r(p)$ is exactly the points $x\\in N$ such that every minimal geodesic connecting $p$ and $x$ cannot extend at the $x$ end to a minimal geodesic of length $>d(p,x)+r$ .", "To prove Theorem 4 in [3], Cheeger-Gromoll made the following key observation: Lemma 1 Let $x_0\\in N, R>0, \\Lambda =\\text{Isom}(N)$ .", "If $N$ contains no line, then there exists a $d>0$ such that $\\Lambda \\cdot B_R(x_0)\\subset N_d(x_0)$ .", "Otherwise, there exist $ d_i\\rightarrow \\infty ,x_i\\in B_R(x_0)$ and $f_i\\in \\Lambda $ such that $f_i(x_i)\\notin N_{d_i}(x_0)$ , then we can find unit speed minimal geodesics $\\gamma _i:[-d_i,d_i]\\rightarrow N$ with $\\gamma _i(0)=f_i(x_i)$ .", "But $f_i^{-1}\\circ \\gamma _i$ converges to a line $\\gamma $ with $d(x_0,\\gamma )\\le R$ .", "This is a contradiction.", "The key observation of the author is contained in the following Lemma REF : Lemma 2 Fix $h>0$ .", "Let $N$ be an $m$ -dimensional complete Riemannian manifold with nonnegative $\\text{Ricci}$ curvature ($m\\ge 2$ ).", "For $p\\in N,r>0$ , set $W_r^h(p)=D_r(p)\\cap C_{h}(p)$ .", "Then $\\lim \\limits _{r\\rightarrow \\infty } \\frac{\\text{Vol}(W_r^h(p))}{r^{m-1}}=0.$ Set $S_{p}N=&\\lbrace v\\in T_{p}N\\,|\\,||v||=1 \\rbrace ,\\\\C_{p}N=&\\lbrace v\\in S_{p}N\\,|\\, \\text{exp}_{p}(tv)|_{[0,\\infty )} \\,\\text{is not a ray}\\rbrace ,\\\\C^s_{p}N=&\\lbrace v\\in C_{p}N\\,|\\,\\text{exp}_{p}(tv)|_{[0,s+\\epsilon )} \\text{ is not a minimal geodesic for every $\\epsilon >0$}\\rbrace ,\\\\g(s)=&m(C^s_{p}N),\\text{where $m$ is the standard measure on $S_{p}N$.", "}$ Using the Bishop-Gromov volume element comparison in the polar coordinate of $T_pN$ , we get $\\begin{split}\\text{Vol}(W_r^h(p))=&\\int _{C^{r+h}_pN}\\int _{a(\\theta )}^{b(\\theta )}\\mu (t,\\theta )dtd\\theta \\\\\\le &\\int _{C^{r+h}_pN}\\int _{a(\\theta )}^{b(\\theta )}t^{m-1}dtd\\theta \\\\\\le & m(S_pN)\\cdot h\\cdot r^{m-1},\\end{split}$ where $\\mu (t,\\theta )dtd\\theta $ is the volume form of $N$ at $\\text{exp}_p(t\\theta )$ , $b(\\theta )-a(\\theta )\\le h$ , and $0\\le a(\\theta )\\le b(\\theta )\\le r $ for every $\\theta \\in C_p^{r+h}N$ (This is exactly the estimate Cheeger-Gromoll obtained).", "The author observed that by measure theory, $\\lim \\limits _{r\\rightarrow \\infty }m(C^{r+h}_{p}N\\backslash C^{\\sqrt{r}}_{p}N)=0$ since $m(C_{p}N)<\\infty , C^{r_1}_{p}N\\subset C^{r_2}_{p}N \\,\\,\\text{for}\\,\\, r_1<r_2 ,\\text{and} \\,\\,C_{p}N=\\cup _{r>0}C^r_{p}N.$ Put $A_r=C^{r+h}_{p}N\\backslash C^{\\sqrt{r}}_{p}N$ .", "Similar to (1), we have for $r>\\max \\lbrace 1,h\\rbrace $ $\\begin{split}\\text{Vol}(W_r^h(p)\\backslash W^h_{\\sqrt{r}}(p))=&\\text{Vol}(C_h(p)\\cap (D_r(p)\\backslash D_{\\sqrt{r}}(p)))\\\\=& \\int _{A_r}\\int _{a_1(\\theta )}^{b_1(\\theta )}\\mu (t,\\theta )dtd\\theta \\\\\\le &\\int _{A_r}\\int _{r-h}^{r}t^{m-1}dtd\\theta \\\\\\le & m(A_r)\\cdot h\\cdot r^{m-1},\\end{split}$ where $b_1(\\theta )-a_1(\\theta )\\le h$ and $\\sqrt{r}\\le a_1(\\theta )\\le b_1(\\theta )\\le r $ .", "So $\\text{Vol}(W^h_r(p))&=\\text{Vol}(W^h_{\\sqrt{r}}(p))+\\text{Vol}(W_r^h(p)\\backslash W^h_{\\sqrt{r}}(p))\\\\&\\le h(m(S_pN)r^{\\frac{m-1}{2}}+m(A_r)r^{m-1})$ The result 
follows.", "[Proof of Theorem REF ] Fix a point $\\tilde{p}\\in \\tilde{M}$ .", "Since $M$ is not flat, we have by splitting theorem that $(\\tilde{M},\\tilde{p})=(N^k\\times \\mathbb {R}^{n-k},(x_0,0)) $ , where $N$ does not contain a line and $k\\ge 2$ .", "Fix an $l>0$ such that $B_l(g_1\\cdot \\tilde{p})\\cap B_l(g_2\\cdot \\tilde{p})=\\emptyset \\,$ for any $ g_1,g_2\\in \\Gamma ,g_1\\ne g_2$ .", "by Lemma REF , there is a $d>0$ such that $\\Lambda \\cdot B_l(x_0)\\subset N_d(x_0)$ , where $\\Lambda =\\text{Isom}(N)$ .", "We conclude that $\\Gamma \\cdot B_l(\\tilde{p})\\subset N_d(x_0)\\times \\mathbb {R}^{n-k} ,$ since the isometry group of $\\tilde{M}$ also splits.", "Write $\\Gamma _r=D^\\Gamma (\\tilde{p},r)=\\lbrace g\\in \\Gamma \\,|\\,d(g\\cdot \\tilde{p},\\tilde{p})\\le r \\rbrace ,$ (REF ) implies $\\sqcup _{g\\in \\Gamma _r}g\\cdot B_l(\\tilde{p})\\subset (N_d(x_0)\\cap B_{r+l}(x_0))\\times B_{r+l}(0).$ By Lemma REF , we may write $\\text{Vol}(W_r^d(x_0))=f(r)r^{k-1}$ , where $\\lim \\limits _{r\\rightarrow \\infty }f(r)=0$ .", "(REF ) then gives $\\begin{split}&\\#(\\Gamma _r)\\cdot \\text{Vol}(B_l(\\tilde{p}))\\\\\\le & \\omega _{n-k}\\left( \\text{Vol}(D_d(x_0))+\\text{Vol}(W^d_{r+l}(x_0))\\right)(r+l)^{n-k}\\\\ \\le & \\omega _{n-k}(\\omega _{k}d^k+f(r+l)r^{k-1} )\\cdot (r+l)^{n-k},\\end{split}$ Since $k\\ge 2$ , this gives the result." ], [ "Realization of Maximal Orbit Growth", "In this section, we prove the Propositions used in the Introduction and prove Theorem REF .", "They are used to realize the maximal orbit growth under various conditions.", "To obtain the polynomial growth property of a finitely generated subgroup of $\\pi _{1}(M)$ , Milnor [6] essentially used the orbit growth as a bridge: Proposition 1 (Milnor [6]) Let $X$ be a metric space, $x_0\\in X$ , $\\Gamma $ be a finitely generated subgroup of $\\text{Isom}(X)$ , and $S=\\lbrace g_1,\\cdots ,g_k\\rbrace $ be a set of symmetric generators of $\\Gamma $ .", "Put $W(r)&=\\lbrace g\\in \\Gamma \\,|\\, \\text{the word length of $g$ related to } S, |g|\\le r \\rbrace ,\\\\h&=\\max \\lbrace d(x_0,g_ix_0)\\,|\\,i=1,\\cdots ,k\\rbrace .$ Then $W(r) \\subset D^\\Gamma (x_0,hr) $ .", "Especially, if $\\Gamma $ has polynomial growth of order $\\ge $ p ($>$ p), then $\\Gamma $ has polynomial orbit growth of order $\\ge $ p ($>$ p).", "If $g\\in W(r)$ , we can write $g=g_{i_1}g_{i_2}\\cdots g_{i_t}$ , where $t\\le r$ , $g_{i_j}\\in S$ .", "We have $d(x_0, gx_0)&= d(x_0,g_{i_1}g_{i_2}\\cdots g_{i_t}x_0)\\\\&\\le d(x_0,g_{i_1}x_0)+d(g_{i_1}x_0,g_{i_1}g_{i_2}x_0)+\\cdots +d(g_{i_1}g_{i_2}\\cdots g_{i_{t-1}}x_0,g_{i_1}g_{i_2}\\cdots g_{i_t}x_0)\\\\&\\le hr.$ The result follows.", "Next, we show how the first Betti number of a space controls the polynomial growth of its fundamental group.", "Recall: Definition 3 ([5] section 7.2) If $G$ is an abelian group, the $\\text{rank}$ of $G$ is the maximal integer $k$ such that there exist $g_1,g_2,\\cdots ,g_k\\in G$ which satisfy : $\\text{if} \\,\\,\\sum _{i=1}^{k}l_ig_i=0\\,\\,\\text{for some $l_1,\\cdots ,l_k\\in \\mathbb {Z}$}, \\text{then}\\,\\, l_1=l_2=\\cdots =l_k=0.$ $g_1,g_2,\\cdots ,g_k$ is called independent in this case.", "Denote by $\\text{rank}(G)$ the $\\text{rank}$ of $G$ .", "Remark 3 If $G$ is a finitely generated abelian group, then its rank equals its nilpotency rank.", "Definition 4 Let $X$ be a topological space.", "The first Betti number of $X$ , denoted by $b_1(X)$ , is the rank of its first homology group $H_1(X)$ .", "From the proof of Theorem 1.3 
in [1], we have the following algebraic proposition: Proposition 2 Let $X$ be a path-connected space, $x_0\\in X$ .", "If $b_1(X)=k$ , then there is a subgroup $G$ of $\\pi _1(X,x_0)$ such that $G$ is generated by $k$ elements and $G$ has polynomial growth of order $\\ge k$ .", "We include a proof of Proposition REF here for the convenience of readers: [Proof of Proposition REF ] Let $h:\\pi _1(X,x_0)\\rightarrow H_1(X)$ be the Hurewicz homomorphism.", "Since $b_1(X)=k$ , we can choose $\\gamma _1,\\cdots ,\\gamma _{k}\\in H_1(X)$ such that they are independent.", "We can choose $g_i\\in (h)^{-1}(\\gamma _i)$ since $h$ is surjective.", "Set $S_1&=\\lbrace g_1,\\cdots ,g_{k},g_1^{-1},\\cdots ,g_{k}^{-1} \\rbrace ,\\\\S_2&=\\lbrace \\gamma _1,\\cdots ,\\gamma _{k},\\gamma _1^{-1},\\cdots ,\\gamma _{k}^{-1} \\rbrace ,\\\\G_i&=\\langle S_i\\rangle \\,\\,\\text{for} \\,\\,i=1,2,\\\\W_i(r)&=\\lbrace g\\in G_i\\,|\\, \\text{the word length of $g$ related to } S_i, |g|\\le r \\rbrace .$ For every $r>0$ , there is an injection $\\begin{split}A_r:W_2(r)&\\rightarrow W_1(r)\\\\l_1\\gamma _1+\\cdots +l_{k}\\gamma _{k}&\\mapsto g_1^{l_1}g_2^{l_2}\\cdots g_{k}^{l_{k}}\\end{split}$ since $h\\circ A_r=\\text{id}$ .", "This implies that $G_1$ has polynomial growth of order $\\ge k$ since $G_2$ has polynomial growth of order $\\ge k$ .", "Similarly, we have: Proposition 3 If $G$ is a finitely generated almost nilpotent group with $\\text{rank}(G)=k$ , then $G$ has polynomial growth of order $\\ge k$ .", "It is easy to verify that a finitely generated group $A$ has polynomial growth of order $\\ge k$ if a finitely generated subgroup of $A$ has polynomial growth of order $\\ge k$ .", "By Theorem 17.2.2 of [5], there exists a finite index torsion-free subgroup of $G$ .", "Denote it by $H$ .", "Then $H$ is finitely generated and $\\text{rank}(H)=k$ .", "It suffices to prove that $H$ has polynomial growth of order $\\ge k$ .", "By Theorem 17.2.2 of [5], there is a normal series $H=H_k\\rhd H_{k-1}\\rhd \\cdots \\rhd H_0=\\lbrace e\\rbrace $ such that each $H_i/H_{i-1}$ is isomorphic to $\\mathbb {Z}$ .", "Denote by $[h_i]=h_iH_{i-1}$ a generator of $H_i/H_{i-1}$ .", "It is easy to check that $I:\\mathbb {Z}^k&\\rightarrow H\\\\(l_1,\\cdots ,l_k)&\\mapsto h_1^{l_1}\\cdots h_k^{l_k}$ is an injection.", "If we choose a finite set of generators of $H$ containing $h_1,h_2,\\cdots ,h_k$ , then the fact that $I$ is an injection implies that $H$ has polynomial growth of order $\\ge k$ with respect to this set of generators.", "We now study the orbit growth of open flat manifolds: Proposition 4 Let $M$ be an open flat $n$ -manifold.", "Let $\\pi :(\\mathbb {R}^n,0^n)\\rightarrow (M,x_0)$ be the universal covering map, with deck transformation group $\\Gamma $ .", "then (1) $\\Gamma $ has polynomial orbit growth of order $\\ge k$ and $\\le k$ , where $k$ is the dimension of a soul $S$ of $M$ .", "(2) If $T^{n-1}$ is a soul of $M$ , then $M$ is diffeomorphic to $\\mathbb {R}\\times T^{n-1}$ or $\\mathbb {M}^2\\times T^{n-2}$ .", "In the former case, $M$ is isometric to $\\mathbb {R}\\times T^{n-1}$ .", "(1) Denote the Sharafutdinov retraction ([8], see also [11]) by $r:M\\rightarrow S$ .", "$r$ is a distance nonincreasing strong deformation retraction.", "Choose a base point $x_0\\in S$ and let $\\iota :S\\hookrightarrow M$ be the inclusion.", "We make the identification $\\pi _1(M,x_0)=\\pi _1(S,x_0)$ by $\\iota _*:\\pi _1(S,x_0)\\rightarrow \\pi _1(M,x_0)$ .", "The properties of $S$ and $r$ guarantee the following: Fact 
(a) $\\pi ^{-1}(S)$ with the induced metric is totally geodesic in $\\mathbb {R}^n$ and $\\pi |_{\\pi ^{-1}(S)}:(\\pi ^{-1}(S), 0^n)\\rightarrow (S,x_0)$ is the Riemannian universal covering map of $S$ .", "Hence $\\pi ^{-1}(S)$ is a $k$ -dimensional linear subspace of $\\mathbb {R}^n$ .", "(b) For every $\\gamma \\in \\pi _1(S,x_0),y\\in \\pi ^{-1}(S), \\gamma \\cdot y=\\iota _*(\\gamma )\\cdot y$ , where both deck transformations are determined by the basepoint $0^n$ .", "Now Bieberbach theorem tells us that the pure translation subgroup $\\mathbb {Z}^k$ has finite index in $\\pi _1(S,x_0)$ .", "By Fact (b), $\\iota _*(\\mathbb {Z}^k)\\le \\Gamma $ has polynomial orbit growth of order $\\le k$ and $\\ge k$ .", "It follows that $\\Gamma $ has polynomial orbit growth of order $\\ge k$ .", "By Lemma REF below, we obtain that $\\Gamma $ also has polynomial orbit growth of order $\\le k$ .", "Lemma 3 Let $(X,d)$ be a metric space and let $H$ be a subgroup of $\\text{Isom}(X)$ .", "Let $K$ be a finite index subgroup of $H$ .", "If $K$ has polynomial orbit growth of order $\\le k$ , then $H$ also has polynomial orbit growth of order $\\le k$ .", "We may write $H$ as the disjoint union of left cosets: $H=\\sqcup _{i=1}^{l}h_iK$ .", "Fix an $x_0\\in X$ .", "Set $r_0=\\max \\limits _{i=1,\\cdots ,l}\\lbrace d(x_0,h_ix_0)\\rbrace $ .", "If $h=h_jb\\in D^H(x_0,r) $ for some $b\\in K$ , then $d(bx_0,x_0)\\le d(bx_0,h_j^{-1}x_0)+d(h_j^{-1}x_0,x_0)\\le r+r_0$ implies that $b\\in D^K(x_0,r+r_0)$ .", "So $\\#(D^H(x_0,r))\\le l\\cdot \\#(D^K(x_0,r+r_0))$ .", "The result follows.", "(2) In this case, $\\pi _{1}(M,x_0)=\\pi _{1}(S,x_0)\\cong \\mathbb {Z}^{n-1}$ .", "Since every $\\gamma \\in \\mathbb {Z}^{n-1}$ acts on $\\mathbb {R}^n$ by isometry, we may write $\\gamma =(A_\\gamma ,v_\\gamma )\\in \\text{O}(n)\\ltimes \\mathbb {R}^n$ .", "Put $\\Lambda =\\pi ^{-1}(S)$ .", "Since $\\mathbb {Z}^{n-1}$ acts on $\\Lambda $ by translation, Fact (a) and (b) shows that $A_\\gamma |_\\Lambda =\\text{id}$ for every $\\gamma \\in \\mathbb {Z}^{n-1}$ .", "Let $\\Lambda ^\\perp $ be the orthonormal completment of $\\Lambda $ , then $\\Lambda ^\\perp $ is a 1-dimensional invariant subspace of every $A_\\gamma $ .", "Each $A_\\gamma |_{\\Lambda ^\\perp }$ can be the reflection or the identity map.", "Let $e_1=(1,\\cdots ,0),\\cdots ,e_{n-1}=(0,\\cdots ,1)$ be the canonical generators of $\\mathbb {Z}^{n-1}$ .", "Write $e_i=(A_i,v_i)$ .", "If every $A_i$ is the identity map on $\\Lambda ^\\perp $ , then $M$ is isometric to $\\mathbb {R}\\times \\mathbb {T}^{n-1}$ .", "Otherwise, without loss of generality, we may assume that $A_1,\\cdots , A_l$ are the reflection on $\\Lambda ^\\perp $ , $A_{l+1},\\cdots ,A_{n-1}$ are the identity map on $\\Lambda ^\\perp $ , then $\\Gamma &=\\langle e_1,\\cdots , e_{n-1}\\rangle \\\\&=\\langle e_1,e_2-e_1,\\cdots ,e_l-e_1,e_{l+1},\\cdots ,e_{n-1}\\rangle \\\\&=\\langle (A_1,v_1),(\\text{id},v_2-v_1),\\cdots , (\\text{id},v_l-v_1),(\\text{id},v_{l+1}),\\cdots ,(\\text{id},v_{n-1})\\rangle .$ In this case, $M$ is diffeomorphic to $\\mathbb {M}^2\\times T^{n-2}$ .", "Finally, we prove Theorem REF : [Proof of Theorem REF ] Proof of inequality(REF ): We use the notion of Dirichlet domain as in [1].", "For every $g\\in G$ , put $D_g=\\lbrace x\\in \\overline{N}\\,|\\,d(x,\\bar{x}_0)<d(x,g\\bar{x}_0)\\rbrace $ .", "Define the Dirichlet domain $F$ associated to $\\bar{x}_0$ by $F=\\bigcap \\limits _{g\\in G,g\\ne e} D_g$ .", "Then $F$ is a fundamental domain for the action of $G$ .", "Denote by 
$\\overline{F}$ the closure of $F$ in $\\bar{N}$ .", "By the proof of Theorem 1.1 in [1], the following hold: $\\pi (B_r(\\bar{x}_0)\\cap \\overline{F})&=B_r(x_0),\\\\\\partial F=\\overline{F}\\backslash F \\,\\,&\\text{has Riemannian measure 0 in $\\overline{N}$},\\\\\\text{Vol}(B_r(\\bar{x}_0)\\cap F)&=\\text{Vol}(B_r(x_0)).$ Claim 1 $\\bigcup \\limits _{g\\in D(\\bar{x}_0,2r)}g\\cdot (B_r(\\bar{x}_0)\\cap \\overline{F})\\supset B_r(\\bar{x}_0)$ .", "For every $y\\in B_r(\\bar{x}_0)$ , let $g_y\\bar{x}_0$ be a point in $G\\cdot \\bar{x}_0$ such that $d(g_y\\bar{x}_0,y)=\\min _{g\\in G}\\lbrace d(g\\bar{x}_0,y)\\rbrace $ .", "Then $d(g_y\\bar{x}_0,y)\\le d(\\bar{x}_0,y)<r$ .", "So $g_y\\in D^G(\\bar{x}_0,2r)$ and $g_y^{-1}y\\in B_r(\\bar{x}_0)$ .", "If $\\bar{x}_0$ is the unique point in $G\\cdot \\bar{x}_0$ which is closest to $g_y^{-1}y$ , then $g_y^{-1}y\\in D_g$ for every $g\\ne e$ .", "By definition, $g_y^{-1}y\\in F$ , so $ y\\in g_y\\cdot (B_r(\\bar{x}_0)\\cap F)$ .", "Otherwise, consider points $p_i\\ne \\bar{x}_0$ on a minimal geodesic connecting $\\bar{x}_0$ and $g_y^{-1}y$ such that $\\lim \\limits _{i\\rightarrow \\infty }d(g_y^{-1}y,p_i)=0$ .", "Then $p_i\\in D_g$ for every $i$ and every $g\\ne e$ (since geodesics cannot branch).", "So $g_y^{-1}y\\in \\overline{F}$ , $y\\in g_y\\cdot (B_r(\\bar{x}_0)\\cap \\overline{F})$ .", "Since for every $g_1\\ne g_2$ in $G$ , $(g_1\\cdot (B_r(\\bar{x}_0)\\cap F))\\cap (g_2\\cdot (B_r(\\bar{x}_0)\\cap F))=\\emptyset $ , taking the volume in Claim REF , we get $\\#(D^G(\\bar{x}_0,2r))\\cdot \\text{Vol}(B_r(x_0))\\ge \\text{Vol}(B_r(\\bar{x}_0))$ .", "Proof of inequality(): It is clear that $\\bigcup \\limits _{g\\in D(\\bar{x}_0,r)}g\\cdot (B_r(\\bar{x}_0)\\cap F)\\subset B_{2r}(\\bar{x}_0)$ .", "The result follows by taking the volume." ], [ "Acknowledgements", "The author thanks Professor Shicheng Xu for suggesting the problem of the first Betti number rigidity of open manifolds with nonnegative Ricci curvature to him.", "The author thanks his advisor Professor Xiaochun Rong for pointing out to him the equivalence of condition(c) and (d) in Theorem REF and for his sincere guidance." ] ]
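As a quick sanity check of both inequalities in Theorem REF (asymptotic only, and purely illustrative — it plays no role in the proof), take $\bar{N}=\mathbb {R}^m$ , $N=T^m=\mathbb {R}^m/\mathbb {Z}^m$ the unit flat torus, $G=\mathbb {Z}^m$ acting by integer translations, and $\bar{x}_0=0$ . For $r\rightarrow \infty $ we have $\#(D^G(\bar{x}_0,r))=\omega _m r^m(1+o(1))$ , $\text{Vol}(B_r(x_0))=\text{Vol}(T^m)=1$ , and $\text{Vol}(B_r(\bar{x}_0))=\omega _m r^m$ . The first inequality then reads $\omega _m(2r)^m(1+o(1))\cdot 1\ge \omega _m r^m$ , and the second reads $\omega _m r^m(1+o(1))\cdot 1\le \omega _m (2r)^m$ ; both hold with a factor of roughly $2^m$ to spare.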
2212.05530
[ [ "Maximum spread of $K_{2,t}$-minor-free graphs" ], [ "Abstract The spread of a graph $G$ is the difference between the largest and smallest eigenvalues of the adjacency matrix of $G$.", "In this paper, we consider the family of graphs which contain no $K_{2,t}$-minor.", "We show that for any $t\\geq 2$, there is an integer $\\xi_t$ such that the maximum spread of an $n$-vertex $K_{2,t}$-minor-free graph is achieved by the graph obtained by joining a vertex to the disjoint union of $\\lfloor \\frac{2n+\\xi_t}{3t}\\rfloor$ copies of $K_t$ and $n-1 - t\\lfloor \\frac{2n+\\xi_t}{3t}\\rfloor$ isolated vertices.", "The extremal graph is unique, except when $t\\equiv 4 \\mod 12$ and $\\frac{2n+ \\xi_t} {3t}$ is an integer, in which case the other extremal graph is the graph obtained by joining a vertex to the disjoint union of $\\lfloor \\frac{2n+\\xi_t}{3t}\\rfloor-1$ copies of $K_t$ and $n-1-t(\\lfloor \\frac{2n+\\xi_t}{3t}\\rfloor-1)$ isolated vertices.", "Furthermore, we give an explicit formula for $\\xi_t$." ], [ "Introduction", "Given a square matrix $M$ , the spread of $M$ , denoted by $S(M)$ , is defined as $S(M):= \\max _{i,j} |\\lambda _i -\\lambda _j|$ , where the maximum is taken over all pairs of eigenvalues of $M$ .", "In other words, $S(M)$ is the diameter of the spectrum of $M$ .", "Given a graph $G=(V,E)$ on $n$ vertices, the spread of $G$ , denoted by $S(G)$ , is defined as the spread of the adjacency matrix $A(G)$ of $G$ .", "Let $\\lambda _1(G) \\ge \\cdots \\ge \\lambda _n(G)$ be the eigenvalues of $A(G)$ .", "Here $\\lambda _1$ is called the $\\textit {spectral radius}$ of $G$ .", "Since $A(G)$ is a real symmetric matrix, we have that the $\\lambda _i$ s are all real numbers.", "Thus $S(G) = \\lambda _1 -\\lambda _n$ .", "The systematic study of the spread of graphs was initiated by Gregory, Hershkowitz, and Kirkland [11].", "One of the central focuses of this area is to find the maximum or minimum spread over a fixed family of graphs and characterize the extremal graphs.", "Problems of such extremal flavor have been investigated for trees [1], graphs with few cycles [9], [17], [27], the family of all $n$ -vertex graphs [2], [3], [19], [21], [22], [25], the family of bipartite graphs [3], graphs with a given matching number [13], girth [26], or size [12], and very recently for the families of outerplanar graphs [10], [14] and planar graphs [14].", "We note that the spreads of other matrices associated with a graph have also been extensively studied (see e.g.", "references in [10], [5], [7]).", "Given two graphs $G$ and $H$ , the join of $G$ and $H$ , denoted by $G\\vee H$ , is the graph obtained from the disjoint union of $G$ and $H$ by connecting every vertex of $G$ with every vertex of $H$ .", "Let $P_k$ denote the path on $k$ vertices.", "Given two graphs $G$ and $H$ , let $G\\cup H$ denote the disjoint union of $G$ and $H$ .", "Given a graph $G$ and a positive integer $k$ , we use $kG$ to denote the disjoint union of $k$ copies of $G$ .", "Given $v \\subseteq V(G)$ , let $N_G(v)$ denote the set of neighbors of $v$ in $G$ , and let $d_G(v)$ denote the degree of $v$ in $G$ , i.e., $d_G(v) = |N(v)|$ .", "Given $S\\subseteq V(G)$ , define $N_G(S)$ as $N_G(S) = \\lbrace N_G(v):v\\in S\\rbrace $ .", "Given a graph $G$ and disjoint vertex subsets $S,T\\subseteq V(G)$ , we use $E_G(S)$ to denote the set of edges in $E(G[S])$ , and use $E_G(S,T)$ to denote the set of edges with one endpoint in $S$ and the other endpoint in $T$ .", "For all above definitions, we may omit the 
subscript $G$ when there is no ambiguity.", "A graph $H$ is called a minor of a graph $G$ if a graph isomorphic to $H$ can be obtained from a subgraph of G by contracting edges.", "A graph $G$ is called $H$ -minor-free if $H$ is not a minor of $G$ .", "There has been extensive work on finding the maximum spectral radius of $K_{s, t}$ -minor-free graphs.", "Nikiforov [16] showed that every sufficiently large $n$ -vertex $K_{2,t}$ -minor-free graph $G$ satisfies $\\lambda _1(G)\\le (t-1)/2+\\sqrt{n+(t^2-2t-3)/4}$ , with equality if and only if $n \\equiv 1 \\pmod {t}$ and $G$ is $K_1\\vee {n/t}K_t$ .", "Tait [23] extended Nikiforov's result to $K_{s,t}$ -minor-free graphs by giving an upper bound on the maximum spectral radius of an sufficiently large $n$ -vertex $K_{s,t}$ -minor-free graph $G$ , and showed that the upper bound is tight if and only if $n \\equiv s-1 \\pmod {t}$ and $G$ is $K_{s-1}\\vee {(n-s+1)/t}K_t$ .", "In the same paper, Tait conjectured that for all $t\\ge s\\ge 2$ , the maximum spectral radius of a sufficiently large $n$ -vertex $K_{s,t}$ -minor-free graph is attained by $K_{s-1}\\vee (pK_t \\cup K_q)$ , where $p, q$ satisfy that $n-s+1 = pt +q$ and $q\\in [t]$ .", "Very recently, the $K_{s,t}$ -minor-free graphs with maximum spectral radius were determined for $t\\ge s\\ge 2$ by Zhai and Lin [30].", "In this paper, we determine the maximum-spread $K_{2, t}$ -minor-free graphs on $n$ vertices for sufficiently large $n$ and for all $t\\ge 2$ .", "Theorem 1 For $t\\ge 2$ and $n$ sufficiently large, the graph that maximizes the spread over the family of $K_{2,t}$ -minor-free graphs on $n$ vertices is $K_1\\vee \\left( \\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor K_t \\cup \\left(n-1- t\\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor \\right) P_1\\right)$ where $\\xi _t={\\left\\lbrace \\begin{array}{ll}2\\left\\lfloor \\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}\\right\\rfloor & \\mbox{ if } t \\mbox{ is even}\\\\\\left\\lfloor \\frac{3t}{2}-2 - \\frac{2(t-1)^2}{9}\\right\\rfloor & \\mbox{ if } t\\ge 3, \\mbox{ and t is odd.}\\\\\\end{array}\\right.", "}$ The extremal graph is unique unless $t\\equiv 4 \\mod {1}2$ and $\\frac{2n+ \\xi _t}{3t}$ is an integer.", "In this special case, the maximum spread is achieved by two extremal graphs $K_1\\vee \\left( \\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor K_t \\cup \\left(n-1- t\\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor \\right) P_1\\right)$ and $K_1\\vee \\left( \\left(\\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor -1\\right) K_t \\cup \\left(n-1- t\\left(\\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor -1\\right)\\right) P_1\\right).$ We give a list of values of $\\xi _t$ for small $t$ in Table REF .", "Table: The values of ξ t \\xi _t for 2≤t≤202\\le t \\le 20.Our paper is organized as follows.", "In Section , we recall some useful lemmas and prove that in any maximum-spread $K_{2, t}$ -minor-free graph $G$ , there is a vertex $u_0$ which is adjacent to all other vertices in $G$ .", "In Section , we show that $G - u_0$ is a disjoint union of cliques on $t$ vertices and isolated vertices and complete the proof of Theorem REF ." 
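Since the parity-dependent floor expressions in Theorem REF are easy to mis-evaluate by hand, the short Python sketch below transcribes $\xi _t$ and the block count $\lfloor (2n+\xi _t)/(3t)\rfloor $ of the extremal graph directly from the statement of Theorem REF ; the exact rational arithmetic and the sample value of $n$ are the only choices made here, and the printed list can be compared against Table REF .

```python
import math
from fractions import Fraction

def xi(t: int) -> int:
    """xi_t as in Theorem 1 (exact rational arithmetic to avoid rounding issues)."""
    if t % 2 == 0:
        return 2 * math.floor(Fraction(3 * t, 4) - 1 - Fraction((t - 1) ** 2, 9))
    return math.floor(Fraction(3 * t, 2) - 2 - Fraction(2 * (t - 1) ** 2, 9))

def extremal_parameters(n: int, t: int):
    """Number of K_t blocks and of isolated vertices in the extremal graph
    K_1 v (k K_t  u  (n - 1 - t*k) P_1), with k = floor((2n + xi_t)/(3t))."""
    k = (2 * n + xi(t)) // (3 * t)
    return k, n - 1 - t * k

print([xi(t) for t in range(2, 21)])
# -> [0, 1, 2, 1, 0, 0, -2, -3, -6, -8, -12, -15, -20, -24, -28, -34, -40, -46, -54]
print(extremal_parameters(10 ** 4, 5))   # illustrative n; prints (1333, 3334)
```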
], [ "Notations and lemmas", "We first recall a result of Chudnovsky, Reed and Seymour [6] on the maximum number of edges of a $K_{2,t}$ -minor-free graph, which extends an earlier result of Myers [15].", "Theorem 2 [6] Let $t \\ge 2$ be a positive integer, and $G$ be a graph on $n>0$ vertices with no $K_{2,t}$ minor.", "Then $|E(G)|\\le \\frac{1}{2}(t+1)(n-1).$ Let $G$ be a graph which attains the maximum spread among all $n$ -vertex $K_{2,t}$ -minor-free graphs.", "As a first step towards proving Theorem REF , we want to show that $G$ must contain a vertex of degree $n-1$ .", "Recall the result of Nikiforov [16] on the maximum spectral radius of $K_{2,t}$ -minor-free graphs.", "Theorem 3 [16] Let $t\\ge 3$ and $G$ be a graph of order $n$ with no $K_{2,t}$ minor.", "If $n\\ge 400t^6$ , then the spectral radius $\\lambda _1(G)$ satisfies $\\lambda _1(G)\\le \\frac{t-1}{2}+\\sqrt{n+\\frac{t^2-2t-3}{4}},$ with equality if and only if $n\\equiv 1\\pmod {t}$ and $G=K_1\\vee \\lfloor n/t\\rfloor K_t$ .", "We first give some upper and lower bounds on $\\lambda _1(G)$ and $|\\lambda _n(G)|$ when $n$ is sufficiently large.", "We use known expressions for the eigenvalues of a join of two regular graphs [4].", "Lemma 1 [4] Let $G$ and $H$ be regular graphs with degrees $k$ and $\\ell $ respectively.", "Suppose that $|V(G)| = m$ and $|V(H)| = n$ .", "Then, the characteristic polynomial of $G\\vee H$ is $p_{G\\vee H}(t) = ((t-k)(t-\\ell )-mn)\\frac{p_G(t)p_H(t)}{(t-k)(t-\\ell )}$ .", "In particular, if the eigenvalues of $G$ are $k = \\lambda _1 \\ge \\ldots \\ge \\lambda _m$ and the eigenvalues of $H$ are $\\ell = \\mu _1 \\ge \\ldots \\ge \\mu _n$ , then the eigenvalues of $G\\vee H$ are $\\lbrace \\lambda _i: 2\\le i\\le m\\rbrace \\cup \\lbrace \\mu _j: 2\\le j\\le n\\rbrace \\cup \\lbrace x: (x-k)(x-\\ell )-mn = 0\\rbrace $ .", "We will apply Lemma REF to the graph $K_1\\vee qK_t$ to obtain a lower bound on $S(G)$ .", "Lemma 2 Let $G$ be a graph which attains the maximum spread among all $n$ -vertex $K_{2,t}$ -minor-free graphs.", "Then $\\sqrt{n-1} - \\frac{t-1}{2}-O\\left(\\frac{1}{\\sqrt{n}}\\right) \\le |\\lambda _n(G)| \\le \\lambda _1(G) \\le \\sqrt{n-1} +\\frac{t-1}{2}+O\\left(\\frac{1}{\\sqrt{n}}\\right) .$ The upper bound of $\\lambda _1(G)$ is due to Theorem REF .", "Now let us prove the lower bound.", "We will compute $S(K_1\\vee qK_t)$ , where $q = {(n-1)/t}$ .", "Note that $K_1\\vee qK_t$ is $K_{2,t}$ -minor-free.", "Hence, we can lower bound $S(G)$ by $S(K_1\\vee qK_t)$ .", "By Lemma REF , both $\\lambda _1(K_1\\vee qK_t)$ and $\\lambda _n(K_1\\vee qK_t)$ satisfy the equation $\\lambda (\\lambda -(t-1))-qt=0.$ Thus, we have $\\lambda _1(K_1\\vee qK_t) &= \\frac{t-1}{2}+\\sqrt{qt+\\frac{t^2-2t+1}{4}},\\\\\\lambda _n(K_1\\vee qK_t)&= \\frac{t-1}{2}-\\sqrt{qt+\\frac{t^2-2t+1}{4}}.$ Thus $S(K_1\\vee qK_t)=\\sqrt{4qt+t^2-2t+1}$ .", "Since $q=\\lfloor (n-1)/t\\rfloor $ , we then have $S(G)\\ge \\sqrt{4qt+t^2-2t+1}\\ge \\sqrt{4(n-t)+t^2-2t+1}=\\sqrt{4n+t^2-6t+1}=2\\sqrt{n-1}+O\\left(\\frac{1}{\\sqrt{n}}\\right).$ Therefore, $|\\lambda _n(G)| &= S(G)-\\lambda _1(G) \\\\&\\ge 2\\sqrt{n-1}+O\\left(\\frac{1}{\\sqrt{n}}\\right) - \\left(\\sqrt{n-1} +\\frac{t-1}{2}+O\\left(\\frac{1}{\\sqrt{n}}\\right) \\right)\\\\ &=\\sqrt{n-1} - \\frac{t-1}{2}-O\\left(\\frac{1}{\\sqrt{n}}\\right).$ For the rest of this paper, let $\\lambda _1\\ge \\cdots \\ge \\lambda _n$ be the eigenvalues of the adjacency matrix $A(G)$ of $G$ .", "Given a vector ${\\bf w}\\in \\mathbb {R}^n$ , let ${\\bf w}^{\\prime }$ 
denotes its transpose, and for each $i\\in [n]$ , let ${\\bf w}_i$ denote the $i$ -th coordinate of ${\\bf w}$ .", "Using the Rayleigh quotient of symmetric matrices, we have the following equalities for $\\lambda _1$ and $\\lambda _n$ : $\\lambda _1 &= \\max _{\\begin{array}{c}{\\bf w}\\in \\mathbb {R}^n\\\\{\\bf w}\\ne 0\\end{array}} \\frac{{\\bf w}^{\\prime } A(G) {\\bf w}}{{\\bf w}^{\\prime }{\\bf w}} = \\max _{\\begin{array}{c}{\\bf w}\\in \\mathbb {R}^n\\\\{\\bf w}\\ne 0\\end{array}} \\frac{2\\sum _{ij\\in E(G)} {\\bf w}_i {\\bf w}_j}{{\\bf w}^{\\prime }{\\bf w}}, \\\\\\lambda _n &= \\min _{\\begin{array}{c}{\\bf w}\\in \\mathbb {R}^n\\\\{\\bf w}\\ne 0\\end{array}} \\frac{{\\bf w}^{\\prime } A(G) {\\bf w}}{{\\bf w}^{\\prime }{\\bf w}} = \\min _{\\begin{array}{c}{\\bf w}\\in \\mathbb {R}^n\\\\{\\bf w}\\ne 0\\end{array}} \\frac{2\\sum _{ij\\in E(G)} {\\bf w}_i {\\bf w}_j}{{\\bf w}^{\\prime }{\\bf w}}.$", "Let ${\\bf x}$ and ${\\bf z}$ be the eigenvectors of $A(G)$ corresponding to the eigenvalues $\\lambda _1$ and $\\lambda _n$ respectively.", "For convenience, let ${\\bf x}$ and ${\\bf z}$ be indexed by the vertices of $G$ .", "By the Perron-Frobenius theorem, we may assume that all entries of ${\\bf x}$ are positive.", "We also assume that ${\\bf x}$ and ${\\bf z}$ are normalized so that the maximum absolute values of the entries of ${\\bf x}$ and ${\\bf z}$ are equal to 1, and so there are vertices $u_0$ and $w_0$ with ${\\bf x}_{u_0} = {\\bf z}_{w_0} = 1$ .", "Let $V_+=\\lbrace v\\colon {\\bf z}_v> 0\\rbrace $ , $V_0=\\lbrace v\\colon {\\bf z}_v= 0\\rbrace $ , and $V_-=\\lbrace v\\colon {\\bf z}_v < 0\\rbrace $ .", "Since ${\\bf z}$ is a non-zero vector, at least one of $V_{+}$ and $V_{-}$ is non-empty.", "By considering the eigen-equations of $\\lambda _n \\sum _{v\\in V_{+}} {\\bf z}_v$ or $\\lambda _n \\sum _{v\\in V_{-}} {\\bf z}_v$ , we obtain that both $V_{+}$ and $V_{-}$ are non-empty.", "For any vertex subset $S$ , we define the volume of $S$ , denoted by ${\\rm Vol}(S)$ , as ${\\rm Vol}(S)= \\sum _{v\\in S} |{\\bf z}_v|$ .", "In the following lemmas, we use the bounds of $\\lambda _n$ to deduce some information on $V_{+}$ , $V_{-}$ and $V_0$ .", "Lemma 3 We have ${\\rm Vol}(V(G))=O(\\sqrt{n}).$ For any vertex $v \\in V(G)$ , we have $d(v) \\ge |\\sum _{y\\in N(v)}z_y|=|\\lambda _n| |z_v|.$ Applying Theorem REF , we have $(t+1)n\\ge \\sum _{v\\in V}d(v) \\ge \\sum _{v\\in V(G)} |\\lambda _n| |z_v|=|\\lambda _n| {\\rm Vol}(V).$ By Lemma REF , $|\\lambda _n|\\ge \\sqrt{n-1} - \\frac{t-1}{2}-O\\left(\\frac{1}{\\sqrt{n}}\\right)$ .", "We thus have ${\\rm Vol}(V)=O(\\sqrt{n})$ .", "Lemma 4 There exists some constant $C_1$ such that for all $n$ sufficiently large, we have $d(w_0)\\ge n- C_1\\sqrt{n}$ .", "For any vertex $u\\ne w_0$ , $d(u)\\le 2C_1\\sqrt{n}$ and $|z_u|=O(\\frac{1}{\\sqrt{n}})$ .", "For any $u\\in V_+$ , we have $|\\lambda _n| z_u =-\\lambda _n z_u = -\\sum _{v \\in N(u)} z_v \\le \\sum _{v \\in N(u)\\cap V_-} |z_v|.$ Therefore, for any $u\\in V_{+}$ , $|\\lambda _n|^2 z_u \\le \\sum _{v \\in N(u)\\cap V_-} |\\lambda _n| |z_v|& = \\sum _{v \\in N(u)\\cap V_-} \\lambda _n z_v\\\\&\\le \\sum _{v \\in N(u)\\cap V_-} \\sum _{y\\in N(v)\\cap V_+} z_y \\\\&\\le d(u) z_u + \\sum _{y\\in V_+\\setminus \\lbrace u\\rbrace } z_y |N(y)\\cap N(u)\\cap V_-|\\\\&\\le d(u) z_u + \\sum _{y\\in V_+\\setminus \\lbrace u\\rbrace } z_y (t-1) \\quad \\textrm {since $G$ is $K_{2,t}$-minor-free}\\\\&\\le d(u) z_u + (t-1){\\rm Vol}(V_+).$ Similarly, if $u\\in V_-$ , we have $ |\\lambda _n|^2 
|z_u| \\le d(u) |z_u| + (t-1){\\rm Vol}(V_-).", "$ Setting $u=w_0$ , we get $|\\lambda _n|^2-d(w_0)\\le (t-1){\\rm Vol}(V_+)=O(\\sqrt{n}).$ Hence, $d(w_0)\\ge n - O(\\sqrt{n}) \\ge n -C_1\\sqrt{n}$ for some constant $C_1 > 0$ .", "Now we show that $d(u)\\le 2C_1\\sqrt{n}$ for any vertex $u$ other than $w_0$ .", "Otherwise, if $d(u)\\ge 2C_1\\sqrt{n}$ , then $u$ and $w_0$ have at least $C_1\\sqrt{n}\\ge t$ common neighbors (when $n$ is sufficiently large).", "Thus $G$ contains the subgraph $K_{2,t}$ , contradicting that $G$ is $K_{2,t}$ -minor-free.", "It then follows that for all $u\\ne w_0$ , we have $|z_u|\\le \\frac{(t-1){\\rm Vol}(V)}{|\\lambda _n|^2 -d(u)} = O\\left(\\frac{1}{\\sqrt{n}}\\right).$ Lemma 5 We have $u_0 = w_0$ .", "For any vertex $v\\ne w_0$ , ${\\bf x}_v = O\\left(\\frac{1}{\\sqrt{n}} \\right)$ .", "We will prove (ii) first.", "For any $v \\in V(G)\\backslash \\lbrace w_0\\rbrace $ , we have $\\lambda _1^2 x_v &= \\lambda _1 \\displaystyle \\sum _{s\\in N(v)} {\\bf x}_s \\nonumber \\\\&\\le \\lambda _1 \\left({\\bf x}_{w_0} + \\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } {\\bf x}_s\\right)\\nonumber \\\\&\\le \\lambda _1 + \\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } \\displaystyle \\sum _{t\\in N(s)} {\\bf x}_t \\nonumber \\\\& \\le \\lambda _1 + \\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } \\left({\\bf x}_{w_0} + \\displaystyle \\sum _{t\\in N(s)\\backslash \\lbrace w_0\\rbrace } {\\bf x}_t\\right)\\nonumber \\\\& \\le \\lambda _1 + (2C_1\\sqrt{n}){\\bf x}_{w_0} + \\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } \\displaystyle \\sum _{t\\in N(s)\\backslash \\lbrace w_0\\rbrace } {\\bf x}_t $ Claim 1 For any $v\\in V(G)\\backslash \\lbrace w_0\\rbrace $ , we have $\\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } \\displaystyle \\sum _{t\\in N(s)\\backslash \\lbrace w_0\\rbrace } {\\bf x}_t = O(\\sqrt{n})$ .", "Observe that $\\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } \\displaystyle \\sum _{t\\in N(s)\\backslash \\lbrace w_0\\rbrace } {\\bf x}_t& \\le \\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } \\displaystyle \\sum _{t\\in N(s)\\backslash \\lbrace w_0\\rbrace } 1 \\nonumber \\\\&= \\big |\\lbrace (s,t)\\in V(G)^2: s\\in N(v)\\backslash \\lbrace w_0\\rbrace , t\\in N(s)\\backslash \\lbrace w_0\\rbrace \\rbrace \\big |.", "\\nonumber \\\\&\\le 2|E_{G-w_0}(N(v))| + |E_{G-w_0}(N(v), V(G)\\backslash N(v))| \\nonumber \\\\& \\le 2|E_{G-w_0}(N(v))|+ |E_{G-w_0}(N(v), N_G(w_0)\\backslash N(v))| +\\nonumber \\\\& \\quad \\quad |E_{G-w_0}(N(v), V(G)\\backslash (N_G(w_0)\\cup N(v)))| $ By Theorem REF and Lemma REF , $|E_{G-w_0}(N(v))|\\le (t+1)2C_1\\sqrt{n}.$ Since $G$ is $K_{2,t}$ -minor-free, the bipartite graph induced by $E_{G-w_0}(N(v), N_G(w_0)\\backslash N(v))$ is $K_{1,t}$ -free.", "Hence every vertex in $N(v)$ has at most $t-1$ neighbors in $N_G(w_0)\\backslash N(v)$ .", "It follows that $|E_{G-w_0}(N(v), N_G(w_0)\\backslash N(v))| \\le (t-1)|N(v)| \\le 2(t-1)C_1\\sqrt{n}.$ Similarly, every vertex in $V(G)\\backslash (N_G(w_0)\\cup N(v))$ has at most $t-1$ neighbors in $N(v)$ .", "It follows that $|E_{G-w_0}(N(v), V(G)\\backslash (N_G(w_0)\\cup N(v)))| \\le (t-1) |V(G)\\backslash (N_G(w_0)\\cup N(v))| \\le (t-1)C_1\\sqrt{n}.$ Hence by (REF ), $ \\displaystyle \\sum _{s \\in N(v) \\backslash \\lbrace w_0\\rbrace } \\displaystyle \\sum _{t\\in N(s)\\backslash \\lbrace w_0\\rbrace } {\\bf x}_t \\le (t+1)2C_1\\sqrt{n} + 2(t-1)C_1\\sqrt{n}+(t-1)C_1\\sqrt{n} = O(\\sqrt{n}).$ Now by the claim above 
and (REF ), we have that $\\lambda _1^2 x_v \\le \\lambda _1 + 2C_1\\sqrt{n} + O(\\sqrt{n}) = O(\\sqrt{n}).$ Using the fact that $|\\lambda _1|\\ge \\sqrt{(n-1)} - \\frac{t-1}{2}-O\\left(\\frac{1}{\\sqrt{n}}\\right)$ , we have that $x_v = O\\left(\\frac{1}{\\sqrt{n}}\\right).$ It follows that $w_0 = u_0$ .", "Lemma 6 We have that $d(u_0) = n-1$ .", "Suppose for contradiction that $d(u_0)<n-1$ .", "Let $S=V(G)\\backslash (N(u_0)\\cup \\lbrace u_0\\rbrace )$ .", "Then $S\\ne \\emptyset $ .", "By Lemma REF , $|S|\\le C_1 \\sqrt{n}$ .", "Note that $G[S]$ is also $K_{2,t}$ -minor-free.", "Hence by Theorem REF , $|E(G[S])|\\le \\frac{1}{2}(t+1)|S|$ .", "It follows that there exists a vertex $v \\in S$ such that $d_S(v) \\le t+1$ .", "Moreover, since $G$ is $K_{2,t}$ -minor-free, we have that $d_{N(u_0)}(v) \\le t-1$ .", "Hence $d_G(v) \\le t+1 +(t-1) = 2t$ .", "Let $G^{\\prime }$ be obtained from $G$ by removing all the edges of $G$ incident with $v$ and adding the edge $v u_0$ .", "We claim that $\\lambda _n(G^{\\prime }) < \\lambda _n(G)$ .", "Indeed, consider the vector $\\tilde{{\\bf z}}$ such that $\\tilde{{\\bf z}}_{u} = {\\bf z}_u$ for $u\\ne v$ and $\\tilde{{\\bf z}}_v = -|{\\bf z}_v|$ .", "Then for sufficiently large $n$ , we have $\\tilde{{\\bf z}}^{\\prime } A(G^{\\prime }) \\tilde{{\\bf z}} &\\le {\\bf z}^{\\prime } A(G) {\\bf z}+ 2 \\displaystyle \\sum _{y\\sim v} |{\\bf z}_y {\\bf z}_v| - 2|{\\bf z}_v| z_{u_0}\\\\& \\le {\\bf z}^{\\prime } A(G){\\bf z}+ 2 \\cdot 2t \\cdot O\\left(\\frac{1}{\\sqrt{n}}\\right)\\cdot |{\\bf z}_v| - 2 |{\\bf z}_v|\\\\& < {\\bf z}^{\\prime } A(G) {\\bf z}.$ By the Rayleigh quotient, we have $\\lambda _n(G^{\\prime }) \\le \\frac{\\tilde{{\\bf z}}^{\\prime } A(G^{\\prime }) \\tilde{{\\bf z}}}{\\tilde{{\\bf z}}^{\\prime } \\tilde{{\\bf z}}} < \\frac{ {\\bf z}^{\\prime } A(G) {\\bf z}}{ {\\bf z}^{\\prime } {\\bf z}} = \\lambda _n(G).$ Similarly, we claim that $\\lambda _1(G^{\\prime }) > \\lambda _1(G)$ .", "Indeed, ${\\bf x}^{\\prime } A(G^{\\prime }) {\\bf x}&= {\\bf x}^{\\prime } A(G) {\\bf x}- 2 \\displaystyle \\sum _{y\\sim v} {\\bf x}_y {\\bf x}_v + 2{\\bf x}_v {\\bf x}_{u_0}\\\\& \\ge {\\bf x}^{\\prime } \\lambda _1(G) {\\bf x}- 2 \\cdot 2t \\cdot O\\left(\\frac{1}{\\sqrt{n}}\\right)\\cdot {\\bf x}_v + 2{\\bf x}_v\\\\& > {\\bf x}^{\\prime } A(G){\\bf x}.$ Using the Rayleigh quotient again, $\\lambda _1(G^{\\prime }) \\ge \\frac{ {\\bf x}^{\\prime } A(G^{\\prime }) {\\bf x}}{ {\\bf x}^{\\prime } {\\bf x}} > \\frac{ {\\bf x}^{\\prime } A(G) {\\bf x}}{ {\\bf x}^{\\prime }{\\bf x}} = \\lambda _1(G).$ Therefore, we have $S(G^{\\prime }) =\\lambda _1(G^{\\prime }) -\\lambda _n(G^{\\prime }) > \\lambda _1(G) -\\lambda _n(G) = S(G)$ , giving a contradiction." 
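Lemma 1 and the quadratic $\lambda (\lambda -(t-1))-qt=0$ appearing in the proof of Lemma 2 can be checked directly. The following sketch is ours and purely illustrative: it diagonalizes the adjacency matrix of $K_1\vee qK_t$ and compares the extreme eigenvalues, and the resulting spread, with the closed-form roots.

```python
# Sketch (ours): check that the largest/smallest eigenvalues of K_1 v (q*K_t)
# are the roots of  lambda*(lambda-(t-1)) - q*t = 0,  as in the proof of Lemma 2.
import numpy as np

def adjacency_K1_join_qKt(q, t):
    n = 1 + q * t
    A = np.zeros((n, n))
    A[0, 1:] = 1
    A[1:, 0] = 1                                  # the dominating vertex
    for b in range(q):                            # q disjoint copies of K_t
        blk = slice(1 + b * t, 1 + (b + 1) * t)
        A[blk, blk] = 1 - np.eye(t)
    return A

q, t = 12, 4
ev = np.linalg.eigvalsh(adjacency_K1_join_qKt(q, t))
disc = np.sqrt(q * t + (t - 1) ** 2 / 4)
print(ev[-1], (t - 1) / 2 + disc)     # lambda_1 vs. the larger root
print(ev[0], (t - 1) / 2 - disc)      # lambda_n vs. the smaller root
print(ev[-1] - ev[0], np.sqrt(4 * q * t + t ** 2 - 2 * t + 1))  # S(K_1 v qK_t)
```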
], [ "Proof of Theorem ", "By Lemma REF , a maximum-spread $K_{2,t}$ -minor-free graph $G$ has a vertex $u_0$ with degree $n-1$ .", "Let $\\alpha $ be a normalized eigenvector corresponding to an eigenvalue $\\lambda $ of the adjacency matrix of $G$ so that $\\alpha _{u_0}=1$ .", "Let $H=G - u_0$ and $A_H$ be the adjacency matrix of $H$ .", "Note that $H$ is $K_{1,t}$ -minor-free since $G$ is $K_{2,t}$ -minor-free.", "Let $I$ denote the identity matrix of dimension $n-1$ and let $\\mathbf {1}$ denote the all one vector of dimension $n-1$ .", "Moreover, let $\\bf x$ denote the restriction of $\\alpha $ to the vertices of $H$ .", "The following lemma computes the vector ${\\bf x}$ .", "Lemma 7 We have $ {\\bf x} =\\sum _{k=0}^\\infty \\lambda ^{-(k+1)} A_H^k \\mathbf {1}.$ Since $H$ is $K_{1,t}$ -minor-free, the maximum degree of $H$ is at most $t-1$ .", "For sufficiently large $n$ , both $\\lambda _1(G)$ and $|\\lambda _n(G)|$ are greater than $t-1$ .", "Each vertex $v\\ne u$ is adjacent to $u_0$ and $\\alpha _{u_0}=1$ .", "Hence when restricting the coordinates of $A(G)\\alpha $ to $V(G)\\backslash \\lbrace u_0\\rbrace $ , we have that $A_H\\mathbf {x}+ \\mathbf {1}= \\lambda \\mathbf {x}.$ It then follows that $\\mathbf {x}&= (\\lambda I-A_H)^{-1}\\mathbf {1}\\nonumber \\\\&=\\lambda ^{-1} (I-\\lambda ^{-1}A_H)^{-1}\\mathbf {1}\\nonumber \\\\&= \\lambda ^{-1} \\sum _{k=0}^\\infty (\\lambda ^{-1}A_H)^{k} \\mathbf {1}\\nonumber \\\\&= \\sum _{k=0}^\\infty \\lambda ^{-(k+1)} A_H^k \\mathbf {1}.$ Here we use the assumption that $|\\lambda |> t-1 \\ge \\lambda _1(A_H)$ so that the infinite series converges.", "Lemma 8 Both $\\lambda _1$ and $\\lambda _n$ satisfy the following equation.", "$ \\lambda ^2 = (n-1) +\\sum _{k=1}^\\infty \\lambda ^{-k} \\mathbf {1}^{\\prime } A_H^k \\mathbf {1}.$ The eigen-equation at $u_0$ gives $ \\lambda =\\lambda {\\bf x}_{u_0}= \\sum _{v\\in V(H)} {\\bf x}_v.$ Applying Lemma REF , we get $\\sum _{v\\in V(H)} {\\bf x}_v &= \\mathbf {1}^{\\prime } \\cdot \\mathbf {x}\\\\&=\\mathbf {1}^{\\prime } \\cdot \\sum _{k=0}^\\infty \\lambda ^{-(k+1)} A_H^k \\mathbf {1}\\\\&= \\sum _{k=0}^\\infty \\lambda ^{-(k+1)} \\mathbf {1}^{\\prime } A_H^k \\mathbf {1}.$ Plugging it to Equation (REF ), we have $ \\lambda = (n-1)\\frac{1}{\\lambda } +\\sum _{k=1}^\\infty \\lambda ^{-(k+1)} \\mathbf {1}^{\\prime } A_H^k \\mathbf {1}.$ Multiplying by $\\lambda $ on both sides, we get Equation (REF ).", "For $k=1,2,3\\ldots $ , let $a_k= \\mathbf {1}^{\\prime } A_H^k \\mathbf {1}$ .", "In particular, $a_1= \\mathbf {1}^{\\prime } A_H \\mathbf {1}=\\sum _{v\\in V(H)} d_H(v)=2|E(H)|$ ; $a_2= \\mathbf {1}^{\\prime } A_H^2 \\mathbf {1}=\\sum _{v\\in V(H)} d_H(v)^2$ .", "Lemma 9 We have the following estimation of the spread of $G$ : $S(G)=2\\sqrt{n-1}+\\frac{2c_2}{\\sqrt{n-1}} + \\frac{2c_4}{(n-1)^{3/2}} + \\frac{2c_6}{(n-1)^{5/2}} + O\\left(n^{-7/2}\\right).$ Here $c_2 &= -\\frac{3}{8} \\left(\\frac{a_1}{n-1}\\right)^2 + \\frac{1}{2} \\frac{a_2}{n-1}, \\\\c_4 &=-\\frac{105}{128} \\left(\\frac{a_1}{n-1}\\right)^4 +\\frac{35}{16} \\left(\\frac{a_1}{n-1}\\right)^2\\frac{a_2}{n-1}-\\frac{5}{8}\\left(\\frac{a_2}{n-1}\\right)^2 -\\frac{5}{4}\\frac{a_1}{n-1}\\frac{a_3}{n-1} +\\frac{1}{2} \\frac{a_4}{n-1} \\\\c_6&=-\\frac{3003}{1024} \\left(\\frac{a_1}{n-1}\\right)^6 +\\frac{3003}{256} \\left(\\frac{a_1}{n-1}\\right)^4\\frac{a_2}{n-1}-\\frac{693}{64} \\left(\\frac{a_1}{n-1}\\right)^2\\left(\\frac{a_2}{n-1}\\right)^2+\\frac{21}{16}\\left(\\frac{a_2}{n-1}\\right)^3\\nonumber 
\\\\&\\hspace*{14.22636pt}-\\frac{21}{32}\\left(11\\left(\\frac{a_1}{n-1}\\right)^3-12\\left(\\frac{a_1}{n-1}\\right)\\left(\\frac{a_2}{n-1}\\right)\\right)\\left(\\frac{a_3}{n-1}\\right)- \\frac{7}{8}\\left(\\frac{a_3}{n-1}\\right)^2 \\nonumber \\\\&\\hspace*{14.22636pt}+\\frac{7}{16}\\left(9\\left(\\frac{a_1}{n-1}\\right)^2 -4\\frac{a_2}{n-1}\\right)\\frac{a_4}{n-1}- \\frac{7}{4}\\frac{a_1}{n-1}\\frac{a_5}{n-1} +\\frac{1}{2} \\frac{a_6}{n-1}.", "$ Recall that by (REF ), we have that for $\\lambda \\in \\lbrace \\lambda _1, \\lambda _n\\rbrace $ , $ \\lambda = (n-1)\\frac{1}{\\lambda } +\\sum _{k=1}^\\infty \\lambda ^{-(k+1)} \\mathbf {1}^{\\prime } A_H^k \\mathbf {1}.", "$ Multiplying by $\\lambda $ on both sides, we have that $\\lambda ^2 = (n-1) + \\displaystyle \\sum _{k=1}^{\\infty } \\frac{a_k}{\\lambda ^k}.$ By an argument similar to the main lemma of the appendix in [14], $\\lambda $ has the following series expansion: $\\lambda _1 = \\sqrt{n-1} + c_1 + \\frac{c_2}{\\sqrt{n-1}} + \\frac{c_3}{n-1} + \\frac{c_4}{(n-1)^{\\frac{3}{2}}} + \\frac{c_5}{(n-1)^2} + \\frac{c_6}{(n-1)^\\frac{5}{2}} + O\\left(n^{-7/2}\\right).$ Similarly, $\\lambda _n = -\\sqrt{n-1} + c_1 - \\frac{c_2}{\\sqrt{n-1}} + \\frac{c_3}{n-1} - \\frac{c_4}{(n-1)^{\\frac{3}{2}}} + \\frac{c_5}{(n-1)^2} - \\frac{c_6}{(n-1)^\\frac{5}{2}} + O\\left(n^{-7/2}\\right).$ Using SageMath (computation available at https://github.com/wzy3210/graph_spreads), we get that $c_2, c_4, c_6$ are the values in Equations (REF ), (), () respectively.", "It follows that $S(G) =\\lambda _1 - \\lambda _n = 2\\sqrt{n-1} + \\frac{2c_2}{\\sqrt{n-1}} + \\frac{2c_4}{(n-1)^{\\frac{3}{2}}} + \\frac{2c_6}{(n-1)^\\frac{5}{2}} + O\\left(n^{-7/2}\\right).$ Lemma 10 For sufficiently large $n$ , a maximum-spread $K_{2,t}$ -minor-free $n$ -vertex graph $G$ must be of the form $K_1\\vee \\left(\\ell K_t\\cup (n-1-\\ell t)P_1\\right).$ [Proof of Lemma REF ] By Lemma REF , there exists a vertex $u_0 \\in V(G)$ of degree $n-1$ .", "Let $H = G-u_0$ .", "Since $G$ is $K_{2,t}$ -minor-free, every vertex in $H$ has at most $t-1$ neighbors in $H$ .", "Thus $\\Delta (H) \\le t-1$ , and it follows that $a_2= \\mathbf {1}^{\\prime } A_H^2 \\mathbf {1}=\\sum _{v\\in V(H)} d_H(v)^2 \\le (t-1) \\sum _{v\\in V(H)} d_H(v) = (t-1) a_1.$ Note that $a_1 = 2|E(H)| \\le \\Delta (H)|V(H)| \\le (t-1)(n-1).$ It follows that $a_i\\le (t-1)^i(n-1)$ for all $i\\ge 2$ .", "By Lemma REF , we have the following estimate of the spread of $G$ : $S(G)=2\\sqrt{n-1}+\\frac{2c_2}{\\sqrt{n-1}} + \\frac{2c_4}{(n-1)^{3/2}} + \\frac{2c_6}{(n-1)^{5/2}} + O\\left(n^{-7/2}\\right),$ where $c_2, c_4, c_6$ are computed in Lemma REF , and all $c_i$ 's are bounded by constants depending on $t$ .", "Note $c_2 &= -\\frac{3}{8} \\left(\\frac{a_1}{n-1}\\right)^2 + \\frac{1}{2} \\frac{a_2}{n-1}\\\\&\\le -\\frac{3}{8} \\left(\\frac{a_1}{n-1}\\right)^2 + \\frac{1}{2} \\frac{(t-1)a_1}{n-1}\\\\&= \\frac{(t-1)^2}{6} -\\frac{3}{8} \\left(\\frac{a_1}{n-1} -\\frac{2}{3}(t-1)\\right)^2 \\\\&\\le \\frac{(t-1)^2}{6},$ where in the last inequality, the equality is only achieved when $a_1 = \\frac{2}{3}(t-1)(n-1)$ .", "For $G_0=K_1\\vee \\left( \\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor K_t \\cup \\left(n-1- t\\left\\lfloor \\frac{2n+\\xi _t}{3t} \\right\\rfloor \\right) P_1\\right)$ , we have $\\frac{a_1}{n-1}= \\frac{2}{3}(t-1)+O\\left(\\frac{1}{n}\\right)$ .", "Thus $S(G_0) = 2\\sqrt{n-1} + \\frac{(t-1)^2}{3\\sqrt{n-1}} + O\\left(\\frac{1}{n^{3/2}}\\right).$ Claim 2 There exists a constant $C>0$ such that 
the value of $a_1$ that maximizes $S(G)$ lies in the interval $(\\frac{2}{3}(t-1)(n-1)- Cn^{1/2}, \\frac{2}{3}(t-1)(n-1)+ Cn^{1/2})$ .", "Let $C$ be a sufficiently large constant chosen later.", "Suppose for contradiction that $a_1$ is not contained in the interval above.", "Then, we must have that $c_2 \\le \\frac{(t-1)^2}{6} - \\frac{3C^2n}{8(n-1)^2}.$ This implies that $S(G) \\le 2\\sqrt{n-1} + 2\\cdot \\frac{\\frac{(t-1)^2}{6} - \\frac{3C^2n}{8(n-1)^2}}{\\sqrt{n-1}} + O\\left(\\frac{1}{(n-1)^{3/2}}\\right)< S(G_0),$ when $C$ is chosen to be large enough such that $-\\frac{2\\cdot 3C^2}{8(n-1)^2}\\frac{n}{\\sqrt{n-1}} + O\\left(\\frac{1}{(n-1)^{3/2}}\\right)< 0.$ This gives us a contradiction since $G$ is assumed to be an extremal graph that maximizes the spread over all $K_{2,t}$ -minor-free graphs.", "From now on, we assume that $a_1 \\in (\\frac{2}{3}(t-1)(n-1)- Cn^{1/2}, \\frac{2}{3}(t-1)(n-1)+ Cn^{1/2})$ for some constant $C>0$ .", "Claim 3 There is a constant $C_2$ such that the value of $a_2$ lies in the interval $[(t-1)a_1-C_2, (t-1)a_1]$ .", "Let $C_2$ be a sufficiently large constant chosen later.", "Suppose for contradiction that $a_2<(t-1)a_1-C_2$ .", "We then have that $S(G)\\le 2\\sqrt{n-1} + \\frac{(t-1)^2}{3\\sqrt{n-1}}-\\frac{C_2}{(n-1)^{3/2}}+ O\\left(\\frac{1}{n^{3/2}}\\right)< S(G_0),$ if we choose $C_2$ large enough, giving a contradiction.", "Claim 4 For $i\\ge 2$ , we have $a_i\\in [(t-1)^{i-1}(a_1-(i-1)C_2),a_1(t-1)^{i-1}] $ .", "We will show this claim by inducting on $i\\ge 2$ .", "Note that by Claim REF , we have that $a_2 \\ge (t-1)a_1 -C_2$ .", "Moreover, $a_2\\le (t-1)a_1$ since $\\Delta (H)\\le t-1$ .", "Hence the base case holds.", "Moreover, we also obtain from above that $C_2 \\ge (t-1)a_1 - a_2$ .", "Let $H^{\\prime }$ be the set of vertices in $H$ such that its degree is in the interval $[1,t-2]$ .", "We have $C_2\\ge (t-1)a_1-a_2=\\sum _{v\\in H^{\\prime }}(t-1-d(v))d(v)\\ge (t-2)|H^{\\prime }|.$ This implies $|H^{\\prime }|\\le \\frac{C_2}{t-2}.$ For a vertex $v \\in H$ and non-negative integer $k$ , let $w_k(v)$ denote the number of walks of length $k$ in $H$ starting at $v$ .", "Observe that $(t-1)a_{i-1}-a_i &= (t-1)\\displaystyle \\sum _{v\\in V(H)} w_{i-1}(v) -\\displaystyle \\sum _{v\\in V(H)} w_i(v)\\\\&\\le \\displaystyle \\sum _{v\\in H^{\\prime }} \\left((t-1)-d_H(v)\\right)(t-1)^{i-1}\\\\&\\le |H^{\\prime }|(t-2)(t-1)^{i-1} \\\\&\\le C_2(t-1)^{i-1}$ Thus, $a_i&\\ge (t-1)a_{i-1} -C_2(t-1)^{i-1}\\\\&\\ge (t-1)((t-1)a_{i-2} -C_2(t-1)^{i-2}) -C_2(t-1)^{i-1} \\hspace*{28.45274pt}\\mbox{ by induction}\\\\&=(t-1)^2a_{i-2}-2C_2(t-1)^{i-1}\\\\&\\ge (t-1)^{i-1}a_{1} -(i-1)C_2(t-1)^{i-1},$ where the last inequality is obtained by repeatedly applying induction.", "Claim 5 $a_2 = (t-1) a_1$ .", "Assume that $a_1 = \\frac{2}{3}(t-1)(n-1) + A$ , and $a_2 = (t-1)a_1-B$ , where $A \\in [-Cn^{1/2}, Cn^{1/2}]$ and $0 \\le B \\le C_2$ .", "For $i\\ge 2$ , let $c_i(G), c_i(G_0)$ denote the $c_i$ values of $G$ and $G_0$ respectively.", "Observe that $c_2(G) &= -\\frac{3}{8} \\left(\\frac{a_1}{n-1}\\right)^2 + \\frac{1}{2} \\frac{a_2}{n-1}\\\\&= \\frac{(t-1)^2}{6} - \\frac{3A^2}{8(n-1)^2} - \\frac{B}{2(n-1)}.$ It follows that $c_2(G)-c_2(G_0) = -\\frac{3A^2}{8(n-1)^2} -\\frac{B}{2(n-1)} +O(n^{-2}).$ Moreover, by Claim REF , for all $i\\ge 4$ , we have that $c_i(G) - c_i(G_0) = O(n^{-1/2}).$ Thus $S(G) -S(G_0) &= 2 \\cdot \\frac{c_2(G)-c_2(G_0)}{\\sqrt{n-1}}+ 2 \\cdot \\frac{c_4(G)-c_4(G_0)}{(n-1)^{3/2}} +O((n-1)^{-5/2})\\\\&\\le 2 \\cdot \\frac{ 
O\\left(n^{-2}\\right)- \\frac{3A^2}{8(n-1)^2} - \\frac{B}{2(n-1)}}{\\sqrt{n-1}}+ 2 \\cdot \\frac{O(n^{-1/2})}{(n-1)^{3/2}} + O((n-1)^{-5/2}).$ Since $S(G)\\ge S(G_0)$ , this implies that $A = O(n^{1/4})$ , $B=0$ and thus $a_2 = (t-1)a_1$ .", "Claim 6 $H$ is the union of vertex-disjoint copies of $K_t$ and isolated vertices.", "Recall that $a_1= \\mathbf {1}^{\\prime } A_H \\mathbf {1}=\\sum _{v\\in V(H)} d_H(v)=2|E(H)|$ , and $a_2= \\mathbf {1}^{\\prime } A_H^2 \\mathbf {1}=\\sum _{v\\in V(H)} d_H(v)^2$ .", "By Claim REF , we have that $\\sum _{v\\in V(H)} d_H(v)^2 = (t-1)\\sum _{v\\in V(H)} d_H(v).$ Since $d_H(v) \\le t-1$ for every $v\\in V(H)$ , it follows that $H$ is the disjoint union of $(t-1)$ -regular graphs and isolated vertices.", "Let $K$ be an arbitrary non-trivial component of $H$ .", "We will show that $K$ is a clique on $t$ vertices.", "We first claim that for any $u,v\\in V(K)$ , $N(u)\\cap N(v) \\ne \\emptyset $ .", "Otherwise, pick a shortest path $P$ between $u$ and $v$ in $K$ .", "Observe that $|V(P) \\cap N(u)| = |V(P)\\cap N(v)| = 1$ .", "Contract $uPv$ into one vertex $x$ (call the new graph $G^{\\prime }$ ).", "Note that $x$ and $N(u)\\cup N(v)$ form a $K_{1,t}$ in $G^{\\prime }$ .", "Together with $u_0$ , which is adjacent to every vertex in $K$ , we have a $K_{2,t}$ minor in $G$ , giving a contradiction.", "Next, we claim that for any $u, v\\in V(K)$ with $uv \\notin E(K)$ , $|N(u)\\cap N(v)| \\ge t-2$ .", "Otherwise, $|N(u)\\backslash N(v)| \\ge 2$ and $|N(v)\\backslash N(u)|\\ge 2$ .", "Similarly to before, picking an arbitrary vertex $w \\in N(u) \\cap N(v)$ and contracting the path $uwv$ , we obtain a $K_{1,t}$ -minor in $K$ , and thus a $K_{2,t}$ -minor in $G$ .", "Similarly, for any $u,v\\in V(K)$ with $uv\\in E(K)$ , we have $|N(u)\\cap N(v)| \\ge t-3$ .", "Moreover, note that for any $u,v \\in V(K)$ , $|N(u) \\cap N(v)| \\le t-2$ , since otherwise $\\lbrace u,v\\rbrace $ and $(N(u)\\cap N(v)) \\cup \\lbrace u_0\\rbrace $ form a $K_{2,t}$ in $G$ , giving a contradiction.", "Hence, we have that for any $u,v\\in V(K)$ with $uv\\notin E(K)$ , $|N(u)\\cap N(v)| = t-2$ .", "Now if $K$ is not a clique on $t$ vertices, then let $u,v\\in V(K)$ be two vertices in $K$ such that $uv\\notin E(K)$ .", "By the above claim, there exists $u^{\\prime }, v^{\\prime } \\in V(K)$ such that $u^{\\prime }\\in N(u)\\backslash N(v)$ and $v^{\\prime } \\in N(v) \\backslash N(u)$ .", "We claim that $u^{\\prime }v^{\\prime } \\notin E(K)$ .", "Indeed, if $u^{\\prime }v^{\\prime }\\in E(K)$ , contract $v^{\\prime }u^{\\prime }$ into $w^{\\prime }$ .", "Then $\\lbrace u,v\\rbrace \\cup \\left(\\lbrace w^{\\prime }, u_0\\rbrace \\cup (N(u)\\cap N(v)) \\right)$ is a $K_{2,t}$ minor in $G$ , giving a contradiction.", "Now note that since $u^{\\prime }v \\notin E(K)$ , we have $|N(u^{\\prime })\\cap N(v)| = t-2$ .", "It follows that $N(u^{\\prime })\\cap N(v) = N(u)\\cap N(v)$ .", "Similarly, $N(v^{\\prime }) \\cap N(u) = N(u)\\cap N(v)$ .", "We claim that each vertex in $N(u)\\cap N(v)$ has exactly one non-neighbor in $N(u)\\cap N(v)$ .", "Indeed, let $w$ be an arbitrary vertex in $N(u)\\cap N(v)$ .", "Note that $w$ cannot be adjacent to all other vertices in $N(u)\\cap N(v)$ ; otherwise, since $w$ is adjacent to $u^{\\prime }$ , $u$ , and $v^{\\prime }$ , we then have $d(w)\\ge (t-3)+3=t$ , contradicting that $K$ is $(t-1)$ -regular.", "On the other hand, suppose $w$ has at least two non-neighbors in $N(u)\\cap N(v)$ .", "Then it follows that $|N(w)\\cap (N(u)\\cap N(v))| \\le t-2 -3 = t-5$ .", "Now observe that 
$N(u)\\cap N(w) = (N(w)\\cap N(u)\\cap N(v)) \\cup \\lbrace u^{\\prime }\\rbrace $ .", "It follows that $|N(u)\\cap N(w)| \\le t-4$ , contradicting our claim before that any two adjacent vertices must have at least $t-3$ common neighbors.", "Hence $w$ has exactly one non-neighbor in $N(u)\\cap N(v)$ , say $w^{\\prime }$ .", "But now observe that $N(w)\\cap N(w^{\\prime }) \\supseteq (N(u)\\cap N(v)\\backslash \\lbrace w,w^{\\prime }\\rbrace ) \\cup \\lbrace u,u^{\\prime },v, v^{\\prime }\\rbrace ,$ which implies that $|N(w)\\cap N(w^{\\prime })| \\ge t-4+4 = t$ , contradicting that $K$ is $(t-1)$ -regular.", "Hence by contradiction, $K$ is a clique on $t$ vertices.", "This completes the proof of Lemma REF .", "[Proof of Theorem REF ] For sufficiently large $n$ , let $G$ be an extremal graph attaining the maximum spread among all $n$ -vertex $K_{2,t}$ -minor-free graphs.", "By Lemma REF , we only need to consider graphs in the form of $G_\\ell = K_1\\vee \\left(\\ell K_t \\cup (n-1-\\ell t)P_1 \\right)$ .", "It also follows from Lemma REF that for $i\\ge 1$ , $a_i = \\ell t(t-1)^i.$ For each $i\\ge 2$ , let $c_i(\\ell )$ denote the $c_i$ value of $G_{\\ell }$ .", "Plugging $a_i$ 's into Equations (REF ), (), and (), we get $c_2(\\ell )&= -\\frac{3}{8}\\frac{t^2(t-1)^2}{(n-1)^2}\\ell ^2 + \\frac{1}{2} \\frac{t(t-1)^2}{(n-1)}\\ell ,\\\\c_4(\\ell ) &= -\\frac{105}{128}\\frac{t^4(t-1)^4}{(n-1)^4}\\ell ^4 +\\frac{35}{16} \\frac{t^3(t-1)^4}{(n-1)^3}\\ell ^3 -\\frac{15}{8}\\frac{t^2(t-1)^4}{(n-1)^2}\\ell ^2+\\frac{1}{2}\\frac{t(t-1)^4}{(n-1)}\\ell ,\\\\c_6(\\ell ) &= -\\frac{3003}{1024}\\frac{t^6(t-1)^6}{(n-1)^6}\\ell ^6+\\frac{3003}{256}\\frac{t^5(t-1)^6}{(n-1)^5}\\ell ^5-\\frac{1155}{64}\\frac{t^4(t-1)^6}{(n-1)^4}\\ell ^4 \\nonumber \\\\&\\hspace*{19.91692pt}+\\frac{105}{8} \\frac{t^3(t-1)^6}{(n-1)^3}\\ell ^3 -\\frac{35}{8}\\frac{t^2(t-1)^6}{(n-1)^2}\\ell ^2+\\frac{1}{2}\\frac{t(t-1)^6}{(n-1)}\\ell .$ Let $\\ell _1= \\frac{2(n-1)}{3t}$ , which is the (possibly real) argmax value of $c_2(\\ell )$ .", "Let $\\ell _0=\\lfloor \\frac{2n+\\xi _t}{3t}\\rfloor $ be the target maximum integer point of $S(G_\\ell )$ .", "By Claim REF , we assume that $\\ell \\in (\\ell _1 - C\\sqrt{n-1}, \\ell _1 + C\\sqrt{n-1})$ .", "Let us compute $S(G_{\\ell +1})-S(G_{\\ell })$ .", "We have $c_2(\\ell +1)-c_2(\\ell ) &= -\\frac{3}{8}\\frac{t^2(t-1)^2}{(n-1)^2}(2\\ell +1) + \\frac{1}{2} \\frac{t(t-1)^2}{(n-1)},\\\\c_4(\\ell +1) -c_4(\\ell )&= -\\frac{105}{128}\\frac{t^4(t-1)^4}{(n-1)^4}(4\\ell ^3+6\\ell ^2+4\\ell +1) +\\frac{35}{16} \\frac{t^3(t-1)^4}{(n-1)^3}(3\\ell ^2+3\\ell +1) \\nonumber \\\\&\\hspace*{11.38109pt}-\\frac{15}{8}\\frac{t^2(t-1)^4}{(n-1)^2}(2\\ell +1)+\\frac{1}{2}\\frac{t(t-1)^4}{(n-1)},\\\\c_6(\\ell +1)-c_6(\\ell )&= O\\left(\\frac{1}{n-1}\\right).$ Plugging $\\ell =\\ell _1\\cdot \\left(1+ O\\left(\\frac{1}{\\sqrt{n-1}}\\right)\\right)$ into $c_4(\\ell +1)-c_4(\\ell )$ , we have $ c_4(\\ell +1)-c_4(\\ell ) = -\\frac{1}{18}\\frac{t(t-1)^4}{n-1} + O\\left(\\frac{1}{(n-1)^{3/2}}\\right).$ Therefore, we have $S(G_{\\ell +1})-S(G_\\ell ) &= \\frac{2(c_2(\\ell +1)-c_2(\\ell ))}{\\sqrt{n-1}}+ \\frac{2(c_4(\\ell +1)-c_4(\\ell ))}{(n-1)^{3/2}}+ \\frac{2(c_6(\\ell +1)-c_6(\\ell ))}{(n-1)^{5/2}}+ O\\left(\\frac{1}{(n-1)^{3}}\\right) \\nonumber \\\\&= \\frac{2t(t-1)^2}{(n-1)^{5/2}}\\left(-\\frac{3}{8}t(2\\ell +1) +\\frac{1}{2}(n-1)- \\frac{(t-1)^2}{18}\\right)+ O\\left(\\frac{1}{(n-1)^3}\\right)\\nonumber \\\\&=-\\frac{3t^2(t-1)^2}{2(n-1)^{5/2}}\\left(\\ell +\\frac{1}{2} -\\frac{2}{3t}(n-1)+ 
\\frac{2(t-1)^2}{27t}\\right) + O\\left(\\frac{1}{(n-1)^{3}}\\right).$ Case a: $t\\ge 3$ and $t$ is odd.", "Recall that in this case we let $\\ell _0=\\left\\lfloor \\frac{2n+\\xi _t}{3t}\\right\\rfloor $ where $\\xi _t=\\left\\lfloor \\frac{3t}{2}-2 - \\frac{2(t-1)^2}{9}\\right\\rfloor .", "$ For $\\ell \\ge \\ell _0$ , we have $\\ell +\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t}&\\ge \\ell _0+\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t} \\\\&\\ge \\frac{2n+\\xi _t}{3t} -\\left(1 -\\frac{1}{3t}\\right) +\\frac{1}{2} -\\frac{2(n-1)}{3t}+ \\frac{2(t-1)^2}{27t} \\\\&\\ge \\frac{1}{3t} \\left(\\xi _t+1 - \\left(\\frac{3t}{2}-2 - \\frac{2(t-1)^2}{9}\\right)\\right)\\\\&> 0.$ Plugging it into Equation (REF ), we have that for $\\ell \\ge \\ell _0$ , $S(G_{\\ell +1})-S(G_\\ell ) \\le -\\frac{t(t-1)^2\\left(\\xi _t+1- \\left(\\frac{3t}{2}-2 - \\frac{2(t-1)^2}{9}\\right) \\right)}{2(n-1)^{5/2}} + O\\left(\\frac{1}{(n-1)^3}\\right)<0.$ When $\\ell \\le \\ell _0-1$ , we have $\\ell +\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t}&\\le \\ell _0 -1 +\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t} \\\\&\\le \\frac{2n+\\xi _t}{3t} -1 +\\frac{1}{2} -\\frac{2(n-1)}{3t}+ \\frac{2(t-1)^2}{27t} \\\\&\\le \\frac{1}{3t} \\left(\\xi _t- \\left(\\frac{3t}{2}-2 - \\frac{2(t-1)^2}{9}\\right)\\right)\\\\&< 0.$ At the last step, we observe that $\\frac{3t}{2}-2 - \\frac{2(t-1)^2}{9}$ is not an integer for odd $t$ .", "Thus, the inequality is strict.", "Therefore, for $\\ell \\le \\ell _0-1$ , $S(G_{\\ell +1})-S(G_\\ell ) \\ge \\frac{t(t-1)^2\\left(-\\xi _t+ \\left(\\frac{3t}{2}-2 - \\frac{2(t-1)^2}{9}\\right) \\right)}{2(n-1)^{5/2}} + O\\left(\\frac{1}{(n-1)^3}\\right)>0.$ Therefore, $S(G_\\ell )$ reaches the unique maximum at $\\ell _0$ for sufficiently large $n$ .", "This completes the case for odd $t$ .", "Case b: $t\\ge 2$ even.", "Let $\\ell _0=\\left\\lfloor \\frac{n+\\eta _t}{3t/2}\\right\\rfloor $ where $\\eta _t=\\left\\lfloor \\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}\\right\\rfloor .", "$ For $\\ell \\ge \\ell _0$ , we have $\\ell +\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t}&\\ge \\ell _0+\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t} \\\\&\\ge \\frac{n+\\eta _t}{3t/2} -\\left(1 -\\frac{1}{3t/2}\\right) +\\frac{1}{2} -\\frac{2(n-1)}{3t}+ \\frac{2(t-1)^2}{27t} \\\\&\\ge \\frac{2}{3t} \\left(\\eta _t+1 - \\left(\\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}\\right)\\right)\\\\&> 0.$ Plugging it into Equation (REF ), we have that for $\\ell \\ge \\ell _0$ , $S(G_{\\ell +1})-S(G_\\ell ) \\le -\\frac{t(t-1)^2\\left(\\eta _t+1- \\left(\\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}\\right) \\right)}{2(n-1)^{5/2}} + O\\left(\\frac{1}{(n-1)^3}\\right)<0.$ When $\\ell \\le \\ell _0-1$ , we have $\\ell +\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t}&\\le \\ell _0 -1 +\\frac{1}{2} -\\frac{2}{3t}(n-1)+ \\frac{2(t-1)^2}{27t} \\nonumber \\\\&\\le \\frac{\\eta _t}{3t/2} -1 +\\frac{1}{2} +\\frac{2}{3t}+ \\frac{2(t-1)^2}{27t} \\nonumber \\\\&\\le \\frac{2}{3t} \\left(\\eta _t- \\left(\\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}\\right)\\right) \\nonumber \\\\&\\le 0 .$ If $\\frac{2}{3t}(n+\\eta _t)$ is not an integer, we have $S(G_{\\ell +1})-S(G_\\ell ) > \\frac{t(t-1)^2\\left(-\\eta _t+ \\left(\\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}\\right) \\right)}{2(n-1)^{5/2}} + O\\left(\\frac{1}{(n-1)^3}\\right)\\ge 0.$ Therefore $\\ell _0$ is the unique maximum point of $S(G_\\ell )$ .", "Now we assume $\\frac{2}{3t}(n+\\eta _t)$ is an integer.", "Observe that $\\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}$ 
is an integer if and only if $t$ is divisible by 4 and $t-1$ is divisible by 3.", "Therefore, the inequality (REF ) is strict except for the case when $t\\equiv 4 \\mod {12}$ and $\\ell =\\ell _0-1$ .", "It implies that for $t\\not\\equiv 4 \\mod {12}$ , $S(G_{\\ell +1})-S(G_\\ell ) \\ge \\frac{t(t-1)^2\\left(-\\eta _t+ \\left(\\frac{3t}{4}-1 - \\frac{(t-1)^2}{9}\\right) \\right)}{2(n-1)^{5/2}} + O\\left(\\frac{1}{(n-1)^3}\\right)>0.$ Thus for $t\\not\\equiv 4 \\mod {12}$ , $S(G_\\ell )$ reaches the unique maximum at $\\ell _0=\\left\\lfloor \\frac{2n+\\xi _t}{3t}\\right\\rfloor $ for sufficiently large $n$ .", "Now we consider the remaining case that $t\\equiv 4 \\mod {12}$ and $\\frac{2}{3t}(n+\\eta _t)$ is an integer.", "In this case, $S(G_{\\ell })$ can only achieve the maximum at $\\ell _0$ , or $\\ell _0-1$ , or both.", "In fact, we claim both of them are maximum points.", "Let $k=(t-4)/12$ and $\\ell _0=\\frac{2}{3t}(n+\\eta _t)$ .", "Note that by our assumption $k$ and $\\ell _0$ are both integers.", "Rearranging the terms, we have $t &=12k+4, \\\\\\eta _t &=1+k-16k^2, \\\\n &=6(3k+1)\\ell _0 + 16k^2 -k -1.", "$ Now we compute the spread of $G_{\\ell }$ where $\\ell =\\ell _0$ or $\\ell _0-1$ .", "By Lemma REF , $\\lambda _1$ and $\\lambda _n$ of $G_\\ell $ satisfy the equation $\\lambda ^2&=(n-1) +\\sum _{k=1}^\\infty \\lambda ^{-k}\\mathbf {1}^{\\prime } A_H^k\\mathbf {1}\\\\&=(n-1) +\\sum _{k=1}^\\infty \\lambda ^{-k}\\ell t (t-1)^k\\\\&= (n-1) + \\ell t \\frac{(t-1)/\\lambda }{1-(t-1)/\\lambda }\\\\&= (n-1) + \\frac{\\ell t (t-1)}{\\lambda - (t-1)}.$ Simplifying it, we get $\\lambda ^3-(t-1)\\lambda ^2-(n-1)\\lambda +(t-1)(n-1-\\ell t)=0.$ Let us define the spread of a polynomial $\\phi $ , denoted by $S(\\phi )$ , as the difference between its largest and smallest roots.", "Thus, we have $S(G_\\ell )=S(\\phi _\\ell ), $ where $\\phi _\\ell $ is defined by the left hand side of Equation (REF ).", "Let $\\lambda =x+\\frac{t-1}{3}$ .", "The cubic equation (REF ) can be written as $ x^3 -\\frac{1}{3}(3n+t^2-2t-2)x + \\frac{1}{27}(-27 \\ell t^{2} - 2 t^{3} + 27 \\ell t + 18 n t + 6 t^{2} - 18 n - 24 t + 20)=0.$ Now plugging $\\ell =\\ell _0$ , $t$ as in Equation REF , and $n$ as in Equation , into Equation (REF ), we get $ x^3 - (6(3k+1)\\ell _0 +64k^2 +23k+1) x -(72k^2+42k+6)=0.$ Similarly, plugging $\\ell =\\ell _0-1$ , $t$ as in Equation REF , and $n$ as in Equation , into Equation (REF ), we get $ x^3 - (6(3k+1)\\ell _0 +64k^2 +23k+1) x+(72k^2+42k+6)=0.$ Let $\\phi _1$ (resp. $\\phi _2$ ) denote the cubic polynomial on the left-hand side of Equation (REF ) (resp. Equation (REF )).", "Observe that $\\phi _2(x)=-\\phi _1(-x)$ .", "If $\\phi _1$ has three real roots $x_1\\le x_2\\le x_3$ , then $\\phi _2$ has three real roots $-x_3\\le -x_2\\le -x_1$ .", "Thus $S(\\phi _1)=x_3-x_1=(-x_1)-(-x_3)=S(\\phi _2).$ It then follows that $S(G_{\\ell _0})= S(\\phi _{\\ell _0}) =S(\\phi _1) =S(\\phi _2)=S(\\phi _{\\ell _0-1})=S(G_{\\ell _0-1}).$ Therefore both $G_{\\ell _0}$ and $G_{\\ell _0-1}$ are extremal graphs for this special case.", "This completes the proof of Theorem REF ." ] ]
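The equal-spread claim in the exceptional case $t\equiv 4 \mod {12}$ can also be confirmed on a concrete instance. The sketch below is ours; it only re-checks that $S(G_{\ell _0})=S(G_{\ell _0-1})$ for one admissible choice of $n$ (the extremality statement itself requires $n$ sufficiently large and is not tested here).

```python
# Sketch (ours): for t = 16 = 12k+4 with k = 1, pick n so that (2/(3t))(n+eta_t)
# is an integer and verify numerically that G_{l0} and G_{l0-1} have equal spread.
import numpy as np

def spread(n, t, ell):
    """Spread of G_ell = K_1 v (ell*K_t U (n-1-ell*t)*P_1)."""
    A = np.zeros((n, n))
    A[0, 1:] = 1
    A[1:, 0] = 1
    for b in range(ell):
        blk = slice(1 + b * t, 1 + (b + 1) * t)
        A[blk, blk] = 1 - np.eye(t)
    ev = np.linalg.eigvalsh(A)
    return ev[-1] - ev[0]

k = 1
t = 12 * k + 4                                   # t = 16
eta = 1 + k - 16 * k ** 2                        # eta_t = -14
ell0 = 20                                        # chosen value of (2/(3t))(n+eta_t)
n = 6 * (3 * k + 1) * ell0 + 16 * k ** 2 - k - 1 # n solved so the above holds
print(spread(n, t, ell0), spread(n, t, ell0 - 1))   # equal up to machine precision
```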
2212.05540
[ [ "von Neumann algebra description of inflationary cosmology" ], [ "Abstract We study the von Neumann algebra description of the inflationary quasi-de Sitter (dS) space.", "Unlike perfect dS space, quasi-dS space allows the nonzero energy flux across the horizon, which can be identified with the expectation value of the static time translation generator.", "Moreover, as a dS isometry associated with the static time translation is spontaneously broken, the fluctuation in time is accumulated, which induces the fluctuation in the energy flux.", "When the inflationary period is given by $(\\epsilon_H H)^{-1}$ where $\\epsilon_H$ is the slow-roll parameter measuring the increasing rate of the Hubble radius, both the energy flux and its fluctuation diverge in the $G \\to 0$ limit.", "Taking the fluctuation in the energy flux and that in the observer's energy into account, we argue that the inflationary quasi-dS space is described by Type II$_\\infty$ algebra.", "As the entropy is not bounded from above, this is different from Type II$_1$ description of perfect dS space in which the entropy is maximized by the maximal entanglement.", "We also show that our result is consistent with the observation that the von Neumann entropy for the density matrix reflecting the fluctuations above is interpreted as the generalized entropy." ], [ "Introduction", "Whereas the spacetime geometry close to de Sitter (dS) well describes the primordial inflation and the current accelerating expansion of the universe, understanding its quantum nature is challenging.", "Studies on the quantum field theory in the dS background tell us that a static observer in dS space is surrounded by the (cosmological) horizon of radius $r_H=H^{-1}$ having thermodynamic properties characterized by the Gibbons-Hawking temperature $\\beta ^{-1}=H/(2\\pi )$ and the entropy $S_{\\rm GH}=A/(4G)=\\pi M_{\\rm Pl}^2/H^2$ , where $A$ is the horizon area given by $4\\pi r_H^2$ and $M_{\\rm Pl}^2$ is defined as $G^{-1}$ [1].", "This is quite similar to the black hole as seen from far outside the horizon but the geometric structure of dS space different from that of the black hole gives rise to several ambiguities.", "That is, unlike the black hole horizon, the boundary of the physically well defined compact object, the cosmological horizon in dS space is observer dependent.", "Moreover, it is not clear that the dS entropy given by the finite number counts the number of degrees of freedom of the region beyond the horizon which is not compact.", "Such ambiguities are expected to be fixed by the more complete description of the thermodynamic behavior in quantum gravity.", "As an attempt to find it, it was recently proposed that the entanglement needs to be described in terms of the algebra of local observables, rather than the tensor product of two Hilbert spaces defined on two separated regions (for reviews, see, e.g., [2], [3]).", "Here the algebra of local observables ${\\cal A}$ is assumed to consist of bounded operators closed under Hermitian conjugation and the weak limit, which is called the von Neumann algebra.", "When the multiple of the identity is the only allowed center, the von Neumann algebra is called a factor.", "Then the Hilbert space is constructed by acting the local observables on the cyclic and separating state $|\\Psi \\rangle $ .", "Here the state $|\\Psi \\rangle $ is said to be cyclic for ${\\cal A}$ if the states $a|\\Psi \\rangle $ for $a \\in {\\cal A}$ are dense in the Hilbert space, i.e., only the zero vector is 
orthogonal to all states in the form of $a|\\Psi \\rangle $ .", "Meanwhile, $|\\Psi \\rangle $ is called separating if $a=0$ is the only local observables in ${\\cal A}$ satisfying $a|\\Psi \\rangle =0$ .", "When we merely consider the quantized fluctuation around the fixed background in the $G\\rightarrow 0$ (or equivalently, $M_{\\rm Pl}\\rightarrow \\infty $ ) limit, the algebra typically belongs to Type III, in which neither the pure state nor the entropy is well defined [4], [5], [6], [7].", "Formally, the state is called pure if the function $F_\\Psi (a)=\\langle \\Psi | a|\\Psi \\rangle $ cannot be written in the form of $p_1 F_{\\Phi _1}(a)+p_2 F_{\\Phi _2}(a)$ with $p_{1,2}>0$ , which means that decoherence does not take place thus interference effects appear.", "When the state is not pure, it is called mixed.", "Indeed, $F_\\Psi (a)$ is used to define the density matrix through the trace, the linear functional of operators satisfying the commutative property and the positivity.", "If the finite trace is not defined, the divergent entropy is not renormalized.", "For more complete discussion, see reviews [2], [3].", "By taking dynamical gravity into account through the ${\\cal O}(G)$ corrections and treating diffeomorphism invariance as gauge redundancy or constraint, the algebra becomes Type II factor, in which the entropy as a finite, renormalized quantity can be defined.", "For the black hole, $H_{L/R}-M$ , the deviation of the ADM Hamiltonian of the left/right patch around the ADM mass $M=r_s/(2G)$ ($r_s$ is the horizon radius) can take any real value in the $G\\rightarrow 0$ limit as $M$ becomes divergent.", "Then the black hole thermodynamics is described by Type II$_\\infty $ algebra [8], [9], in which only a subset of observables has the finite trace hence the entropy is not bounded from above.", "In contrast, in dS space, there is no boundary at infinity.", "Instead, the static patch is bounded by the horizon which is in thermal equilibrium with the Gibbons-Hawking radiation, resulting in the vanishing energy flux across the horizon.", "As we will see, this implies the absence of the operator analogous to $H_{L/R}-M$ in the black hole, hence the algebraic description of dS space is different from that of the black hole.", "In [10], it was found that when the static observer has the positive energy as a random variable, a system of dS space and the observer is well described by Type II$_1$ algebra, in which the trace of any bounded operator is finite.", "As a result, the entropy has an upper bound, which is saturated for the maximally entangled state.", "Meanwhile, in the inflationary era, $H$ is no longer a constant but a slowly varying function of the flat time coordinate $t$ .", "Then some of dS isometries are slightly broken and the spacetime geometry is given by quasi-dS space.", "In this case, the deviation of the background from perfect dS space is parametrized by the slow-roll parameter $\\epsilon _H$ .", "When the inflation is driven by the vacuum energy of the inflaton $\\phi (t)$ , a homogeneously evolving scalar field, $\\epsilon _H$ is proportional to $\\dot{\\phi }^2$ : $\\begin{split}\\epsilon _H \\equiv \\dot{r_H} = -\\frac{\\dot{H}}{H^2}=\\frac{4\\pi \\dot{\\phi ^2}}{M_{\\rm Pl}^2 H^2}=\\frac{4\\pi G \\dot{\\phi ^2}}{H^2},\\end{split}$ where dot denotes the derivative with respect to $t$ .", "In perfect dS case, equations of motion are solved by the constant $H$ and $\\dot{\\phi }=0$ , giving $\\epsilon _H=0$ .", "On the other hand, even if $\\dot{\\phi 
}^2$ does not vanish, we can suppress $\\epsilon _H$ close to zero by taking the $G\\rightarrow 0$ limit, which we will focus on throughout this work.", "In any case, as $\\epsilon _H \\rightarrow 0$ , the broken dS isometries are restored, implying the existence of the approximate timelike Killing vector associated with the static time coordinate $t_s$ .", "At the same time, the time scale $(\\epsilon _H H)^{-1}$ after which $H$ is no longer approximated as a constant becomes infinity.", "We will explicitly show that when we take this time scale to be the inflationary period, the energy flux across the horizon and its fluctuation become divergent in the $G\\rightarrow 0$ limit.", "Moreover, the energy flux across the horizon is interpreted as the expectation value of the static time translation generator, and its fluctuation is driven by the fluctuation in time, hence that in the value of $H$ at the end of inflation.", "Then we find that unlike perfect dS space, the inflationary quasi-dS space is described by Type II$_\\infty $ algebra rather than Type II$_1$ algebra.", "The organization of this article is as follows.", "In Section , we describe how the change in the horizon area induced by the slow-roll gives rise to the nonzero energy flux across the horizon, which is identified with the expectation value of the static time translation generator.", "In Section REF , we observe the modification of the von Neumann algebra description of the inflationary quasi-dS space from that of perfect dS space when we take the nonzero energy flux across the horizon into account.", "After claiming that quasi-dS space is well described by Type II$_\\infty $ algebra, we provide the expression for the von Neumann entropy of the static patch in Section REF .", "In Section REF , we relate this with the change in the horizon area considered in Section to complete our argument.", "Then we conclude with a brief comment about the possibility that the inflationary quasi-dS space is described by Type II$_1$ algebra.", "This can happen when the inflationary period is much shorter than $(\\epsilon _H H)^{-1}$ as recently conjectured in the swampland program.", "In Appendix , we summarize various coordinates on dS space which are used throughout the discussion.", "In Appendix , details of the density matrix considered in Section REF are given." 
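As a crude numerical illustration of the relation $\epsilon _H=4\pi G\dot{\phi }^2/H^2$ (the sketch and its numbers are ours and purely illustrative, not taken from the paper), one can hold $\dot{\phi }^2$ fixed and decrease $G$ : the slow-roll parameter tends to zero while the inflationary period $(\epsilon _H H)^{-1}$ grows, since $\dot{H}=-\epsilon _H H^2=-4\pi G\dot{\phi }^2$ is then small and nearly constant.

```python
# Sketch (ours; units and numbers purely illustrative): at fixed phidot^2 the
# slow-roll parameter eps_H = 4*pi*G*phidot^2/H^2 vanishes as G -> 0, while the
# inflationary period (eps_H*H)^{-1} over which H changes by O(1) diverges.
import numpy as np

H0 = 1.0e-5          # assumed initial Hubble rate (arbitrary units)
phidot2 = 1.0e-12    # assumed, fixed inflaton kinetic term

for G in [1.0, 1.0e-2, 1.0e-4]:
    eps = 4 * np.pi * G * phidot2 / H0 ** 2
    period = 1.0 / (eps * H0)        # time scale for an O(1) change of H
    dH_per_hubble = eps * H0         # |Delta H| accumulated over one Hubble time 1/H0
    print(f"G={G:.0e}:  eps_H={eps:.2e}  (eps_H H)^-1={period:.2e}  "
          f"|dH|/H per Hubble time={dH_per_hubble / H0:.2e}")
```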
], [ "Horizon dynamics of quasi-dS space ", "In this section, we estimate the energy flux across the future horizon ${\\cal H}^+$ during the inflationary period, which relates the change in the horizon area to the static time translation generator.", "We assume that there is no energy flux across the initial singularity, ${\\cal H}^- \\cup {\\cal H}^{-\\prime }$ in Figure REF , the past boundary of the region $T\\cup R$ covered by the flat coordinates.", "Since the past horizon ${\\cal H}^-$ belongs to the initial singularity, we do not consider the energy flux across ${\\cal H}^-$ .", "The energy-momentum tensor of the inflaton field $\\phi $ with the canonical kinetic term is given by $\\begin{split}T_{\\mu \\nu }=\\nabla _\\mu \\phi \\nabla _\\nu \\phi -\\Big (\\frac{1}{2}(\\nabla \\phi )^2+V(\\phi )\\Big )g_{\\mu \\nu }.\\end{split}$ Since spacetime during inflation is homogeneous and isotropic at large scale, we expect that the equations of motion are solved by $\\phi (t)$ which depends only on the flat time coordinate $t$ (the metric of dS space in the flat coordinates $(t, r)$ can be found in (REF )).", "Since $\\dot{\\phi }$ becomes zero in the perfect dS limit ($H=$ constant), it measures the deviation of the background from dS space, which is evident from (REF ), i.e., $\\epsilon _H \\propto \\dot{\\phi }^2$ .", "So far as $\\epsilon _H$ is very tiny, one can find the approximate dS isometries, which allow an approximate timelike Killing vector along the direction of the static time coordinate $t_s$ , $k^a =(\\partial _{t_s})^a =(\\partial _t - H r\\partial _r)^a$ (the metric of dS space in the static coordinates $(t_s, r_s)$ can be found in (REF )).", "The component of the energy-momentum tensor associated with the $t_s$ direction is written as $\\begin{split}T_{t_st_s}=\\dot{\\phi }^2+(1-H^2r_s^2) \\Big (-\\frac{1}{2}\\dot{\\phi }^2+V(\\phi )\\Big ),\\end{split}$ hence $T_{t_st_s}=\\dot{\\phi }^2$ on the horizon $r_s=r_H=H^{-1}$ .", "Furthermore, the relation $\\partial _{r_s}\\phi =(\\partial t/\\partial r_s)\\dot{\\phi }=-[(Hr_s)/(1-H^2r_s^2)]\\dot{\\phi }$ (see (REF )) gives $\\begin{split}T_{t_s r_s}=-\\frac{Hr_s}{1-H^2r_s^2}\\dot{\\phi }^2.\\end{split}$ For a more straightforward interpretation of this, we consider $T_{t_sr_*}$ by converting $r_s$ into the tortoise coordinate $r_*$ defined in (REF ).", "From this, we can define `luminosity', the energy flux across the surface of constant $r_s$ by ${\\cal L}=-4\\pi r_s^2 T_{t_sr_*}$ [11].", "Since $\\begin{split}T_{t_s r_*}= -H r_s \\dot{\\phi }^2\\end{split}$ becomes $T_{t_s r_*}= -\\dot{\\phi }^2$ on the horizon, the luminosity on the horizon is given by ${\\cal L}=4\\pi H^{-2}\\dot{\\phi }^2=\\epsilon _H M_{\\rm Pl}^2$ .", "Meanwhile, the Kruskal-Szekeres coordinates $U$ and $V$ , in terms of which the metric is written as (REF ), are natural affine parameters on ${\\cal H}^+$ and ${\\cal H}^-$ , respectively.", "The energy-momentum tensor components in the Eddington-Finkelstein coordinates ($t, r_*$ ) and those in the Kruskal-Szekeres coordinates ($U, V$ ) are related as $\\begin{split}&T_{t_st_s}=H^2U^2 T_{UU}+H^2V^2T_{VV}-2 H^2 UV T_{UV},\\\\&T_{t_s r_*}=-H^2U^2T_{UU}+H^2V^2T_{VV}.\\end{split}$ Then the simple relations $\\begin{split}T_{UU}=\\frac{T_{t_st_s}}{H^2U^2}=-\\frac{T_{t_sr_*}}{H^2U^2} = \\frac{\\dot{\\phi }^2}{H^2U^2}\\end{split}$ are satisfied on ${\\cal H}^+$ ($V=0$ ).", "The energy-momentum tensor components on ${\\cal H}^+$ are used to find the first law of thermodynamics, which relates the energy flux 
across the horizon to the change in the horizon area [12], [13].", "In the perfect dS limit, we can use the timelike Killing vector $k^a=(\\partial _{t_s})^a=H(U\\partial _U-V\\partial _V)$ to find the conserved current $\\begin{split}J^a=-T^a_{~b}k^b.\\end{split}$ Since $V=0=$ (constant) on ${\\cal H}^+$ , relations $k^a=HU\\partial _U$ and $dV=0$ thus $g^{ab}(dV)_a (dV)_b=0$ are satisfied, implying that the vector $(\\partial _U)^a$ which is proportional to $g^{ab}(dV)_b$ is normal as well as tangential to ${\\cal H}^+$ .", "Then ${\\cal H}^+$ corresponds to the Killing horizon and the energy flux across ${\\cal H}^+$ is given by $\\begin{split}\\Delta E &=-\\int _{{\\cal H}^+}d\\Sigma _a J^a = \\int _{{\\cal H}^+} d\\Omega dU\\sqrt{\\gamma } T_{Ut_s}=\\int _{{\\cal H}^+} d\\Omega dU \\sqrt{\\gamma } (HU)T_{UU},\\end{split}$ where $d\\Sigma _a$ is the volume element on ${\\cal H}^+$ and $\\sqrt{\\gamma }=r_H^2=H^{-2}$ .", "As can be inferred from the relation $k^a=HU\\partial _U$ on ${\\cal H}^+$ and $T_{Ut_s}$ in the integrand, $\\Delta E$ is interpreted as the static time translation generator on ${\\cal H}^+$ .", "More precisely, denoting the quantum state of the inflationary universe by $|\\Phi \\rangle $ and the operator generating the static time translation on the `boundary' ${\\cal H}^+$ of the static patch $R$ by $H_R$ , the solutions to the equations of motion ${\\phi }(t)$ and $T_{\\mu \\nu }$ can be regarded as $\\langle \\Phi |\\phi |\\Phi \\rangle $ and $\\langle \\Phi |T_{\\mu \\nu }|\\Phi \\rangle $ , respectively, then $\\Delta E$ is identified with $\\langle \\Phi |H_R|\\Phi \\rangle $ (see also discussion in Section 2.4 and Section 3 of [9]).", "Meanwhile, since $r_s=r_H=H^{-1}$ is almost constant on ${\\cal H}^+$ , the relations $U=H^{-1}e^{H(t_s-r_*)}$ (see (REF )) and $t_s=t-\\frac{1}{2H}\\log \\big (1-H^2 r_s^2 \\big )$ (see (REF )) indicate that $dU=HU dt$ is satisfied on ${\\cal H}^+$ .", "Then from (REF ) one finds $\\begin{split}\\langle \\Phi |H_R|\\Phi \\rangle =\\Delta E= \\int d t \\frac{4\\pi }{H^2}\\dot{\\phi }^2=\\int dt \\epsilon _H M_{\\rm Pl}^2,\\end{split}$ where the integrand is nothing more than the luminosity ${\\cal L}$ and the range of $t$ integration is taken to be the inflationary period during which $H$ is almost constant.", "In addition, we assume $|\\dot{\\epsilon _H}/(\\epsilon _H H)|\\ll 1$ such that $\\epsilon _H$ does not vary much during the inflationary period.", "Since the value of $H$ considerably deviates from the initial value after $\\Delta t={\\cal O}(1)\\times (\\epsilon _H H)^{-1}$ , we may take $t \\in (-(2\\epsilon _H H)^{-1}, (2\\epsilon _H H)^{-1})$ , which becomes $t \\in (-\\infty , +\\infty )$ in the perfect dS limit $\\epsilon _H \\rightarrow 0$ .", "Then $\\langle \\Phi |H_R|\\Phi \\rangle $ is estimated as $\\begin{split}\\langle \\Phi |H_R|\\Phi \\rangle \\simeq \\frac{1}{\\epsilon _H H}\\times \\epsilon _H M_{\\rm Pl}^2 = \\frac{M_{\\rm Pl}^2}{H} \\end{split}$ up to ${\\cal O}(1)$ coefficient, showing that $\\langle \\Phi |H_R|\\Phi \\rangle $ is insensitive to $\\epsilon _H$ , or equivalently, $\\dot{\\phi }^2$ at leading order.", "The backreaction of the energy flux across the horizon leads to the deformation of the geometry parametrized by expansion, shear, and rotation.", "When the background is close to dS space, the horizon can be approximated as a Killing horizon, where all the three parameters vanish at leading order.", "Then the Raychaudhuri equation for the expansion $\\Theta = A^{-1}dA/dU$ which 
describes the change in the horizon area $A=4\\pi r_H^2=4\\pi H^{-2}$ is approximated as $\\begin{split}\\frac{d\\Theta }{dU} \\simeq -\\frac{8\\pi }{M_{\\rm Pl}^2}T_{ab}(\\partial _U)^a (\\partial _U)^b = -\\frac{8\\pi }{M_{\\rm Pl}^2}T_{UU},\\end{split}$ from which we can replace $T_{UU}$ by $-[M_{\\rm Pl}^2/(8\\pi )]d\\Theta /dU$ .", "Putting this into $\\langle \\Phi |H_R|\\Phi \\rangle $ , we obtain $\\begin{split}\\langle \\Phi |H_R|\\Phi \\rangle &=-\\frac{M_{\\rm Pl}^2}{8\\pi }\\int d\\Omega dU\\sqrt{\\gamma } (HU)\\frac{d\\Theta }{dU}\\\\&=-\\frac{M_{\\rm Pl}^2}{8\\pi }\\int d\\Omega \\Big [\\sqrt{\\gamma }HU \\Theta \\Big |_{U=0}^{U\\simeq \\infty }-\\int dU\\Big (\\sqrt{\\gamma }H+H U\\frac{d\\sqrt{\\gamma }}{dU}+\\sqrt{\\gamma }U\\frac{dH}{dU}\\Big )\\Theta \\Big ].\\end{split}$ Noting that $\\partial _U=(HU)^{-1}\\partial _{t_s}$ and $dt_s=dt$ on ${\\cal H}^+$ , one finds that $dH/dU=-\\epsilon _H H/U$ , $d\\sqrt{\\gamma }/dU=2\\epsilon _H/(H^2U)$ and $\\Theta =2\\epsilon _H/U$ .", "Then the last two terms in (REF ) are ${\\cal O}(\\epsilon _H^2 \\Delta t)$ .", "For the first surface term, since $\\sqrt{\\gamma }HU\\Theta $ is ${\\cal O}(\\epsilon _H)$ , the variation of $\\sqrt{\\gamma }HU\\Theta $ over ${\\cal H}^+$ is ${\\cal O}(\\epsilon _H^2 \\Delta t)$ .", "Therefore, the second term in (REF ) gives the leading contribution to $\\langle \\Phi |H_R|\\Phi \\rangle $ of ${\\cal O}(\\epsilon _H \\Delta t)$ : $\\begin{split}\\langle \\Phi |H_R|\\Phi \\rangle &=\\frac{M_{\\rm Pl}^2}{8\\pi }\\int dU H \\int d\\Omega \\sqrt{\\gamma } \\Theta +{\\cal O}(\\epsilon _H^2 \\Delta t)=\\frac{M_{\\rm Pl}^2}{8\\pi }\\int dU H \\frac{dA}{dU} +{\\cal O}(\\epsilon _H^2 \\Delta t)\\\\&=\\frac{M_{\\rm Pl}^2}{8\\pi }\\int d t H \\frac{dA}{dt} +{\\cal O}(\\epsilon _H^2 \\Delta t) = \\int dt \\frac{H}{2\\pi }\\frac{d}{dt}\\Big (\\frac{A}{4G}\\Big )+{\\cal O}(\\epsilon _H^2\\Delta t).\\end{split}$ Since the Gibbons-Hawking temperature is given by $H/(2\\pi )$ , the integrand can be written in the form of $T\\Delta S$ , which is consistent with the first law of thermodynamics.", "We also note that the explicit calculation of the integrand reproduces (REF ).", "To see the physical meaning of $\\langle \\Phi |H_R|\\Phi \\rangle =\\Delta E$ more clear, we recall that the static time translation is a diffeomorphism, the gauge invariance of gravity, hence it acts as a constraint on the dynamics of quantum gravity.", "As a result, just like the ADM mass of the black hole, the associated charge gets contribution from the surface integral on the boundary (${\\cal H}^+$ for dS space) only, which is given by $\\beta ^{-1}S=M_{\\rm Pl}^2/(2 H)=r_H/(2G)$ [14].", "This is supported by the fact that the energy inside the horizon in the perfect dS limit is estimated as $M_{\\rm Pl}^2/(2H)$ , which is obtained by multiplying the energy density during inflation $T_{tt}\\simeq [3/(8\\pi )]M_{\\rm Pl}^2 H^2$ by the volume inside the horizon $(4\\pi /3)H^{-3}$ .", "The radius of the horizon in the flat coordinates $H^{-1}e^{-Ht}$ gives $V=(4\\pi /3) e^{3Ht}\\times (H^{-1}e^{-Ht})^3$ where the factor $e^{3Ht}$ comes from $\\sqrt{-g}$ restricted to the spatial directions.", "If we regard the slow-roll as the adiabatic process, $H$ slowly decreases in time, then the `ADM mass' just after the end of inflation $M_{\\rm Pl}^2/(2H)+\\Delta E$ is identified with $M_{\\rm Pl}^2/(2H_f)$ where $H_f$ is the value of $H$ at that time.", "As $H_f$ significantly deviates from the initial $H$ , we expect that $\\Delta E$ is given by 
the same order as $M_{\rm Pl}^2/(2H)$.", "It is remarkable that $\langle \Phi |H_R|\Phi \rangle $ becomes divergent in the $G\rightarrow 0$ , or equivalently, $ M_{\rm Pl}^2 \rightarrow \infty $ limit.", "Indeed, whereas the perfect dS limit $\epsilon _H \rightarrow 0$ is trivially obtained by taking $\dot{\phi }\rightarrow 0$ , we can also reach the same limit by taking $G\rightarrow 0$ even if $\dot{\phi }^2$ is kept finite, as can be noticed from (REF ).", "In this case, as $\epsilon _H$ almost vanishes, the spacetime geometry can be well approximated by the perfect dS space.", "But at the same time, as implied by the nonzero energy flux, the horizon is no longer in thermal equilibrium with the Gibbons-Hawking radiation.", "Then the state $|\Phi \rangle $ breaks the dS isometry by allowing the nonzero $\langle \Phi |H_R|\Phi \rangle $ , instead of being annihilated by $H_R$ , just like the Unruh state describing the evaporating black hole [15] (see also [16], [17] for recent discussions).", "This can be contrasted with perfect dS space ($\dot{\phi }=0$ ), in which the horizon is in thermal equilibrium hence the energy flux across the horizon vanishes.", "Then the quantum state $|\Psi \rangle $ for perfect dS space respects the dS isometry.", "This is called the Bunch-Davies state [18], [19], the dS analogue of the Hartle-Hawking state of the black hole [20].", "Our discussion so far is made in terms of the solutions to the classical equations of motion, which are regarded as the expectation values of the operators with respect to $|\Phi \rangle $.", "On the other hand, as a dS isometry associated with the static time translation is spontaneously broken by the quasi-dS background, the quantum fluctuation in $\phi $ combines with that in the trace of the spatial metric [21], [22] (see also [23], [24]), forming the gauge invariant operator which excites the curvature perturbation [25], [26].", "As the universe undergoes accelerated expansion, the wavelength of the curvature perturbation is stretched beyond the horizon scale, after which the fluctuation can be treated as a classical one [27], [28], [29], [30], [31].", "This contributes to the accumulated uncertainty of the classical trajectory $\phi (t)$ during $\Delta t$ given by [32], [33], [34] $\begin{split}\langle \phi (t)^2 \rangle \equiv \delta \phi ^2=\Big (\frac{H}{2\pi }\Big )^2 H\Delta t.\end{split}$ Noting $\beta ^{-1}=H/(2\pi )$ , the accumulated uncertainty may be interpreted as the thermal fluctuation, which induces the fluctuation in $H_R$ estimated as $\begin{split}\delta H_R =\Big (\frac{\partial }{\partial \beta }\langle \Phi |H_R|\Phi \rangle \Big )^{1/2}=\frac{M_{\rm Pl}}{\sqrt{2\pi }}.\end{split}$ We can reach a similar conclusion in the following way.", "In perfect dS space, different constant $t$ (flat time) slices are physically equivalent due to the isometry associated with the $t_s$ (static time) translation: the translation of $t$ can be compensated by the scaling of $r$ , leaving $r_s=r e^{Ht}$ (see (REF )) hence the metric in the static coordinates (REF ) unchanged.", "In quasi-dS space, however, the time dependent classical solution $\phi (t)$ plays the role of a `clock' distinguishing the specific constant $t$ slice from others.", "Then the fluctuation in $\phi (t)$ given by (REF ) leads to the fluctuation in time $\delta t$ (not to be confused with $\Delta t$ , the fluctuation-free time interval during which the fluctuation in $\phi (t)$ is accumulated), and thus to the fluctuation in $H$ : $\begin{split}|\delta H|= |\dot{H}\delta t|=\Big |\frac{\dot{H}}{\dot{\phi }}\delta \phi \Big |=\sqrt{\frac{\epsilon _H}{\pi }}H\frac{H}{M_{\rm Pl}}(H\Delta t)^{1/2}.\end{split}$ From this, we can estimate the fluctuation in $\langle \Phi |H_R|\Phi \rangle $ over the inflationary period $\Delta t=(\epsilon _H H)^{-1}$ to obtain $\begin{split}|\delta H_R|=\frac{M_{\rm Pl}^2}{H^2}|\delta H|=\frac{M_{\rm Pl}}{\sqrt{\pi }},\end{split}$ which diverges in the $G \rightarrow 0$ limit.", "Since $\langle \Phi | H_R |\Phi \rangle \sim {\cal O}(G^{-1})$ is divergent in the $G\rightarrow 0$ limit, one may define the `renormalized' static time translation generator by $H^{\prime }_{R}=H_{R}-\langle \Phi | H_R |\Phi \rangle $.", "But still, $\langle \Phi | (H^{\prime }_R)^2 |\Phi \rangle =(\delta H_R)^2 \sim {\cal O}(G^{-1})$ is also divergent in the $G\rightarrow 0$ limit, so $H^{\prime }_R|\Phi \rangle $ has a divergent norm and $H^{\prime }_R$ is not well defined.", "In order for the static time translation generator to be well defined, we need to extend the spacetime to the regions $L$ and $B$ in Figure REF such that $|\Phi \rangle $ describes the whole quasi-dS manifold covering $R\cup T \cup L \cup B$.", "Then we can introduce $H_L$ , the static time translation generator in the complementary static patch $L$.", "Since $L$ is just a copy of $R$ , the energy flux across the future horizon ${\cal H}^{+\prime }$ has the same value as that across ${\cal H}^{+}$.", "But the static time in $L$ flows in the opposite direction to that in $R$ , so $\Delta E$ through ${\cal H}^{+}$ (say, flowing from $R$ to $T$ ) has an opposite sign to that through ${\cal H}^{+\prime }$ (say, flowing from $L$ to $B$ ).", "From this, we expect that $\Delta E$ on ${\cal H}^{+\prime }$ is identified with $-\langle \Phi | H_L |\Phi \rangle $ and that the sum of $\Delta E$ on ${\cal H}^+$ and that on ${\cal H}^{+\prime }$ vanishes, giving $\langle \Phi | H_R |\Phi \rangle =\langle \Phi | H_L |\Phi \rangle $.", "Then the total static time translation generator $H_0=H_R-H_L = H^{\prime }_R-H^{\prime }_L$ is well defined, as both $H_0$ and $(H_0)^2$ annihilate the thermofield double state describing the entanglement between states living on ${\cal H}^+$ and ${\cal H}^{+\prime }$.", "But the fact that $H^{\prime }_L$ and $H^{\prime }_R$ are not well defined indicates that a factorization of the Hilbert space into the Hilbert spaces defined on $R$ and $L$ is not well defined in the $G\rightarrow 0$ limit.", "We can compare our results, $\langle \Phi | H_R |\Phi \rangle \sim {\cal O}(G^{-1})$ and $\delta H_R\sim {\cal O}(G^{-1/2})$ , with the boundary Hamiltonian of the eternal AdS black hole, which is described by the ${\cal N}=4$ super Yang-Mills theory [8].", "In the large $N$ limit, the Hamiltonians in the left and right boundaries $H_L$ and $H_R$ have thermal expectation values of ${\cal O}(N^2) \sim {\cal O}(G^{-1})$ , and $H^{\prime }_{L/R}=H_{L/R}-\langle H_{L/R}\rangle $ satisfy $\langle {H^{\prime }_{L/R}}^2\rangle \sim {\cal O}(N^2)$ hence $\delta H_{L/R} \sim {\cal O}(N)\sim {\cal O}(G^{-1/2})$ , showing the same behaviors as $\langle \Phi | H_R |\Phi \rangle $ and $\delta H_R$ , respectively."
], [ "von Neumann algebra for quasi-dS space", "In order to find the von Neumann algebra description of quasi-dS space, we first consider the Type II$_1$ algebra for dS space discussed in [10] and see how it is modified by the nonzero energy flux across the horizon we obtained in Section .", "Since a static observer can access the static patch $R$ only, the quantum description of (quasi-)dS space as seen by the static observer is made in terms of the local observables on $R$.", "Moreover, in the dS limit, the static patch is invariant under the subgroup of the dS isometry group consisting of the static time translation and the rotations, hence operators on the static patch are required to be invariant under this subgroup.", "However, as pointed out in [10], the only operators that commute with the static time translation generator are those proportional to the identity.", "Nontrivial operators can be obtained by taking the Hamiltonian of the static observer into account in addition, such that the total Hamiltonian is given by $H_0+\widehat{q}$ , where $H_0$ is the static time translation generator and $\widehat{q}$ is the observer Hamiltonian.", "Here $\widehat{q}$ is assumed to have nonnegative eigenvalues $q$ and acts on $L^2(\mathbb {R}^+)$ , the Hilbert space of square integrable functions of $q$.", "For quasi-dS space, by taking the inflationary period to be $(\epsilon _H H)^{-1}$ , the nonzero energy flux across the horizon $\langle \Phi | H_R|\Phi \rangle $ becomes divergent in the $G\rightarrow 0$ limit.", "As discussed in Section , $H^{\prime }_R=H_R-\langle \Phi | H_R|\Phi \rangle $ , the renormalized static time translation generator (restricted to the static patch $R$ ), is not well defined since $\langle \Phi | (H^{\prime }_R)^2|\Phi \rangle \sim {\cal O}(G^{-1})$ also diverges in the $G\rightarrow 0$ limit.", "A similar problem also arises for the boundary Hamiltonian of the AdS black hole, in which case the issue is circumvented by considering $H^{\prime }_R/N$ since ${\cal O}(N)$ is equivalent to ${\cal O}(G^{-1/2})$ in the large $N$ limit [8].", "Motivated by this, we may define $H^{\prime }_R/N$ where $N$ is now a dimensionless parameter of ${\cal O}(G^{-1/2})$ , say, $M_{\rm Pl}/H$.", "Then following [8], $H^{\prime }_R/N$ can be expressed as $\begin{split}\frac{H^{\prime }_R}{N}=U + \frac{H_0}{N},\end{split}$ where $U$ is an operator which commutes with any observables on $R$.", "Whereas $U$ is in fact $H^{\prime }_L/N$ , it is also identified with $H^{\prime }_R/N$ in the $G\rightarrow 0$ limit, in which $H_0/N \rightarrow 0$ and $[a, H^{\prime }_R/N]=(i/N)\partial a/\partial t_s \rightarrow 0$ for any operator $a$ on $R$.", "Moreover, the divergence of $\langle \Phi | H_R|\Phi \rangle /N \sim {\cal O}(G^{-1/2})$ in the $G\rightarrow 0$ limit indicates that the lower bound on $U$ is $-\infty $ , thus the eigenvalue of $U$ can take any real value in $(-\infty , +\infty )$ and the Hilbert space relevant to $U$ is given by $L^2(\mathbb {R})$.", "In the following discussion, we will take the factor $1/N$ to be implicit for convenience.", "Indeed, as pointed out in [8], if we are interested in the ordinary functions of $N$ , we may work with $H^{\prime }_R$ instead of $H^{\prime }_R/N$.", "Our discussion is based on the canonical ensemble, in which the fluctuation $\delta H_R\sim {\cal O}(G^{-1/2})$ is divergent in the $G\rightarrow 0$ limit.", "On the other hand, we don't need to divide $H_R$ by $N$ in the
microcanonical ensemble, as the fluctuation in $H_R$ is restricted to be ${\cal O}(1)$ [9].", "The reason we do not consider the microcanonical ensemble is that the divergent fluctuation in $H_R$ is induced by the fluctuation in the curvature perturbation, and so far as we know there is no physical reason to restrict the fluctuation to be ${\cal O}(1)$.", "Defining $X\equiv NU=H^{\prime }_L$ , $H^{\prime }_R$ is written as $X+ H_0$ with $X$ acting on the Hilbert space $L^2(\mathbb {R})$ , and the (renormalized) Hamiltonian restricted to the static patch $R$ is given by $\widehat{H}=H_0+X+\widehat{q}$.", "Now we can construct the Type III algebra ${\cal A}_R \otimes B(\mathbb {R}) \otimes B(\mathbb {R}^+)$ , where ${\cal A}_R$ is the algebra of the observables on $R$ and $B(\mathbb {R}^{(+)})$ is the algebra of bounded operators acting on $L^2(\mathbb {R}^{(+)})$.", "This is converted into a Type II algebra by imposing the diffeomorphism invariance as a gauge constraint.", "Focusing on the (approximate) isometry of the static time translation, the algebra of the observables on $R$ is given by the invariant subalgebra with respect to $\widehat{H}$ , $\begin{split}\widehat{\cal A}_R=\big ({\cal A}_R\otimes B(\mathbb {R}) \otimes B(\mathbb {R}^+)\big )^{\widehat{H}}.\end{split}$ The elements of $\widehat{{\cal A}}_R$ can be explicitly written by introducing an operator $\widehat{p}$ conjugate to $\widehat{q}$ satisfying $[\widehat{q}, \widehat{p}]=i$ , which is interpreted as a (fluctuating) time measured by the static observer.", "By requiring $[\widehat{q}, H_0]=0$ , $\widehat{q}$ belongs to $\widehat{{\cal A}}_R$.", "Moreover, for any $a\in {\cal A}_R$ , one finds that its gravitational dressing, or outer automorphism, $\begin{split}{a}^{\prime }=e^{i (H_0+X) \widehat{p}} a e^{-i (H_0+X) \widehat{p}}\end{split}$ satisfies $[\widehat{q}, {a}^{\prime }]=-[H_0+X, {a}^{\prime }]$ , or equivalently, $[\widehat{H}, {a}^{\prime }]=0$.", "Therefore, $\widehat{{\cal A}}_R$ is generated by $\begin{split}\lbrace {a}^{\prime }=e^{i (H_0+X) \widehat{p}} a e^{-i (H_0+X) \widehat{p}}, \widehat{q}\rbrace .\end{split}$ By taking the conjugation by $e^{-i (H_0+X) \widehat{p}}$ , one finds that it is equivalent to $\lbrace a, \widehat{q}-(H_0+X)\rbrace $.", "In order to implement the finite (renormalized) entropy, we need to define a `trace' in a sensible way.", "The trace here refers to, in a somewhat abstract sense, a linear functional of operators satisfying ${\rm Tr}(ab)={\rm Tr}(ba)$ and ${\rm Tr}(a^\dagger a)>0$ for a nonzero $a$.", "Just like the case of perfect dS space, the trace can be defined in terms of the Bunch-Davies state $|\Psi \rangle $ which is invariant under the dS isometries.", "We note that since $H_0$ belongs to the isometry generators, $H_0|\Psi \rangle =0$ is satisfied and $|\Psi \rangle $ does not distinguish $\widehat{q}-H_0$ from $\widehat{q}$.", "From this and $[X, a]=0$ , one finds that for $a^{\prime }$ defined in (REF ), $\langle \Psi |a^{\prime }|\Psi \rangle $ is identified with $\langle \Psi |a|\Psi \rangle $.", "Then we define the trace of any operator $\widehat{a} \in \widehat{\cal A}_R$ by $\begin{split}{\rm Tr}(\widehat{a})= \int _{-\infty }^{\infty }\beta dx e^{\beta x}\int _0^\infty \beta dq e^{-\beta q }\langle \Psi |\widehat{a}|\Psi \rangle ,\end{split}$ where $x$ is the eigenvalue of $X$.", "For perfect dS space, since the divergent fluctuation in $X$ is
not taken into account, the integration over $x$ is absent and the trace of the identity is not divergent but finite : ${\\rm Tr}(1)=1$ .", "To see the physical meaning of the identity in this case, let us observe the trace of other operators in $\\widehat{{\\cal A}}_R$ generated by $\\lbrace a, \\widehat{q}-H_0 \\rbrace $ .", "For $a\\in {\\cal A}_R$ which is independent of $\\widehat{q}-H_0$ , $\\begin{split}&{\\rm Tr}(a)={\\rm Tr}(a 1)=\\Big (\\beta \\int _0^\\infty dq e^{-\\beta q}\\Big )\\langle \\Psi |a|\\Psi \\rangle = \\langle \\Psi |a|\\Psi \\rangle ,\\end{split}$ showing that ${\\rm Tr}(a)$ is the expectation value of the local operator $a$ with respect to $|\\Psi \\rangle $ .", "On the other hand, when the operator $G\\in \\widehat{{\\cal A}}_R$ is given by a function of $\\widehat{q}-H_0$ , $\\begin{split}{\\rm Tr}(G(\\widehat{q}-H_0))&={\\rm Tr}(G(\\widehat{q}-H_0) 1)= \\beta \\int _0^\\infty dq e^{-\\beta q} \\langle \\Psi | G(\\widehat{q}-H_0)|\\Psi \\rangle \\\\& = \\beta \\int _0^\\infty dq e^{-\\beta q} \\langle \\Psi | G(q)|\\Psi \\rangle = \\beta \\int _0^\\infty dq e^{-\\beta q} G(q),\\end{split}$ from which one finds that for Tr$(G)$ to be an expectation value, $\\beta e^{-\\beta q}$ is interpreted as the probability distribution of the eigenvalues of $\\widehat{q}$ .", "Then it is reasonable to interpret the identity as a density matrix describing the maximal entanglement : the entropy of the system in the perfect dS background is maximized at $-{\\rm Tr}(1 \\log 1)=0$ .", "In other words, among the states in the Hilbert space ${\\cal H}\\otimes L^2(\\mathbb {R}^+)$ on which the algebra $\\widehat{{\\cal A}}_R$ acts, $\\begin{split}|\\Psi _{\\rm max}\\rangle =|\\Psi \\rangle \\otimes \\int _0^\\infty dq \\sqrt{\\beta }e^{-\\beta {q}/2}|q\\rangle \\end{split}$ gives the maximal entropy as the density matrix $\\rho _{\\rm max}=1$ is obtained from $\\langle \\Psi _{\\rm max}|\\widehat{a}|\\Psi _{\\rm max}\\rangle ={\\rm Tr}(\\rho _{\\rm max}\\widehat{a})$ [10].", "This is a feature of Type II$_1$ von Neumann algebra.", "In contrast, for quasi-dS space, the integration over $x$ in (REF ) leads to Tr$(1)=\\infty $ , which means that the density matrix for the maximal entanglement is not renormalized.", "Then $\\widehat{{\\cal A}}_R$ belongs to Type II$_\\infty $ von Neumann algebra, in which the trace is sensibly defined only for the subset of the algebra." 
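, "As a minimal check of the last statement, using only the definitions above together with $H_0|\Psi \rangle =0$ , one can verify that $|\Psi _{\rm max}\rangle $ reproduces the trace on the generators of $\widehat{{\cal A}}_R$ : $\begin{split}\langle \Psi _{\rm max}|a|\Psi _{\rm max}\rangle =\Big (\beta \int _0^\infty dq e^{-\beta q}\Big )\langle \Psi |a|\Psi \rangle ={\rm Tr}(a),\qquad \langle \Psi _{\rm max}|G(\widehat{q}-H_0)|\Psi _{\rm max}\rangle =\beta \int _0^\infty dq e^{-\beta q}G(q)={\rm Tr}(G(\widehat{q}-H_0)),\end{split}$ so that $\langle \Psi _{\rm max}|\widehat{a}|\Psi _{\rm max}\rangle ={\rm Tr}(1\cdot \widehat{a})$ indeed identifies $\rho _{\rm max}=1$ ."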
], [ "Density matrix and entropy", "Since the inflationary quasi-dS background slightly breaks the dS isometry, the quantum state during inflation $|\Phi \rangle $ is no longer the Bunch-Davies state $|\Psi \rangle $.", "In order to find the density matrix in this case, we consider a state $\begin{split}&|\widehat{\Phi }\rangle =|\Phi \rangle \otimes g(X)\otimes f(\widehat{q})\end{split}$ in ${\cal H}\otimes L^2(\mathbb {R})\otimes L^2(\mathbb {R}^+)$ , where $\begin{split}&g(X)=\int _{-\infty }^\infty dx\, g(x)|x\rangle ,\quad \quad f(\widehat{q})=\int _{0}^\infty dq\, f(q)|q\rangle .\end{split}$ Here $|g(x)|^2$ and $|f(q)|^2$ are interpreted as the probability distributions of $x$ and $q$ with the normalizations $\int _{-\infty }^\infty |g(x)|^2dx=1$ and $\int _{0}^\infty |f(q)|^2dq=1$ , and they are assumed to be slowly varying functions of $x$ and $q$ , respectively.", "Then $\rho _{\widehat{\Phi }}$ , the density matrix associated with $|\widehat{\Phi }\rangle $ , is defined as $\begin{split}\langle \widehat{\Phi } | \widehat{a} |\widehat{\Phi }\rangle ={\rm Tr}(\rho _{\widehat{\Phi }}\widehat{a})=\int _{-\infty }^{\infty }\beta dx e^{\beta x} \langle \Psi _{\rm max} | \rho _{\widehat{\Phi }}\widehat{a} |\Psi _{\rm max}\rangle , \end{split}$ for any $\widehat{a}\in \widehat{\cal A}_R$.", "In order to obtain $\rho _{\widehat{\Phi }}$ , we need to convert the states written in terms of $|\Psi \rangle $ , i.e., the states constructed by acting with the operators in ${\widehat{\cal A}_R}$ on $|\Psi \rangle $ , into those written in terms of $|\Phi \rangle $.", "This is well described by the relative Tomita operator $S_{\Phi |\Psi }$ , an antilinear operator satisfying $\begin{split}S_{\Phi |\Psi } a |\Psi \rangle = a^\dagger |\Phi \rangle \end{split}$ for all $a\in {\cal A}_R$.", "(While we follow the notations in [10], they are different from those in [2]: $S_{\Phi |\Psi }$ and $\Delta _{\Phi |\Psi }$ in [10] are $S_{\Psi |\Phi }$ and $\Delta _{\Psi |\Phi }$ in [2], respectively.)", "From this, we define the relative modular operator $\begin{split}\Delta _{\Phi |\Psi } = e^{-h_{\Phi |\Psi }} = S_{\Phi |\Psi }^\dagger S_{\Phi |\Psi },\end{split}$ which gives the relation $\begin{split}\langle \Psi |\Delta _{\Phi |\Psi }a|\Psi \rangle &= \langle \Psi |S_{\Phi |\Psi }^\dagger S_{\Phi |\Psi } a|\Psi \rangle = \langle \Psi |S_{\Phi |\Psi }^\dagger a^\dagger |\Phi \rangle = \langle \Phi |a^\dagger |\Phi \rangle ^*\\&=\langle \Phi |a|\Phi \rangle .\end{split}$ In the same way, we can also define the Tomita operators and the modular operators $\Delta _\Phi =e^{-h_\Phi }=S_\Phi ^\dagger S_\Phi $ and $\Delta _\Psi =e^{-h_\Psi }=S_\Psi ^\dagger S_\Psi $ satisfying $\begin{split}S_\Phi a|\Phi \rangle =a^\dagger |\Phi \rangle ,\quad S_\Psi a|\Psi \rangle =a^\dagger |\Psi \rangle ,\end{split}$ respectively.", "In order to find the physical meaning of these modular operators, in addition to ${\cal A}_R$ , the local algebra restricted to the static patch $R$ , we consider another local algebra ${\cal A}^{\prime }_R$ , the commutant of ${\cal A}_R$.", "When we extend the spacetime manifold to $L$ and $B$ , ${\cal A}^{\prime }_R$ can be given by the local algebra on the complementary static patch $L$.", "Then it was shown (see, e.g., Section IV.A of [2]) that the density matrix $\rho _\Psi $ for the algebra ${\cal A}_R$ and
$\\rho ^{\\prime }_\\Psi $ for ${\\cal A}^{\\prime }_R$ associated with the state $|\\Psi \\rangle $ satisfy $\\begin{split}\\Delta _\\Psi =\\rho _\\Psi \\otimes {\\rho ^{\\prime }_\\Psi }^{-1}.\\end{split}$ We can also find the similar relations $\\Delta _\\Phi =\\rho _\\Phi \\otimes {\\rho ^{\\prime }_\\Phi }^{-1}$ , $\\Delta _{\\Psi |\\Phi }=\\rho _\\Psi \\otimes {\\rho ^{\\prime }_\\Phi }^{-1}$ , and $\\Delta _{\\Phi |\\Psi }=\\rho _\\Phi \\otimes {\\rho ^{\\prime }_\\Psi }^{-1}$ .", "Given the static time translation generators $H_R$ for ${\\cal A}_R$ and $H_L$ for ${\\cal A}^{\\prime }_R={\\cal A}_L$ , the density matrices can be written as $\\begin{split}\\log \\rho _\\Psi =-\\beta H_R +C,\\quad \\log \\rho ^{\\prime }_\\Psi =-\\beta H_L +C,\\end{split}$ respectively, from which one finds that $h_\\Psi =\\beta (H_R-H_L)$ .", "Since the static time in $R$ runs in the opposite direction to that in $L$ , $H_R-H_L$ is nothing more than the total static time translation generator $H_0$ .", "Thus, $h_\\Psi $ is identified with $\\beta H_0$ .", "The explicit value of $C$ in (REF ) can be obtained by taking the expectation value with respect to $|\\Psi \\rangle $ .", "From $\\langle \\Psi |H_R|\\Psi \\rangle =0$ (no energy flux across the horizon) and $\\langle \\Psi |\\log \\rho _\\Psi |\\Psi \\rangle ={\\rm Tr}(\\rho _\\Psi \\log \\rho _\\Psi )=-S_{\\rm bulk}(R)_\\Psi $ where $R$ denotes the bulk of the static patch $R$ , we obtain $\\begin{split}\\log \\rho _\\Psi =-\\beta H_R-S_{\\rm bulk}(R)_\\Psi .\\end{split}$ Comparing (REF ) and (REF ), it is reasonable to expect that $\\rho _{\\widehat{\\Phi }}$ contains $\\Delta _{\\Phi |\\Psi }$ , which converts the expectation value with respect to $|\\Psi \\rangle $ into that with respect to $|\\Phi \\rangle $ .", "This indeed is supported by the relation $\\Delta _{\\Phi |\\Psi }=\\rho _\\Phi \\otimes {\\rho ^{\\prime }_\\Psi }^{-1}$ and the observation that $L\\cup B$ is a copy of $R\\cup T$ thus $\\rho ^{\\prime }_\\Psi $ is just given by $\\rho _\\Psi $ .", "The rest part of $\\rho _{\\widehat{\\Phi }}$ depends on the probability distribution of $x$ and $q$ .", "Moreover, the fact that the observer's energy is described in a probabilistic way implies that the observer's time has an uncertainty.", "In [10] and [9], it was argued that when the fluctuation in the observer's time is bounded by $\\varepsilon \\ll \\beta $ and $g(x)$ as well as $f(q)$ is slowly varying over $1/\\varepsilon $ , i.e., $|\\Delta x|, |\\Delta q| \\sim 1/\\varepsilon $ , the density matrix $\\rho _{\\widehat{\\Phi }}$ is given by $\\begin{split}\\rho _{\\widehat{\\Phi }} = \\frac{1}{\\beta ^2}\\big | g(X+h_\\Psi /\\beta ) f ( \\widehat{q})\\big |^2e^{\\beta (-X+\\widehat{q})}\\Delta _{\\Phi |\\Psi }+{\\cal O}(\\varepsilon ).\\end{split}$ The justification of (REF ) can be found in Appendix .", "We note that in quasi-dS space, the curvature perturbation induces the fluctuation in $\\phi (t)$ hence that in the flat time given by $\\delta t=\\delta \\phi /\\dot{\\phi }(t)$ .", "As can be inferred from (REF ), these fluctuations are accumulated as time goes on, hence negligibly small compared to the fluctuation in the static observer's time provided $\\begin{split}\\delta t=\\frac{\\delta \\phi }{\\dot{\\phi }(t)}=\\sqrt{\\frac{1}{\\pi \\epsilon _H}}\\frac{(H\\Delta t)^{1/2}}{M_{\\rm Pl}} < \\varepsilon \\ll \\beta =\\frac{2\\pi }{H}.\\end{split}$ If the inequality is satisfied until $\\Delta t \\sim \\beta $ , i.e., several $e$ -folds, $\\epsilon _H$ is required to be 
larger than $H^2/M_{\\rm Pl}^2$ , which is known as the condition that the eternal inflation does not take place.", "That is, if $\\epsilon _H$ is too small, the field value of $\\phi (t)$ strongly fluctuates and $H$ may stay at some constant value for a long time instead of decreasing in time through the slow-roll.", "In this case, the semiclassical description we have considered is no longer valid.", "If the inequality is satisfied until the end of inflation $\\Delta t \\sim (\\epsilon _H H)^{-1}$ , we have the stronger bound $\\epsilon _H > H/M_{\\rm Pl}$ .", "Then the von Neumann entropy of the static patch associated with the state $|\\widehat{\\Phi }\\rangle $ , which will be identified with the generalized entropy up to the addition of constant, is written as $\\begin{split}S(R)_{\\widehat{\\Phi }}&=-\\langle \\widehat{\\Phi }|\\log \\rho _{\\widehat{\\Phi }}|\\widehat{\\Phi }\\rangle \\\\&=\\langle \\widehat{\\Phi }|h_{\\Phi |\\Psi }|\\widehat{\\Phi }\\rangle - \\langle \\widehat{\\Phi }|(\\beta (\\widehat{q}-X) +h_\\Phi )|\\widehat{\\Phi }\\rangle \\\\&\\quad -\\int _{0}^\\infty dq |f(q)|^2(\\log |f(q)|^2-\\log \\beta )-\\int _{-\\infty }^\\infty dx |g(H_R)|^2(\\log |g(H_R)|^2-\\log \\beta ),\\end{split}$ where in the second term $\\langle \\Phi |h_\\Phi |\\Phi \\rangle =0$ which is obtained from $S_\\Phi |\\Phi \\rangle =|\\Phi \\rangle $ is added and $H_R$ indicates $X+h_\\Psi /\\beta =X+H_0$ .", "From $\\Delta _\\Psi =\\rho _\\Psi \\otimes {\\rho ^{\\prime }_\\Psi }^{-1}$ and $\\Delta _{\\Psi |\\Phi }=\\rho _\\Psi \\otimes {\\rho ^{\\prime }_\\Phi }^{-1}$ , one finds $\\Delta _{\\Phi |\\Psi }^{is}\\Delta _\\Psi ^{-is}=\\Delta _\\Phi ^{is}\\Delta _{\\Psi |\\Phi }^{-is}$ (this quantity is called the Connes cocycle), which leads to $h_{\\Phi |\\Psi }-h_\\Phi =h_\\Psi -h_{\\Psi |\\Phi }$ .", "Then $S (R)_{\\widehat{\\Phi }}$ is rewritten as $\\begin{split}S (R)_{\\widehat{\\Phi }}=&-\\langle \\widehat{\\Phi }|h_{\\Psi |\\Phi }|\\widehat{\\Phi }\\rangle - \\langle \\widehat{\\Phi }|(\\beta (\\widehat{q}-X)-h_\\Psi )|\\widehat{\\Phi }\\rangle \\\\&-\\int _{0}^\\infty dq |f(q)|^2(\\log |f(q)|^2-\\log \\beta )-\\int _{-\\infty }^\\infty dx |g(H_R)|^2(\\log |g(H_R)|^2-\\log \\beta ).\\end{split}$ Let us first consider the second line.", "The first integral is interpreted as the entropy of the static observer, which is evident from the fact that $|f(q)|^2$ is the probability distribution of $\\widehat{q}$ eigenvalues, the observer's energy.", "As for the last term, we recall that the fluctuation in $H_R$ originates from the fluctuation in $\\phi (t)$ , or equivalently, the flat time $t$ .", "Since $H$ also varies depending on $t$ , the value of $H$ at the end of inflation also fluctuates, the probability distribution of which is described by $|g(H_R)|^2$ .", "These two integrals in the second line are not explicitly relevant to the excitations of matter in the static patch, and can be identified with $S_{\\rm bulk}(\\infty )_{\\widehat{\\Phi }}$ , the bulk entropy associated with $|\\widehat{\\Phi }\\rangle $ at the end of inflation.", "This is because at late time, the wavelength of almost all the excitations will be stretched beyond the horizon so the static observer does not find any excitation except for that of the observer state inside the horizon.", "We can compare it with the static observer in the Bunch-Davies state $|\\Psi \\rangle $ .", "As $|\\Psi \\rangle $ is invariant under the static time translation generated by $H_0$ , the bulk entropy will be constant in time, i.e., 
$S_{\\rm bulk}(R)_{\\widehat{\\Psi }}=S_{\\rm bulk}(\\infty )_{\\widehat{\\Psi }}$ , and it measures the entropy of empty dS space without any excitation except for that of the observer state.", "For the state during inflation $|\\Phi \\rangle $ , in contrast, the background does not respect the isometry generated by $H_0$ any longer.", "Then the spontaneous breaking of the isometry by the background gives rise to the curvature perturbation which does not appear in perfect dS space.", "But the background is still close to dS space and the wavelength of these excitations would be stretched as the universe undergoes accelerated expansion.", "Then just like the perfect dS background, almost all the excitations cross the horizon after several $e$ -folds and only the fluctuations of the observer's energy and the value of $H$ contribute to the entropy.", "Then $S_{\\rm bulk}(\\infty )_{\\widehat{\\Phi }}$ can be identified with the sum of $S_{\\rm bulk}(\\infty )_{\\widehat{\\Psi }}=S_{\\rm bulk}(R)_{\\widehat{\\Psi }}$ and the last integral in (REF ).", "The same argument leads to $S_{\\rm bulk}(\\infty )_{{\\Phi }}\\simeq S_{\\rm bulk}(\\infty )_{{\\Psi }}=S_{\\rm bulk}(R)_{{\\Psi }}$ .", "We also note that the last two integrals in (REF ) reflects two different ways to give rise to the uncertainty of the horizon area.", "First, since the horizon is deformed by the backreaction of observer's energy $q$ , the horizon area fluctuates as $q$ fluctuates [35], [36], [10].", "Second, as we remarked earlier, the probability distribution $|g(H_R)|^2$ is induced by the fluctuation in time, which leads to the fluctuation in the horizon radius, thus that in the horizon area at late time.", "Summarizing the discussion so far, $S (R)_{\\widehat{\\Phi }}$ can be written as $\\begin{split}S (R)_{\\widehat{\\Phi }}=-\\langle \\widehat{\\Phi }|h_{\\Psi |\\Phi }|\\widehat{\\Phi }\\rangle - \\langle \\widehat{\\Phi }|(\\beta (\\widehat{q}-X)-h_\\Psi )|\\widehat{\\Phi }\\rangle + S_{\\rm bulk}(\\infty )_{\\widehat{\\Phi }}.\\end{split}$ As we will see in the next section, whereas the first term is identified with the negative of the change in the generalized entropy, the term $- \\langle \\widehat{\\Phi }|(\\beta (\\widehat{q}-X)-h_\\Psi )|\\widehat{\\Phi }\\rangle = \\langle \\widehat{\\Phi }|(H_R-\\beta \\widehat{q})|\\widehat{\\Phi }\\rangle $ can be interpreted as the deformation of the horizon area from the initial value.", "This leads to the relation $S (R)_{\\widehat{\\Phi }}=S_{\\rm gen}(R)_{\\widehat{\\Phi }}+$ (constant)." 
], [ "Horizon dynamics and entropy in quasi-dS space", "We now focus on the first term of the RHS in (REF ).", "It was argued in [9] that this term is identified with a negative of the relative entropy, $-S_{\\rm rel}(\\Phi || \\Psi )$ .", "Indeed, this is the case if we consider the expectation value of $-h_{\\Psi |\\Phi }$ with respect to $|\\Phi \\rangle $ , $\\begin{split}-\\langle \\Phi | h_{\\Psi |\\Phi }|\\Phi \\rangle &= \\langle \\Phi | \\log (\\Delta _{\\Psi |\\Phi })|\\Phi \\rangle =\\langle \\Phi | (\\log (\\rho _\\Psi )-\\log (\\rho _\\Phi )) |\\Phi \\rangle \\\\&={\\rm Tr}(\\rho _\\Phi \\log \\rho _\\Psi )-{\\rm Tr}(\\rho _\\Phi \\log \\rho _\\Phi )=-S_{\\rm rel}(\\Phi ||\\Psi ).\\end{split}$ Moreover, the relative entropy can be rewritten as [37], [9] $\\begin{split}S_{\\rm rel}( {\\Phi } || {\\Psi })=S_{\\rm gen}(\\infty )_{ {\\Phi }}-S_{\\rm gen}(R)_{ {\\Phi }},\\end{split}$ i.e., the difference between the generalized entropies at initial ($U=0)$ and late ($U=\\infty )$ times given by $\\begin{split}S_{\\rm gen}(R)=\\frac{A(b)}{4G}+S_{\\rm bulk}(R),\\quad S_{\\rm gen}(\\infty )=\\frac{A(b_\\infty )}{4G}+S_{\\rm bulk}(\\infty ),\\end{split}$ respectively, where $b$ and $b_\\infty $ indicate the horizon cuts at initial and late times as depicted in Figure REF .", "This comes from the observation that $H_R$ , the static time translation generator restricted to the static patch $R$ (recall that $H_0=h_\\Psi /\\beta =H_R-H_L$ ) satisfies $\\begin{split}\\langle {\\Phi } | \\beta H_R| {\\Phi }\\rangle =\\frac{A(b_\\infty )}{4G}-\\frac{A( b)}{4G}.\\end{split}$ Then from $S_{\\rm bulk}(\\infty )_\\Phi \\simeq S_{\\rm bulk}(\\infty )_\\Psi =S_{\\rm bulk}(R)_\\Psi $ we obtain $\\begin{split}S_{\\rm gen}(\\infty )_\\Phi -S_{\\rm gen}(R)_\\Phi &=\\frac{A(b_\\infty )}{4G}-\\frac{A( b)}{4G}+S_{\\rm bulk}(\\infty )_\\Phi -S_{\\rm bulk}(R)_\\Phi \\\\&\\simeq \\frac{A(b_\\infty )}{4G}-\\frac{A( b)}{4G}+S_{\\rm bulk}(R)_\\Psi -S_{\\rm bulk}(R)_\\Phi .\\end{split}$ From (REF ), the change in the horizon area can be written as $\\begin{split}\\frac{A(b_\\infty )}{4G}-\\frac{A( b)}{4G}&=\\beta \\langle \\Phi |H_R|\\Phi \\rangle =-\\langle \\Phi |\\log \\rho _\\Psi |\\Phi \\rangle -S_{\\rm bulk}(R)_\\Psi , \\end{split}$ from which the above expression is rewritten as $\\begin{split}S_{\\rm gen}(\\infty )_\\Phi -S_{\\rm gen}(R)_\\Phi &=-\\langle \\Phi |\\log \\rho _\\Psi |\\Phi \\rangle -S_{\\rm bulk}(R)_\\Phi \\\\&=-\\langle \\Phi |\\log \\rho _\\Psi |\\Phi \\rangle +\\langle \\Phi |\\log \\rho _\\Phi |\\Phi \\rangle =S_{\\rm rel}(\\Phi ||\\Psi ),\\end{split}$ which confirms (REF ).", "Meanwhile, when we replace $|\\Phi \\rangle $ by $|\\widehat{\\Phi }\\rangle $ , the first term in (REF ) is given by $\\begin{split}-\\langle \\widehat{\\Phi } | h_{\\Psi |\\Phi }|\\widehat{\\Phi }\\rangle &=\\langle \\widehat{\\Phi } | (\\log \\rho _\\Psi -\\log \\rho _\\Phi ) |\\widehat{\\Phi }\\rangle \\\\&=-\\langle \\widehat{\\Phi } | \\beta H_R|\\widehat{\\Phi }\\rangle -S_{\\rm bulk}(R)_\\Psi -\\langle \\widehat{\\Phi } | \\log \\rho _\\Phi |\\widehat{\\Phi }\\rangle ,\\end{split}$ where again the expression (REF ) for $\\log \\rho _\\Psi $ is used for the first two terms in the second line.", "To proceed, we note that the relation (REF ) holds even if $|\\Phi \\rangle $ is replaced by $|\\widehat{\\Phi }\\rangle $ , with the explicit values of $A(b_\\infty )$ and $A(b)$ are changed reflecting the backreaction of the observer.", "Moreover, since $\\log \\rho _\\Phi $ contains the probability distribution 
of the matter excitations in the bulk, it is tempting to relate the last term $ -\langle \widehat{\Phi } | \log \rho _\Phi |\widehat{\Phi }\rangle = -{\rm Tr}(\rho _{\widehat{\Phi }}\log \rho _\Phi )$ to the bulk entropy.", "However, it cannot be identified with $S_{\rm bulk}(R)_{\widehat{\Phi }}$ as the dynamics of the bulk in the state $|\widehat{\Phi }\rangle $ is also affected by the fluctuations in $q$ and $x$ , which are not reflected in $\log \rho _\Phi $.", "Since these fluctuations remain until the end of inflation, when all the matter excitations cross the horizon, if we conjecture that the difference $S_{\rm bulk}(R)_{\widehat{\Phi }}-(-{\rm Tr}(\rho _{\widehat{\Phi }}\log \rho _\Phi ))$ , which contains the effects of the fluctuations in $q$ and $x$ on the bulk dynamics, is time independent at leading order, it can be identified with $S_{\rm bulk}(\infty )_{\widehat{\Phi }}-S_{\rm bulk}(\infty )_{{\Phi }}$.", "Then we obtain $\begin{split}-\langle \widehat{\Phi } | h_{\Psi |\Phi }|\widehat{\Phi }\rangle &=\Big ( -\frac{A(b_\infty )}{4G}+\frac{A( b)}{4G}\Big )-S_{\rm bulk}(R)_\Psi + \big (S_{\rm bulk}(R)_{\widehat{\Phi }}-S_{\rm bulk}(\infty )_{\widehat{\Phi }}+S_{\rm bulk}(\infty )_{{\Phi }}\big )\\&=-\Big (\frac{A(b_\infty )}{4G}+S_{\rm bulk}(\infty )_{\widehat{\Phi }}\Big )+\Big (\frac{A( b)}{4G}+S_{\rm bulk}(R)_{\widehat{\Phi }}\Big )\\&=-\big (S_{\rm gen}(\infty )_{\widehat{\Phi }}-S_{\rm gen}(R)_{ \widehat{\Phi }}\big ),\end{split}$ where in the second line we use the relation $S_{\rm bulk}(\infty )_{{\Phi }} \simeq S_{\rm bulk}(\infty )_{{\Psi }}=S_{\rm bulk}(R)_{{\Psi }}$.", "Now we investigate the change in the horizon area more explicitly.", "When the observer's energy $q$ in perfect dS space is concentrated in a tiny region, a Schwarzschild-de Sitter black hole can be created and the metric is modified as $\begin{split}&ds^2=-f(r_s)dt_s^2+\frac{1}{f(r_s)}dr_s^2+r_s^2(d\theta ^2+\sin ^2\theta d\phi ^2),\\&f(r_s)=1-\frac{2G q}{r_s}-H^2 r_s^2.\end{split}$ (There has been a conjecture that, in the absence of an observer collecting information, quantum gravity forbids the production of the black hole through the fluctuation [38]; for discussions on how this conjecture applies to the dS background, see, e.g., [39], [40], [41], [42].)", "If $q$ is small enough, say, $q\lesssim H$ or $q/\Delta E \sim H^2/M_{\rm Pl}^2 \ll 1$ , the linear expansion in $q$ is valid such that the cosmological horizon and the black hole radii are approximated as $H^{-1}-Gq$ and $2Gq$ , respectively.", "For quasi-dS space, the radius of the cosmological horizon deformed by the slow-roll as well as by the backreaction of the observer's energy is given by $r_H=H^{-1}+\epsilon _H\Delta t -Gq$.", "Here $H$ is the initial value of the Hubble parameter, and the Hubble parameter during the inflationary period can be approximated by this constant value.", "Then the (cosmological) horizon area at initial time is estimated as $\begin{split}\frac{A( b)}{4G} \simeq \frac{\pi }{G}(H^{-1}-Gq)^2 \simeq \frac{\pi }{GH^2} - \beta q,\end{split}$ and that at late time, namely, just after the inflationary period $\Delta t\simeq (\epsilon _H H)^{-1}$ , is approximated as $\begin{split}\frac{A(b_\infty )}{4G} \simeq \frac{\pi }{G}(H_f^{-1}-Gq)^2 \simeq \frac{\pi }{GH_f^2} - \beta _f q,\end{split}$ where $H_f^{-1}=H^{-1} +\epsilon _H\Delta t$ is the value of the Hubble
radius at the end of inflation and $\\beta _f=(2\\pi )/H_f$ .", "As we have seen in Section , the change in the horizon area leads to the energy flux across the horizon as $\\langle \\widehat{\\Phi }| H_R |\\widehat{\\Phi }\\rangle =\\int dt \\beta ^{-1}(d/dt)[A/(4G)]$ , or symbolically, $\\langle \\widehat{\\Phi }| \\beta H_R |\\widehat{\\Phi }\\rangle =[A(b_\\infty )-A(b)]/4G$ .", "Since $X+H_0=X+h_\\Psi /\\beta = H_R$ , (REF ) is rewritten as $\\begin{split}S (R)_{\\widehat{\\Phi }}&=-(S_{\\rm gen}(\\infty )_{\\widehat{\\Phi }}-S_{\\rm gen}(b)_{\\widehat{\\Phi }})- \\langle \\widehat{\\Phi }|\\beta \\widehat{q} |\\widehat{\\Phi }\\rangle +\\Big (\\frac{A(b_\\infty )}{4G}-\\frac{A(b )}{4G}\\Big )+ S_{\\rm bulk}(\\infty )_{\\widehat{\\Phi }}\\\\&=S_{\\rm gen}(b)_{\\widehat{\\Phi }}- \\langle \\widehat{\\Phi }|\\beta \\widehat{q} |\\widehat{\\Phi }\\rangle -\\frac{A(b )}{4G}.", "\\end{split}$ We also note that the state $|\\widehat{\\Phi }\\rangle $ contains the probability distribution of the observer's energy $q$ , in which $-\\beta q$ in (REF ) is modified to $- \\langle \\widehat{\\Phi }|\\beta \\widehat{q} |\\widehat{\\Phi }\\rangle $ .", "Therefore, we arrive at $\\begin{split}S (R)_{\\widehat{\\Phi }} = S_{\\rm gen}(b)_{\\widehat{\\Phi }}-\\frac{\\pi }{GH^2},\\end{split}$ and since $H$ is a constant, $S (R)_{\\widehat{\\Phi }}$ is identified with $S_{\\rm gen}(b)_{\\widehat{\\Phi }}$ up to the addition of a constant." ], [ "Conclusion", "Throughout this article, we have investigated how Type II$_1$ von Neumann algebra description of perfect dS space is modified in the inflationary quasi-dS space.", "Unlike perfect dS space, quasi-dS space allows the nonvanishing energy flux across the horizon, which is identified with the expectation value of the static time translation generator.", "In the evaluation of the energy flux, we assume the inflationary period to be $(\\epsilon _H H)^{-1}$ , which is natural in the sense that after this time scale $H$ deviates significantly from the initial value hence it is no longer approximated as a constant.", "Then both the energy flux and its fluctuation diverge in the $G\\rightarrow 0$ limit.", "Here the fluctuation originates from the breaking of the dS isometry associated with the static time translation, which induces the uncertainty of time, and also the fluctuation in the value of $H$ at the end of inflation.", "As a result, the inflationary quasi-dS space can be described by Type II$_\\infty $ algebra.", "This is different from Type II$_1$ algebra for perfect dS space : since the horizon radius fluctuates by the uncertainty of the observer's energy only, the entropy of any quantum state cannot exceed that of empty dS space in the Bunch-Davies state.", "In contrast, in Type II$_\\infty $ algebra for quasi-dS space, due to the divergent fluctuation of the energy flux, the trace hence the entropy is not well defined for the identity describing the maximal entanglement of the Bunch-Davies state, and there is no upper bound on the entropy.", "On the other hand, there has been a claim called the `dS swampland conjecture' that dS space is unstable in quantum gravity, which is supported by the distance conjecture and the covariant entropy bound [43], [44], [45], [46].", "Estimation based on the conjecture suggests the much shorter inflationary period given by $(\\epsilon _H^{-1/2} H^{-1})\\log (M_{\\rm Pl}/H)$ , after which $\\epsilon _H$ becomes ${\\cal O}(1)$ and the background geometry is no longer close to dS space [47], [48].", "Even shorter 
inflationary period $H^{-1}\\log (M_{\\rm Pl}/H)$ was conjectured under the name of `trans-Planckian censorship conjecture', which forbids the horizon crossing of the trans-Planckian modes [49].", "In these cases, the energy flux across the horizon, or $\\langle \\Phi |H_R|\\Phi \\rangle $ is given by $\\epsilon _H^{1/2}[1/(G H)]\\log (M_{\\rm Pl}/H)$ and $\\epsilon _H [1/(G H)]\\log (M_{\\rm Pl}/H)$ , respectively.", "Since $\\epsilon _H \\sim {\\cal O}(G)$ , the former still diverges but the latter is of ${\\cal O}(1)$ in the $G\\rightarrow 0$ limit.", "In both cases, the fluctuation $\\delta H_R$ becomes vanishing in the $G\\rightarrow 0$ limit as the values in two cases are estimated as $(\\epsilon _H/\\pi )^{1/2}\\log (M_{\\rm Pl}/H)[\\log (M_{\\rm Pl}/H)+1]$ and $(\\epsilon _H^3/\\pi )^{1/2}\\log (M_{\\rm Pl}/H)[\\log (M_{\\rm Pl}/H)+1]$ , respectively.", "Hence, the renormalized operator $H^{\\prime }_R=H_R-\\langle \\Phi |H_R|\\Phi \\rangle $ is well defined.", "Then in this limit, we do not need to consider the probability distribution $g(X)$ reflecting the fluctuation in time, and the von Neumann algebra can be defined in the same way as that in the pure dS space with observer, i.e., Type II$_1$ algebra.", "This shows that in addition to the nonzero energy flux across the horizon, the inflationary period plays the crucial role in determining the appropriate von Neumann algebra description of spacetime during the inflation." ], [ "Coordinate systems on de Sitter space", "We list several coordinate descriptions of dS space which are useful in our discussion.", "For more complete reviews, see, e.g., [50], [51].", "A natural description of dS space as seen by a static observer surrounded by the horizon is the static coordinates, in which the metric is written as $\\begin{split}ds^2=-(1-H^2 r_s^2)dt_s^2+\\frac{dr_s^2}{1-H^2 r_s^2}+r_s^2(d\\theta ^2+\\sin ^2\\theta d\\phi ^2).\\end{split}$ From this, one immediately finds that the timelike Killing vector is just given by $k^a=(\\partial _{t_s})^a$ and the horizon is located at $r_s=H^{-1}$ .", "In order to see the causal structure, it is convenient to introduce the tortoise coordinate, $\\begin{split}dr_*=\\frac{dr_s}{1-H^2 r_s^2},\\quad r_*=\\frac{1}{2H}\\log \\Big (\\frac{1+Hr_s}{1-Hr_s}\\Big ), \\end{split}$ such that the $(t_s, r_*)$ part of the metric is written in the conformally flat form.", "Then one can define the Eddington-Finkelstein coordinates $u=t_s-r_*$ and $v=t_s+r_*$ as lightcone coordinates.", "For the extension to the region beyond the horizon, the Kruskal-Szekeres coordinates are useful.", "In the static patch, they are given by $\\begin{split}&U =\\frac{1}{H}e^{Hu}=\\frac{1}{H}e^{Ht_s}\\sqrt{\\frac{1-Hr_s}{1+Hr_s}},\\\\&V=-\\frac{1}{H}e^{-Hv}=-\\frac{1}{H}e^{-Ht_s}\\sqrt{\\frac{1-Hr_s}{1+Hr_s}},\\end{split}$ in terms of which the metric is written as $\\begin{split}ds^2=-\\frac{4}{(1-H^2 UV)^2} dUdV+\\frac{(1+H^2 UV)^2}{H^2(1-H^2UV)^2}(d\\theta ^2+\\sin ^2\\theta d\\phi ^2).\\end{split}$ Using (REF ), the timelike Killing vector in dS is rewritten as $k^a=(\\partial _{t_s})^a=H(U\\partial _U-V\\partial _V)^a$ .", "The future (past) horizon is a null hypersurface satisfying $V=0$ ($U=0$ ), which is normal to $k^a=H U \\partial _U$ ($k^a=-H V \\partial _V$ ) or $k_a=-2 HU (dV)_a$ ($k_a=2 HV (dU)_a$ ).", "Thus, $U$ ($V$ ) is a natural canonical affine parameter on the future (past) horizon.", "Meanwhile, in order to describe the inflationary cosmology, the flat coordinates $(t, r)$ in terms of which the metric is 
written as $\\begin{split}ds^2=-dt^2+e^{2Ht}\\big [dr^2+r^2(d\\theta ^2+\\sin ^2\\theta d\\phi ^2)\\big ]\\end{split}$ are useful.", "They are related to the static coordinates by $\\begin{split}t_s=t-\\frac{1}{2H}\\log \\big (1-H^2 r^2 e^{2Ht}\\big ),\\quad r_s=r e^{Ht},\\end{split}$ which give $\\begin{split}&\\frac{\\partial t_s}{\\partial t}=\\frac{1}{1-H^2 r_s^2},\\quad \\frac{\\partial r_s}{\\partial t}= Hr_s,\\quad \\frac{\\partial t_s}{\\partial r}=\\frac{e^{Ht}Hr_s}{1-H^2r_s^2},\\quad \\frac{\\partial r_s}{\\partial r}=e^{Ht},\\\\&\\frac{\\partial t}{\\partial t_s}=1,\\quad \\frac{\\partial r}{\\partial t_s}=-H r,\\quad \\frac{\\partial t}{\\partial r_s}=-\\frac{Hr_s}{1-H^2 r_s^2 },\\quad \\frac{\\partial r}{\\partial r_s}=\\frac{e^{-Ht}}{1-H^2 r_s^2 }.\\end{split}$ Then the timelike Killing vector is written as $k^a=(\\partial _{t_s})^a=(\\partial _t - Hr \\partial _r)^a$ ." ], [ "Derivation of the density matrix", "Here we briefly sketch how (REF ), the expression for the density matrix $\\rho _{\\widehat{\\Phi }}$ associated with $|\\widehat{\\Phi }\\rangle $ is obtained, following [10].", "The relation (REF ), $\\langle \\Psi |\\Delta _{\\Phi |\\Psi }a|\\Psi \\rangle =\\langle \\Phi |a|\\Phi \\rangle $ indicates that the ${\\cal A}_R$ part of $\\rho _{\\widehat{\\Phi }}$ is given by $\\Delta _{\\Phi |\\Psi }$ .", "Meanwhile, from the facts that the algebra $\\widehat{{\\cal A}}_R$ is generated by $\\lbrace a, \\widehat{q}-(H_0+X)\\rbrace $ where $H_0=\\beta h_\\Psi $ and that $h_\\Psi -h_{\\Phi |\\Psi } =h_\\Phi -h_{\\Psi |\\Phi }$ belongs to ${\\cal A}_R$ , one finds that the combination $\\begin{split}e^{\\beta (-X+\\widehat{q})}\\Delta _{\\Phi |\\Psi }=e^{-\\beta X+\\beta (\\widehat{q}-h_\\Psi /\\beta )+(h_\\Psi -h_{\\Phi |\\Psi })}\\end{split}$ is well factorized into ${\\cal A}_R\\otimes B(\\mathbb {R})\\otimes B(\\mathbb {R}^+)$ .", "This motivates us to consider an ansatz $\\begin{split}\\rho _{\\widehat{\\Phi }}=\\frac{1}{\\beta ^2}g(X)^* f(\\widehat{q}-h_\\Psi /\\beta )^*e^{\\beta \\widehat{q}}\\Delta _{\\Phi |\\Psi }f(\\widehat{q}-h_\\Psi /\\beta )g(X) +{\\cal O}(\\varepsilon ).\\end{split}$ Since we assume that both $f(\\widehat{q}-h_\\Psi /\\beta )$ and $g(X)$ are slowly varying such that they are taken to be almost constant over $|q-h_\\Psi /\\beta |<1/\\varepsilon $ and $|x|<1/\\varepsilon $ , respectively, their commutators with any other operators are expected to be suppressed by ${\\cal O}(\\varepsilon )$ .", "Then $\\rho _{\\widehat{\\Phi }}$ can be written in the form of (REF ).", "To see $\\rho _{\\widehat{\\Phi }}$ satisfies (REF ), we consider the operator $\\begin{split}\\widehat{a}=a e^{-i u(\\beta (\\widehat{q}-X)- h_\\Psi )},\\end{split}$ which does not vanish for $\\beta |u| <\\varepsilon $ as it avoids the strong oscillation.", "Imposing $e^{iuh_\\Psi }=1+{\\cal O}(\\varepsilon )$ , its expectation value with respect to $|\\widehat{\\Phi }\\rangle =|\\Phi \\rangle \\otimes g(X)\\otimes f(\\widehat{q})$ is given by $\\begin{split}\\langle \\widehat{\\Phi }|\\widehat{a}|\\widehat{\\Phi }\\rangle &=\\int _{-\\infty }^\\infty dx |g(x)|^2 \\int _0^\\infty dq |f(q)|^2 \\langle \\Phi |ae^{-i u(\\beta ({q}-x)- h_\\Psi )}|\\Phi \\rangle \\\\&= \\int _{-\\infty }^\\infty dx |g(x)|^2 \\int _0^\\infty dq |f(q)|^2e^{-i u\\beta ({q}-x)}\\langle \\Phi |a|\\Phi \\rangle +{\\cal O}(\\varepsilon ).\\end{split}$ Here $\\langle \\Phi |a|\\Phi \\rangle $ can be replaced by $\\langle \\Psi |\\Delta _{\\Phi |\\Psi }a|\\Psi \\rangle $ .", "Ignoring ${\\cal O}(\\varepsilon 
)$ terms and using $h_\\Psi |\\Psi \\rangle =0$ , it becomes $\\begin{split}\\langle \\widehat{\\Phi }|\\widehat{a}|\\widehat{\\Phi }\\rangle &= \\int _{-\\infty }^\\infty dx \\int _0^\\infty dq \\langle \\Psi |\\big |g(x+h_\\Psi /\\beta )\\big |^2\\big |f(q)\\big |^2e^{-i u(\\beta ({q}-x)- h_\\Psi )}\\Delta _{\\Phi |\\Psi }a|\\Psi \\rangle \\\\&= \\int _{-\\infty }^\\infty dx \\int _0^\\infty dq \\langle \\Psi |\\big |g(x+h_\\Psi /\\beta )\\big |^2\\big |f(q )\\big |^2 \\Delta _{\\Phi |\\Psi }\\widehat{a}|\\Psi \\rangle \\\\&=\\int _{-\\infty }^\\infty \\beta dx e^{\\beta x} \\int _{0}^\\infty \\beta dq e^{-\\beta q }\\langle \\Psi |\\frac{1}{\\beta ^2}\\big | g(x+h_\\Psi /\\beta ) f(q )\\big |^2 e^{\\beta (-x+q)}\\Delta _{\\Phi |\\Psi }\\widehat{a}|\\Psi \\rangle .\\end{split}$ Matching this with $\\int \\beta dx e^{\\beta x}\\langle \\Psi _{\\rm max}|\\rho _{\\widehat{\\Phi }}\\widehat{a}|\\Psi _{\\rm max}\\rangle $ , we find that ${\\rho }_{\\widehat{\\Phi }}$ is written as (REF )." ] ]
2212.05637
[ [ "Information causality in multipartite scenarios" ], [ "Abstract Bell nonlocality is one of the most intriguing and counter-intuitive phenomena displayed by quantum systems.", "Interestingly, such stronger-than-classical quantum correlations are somehow constrained, and one important question to the foundations of quantum theory is whether there is a physical, operational principle responsible for those constraints.", "One candidate is the information causality principle, which, in some particular cases, is proven to hold for quantum systems and to be violated by stronger-than-quantum correlations.", "In multipartite scenarios, though, it is known that the original formulation of the information causality principle fails to detect even extremal stronger-than-quantum correlations, thus suggesting that a genuinely multipartite formulation of the principle is necessary.", "In this work, we advance towards this goal, reporting a new formulation of the information causality principle in multipartite scenarios.", "By proposing a change of perspective, we obtain multipartite informational inequalities that work as necessary criteria for the principle to hold.", "We prove that such inequalities hold for all quantum resources, and forbid some stronger-than-quantum ones.", "Finally, we show that our approach can be strengthened if multiple copies of the resource are available, or, counter-intuitively, if noisy communication channels are employed." ], [ "Introduction", "It is undeniable that quantum theory is one of the most successful scientific theories ever developed.", "Its mathematical formalism and axioms, although counter-intuitive, have led to extremely precise and, oftentimes, intriguing predictions, which have been confirmed experiment after experiment.", "Among the most counter-intuitive phenomena that quantum systems can display, Bell nonlocality [1], [2] is one of the most fascinating.", "Bell nonlocality refers to stronger-than-classical correlations between outcomes of measurements performed on space-like separated systems.", "However, as noted by Tsirelson [3], and, later, by Popescu and Rohrlich [4], nonlocal correlations displayed by quantum systems are limited and are not as strong as they could be, in principle.", "From a formal viewpoint, such limitations follow from the mathematical axioms of quantum theory.", "From a physical viewpoint, though, it would be of interest to the foundations of quantum theory to identify operational axioms that could explain the limits of quantum nonlocality and, ultimately, lead to the derivation of the mathematical axioms of the theory from first principles.", "In the last couple of decades, several principles have been proposed with the goal to explain why quantum theory is not more nonlocal.", "Among them, are the principle of nontrivial communication complexity [5], the principle of macroscopic locality [6], and the principle of local orthogonality [7].", "Although all of the cited candidate principles are very good at identifying and forbidding unreasonable consequences of stronger-than-quantum correlations, they are provably not capable of excluding all stronger-than-quantum correlations [8].", "One candidate principle that may be capable of singling out the nonlocal correlations allowed by quantum theory from more nonlocal ones is the information causality (IC) principle [9].", "Roughly speaking, the principle states that in a communication scenario, the receiver's available information concerning the sender's initial set of data can not 
exceed the amount of information carried by the message.", "A violation of the principle would allow, for instance, a receiver to obtain an amount of information corresponding to a single page of a book and only afterwards choose which page of the book to read.", "It is well known that all quantum correlations, despite being nonlocal, obey the information causality principle.", "It is also known that some stronger-than-quantum correlations violate the principle [9].", "It is unclear, though, whether all stronger-than-quantum correlations violate it.", "The main reason for this is the fact that the principle, although relatively intuitive, is very hard to formalize in the form of a mathematical criterion.", "A sufficient criterion for the violation of the principle was proposed in Ref. [9], but it was later proved that there are non-quantum correlations that do not violate it [10].", "Since then, other techniques have been developed to generate stronger criteria for the information causality principle [11], [12]; but, so far, none of them has been shown to be strong enough to characterize the exact set of quantum correlations, even in simple scenarios.", "Another requirement that has been discovered recently is that any operational principle has to be genuinely multipartite to correctly retrieve all quantum nonlocal correlations [13].", "This observation complements the finding that the bipartite formulation of the original IC proposal cannot be applied to exclude even some extremal tripartite stronger-than-quantum correlations [14].", "Hence, it is clear that a genuinely multipartite formulation of information causality is necessary for it to be a valid and tight operational principle for quantum theory.", "In this work, we propose a novel multipartite perspective for the information causality principle.", "First, we establish a multipartite communication task that, in a sense, generalizes the random access code (RAC) task associated with the original IC formulation of Ref. [9].", "Then, we present new multipartite information-theoretic criteria, which ensure IC in the newly proposed scenario.", "We then prove that the inequalities hold for all quantum nonlocal resources, and are violated by some stronger-than-quantum ones.", "Also, if many copies of the nonlocal resource are available, we show that, by applying the concatenation approach presented in Ref. [9], and in analogy to the result obtained there, one obtains stronger criteria for IC.", "In addition, we show that our findings are in agreement with the recent results reported in Ref. [15], where it is shown that the employment of noisy communication channels leads to the same constraints as the concatenation procedure just mentioned."
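, "As an elementary quantitative illustration of the principle (stated here informally; the notation is formalized in the next section), suppose Alice holds $n$ independent uniformly random bits and sends an $m$-bit message with $m<n$ : information causality then demands that Bob's total information gain obey $\sum _{i=1}^{n}I(X_i:G_i)\le m$ , whereas a hypothetical strategy allowing Bob to learn with certainty whichever single bit he selects after receiving the message would give $I(X_i:G_i)=1$ for every $i$ , i.e., $\sum _{i=1}^{n}I(X_i:G_i)=n>m$ , which is precisely the `read any page after receiving the message' situation excluded above."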
], [ "Bipartite Information causality", "The information causality scenario [9] considers a strictly bipartite communication task: Alice encodes a bit-string $\mathbf {x}$ of length $n$ in a message $M$ of $m$ bits (where $m<n$ ), and sends this message to Bob; he then decodes the message and produces a guess $G_i$ about a randomly selected bit $X_i$ (the $i$-th bit of the string $\mathbf {x}$ ) of Alice.", "Within this context, the information causality principle states that Bob's information gain about the initial $n$ bits of Alice, considering all their possible local as well as pre-established shared resources, cannot be greater than the number of bits $m$ sent by Alice.", "The principle, proven to hold quantum mechanically, is mathematically captured by an information inequality, written in terms of Shannon entropies, as $ \sum _{i=1}^{n} I(X_i:G_i) \le H(M), $ where $H(M)=-\sum _{m}p(m)\log _2{p(m)}$ is the Shannon entropy of the random variable $M$ described by the probability distribution $p(m)=p(M=m)$.", "In turn, $I(X_i:G_i)=H(X_i)+H(G_i)-H(X_i,G_i)$ stands for the mutual information between Alice's bit $X_i$ and Bob's guess $G_i$.", "The information causality inequality (REF ) can be violated by post-quantum correlations, that is, correlations incompatible with the quantum mechanical rules.", "A paradigmatic example is that of a Popescu-Rohrlich (PR) box [4], a non-signalling (NS) correlation described by the probability distribution $p(a,b|x,y) = \left\lbrace \begin{aligned}1/2 & \quad \text{if} \quad a\oplus b = xy ; \\0& \quad \text{else}.\end{aligned}\right.$ To verify that this is indeed the case, it suffices to consider that Alice has two input bits, described by the random variables $X_1$ and $X_2$ , and to consider the following protocol in the IC scenario.", "Alice inputs $x_1 \oplus x_2$ on her share of the PR-box, obtaining an outcome $a$ that is then encoded in the message $m=a\oplus x_1$ sent to Bob.", "Bob inputs $y=0$ if his aim is to guess the bit $x_1$ , and $y=1$ if he wants to guess Alice's bit $x_2$ , using as his guess $g_i=m \oplus b$.", "Using the definition of the PR-box, if $y=0$ we see that $g_1= x_1 \oplus a \oplus b= x_1$.", "If $y=1$ , $g_2= x_1 \oplus a \oplus b= x_1 \oplus x_1 \oplus x_2=x_2$ , implying that $I(X_1:G_1)=I(X_2:G_2)=H(M)=1$ , thus violating the IC inequality (REF ).", "Notice that the simple protocol above employs a single copy of the distribution $p(a,b\vert x,y)$ , which operationally can be understood as a black box taking local inputs and producing correlated outputs.", "By introducing a concatenation procedure (a version of which will be detailed below), one can also consider the case of multiple copies of identical binary-input/binary-output non-signalling boxes.", "In particular, the concatenation procedure introduced in [9] shows that the IC inequality implies another constraint for non-signalling correlations, given by $ E_I^2 + E_{II}^2 \le 1, $ where $E_j= 2 P_j -1$ is defined in terms of the conditional probabilities $p(a,b|x,y)$ as $\begin{aligned} P_I &= \frac{1}{2}[p(a\oplus b = 0|0,0) + p(a\oplus b = 0|1,0)];\\P_{II} &= \frac{1}{2}[p(a\oplus b = 0|0,1) + p(a\oplus b = 1|1,1)].\end{aligned}$", "In fact, this constraint is equivalent to the bipartite quadratic Bell inequality known as Uffink's inequality [16] (interestingly, however, in the next sections we will show that for multipartite scenarios such equivalence no longer holds).", "Of particular relevance is the fact that this mapping from the IC inequality (REF ) to
Uffink's inequality (REF ) proves that any correlation beyond Tsirelson's limit for the Clauser-Horne-Shimony-Holt (CHSH) inequality [17] will violate the information causality principle, which thus witnesses its incompatibility with quantum theory.", "More precisely, as proven by Tsirelson [18], the classically valid CHSH inequality $\mathrm {CHSH}= \left<A_0B_0 \right>+\left<A_0B_1 \right>+\left<A_1B_0 \right>-\left<A_1B_1 \right> \le 2,$ achieves a maximum value in quantum theory of $\mathrm {CHSH}_{Q} = 2\sqrt{2}$ .", "A PR-box in turn leads to $\mathrm {CHSH}_{NS}=4$ .", "A direct analysis shows that any distribution achieving $\mathrm {CHSH} > \mathrm {CHSH}_{Q}$ violates Uffink's inequality and thus has its post-quantum nature witnessed by the information causality principle.", "One should remark, however, that it remains unclear whether the whole set of quantum correlations can be recovered from the IC principle [10], [8], [19].", "Interestingly, there are post-quantum correlations known not to violate Uffink's inequality [11], [20].", "Motivated by this insufficiency of the standard formulation of the IC principle, a general informational-geometric approach was introduced in [11] and shown to lead to stronger IC information inequalities.", "An example is the inequality $\begin{aligned} \sum _{i=1}^{n}I(X_i : G_i,M) + \sum _{i=2}^{n}I(X_1 : X_i| G_i,M) \\ \le H(M) + \sum _{i=2}^{n}H(X_i) - H(X_0,...,X_{n-1}).\end{aligned}$", "The original IC inequality (REF ) is a particular case of (REF ).", "As a matter of fact, the stronger IC inequality (REF ) was proven, in some cases, to be even stronger than Uffink's inequality (REF ).", "More precisely, with a single copy, inequality (REF ) can detect the post-quantumness of correlations that cannot be detected by (REF ) even in the asymptotic regime of infinite copies of the correlation under test.", "Figure: Quantum causal structure, described as a DAG, associated with the multipartite information causality scenario.", "Building upon the original IC criterion, a more recent approach [15] generalized it to consider noisy channels between Alice and Bob, namely $\sum _{i=1}^{n} I(X_i:G_i) \le C,$ where $C\equiv I(M:M^{\prime })$ is the noisy channel capacity defined in terms of Shannon's mutual information, and $M^{\prime }$ is the message that reaches the receiver after passing through the channel.", "Contrary to the original formulation, this new approach proposes to search for the strongest non-signaling correlations allowed by IC for every possible noisy channel between the parts.", "This generalization allows one to recover standard results, for instance Tsirelson's bound implied by Uffink's inequality (REF ), most importantly without the need for a concatenation procedure, that is, at the single-copy level.", "This stems from the fact that as the channel's noise is increased, the bound in (REF ) becomes stronger, capable of detecting the post-quantum nature of correlations that cannot be witnessed in the noise-free version.", "The main goal of this paper, as will be detailed in the following, is to generalize all such results, stated so far only for the bipartite scenario, to the multipartite setting." ], [ "New criteria for multipartite information causality", "Our first goal is to introduce a natural generalization of the bipartite IC inequality to the multipartite scenario.", "For that, we will closely follow the information-theoretic approach based on quantum causal structures 
envisioned in [11].", "In this approach, information principles such as information causality are nothing else than entropic constraints arising from imposing a quantum description to a given causal structure.", "As such, each quantum causal structure will have associated with it a given set of entropic inequalities, each of which can be interpreted as an information-theoretical principle.", "In this work, we consider a particular class of quantum causal structures that naturally generalize the known bipartite scenario.", "Consider $N$ parts, among which $N-1$ are senders in possession of their respective bit-strings $\\mathbf {x}^k = (X_1^k, X_2^k, \\cdots , X_n^k)$ , where $k\\in \\lbrace 1, 2, \\cdots , N-1\\rbrace $ .", "Each sender encodes a classical message $M_k$ of size $m < n$ to the $N^{th}$ -part, the receiver who has to compute one out of $n$ possible bits functions $f_j (X_j^1, X_j^2, \\cdots , X_j^{N-1})$ , by producing the guess $G_j$ , where $j\\in \\lbrace 1, \\cdots , n\\rbrace $ .", "This scenario is illustrated for the tripartite case, as a directed acyclic graph (DAG), in Fig.", "REF .", "As proven in the Appendix , considering that additionally to any local operations, every part may explore their pre-established correlations mediated by a joint quantum state $\\rho $ , the following multipartite version of information causality holds: $\\sum _k^{N-1} \\sum _i^n I(X_i^k : X_i^1, \\dots ,X_i^{k-1}, X_i^{k+1}, \\dots , X_i^{N-1}, G_i) \\le H(M_1,\\dots ,M_{N-1})+ \\sum _k^{N-1} \\sum _i^n I(X_{i+1}^k, \\dots , X_n^k : X_i^k).$ To illustrate, we consider the tripartite scenario, depicted in Fig.", "REF , such that Alice and Bob have just two initial uncorrelated bits and that the communication task of Charlie is to compute two specific functions $f_1 = x^1_1\\oplus x^2_1$ and $f_2 = x^1_2\\oplus x^2_2$ .", "The communication task is trivialized if the parties share the following tripartite non-signaling (post-quantum) correlation [21], $p(a,b,c|x,y,z) = \\left\\lbrace \\begin{aligned}1/4 & \\quad \\text{if} \\quad a\\oplus b \\oplus c = xz \\oplus yz; \\\\0& \\quad \\text{else},\\end{aligned}\\right.$ where $a,b,c,x,y,z \\in \\lbrace 0,1\\rbrace $ .", "To achieve it, the parties perform the protocol detailed in Fig.", "REF .", "In each run, Charlie can always perfectly compute each of the functions, since $g_1 = x^1_1\\oplus x^2_1$ and $g_2 = x^1_2\\oplus x^2_2$ .", "In other words, similarly to the usual information causality scenario, Charlie has potential access to the four bits of Alice and Bob but receives just two bits communicated by them.", "Particularizing inequality (REF ) to this case, we obtain $\\mathcal {I} = I(X_1^1:X_1^2, G_1) + I(X_2^1:X_2^2, G_2)+ \\\\ + I(X_1^2:X_1^1, G_1) + I(X_2^2:X_2^1, G_2) \\le \\\\ H(M_1, M_2),$ an inequality that is maximally violated by the NS correlation (REF ) with the protocol described above, since $\\mathcal {I} = 4$ while the quantum valid upper bound is $H(M_x,M_y)=2$ .", "Figure: The communication protocol is performed by Alice, Bob, and Charlie that share a non-signaling resource.", "Alice (Bob) receive initially two bits {x 1 ,x 2 }\\lbrace x_1, x_2\\rbrace ({y 1 ,y 2 }\\lbrace y_1, y_2\\rbrace ) and perform her local measurements as x=x 1 ⊕x 2 x = x_1 \\oplus x_2 (y=y 1 ⊕y 2 y = y_1 \\oplus y_2).", "After obtaining her outputs aa (bb), encodes the message with m x =a⊕x 1 m_x = a\\oplus x_1 (m y =b⊕y 1 m_y = b \\oplus y_1).", "Charlie inputs on his side z=0z=0 if he wants to compute f 1 f_1, and z=1z=1 if he wants to 
compute f 2 f_2.", "After receiving the messages, Charlie computes his guess by following g j =m x ⊕m y ⊕cg_j = m_x \\oplus m_y \\oplus c.In fact, the multipartite version (REF ) can be violated by the multipartite extension of the post-quantum correlation (REF ), given by $p(a_1, a_2, \\cdots , a_{N}| x_1, x_2, \\cdots , x_{N}) = \\left\\lbrace \\begin{aligned} 1/&2^{N-1}& \\quad \\text{if} \\quad \\; &\\bigoplus _{k=1}^{N} a_k = \\bigoplus _{k=1}^{N-1} x_k x_{N};\\\\&0& \\quad \\text{else.", "}& \\end{aligned} \\right.$ Considering $n=2$ , $f_j = X_j^1 \\oplus X_j^2 \\oplus \\cdots \\oplus X_j^{N-1}$ , and the direct extension of the protocol described in Fig.", "REF for the multipartite case, we see that the communication task is trivialized, implying the maximal violation of the multipartite IC inequality (REF ).", "Additionally, it is important to highlight that the multipartite inequality (REF ) does not consist of the parallel application of the criterion (REF ) between each sender with the receiver, an approach followed by Ref.", "[22].", "Indeed, looking at the simplest tripartite case for $n=2$ , when the receiver Charlie perfectly computes $g_1 = x^1_1\\oplus x^2_1$ and $g_2 = x^1_2\\oplus x^2_2$ all informational terms in the left-hand side of Eq.", "(REF ) vanish, showing that the post-quantum behavior reached with the protocol of Fig.REF cannot be detected by this previous approach based on the parallelization of the bipartite IC criterion." ], [ "Concatenation procedure", "As previously discussed, the first proposal for information causality [9] with the criterion (REF ) was able to witness the post-quantum nature of all non-signaling correlations beyond Tsirelson’s bound.", "For that, however, it was essential to consider a concatenation procedure involving many copies of the correlation under test.", "Here we show how such concatenation can be constructed for the tripartite scenario, also generalizing it to arbitrary multipartite scenarios.", "Similarly to the bipartite scenario, the success probability for the protocol in Fig.", "(REF ) can be connected to the probability of the resource shared between the parts, more specifically to the probability $p(a\\oplus b \\oplus c = xz \\oplus yz | x, y, z)$ .", "Clearly, the probabilities of Charlie correctly computing the values of $x_1 \\oplus y_1$ and $x_2 \\oplus y_2$ are, respectively, $P_I = \\frac{1}{4}[&p(a\\oplus b \\oplus c = 0|0,0,0) \\nonumber \\\\+ &p(a\\oplus b \\oplus c = 0|0,1,0) \\\\+ &p(a\\oplus b \\oplus c = 0|1,0,0) \\nonumber \\\\ +& p(a\\oplus b \\oplus c = 0|1,1,0)];\\nonumber $ $P_{II} = \\frac{1}{4}[&p(a\\oplus b \\oplus c = 0|0,0,1)\\\\ +& p(a\\oplus b \\oplus c = 1|0,1,1)\\nonumber \\\\ +&p(a\\oplus b \\oplus c = 1|1,0,1)\\nonumber \\\\ +& p(a\\oplus b \\oplus c = 0|1,1,1)].\\nonumber $ In the particular case where the parties share the correlation described by (REF ), we obtain $P_I = P_{II} = 1$ .", "In Fig.", "REF , we specify the concatenation procedure for the tripartite communication protocol of Fig.", "REF .", "In this case, Alice and Bob initially receive the respective bit-strings $\\mathbf {x}$ and $\\mathbf {y}$ of length $n = 2^K$ and share $2^K - 1$ identical copies of binary-input/binary-output non-signaling boxes with Charlie.", "The success probability that Charlie produces a guess $g_j$ correctly is given by (see appendix ) $p(g_j = x_j \\oplus y_j) = \\frac{1}{2}(1+E_I^{K-r}E_{II}^{r}),$ where $r$ denotes the number of times that Charlie measures $z=1$ in the $K$ levels of the 
concatenation code displayed in Fig. REF and $E_i = 2P_i -1$ (see Eq. ()).", "By considering this success probability, we show in Appendix  that information causality is always violated when $E_I^2 + E_{II}^2 > 1$ .", "In other words, when combined with a concatenation procedure and multiple copies of the behaviour under test, the tripartite information causality inequality (REF ) leads to a generalization of the bipartite inequality (REF ), given by $ E_I^2 + E_{II}^2 \le 1.$", "Similarly to (REF ), the multiple copies criterion (REF ) is maximally violated by the behaviour (REF ) since, for this case, $E_I = E_{II} = 1$ .", "Moreover, for isotropic correlations described by a visibility parameter $E$ and such that $E_I = E_{II}= E$ , the tripartite multiple copies inequality is violated when $E > 1/\sqrt{2}$ , which is exactly the same bound obtained by [9] for the bipartite scenario.", "However, for the tripartite scenario, the Navascués-Pironio-Acín (NPA) hierarchy [23] implies that for any $E \ge 1/2$ the corresponding correlation will have a post-quantum nature.", "That is, the tripartite information causality, at least with the specific concatenation considered here, is unable to recover Tsirelson's bound.", "As previously mentioned, the bipartite version of (REF ) is equivalent to the quadratic constraint obtained by Uffink [16].", "However, for more than two parts, such equivalence no longer holds.", "For the tripartite scenario, the Uffink inequality reads as $(C_{001}+C_{010}+C_{100}-C_{111})^2 + (C_{110}+C_{101}+C_{011}-C_{000})^2 \le 16,$ where $C_{xyz} = \sum _{a,b,c} (-1)^{a+b+c}p(a,b,c|x,y,z)$ .", "Indeed, there is no relabelling that maps inequality (REF ) into inequality (REF ).", "Even more importantly, as we will show in the next section, there are post-quantum correlations violating the multiple copies inequality (REF ) that do not violate the tripartite Uffink inequality (REF ) (nor any of the inequalities obtained from it by relabelling parties, measurements, and outcomes)."
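As a concrete check of the quantities defined above, the following minimal Python sketch (our own illustration, not the original code of [24], [25]; function names are ours) evaluates $P_I$, $P_{II}$, the biases $E_I$, $E_{II}$, the concatenation success probability, and the correlators entering the tripartite Uffink expression for the non-signaling box (REF).

import itertools

def box(a, b, c, x, y, z):
    # Tripartite non-signaling box: p(a,b,c|x,y,z) = 1/4 if a + b + c = xz + yz (mod 2), else 0.
    return 0.25 if (a ^ b ^ c) == ((x & z) ^ (y & z)) else 0.0

def parity_prob(p, target, x, y, z):
    # P(a + b + c = target mod 2 | x, y, z) for a conditional distribution p(a,b,c|x,y,z).
    return sum(p(a, b, c, x, y, z)
               for a, b, c in itertools.product((0, 1), repeat=3)
               if (a ^ b ^ c) == target)

# Success probabilities P_I (z = 0) and P_II (z = 1) and the corresponding biases.
P_I = 0.25 * sum(parity_prob(box, 0, x, y, 0) for x, y in itertools.product((0, 1), repeat=2))
P_II = 0.25 * sum(parity_prob(box, x ^ y, x, y, 1) for x, y in itertools.product((0, 1), repeat=2))
E_I, E_II = 2 * P_I - 1, 2 * P_II - 1
print("E_I^2 + E_II^2 =", E_I ** 2 + E_II ** 2)      # 2.0, maximal violation of the bound 1

# Concatenation success probability (1 + E_I^(K-r) E_II^r) / 2 for a K-level scheme.
K = 3
for r in range(K + 1):
    print("K =", K, " r =", r, " p(success) =", 0.5 * (1 + E_I ** (K - r) * E_II ** r))

# Correlators C_xyz and the tripartite Uffink expression (one fixed labelling, bound 16).
def corr(p, x, y, z):
    return sum((-1) ** (a + b + c) * p(a, b, c, x, y, z)
               for a, b, c in itertools.product((0, 1), repeat=3))

uffink = (corr(box, 0, 0, 1) + corr(box, 0, 1, 0) + corr(box, 1, 0, 0) - corr(box, 1, 1, 1)) ** 2 \
       + (corr(box, 1, 1, 0) + corr(box, 1, 0, 1) + corr(box, 0, 1, 1) - corr(box, 0, 0, 0)) ** 2
print("tripartite Uffink value:", uffink)

The same helpers apply unchanged to any tripartite conditional distribution $p(a,b,c|x,y,z)$, for instance the slice of the non-signaling set used in the Numerical Tests section below.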
], [ "No concatenation version", "The motivation for the generalization of the IC inequality given by (REF ) comes from the fact that the upper bound in the information gain of the receiver in (REF ) should be understood as the single use of a noiseless classical channel of capacity $|M|$ .", "Interestingly, our results in the multipartite scenario can also take into account this insight.", "Indeed, for the multipartite scenario described in Section we may consider that each of the $N-1$ senders performs a single use of a classical noisy channel of capacity $C_k = I(M_k : M_{k}^{\\prime })$ , where $M_k$ is the message encoded by each sender $k$ and $M_k ^{\\prime }$ is the respective message reaching the decoder after the message passes through the noisy channel.", "In this case, the criterion defined in (REF ) is easily rewritten in terms of the channel capacity by considering data processing inequalities and the fact that $M_k$ completely determines $M_{k}^{\\prime }$ (that is, $M_{k}^{\\prime }$ is conditionally independent of any random variable $V$ of the causal structure given $M_k$ , $I(M_{k}^{\\prime } : V | M_{k}) = 0$ ).", "As proven in Appendix , it follows that $\\sum _k^{N-1} \\sum _i^n I(X_i^k : X_i^1, \\dots ,X_i^{k-1}, X_i^{k+1}, \\dots , X_i^{N-1}, G_i) \\\\ \\le \\sum _k^{N-1} C_k + \\sum _i^n I(X_{i+1}^k, \\dots , X_n^k : X_i^k).$ Particularizing for the tripartite scenario and, for simplicity, assuming completely uncorrelated initial bits, such a result reads as $\\mathcal {I} \\equiv \\sum _{i=1}^{n} [ I(X_i :Y_i , G_i) + I(Y_i :X_i , G_i ) ] \\\\ \\le I(M_x:M_x ^{\\prime }) + I(M_y : M_y ^{\\prime } ),$ where the two senders have initially $\\mathbf {x}^1 = (X_1^k, X_2^k, \\cdots , X_n^k)$ and $\\mathbf {x}^2 = (Y_1^k, Y_2^k, \\cdots , Y_n^k)$ , and $M_x ^{\\prime }$ and $M_y ^{\\prime }$ are the messages reaching the receiver after $M_x$ and $M_y$ pass through the noisy channel, respectively.", "To illustrate, an application of the noisy tripartite IC inequality will be shown in the next section." 
], [ "Numerical Tests", "More importantly, to understand the strength of the criteria derived, we considered the following slice of the non-signaling set, $ p(a,b,c|x,y,z) = \\gamma p_{45} + \\epsilon p_D + (1 - \\gamma - \\epsilon ) p_{W}, $ where $\\gamma , \\epsilon \\in [0,1]$ , $p_{45}(a,b,c|x,y,z)$ is defined in (REF ), $p_D(a,b,c|x,y,z) = \\delta _{a,0}, \\delta _{b,0} \\delta _{c,0}$ and $p_W (a,b,c|x,y,z) = 1/8$ .", "Thus, we obtained Fig.", "REF , which highlights that (REF ) excludes even more supra-quantum correlation than (REF ).", "In addition, despite the distance evidenced between the quantum set and IC, we enforced that the bound (REF ) follows from the particular communication protocol depicted in Fig.REF .", "Therefore, it does not exclude the existence of better protocols, able to single out the quantum set for this slice of the non-signaling set or rule out post-quantum extremal correlations.", "In Fig.REF , also we presented the edge implied by (REF ) for the same slice in (REF ), where we considered that all communication is made through a binary symmetric channel that flips the bit with probability $\\epsilon $ .", "In this case, to obtain the curve, we followed the results from [15] and considered $\\epsilon \\rightarrow 1/2$ .", "From this result, it is clear that our stronger criterion in (REF ) and this new noisy channel approach are in complete agreement, even considering the simplest noisy channel.", "The codes related to the Fig.", "REF are available in [24].", "Figure: Non-signaling (NS) polytope slice given by Eq.().", "Every dot above these curves violates the respective criterion represented.", "The black dashed and solid lines describe the NS and quantum edges, respectively (the last was computed with the level 2 of NPA hierarchy ).", "The single and multiple copies limits defined by the criteria () and () are respectively depicted with the red and blue solid lines.", "Finally, the edge defined by the noisy channel criterion () is described by the orange dashed line.For the tripartite scenario with binary-input/binary-output there exist, 53856 non-signaling extremal correlations that are divided into 46 different equivalence classes, among which 45 are supra-quantum ones [21].", "Thus, we also checked the ability of (REF ) to exclude all the supra-quantum extremals of the non-signaling set.", "In Table REF , we highlight those classes for which we could find a violation of (REF ).", "Furthermore, Table REF contains the same analysis for the tripartite Uffink inequality (REF ).", "From these results, it is clear the no equivalence between the multiple copies IC inequality (REF ) and the Uffink result (REF ), since there exist extremal non-signaling correlations which respect one constraint while violating the other.", "The codes related to the results from Table REF are available in [25].", "Table: Classes of non-signaling extremal correlations defined in that violate () or ()" ], [ "Conclusion", "We proposed a new multipartite communication task in which the previous IC formulation does not detect non-local advantage.", "Thus, by employing the quantum causal structure formalism, we proposed a new criterion to describe IC in such a new context and proved its truthfulness for the whole set of quantum correlations, for any number of parts.", "Furthermore, we proved that our model allows the concatenation approach from [9], which enabled us to derive even stronger constraints for the multipartite non-signaling correlations set.", "In that case, our 
multipartite inequality proved to be strictly stronger than the multipartite Uffink's inequality from [16], which contrasts with the previous bipartite result from [9].", "In addition, our findings are in complete agreement with the recent noisy channel approach from [15], which allows many other analyses in such a multipartite context.", "We emphasize that our results rely on one specific protocol, which is optimal for Eq. (REF ); this, however, does not ensure that it is optimal for all non-signaling correlations.", "Thus, searching for better protocols for different correlations may yield stronger results.", "Furthermore, the analysis of non-dichotomic scenarios, or even cases where the senders' initial bits are correlated, may also produce interesting results, as previously analyzed in [26].", "Moreover, our findings open up a new class of non-sequential multipartite RACs, where multiple parts send messages to each other with the task of computing a Boolean function of the senders' initial bits.", "The figure of merit, in this case, is the probability that the receiver correctly computes such a function.", "Thus, investigating the thresholds for this success probability may have important implications for quantum information processing.", "We thank Marcelo Terra Cunha and Pedro Lauand for fruitful discussions and suggestions.", "We especially acknowledge Pedro Lauand for providing the extremal non-signaling behaviors leading to the results in Table REF .", "This work was also supported by the Brazilian National Council for Scientific and Technological Development (CNPq, grant No. 307295/2020-6), the National Institute for Science and Technology on Quantum Information (INCT-IQ) (Grant No. 465469/2014-0), the Serrapilheira Institute (Grant No. Serra-1708-15763) and the São Paulo Research Foundation FAPESP (Grant No. 2018/07258-7 and 2019/00451-9)."
], [ "Concatenation in a multipartite communication task", "Here we extend the tripartite communication task from section to a general multipartite scenario.", "Thus, consider $N$ parts, among which $N-1$ are senders that initially have their respective bit-strings $\\mathbf {x}^k = (X_1^k, X_2^k, \\cdots , X_n^k)$ , where $k\\in \\lbrace 1, 2, \\cdots , N-1\\rbrace $ .", "Each sender encodes a classical message $M_k$ of size $m < n$ to the $N^{th}$ -part, the receiver.", "This last one needs rightly compute one of $n$ possible initial bits functions $f_j (X_j^1, X_j^2, \\cdots , X_j^{N-1})$ , by producing the guess $G_j$ , where $j\\in \\lbrace 1, \\cdots , n\\rbrace $ .", "Just as in the main text, in addition to the classical messages, non-signaling correlations are allowed among all $N$ parts.", "Now consider a little more particular case, where $n=2$ and $f_j = X_j^1 \\oplus X_j^2 \\oplus \\cdots \\oplus X_j^{N-1}$ .", "Just as in the previously described tripartite scenario, we find such a particular multipartite communication task is trivialized by a generalization of the correlation (REF ) for the $(N, 2, 2)$ Bell scenario, i.e.", "$p(a_1, a_2, \\cdots , a_{N}| x_1, x_2, \\cdots , x_{N}) = \\left\\lbrace \\begin{aligned} 1/&2^{N-1}& \\quad \\text{if} \\quad \\; &\\bigoplus _{k=1}^{N} a_k = \\bigoplus _{k=1}^{N-1} x_k x_{N};\\\\&0& \\quad \\text{else.", "}& \\end{aligned} \\right.$ where $a_k$ and $x_k$ respectively denote the output and input of the part $k$ .", "To see this, consider that the $N$ parts perform the strategy depicted in Fig.REF .", "That is, each sender performs the encoding $x_k = X_1^k \\oplus X_2^k$ and $M_k = X_1^k \\oplus a_k$ , and the receiver computes the guess $G_j = \\bigoplus _{k=1}^{N-1} M_k \\oplus a_N$ .", "In this case, by considering (REF ) we find $G_j &= \\bigoplus _{k=1}^{N-1} ( X_1^k \\oplus a_k ) \\oplus a_N;\\nonumber \\\\&= \\left(\\bigoplus _{k=1}^{N-1} X_1^k \\right)\\oplus \\left(\\bigoplus _{k=1}^{N} a_k\\right) \\nonumber \\\\&= \\left(\\bigoplus _{k=1}^{N-1} X_1^k \\right) \\oplus \\left(\\bigoplus _{k=1}^{N-1} x_k x_{N}\\right)\\nonumber \\\\&= \\left(\\bigoplus _{k=1}^{N-1} X_1^k \\right) \\oplus \\left(\\bigoplus _{k=1}^{N-1} ( X_1^k \\oplus X_2^k ) x_{N}\\right).$ Therefore, if the receiver chooses his measurement as $x_N = j$ , when $j=0$ we have $G_0 = X_1^1 \\oplus X_1^2 \\oplus \\cdots \\oplus X_1^{N-1}$ , and for $j=1$ we obtain $G_1 = X_2^1 \\oplus X_2^2 \\oplus \\cdots \\oplus X_2^{N-1}$ .", "i.e., the receiver always computes the functions perfectly and trivializes the communication task.", "It is clear that the task success is related to the probability of the non-signaling boxes working just as (REF ), i.e., $p(a_0 \\oplus a_1 \\oplus \\cdots \\oplus a_{N-1} = x_0 x_{N-1} \\oplus x_1 x_{N-1} \\oplus \\cdots \\oplus x_{N-2} x_{N-1} | x_0, x_1, \\cdots , x_{N-1})$ .", "Thus, the probabilities that the receiver computes the function values $f_1$ and $f_2$ correctly are, respectively, given by $P_I &= \\frac{1}{2^{N-1}}\\left[\\sum _{x_1,...,x_{N-1}} p\\left(\\bigoplus _{k=1}^{N} a_k = \\bigoplus _{k=1}^{N-1} x_k x_{N} | x_1,...,x_{N-1}, x_N =0\\right)\\right];\\\\P_{II} &= \\frac{1}{2^{N-1}}\\left[\\sum _{x_1,...,x_{N-1}} p\\left(\\bigoplus _{k=1}^{N} a_k = \\bigoplus _{k=1}^{N-1} x_k x_{N} | x_1,...,x_{N-1}, x_N =1\\right)\\right].$ When the parts share (REF ), we have $P_I = P_{II} = 1$ .", "However, by introducing a parameter $E\\in [0,1]$ , we can investigate other non-signaling behaviors by means of the following 
probability of success: $p\left(\bigoplus _{k=1}^{N} a_k = \bigoplus _{k=1}^{N-1} x_k x_{N}\right) = \frac{1}{2} (1 + E).$", "The perfect correlations of behavior (REF ) are retrieved when $E=1$ , and uniform probabilities are retrieved when $E=0$ .", "From this example, one can see that the concatenation approach, depicted in Fig. REF , can also be employed in this multipartite scenario.", "This is due to the fact that, to complete the task, it is sufficient for the receiver to know only $\bigoplus _{k=1}^{N-1} M_k$ , instead of each message $M_k$ .", "For instance, when $n=4$ , the senders can divide their bits into two pairs and perform the encoding just as in the previous strategy.", "Now, if instead of sending their respective messages, $M_k^0$ and $M_k^{1}$ , the parts encode them in a third NS-box (REF ) by employing (REF ), the receiver is able to recover perfectly one of the functions $\bigoplus _{k=1}^{N-1} M_k^{i =0,1}$ .", "This allows the parts to perform the same decoding one more time, resulting in perfect access by the receiver to one of the functions $f_0 = X_0^1 \oplus X_0^2 \oplus \cdots \oplus X_0^{N-1}$ , $f_1 = X_1^1 \oplus X_1^2 \oplus \cdots \oplus X_1^{N-1}$ , $f_2 = X_2^1 \oplus X_2^2 \oplus \cdots \oplus X_2^{N-1}$ , or $f_3 = X_3^1 \oplus X_3^2 \oplus \cdots \oplus X_3^{N-1}$ .", "In the most general scenario, the senders have, initially, $n = 2^K$ bits, share $n-1$ perfect copies of the non-signaling resource (REF ), and the senders and the receiver perform the strategy just as depicted in Fig. REF .", "Here, for each part $k$ , we denote the output and input of the box $i$ of the level $l$ by $a_k^{i, l}$ and $x_k^{i, l}$ , respectively.", "Thus, we may write the guess produced by the receiver as: $G_j = \left(\bigoplus _{k=0}^{N-1} M_k\right) \oplus \left(\bigoplus _{l=0}^{K-1} a_N^{i_l,l} \right),$ where the box $i_l$ is defined in terms of the box measured in the previous level, $i_l = 2 i_{l-1} + z_l + 1$ , when $l \ge 1$ .", "In this case, the receiver performs measurements in $K$ boxes, one in each level, among which $(K-r)$ correspond to $z_N^{i,l} = 0$ and $r$ to $z_N^{i,l} = 1$ , where $r = z_0 + z_1 + \cdots + z_{K-1}$ .", "Just as in the single copy scenario, the task success is directly related to the probability that the $n-1$ non-signaling boxes behave as (REF ), i.e., (REF ).", "Thus, when $E<1$ , for each box, there exists a probability that the receiver's output $a_{N-1}^{i,l}$ is wrong and the property $\bigoplus _{k=1}^{N-1} a_k^{i,l} = \bigoplus _{k=1}^{N-2} x_k^{i,l} x_{N}^{i,l}$ does not hold.", "However, if an even number of mistakes is produced in the outputs of the receiver, then they all cancel each other and the guess produced with (REF ) will be correct.", "Therefore, the success probability for the multipartite task with concatenation is equal to the probability that the receiver produces an even number of wrong outputs, i.e., $p\left(G_j = \bigoplus _{k=0}^{N-2} X_j^k \right) = Q_{\text{even}}^{(K-r)}(P_I)\cdot Q_{\text{even}}^{(r)}(P_{II}) + Q_{\text{odd}}^{(K-r)}(P_I)\cdot Q_{\text{odd}}^{(r)}(P_{II}),$ where $P_I$ and $P_{II}$ are defined in () and $Q_{\text{even}}^{(s)}(P)$ and $Q_{\text{odd}}^{(s)}(P)$ are given by $Q_{\text{even}}^s(P) &= \sum _{j=0}^{\lfloor \frac{s}{2} \rfloor } \binom{s}{2j} (1-P)^{2j} P^{s-2j} = \frac{1}{2}(1+(2P-1)^s);\\Q_{\text{odd}}^s(P) &= \sum _{j=0}^{\lfloor \frac{s-1}{2}\rfloor } \binom{s}{2j+1} (1-P)^{2j+1} P^{s-2j-1} = 
\\frac{1}{2}(1-(2P-1)^s).$ These describe the probabilities of the receiver producing an even and an odd number of mistakes, respectively, after $s$ measurements; $P$ denotes the probability of obtaining the right output in a NS-box.", "By inserting () in (REF ) and considering the bias from (REF ) in the probabilities from (), we find the communication task success probability $p\\left(G_j = \\bigoplus _{k=0}^{N-2} X_j^k \\right) = \\frac{1}{2}(1+E_I^{K-r}E_{II}^{r}),$ where $E_i = 2 P_i - 1$ ." ], [ "Proving new IC criteria", "In this appendix, we prove criterion (REF ), however, for the even more general multipartite scenario described in appendix .", "The strategy will be similar to the one employed in the first bipartite proposal [9], so, for completeness, we start by defining the following mutual information chain rule and data processing inequalities: $&I(A:B|C) = I(A:B,C) - I(A:C); \\\\&I(A:B^{\\prime }) \\le I(A:B), \\quad \\text{where} \\quad B \\longrightarrow B^{\\prime };$ Following the description given in appendix , first, we consider the following quantity, $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c )$ , and prove that it is lower bounded by the left-hand side of (REF ).", "By applying the chain rule (REF ) two times, we obtain: $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\\\ = I(X_1^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c )+I(X_2^k, \\cdots , X_n^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c | X_1^k) \\\\= I(X_1^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c)+ I(X_2^k, \\cdots , X_n^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c, X_1^k) \\\\ - I(X_2^k, \\cdots , X_n^k : X_1^k).$ From data processing () we have $I(X_2^k, \\cdots , X_n^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c, X_1^k) \\ge \\\\ I(X_2^k, \\cdots , X_n^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c).$ Furthermore, by applying the chain rule in the first term in the right-hand side of (REF ), and using strong subadditivity, $I(A:B|C) \\ge 0$ , we obtain: $I(X_1^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c) \\\\ = I(X_1^k : X_2^1, \\cdots ,X_n^1| X_1^1, \\mathbf {x}^{2}, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c) \\\\ + I(X_1^k : X_1^1, \\mathbf {x}^{2}, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c) \\\\\\ge I(X_1^k : X_1^1, \\mathbf {x}^{2}, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c).$ Therefore, back to (REF ), we write: $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\\\ \\ge I(X_1^k : X_1^1, \\mathbf {x}^{2}, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c) + I(X_2^k, \\cdots , X_n^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c) \\\\ - I(X_2^k, \\cdots , X_n^k : X_1^k).", "$ Similarly to (REF ), we can employ the chain 
rule and strong subadditivity $N-3$ times in the first right-hand side term in (REF ) in order to highlight only the first bit $X_1^k$ of each bit-string $\\mathbf {x}^k$ : $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\\\ \\ge I(X_1^k : X_1^1, X_1^2, \\cdots ,X_1^{k-1}, X_1^{k+1}, \\cdots , X_1^{N-1}, M_k, c)+ I(X_2^k, \\cdots , X_n^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c) \\\\- I(X_2^k, \\cdots , X_n^k : X_1^k).", "$ Notice that the right side third term in (REF ) is, exactly, $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c )$ , but without $X_1^k$ of the bit-string $x^k$ .", "Therefore, by performing the same steps $n-1$ times, we achieve: $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\\\ \\ge \\sum _i^n I(X_i^k : X_i^1, X_i^2, \\cdots ,X_i^{k-1}, X_i^{k+1}, \\cdots , X_i^{N-1}, M_k, c) - \\sum _i^n I(X_{i+1}^k, \\cdots , X_n^k : X_i^k).", "$ From the data processing inequality (), we write $I(X_i^k : X_i^1, X_i^2, \\cdots ,X_i^{k-1}, X_i^{k+1}, \\cdots , X_i^{N-1}, M_k, c) \\ge I(X_i^k : X_i^1, X_i^2, \\cdots ,X_i^{k-1}, X_i^{k+1}, \\cdots , X_i^{N-1}, G_i),$ and, finally, obtain the lower bound: $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\\\ \\ge \\sum _i^n I(X_i^k : X_i^1, X_i^2, \\cdots ,X_i^{k-1}, X_i^{k+1}, \\cdots , X_i^{N-1}, G_i)- \\sum _i^n I(X_{i+1}^k, \\cdots , X_n^k : X_i^k).", "$ The next step will be to prove that $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\le H(M_k)$ .", "So: $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\\\ = I(\\mathbf {x}^k : M_k | \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, c)+ I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, c) \\\\= I(M_k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{N-1}, c) - I(M_k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, c )\\\\\\le I(M_k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{N-1}, c),$ where here we applied the chain rule two times, considering the non-signaling between the $N$ parts and the non-negativity of the mutual information, $I(A:B)\\ge 0$ .", "At this point, just as argued in Ref.", "[9], from the data processing inequality, we have $I(M_k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{N-1}, c) \\le I(M_k : M_k) = H(M_k)$ , which finally yields: $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_k, c ) \\le H(M_k).$ Now, we can put (REF ) and (REF ) together in order to achieve $\\sum _i^n I(X_i^k : X_i^1, \\cdots ,X_i^{k-1}, X_i^{k+1}, \\cdots , X_i^{N-1}, G_i) \\le H(M_k) + \\sum _i^n I(X_{i+1}^k, \\cdots , X_n^k : X_i^k).$ Finally, we recover (REF ) by summing inequality (REF ) over $k$ and considering non-signaling between the $N$ parts, i.e., $\\sum \\limits _k^{N-1} H(M_k) = H(M_1,...,M_{N-1})$ : $\\sum _k^{N-1} \\sum _i^n I(X_i^k : X_i^1, \\cdots ,X_i^{k-1}, X_i^{k+1}, \\cdots , X_i^{N-1}, G_i) \\le H(M_1,\\cdots ,M_{N-1})+ \\sum _k^{N-1} 
\\sum _i^n I(X_{i+1}^k, \cdots , X_n^k : X_i^k).$" ], [ "Multiple copies inequality", "Here we prove the multipartite generalization of the multiple copies criterion (REF ), first derived in Ref. [9] for a strictly bipartite scenario.", "First of all, we need to prove a simplified lower bound for (REF ).", "So, rewriting the left-hand side summation argument in (REF ), we have $I(X_i^k : X_i^1, \cdots ,X_i^{k-1}, X_i^{k+1}, \cdots , X_i^{N-1}, G_i) = &H(X_i^k) - H(X_i^k | X_i^1, X_i^2 \cdots ,X_i^{k-1}, X_i^{k+1}, \cdots , X_i^{N-1}, G_i)\nonumber \\= & 1 - H(X_i^k \oplus X_i^1| X_i^1, X_i^2, \cdots ,X_i^{k-1}, X_i^{k+1}, \cdots , X_i^{N-1}, G_i)\nonumber \\\ge & 1 - H(X_i^k \oplus X_i^1| X_i^2, \cdots ,X_i^{k-1}, X_i^{k+1}, \cdots , X_i^{N-1}, G_i).$", "Here, we have particularized to the case where every bit $X_i^k$ is uniformly distributed, so that $H(X_i^k) = 1$ .", "Further, we considered the fact that $H(A|B, C) = H(A \oplus B | B, C)$ , because knowing $B$ results in the same uncertainty about $A$ and $A\oplus B$ , and $H(A\oplus B | B, C) \le H(A\oplus B | C)$ , i.e., removing the conditioning on $B$ does not decrease the uncertainty of $A \oplus B$ .", "This same argument can be applied $N-2$ times in order to remove every conditioned random variable on the right-hand side of (REF ): $I(X_i^k : X_i^1, \cdots ,X_i^{k-1}, X_i^{k+1}, \cdots , X_i^{N-1}, G_i) \ge 1 - H(X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1} \oplus G_i).$", "However, from the communication task, when $X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1} \oplus G_i = 0$ , we necessarily have $G_i = X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1}$ .", "Thus, the probability $p(X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1} \oplus G_i = 0)$ is exactly the success probability of the receiver, $p(G_i = X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1})$ , while $p(X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1} \oplus G_i = 1)$ is the complementary probability.", "Therefore, the right-hand side term of (REF ) can be written in terms of the binary entropy, which in (REF ) finally yields: $(N-1) \sum _i^n (1 - h(p(G_i = X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1}))) \le \mathcal {I} \le H(M_1,\cdots ,M_{N-1}).$", "Notice that we considered the fact that the left-hand side has no dependence on the index $k$ .", "Furthermore, the rightmost term in (REF ) does not appear in (REF ), because we are assuming a uniform distribution for every initial bit $X_i^k$ .", "At this point, we particularize our description to the concatenation strategy described earlier in appendix .", "Here we rewrite the left-hand side summation in (REF ) in terms of the number of instances $r$ where the receiver performed measurement ${x_n}_j^k = 1$ , and substitute the concatenation success probability (REF ): $(N-1) \sum _i^n (1 - h(p(G_i = X_i^1 \oplus X_i^2 \oplus \cdots \oplus X_i^{N-1}))) &= (N-1) \sum _r^K \binom{K}{r} \left[ 1 - h\left(\frac{1+E_I^{K-r} E_{II}^{r}}{2}\right)\right]\nonumber \\&\ge \frac{(N-1)}{2\ln 2} \sum _r^K \binom{K}{r} (E_I^2)^{K-r}(E_{II}^2)^r \nonumber \\& = \frac{(N-1)}{2\ln 2} (E_I^2 + E_{II}^2)^K,$ where we considered $1-h\left(\frac{1+y}{2}\right) \ge \frac{y^2}{2\ln 2}$ and $E_i = 2P_i -1$ , from ().", "After performing such encoding, each sender sends only a single-bit message.", "Thus, $H(M_1,\cdots ,M_{N-1}) $ in (REF ) is necessarily fixed at $N-1$ .", "Therefore, with (REF ) and (REF ), we find 
that when $E_I^2 + E_{II}^2 > 1$ , the new proposed criterion (REF ) can always be violated by some concatenation protocol with $K$ levels.", "Thus, we finally conclude the proof for the previously mentioned criterion in (REF ): $E_I^2 + E_{II}^2 \\le 1.$" ], [ "New inequality in terms of noisy channel capacity", "Here we prove the inequality (REF ), where the senders communicate their messages $M_k$ through a single use of a noisy channel to the receiver.", "The proof for the noiseless version (REF ) is essentially valid in this context, but it is necessary to introduce a new variable $M_k ^{\\prime }$ , representing the message after the action of the channel, on step (REF ) to obtain the upper bound in terms of the channel capacity $C_k = I(M_k : M_k ^{\\prime })$ .", "So, we have: $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_{k}^{\\prime }, c )\\le & I(M_{k}^{\\prime } : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{N-1}, c)\\nonumber \\\\\\le & I(M_{k}^{\\prime } : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{N-1}, c, M_{k}),$ where we considered the data processing inequality in the second step.", "As mentioned in the main text, $M_k$ completely determines $M_k ^{\\prime }$ , so $M_k ^{\\prime }$ is conditionally independent of any random variable from the causal structure, i.e., $\\;I(M_{k}^{\\prime } : V | M_{k}) = 0$ .", "Thus, we may write $I(M_{k}^{\\prime } : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{N-1}, c, M_{k}) - I(M_{k}^{\\prime } : M_{k}) = 0$ and obtain in (REF ) $I(\\mathbf {x}^k : \\mathbf {x}^1, \\cdots , \\mathbf {x}^{k-1}, \\mathbf {x}^{k+1}, \\cdots , \\mathbf {x}^{N-1}, M_{k}^{\\prime }, c )\\le & I(M_{k}^{\\prime } : M_{k}) = C_k.", "$ The next steps are quite similar as the appendix , therefore we may write $\\sum _k^{N-1} \\sum _i^n I(X_i^k : X_i^1, \\dots ,X_i^{k-1}, X_i^{k+1}, \\dots , X_i^{N-1}, G_i) \\le \\sum _k^{N-1} C_k + \\sum _i^n I(X_{i+1}^k, \\dots , X_n^k : X_i^k).$" ] ]
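The multipartite protocol described in the appendix above can also be checked end to end by direct simulation. The sketch below is our own illustrative code (function and variable names are not from the paper); sampling the box by fixing the last output is simply a convenient way to reproduce its conditional distribution. It verifies that the guess $G_j$ always equals the target XOR function of the senders' bits.

import random

def sample_box(inputs):
    # Sample (a_1, ..., a_N) uniformly among outcomes with XOR(a_k) = XOR_{k<N} x_k * x_N,
    # which reproduces the conditional distribution of the N-partite box.
    *sender_inputs, x_N = inputs
    target = 0
    for x_k in sender_inputs:
        target ^= x_k & x_N
    outputs = [random.randint(0, 1) for _ in range(len(inputs) - 1)]
    outputs.append(target ^ (sum(outputs) % 2))   # fix the last output so the parity constraint holds
    return outputs

def run_protocol(sender_bits, j):
    # sender_bits[k] = (X_1^k, X_2^k); the receiver inputs j to target the j-th function.
    inputs = [x1 ^ x2 for (x1, x2) in sender_bits] + [j]
    a = sample_box(inputs)
    guess = a[-1]
    for k, (x1, _) in enumerate(sender_bits):
        guess ^= x1 ^ a[k]                        # XOR of the messages M_k = X_1^k xor a_k
    target = 0
    for bits in sender_bits:
        target ^= bits[j]
    return guess, target

random.seed(1)
N = 4                                             # three senders and one receiver
for _ in range(10000):
    sender_bits = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(N - 1)]
    j = random.randint(0, 1)
    guess, target = run_protocol(sender_bits, j)
    assert guess == target
print("the receiver recovers the chosen XOR function in every run")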
2212.05601
[ [ "Decomposition of the Leinster-Cobbold Diversity Index" ], [ "Abstract The Leinster and Cobbold diversity index possesses a number of merits; in particular, it generalises many existing indices and defines an effective number.", "We present a scheme to quantify the contribution of richness, evenness, and taxonomic similarity to this index.", "Compared to the work of van Dam (2019), our approach gives unbiased estimates of both evenness and similarity in a non-homogeneous community.", "We also introduce a notion of taxonomic tree equilibration which should be of use in the description of community structure." ], [ "Introduction", "Measuring biodiversity is a difficult task due to sampling issues and accounting for missing data, but also as there is no one universally accepted definition of what biodiversity is [6].", "In ecological practice, definitions of biodiversity can include contributions from multiple channels of information such as the number of species (“richness”), dominance or rarity relations among the constituent species (“evenness”), and measures of “similarity” among the species (estimated either from taxonomic or phylogenetic relationships, or from functional traits relationships) [17], [14].", "Biogeographic patterns of diversity can depend on the definitions used.", "For example, [20] showed that biodiversity hotspots can shift from the tropics to higher latitudes if one only considers abundances or also takes account of functional traits similarity of species.", "An outstanding challenge in the field of conservation ecology is to relate the various aspects of biodiversity data to the functioning of ecosystems [16], [11].", "Thus the goal is to construct a biodiversity index that would carry information about as many aspects of diversity as possible.", "This goal has been actively pursued [18], [14], [2].", "By “carrying information” we means that, for example, we should be able to extract information about richness or evenness from our index.", "One way of extracting such information is to decompose the index additively or multiplicatively into components that can be interpreted in a biologically meaningful way; see for example discussions of $\\alpha $ - $\\beta $ - and $\\gamma $ - diversity [12], [1].", "A priori it is not clear why such a decomposition would exist, whether it has to be unique, and in cases of non-uniqueness, what are the conditions for optimality of a decomposition.", "As an example of this approach, van Dam [21] has recently proposed a straightforward decomposition of the Leinster–Cobbold (LC) index [14].", "In a sense, our work below is a generalisation of the work of van Dam, which uses intrinsic properties of the LC index to remove an important bias in van Dam's decomposition with intriguing and far-reaching consequences.", "The structure of the paper is as follows.", "In Section  we discuss the definitions of richness, evenness and similarity using [2], [5], [6], [9].", "In Section  we collect the required information about the LC index following [14], [15]; it subsumes many other diversity indices such Rao's index that is widely used in functional ecology [18], [19].", "In Section  we present van Dam's and then our decomposition and its consequences.", "Finally, in Section  we discuss the relation of our work to that or Chao and Ricotta [3], extensions and open problems." 
], [ "Notation", "First of all, we need to establish notation.", "Everywhere below we assume that the number of species in a community is fixed at $n>1$ .", "We will use ${p}=(p_1,\\, \\ldots , p_n)$ to denote the vector of relative abundances, and let $\\Delta (n) = \\lbrace {x} \\in {\\mathbb {R}}^n,\\; | \\; x_k \\ge 0, \\; \\; k= 1, \\ldots ,n, \\; \\; \\sum _{k=1}^n x_k =1\\rbrace $ be the standard $n-1$ -simplex in ${\\mathbb {R}}^n$ .", "Remark 1 It has to be emphasised that admissible relative abundance vectors ${p}$ take values in $\\Delta (n)^\\circ $ , the interior of $\\Delta (n) $ : $\\Delta (n)^\\circ = \\lbrace {x} \\in {\\mathbb {R}}^n,\\; | \\; x_k > 0, \\; \\; k= 1, \\ldots ,n, \\; \\; \\sum _{k+1}^n x_k =1\\rbrace ,$ which is not a closed set in ${\\mathbb {R}}^n$ ; the consequence of that is that there are converging sequences $\\left({p}_m\\right)_{m=1}^\\infty $ in $\\Delta (n)^\\circ $ , whose limit is contained in the boundary $\\partial \\Delta (n) = \\Delta (n)\\backslash \\Delta (n)^\\circ $ ; such limits by necessity correspond to communities with fewer than $n$ species.", "We will often use the vector ${p}_h = \\left( \\frac{1}{n}, \\ldots , \\, \\frac{1}{n} \\right) \\in \\Delta (n)^\\circ ;$ the subscript $h$ stands for “homogeneous”.", "${p}_h$ is the relative abundance of a community where each species is represented equally.", "We denote by $M(n) \\subset \\Delta (n) \\backslash \\Delta (n)^\\circ $ the set of $n$ -vectors having one component equal to 1 and the rest equal to zero.", "Thus, ${m} \\in M(n)$ is the relative abundance vector of a monomorphic community.", "$\\bf {1}$ will stand below for an $n$ -vector with all components equal to 1.", "Next, we need to discuss sets of $n \\times n$ matrices.", "First of all, we will denote the $n \\times n$ identity matrix by $I_n$ .", "We will use the notation $J_n$ for the $n \\times n$ matrix of ones.", "In the present paper, for simplicity, we will be working with ultrametric matrices; this choice is motivated by the fact that similarity matrices (see subsection REF ) constructed using taxonomic trees are necessarily ultrametric and since using them simplifies the theory of [15].", "For more information on ultrametric matrices, please see [7] and [13], [15].", "We will denote that set of all ultrametric $n \\times n$ matrices by $\\mathcal {U}(n)$ and its interior by $\\mathcal {U}(n)^\\circ $ .", "Definition 2 (Defn.", "3.2 of [7]) A symmetric $n \\times n$ matrix $A$ is ultrametric if $A_{i,i}\\ge \\max _{k \\ne i} A_{ik}$ , $i, k \\in \\lbrace 1,\\ldots , n\\rbrace $ and $A_{ik}\\ge \\min \\lbrace A_{ij}, A_{jk} \\rbrace $ , $i,j,k \\in \\lbrace 1,\\ldots , n\\rbrace $ .", "Remark 3 Note that in [15] Leinster and Meckes take the matrices they call ultrametric to be strictly diagonally dominant.", "That would preclude the set of ultrametric matrices from being closed; so in our definition $J_n \\in \\mathcal {U}(n)$ ." 
], [ "Evenness", "The concept of evenness (for which see, e.g.", "[2], [3], [9]) and references therein), is rather problematic.", "First of all, the terminology is badly chosen as it would immediately seem that the “most even” population of $n$ species is one for which the vector of relative abundances is the homogeneous vector ${p}_h$ , i.e.", "one where every species is equally represented.", "Thus, like the case of richness discussed below, the terminology seems to be precluding discussion.", "As rightly pointed in [9], such a categorical answer to the question of maximal evenness leaves open the discussion of what would constitute “maximum unevenness” in $\\Delta (n)^\\circ $ .", "van Dam [21] uses instead the concept of “balance”, which seems to us a better term; this is the concept which, after defining it properly (see (REF )) we will be using below." ], [ "Richness", "Richness is sometimes summarily defined to be the number of species, see e.g.", "[6].", "Our approach below allows us to retain this definition but at a price.", "Such a definition is open to the same criticism as the notion of maximal evenness defined by ${p}_h$ that we have discussed above.", "It is again “species-centric”, and takes into account only the last level of taxonomic classification.", "Below, in Section  we suggest how to introduce a defensible new notion of richness that uses taxonomic information." ], [ "Similarity", "In this section we discuss the construction of taxonomic similarity matrices $Z$ for a community with $n$ species.", "The usual way of constructing similarity matrices $Z$ which are automatically ultrametric is to assign distances between different levels of a taxonomic tree.", "Then the taxonomic distance $d(i,j)$ between two species is the sum of distance from the nodes corresponding to these species to the first common node, and then one puts $Z_{ij}=e^{-d(i,j)}$ or if the maximal distance in the tree has been normalised to 1, one could put $Z_{ij}=1-d(i,j)$ .", "As a example, consider the tree in Figure  REF Figure: A taxonomic treeand set the species-genus and the genus-family distance to be $0.3$ .", "If we use the additive recipe, we get $Z_1 = \\left( \\begin{array}{ccc} 1 & 0.7 & 0.4\\\\0.7 & 1 & 0.4\\\\0.4 & 0.4 & 1\\end{array} \\right)$" ], [ "The Leinster–Cobbold diversity index", "In their influential paper [14], Leinster and Cobbold introduced a far-reaching generalisation of Hill numbers, for discussions of which see [5], [10], the LC index.", "For details on its properties, see [14], [15]; here we just collect the bare minimum in the framework of taxonomic (ultrametric) similarity matrices." 
], [ "Definition of the LC index", "As in the definition of Hill numbers, below $q \\in [0,\\infty )$ is the sensitivity parameter, measuring the importance given to rare species.", "Then for a community of $n$ species with relative abundance vector ${p}$ and (ultrametric) similarity matrix $Z$ , we have Definition 4 The LC diversity of order $q$ is $F(Z,{p},q) :=\\left(\\sum _{i=1}^n p_i \\left(Z{p}^T\\right)_i^{q-1}\\right)^{1/(1-q)}.$ Note that [14] use a different notation, similar to the Hill number notation in the literature; they denote the right-hand side of (REF ) by ${}^qD^Z({p})$ .", "We prefer the notation used here as it clearly shows functional dependencies and allows easy generalisation, which we discuss briefly in Section .", "We collect the required properties of the LC index in the proposition below and in subsection REF .", "Proposition 5 Let ${p}\\in \\Delta (n)^\\circ $ .", "$Z \\in U(n)$ .", "Then $F({p},Z,q)$ is a monotone decreasing function of $q$ ; $F({p},Z,q) < F({p}, I_n,q)$ for all $q$ if $Z \\ne I_n$ ; $F({p}_h,I_n,q)=n$ for all $q$ ; $F({p},J_n,q)=1$ for all $q$ .", "For proofs of (a) and (b) please see [14]; the rest are immediate.", "Following [15], we now discuss the concept of a maximally balanced abundance vector for a community of $n$ species with an ultrametric similarity matrix $Z$ ." ], [ "A crucial property of the LC index", "Using only ultrametric taxonomic similarity matrices simplifies the presentation considerably.", "For the more general case where the similarity matrix is simply a symmetric matrix with positive elements, see [15].", "The results of that paper have not, in our opinion, been sufficiently seriously considered by the biodiversity community.", "We present two theorems from [15].", "First of all we have the following existence and uniqueness result for maximisers of the LC diversity index.", "Theorem 6 For each $Z \\in \\mathcal {U}(n)^\\circ $ there exists a unique abundance vector ${p}^* \\in \\Delta (n)^\\circ $ that maximises $F(Z,{p},q)$ for every value of $q \\in [0, \\infty )$ .", "Definition 7 Given $Z \\in \\mathcal {U}(n)^\\circ $ , we call the corresponding abundance vector ${p}^*$ the maximally balanced abundance vector.", "This is the vector that corresponds to ${p}_h$ that arises in theories that do not take into account taxonomic similarity.", "Computing the maximally balanced vector in the case of ultrametric similarity matrices is a simple matter of solving a system of linear equations and normalising.", "If the similarity matrix is not ultrametric, the situation is more complex; see [15] for details.", "Theorem 8 Given $Z \\in \\mathcal {U}(n)^\\circ $ , ${p}^*$ is given by $p^*_i = \\frac{w_i}{\\sum _{j=1}^n w_j},$ where ${w}$ solves the system of equations $Z{w} ={1}$ , where 1 is a column vector of ones.", "Note that [15] provides an alternative way of computing ${p}^*$ .", "Definition 9 A taxonomic tree will be called taxonomically equilibrated if ${p}^* = {p}_h$ .", "Of course we have Proposition 10 If at each level of the tree all the nodes have the same degree, the taxonomic tree is equilibrated.", "The converse of Proposition REF does not hold, i. e. 
there are taxonomic graphs that do not satisfy the conditions of that proposition, for which a metric $d(\cdot ,\cdot )$ can be assigned such that the resulting ${p}^*$ is ${p}_h$ .", "An example is provided by the following tree.", "Figure: A taxonomic tree that allows ${p}^*={p}_h$ .", "It is not hard to show that assigning species-genus, genus-family and family-order distances of $0.25$ and using the additive recipe results in a similarity matrix for which ${p}^*={p}_h$ .", "Thus there is a trichotomy of taxonomic trees: those in Proposition REF , for which ${p}^*={p}_h$ holds for every assignment of distances; those, as in Figure REF , for which such an assignment can be chosen; and those for which no assignment of distances results in a homogeneous maximally balanced abundance vector; an example of such a tree is given in Figure REF .", "Figure: A taxonomic tree for which ${p}^* \ne {p}_h$ is guaranteed.", "We will discuss this trichotomy in more detail in [4].", "Remark 11 Compared to the diversity metrics proposed by Chao et al. [2], the LC index has the flexibility to take into account taxonomic, phylogenetic, and functional diversity simultaneously.", "However, in that case the resulting similarity matrices are no longer ultrametric and we leave that more general case for future study.", "Also note that the above trichotomy of taxonomic trees would not necessarily exist if there were a canonical way of constructing similarity matrices." ], [ "An unbiased decomposition scheme", "We are now ready to propose a decomposition scheme for the LC index.", "It is best to start with the decomposition scheme proposed by van Dam [21] and see why it has to be modified.", "van Dam writes $F({p},Z,q) = \frac{F({p},Z,q)}{F({p},I_n,q)} \cdot \frac{F({p},I_n,q)}{F({p}_h,I_n, q)} \cdot F({p}_h,I_n, q).$", "The first fraction is clearly a measure of dissimilarity, while the second fraction is a measure of balance; of course $F({p}_h,I_n, q)=n$ , so the last term on the right-hand side is the richness.", "From many points of view, this is a good decomposition as the two fractions always lie in the interval $(1/n,1]$ .", "Consult Proposition REF to see that the infimum $1/n$ is never reached.", "The problem here is with the definition of the measure of balance, as it does not take into account the taxonomic similarity matrix $Z$ while the dissimilarity measure uses information from both ${p}$ and $Z$ .", "We call such a decomposition asymmetrically biased.", "A different asymmetrically biased decomposition is given by $F({p},Z,q) = \frac{F({p},Z,q)}{F({p}_h,Z,q)} \cdot \frac{F({p}_h,Z,q)}{F({p}_h,I_n, q)} \cdot F({p}_h,I_n, q).$", "In this decomposition the first fraction is a measure of balance, the second a measure of dissimilarity, while, as before, the last term is the richness.", "Of course here it is the measure of dissimilarity that is asymmetrically biased.", "A possibility that might be considered is that of multiplying (REF ) and (REF ) and taking the square root.", "That would give us a decomposition which we would call unbiased (though it could also be called “symmetrically biased”).", "However, there is an additional problem in (REF ), which is that the first fraction on the right-hand side can take values larger than one if, for example, $Z$ is not a similarity matrix of a taxonomically equilibrated tree and ${p}={p}^*$ .", "We do not pursue this direction as we do not see any reason for a relative measure not to take values in $[0,1]$ .", "The price of ensuring normalisation is having to deal with richness in 
more detail.", "Consider, instead of (REF ), the following decomposition: $F({p},Z,q) =\frac{F({p},Z,q)}{F({p}^*,Z,q)}\cdot \frac{F({p}^*,Z,q)}{F({p}^*,I_n, q)}\cdot F({p}^*,I_n, q).$", "It is asymmetrically biased as the second factor does not involve ${p}$ .", "We will discuss the interpretation of $F({p}^*,I_n, q)$ later.", "To obtain an unbiased decomposition, we therefore multiply (REF ) and (REF ) and take a square root.", "The result is $F({p},Z,q) =\sqrt{\frac{F({p},Z,q)}{F({p}^*,Z,q)}\frac{F({p},I_n,q)}{F({p}_h,I_n, q)}} \cdot \sqrt{\frac{F({p},Z,q)}{F({p},I_n,q)}\frac{F({p}^*,Z,q)}{F({p}^*,I_n, q)}} \cdot \sqrt{n F({p}^*,I_n, q)}.$", "The last term on the right-hand side can be rewritten as $\sqrt{n F({p}^*,I_n, q)}= \sqrt{ \frac{F({p}^*,I_n,q)}{n}} n:= E(Z,q)n.$", "The term $E(Z,q)$ expresses the lack of equilibration in the taxonomic tree; see Definition REF and Theorem REF .", "Putting $B({p},Z,q) = \sqrt{\frac{F({p},Z,q)}{F({p}^*,Z,q)}\frac{F({p},I_n,q)}{F({p}_h,I_n, q)}},$ $D({p},Z,q) = \sqrt{\frac{F({p},Z,q)}{F({p},I_n,q)}\frac{F({p}^*,Z,q)}{F({p}^*,I_n, q)}},$ we finally write our decomposition as $F({p},Z,q) = B({p},Z,q) D({p},Z,q) E(Z,q)n,$ i.e. as a product of measures of balance $B({p},Z,q)$ , dissimilarity $D({p},Z,q)$ , (lack of) equilibration $E(Z,q)$ and the classical richness $n$ .", "Note that by construction both the measure of balance (REF ) and that of dissimilarity (REF ) are constrained to lie in $[0,1]$ by Proposition REF .", "Both the measure of balance and that of dissimilarity are geometric means of an unbiased measure and a biased one.", "It does not seem possible to find a truly unbiased decomposition of the LC index, which is the reason we could call the decomposition (REF ) symmetrically biased.", "Note that though $B({p},Z,q) \ge 1/\sqrt{F({p}^*,Z,q)}$ (the right-hand side being independent of $q$ ), it is not clear what vector ${p}(q)$ maximises it for a particular value of $q$ .", "Of course the value 1 is reached for $q=0$ by the choice ${p}(0)= {p}^*$ .", "$E(Z,q)$ depends on $Z$ , as $Z$ defines ${p}^*$ , and hence $E(Z,q)$ reflects the structure of the underlying taxonomic tree.", "Note that in the case of similarity matrices of taxonomically equilibrated trees we have ${p}^*= {p}_h$ and hence $F({p}^*,I_n, q)=n$ , so that $E(Z,q)=1$ .", "If $Z$ does not correspond to a taxonomically equilibrated taxonomic tree, $F({p}^*, I_n, q)$ depends on $q$ .", "Remark 12 We could have defined a notion of “richness” by $R(Z,q):=E(Z,q)n$ , but the decomposition (REF ) seems to us more insightful and does not necessitate advocating a new notion of richness.", "Definition REF singles out a class of taxonomic trees for which the two notions of richness coincide." 
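The decomposition above is easy to compute in practice. The following Python sketch is our own illustration (the abundance vector is hypothetical, and the $q=1$ case is treated as the usual limit of the LC index); it returns $B$, $D$, $E$ and $n$ and checks the identity $F=BDEn$ for the matrix $Z_1$ of the Similarity section.

import numpy as np

def lc_diversity(p, Z, q):
    # Leinster-Cobbold diversity F(p, Z, q) of Definition 4; q = 1 is treated as the limiting case.
    p = np.asarray(p, dtype=float)
    Zp = Z @ p
    if np.isclose(q, 1.0):
        return float(np.exp(-np.sum(p * np.log(Zp))))
    return float(np.sum(p * Zp ** (q - 1)) ** (1.0 / (1.0 - q)))

def max_balanced(Z):
    # Maximally balanced abundance vector p* of Theorem 8: solve Z w = 1 and normalise.
    w = np.linalg.solve(Z, np.ones(Z.shape[0]))
    return w / w.sum()

def decomposition(p, Z, q):
    # Return (B, D, E, n) so that F(p, Z, q) = B * D * E * n, as in the decomposition above.
    n = len(p)
    I, p_h, p_star = np.eye(n), np.full(n, 1.0 / n), max_balanced(Z)
    B = np.sqrt(lc_diversity(p, Z, q) / lc_diversity(p_star, Z, q)
                * lc_diversity(p, I, q) / lc_diversity(p_h, I, q))
    D = np.sqrt(lc_diversity(p, Z, q) / lc_diversity(p, I, q)
                * lc_diversity(p_star, Z, q) / lc_diversity(p_star, I, q))
    E = np.sqrt(lc_diversity(p_star, I, q) / n)
    return B, D, E, n

Z1 = np.array([[1.0, 0.7, 0.4],
               [0.7, 1.0, 0.4],
               [0.4, 0.4, 1.0]])
p = np.array([0.5, 0.3, 0.2])                        # hypothetical abundance vector
for q in (0.0, 1.0, 2.0):
    B, D, E, n = decomposition(p, Z1, q)
    print(q, B * D * E * n, lc_diversity(p, Z1, q))  # the two values agree by construction

For $Z_1$ the maximally balanced vector is not homogeneous, so $E(Z_1,q)<1$, illustrating the lack of equilibration discussed above.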
], [ "Practical examples", "To illustrate how our decomposition approach differs from the “ABC\" approach suggested by van Dam [21], we use a simple example from Leinster and Cobbold [14] (their Example 3; original data from [8]).", "This example has the nice feature that the difference between the two communities changes sign when $q$ increases from 0 to 2.", "Figure: Comparing our approach (“This study\") and van Dam [21] in decomposing (A) the LC diversity index into (B) lack of equilibration, (C) balance (or evenness), and (D) dissimilarity, for the example of Charaxinae in Leinster and Cobbold [14].", "Compared to van Dam [21], our new decomposition approach yields different interpretations regarding which aspects of diversity lead to the differences in diversity estimates between the two communities (Canopy vs. Understory).", "When we put more emphasis on rare species ($q < 1$ ), the Canopy community is more diverse because it has a larger species richness (6 vs. 5) and its species are slightly more dissimilar from each other (Fig.", "REF A,D).", "By contrast, when we focus more on abundant species ($q > 1$ ), the Understory community becomes more diverse because of the greater balance (evenness) of its dominant species (Fig.", "REF A,C).", "The difference between our decomposition approach and van Dam [21] is that our approach predicts a much larger difference in Balance than van Dam [21] (and a smaller difference in species dissimilarity) when we increasingly focus on dominant species (larger $q$ ).", "Our approach also shows that as $q$ increases, the Understory community shows a stronger lack of equilibration (i.e., deviation of ${p}$ from ${p}^*$ ) than the Canopy community, albeit this difference is small.", "In summary, we would interpret the greater diversity of the Understory community when we focus on dominant species as being due to its dominant species being more balanced than those in the Canopy community.", "By contrast, if we used the approach of van Dam [21], the interpretation would be that the dominant species are more dissimilar to each other in the Understory community."
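To make the comparison above concrete without reproducing the Charaxinae data of [8], the following toy run of the helpers sketched above uses invented abundances for a 'canopy-like' (dominated) and an 'understory-like' (more even) community, together with a simple two-genus ultrametric $Z$ ; only the qualitative behaviour with $q$ is meant to be illustrative.

import numpy as np

groups = np.array([0, 0, 1, 1, 1])                        # two hypothetical genera
dist = np.where(groups[:, None] == groups[None, :], 0.5, 1.0)
np.fill_diagonal(dist, 0.0)
Z = np.exp(-dist)                                         # ultrametric similarity matrix

canopy = np.array([0.05, 0.05, 0.10, 0.30, 0.50])         # dominated by two species
understory = np.array([0.15, 0.20, 0.20, 0.20, 0.25])     # more balanced

for q in (0.0, 1.0, 2.0):
    for name, p in (("canopy", canopy), ("understory", understory)):
        B, D, E = decompose(p, Z, q)
        print(f"q={q:.0f} {name:10s} F={B * D * E * len(p):6.3f} "
              f"B={B:.3f} D={D:.3f} E={E:.3f}")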
], [ "Discussion", "The LC index uses three “information streams”: the number of species (richness) $n$ , the relative abundance vector ${p}$ and the similarity matrix $Z$ .", "We could in theory consider a diversity index $F(c_1, \\; \\ldots , c_m; \\, q)$ , where $c_1, \\ldots , \\; c_m$ are information streams, sources of information about the structure of the community, expressed as some mathematical objects (vectors, matrices, higher-order tensors).", "We could then follow the decomposition process of Section : find $m!$ biased decompositions, multiply them together and take the $m!$ -th root.", "However, this is already unwieldy in the case of $m=3$ .", "But note that this procedure is unnecessary, as the LC theory has a lot of built-in flexibility in the definition of similarity matrices.", "As explained in subsection REF , one can define a similarity matrix by setting $Z_{ij}=e^{-d(i,j)}$ , where $d(i,j)$ is some suitably defined distance between species $i$ and $j$ .", "Hence incorporating more information streams can be thought of as changing the distance function $d(\\cdot ,\\cdot )$ ; in the process of incorporating such information, such as functional similarity, the ultrametricity of the similarity matrix is lost; it is possible that the resulting function $d(\\cdot ,\\cdot )$ will no longer be a metric, becoming more generally a divergence measure.", "The point is that the dependence of a diversity index on ${p}$ and on a suitably redefined $Z$ is sufficient to incorporate all relevant information.", "In [3], Chao and Ricotta show how to quantify evenness using divergence measures.", "It is therefore useful to consider the LC diversity index (REF ) in this context.", "Since we now deal with balance as opposed to evenness, we will denote the resulting balance index by $B$ .", "First of all, let us note that (REF ) cannot give rise to a divergence-measure-based index of balance, as there is no well-defined upper bound for it for all $q$ .", "It is still, of course, useful in providing an estimate of the balance contribution to the LC diversity index.", "These two statements are not in contradiction.", "Clearly, the LC index itself provides a divergence-measure-based estimate (index) of balance via ${\\mathbb {B}}= \\frac{F({p},Z,q)-1}{F({p}^*,Z,q)-1},$ where one could alternatively write $1=F({m},Z,q)$ where ${m}$ is any vector in $M(n)$ .", "Concerning similarity indices, it again does not seem possible to utilise $D({p},Z,q)$ of (REF ) to this end.", "On the other hand, the LC index itself provides a divergence-measure-based similarity index via ${\\mathbb {S}}=\\frac{F({p},I_n,q)-F({p},Z,q)}{F({p},I_n,q)-F({p},J_n,q)}.$ Again, here the value 1 is never reached over $\\mathcal {U}(n)^\\circ $ .", "It is not hard to show the following proposition: Proposition 13 The indices ${\\mathbb {B}}$ , ${\\mathbb {S}}$ satisfy all the requirements in [3].", "To conclude, we have proposed a novel decomposition of the LC index.", "Compared to the previous decomposition due to van Dam [21], our approach estimates the balance and dissimilarity of the community more comprehensively (e.g., we not only estimate dissimilarity for a homogeneous community but also take into account the observed vector of relative abundances).", "As such, we believe that our inference is more robust when comparing balance and dissimilarity among communities.", "In addition, we had, by necessity, to introduce a notion of taxonomic tree equilibration (which turns out to be an important concept in our on-going work on quantifying
un-evenness [4]), which is another descriptor of a biological community.", "We advocate the use of our decomposition (REF ) as a “maximally unbiased” estimate of contributions of balance and (dis)similarity to diversity." ] ]
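For completeness, the divergence-measure-based indices ${\\mathbb {B}}$ and ${\\mathbb {S}}$ introduced in the Discussion can be sketched with the same helpers as above (again our own illustrative code, not part of the original analysis); here $J_n$ is the all-ones matrix and $I_n$ the identity.

import numpy as np

def balance_index(p, Z, q):
    # Divergence-based balance index: (F(p,Z,q) - 1) / (F(p*,Z,q) - 1).
    return (F(p, Z, q) - 1.0) / (F(p_star(Z), Z, q) - 1.0)

def similarity_index(p, Z, q):
    # Divergence-based similarity index built from the naive (I_n) and
    # fully similar (J_n) limits of the LC index.
    n = len(p)
    I, J = np.eye(n), np.ones((n, n))
    return (F(p, I, q) - F(p, Z, q)) / (F(p, I, q) - F(p, J, q))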
2212.05617
[ [ "Local Volume dwarf KK242: radial velocity, SF region, and metallicity" ], [ "Abstract KK242 is a LV dwarf of transition type residing in the void environment.", "Koda et al.", "present clear indications on its connection with Scd galaxy NGC6503.", "This implies the distance to KK242 of ~6.3 Mpc and its M_B = -10.5 mag.", "Its radial velocity, known from the Effelsberg radio telescope \\HI\\ observations, reveals, however, the difference with that of NGC6503, dV ~ 400 km/s.", "If real, this fact implies the substantial constraints on its origin.", "To clear-up the issue of KK242 radial velocity, we obtained with the SAO 6-m telescope spectra of its faint star-forming (SF) complex.", "H-alpha and H-beta emission is detected in two adjacent compact regions, the southern and northern, separated by ~2\" (~60 pc).", "Their mean radial velocity is V_hel = -66 km/s, ~100 km/s lower than that of NGC6503.", "We use the HST Legacy Archive images and photometry of individual stars from the Extragalactic Distance Database, available for KK242, to identify in the SF complex the exciting hot stars, the probable BHeB and RHeB stars and a supernova remnant.", "We address, based on the possible range of its gas metallicity, the probable evolutionary paths of KK242.", "Using package Cloudy and parameters of the exciting B0V stars, we conclude that the observed flux ratio of [Sii] doublet to H-alpha is consistent with the value of 12+log(O/H) ~7.35+/-0.18 dex, expected for a stripped void dIrr galaxy." ], [ "Introduction", "A low surface brightness (LSB) dwarf galaxy of 'transition type' (dTr) KK242 was first discovered by [27] as a potential companion of Scd galaxy NGC6503.", "[19] observed KK242 in the 21-cm Hi-line with the Effelsberg 100-m radio telescope and found an emission line at the radial velocity of 426 km s$^{-1}$ .", "The latter differs from the radial velocity of NGC6503 by $\\sim $ 400 km s$^{-1}$ that looks quite unusual.", "Later, [31] rediscovered this galaxy in the deep images of their survey for LSB dwarfs around spiral galaxies.", "They presented images of KK242 (their name NGC6503-d1) in various filters, including H$\\alpha $ .", "Besides, they resolved in the broad-band images about three hundred the brightest stars of KK242.", "From the analysis of the colour-magnitude diagram (CMD), they derived the TRGB-based (Tip of Red Giant Branch) distance estimate consistent with that known for NGC6503.", "Both galaxies fall to the region of a nearby void occupying a part of the Local Volume (hereafter LV, see for more detail Section REF ).", "In the framework of the ongoing project aimed in studying various subsamples of void galaxies in the Nearby Void Galaxy (NVG) catalog [41], [42], [43], we conduct, in particular, their spectral observations to derive their gas metallicities and/or improve the accuracy of radial velocities.", "As said above, [31] show that there is a compelling evidence of KK242 to be a companion of NGC6503.", "However, its known estimate of the radial velocity, based on a single Hi observation with the Effelsberg 100-m radio telescope, is too much deviating from that of the host Scd galaxy.", "This may 'provoke' various 'exotic' scenarios of the origin of this pair.", "This was the primary motivation to clear up the issue of KK242 radial velocity.", "For this end, we obtained the spectrum of the only known faint complex of H$\\alpha $ emission in KK242 detected in papers of [31] and [26].", "The derived here radial velocity of –66 km s$^{-1}$ differs drastically from 
that measured via the single dish 21-cm Hi line emission.", "Recently [30] used our H$\\alpha $ radial velocity to search for the possible Hi 21-cm line emission based on the VLA D-configuration data cube for KK242.", "They detected the very faint Hi line at the position of KK242 with the radial velocity of V(HI) = –80 km s$^{-1}$ , which is consistent within uncertainties with V(H$\\alpha $ ) obtained in this work.", "The estimate of the radial velocity of KK242 via its H$\\alpha $ line was our primary task.", "Fortunately, thanks to the appropriate seeing during these observations and a suitable position angle of the long slit, we also got the interesting by-product results related to the substructure of this emission region and its individual components.", "Coupled with the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) images from the Hubble Legacy Archive (HLA) and with the photometry of individual stars available in the Extragalactic Distance Database (EDD) [4], this enables us to discuss this star-forming complex in a more detail.", "The galaxy KK242 is also interesting for a deeper insight as one of a few known dTr objects in the void environment.", "The great majority of the 30 known dTrs within the distances of 5 Mpc [29] are related to a more typical environment like the Local Group and similar nearby groups.", "Such a connection can be related to the origin of this type dwarfs.", "Only two of these 30 dTr, UGC1703 and KK258 reside within the nearby voids described in [41].", "Two more dTrs, KKs03 and DDO210, are well isolated, despite the latter is situated close to the border of the Local Group.", "In this context, KK242 as a representative of the small minority of the void galaxy population, might display various deviations from other known dwarfs of this rare type.", "Besides, it is interesting to study the evolutionary path of such an unusual dwarf in the void-type global environment.", "The gas metallicity, if it could be estimated, is one of the important parameters used for the comparison of the observed properties of KK242 with those expected in the variety of possible evolutionary scenarios.", "Moreover, the issue of the massive star formation in such an atypically low-gas-density dwarf is crucial to address since this case represents even a more extreme gas environment than in the late-type gas-normal LSB dwarfs.", "The rest of the paper is arranged as follows.", "In Sec.", ", the spectral observations and data reduction are outlined and the used archive HST images are described.", "In Sec.", ", the results of the analysis of the BTA spectra along with the identification of the related objects at the HST image are presented.", "In Sec.", ", we discuss the properties of the studied SH region in KK242, the range of metallicity and the global environment of KK242, and in Sec.", ", we summarize our results and conclude.", "The linear scale at the adopted distance to KK242 (6.36 Mpc) is 30.9 pc in 1 arcsec." ], [ "Observations and data processing", "We obtained three optical spectra of KK242.", "The first spectrum of KK242 was obtained with the BTA multimode instrument SCORPIO-1 [1] during the night 2020 November 11, under photometric conditions (see Table REF ).", "The long slit with the width of 1.2 arcsec and the scale along the slit of 0.36 arcsec pixel$^{-1}$ (after binning by 2) was positioned on the brightest R-band source along the elongation of H$\\alpha $ emission, corresponding to PA = –15(see the left-hand panel of Fig. 
1).", "For a more detailed description of the slit position relative to the emission-line regions, see Sect.", "REF .", "The grism VPHG1200R with the 2K$\\times $ 2K CCD detector E2V 42-40 (13.5$\\times $ 13.5 $\\mu $ m pixel) provided the spectrum coverage of 5700–7500 Å with the FWHM $\\sim $ 5.0 Å.", "The second spectrum of KK242 was obtained with the next generation BTA multimode instrument SCORPIO-2 [2] during the night 2021 November 5, under photometric conditions (see Table REF ).", "We aimed to pick up in this spectrum all the light collected in the first observation.", "Therefore, accounting for the smaller seeing on this night ($\\sim $ 1.1 versus 1.3 arcsec), we select of the two possible slit width options, 1.0 and 1.54 arcsec, the wider one.", "This should allow us to directly compare the results for both spectra.", "The long slit with the width of 1.54 arcsec and the scale along the slit of 0.40 arcsec pixel$^{-1}$ (after binning by 2) was positioned similar to that in the first observations with PA = –15.", "The grism VPHG1200@540 with the 4K$\\times $ 2K CCD detector E2V 261-84 (15$\\times $ 15 $\\mu $ m pixel) provided the spectrum coverage of 3650–7250 Å with the FWHM $\\sim $ 6.0 Å.", "The third spectrum of KK242 was obtained with SCORPIO-1 and grism VPHG1200R during the night 2022 July 29, with the similar set-up as for the first observation.", "The long slit for this observation was positioned exactly as for the first time.", "Since the seeing also was close to that of the first observation, the main difference was the total integration time: 7200 sec in July 2022 versus 2700 sec in November 2020.", "The main goal of the latter observation was to improve the S-to-N ratio for [Sii]$\\lambda \\lambda $ 6716,6731 doublet in the resulting average spectrum, since from the previous data its uncertainty was too high to come to more or less confident conclusion on the KK242 gas metallicity.", "The main procedures of data reduction are described in [40].", "Here we briefly outline them.", "Our standard pipeline with the use of IRAFIRAF: the Image Reduction and Analysis Facility is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc. 
(AURA) under cooperative agreement with the National Science Foundation (NSF).", "and MIDASMIDAS is an acronym for the European Southern Observatory package – Munich Image Data Analysis System.", "was applied for the reduction of long-slit spectra.", "It includes the following steps: removal of cosmic ray hits, bias subtraction, flat-field correction, wavelength calibration, night-sky background subtraction.", "Spectrophotometric standard stars observed during these nights, were used to obtain spectra in the absolute flux scale.", "In the resulting 2d spectra of all observations of KK242, two distinct H$\\alpha $ knots are seen with centres separated along the slit by $\\sim $ 2 arcsec.", "In Fig.", "3 (top panel), we show a part of the 2d spectrum for the first night.", "The Southern knot is about twice brighter in H$\\alpha $ , and has a much weaker underlying continuum in comparison to that of the N knot.", "The 1d spectra for the S and N knots were extracted, summing up 5 or 6 (for different detectors) and respectively 6 or 7 pixels along the slit ($\\sim $ 2.1 and 2.5 arcsec, respectively), without weights, centred on each of two maxima of the H$\\alpha $ line signal.", "We have no opportunity to compare directly our observed H$\\alpha $ knots with those separated by [31] on their Subaru telescope H$\\alpha $ image of KK242 with the seeing of 0.8 arcsec.", "However, their small angular size on the 2d spectrum indicates that their observed extent is mainly due to the seeing.", "Deep images of KK242 obtained with the HST/ACS in 2019 in the framework of the SNAP program 'Every Known Nearby Galaxy' (Prop.", "15922, PI R.B.", "Tully) are available in the HLAhttps://archive.stsci.edu.", "Stellar photometry of the resolved stars at the KK242 ACS images is available at the Extragalactic Distance Database [4].", "They provide us with the suitable data to get a deeper insight in the star-forming complex covered by the BTA long slit.", "We used both F606W and F814W HST/ACS images of this program to identify objects related to the H$\\alpha $ emission in our spectrum and to estimate their available physical parameters such as luminosity, size, colour.", "Close in the position to the S knot, in both HST images there is a well resolved almost round nebulosity with the diameter of $\\sim $ 0.5 arcsec (or $\\sim $ 15 pc).", "No any reliable star-like counterpart is visible within the nebulosity extent.", "On the other hand, a more careful analysis (see Section REF ) of this HST image reveals a blue star at $\\sim $ 0.7 arcsec to the North (No.", "5 in Fig.", "1, right), which could ionize the surrounding gas and, thus, to also contribute to the H$\\alpha $ emission in this region.", "For the Northern H$\\alpha $ knot, the situation also appears to be complicated.", "We discuss this issue in Section REF .", "The resulting 1d spectra are shown in the middle and bottom panels of Fig. 
2.", "Figure: BTA spectra for two Hα\\alpha knots in KK242.Top panel: extract of the 2D spectrum in the range6540–6580 Å.", "North is up.The distance along the slit between the peaks of two Hα\\alpha knots is about∼\\sim 2 arcsec.", "Peak of Hα\\alpha emission for the N knot is shifted to the S by ∼\\sim 0.7 arcsec relative tothe position of the continuum peak.", "See Sec.", ".Middle panel: Average of the three 1d spectraof the N knot with the 'strong' continuum and Hα\\alpha flux twice fainter than for the S knot.", "For the 'undetected' [Sii] doublet,we show also the best two-gaussian fit in Fig.", "3 (left),which is used to estimate the flux in this doublet.See details in the text.Bottom panel: The similar average 1D spectrum of the S knotwith the weak continuum, stronger Hα\\alpha and [Sii].Figure: Two-gauss fitting of the region of [Sii]λλ\\lambda \\lambda 6716,6731doublet in KK242.", "Dotted lines show the observed signal.", "Solid lines showthe fitting.See comments on the gauss fitting in the text.Left-hand panel: The N knot.Right-hand panel: The S knot.Figure: BTA spectra of 2021.11.05 of KK242 for the N and S knots in therange from Hβ\\beta to Hα\\alpha and [Sii]λλ\\lambda \\lambda 6716,6731.Top panel: The N knot including part of continuum of star No.", "1(the extracted range along the slit of Δ\\Delta X = [–0.3,–1.5 arcsec]in Fig. 6.", "Bottom panel: The S region including continuum of the nebula(the extracted range of Δ\\Delta X = [–1.5,–3.5 arcsec] in Fig.", "6.Table: Average (top) and separate nights (bottom) parameters of S and N knots in KK242.Line fluxes in 10 -17 ^{-17} erg cm -2 ^{-2} s -1 ^{-1}; FWHM in Å.Figure: Hα\\alpha line and the adjacent continuum emission distribution alongthe slit in KK242 spectrum on 2021.11.05.", "The position of the brightest continuumpeak (X = 0) corresponds to position of Star No.1 at the HST image in Fig.", "1(right panel).Top panel: Net Hα\\alpha (red), sum of the adjacent continuum and Hα\\alpha (black), the adjacent continuum (green).", "The secondary 'peak' of continuum is at ∼\\sim 3arcsec to the South, is close to the positions of the Nebula and star No.", "5 and looksto be extended further to the South.Bottom panel: Same for the region of Hβ\\beta emission." 
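As a minimal illustration of the extraction and scale bookkeeping described in this section (our own sketch, not the SCORPIO pipeline; 'spec2d' and the peak rows are placeholders for the actual sky-subtracted frames):

import numpy as np

def extract_knot(spec2d, peak_row, n_rows=6):
    # Unweighted sum of n_rows rows (~2.1-2.5 arcsec) centred on the H-alpha
    # maximum of a knot, as described in the text; rows index the position
    # along the slit, columns the wavelength.
    start = peak_row - n_rows // 2
    return spec2d[start:start + n_rows, :].sum(axis=0)

# Angular-to-linear conversion at the adopted distance of 6.36 Mpc:
scale = 6.36e6 / 206265.0        # ~31 pc per arcsec
print(2.0 * scale)               # the ~2 arcsec knot separation corresponds to ~60 pc
print(0.5 * scale)               # the ~0.5 arcsec nebula diameter corresponds to ~15 pc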
], [ "Emission line parameters of KK242", "The results of the emission line measurements and analysis for the average spectra of all three nights are presented in Table REF .", "For the S and N emission-line 'knots', we present the absolute fluxes (in units of 10$^{-17}$ erg cm$^{-2}$ s$^{-1}$ ) of H$\\alpha $ and the [Sii] doublet and their widths (FWHM), as well as the equivalent widths of H$\\alpha $ and the measured heliocentric radial velocities with their uncertainties.", "We notice that the measured line widths in the spectra of the S knot are somewhat broadened relative to the instrumental FWHM $\\sim $ 5.1 Å, as measured for the H$\\alpha $ line in the N knot.", "While for the lines of the [Sii] doublet the S-to-N is lower, there is also a hint of their broadening in the S knot.", "The spectral resolution for the spectrum of 2021 November 5 is worse ($\\sim $ 6.2 Å, as indicated by the FWHM of H$\\alpha $ in the N knot).", "Also, for the S knot, this parameter is of lower accuracy.", "Hence, for the estimate of the broadening of H$\\alpha $ in the S knot, we use only the data from the first and third observations, conducted with SCORPIO-1 with the same set-up.", "This broadening from 5.1 to 5.8$\\pm $ 0.1 Å implies an intrinsic FWHM of $\\sim $ 2.8$\\pm $ 0.2 Å, or a respective velocity width of $\\sim $ 128$\\pm $ 9 km s$^{-1}$ .", "The radial velocities measured on H$\\alpha $ for the S and N knots, averaged over the three independent data sets, are, respectively, –77 $\\pm $ 10 km s$^{-1}$ and –55 $\\pm $ 10 km s$^{-1}$ .", "The radial velocities of the two knots seem to show a systematic difference of $\\sim $ 20 km s$^{-1}$ in all three spectra.", "At the moment, we have no additional arguments for treating one of them as more representative of the galaxy radial velocity.", "Therefore, we adopt their average of –66 $\\pm $ 10 km s$^{-1}$ as a robust estimate of the KK242 radial velocity in the H$\\alpha $ line.", "The line [Nii]$\\lambda $ 6584 is not detected.", "The upper limit on its flux F($\\lambda $ ) in the S knot, adopted as 2$\\sigma $ (noise) in the adjacent continuum multiplied by the instrumental FWHM = 5.0 Å, is 0.2 $\\times $ 10$^{-17}$ erg cm$^{-2}$ s$^{-1}$ .", "This is a factor of three smaller than the flux of the [Sii] doublet.", "The flux of the [Sii] doublet relative to that of H$\\alpha $ is a sensitive parameter for checking the possible range of gas metallicity in KK242.", "For the S knot, the contribution of the emission from the nebula, a probable SN remnant, can substantially exceed the expected contribution from an Hii region excited by a nearby hot massive star (see discussion in Sect.", "REF ).", "For the N region, we have an unambiguous case, in which only the nearby hot stars provide the ionizing radiation, so that the [Sii] lines should reflect the abundance and physical conditions characteristic of normal Hii regions.", "The comparison of the three individual spectra of the N knot indicates that the strength of the [Sii] doublet relative to that of H$\\alpha $ varies by a factor of 2–3.", "This means that these faint lines are in fact at the level of about one $\\sigma $ of the noise or so.", "Therefore, discussion of the [Sii] doublet in the individual spectra is irrelevant.", "To derive more reliable estimates of the strength of the [Sii] doublet, we obtained the weighted mean of all three individual spectra with the weights adopted as 1/$\\sigma _{\\rm noise}^{2}$ .", "Here the noise was estimated in the wavelength range between
H$\\alpha $ and the [Sii] doublet.", "As one can see in Figure 2 (middle) and Figure 3 (left), the signal in the individual lines of the doublet in the N knot is comparable to the noise spikes.", "We therefore employ the following approach to get the best approximation of the real line fluxes.", "We use all the a priori information on the line positions and widths, based on the measured parameters of the H$\\alpha $ line.", "We also adopt the low-density case and the related flux ratio I(6716)/I(6731) = 1.4.", "With these parameters fixed, we vary the amplitude of the main peak to minimise the residuals.", "The resulting line fluxes are shown in the top part of Table REF .", "For further discussion, we use for the N knot the flux ratio [Sii]/H$\\alpha $ = 0.18/1.63 = 0.11, with a conditionally adopted uncertainty of 50%, that is 0.11 $\\pm $ 0.055.", "For the S knot, the respective value of [Sii]/H$\\alpha $ is 0.67/2.98 = 0.225 $\\pm $ 0.073.", "We use this parameter in Section REF .", "In Fig.", "4, we present wider-range spectra of both the N and S knots of KK242.", "They show the absence of [Oiii]$\\lambda $ 5007, which is consistent with the very low excitation in the discussed Hii regions.", "For discussion of the possible interpretation of the visible nebular emission, it is useful to examine the H$\\alpha $ flux distribution along the slit relative to the northern peak of the continuum, related to the brightest red star in the HST image, star No.", "1 (see the right-hand panel of Fig. 1).", "In the top panel of Fig.", "5, we show the flux distribution along the slit for the H$\\alpha $ -line emission (red), for the adjacent continuum (green) and for their sum (black).", "To tie the relative positions of the line emission to the position of the 'bright' red star No.", "1 in Fig.", "1, we count the X-axis coordinate in Fig.", "5 from the position of the highest continuum peak, which corresponds to this star.", "Negative values of the X-axis correspond approximately to the southern direction.", "One can see that the H$\\alpha $ emission has a two-peak distribution, with both peaks shifted to the S relative to star No. 1.", "A two-Gaussian fit to the flux distribution gives their positions as X(S) = –2.5 arcsec and X(N) = –0.7 arcsec.", "Finally, the total flux of H$\\alpha $ is adopted as the sum of the nominal fluxes in the S and N knots according to the data of 2021.11.05, obtained with the better seeing and a wider slit (bottom of Table 2), namely F(H$\\alpha $ ) = (5.36 $\\pm $ 0.18) $\\times $ 10$^{-17}$ erg cm$^{-2}$ s$^{-1}$ .", "We expect that, due to the loss of a fraction of the H$\\alpha $ emission falling outside the slit, the real flux in this region can be a factor of 1.5–2 larger.", "We address the comparison of this parameter with earlier data in Section REF ."
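The constrained doublet fit and the width correction described above can be sketched as follows. This is our own illustration, not the reduction code actually used; the arrays wave, flux and err stand for the calibrated 1D spectrum of a knot, and the fixed numbers (heliocentric velocity, observed H$\\alpha $ FWHM, low-density 6716/6731 ratio of 1.4) are those quoted in Table REF and in the text.

import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458  # speed of light, km/s

def sii_doublet(wave, amp, v_hel=-55.0, fwhm_obs=5.1):
    # [SII]6716,6731 model with positions and width fixed from the H-alpha
    # solution and the 6716/6731 flux ratio fixed at 1.4; only the overall
    # amplitude 'amp' is free, as in the fit described above.
    sigma = fwhm_obs / 2.3548
    centres = np.array([6716.44, 6730.82]) * (1.0 + v_hel / C_KMS)
    weights = np.array([1.0, 1.0 / 1.4])
    return sum(amp * w * np.exp(-0.5 * ((wave - c) / sigma) ** 2)
               for w, c in zip(weights, centres))

# Hypothetical usage on the weighted-mean N-knot spectrum:
# popt, pcov = curve_fit(sii_doublet, wave, flux, p0=[1e-18], sigma=err)

# Quadrature removal of the instrumental profile from the broadened
# H-alpha line of the S knot:
fwhm_obs, fwhm_inst = 5.8, 5.1                      # angstroms
fwhm_int = np.sqrt(fwhm_obs**2 - fwhm_inst**2)      # ~2.8 A
print(fwhm_int / 6562.8 * C_KMS)                    # ~126 km/s, cf. the quoted 128 +/- 9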
], [ "In Table REF , we present positions and parameters of the brightest 8 stars and of one nebula which fall within the central area of the BTA long slit.", "They all can contribute to the visible continuum and the line emission.", "All these objects are marked in Fig.", "1 (right).", "This should help to identify them in the course of the further discussion.", "Their HST/ACS pixel coordinates and visible V and I magnitudes are taken from the table KK242.phot.WEB at the EDD sitehttps://edd.ifa.hawaii.edu/get$\\_$ cmd.php?pgc=4689184, where parameters of all measured in KK242 636 stars are presented.", "Their world coordinates are taken from the available HST fits images of this region.", "For the nebula, we obtained our own aperture photometry with the radius of 0.25 arcsec, which encompasses the whole object's light.", "We linked its instrumental magnitudes with those derived for star No. 1.", "From their difference, we derived the V and I magnitudes of the nebula, presented in Table REF .", "We further discuss objects identified at the HST images in Sect.", "REF .", "Table: HST based parameters of objects in the studied SF region of KK242" ], [ "Morphology of the SF region and related issues", "The long-slit position for spectroscopy of the Hii region in KK242 was determined based on the published H$\\alpha $ images available at that time [31].", "The slit PA = –15 was also chosen to put the slit closer to the visible elongation of the Hii region in order to increase the chances to catch a knot with a higher excitation.", "As the analysis of the HST image in Fig.", "1 (right) reveals, the slit properly covers all hot stars except the bluest one, no. 6.", "For this object, accounting for the seeing of 1.1–1.3 arcsec, we probably lose up to a half of the star and the related Hii region flux relative to the other discussed stars.", "Besides, as described at the end of Sect.", "REF , the nominal measured flux of H$\\alpha $ in this region is $\\sim (0.54 \\pm 0.02) \\times 10^{-16}$ erg cm$^{-2}$  s$^{-1}$ , with the possible upwards correction by a factor of 1.5–2 due to the loss on the slit.", "There are two independent estimates of this parameter (obtained from H$\\alpha $ imaging, in units of 10$^{-16}$ erg cm$^{-2}$  s$^{-1}$ ) in papers by [31] (2.3 with the uncertainty of factor of two) and [30] (4.7$\\pm $ 2.5).", "Taking into account their large uncertainties and our upward correction of the nominal flux of H$\\alpha $ , we suggest that the real flux of H$\\alpha $ in this region is of $\\sim $ 10$^{-16}$ erg cm$^{-2}$  s$^{-1}$ .", "Our task is to connect the visible H$\\alpha $ emission on the BTA long-slit spectra with the exciting hot stars visible in this region at the HST image.", "This allows us to better understand probable parameters of the SF episode in this region.", "Besides, this information will be helpful in the attempt of modelling the observed nebular emission (e.g.", "with Cloudy package, see the Appendix).", "As the derived (V–I)$_{\\mathrm {0}}$ colours of stars in Table REF indicate, the two brightest stars, no.1 and no.2 are not hot and therefore have no relation to the observed nebular emission.", "No.1 is likely a Red Helium Burning (RHeB), while no.2 and 7 with the colour (V–I)$_{\\mathrm {0}}$ $\\sim $ 0.0 mag are likely Blue Helium Burning (BHeB) stars [34], [35].", "Both types are products of the late evolution with the He-core burning of stars with the intermediate masses of 2–15 M$_{\\odot }$ .", "The measured absolute magnitudes of M$_{\\rm V}$ = 
–7.0 mag for star no.", "1 and –4.3 and –4.4 mag for no.2 and 7, evidence to the extended episode of SF in this region, with the duration of at least of $\\sim $ 100 Myr [34].", "Stars no.3, 4, 5 and 6 all are rather blue and luminous, seemingly representing the main-sequence late O-type and/or early B-type stars.", "We use the sample of O and B stars from the Wing of the Small Magellanic Cloud (SMC) of [44] for which these authors present the observed and model physical parameters.", "Since this large sample of OB stars have the nearest metallicity and thus are the best proxies to KK242 massive stars, we use their M$_{\\rm V}$ as a comparison for our blue luminous stars.", "As one can see, the SMC OB stars show the substantial scatter in their M$_{\\rm V}$ .", "Accounting for this information, the blue stars found in the KK242 SF complex can be assigned to the range of O9V to B1V.", "The bluest (hottest) star No.", "6 is probably an O9V, while the remaining, a bit redder stars, are probably of B0V–B1V type.", "The positions of blue stars no.", "3, 4 and 6 are close to the position of the brightness peak of the Northern H$\\alpha $ knot.", "Respectively, the position of the brightness peak of the Southern H$\\alpha $ emission is close to the position of the blue star no. 5.", "On the other hand, it is also rather close to the position of the adjacent red nebula, so that it can contribute to the observed nebular emission of the S region as well.", "Since the projected distances between the hot stars no.", "3, 4 and 6 are of $\\sim $ 20–30 pc, for the gas density N $\\gtrsim $ 10 cm$^{-3}$ , their Strömgren radii should be smaller than their mutual distances, so it is unlikely that they form a common Hii region.", "However, the separate Hii regions around these stars can contribute to the total observed nebular emission in the northern H$\\alpha $ knot due to the insufficient angular resolution along the slit.", "A similar situation takes place for the southern H$\\alpha $ emission.", "Here, the hot star no.", "5 is a clear candidate for the line emission from the related Hii region.", "However, the adjacent nebula, a probable SN remnant, can make a major contribution.", "Indeed, as one can see from Table REF , the H$\\alpha $ line flux is a factor of two larger in the southern knot relative to that in the northern knot.", "Taking into account that in the N region, we have three hot massive stars (no.", "3, 4, 6) with parameters close to those of star no.5 in the S knot, it is reasonable to assume that the H$\\alpha $ luminosity of Hii region excited by star no.5 is lower (about factor of 2–3) than the respective luminosity for the three Hii regions comprising the N knot.", "From this consideration it follows that the main contribution to the line and continuum emission of the S knot comes from the nebula, a candidate SN remnant.", "This coarse analysis shows that the observed spectra of this SF complex, with the limited angular resolution typical of the ground-based observations, represent rather complicated combination.", "So that their interpretation, even for a higher S-to-N case, can be not that straightforward.", "They can hope that for the exceptional ground-based observational conditions, with a seeing of $\\lesssim $ 0.5 arcsec and with the appropriate slit width oriented at a proper direction, one can disentangle the contribution of various Hii regions and enable obtaining of the strong-line fluxes and the subsequent estimates of the gas metallicity.", "More chances for the 
detection of the strong oxygen lines are expected for the Hii region around star no.", "6, the hottest of all massive stars in KK242, judging on its V–I colour.", "Also, the projected distance of $\\sim $ 0.7 arcsec between star no.", "5 and the centre of the nebula should be sufficient to disentangle emission lines of the two objects." ], [ "Environment", "The dwarf galaxy KK242, as well as its host spiral galaxy NGC6503, resides within a nearby void, No.", "22 (also named Dra-Cep) in the list of [41].", "Their distance to the nearest luminous galaxy NGC6946 is D$_{\\rm NN}$ = 2.25 Mpc.", "The void is a bit flattened spheroid with the large diameter of 21 Mpc.", "The centre of the void is at the distance of 13.9 Mpc from the Local Group, in the direction of RA = 20.4 h, Dec. = +71.", "The Dra-Cep void is situated above the supergalactic plane SGZ=0, being adjacent at SGX $\\sim $ +10 to +20 Mpc to the largest nearby void Oph-Sgr-Cap which includes the well known Local (Tully) Void (see illustration in fig.", "A5 of [41]).", "Fifty galaxies reside within the void boundaries [41].", "Of them, 44 are classified as the 'inner' void galaxies, defined as those with the distance to their nearest luminous neighbour $D_{\\rm NN} \\ge $ 2.0 Mpc.", "Both KK242 and NGC6503 are assigned to the 'inner' void galaxies." ], [ "KK242 as a void dwarf of transition type", "The phenomenon of dTr is not well understood.", "The assignment of a dwarf to the transition type is a purely phenomenological.", "While these dwarfs have in general the substantial amount of gas, comparable with that in dIrrs, their current star formation as traced by the H$\\alpha $ emission of the related Hii regions, is (almost) quenched.", "Five the nearest and seemingly the best studied examples of dTrs (or dIrr/dSph) are found within the Local Group and reviewed by [33].", "[49] added to them three similar dwarfs in the Sculptor group and via the analysis of the HST CMDs addressed the nature of \"transition\" dwarfs.", "They conclude that the examined dTrs are similar on the gas content to dIrr and are 'found preferentially among the lowest luminosities and nearer to spiral galaxies.", "Their appearance thus is caused by the temporary interrupted star formation.", "However, the tidal effects of massive hosts also may play a role'.", "Later [52], with deep HST data, studied SF histories (SFHs) of 60 dwarfs within 4 Mpc.", "Of them, 12 are classified as dTrs.", "One of their conclusions is that despite the large diversity, the mean SFHs of dIrrs, dTrs and dSphs are similar over most of cosmic time, with the clearest difference between the three only during the most recent 1 Gyr.", "They also conclude: 'In terms of their environment, SFHs and gas fractions, the majority of the dTrs appear to be low-mass dIs that simply lack H$\\alpha $ emission, similar to the LG dTr DDO210.", "However, a handful of dTrs have remarkably low (but detectable) gas fractions, suggesting that they nearly exhausted their gas supply, analogous to the LG dTrs such as Phoenix and LGS3.'", "As mentioned in Section Introduction, to date, the great majority of the known dTrs are found in groups and near massive hosts [29].", "This can be, at least partly, due to the observational selection effects, since for the faint isolated dwarfs of this type, the determination of their radial velocity, similar to the case of dEs, is a difficult task.", "One can think on the different origin and evolutionary scenarios of dTr objects, that can be related, in particular, to 
their global environment.", "For the majority of the currently known dTrs, the simplest scenario relates them to 'normal' dIrrs with the lowest baryonic mass, in which intermittent SF occurs with a duty cycle longer than a few tens of Myr.", "For alternative scenarios, in particular for gas-poor dTrs, there are various options.", "For example, a normal dIrr progenitor could lose the major part of its gas due to a close passage near a more massive host [10].", "Another option is a dSph/dE formed long ago, which was recently rejuvenated by gas accretion.", "As far as we are aware, the variant of metal-enriched gas accretion from the outer parts of a massive host on to a dwarf companion has not yet been modelled.", "That is, one cannot describe under which circumstances this would occur, if at all.", "On the other hand, in voids, where gas velocities in filaments are low [5], accretion of unprocessed gas from filaments on to small dwarfs can probably work.", "Coming back to the properties of KK242, we note that in its low gas content it resembles the minority of dTrs which have 'nearly exhausted their gas supply', in contrast to the main group of dTrs.", "Given the similar SFHs of dIs and dTrs, except for the last 1 Gyr [52], one of the evolutionary scenarios for dTr objects involves late gas removal and the related star formation quenching.", "Therefore, one can expect that the progenitors of gas-poor dTr objects have been evolving for most of their lifetime similarly to dIrrs of the same mass.", "Then, if the metallicity of the removed gas was typical of the gas in the whole galaxy, the remaining gas should be representative of the previous secular evolution.", "In this scenario, it is probable that the gas metallicity in dTrs is similar to that in dIs with the same stellar mass and luminosity."
], [ "Gas metallicity as expected from the global parameters", "The gas metallicity of late-type galaxies in the LV follows the trend described by the relation of 'O/H versus M$_{\\rm B}$ ' from [9].", "The respective linear regression reads 12+log(O/H) = 6.272 – 0.107 $\\times $ M$_{\\rm B}$ , with an rms scatter of log(O/H) of 0.15 dex.", "It extends over the range of M$_{\\rm B}$ = [–9.0,–19.0].", "The great majority of this LV reference sample belongs to typical groups and their close environs.", "As shown in [40], [43], the late-type dwarfs in the nearby voids have, on average, values of log(O/H) reduced by 0.14 dex (or by $\\sim $ 30 percent, with an rms scatter of 0.18 dex) relative to this reference relation.", "This finding was interpreted as evidence of slower galaxy evolution in voids.", "Consistent with this idea, void galaxies also have elevated Hi content, on average by 40 percent [38].", "Since KK242 is not a typical late-type dwarf, the above statistical relations between O/H and blue luminosity or stellar mass, derived in [9], may not be directly applicable to it.", "Those relations are assumed to reflect the specifics of the secular evolution of disc galaxies over a wide range of baryonic mass.", "It is nevertheless interesting to compare its gas metallicity with that of other dwarfs of the same blue luminosity.", "We first derive the expected gas O/H in KK242 if its Hii region(s) metallicity obeys the above reference relation for the Local Volume late-type galaxies from [9].", "For M$_{\\rm B}$ (KK242) = –10.5 mag, its expected 12+log(O/H) = 7.40 $\\pm $ 0.15 dex.", "[31] present an estimate of the total stellar mass of KK242 for their adopted distance of 5.27 Mpc.", "Accounting for the scaling due to the distance increased by a factor of 1.2, we adopt it as log(M$_{*}$ ) = 6.78 dex.", "We can use this log(M$_{*}$ ) for an alternative estimate of the gas O/H, based on a similar relation from [9], namely 12+log(O/H) = 5.61 + 0.29 $\\times $ log(M$_{*}$ ), with an rms scatter of $\\sigma $ = 0.15 dex.", "This gives the value 12+log(O/H) = 7.58$\\pm $ 0.15 dex.", "Taking the average of the two independent estimates (7.40 and 7.58), we adopt the expected value of the gas O/H for a typical dIrr with this M$_{\\rm B}$ and stellar mass as 12+log(O/H) = 7.49$\\pm $ 0.10 dex.", "If we take into account that the secular evolution of KK242 took place within a void, then, as mentioned in the beginning of this section, the expected value of O/H is lower, on average by 0.14 dex, that is, 12+log(O/H) = 7.35 $\\pm $ 0.18 dex."
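The arithmetic of this section can be condensed into a few lines (a check of the quoted numbers only; the rms scatters of 0.15, 0.10 and 0.18 dex are carried separately, as in the text):

# Expected gas-phase oxygen abundance of KK242 from the two Local Volume
# reference relations of [9] quoted above, plus the mean void offset of [43].
M_B, log_Mstar = -10.5, 6.78

oh_from_MB = 6.272 - 0.107 * M_B                    # 7.40
oh_from_Mstar = 5.61 + 0.29 * log_Mstar             # 7.58
oh_expected = 0.5 * (oh_from_MB + oh_from_Mstar)    # 7.49, 'typical dIrr' value
oh_void = oh_expected - 0.14                        # 7.35, void-corrected value
print(oh_from_MB, oh_from_Mstar, oh_expected, oh_void)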
], [ "Use of [S", "With a lack of information on the strong oxygen lines, our spectral data, on the first glimpse, cannot be used to derive more or less reliable empirical estimate of O/H.", "The only means to probe gas metallicity in KK242 and to check, whether this is consistent with the above estimate of the expected O/H, is the relative strength of [Sii] doublet.", "Its statistical relation with the parameter 12+log(O/H) can provide us, in principle, with the possible range of the gas metallicity of KK242 and help in the comparison with the gas metallicity expected within a particular scenario in the previous section.", "In the following discussion of the N knot, we adopt the flux ratio of the [Sii] doublet and H$\\alpha $ as a weighted mean of the three independent measurements, as shown in the top of Table REF , namely 0.110$\\pm $ 0.055.", "We adopt conditionally, for illustration, the error at the level of 50%.", "From the formal estimates, it's probably twice larger.", "In the further comparison of the strength of [Sii] doublet to galaxies with known O/H, we need its ratio to the flux of H$\\beta $ .", "We adopt it from the typical flux ratio of H$\\alpha $ and H$\\beta $ for the Case B recombination, of $\\sim $ 2.8.", "The latter is consistent with the flux ratio of the N component as visible in the intensity cuts along the slit in Fig.", "5 (top and bottom).", "Then, the respective parameter, called S2 (a ratio of [Sii] doublet flux to that of H$\\beta $ ), is adopted for further to be equal to 0.31+-0.155.", "That is the most probable range of S2 is [0.155,0.465].", "The respective value of lg(S2)=–0.509, with the most probable range of [–0.810,–0.333].", "The parameter log(S2) can be used in principle for comparison with the statistical data compiled by [37] for various Hii regions in galaxies with the wide range of O/H.", "In Fig.", "6, we plot the relation between the parameter log(S2) and 12+log(O/H) for a subsample of 161 data points from the compilation by [37] for all Hii regions with 12+log(O/H) (the direct method) in the range of $\\sim $ 7.1 – 8.0 dex.", "The vertical solid black and blue dashed lines mark the expected value of gas O/H for the absolute blue magnitude and stellar mass of KK242 and its $\\pm $ 1 rms corridor for the case when the gas metallicity (O/H) of KK242 obeys the reference relation for late-type galaxies from [9].", "As discussed above, for the S region, the H$\\alpha $ emission appears to be the sum of two components.", "The first is an Hii region excited by the star no.", "5, a probable star B0V-B1V, and the second is the emission from a round red nebula without a detectable exciting central star.", "As we argued above, the main contribution to the emission of this region is due to radiation from the nebula.", "The profile of H$\\alpha $ line in this region looks broaden by the amount corresponding to the intrinsic FWHM $\\sim $ 128 km s$^{-1}$ .", "The latter corresponds to a shell expansion with the characteristic velocity of $\\sim $ 64 km s$^{-1}$ .", "Therefore, it is very likely that the round nebula is a supernova remnant (SNR) with an age of less than 1 Myr.", "It is well known that the ratio of [Sii]$\\lambda \\lambda $ 6716,6731 flux to that of H$\\alpha $ (hereafter, [Sii]/H$\\alpha $ ) is enhanced in the optical spectra of SNR due to the shock excitation.", "The often used empirical criterion to assign the observed emission to the shock-excited is [Sii]/H$\\alpha >$ 0.4.", "However, [32] from the analysis of the shock-excitation 
models based on the package MAPPINGS III [3] show that this parameter can be substantially smaller for the subsolar gas metallicities and for the shock velocities less than 200 km s$^{-1}$ .", "Therefore, the observed in the Southern knot ratio [Sii]/H$\\alpha \\sim $ 0.225, accounting for a low gas metallicity and a 'small' shock velocity, is consistent with the expected in the models.", "Therefore, for the S region, we can not use this ratio for comparison with the statistical data for normal Hii regions.", "Figure: Plot of parameter log(S2) versus 12+log(O/H) for observed and modelHii regions.", "Black octagons show points for 161 Hii regions with 12+log(O/H)(T e _{\\rm e})< < 8.0.", "They are drawn from the compilation of the literature data in.Vertical solid black and blue dashed lines show the expected value of 12+log(O/H) and itsprobable variance (7.49±\\pm 0.10 dex), corresponding to M B _{\\rm B} and stellar mass of KK242,in the case if its O/H follows the reference relation 'O/H versus M B _{\\rm B}'for the LV sample of .", "Green solid and dashed verticallines show the probable O/H for similar galaxies residing in voids (7.35 dex)and its +1σ\\sigma value (7.53 dex) .", "The horizontal solid red and two bluedashed lines show the nominal value of log(S2) for KK242 Northern knotand its ±\\pm 1σ\\sigma range.We also draw the linear regression of log(S2) on log(O/H) for the above sample of161 points (solid black) and its upper envelope (dotted black).Red and green diamonds correspond to the maximal values of S2 for a given value ofO/H, which occur for models with the values of the ionisation parameterof lg(U) = –4.0 and –3.0, respectively.They occur for the lowest considered T eff _{\\rm eff}of the exciting stars, of 25–30 kK, as illustrated in Fig.", "A1 ofAppendix A.", "The red dotted line shows the linear approximation ofof positions of the red diamonds.We also add 9 points with the direct values of12+log(O/H) ≲\\lesssim 7.32 dex (blue squares), published after 2012(see references in the text), to illustrate the large scatter ofthe parameter S2 for the low values of O/H incomparison to the limited data from .", "See textfor details and discussion of this figure.For the N region, we adopt that the nebular emission, visible on the slit, is the sum of emission of three normal Hii regions excited by the hot stars no.", "3, 4 and 6.", "In Fig.", "6, the nominal value of log(S2)(KK242) = –0.509 for the N knot is shown by the red horizontal line, while the lines, corresponding to +1$\\sigma $ (–0.333) and –1$\\sigma $ (–0.81) are shown by the dashed blue lines.", "As well seen in Fig.", "6, the nominal value of log(S2)(KK242), being directly compared to the data points from [37], corresponds to rather wide range of 12+log(O/H) $\\sim $ 7.8 $\\pm $ 0.2 dex.", "The latter is significantly larger than the O/H expected for its very low M$_{\\rm B}$ and M${*}$ .", "This apparent inconsistency needs explanations and a discussion.", "Since the used data from [37] is based on Hii regions with the directly measured O/H, they all have the well detected line [Oiii]4363.", "According to the Cloudy models discussed in the Appendix, this case corresponds to Hii regions with the sufficiently large value of the ionization parameter, namely to log(U) $\\gtrsim $ –3.0.", "According to the same model grids, for a given gas metallicity, the parameter S2 can vary substantially for the range of T$_{\\rm eff}$ of the exciting stars of [25, 50] kK and the range of log(U) = [-4.0,-1.0].", "To discuss more 
general cases, we draw in Fig.", "6, in addition to the observed Hii regions with the direct O/H from [37], the model-predicted values of S2 for a wider range of log(U) and T$_{\\rm eff}$ , derived with the Cloudy package in Appendix.", "Here, green and red diamonds show the maximal values of log(S2) for nine values of 12+log(O/H) between 7.00 to 8.04 dex for log(U) = –3.0 and –4.0, respectively.", "These maximal values of S2 correspond to the minimal values of T$_{\\rm eff}$ from the grid with T$_{\\rm eff}$ = 25–50 kK with the step of 5 kK.", "Coming back to the N region of KK242, we notice that its nominal value of log(S2)=-0.51 corresponds to 12+log(O/H)=7.44 dex in the case of this region is ionized by the lowest T$_{\\rm eff}$ (25–30 kK) and the lowest log(U) source ($\\sim $ –4).", "This is well consistent with the expected void dwarfs metallicity mentioned above of 12+log(O/H) = 7.35 $\\pm $ 0.18 dex.", "It is worth to noting that the sample of the reference Hii regions from [37] does not cover completely the real parametric space in the plane 12+log(O/H), log(S2) for regions with the directly derived O/H, at least for the lowest gas metallicities.", "We added in Fig.", "6 ten data points with the direct 12+log(O/H) $\\lesssim $ 7.3 dex (blue squares), appeared in the literature after 2011 in papers by [22], [23], [24], [50], [17], [18], [43].", "Seven of them fit well the region defined by the data from [37] and its extension to the lower O/H.", "However, the parameter log(S2) for 'Little Cub' and regions UGC772-N2 and N3 falls significantly higher than for the remaining majority.", "In particular, the region UGC772-N2 has the S2 parameter close to that of the N knot in KK242 and very low 12+log(O/H) $\\sim $ 7.3 dex." ], [ "Probable parameters of exciting stars in KK242.\nSimilar case of Pegasus DIG", "To complete the discussion on the consistency of the observed parameter S2 in the Northern H$\\alpha $ knot in KK242 with the expected low gas metallicity, of 12+log(O/H) $\\lesssim $ 7.49 $\\pm $ 0.10 dex, we examine the range of possible parameters of the hot massive stars illuminating this H$\\alpha $ region.", "We are interested whether their effective temperatures and the ionising photon fluxes are consistent with the Cloudy package parameters resulting in the largest values of S2 for that low gas metallicity (red and green diamonds in Fig. 
6).", "The observed parameters of the related stars nos.", "3, 4 and 6 are summarized in Table REF and discussed in Section REF .", "We use the results of modelling of a large sample of massive OB stars in SMC presented in [44].", "The metallicity of this sample is the closest to that expected for gas and young stars in KK242.", "Thanks to the good statistics and the large set of modelled physical parameters, this allows one to use the average parameters for a star of the given spectral class as well as to understand the real range of their scatter.", "The rough estimate of the expected parameter log(U) in the considered region can be derived taking the typical flux of the ionising photons of B0V stars in the SMC, presented in [44], as follows: log(Q) = 47.26$\\pm $ 0.3 and the effective temperature of T$_{\\rm eff}$ = 30$\\pm $ 1 kK.We notice that these parameters differ substantially downwards from the adopted in Table 2.3 in the monograph of [36].", "The expected gas density is n$_{\\rm e} \\lesssim $ 10 cm$^{-3}$ .", "This leads to the Strömgren radius of $\\sim $ 15 pc and the related value of lg(U) = –4.0.", "As the examination of the Cloudy grids in Appendix shows, for 12+log(O/H) = 7.35, the largest value of parameter S2 occurs at lg(U) = –4.0, and for the lowest T$_{\\rm eff}$ = 25–30 kK, reaching the value of 0.26.", "Similarly, for grid with 12+log(O/H) = 7.53, the largest value of parameter S2 occurs at log(U) = –4.0, reaching the value of 0.37.", "The nominal value of the observed parameter S2 = 0.31, midway between the values of S2 for the latter the lowest log(U) models, implies that the respective value of its 12+log(O/H) falls midway between 7.35 and 7.53 dex, that is $\\sim $ 7.44 dex.", "For this combination, the expected ratio of F(3727)/F(H$\\beta $ ) = 1.2 – 1.4, F(5007)/F(H$\\beta $ ) $\\lesssim $ 0.02.", "It is worth to noting that despite the current data on the value of parameter S2 allows us to safely assign the N region to the very low metallicity regime, the higher S-to-N fluxes for the [Sii] doublet are required to increase the accuracy of the derived O/H.", "It is also interesting to compare the spectra of the N knot in KK242 with that of the Hii region 'A' (a brighter of two known) in the nearby dwarf galaxy Pegasus DIG (DDO 216=UGC 12613).", "In some papers this galaxy is assigned to dTr due to its morphology and very low current SFR.", "Its the adopted modern distance, derived with TRGB by [25], is of 0.97 Mpc.", "With the total blue magnitude B$_{\\rm tot}$ =13.22 and M$_{\\rm B}$ = –12.43 (HyperLEDA), PegDIG is almost 2 magnitudes more luminous than KK242.", "The spectrum of this Hii region was analyzed by [48].", "Its parameter S2 = 0.76$\\pm $ 0.05 is $\\sim $ 2.5 times larger than the observed in the N knot of KK242 (0.31$\\pm $ 0.15).", "The lines visible in the DDO216 region 'A' spectrum in the range of 3700 – 5300 Å, indicate very soft ionising radiation, with the flux ratio of [Oiii]$\\lambda $ 5007 and [Oii]$\\lambda $ 3727 (hereafter parameter O32) less than 0.03 (at the 2$\\sigma $ level).", "As the authors argue, that low excitation spectrum can be explained only by a sufficiently low effective temperature of an exciting star, namely, T$_{\\rm eff} \\lesssim $ 32.5 kK, consistent with a star B0V.", "As our Cloudy grids evidence, this value of S2 indeed emerges for the case of gas metallicity of 12+log(O/H)=7.92 as estimated by [48], for a star with T$_{\\rm eff}$ = 30 kK when log(U) = –4.0.", "The respective parameter O32 appears of $\\sim $ 
0.015, consistent with the observed upper limit.", "However, the model prediction for this case for the relative flux of [Oii]$\\lambda $ 3727 and H$\\beta $ appears $\\sim $ 2.0, a factor of 1.7 lower than the observed one." ], [ "Alternative value of KK242 gas metallicity and a possible related scenario", "In the previous sections we presented the arguments that in the N knot of KK242, the measured parameter S2 is well consistent with the value of the gas O/H, expected from [9] reference relations for log(O/H) versus M$_{\\rm B}$ and versus M${*}$ , and also with the reduced gas metallicity as expected for a void galaxy.", "With this low gas metallicity one could think on the typical late-type galaxy secular evolution and the 'recent' loss of the main gas mass and the related drop of the 'normal' star formation.", "Since the real uncertainty of the nominal value of S2 is $\\sim $ 100%, we should check the variant of interpretation of a twice larger value of S2, that is 0.62, or log(S2) = –0.21.", "As one can see in Fig.", "6, this value of log(S2) is close to the Cloudy model points for log(U) = –4.0 (red dotted line) at 12+log(O/H) $\\sim $ 7.8 dex that is too high for void dIrrs with the luminosity and mass similar to that of KK242.", "What kind of scenario could result at that elevated gas metallicity for the very low stellar mass and luminosity of KK242?", "One of the possible variants is related to the gas inflow to KK242 from the outer parts of the disc of NGC6503 in course of their pericenter passage.", "We did not find in the literature the published estimates of the gas metallicity in NGC6503 despite its 2D spectra in the range 3600–6800 Å were obtained with the integral field unit VIRUS-P [8].", "Therefore, we adopt for NGC6503 the expected gas metallicity, that follows from the relation in [9] for the Local Volume late-type galaxies.", "For its absolute magnitude of M$_{\\rm B}$ = –19.1 mag (HypeLEDA, averaged on several sources), the expected value of 12+log(O/H) is $\\sim $ 8.37 $\\pm $ 0.15 dex.", "For disc galaxies with the visible metallicity gradients, this parameter is usually adopted at the radial distance of r$_{\\rm eff}$ /2.", "Thus, taking into account rather small metallicity gradients in the subluminous galaxies like NGC6503 ($\\sim $ 0.02–0.03 dex r$_{\\rm eff}^{-1}$ ), we expect the gas metallicity in the outer disc of NGC6503 at the level of 12+log(O/H) $\\gtrsim $ 8.1–8.2 dex.", "Therefore, if we accept the hypothesis of gas inflow from NGC6503 to the extremely gas-poor predecessor of dTr KK242, with the subsequent triggered episode of star formation, we should expect the metallicity of gas in the N knot, corresponding to 12+log(O/H) $\\gtrsim $ 8.1$\\pm $ 0.15 dex.", "In Fig.", "6, the 12+log(O/H) = 8.1 dex for Cloudy models with log(U) = –4.0 (red diamonds) corresponds to the value of S2 $\\sim $ 1.0 (log(S2) = 0), that is more than 2$\\sigma $ larger than the nominal value S2 = 0.31.", "These estimates adopt the gas metallicity of NGC6503 based on the relation established by [9] on the subsample of the late-type galaxies within the Local Volume, and hence, should be nicely applicable to NGC6503.", "However, as discussed in our papers [40], [42], [43], this sample is mostly related to the typical groups and their environs.", "For galaxies with the same M$_{\\rm B}$ residing in the nearby voids, the gas metallicity (or log(O/H)) is in average lower by 0.14 dex.", "For 'luminous' galaxies similar to NGC6503, this effect is smaller, $\\sim $ 0.1 dex [43].", 
"Since NGC6503, as described in Section REF , resides in the void Dra–Cep, we expect, that the above estimates of 12+log(O/H) should be reduced.", "That is the expected value of gas O/H in the N knot, in the framework of the hypothesis with the gas inflow from the outer disc of NGC6503, is 12+log(O/H) $\\gtrsim $ 8.0 dex.", "We summarize this attempt to relate the highest possible value of S2 (about two S2 nominal) and the respective value of 12+log(O/H) $\\sim $ 7.8 dex for the KK242 N knot with the 'metal-rich' gas from the outer parts of NGC6503 (12+log(O/H) $\\gtrsim $ 8.0 dex), as follows.", "The gap between the two extreme possible values for the gas metallicity in KK242 and NGC6503 remains too large.", "So that it is hard to reconcile this hypothesis with the available data." ], [ "Fading stages of faint SF episodes and the problem\nof gas metallicity estimate", "In the light of the discussed above dwarf galaxies with only the weak tracers of the recent SF episode, we would like to emphasize that this type of objects should be numerous and widely spread among the low mass dwarfs.", "Due to their low masses, their gas metallicity is expected to be in the low-Z regime.", "Indeed, in a sizeable fraction of low mass late-type LSB galaxies, the observed SFR is subtle.", "See, e.g., [26] for dwarfs in the Local Volume and [15] for small gas-rich dwarfs selected from the ALFALFA survey [16] and IZw18C [21].", "The observed H$\\alpha $ luminosities per individual Hii region indicate a small number of the hot massive stars in such dwarfs.", "The LV dwarfs with the lowest observed luminosities of L(H$\\alpha $ ) of the order of $\\lesssim $ 10$^{36}$ erg s$^{-1}$ correspond to the ionising photon fluxes Q$_{0}$ of stars O9V and later.", "Several XMP galaxies from our recent papers [42], [43] fall to this regime as well.", "In addition, the new current and upcoming deep sky surveys will drastically increase the number of such galaxies.", "For an instantaneous SF episode, the probability to catch it in the early phases (say, younger than $\\lesssim $ 8–10 Myr) when sufficiently hot stars (conditionally, O5V-O8V) are still aliveif they were really formed in the modest total mass involved.", "and provide a high enough log(U) and T$_{\\rm eff}$ , is lower than to catch a region with the exciting stars of O9V-B0V, with the lower fluxes of the ionising photons and T$_{\\rm eff}$ .", "In such cases, if we wish to obtain the estimates of gas metallicity, we need some alternative means.", "For that low excitation conditions and the related very low fluxes of [Oiii] lines, the possibility to use the 'strong-line' empirical methods to estimate their gas metallicity is hampered.", "Due to various observational selection effects, such objects remain largely underexplored, in particular, in the context of their gas metallicity.", "We suggest to use for such low excitation Hii regions with the Hydrogen Balmer series lines and only a few heavy element lines detected – [Oii]$\\lambda $ 3727, [Sii]$\\lambda $ 6716,6731 doublets and probably [Nii]$\\lambda $ 6584, the grid of Cloudy package models with the low log(U) and low T$_{\\rm eff}$ as typical for Hii regions ionized by the early B-type and late O-type stars.", "The examples of such analysis for KK242 and DDO216 presented above, show that this is feasible.", "The main pre-requisite of the successful application of such models is a sufficiently good S-to-N ratio for the used heavy element line fluxes.", "The used here the Cloudy grids are based on the 
models of the central star with T$_{\\rm eff}$ .", "The more advanced grids, with the modern stellar atmosphere models and the stellar metallicity included, can give us an advanced mean to treat spectra and derive a more reliable estimates of the gas metallicity in the low excitation Hii regions in a large number of dwarf galaxies within the Local Volume and its environs.", "This, in turn, should allow one to address the issue of chemical evolution on a wider range of galaxy parameters.", "The output of such model grids in the form of the relative line fluxes can be used similar to the 'Counterpart' method of [37] which seeks in the reference database of the observed Hii regions with known direct O/H for a combination of the relative line fluxes which is the closest to that in the studied Hii region without detected [Oiii]$\\lambda $ 4363 line." ], [ "Physical parameters and star formation", "The dTr galaxy KK242 is very interesting in the context of its current and recent star formation.", "Its gas mass is several times smaller than the typical of the comparable stellar mass and luminosity late-type LSB dwarfs.", "Therefore, if its Hi gas distribution is not strongly clumped, this suggests the reduced column density.", "Indeed, as the VLA map in the Hi-line reveals [30], the peak Hi column density in this galaxy reaches only 3$\\times $ 10$^{19}$  atom cm$^{-2}$ .", "This is to compare with the typical threshold gas column density for the onset of star formation in dwarf irregulars, blue compact and LSB galaxies defined at the level of 1.0$\\times $ 10$^{21}$  atom cm$^{-2}$ for the linear scales of $\\sim $ 0.5 kpc [47], [51], [7], [12].", "This can be partly explained by the large effective beam-size of $\\sim 60 \\times 40$ arcsec$^2$ (or 1.85 $\\times $ 1.24 kpc), which smears the higher density features at smaller linear scales.", "However, the significant column gas underdensity of the KK242 SF region attracts the special attention to this point.", "The issue of the threshold column density for the onset of SF is not settled, however [12].", "The model calculations [45] predict a range for this parameter which depends on gas mass fraction, pressure, its metallicity, ionising flux radiation.", "Therefore, it will take much effort to understand whether the formation of the studied complex of several massive hot stars in KK242 took place in a low density gas under the special conditions.", "The important factor of the SF episode onset is the proximity of KK242 to its host NGC6503.", "As many N-body simulations indicate [11], the peak of the tidally induced SF episode occurs in a few hundred Myr after the first pericenter passage.", "We can roughly estimate the time since the passage of KK242 near the host.", "Taking the relative tangential component of velocity ($\\delta V_{\\rm tang}$ ) approximately equal to the relative radial velocity $\\delta V_{\\rm rad} \\sim $ 100 km s$^{-1}$ , and the mutual projected distance of $\\delta r \\sim $ 31 kpc, we estimate the respective time t$_{\\rm passage} \\sim $ 300 Myr.", "Therefore, the presence in the discussed SF region of RHeB and BHeB stars, with ages of $\\sim $ 100 Myr, does not contradict to the assumption that the long-lasting very localised SF episode was triggered by the strong tidal interaction with the massive host.", "To get a deeper insight into gas properties involved in the SF episode at such atypical conditions, it seems, one needs to wait for the ngVLA, which will allow one to obtain Hi data simultaneously with the high 
sensitivity and suitable angular resolution.", "In this study we primarily interested in the determination of the radial velocity of the ionised gas in KK242 in order to get its value independent on the previous Hi observation.", "Thanks to the proper observational conditions, we spatially resolved two faint emission regions (the N and S knots) within the studied SF complex.", "Their appearance is rather different that pushed us to analyze them individually.", "Thanks to the publicly available F606W and F814W HST images of KK242, as well as of the photometry of its individual stars, we were able to disentangle the exciting stars of the observed nebular emission.", "In the S knot we also identify a nebula, a likely SN remnant.", "Due to rather low S-to-N of the available spectra of this SF complex and comparatively low effective temperatures of the ionising massive stars, the only heavy element lines detected so far is the [Sii]$\\lambda \\lambda $ 6716,6731 doublet.", "Its flux ratio to that of H$\\beta $ (parameter S2) can be used as a rough empirical indicator of the gas metallicity.", "However, the S2 in our data has the large uncertainty that allows rather wide range of O/H.", "If we compare the observed S2 in the N knot of KK242 with the data for the sample of Hii regions with the directly derived O/H (that is with the medium or high excitation), it corresponds to values of 12+log(O/H) between 7.6 and 8.0 dex.", "However, the examination of our Cloudy package grids with the wide range of lg(U) and T$_{\\rm eff}$ of ionising stars, reveals the elevated values of S2 for the 'extremely' low values of lg(U) $\\sim $ –4.0 and T$_{\\rm eff}$ $\\sim $ 25–30 kK.", "These elevated values of S2 are consistent with the observed one for 12+log(O/H) as low as $\\sim $ 7.45 dex.", "Meanwhile, the mentioned above 'extremely' low lg(U) and T$_{\\rm eff}$ are consistent with those expected for B0V stars directly observed in this region at the HST images.", "The possibility of that 'low' value of O/H is important, since this is expected for KK242 luminosity and stellar mass from the reference relations for the LV late-type galaxies in [9].", "The gas metallicity is a crucial parameter for the choice between possible evolutionary scenarios.", "The currently available estimate of O/H is consistent with the case of the typical secular chemical evolution of late-type dwarfs as possible predecessors of the dTr KK242.", "This conclusion remains valid if we take into account that the evolution of KK242 took place in a void, and therefore one expects it to be reduced.", "We also examine an alternative scenario involving the higher metallicity gas inflow from the outer disc of NGC6503 to the 'gas-free' dE predecessor of KK242 after its pericenter passage.", "From the estimates of the possible metallicity of the in-flowed gas, this variant seems to be improbable due to the substantial gap between the upper limit of the gas metallicity in KK242 and the lower limit of that in the outer part of NGC6503.", "Therefore, the most likely scenario of the origin of KK242 as a void dTr, combines its secular evolution as a void low-mass dIrr and the 'recent' rapprochement and interaction with the much more massive host NGC6503.", "The pericenter passage of KK22 several hundred Myr ago resulted in the tidal stripping of its gas and triggered the episode of SF [10].", "The traces of this 'recent' star formation are seen in the GALEX UV images as light of stars with the ages of less than a few hundred Myr [31] as well as BHeB 
and RHeB stars in the studied here region with the faint H$\\alpha $ emission.", "Summarising all available data and the above analysis and discussion, we arrive at the following conclusions.", "KK242 is a transition type dwarf residing in a nearby void.", "We report its BTA spectroscopy and the new value of its radial velocity, V$_{\\rm hel}$ = –66$\\pm $ 10 km s$^{-1}$ , based on the observed H$\\alpha $ line in two adjacent regions of the star-forming complex identified by [31].", "This value is consistent with that of the recently found faint Hi emission from KK242 [30] and is lower than the radial velocity of its host spiral NGC6503 by $\\sim $ 100 km s$^{-1}$ .", "The appearance of these two regions in the BTA 2d spectra of KK242 looks very different due to the bright light of the 'unrelated' Red Supergiant at the projected angular distance of $\\sim $ 0.7 arcsec from the H$\\alpha $ intensity peak of the Northern H$\\alpha $ knot.", "On the HST images of KK242, we identify the sources responsible for the visible H$\\alpha $ emission within the BTA long slit.", "The Northern region is a superposition of three Hii regions exciting by the late O-type and early B-type stars at the mutual projected distances of $\\sim $ 0.7–1.0 arcsec (20–30 pc).", "The H$\\alpha $ emission of the Southern region is a superposition of an Hii region excited by an early B-star and of a probable supernova remnant at a projected distance of $\\sim $ 0.7 arcsec.", "Besides the early-type blue stars, we identify within the boundaries of this SF complex three additional luminous stars.", "They are tentatively classified as one RHeB star (the mentioned above Red Supergiant) and two BHeB stars.", "Their colours and absolute magnitudes imply their ages of $\\lesssim $ 100 Myr.", "This suggests that the recent SF episode as traced by several massive young main-sequence stars (O9-B1) in fact lasts in this location at least of $\\sim $ 0.1 Gyr or so.", "Due to rather low S-to-N ratio spectrum of the Northern Hii region and its low excitation, the only detectable metal lines appear those of the [Sii] doublet.", "We use the parameter S2 (the flux ratio of [Sii] to that of H$\\beta $ ) to constrain the gas metallicity in this region.", "The direct comparison of the nominal value of S2(KK242,N) = 0.31 with the Hii regions from the compilation of [37], allows the wide range of 12+log(O/H) = 7.6–8.0 dex.", "This range, however, is poorly consistent with the low 12+log(O/H) $\\sim $ 7.4$\\pm $ 0.1 dex, expected for KK242 low stellar mass and luminosity in the case it is treated as a dIrr which lost most of its gas.", "We undertake a deeper analysis of this situation taking into account the HST data on the ionising stars' parameters (O9V–B0V).", "We use the Cloudy package to construct grids of the common line flux ratios in Hii regions with the wide range of gas metallicities, ionisation parameter U and the effective temperature of the central star.", "The observed parameter S2 in the Northern knot of KK242 can be well consistent with the predicted in models with 12+log(O/H) $\\sim $ 7.4 dex, if both log(U) and T$_{\\rm eff}$ are very low, $\\sim $ -4.0 and $\\sim $ 25–30 kK, respectively.", "These log(U) and T$_{\\rm eff}$ are close to those expected for the massive hot stars visible in this region.", "For the Southern H$\\alpha $ knot, the higher S-to-N value of flux of the [Sii] doublet indicates its elevated ratio relative to the flux of H$\\alpha $ .", "We argue that since the main contribution to the emission of this 
knot comes from the nearby nebula, a likely SN remnant, this elevated [Sii] emission is related to the shock excitation in the SNR shell.", "We pay attention to the generalisation of the problem of gas metallicity determination in common low-excitation low-metallicity Hii regions of LSB dwarfs.", "In such regions, only a few heavy element emission lines are typically observed and the use of the popular strong-line empirical estimators can be impossible.", "We suggest to develop a grid of Cloudy package models representing the observed line ratios for the wide range of gas and young star metallicity when only a population with the ages of more than 10–15 Myr excites their related Hii regions (late O and early B-type).", "This will give one a new advanced mean to address the gas low metallicity in the dwarfs of the Local Universe with the low/subtle current SF." ], [ "Acknowledgements", "The work was supported by the Russian Scientific Fund (RScF) grant No. 22-22-00654.", "The authors thank I.D.", "Karachentsev for sharing some results on KK242 before publication.", "We also thank D.I.", "Makarov for the help with the use of the publicly available HST data.", "We acknowledge the constructive suggestions of the anonymous referee which helped us to improve and clear up the paper contents.", "The authors acknowledge the allocation of the SAO DDT time at BTA in November 2021.", "Observations with the SAO RAS telescopes are supported by the Ministry of Science and Higher Education of the Russian Federation.", "The renovation of telescope equipment is currently provided within the national project \"Science and Universities\".", "This research is partly based on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.", "These observations are associated with program SNAP-15922.", "We acknowledge the use of the Cloudy photoionisation code to model the intensities of the common emission lines in Hiiregions of the low excitation.", "We acknowledge the use for this work of the database HyperLEDAhttp://leda.univ-lyon1.fr.", "This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.", "The data underlying this article are available in the The Extragalactic Distance Database (EDD)http://edd.ifa.hawaii.", "edu/.", "The HST/ACS data used in this article are available in the STScI data archive." 
], [ "[S", "In this Appendix we use the Cloudy.17 version of the Cloudy radiative transfer code, as described by [13], to perform calculations of the possible range for the value of parameter S2 (flux ratio of the doublet [Sii]$\\lambda \\lambda $ 6716,6731 to that of H$\\beta $ ) for a range of Hii region gas metallicities.", "We took Oxygen abundances of 12+log(O/H) = 7.00, 7.17, 7.35, 7.53, 7.70, 7.90 and 8.04 dex.", "The relative abundances C/O, N/O, Ne/O, Si/O, S/O, Ar/O, Fe/O are adopted as the averages found in the systematic study of 54 supergiant Hii regions with the range of 12+log(O/H) = 7.1 – 8.2 dex in the paper by [20].", "Since the abundance of O/H is set in the units of the Solar one, we adopt 12+log(O/H)$_{\\odot }$ = 8.69 dex according to [6].", "We construct a grid of models with the range of the ionisation parameter U (log(U) from -4.0 to -1.0, with the step of 0.5, and the effective temperature of the central star (black-body) T$_{\\rm eff}$ from 25 to 50 kK, with the step of 5 kK.", "We then apply these results to the observational data for the Northern knot in KK242 (Fig.", "6 and Section REF ) and attempt to constrain its metallicity from the observed parameter S2.", "We run Cloudy in the mode \"sphere\" with the adopted input parameters of the inner radius of R$_{\\mathrm {0}}$ = 10$^{17}$  cm and the number density of the hydrogen nuclei of n(H)=10 cm$^{-3}$ .", "Cloudy then computes the structure of the photoionised region by requiring that the density be constant.", "The outer radius is assumed to be the one where the gas temperature falls to 4 kK since the colder gas practically does not produce optical emission lines.", "The resulting geometry of all models was closed, that is, the gas covers most of the central ionising source.", "The iterations were carried out until the optical depth of the line and continuum became stable.", "For the gas element abundances, we do not take into account a presumably small fraction of material which is possibly locked into grains.", "We remark for clarity, that the ionisation parameter U shown on the X-axes of Figs.", "A1 and A2, is obtained as a result of a model and relates to the Strömgren radius, in difference with an input parameter of a model at the inner radius.", "That is, it directly corresponds to the parameter U, estimated for a B0V star in Section REF .", "Figure: Plots showing variation of parameter S2 in an Hii region as a function of theionisation parameter log(U) for ionising central stars with the range of T eff _{\\rm eff}from 25 to 50 kK for three values of gas O/H, 12+log(O/H) = 7.35, 7.53 and 7.70,from top to bottom, respectively, as obtained with the package Cloudy.For the fixed O/H, the S2 parameter appears a factor of 2 or so larger atlog(U) ∼\\sim –4.0than for Hii regions with log(U) ≳\\gtrsim –2.5, typical of the observed forO/H (dir) (see Fig.", ").Figure: Similar plots as in Fig.", "showing variation of the line flux ratio[Oiii]λ\\lambda 4363 to Hβ\\beta .For T eff ≲_{\\rm eff} \\lesssim 35 kK, this ratio exceeds the conditional level of 0.01(for Hii regions with the direct O/H determination) only formodels with the ionisation parameter log(U) ≳-3.0\\gtrsim -3.0." ] ]
2212.05624
[ [ "Error-aware Quantization through Noise Tempering" ], [ "Abstract Quantization has become a predominant approach for model compression, enabling deployment of large models trained on GPUs onto smaller form-factor devices for inference.", "Quantization-aware training (QAT) optimizes model parameters with respect to the end task while simulating quantization error, leading to better performance than post-training quantization.", "Approximation of gradients through the non-differentiable quantization operator is typically achieved using the straight-through estimator (STE) or additive noise.", "However, STE-based methods suffer from instability due to biased gradients, whereas existing noise-based methods cannot reduce the resulting variance.", "In this work, we incorporate exponentially decaying quantization-error-aware noise together with a learnable scale of task loss gradient to approximate the effect of a quantization operator.", "We show this method combines gradient scale and quantization noise in a better optimized way, providing finer-grained estimation of gradients at each weight and activation layer's quantizer bin size.", "Our controlled noise also contains an implicit curvature term that could encourage flatter minima, which we show is indeed the case in our experiments.", "Experiments training ResNet architectures on the CIFAR-10, CIFAR-100 and ImageNet benchmarks show that our method obtains state-of-the-art top-1 classification accuracy for uniform (non mixed-precision) quantization, out-performing previous methods by 0.5-1.2% absolute." ], [ "Introduction", "Driven by advantages in scalability, privacy, low latency and cost, machine learning “on the edge” is garnering increased interest and application in diverse areas.", "However, modern state-of-the-art (SOTA) machine learning models based on dense neural networks are too large to run on edge devices, where memory and compute cycles are limited compared to the servers where models are typically trained and deployed [12].", "Quantization has emerged as an effective approach to compress full (32-bit) precision models by reducing the number of bits used to represent each model parameter.", "Unlike other methods that boost efficiency of ML models, quantization does not require altering the original model architecture or pruning weights [24] [12].", "Figure: An overview of our method: (a) Models are initialized with large quantization bin widths and correspondingly high quantization error.", "(b) After a few epochs the bin widths will reduce, increasing rounding accuracy and decreasing quantization error.", "(c) When quantization error becomes small enough, quantization noise kicks in to fine-tune model weights.Typically, the full-precision weights and/or activations of the neural network model are discretized by a quantization function.", "Post-training quantization can be performed on any existing model, but at the severe cost of accuracy, particularly at lower precisions such as 4 or 2 bits.", "Quantization-aware training mitigates this loss in accuracy by simulating quantization of full-precision weights and activations during training, allowing for model parameters to be optimized such that they form accurate predictions even when quantized.", "Quantization approaches can be further categorized into uniform and non-uniform quantization.", "Non-uniform quantization approaches have been shown to obtain better accuracy than uniform quantization [35][19], but they are not amenable to deployment on existing hardware, which 
would require special support for the specific mixed-precision schemes (e.g.", "additional data structures to store codebooks or quantization intervals [12]).", "In this work we therefore focus on the more practical setting of uniform quantization via quantization-aware training (QAT).", "The key challenge in QAT is approximating gradients to the non-differentiable quantization function.", "QAT methods typically quantize all the weights during the forward pass and introduce the straight through estimator (STE; [2]) to estimate the gradient of these non-differentiable layers.", "To be concrete, let $\\mathbf {Q}(w)$ be the non-differentiable quantization function and $Q_L$ and $Q_H$ be the lower and upper bound in an $n$ -bit system.", "STE sets the gradient $\\tfrac{\\partial \\mathbf {Q}(w)}{\\partial w} = 1$ for all $Q_L \\le w \\le Q_H$ and 0 otherwise.", "The gradient estimate of loss function $\\mathcal {L}$ with respect to weight $w$ is: $\\left.", "\\frac{\\partial \\mathcal {L}}{\\partial w}\\right|_{w} \\approx {\\left\\lbrace \\begin{array}{ll}\\left.", "\\frac{\\partial \\mathcal {L}}{\\partial \\mathbf {Q}(w)}\\right|_{\\mathbf {Q}(w)} & \\text{if}\\ Q_L \\le w \\le Q_H\\\\0 & \\text{otherwise}\\end{array}\\right.", "}$ This approach works reasonably well when the errors introduced by STE are small [36] e.g.", "quantizing 32-bit representations to 16 or 8 bits.", "However, since the gradient estimation shown in Eq.", "(REF ) does not exactly reflect the true gradient, it introduces bias and instability to training.", "Moreover, due to the deterministic property of such gradient estimation, there can be accumulation of periodic quantization error, as is well studied in the field of signal processing [26], [32].", "Specifically, as [9] illustrated, $\\mathbf {Q}(w)$ oscillates between the quantized value just above $(w_{+})$ and just under $(w_{-})$ the unquantized ground truth, $w_{*}$ , while $w$ oscillates around the boundary $(w_{+} + w_{-})/2$ .", "One method that has been proposed to address these challenges is by introducing additional parameters allowing for finer grained estimates.", "Learned step size quantization (LSQ; [10]) has been proposed to address these challenges by learning the quantization bin width, or step size, as a model parameter.", "The learned step size naturally provides more precision in optimization, however LSQ still falls within the regime of STE and thus still ultimately suffers from the same bias and instability issues, albeit to a lesser extent.", "An alternative approach to estimate the gradient during quantization-aware training is to introduce noise that simulates the quantization process.", "[9], [25] introduce a psuedo-noise quantizer, $\\tilde{\\mathbf {Q}}(w) = w + noise$ , inspired by analog-to-digital converter (ADC) simulation.", "The key advantage of such an additive noise approach is that it provides a potentially unbiased estimator of the true gradient when there is limited knowledge of the distribution of $\\mathbf {Q}(w)$ .", "Under the local linearity assumption: $\\mathbb {E}\\left( \\left.", "\\frac{\\partial \\mathcal {L}}{\\partial \\tilde{\\mathbf {Q}}(w)}\\right|_{w + noise}\\right) = \\left.", "\\frac{\\partial \\mathcal {L}}{\\partial \\tilde{\\mathbf {Q}}(w)}\\right|_{\\mathbb {E}[w + noise]} = \\left.", "\\frac{\\partial \\mathcal {L}}{\\partial w} \\right|_{w}$  [1] showed such noise may improve convergence.", "However, because the linearity assumption does not hold when the noise is large, and because simulating the 
quantization error requires the variance of noise to increase with model compression, the gradient estimator in fact does not reflect true gradient throughout the compression process.", "To propose a remedy, we revisit the theoretical foundation of SGD,  [30] proposed a framework formulating the gradient update as $W_{t+1} = W_{t} - \\eta (\\nabla \\mathcal {L}(W_{t}) + \\varepsilon (t))$ , $\\varepsilon (t)$ denotes random noise from Gaussian family updating at step $t$ towards convergence, which models the effect of estimating the gradient using mini-batches.", "They also proved that the \"temperature\" (which can be treated as the magnitude of the variance of mini-batch gradient), $T$ , is approximately $\\eta / B$ , where $\\eta $ is the learning rate and $B$ the batch size.", "Intuitively, the initial noisy phase allows the model to explore a larger fraction of the parameter space without getting trapped in sharp local minima.", "Once we find a promising region of parameter space, we reduce the noise to fine-tune the parameters through LR decay.", "However, this formulation is for vanilla SGD without quantization.", "Following such intuition, in this work, we design an additive noise to interact with quantization error during the different stages of SGD weight updates.", "We propose combining learned step size with additive noise using carefully controlled variances to further improve the performance of quantized models.", "Early during training when mini-batch gradient has large variance, we train using STE with variable step size.", "In the later stages of training once variances are small enough, quantization error aware noise kicks in to provide finer-grained updates.", "We will show such noise also tends to encourage the “flatness\" of the trained model, since it contains a second-order curvature term.", "Figure REF illustrates the overall process of our approach.", "Our experiments on ResNet architectures trained on the CIFAR-10, CIFAR-100, and ImageNet datasets demonstrate better accuracy than previous state-of-the-art uniform quantization approaches.", "Our contributions are listed as follows: We propose a novel algorithm for quantization-aware training by combining gradient scale and quantization noise, which improves state-of-the-art uniform quantized model performance without introducing additional parameters or observable computation cost.", "We provide extensive experimental analysis of our approach, motivated by a detailed theoretical analysis of the quantization and optimization process.", "We show that quantized model trained with our method tend to have flatter minima, which is favorable for generalization to unseen data." 
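To make the STE approximation discussed above concrete, the snippet below is a minimal PyTorch sketch of a uniform quantizer whose backward pass simply passes gradients through the rounding. It is our own illustrative code: the bin width s is a fixed constant here, while the learnable step size and the tempered noise are introduced in the following sections.

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Round to the nearest integer in the forward pass; identity gradient backward."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # STE: treat d round(x)/dx as 1

def quantize_ste(w, s, q_low, q_high):
    """Q(w) = round(clip(w/s, Q_L, Q_H)) * s with STE gradients w.r.t. w.

    The clamp makes the gradient 1 inside [Q_L*s, Q_H*s] and 0 outside,
    mirroring the STE rule stated in the introduction.
    """
    return RoundSTE.apply(torch.clamp(w / s, q_low, q_high)) * s

# Example: quantize a weight tensor to a signed 4-bit grid with bin width 0.1.
w = torch.randn(8, requires_grad=True)
q = quantize_ste(w, s=0.1, q_low=-8, q_high=7)
q.sum().backward()  # w.grad is 1 for in-range weights, 0 for clipped ones
```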
], [ "Related Work", "Pioneering work in quantization [8], [38] looked at quantizing model weights, activations, and gradients to accelerate neural network training.", "A recent surge of interest in quantization research is driven by the need to deploy ML models onto edge devices; [24] and [12] provide comprehensive overviews of the field to date.", "Recently, there is also an renewed interest in post-training quantization (PTQ).", "[22] analyzed the loss degradation by Taylor expansion, which inspired [18] to formulate a second-order approximation.", "[5] leveraged synthetic data to fine-tune the pre-trained model, and [23] analyzed into the loss landscape.", "However, PTQ methods still lag behind quantization-aware training (QAT) methods in accuracy, which are the focus of this work.", "The terminology QAT was first introduced by [14].", "QAT incoporates quantization error as part of the overall loss minimized during optimization." ], [ "Learnable quantization parameters", "Several works proposed learning-based approaches to improve QAT around the same time: [15] proposed to learn the quantizers’s dynamic range while training; [33] advocated for learning the optimal bit-width.", "Among them, LSQ [10] and its extension LSQ+ [3] were the simplest and best performing approaches, which propose learning the optimal quantizer step size (bin width).", "From there, the community [31] has focused on minimizing the quantization error rather than merely preserving the weights.", "These methods have all observed only minor performance drops compared to full-precision models, but nevertheless inherit the same instability and bias issues of STE [11].", "In this work, we propose to alleviate this issue by carefully mixing STE with noise throughout the training process." 
], [ "Quantization noise", "Additive noise to quantization in neural network models was first studied by [1], [25], and [11] crafted dropout-based noise by sampling at each layer and training step whether to use the quantized or unquantized weights.", "[9] leveraged ADC noise to completely replace the usage of STE.", "Additive noise based approaches avoid the risk of systematic bias but introduces variances to the system [25].", "There is a strong motivation for us to leverage noise to reduce the bias from STE, while controlling the variance of the noise itself.", "Uniform quantization versus non-uniform quantization The above-mentioned techniques are all uniform quantization, which builds a mapping from real values to integer values, resulting in uniformly spaced quantization values.", "Naturally, the non-uniformly spaced counterpart is called non-uniform quantization, and involves dequantization step.", "Recent works [35], [19] showed competitive performance, since they could better capture the quantized number distributions and focus more on important value regions.", "But they all require extra storage and tweaking the data structure, which limited the methods practicality to be deployed, given the exisiting DL software and hardware paradigm.", "[12]" ], [ "Stochastic Gradient Descent with momentum + Quantization", "It is known that model trained with SGD tends to generalize better than those trained with full-batch gradient descent.", "Recent works [29], [30], [28], [4], [27] theoretically explain this phenomenon, showing that, as a consequence of Central Limit Theorem, the mini-batch gradient used in SGD, $\\nabla \\hat{\\mathcal {L}}(W_t)$ , can be treated as the clean full-batch gradient, $\\nabla \\mathcal {L}(W_t)$ plus an additive Gaussian noise term evolving over time $\\varepsilon (t)$ (briefly introduced in § ) with mean of 0 and variance approximately of $\\Sigma (W_t)/B$ .", "Here, the term $\\Sigma (W_t)$ is the graident covariance matrix, which is a function of current parameter values.", "Thus, we can express the SGD update with the form $W_{t+1} = W_{t} - \\eta (\\nabla \\mathcal {L}(W_{t}) + \\varepsilon (t))$ , where $W_{t}$ is the collection of trainable parameters at step $t$ , $\\eta $ the learning rate, $B$ is the batch size, and $\\mathcal {L}(\\cdot )$ the loss function.", "Then suppose we define \"temperature\" $T = \\eta /B$ , we can then draw the noise term, $\\varepsilon _\\Phi $ from standard normal distribution $\\mathcal {N}(0,I)$ , and arrive at the following equivalent form: $W_{t+1} = W_{t} - \\eta \\nabla \\mathcal {L}(W_{t}) + \\sqrt{\\eta T} \\cdot \\Sigma (W_t)^{1/2} \\cdot \\varepsilon _\\Phi $ Here the temperature term $T$ controls the magnitude of the noise in SGD graident and it is proven by [4], [30] that a higher temperature promotes the model to converge to a flatter minima in the loss landscape.", "This is important in QAT processes.", "While in quantized models, the SGD update follows a similar pattern, because the quantization function $\\mathbf {Q}(W)$ is usually non-differentiable, a common practice is to use straight through estimator [2] to pass the gradient from $\\mathbf {Q}(W)$ to $W$ , as described in Eq.", "REF .", "Assume that $W^{\\prime } \\subseteq W$ such that $Q_L \\le W^{\\prime } \\le Q_H$ (so that the STE gradient is 1, the elements in $W \\setminus W^{\\prime }$ has STE gradient of 0, which are not updated), then the SGD update in a quantized model is: $\\begin{aligned}W_{t+1}^{\\prime } &= W_t^{\\prime } - \\eta 
\\frac{\\partial \\hat{\\mathcal {L}}}{\\partial \\mathbf {Q}(W)} \\frac{\\partial \\mathbf {Q}(W)}{\\partial W} = W_t^{\\prime } - \\eta \\left(\\nabla _{\\mathbf {Q}(W)} \\mathcal {L} + \\varepsilon (t) \\right) \\\\&= W_t^{\\prime } - \\eta \\nabla _{\\mathbf {Q}(W)} \\mathcal {L} + \\sqrt{\\eta T}\\cdot \\Sigma (W_t^{\\prime })^{1/2} \\varepsilon _\\Phi \\end{aligned}$ Here, we take $\\Sigma (W_t^{\\prime })$ as the gradient covariance matrix after marginalizing out $W\\setminus W^{\\prime }$ in the original noise term $\\varepsilon $ .", "The major difference in the update compared to the full precision model is the replacement of the clean full-batch gradient term $\\nabla _W \\mathcal {L}$ with $\\nabla _{\\mathbf {Q}(W)} \\mathcal {L}$ , which results in biased gradient estimation.", "Thus, there is a stronger need for such flatness for those parameters which incur high quantization error, namely when $|\\mathbf {Q}(w)-w|$ is large.", "This is because a larger quantization error leads to a less accurate update in Eq. REF , and consequently the parameter $w$ can get stuck at some sub-optimal value near the true minimum, $w^*$ , as mentioned in [9].", "One way to alleviate the negative impact of this behavior is to prevent the model from getting stuck at sharp minima, where the loss incurred due to a small deviation can be more significant.", "In order to achieve this, we introduce an extra quantization-error-dependent temperature term $T_Q$ ($T,\\ T_Q \\propto \\text{Var}[\\varepsilon (t)]$ , and $T_Q$ and $T$ have the same order) as follows: $W_{t+1}^{\\prime } = W_t^{\\prime } - \\eta \\nabla _{\\mathbf {Q}(W)} \\mathcal {L} + \\sqrt{\\eta T}\\cdot \\Sigma (W_t^{\\prime })^{1/2} \\varepsilon _\\Phi + \\eta \\gamma _Q\\sqrt{T_Q} \\tilde{\\varepsilon }_\\Phi $ where $\\varepsilon _\\Phi $ and $\\tilde{\\varepsilon }_\\Phi $ are independent standard Gaussian noise sources, and $\\gamma _Q$ is a scaling function of the quantization-error-dependent noise term.", "This allows the model to dynamically adjust the gradient temperature during the QAT process and encourages the model to choose flatter minima, which are more robust even if parameter updates get stuck at sub-optimal values.", "In the case of SGD with momentum, the momentum accumulation will introduce a scaling factor $\\alpha (m)$ (a function of the momentum coefficient $m$ ) to the noise term.", "In Appendix A.1 we provide a derivation showing that $\\alpha (m) \\approx \\frac{1}{1-m^2}$ .", "Defining the true accumulated gradient of SGD with momentum as $V_T = \\sum _{t=0}^{T} m^t \\cdot \\nabla _{\\mathbf {Q}(W)}\\mathcal {L}$ , the update in SGD with momentum is $W_{t+1}^{\\prime } = W_t^{\\prime } - \\eta V_t + \\sqrt{\\frac{\\eta T}{1-m^2}}\\cdot \\Sigma (W_t^{\\prime })^{1/2} \\varepsilon _\\Phi + \\eta \\sqrt{\\frac{T_Q}{1-m^2}}\\gamma _Q \\tilde{\\varepsilon }_\\Phi $" ], [ "Quantization function", "In order to use the idea of SGD tempering to improve quantization convergence, we solve a particular case of Eq. REF with the following setting of the scaling function $\\gamma _Q$ and the quantization-error-dependent temperature $T_Q$ : $\\begin{aligned}\\gamma _Q &= c \\cdot \\exp \\left(-k|\\mathbf {Q}(W)-W|\\right) \\nabla ^2_{\\mathbf {Q}(W)}\\mathcal {L}\\\\T_Q &= |\\mathbf {Q}(W) - W|\\end{aligned}$ where $0<c<1$ and $k>0$ are hyperparameters.", "For the temperature term, we use the simplest quantization-error-based function, i.e., the magnitude of the quantization error, $|\\mathbf {Q}(W) - W|$ .", "The scaling
function appears more complex when expanded out in the gradient noise.", "However, it allows us to write a clean quantization function, as in Eq. REF , that approximately gives this desired gradient noise.", "We provide a proof of this in Appendix A.2.", "Furthermore, the second order derivative term $\\nabla ^2_{\\mathbf {Q}(W)}\\mathcal {L}$ provides a powerful mechanism to lower the quantization-error-aware tempering term even when the quantization error is large.", "This happens when the model already ends up close to a flat minimum, corresponding to a smaller second order derivative term.", "In such a case, as the model approaches the desired local optimum in the loss landscape, it is desirable to reduce the temperature of the gradient noise to allow better convergence.", "$\\tilde{\\mathbf {Q}}(W) = \\mathbf {Q}(W) + \\text{sg} \\left( c \\cdot \\exp ({-k| \\mathbf {Q}(W) -W |}) \\cdot \\sqrt{| \\mathbf {Q}(W) -W |} \\cdot \\varepsilon _\\Phi \\right)$ In order to test the effectiveness of quantization-error-aware tempering, we choose $\\mathbf {Q}(W)$ in Eq. REF as the step-size-based quantizer in Eq. REF , which is an effective yet simple quantization technique commonly adopted [6], [16], [10].", "We also use the stop gradient operator $\\text{sg}(\\cdot )$ to prevent any gradient from the noise term from back-propagating to the weights.", "$\\mathbf {Q}(W) = \\lfloor \\text{clip}(W/s, Q_L, Q_H) \\rceil \\cdot s$ Here, $Q_L$ and $Q_H$ are the lowest and highest integers in the $N$ -bit setting.", "We let the quantization step size $s$ be trainable in our model and assign one step-size parameter per module.", "The gradient of $s$ is computed using the STE and thus has the following element-wise form, as in [10].", "Our noise $\\varepsilon _\\Phi $ is sampled from an isotropic Gaussian distribution, i.e., $\\varepsilon _\\Phi \\sim \\mathcal {N}(0, I)$ .", "$\\frac{\\partial \\tilde{\\mathbf {Q}}(w_i)}{\\partial s}={\\left\\lbrace \\begin{array}{ll}-w_i/s + \\lfloor w_i/s \\rceil & \\text{if $Q_L < w_i/s < Q_H$}\\\\Q_L & \\text{if $w_i/s \\le Q_L$}\\\\Q_H & \\text{if $w_i/s \\ge Q_H$}\\end{array}\\right.}$", "Figure: (A) Deviation of the noise term versus quantization error. The figure shows that the noise is non-trivial only when the quantization error becomes smaller, which tends to happen when the step-size parameter becomes lower. (B) The effect of different step sizes in quantization; the flat lines indicate value clamping. As the step size becomes smaller, the model can trade weight range for more accuracy in the gradient estimation, i.e., getting closer to the perfect quantizer, but values get clipped at a smaller magnitude. At the same time, the quantization error will also decrease."
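For reference, the following PyTorch sketch shows our reading of the noise-tempered quantizer and the step-size-based quantizer defined in this section: a learnable step size trained through the STE, with the quantization-error-aware noise placed behind a stop-gradient (detach). The module name, bit-width handling, and the initial value of s are our own choices; this is an illustrative sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class NoiseTemperedQuantizer(nn.Module):
    """Sketch of Q~(W): learnable step size s plus error-aware Gaussian noise."""

    def __init__(self, n_bits=4, c=0.3, k=50.0):
        super().__init__()
        self.q_low = -(2 ** (n_bits - 1))         # lowest signed integer Q_L
        self.q_high = 2 ** (n_bits - 1) - 1       # highest signed integer Q_H
        self.s = nn.Parameter(torch.tensor(0.1))  # one step size per module
        self.c, self.k = c, k

    def forward(self, w):
        # Q(W) = round(clip(W/s, Q_L, Q_H)) * s, with an STE round.
        w_scaled = torch.clamp(w / self.s, self.q_low, self.q_high)
        w_round = (torch.round(w_scaled) - w_scaled).detach() + w_scaled
        q = w_round * self.s
        if self.training:
            err = (q - w).abs().detach()          # |Q(W) - W|, no gradient
            noise = self.c * torch.exp(-self.k * err) * err.sqrt() \
                    * torch.randn_like(w)
            q = q + noise                         # sg(.): noise carries no gradient
        return q
```

With this formulation, the gradient of the output with respect to s reduces to ⌊w/s⌉ − w/s for unclipped weights and to Q_L or Q_H for clipped ones, matching the element-wise derivative given above.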
], [ "Initial Phase", "During the initial training phase, the quantization errors are usually fairly large, i.e.", "$\\mathbf {Q}(w)-w\\vert $ is relatively large with a high probability and our quantization method would limited effect in noise temperature as shown in Figure REF (a).", "Since the hyper-parameter exponential scaling factor $k$ should be set as a relatively large integer, such as 50 in our later experiments, we control the additive noise to be a small value through scaling with the exponential term in equation (REF ).", "In practice, with a large scaling, such as $k=50$ , the additive noise term becomes significant only when $|\\mathbf {Q}(w)-w| \\gtrapprox 10^{-2}$ .", "This phenomenon is illustrated in Figure REF (A), the graph is plotted by $y = \\frac{x+c \\cdot \\exp {(-kx)\\sqrt{x}}}{x}$ where $x=|\\mathbf {Q}(w)-w|$ is the quantization error.", "To illustrate effect of noise we set it at 1 standard deviation from the mean." ], [ "Deeper Into Finetuning", "As the QAT training progresses, the quantization step-size tends to decrease together with the error in quantization.", "However, due to the necessity of maintaining large enough weights, the step-size cannot goes infinitely close to 0 since then all weights will be clipped as shown in Figure REF (B).", "Depending on the choice of hyperparameter $k$ , when the step-size $s$ becomes small enough, the exponential term in the scaling factor $\\exp ({-k| \\mathbf {Q}(W) -W |})$ will be close to 1, adding the quantization-error-aware temperature to the gradient noise will encourage the model to find flatter basin in the loss landscape.", "Lastly, to help the final convergence of the model, we allow the learning rate as a factor to scale quantization-error-aware temperature and use learning rate schedule to stabilize the model at termination phase of training." 
], [ "Experiments & Results", "We perform experiments on CIFAR-10, CIFAR-100, and ImageNet dataset to verify the effectiveness of our proposed quantization.", "The CIFAR-10/100 dataset contains 50k training images and 10k test images, with 10/100 classes.", "The ImageNet dataset contains 1.2M training images and 50k test images, with 1,000 classes.", "We compared our results with the state-of-the-art uniform methods using several classical Computer Vision (CV) models, including ResNet-18, ResNet-34, ResNet-50, and WideResNet.", "We listed our experiment setup details in Appendix A.3.", "Table: Results on CIFAR datasets.", "FP indicates accuracy in the full precision case, ** indicates the model has variable bit-width and thus score with equivalent model size under that precision level is reported, and — indicates no reported result.Table: Accuracy comparison to the state-of-the-art quantization methods with ResNet structure on ImageNet dataset.", "Techniques under comparison LSQ , diffQ , QIL , FAQ , NICE , PACT , LQ-Nets .", "We also added the state-of-the-art non-uniform quantization method, LCQ , as a reference.", "** denotes the result with variable bit-width but with equivalent model size under that precision level.Table REF compares the accuracy of the proposed and two other SOTA quantization methods at three different bit-widths for the CIFAR-10 and CIFAR-100 dataset.", "As we can see, our method could outperform other SOTA methods.", "Note here we reimplemented LSQ method with the exact setup (same hyper-parameters, same preprocess) as described in their paper, since the original implementation is not available.", "On ImageNet dataset, we compare our method's accuracy with state-of-the-art quantized networks and full precision baselines, as is shown in Table REF .", "To facilitate fair comparison, we only consider published works that quantize all the layer weights to the specified precision.", "In some cases, we reported higher accuracy than the original publication since our reimplementation of their approach surpassed what they claimed.", "We notice the baseline performance of FP32 models reported by previous quantization models varies about 2%, e.g.", "[19]'s basline ResNet18 FP model is ($71.8\\% > 69.7\\%$ ) 2.1% better than the timm [34] baseline which was using the PyTorch baseline.https://pytorch.org/hub/pytorch_vision_resnet/ This is significant, since the results of quantized models are usually compared to its FP baseline, missing it by a tiny margin (within 1%).", "Give a fluctuating baseline, it is difficult to gauge how much better a quantization method is.", "All of our FP model baseline results are standard numbers which is reported at the timm library readme page.", "https://rwightman.github.io/pytorch-image-models/results/ As is shown in Table REF , our proposed method could outperform LSQ across the board.", "When the model complexity goes up, and the bits go up, our method could clearly outperform all previous methods.", "Our Resnet-18 model seem to get capped off by a relatively weaker basline FP model, consequently, we were not able to reproduce the 71.1% accurarcy as claimed by LSQ and DiffQ for Resnet-18 4-bit.", "The performance of our reimplemented LSQ only reaches 70.4%.", "However, our 2-bit quantization of Resnet-18 only miss 1.3% from the full precision model and our 8-bit quantization could surpass the FP model by 1.1%.", "We found that our model achieved a higher top-5 accuracy than all previous reported approaches for 2-, 4- and 8- bit networks with 
the architectures considered here.", "In nearly all cases, our method achieves the best 8-bit performance to date.", "Interestingly, our method, although a uniform quantization method, matches the performance of the non-uniform quantization method LCQ [35].", "Figure: Max weight, LSQ vs. our method. Smoothing 0.999; the background indicates the range of values without smoothing.", "In practice, the need to run larger models on edge devices is of more value to the task of quantization.", "For instance, running a ResNet-18 model ($<10MB$ ) is already possible on the vast majority of SOTA hardware without quantization.", "In contrast, shrinking a well-performing large model like ResNet-50 to fit on an edge device should be the core driver behind model quantization.", "In light of this, we argue that our method should be of more practical value than the existing quantization methods." ], [ "Flatness of Local Optima", "In order to verify that our quantization-error-aware noise tempering indeed allows the model to converge to flatter local optima, we pick our best-performing ResNet18 checkpoints at various precisions on CIFAR-100 and measure the local loss landscape sharpness.", "We adopt the definition of sharpness from [17] and compute the maximum value of the loss function in a constrained neighborhood around the minimum, to avoid the computationally expensive task of computing the eigenvalues of $\\nabla ^2\\mathcal {L}(x)$ .", "Specifically, this is achieved through projection onto an $\\ell _\\infty $ norm ball with radius $\\rho $ around the weight parameter $W$ : $\\max _{\\Vert \\epsilon \\Vert _\\infty < \\rho } \\mathcal {L}(W + \\epsilon ) - \\mathcal {L}(W)$ We follow the implementation from [21] to compute the sharpness scores of the checkpoints trained with our proposed noise versus without noise.", "Our results are summarized in Table REF .", "As we can see, the models trained with noise tend to achieve lower sharpness scores, indicating flatter minima, which is favorable for generalization.", "This also echoes the better performance of our models reported in Table REF .", "Table: Sharpness with and without noise tempering (lower is better)" ], [ "Effect of hyperparameters", "In our proposed method, we introduced two hyperparameters, $c$ and $k$ , as in Eq. REF .", "The hyperparameter $c$ controls how much the noise temperature is raised at a particular quantization error.", "The hyperparameter $k$ , on the other hand, controls when the system should raise the temperature during the training process.", "Interestingly, we found that a value of $k \\le 50$ usually does not impact the final performance much as long as the model is trained long enough, even when $k=0$ , in which case the temperature is raised based on the quantization error right at the beginning of QAT fine-tuning.", "Thus, we set $k=50$ for all experiments, allowing faster convergence, and mainly study the effect of $c$ here.", "To investigate how this hyper-parameter influences the accuracy, we tried different noise levels ranging from 0, which is equivalent to LSQ [10], to 0.4 on CIFAR-100 and summarize our results in Table 5 and Appendix A.4.", "Evidently, in most cases, adding noise as in our method improves the performance.", "Even though there seems to be an optimal noise level for each model-precision pair, it is usually still possible to benefit from a sub-optimal noise level.", "In our experience, it seems that a noise level factor of $c = 0.2 \\sim 0.4$ is usually a good option and
generates consistently better results on CIFAR-10, CIFAR-100, and ImageNet." ], [ "Step size, quantization error, and weights", "Beyond the effect on accuracy, we also seek to understand how the noise impacts metrics including the quantization error and the step-size parameter $s$ described in equation (REF ).", "Quantization errors and the scales of the weights are shown in Table 6(a,b) in Appendix A.5.", "As shown there, scales and quantization errors are closely correlated.", "The complexity of the models does not seem to affect the quantization error; it is only negatively correlated with the bit-width.", "The more bits, the smaller the scale and thus the smaller the quantization error.", "Compared to the LSQ method, our method tends to slightly inflate the quantization error to balance it against the task loss.", "Figure REF shows that the mean value of the max weights of our method is larger than that of the LSQ method across the board.", "This indicates that our method might be better at capturing larger weights than LSQ." ], [ "Closing Statements", "In summary, our proposal to incorporate a pseudo quantization noise together with a learnable scale of the task loss gradient seems to strike a balance between minimizing the task loss and minimizing the quantization error.", "Our experiments demonstrate that our method outperforms existing methods by a visible margin.", "Without introducing additional complexity or extra training time, our method seems to bring a \"free lunch\" of an extra 1% accuracy gain.", "Limitation: We showed our method works with SGD, following a clear theoretical motivation.", "However, for adaptive optimizers such as Adam and AdamW, further analysis is needed.", "The loss landscape would also change drastically with an adaptive optimizer, and so would all the weight updates during quantization.", "Broader Impact: Our method is simple to implement and can be readily reproduced in existing deep learning frameworks such as PyTorch and TensorFlow.", "It is widely applicable to a variety of ML models to improve their efficiency, letting users trade off bit-width and accuracy as needed.", "Furthermore, our theoretical intuition of using noise could potentially inspire more creative methods of engineering noise to suit other tasks."
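As a companion to the sharpness measure used in the 'Flatness of Local Optima' section, the sketch below estimates the constrained maximum of the loss with a few steps of projected gradient ascent inside an L-infinity ball of radius rho around the current weights. This is our own simplified stand-in for the implementation of [21]; model, loss_fn, data and target are assumed to be supplied by the caller, and the radius and step size are arbitrary defaults.

```python
import copy
import torch

def sharpness(model, loss_fn, data, target, rho=5e-4, steps=10, step_size=1e-4):
    """Rough estimate of max_{||eps||_inf < rho} L(W + eps) - L(W)."""
    model.eval()
    base_loss = loss_fn(model(data), target).item()
    perturbed = copy.deepcopy(model)
    originals = [p.detach().clone() for p in model.parameters()]
    worst = base_loss
    for _ in range(steps):
        loss = loss_fn(perturbed(data), target)
        grads = torch.autograd.grad(loss, list(perturbed.parameters()))
        with torch.no_grad():
            for p, g, p0 in zip(perturbed.parameters(), grads, originals):
                p.add_(step_size * g.sign())                  # ascend the loss
                p.copy_(torch.clamp(p, p0 - rho, p0 + rho))   # project onto the ball
        worst = max(worst, loss_fn(perturbed(data), target).item())
    return worst - base_loss
```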
], [ "Noise in SGD with momentum", "For SGD with momentum, we can define the “True Accumulation\", $V$ , and “Estimated Accumulation\", $\\hat{V}$ just like the vanilla SGD.", "The “true accumulation\" at step $T$ is defined as the the discounted sum of all true gradient from step 0 to step $T$ : $V_T = \\sum _{t=0}^{T} m^t \\cdot \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L}$ Likewise, the “estimated accumulation\" is defined as the discounted sum of all estimated gradient from mini-batches: $\\hat{V}_T = \\sum _{t=0}^{T} m^t \\cdot \\nabla _{\\mathbf {Q}(W_t)}\\hat{\\mathcal {L}} = \\sum _{t=0}^{T} m^t \\cdot \\left(\\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\varepsilon (t)\\right)$ Thus, using SGD with momentum, we end up with the following update formula.", "$\\begin{aligned}\\hat{V}_{t+1} &= m\\hat{V}_{t} + \\nabla _{\\mathbf {Q}(W_t)} \\hat{\\mathcal {L}} \\\\W_{t+1}^{\\prime } &= W_{t}^{\\prime } - \\eta \\hat{V}_{t+1}\\end{aligned}$ We can expand Eq.", "REF using telescoping series and obtain the estimated accumulation using true accumulation and a combination of Gaussian noise samples.", "$\\hat{V}_T = \\sum _{t=0}^T m^t \\cdot \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\sum _{t=0}^T \\varepsilon (t) \\cdot m^t = V_T + \\sum _{t=0}^T \\varepsilon (t) \\cdot m^t$ Assuming each batches are drawn independently at random (in reality, this approximation become more accurate when $N \\gg B$ and $N \\rightarrow \\infty $ ).", "Under this assumption and suppose we are using gradient tempering in Eq.", "4 (here, we let the noise $\\varepsilon (t)$ to automatically include the tempering term we manually added.", "In general, and following derivations, this will represent the mini-batch noise without tempering), the term $\\sum _{t=0}^T \\varepsilon (t) \\cdot m^t$ is then an independent sum of Gaussian noises, each has mean 0 and variance of $\\Sigma (W_t^{\\prime })/B + \\gamma _{Q_t}^2 T_{Q_t}$ .", "Thus, if we assume the variance of $\\varepsilon (t)$ at each step does not change too much, the decaying sum of noise term has total variance of: $\\begin{aligned}\\sum _{t=0}^T m^{2t}\\left(\\Sigma (W_t^{\\prime })/B + \\gamma _{Q_t}^2 T_{Q_t}\\right) &\\le \\sum _{t=0}^\\infty m^{2t}\\left( \\Sigma (W_t^{\\prime })/B + \\gamma _{Q_t}^2 T_{Q_t} \\right)\\\\&\\approx \\left( \\Sigma (W_T^{\\prime })/B + \\gamma _{Q_T}^2 T_{Q_T} \\right) \\sum _{t=0}^T m^{2t}\\\\&= \\frac{1}{1-m^2}\\left( \\Sigma (W_T^{\\prime })/B + \\gamma _{Q_T}^2 T_{Q_T} \\right)\\end{aligned}$ In this cases, the momentum in SGD is effectively serving as a scaling factor depending on the momentum coefficient $m$ .", "It simultaneously scales both the gradient noise due to mini-batch sampling and quantization-error dependent noise which we introduced." 
], [ "Relating gradient tempering to quantization function", "To show that our choice $\\gamma _Q$ and $T_Q$ in §3.2 results in the quantization function as in Eq.", "7, we assume that the hyperparameters $k$ and $c$ are choosen so that when tempering is active, the quantization error is relatively small (due to step-size parameter $s$ converging to a relatively small value).", "We further assume that the gradient noise at step $t$ , $\\varepsilon (t)$ , has similar distribution at $\\mathbf {Q}(W_t)$ and $\\tilde{\\mathbf {Q}}(W_t)$ .", "This allows us to use first order Taylor expansion to arrive at the following approximation: $\\begin{aligned}\\nabla _{\\tilde{\\mathbf {Q}}(W_t)}\\hat{\\mathcal {L}} &= \\nabla _{\\tilde{\\mathbf {Q}}(W_t)}\\mathcal {L} + \\varepsilon (t)\\\\&\\approx \\left[ \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\nabla ^2_{\\mathbf {Q}(W_t)}\\mathcal {L} \\cdot \\left(\\tilde{\\mathbf {Q}}(W_t) - \\mathbf {Q} (W_t)\\right) \\right] + \\varepsilon (t)\\\\&= \\left[ \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\nabla ^2_{\\mathbf {Q}(W_t)}\\mathcal {L} \\cdot \\left(c\\exp (-k|\\mathbf {Q}(W_t) - W_t |) \\sqrt{|\\mathbf {Q}(W_t) - W_t |} \\tilde{\\varepsilon }_\\Phi \\right) \\right] + \\varepsilon (t)\\end{aligned}$ Here the term $c\\exp (-k|\\mathbf {Q}(W) - W |)$ and $\\sqrt{|\\mathbf {Q}(W) - W |}$ are diagonal matrixes that control the variance of the noise.", "Thus, we break down the terms further into $\\gamma _Q$ and $T_Q$ with the following: $\\begin{aligned}\\nabla _{\\tilde{\\mathbf {Q}}(W_t)}\\hat{\\mathcal {L}} &\\approx \\left[ \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\nabla ^2_{\\mathbf {Q}(W_t)}\\mathcal {L} \\cdot \\left(c\\exp (-k|\\mathbf {Q}(W_t) - W_t |) \\sqrt{|\\mathbf {Q}(W_t) - W_t |} \\tilde{\\varepsilon }_\\Phi \\right) \\right] + \\varepsilon (t)\\\\&= \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\underbrace{c\\exp (-k|\\mathbf {Q}(W_t) - W_t |) \\nabla ^2_{\\mathbf {Q}(W_t)}\\mathcal {L} }_{\\gamma _Q} \\cdot \\underbrace{\\sqrt{|\\mathbf {Q}(W_t) - W_t |}}_{\\sqrt{T_Q}} \\tilde{\\varepsilon }_\\Phi + \\varepsilon (t)\\\\&=\\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\gamma _Q \\sqrt{T_Q} \\tilde{\\varepsilon }_\\Phi + \\varepsilon (t)\\end{aligned}$ Taking $\\eta $ as learning rate, we can then arrive at the update as in Eq.", "4 using assumption that mini-batch gradient noise $\\varepsilon (t)$ is approximately same at $\\mathbf {Q}(W)$ and $\\tilde{\\mathbf {Q}}(W)$ $\\begin{aligned}W_{t+1} &= W_t - \\eta \\nabla _{\\tilde{\\mathbf {Q}}(W_t)}\\hat{\\mathcal {L}}\\\\&\\approx W_t -\\eta \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\eta \\gamma _Q \\sqrt{T_Q} \\tilde{\\varepsilon }_\\Phi + \\eta \\varepsilon (t)\\\\&= \\underbrace{W_t -\\eta \\nabla _{\\mathbf {Q}(W_t)}\\mathcal {L} + \\eta \\gamma _Q \\sqrt{T_Q} \\tilde{\\varepsilon }_\\Phi + \\sqrt{\\eta T}\\Sigma (W_t)^{1/2}\\varepsilon _\\Phi }_{\\text{Update in Equation 4}}\\end{aligned}$ Notice that this proof uses the first order Taylor expansion to perform estimation.", "However, this estimation can be poor if the noise is large.", "i.e.", "violation of local linearity.", "To make sure this is unlikely to happen in our model, our choice of noise term naturally bounds the variance of the noise by the step-size parameter, $s$ .", "Since the step-size tends to be small when deeper into the training phase as shown in Table REF , linear approximation is assured to perform well." 
], [ "Evaluation on CIFAR10/100", "We performed experiments using ResNet-18 and WideResNet model on the CIFAR-10/100 dataset for the ease of comparison to LSQ and DiffQ.", "We trained the quantized models over 100 epoches with an initial learning rate of 0.01 for the weights.", "The weight decay was set to 1E-4.", "We adopted standard data augmentation techniques, namely random crop and horizontal flip." ], [ "Evaluation on ImageNet", "As for our experiement on ImageNet dataset, images were resized to 256 x 256, then a 224 x 224 crop was selected for training, with horizontal mirroring applied half the time.", "At test time, a 224 x 224 centered crop was chosen.", "We implemented and tested all of our methods in PyTorch.", "In order to have a fair comparison with previous works, we set weights to either 2-, 4-, or 8-bit for all matrix multiplication layers.", "All quantized networks are initialized using weights from a pre-trained full precision model load from the timm library [34] with equivalent architecture before fine-tuning in the quantized space.", "Networks were trained with SGD optimizers, a momentum of 0.9, using a softmax cross entropy loss function, and cosine learning rate annealing without restarts.", "All the networks are trained for 200 epoches, initial learning rate was set to 0.1 for full precision networks, 0.01 for 2-, and 4-bit, 8-bit networks.", "On top of adopting Cosine Annealing scheduling, we also reduce learning rate 10 folds after every 50 epoches as fine tuning, which was mentioned by [13] as the best practice." ], [ "Quantization noise level", "In this section, we provide additional experiment in studying the hyperparameter $c$ in Eq.", "7.", "Results in this table is similar to those in §5.2.", "The best performing $c$ is ranges from $0.2 \\sim 0.4$ .", "Table: Accuracy at different quantization noise levels." 
], [ "Quantization error and step-size parameter at termination", "We also studied the quantization error and step-size parameter after using tempering introduced in Eq.", "7, summarized in table REF and table REF Table: Comparison between on Quantization errors and step-size parameter with or without temperingThere seems to be a slight increase in both the step-size and the quantization error after noise tempering.", "While this might seems to be worrisome, it is not necessarily true that large quantization error directly leads to worse performance in the prediction task.", "Most uniform quantization methods in previous works, such as diffQ [9] and LSQ [10], generally have the element-wise quantization function $\\mathbf {Q}(w)$ with the following form, where $\\lfloor \\cdot \\rceil $ represents the $\\text{round}(\\cdot )$ function: $\\mathbf {Q}(w) = (\\lfloor clip(w/s + b)\\rceil -b)\\cdot s$ The parameter $w,\\ s,\\ b$ may be learned or manually set, but once the training is completed, they are all merely fixed numbers.", "In our case, we set $b = 0$ and leave $s$ and $w$ as trainable parameters.", "One simple observation is that $\\mathbf {Q}(w) = \\mathbf {Q}(\\mathbf {Q}(w))$ .", "The following is the proof when clamping is not happening; the same conclusion still holds when clamping is playing a role: $\\begin{aligned}\\mathbf {Q}(\\mathbf {Q}(w)) &= (\\lfloor \\mathbf {Q}(w)/ s + b\\rceil -b)\\cdot s\\\\&=(\\lfloor (\\lfloor w/s + b\\rceil -b)\\cdot s/s + b\\rceil -b)\\cdot s\\\\&= (\\lfloor \\lfloor w/s + b \\rceil -b + b \\rceil -b)\\cdot s\\\\&= (\\lfloor \\lfloor w/s + b \\rceil \\rceil -b)\\cdot s\\\\&= \\mathbf {Q}(w)\\end{aligned}$ Hence, suppose that $w^*$ is optimal of the quantized model under loss function $\\mathcal {L}$ , $\\mathbf {Q}(w^*)$ is also an optimal under the same loss function because: $\\begin{aligned}&\\quad \\mathcal {L}(\\mathbf {Q}(\\mathbf {Q}(w^*)) + |\\mathbf {Q}(\\mathbf {Q}(w^*)) - \\mathbf {Q}(w^*)|\\cdot n,X)\\\\&= \\mathcal {L}(\\mathbf {Q}(w^*) + 0\\cdot n,X)\\\\&= \\mathcal {L}(\\mathbf {Q}(w^*),X)\\end{aligned}$ Using the fact that multiple $w$ will be rounded to $\\mathbf {Q}(w)$ , as in Fig 2(B), it should be obvious that any $w$ such that $\\mathbf {Q}(w) = \\mathbf {Q}(w^*)$ are also optimal for the quantized model, albeit with non-zero quantization error.", "The only concern raised for larger quantization error is the gradient after STE can be less accurate, which could lead to the model getting trapped at a bad sub-optima.", "We showed in §5.1 that models trained with noise tempering tend to converge to flatter minima.", "Converging to a flatter minima, errors in STE gradient would affect the model less compared to models trained without noise tempering (as explained in §3.1, stuck at sub-optimal values in a flat basin reduces the loss incurred).", "Empirically, we observe models that trained with noise tempering are less likely to lower the step-size.", "Instead, the model attempts to expand weight ranges." 
], [ "Future Direction", "Our experiments demonstrate that our method outperform existing methods with a visible margin.", "Without introducing additional complexity or inducing extra training time, our method seems to bring a “free lunch\" for the extra accuracy gain.", "We hope to extend this quantization model to other tasks such as the language and audio tasks, and generalize the performance gain in ResNet to broader range of architectures like transformers.", "Going forward, we believe it would be beneficial for the research community to adopt the same baselines using standard libraries like the timm package for fair evaluation as is advocated by [34]." ] ]
2212.05603
[ [ "Approximating a flexible beam model in the Loewner framework" ], [ "Abstract The paper develops the Loewner approach for data-based modeling of a linear distributed-parameter system.", "This approach is applied to a controlled flexible beam model coupled with a spring-mass system.", "The original dynamical system is described by the Euler-Bernoulli partial differential equation with the interface conditions due to the oscillations of the lumped part.", "The transfer function of this model is computed analytically, and its sampled values are then used for the data-driven design of a reduced model.", "A family of approximate realizations of the corresponding input-output map is constructed within the Loewner framework.", "It is shown that the proposed finite-dimensional approximations are able to capture the key properties of the original dynamics over a given range of observed frequencies.", "The robustness of the method to noisy data is also investigated." ], [ "Introduction", "Model reduction techniques can be employed to replace a large-scale system with a complex structure (characterized by multidimensional systems of ordinary differential equations and/or partial differential equations), with a much simpler and smaller dynamical system (characterized by few equations with well-understood dynamics).", "In the last decades, there have been many methodologies proposed in this direction; we refer the reader to [2], [32], [24], [9], [3] for more details.", "A viable alternative to using classical model reduction approaches based on single or double-sided projections (that usually require explicit access to a large-scale model) is to use instead data-driven methods.", "These latter do not require explicit access to the large-scale model's structure or matrices.", "We mention here the Loewner framework (LF) [27], Vector fitting (VF) [19], or the AAA algorithm [28].", "When using these, low-order models can be constructed directly from data in the frequency domain (samples of the transfer function).", "Such methods can be viewed as rational approximation tools by means of interpolation (LF), least-squares fit (VF), or a mixed approach (AAA).", "Other data-driven methods that has emerged in recent years are dynamic mode decomposition (DMD) and operator inference (OpInf), which use time-domain snapshots of the state variables and then fit a particular structured model by computing the appropriate matrices (in reduced coordinates).", "Details on DMD can be found in [24], while details on OpInf can be found in [29], [6].", "We consider here the problem of data-driven rational approximation by means of fitting a linear time-invariant (LTI) dynamical system to a set of measurements (in the frequency domain).", "The fitted LTI system is characterized in the state-space by the following equations: ${\\left\\lbrace \\begin{array}{ll}{\\mathbf {E}}\\dot{{\\mathbf {x}}}(t)={\\mathbf {A}}{\\mathbf {x}}(t)+{\\mathbf {B}}u(t),\\\\ y(t) \\ \\hspace{4.2679pt} ={\\mathbf {C}}{\\mathbf {x}}(t)+{\\mathbf {D}}u(t),\\end{array}\\right.", "}$ where $u(t)\\in \\mathbb {R}$ is the input, $y(t)\\in \\mathbb {R}$ is the output, ${\\mathbf {x}}(t)\\in \\mathbb {R}^n$ is the state vector, and the system matrices are ${\\mathbf {A}}, {\\mathbf {E}}\\in {\\mathbb {R}}^{n\\times n},~{\\mathbf {B}},{\\mathbf {C}}^T\\in {\\mathbb {R}}^{n\\times 1}$ .", "The transfer function of (REF ) is given by $H(s) = {\\mathbf {C}}(s{\\mathbf {E}}-{\\mathbf {A}})^{-1}{\\mathbf {B}}+{\\mathbf {D}}$ .", "We refer to [2] for more 
details on various methodologies especially tailored to the reduction of linear systems.", "In recent years, such methods for linear systems have been steadily extended to particular classes of structured linear systems [5], [30], or even to nonlinear structured systems in [16] (without preservation of structure) or in [7] (with preservation of structure).", "The structures treated include distributed parameters, delay terms or integro-differential equations.", "In a data-driven setup, the structure-preserving approach in [33] was proposed.", "Note that analytical representations of transfer functions have been obtained only for particular classes of distributed parameter systems.", "We refer to [11], [1] for surveys of results in this area.", "In the former tutorial article, the authors provide the derivation of a variety of (irrational) transfer functions for systems described by partial differential equations.", "It is also shown that the choice of boundary conditions has an influence on the dynamics and on the locations of poles and zeros.", "In most practical situations, it is desirable to approximate the irrational transfer function by a rational one, for the purpose of controller design.", "The Loewner framework approach was shown to be extremely powerful in data-based control problems for wide classes of finite-dimensional control systems, whose transfer functions are rational [4], [3].", "However, the efficiency of the Loewner framework for distributed-parameter systems (characterized by irrational transfer functions) still remains to be verified, and the present paper aims at filling this gap.", "Preliminary analysis was provided for linear time-delay systems in [34], [26], for fractional-order systems in [10], and for control purposes in [17], [31].", "A recent overview was provided in [22], including, amongst others, rational approximation of the Bessel function, of a hyperbolic sine, and of a vibrating beam model from [11].", "Hence, the LF was studied in the context of approximating infinite-dimensional models of vibrating beams (with finite-dimensional ones).", "However, as far as the authors are aware, this is the first contribution that also takes into account the effects of perturbed data, i.e., under additive Gaussian noise." 
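As background for the data-driven setting used below, the following minimal Python sketch shows how frequency-response samples of a descriptor system with transfer function $H(s) = {\mathbf {C}}(s{\mathbf {E}}-{\mathbf {A}})^{-1}{\mathbf {B}}+{\mathbf {D}}$ can be generated; the matrices here are arbitrary placeholders, not the beam model considered in this paper.

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    E = np.eye(n)                                  # descriptor matrix (placeholder)
    A = -np.diag([1.0, 2.0, 3.0, 4.0]) + 0.1 * rng.standard_normal((n, n))
    B = rng.standard_normal((n, 1))
    C = rng.standard_normal((1, n))
    D = np.zeros((1, 1))

    def H(s):
        # transfer function H(s) = C (sE - A)^{-1} B + D
        return (C @ np.linalg.solve(s * E - A, B) + D).item()

    freqs_hz = np.linspace(0.1, 250.0, 8)          # sampling grid in Hz
    samples = np.array([H(2j * np.pi * f) for f in freqs_hz])
    print(samples)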
], [ "Vibrating beam with attached mass", "Consider the Euler–Bernoulli equation describing the transverse vibrations of a flexible beam of length $l$ : $\\ddot{w}(x,t)+\\frac{EI}{\\rho } w^{\\prime \\prime \\prime \\prime }(x,t)+d \\, \\dot{w}^{\\prime \\prime \\prime \\prime }(x,t)=\\frac{1}{\\rho }\\sum \\limits _{j=1}^k \\psi _j^{\\prime \\prime }(x) u_j,$ where $w(x,t)$ is the beam deflection at point $x\\in [0,l]$ and time $t$ , $E$ is the Young's modulus, $I$ is the area moment of inertia of the cross-section, $\\rho $ is the mass per unit length of the beam, and $d\\ge 0$ is the structural damping coefficient.", "We denote the derivative with respect to time by a dot, while the prime denotes the spatial derivative (i.e., with respect to $x$ ).", "We assume that a mass-spring system (shaker) is attached to the beam at point $x=l_0$ , so that equation (REF ) holds for $x\\in [0,l_0)$ and $x\\in (l_0,l]$ , and the interface condition is imposed at $x=l_0$ : $(m\\ddot{w}+\\varkappa w)\\Big |_{x=l_0}=(EIw^{\\prime \\prime })^{\\prime }\\Big |_{x=l_0-0}-(EIw^{\\prime \\prime })^{\\prime }\\Big |_{x=l_0+0}+u_0.$ The beam is hinged at both ends, which is formalized by the boundary conditions $w\\Big |_{x=0}=w\\Big |_{x=l}=0,\\quad w^{\\prime \\prime }\\Big |_{x=0}=w^{\\prime \\prime }\\Big |_{x=l}=0.$ System (REF )–(REF ) is controlled by the force $u_0$ applied to the shaker at $x=l_0$ and $k$ piezo actuators, whose actions $u_j$ are characterized in terms of shape functions $\\psi _j(x)$ , $j=1, ... ,k$ .", "It is also assumed that $p$ piezo sensors are located at the points $x=l_i$ , $i=1,..,p$ , i.e.", "the system outputs are $y_i(t) = \\left.\\frac{\\partial ^2 w(x,t)}{\\partial x^2}\\right|_{x=l_i},\\quad i=1,...,p.$ The above mathematical model has been presented in [21], [20] for the case without damping; here we take into account the structural damping by introducing the parameter $d$ in (REF )." 
], [ "Computation of the transfer function", "Let $u_0(t)$ , $u_1(t)$ , ..., $u_k(t)$ $(t\\ge 0)$ be inputs of the control system (REF )–(REF ) with zero initial data, denote the Laplace transform of the inputs and outputs by $U_j(s) = \\int _0^{+\\infty } u_j(t) e^{-st}dt,\\quad j=0,1,...,k,$ and $Y_i(s) = \\int _0^{+\\infty } y_i(t) e^{-st}dt,\\quad i=1,...,p,$ respectively.", "After introducing the Laplace transform of $w(x,t)$ with respect to $t$ : $W_1^s(x)=\\int _0^{+\\infty } w(x,t) e^{-st}dt$ , we obtain from (REF ) the following system of ordinary differential equations for $W^s(x)=\\left(W_1^s,W_2^s,W_3^s,W_4^s\\right)^T$ : $\\frac{d}{dx}W^s(x)=A W^s(x) + \\Phi ^s(x), A=\\left(\\begin{array}{cccc}0 & 1 & 0 & 0 \\\\0 & 0 & 1 & 0 \\\\0 & 0 & 0 & 1 \\\\- 4 \\gamma ^4 & 0 & 0 & 0 \\\\\\end{array}\\right),$ with $\\Phi ^s(x)=\\left(\\begin{array}{c}0 \\\\ 0 \\\\ 0 \\\\ \\phi ^s(x)\\end{array}\\right),\\; \\gamma = \\frac{\\alpha \\sqrt{2s}}{2},$ $\\alpha ^4 = \\frac{\\rho }{EI+\\rho d s}>0,\\, \\phi ^s(x) = \\frac{1}{EI+\\rho d s}\\sum _{j=1}^k \\psi _j(x) U_j(s),$ and the $s$ variable is treated as a parameter in (REF ).", "The general solution of (REF ) is represented with the matrix exponential as follows: $W^s(x)=\\left\\lbrace \\begin{array}{ll}e^{xA}\\bar{W}^0 + \\int _0^x e^{(x-y)A}\\Phi ^s(y)dy, & x\\in [0,l_0], \\\\e^{(x-l)A}\\bar{W}^l - \\int _x^l e^{(x-y)A}\\Phi ^s(y)dy, & x\\in (l_0,l],\\end{array}\\right.$ where $e^{xA}=\\left(\\begin{array}{cccc}z_1(x) & z_2(x) & z_3(x) & z_4(x) \\\\-4\\gamma ^4 z_4(x) & z_1(x) & z_2(x) & z_3(x) \\\\-4\\gamma ^4 z_3(x) & -4\\gamma ^4 z_4(x) & z_1(x) & z_2(x) \\\\-4\\gamma ^4 z_2(x) & -4\\gamma ^4 z_3(x) & -4\\gamma ^4 z_4(x) & z_1(x) \\\\\\end{array}\\right),$ $\\begin{aligned}z_1(x)&=\\cosh (\\gamma x)\\cos (\\gamma x),\\\\z_2(x)&=\\frac{\\cosh (\\gamma x) \\sin (\\gamma x)+\\sinh (\\gamma x)\\cos (\\gamma x)}{2\\gamma },\\\\z_3(x)&=\\frac{\\sinh (\\gamma x) \\sin (\\gamma x)}{2\\gamma ^2},\\\\z_4(x)&=\\frac{\\cosh (\\gamma x) \\sin (\\gamma x)-\\sinh (\\gamma x)\\cos (\\gamma x)}{4 \\gamma ^3}.\\end{aligned}$ Formula (REF ) represents the solutions of (REF ) in terms of their boundary values $\\bar{W}^0$ and $\\bar{W}^l$ at $x=0$ and $x=l$ , respectively, From the boundary conditions (REF ), we conclude that $\\bar{W}^0=(0,\\bar{W}^0_2,0,\\bar{W}^0_4)^T,\\; \\bar{W}^l=(0,\\bar{W}^l_2,0,\\bar{W}^l_4)^T.$ To eliminate the parameters $\\bar{W}^0_2$ , $\\bar{W}^0_4$ , $\\bar{W}^l_2$ , $\\bar{W}^l_4$ , we exploit the property that $W^s(x)$ is of class $C^2[0,l]$ together with the interface condition (REF ).", "As a result, we get the following linear algebraic system with respect to $\\bar{W}=(\\bar{W}^0_2, \\bar{W}^0_4, \\bar{W}^l_2,\\bar{W}^l_4)^T$ : $M \\bar{W} = R,$ where Table: NO_CAPTIONand $\\beta = \\frac{s^2 m+\\varkappa }{EI}.$ Thus, the vector-valued function $W^s(x)$ is defined by (REF ) and (REF ) with $\\bar{W}=M^{-1} R$ , and the components of $R=(R_1^s,R_2^s,R_3^s,R_4^s)^T$ are linear combinations of $U_0$ , ..., $U_k$ : $R_i^s = \\sum _{j=0}^k r_{ij}^s U_j,\\;\\; i=1,2,3,4,$ where the coefficient matrix $r^s=(r_{ij}^s)$ can be obtained from (REF ), (REF ).", "Then the function $W_3^s(x)$ , corresponding to the second $x$ -derivative of $w(x,t)$ , is expressed as: $\\begin{aligned}& W_3^s(x) = -4\\gamma ^4 z_4(x)\\bar{W}^0_2 + z_2(x)\\bar{W}_4^0 \\\\ & + \\sum _{j=1}^k \\frac{U_j}{EI+ \\rho d s } \\int _0^x K_j(x,y) dy\\;\\;\\text{for}\\; x\\in [0,l_0],\\end{aligned}$ $\\begin{aligned}& W_3^s(x) = -4\\gamma ^4 
z_4(x-l)\\bar{W}^l_2 + z_2(x-l)\\bar{W}_4^l \\\\ & - \\sum _{j=1}^k \\frac{U_j}{EI+ \\rho d s } \\int _x^l K_j(x,y) dy\\;\\text{for}\\; x\\in (l_0,l],\\end{aligned}$ where $K_j(x,y)=z_2(x-y)\\psi _j^{\\prime \\prime }(x)$ and $(\\bar{W}^0_2, \\bar{W}^0_4, \\bar{W}^l_2,\\bar{W}^l_4)^T=M^{-1}r^s (U_0,U_1,...,U_k)^T$ because of (REF ).", "Thus, at each $x\\in [0,l]$ , the above formulas define $W_3^s(x)$ as a linear combination of $U_0$ , $U_1$ ,..., $U_k$ : $W^s_3(x) = \\sum _{j=0}^k h_j^s(x)U_j,$ with the coefficients $h_j^s(x)$ collected from (REF ).", "Recalling that the output of the considered system is given by (REF ), we summarize the computation of the transfer function in the following lemma.", "Lemma 1 The transfer function of the multi-input multi-output control system (REF )–(REF ) is presented in the form: $H(s) = \\begin{pmatrix}h_0^s(l_1) & h_1^s(l_1) & ... & h_k^s(l_1)\\\\h_0^s(l_2) & h_1^s(l_2) & ... & h_k^s(l_2)\\\\\\vdots & \\vdots & \\ddots & \\vdots \\\\h_0^s(l_p) & h_1^s(l_p) & ... & h_k^s(l_p)\\\\\\end{pmatrix},$ where $h_j^s(x)$ are taken from (REF )." ], [ "Single-input single-output (SISO) case", "Let the system be controlled by the shaker force $u_0$ only and the scalar output signal $y_1(t)$ be available.", "In this particular case, the scalar transfer function $H(s)$ is such that $Y_1(s) = H(s) U_0(s)$ .", "Lemma REF implies the following result in the considered SISO case.", "Lemma 2 Assume that $k=0$ , $p=1$ , and $l_1\\le l_0$ .", "Then the transfer function of the control system (REF )–(REF ) is $H(s) = \\frac{1}{EI}\\left( -4\\gamma ^4 z_4(l_j) M^{-1}_{14} + z_2(l_j) M^{-1}_{24}\\right).$ Here $M^{-1}_{ik}$ are elements of $M^{-1}$ (the matrix $M$ is given in (REF )) and $z_i(x)$ are defined in (REF ).", "For further numerical simulations, we consider the beam actuated by the shaker force only ($k=0$ ) with single output ($p=1$ ) and take the following realistic mechanical parameters [21] (see also [13]): $l=1.905\\,\\text{m},\\; l_0=1.4\\,\\text{m},\\;\\rho _0 = 2700 \\,\\text{kg}/\\text{m}^3,$ $S = 2.25\\cdot 10^{-4} \\text{m}^2,\\; \\rho =\\rho _0 S,\\;E=6.9\\cdot 10^{10}\\,\\text{Pa},$ $I=1.6875\\cdot 10^{-10}\\,\\text{m}^4,\\; m=0.1\\,\\text{kg},\\; \\varkappa =7\\,\\text{N/mm},$ $l_1 = 732.5\\,\\text{mm}.$" ], [ "The Loewner framework for fitting linear time-invariant systems", "In what follows, we provide a brief summary of the LF for fitting linear dynamical systems as in (REF ), from data.", "The starting point for LF is having access to measurements corresponding to the transfer function of the underlying dynamical process, which can be inferred in practice by means of experimental or model-based procedures.", "The data set is given by: $ {\\cal D} = \\lbrace (\\omega _\\ell ;H(\\omega _\\ell ))\\ \\vert \\ \\ell =1,\\ldots ,2k\\rbrace ,$ by means of sampling $H: {\\mathbb {C}}\\rightarrow {\\mathbb {C}}$ is an analytic function (not necessarily rational) on a particular (complex) grid of points $\\omega _\\ell $ 's.", "It is to be noted that data sets with an odd number of measurements can also be accommodated in the LF.", "The first step is to partition the data set in (REF ) into two disjoint subsets, as follows: $\\begin{split}{\\textrm {r}ight \\ data}&: \\ {\\cal D} _R = \\lbrace (\\lambda _j;w_j)\\ \\vert \\ j=1,\\ldots ,k\\rbrace ,~{\\textrm {a}nd}, \\\\{\\textrm {l}eft \\ data}&: \\ {\\cal D} _L = \\lbrace (\\mu _i;v_i) \\ \\vert \\ ~i=1,\\ldots ,k\\rbrace ,\\end{split}$ For simplicity, all points are assumed distinct and also $\\mu _i 
\\ne \\lambda _j$ , for all $1 \\le i, j \\le k$ ; extensions to Hermite interpolation were proposed in [27].", "A typical approach for splitting the data, commonly used in the LF publications, is the “alternate splitting scheme”, described as follows.", "The left and right sample nodes (and points) are chosen so that they are interlacing each other.", "More precisely, for $1 \\le i \\le k$ , we can write: $\\begin{split}\\mu _i &= \\omega _{2i-1}, \\ \\ \\lambda _i = \\omega _{2i},\\\\v_i = H(\\mu _i) = H(&\\omega _{2i-1}), \\ \\ w_i = H(\\lambda _i) = H(\\omega _{2i}).\\end{split}$ We refer the reader to [22] for a more comprehensive account of data partitioning strategies in the Loewner framework; there, the “half-hal” splitting is mentioned together with approaches that split the data based on the magnitude of the data samples.", "The goal is to find a rational function denoted with $\\tilde{H}(s)$ , such that the following interpolation conditions are (approximately) fulfilled: $ \\tilde{H}(\\mu _i)=v_i,~~~\\tilde{H}(\\lambda _j)=w_j.$ In order to accomplish this scope, we first arrange the elements of the original data set $ {\\cal D} $ , partitioned as in (REF ) in matrix format.", "Hence, the Loewner matrix ${\\mathbb {L}}\\in {\\mathbb {C}}^{k\\times k}$ and the shifted Loewner matrix ${{{\\mathbb {L}}_s}}\\in {\\mathbb {C}}^{k\\times k}$ are defined as follows $ {\\mathbb {L}}_{(i,j)}=\\frac{v_i-w_j}{\\mu _i-\\lambda _j}, \\ {{{\\mathbb {L}}_s}}_{(i,j)}=\\frac{\\mu _i v_i-\\lambda _j w_j}{\\mu _i-\\lambda _j},$ while the data vectors ${\\mathbb {V}}, {\\mathbb {W}}^T \\in {\\mathbb {R}}^k$ are given by: $ {\\mathbb {V}}_{(i)}= v_i, \\ \\ {\\mathbb {W}}_{(j)} = w_j,~\\text{for}~i,j=1,\\ldots ,k.$ The Loewner model is hence constructed as follows: ${\\mathbf {E}}=-{\\mathbb {L}},~~ {\\mathbf {A}}=-{{{\\mathbb {L}}_s}},~~ {\\mathbf {B}}={\\mathbb {V}},~~ {\\mathbf {C}}={\\mathbb {W}}.$ The following Sylvester equations are satisfied by the Loewner and shifted Loewner matrices, as shown in [4] (here, ${\\mathbb {1}}_q = \\left[ \\begin{matrix} 1 & \\cdots & 1 \\end{matrix} \\right]^T \\in {\\mathbb {C}}^q$ ): ${\\left\\lbrace \\begin{array}{ll} {\\mathbf {M}}{\\mathbb {L}}- {\\mathbb {L}}\\Lambda = {\\mathbb {V}}{\\mathbb {1}}_k^T - {\\mathbb {1}}_q {\\mathbb {W}}, \\\\{\\mathbf {M}}{{{\\mathbb {L}}_s}}- {{{\\mathbb {L}}_s}}\\Lambda = {\\mathbf {M}}{\\mathbb {V}}{\\mathbb {1}}_k^T - {\\mathbb {1}}_q {\\mathbb {W}}\\Lambda ,\\end{array}\\right.", "}\\vspace{-5.69054pt}$ where ${\\mathbf {M}}= \\text{diag}(\\mu _1,\\cdots ,\\mu _q)$ and $\\Lambda = \\text{diag}(\\lambda _1,\\cdots ,\\lambda _k)$ .", "The following relations expressing the shifted Loewner matrix to the Loewner matrix, in two distinct ways, hold: ${{{\\mathbb {L}}_s}}= {\\mathbb {L}}\\Lambda + {\\mathbb {V}}{\\mathbb {1}}_k^T = {\\mathbf {M}}{\\mathbb {L}}+ {\\mathbb {1}}_q {\\mathbb {W}}.$ Hence, the explicit computation of large Loewner matrices can be avoided by means of computing (approximated, low-rank) solutions of the Sylvester equations in (REF ).", "This can be accomplished, e.g., by means of using optimized and robust numerical tools such as [8].", "Provided that enough data are available, the pencil $({{{\\mathbb {L}}_s}},\\,{\\mathbb {L}})$ is often singular.", "For example, if the data $ {\\cal D} $ in REF were generated from a rational function $H(s)$ corresponding to a minimal LTI system of dimension $n$ , i.e., $H(s) = {\\mathbf {C}}(s{\\mathbf {I}}_n-{\\mathbf {A}})^{-1} {\\mathbf {B}}$ , then this 
corresponds to the case $k > n$ .", "Then, in the perfect setup (no noisy data), one would encounter $k-n$ zero singular values when performing the singular value decomposition (SVD) of the pencil $\\zeta {\\mathbb {L}}- {{{\\mathbb {L}}_s}}$ , where $\\zeta \\in {\\mathbb {C}}$ is chosen to be different than the eigenvalues of matrix ${\\mathbf {A}}$ .", "In such cases, an SVD of augmented Loewner matrices is computed, and the dominating part is selected as: $\\left[{\\mathbb {L}},~{{{\\mathbb {L}}_s}}\\right] \\approx {\\mathbf {Y}}\\widehat{\\Sigma }_{ {r}}\\tilde{{\\mathbf {X}}}^*,~\\left[\\begin{array}{c}{\\mathbb {L}}\\\\ {{{\\mathbb {L}}_s}}\\end{array}\\right] \\approx {\\tilde{{\\mathbf {Y}}}}\\Sigma _{ {r}} {\\mathbf {X}}^*,$ where $\\widehat{\\Sigma }_{ {r}}$ , $\\Sigma _{ {r}}$ $\\in {\\mathbb {R}}^{{{r}}\\times {r}}$ ,  ${\\mathbf {Y}}\\in {\\mathbb {C}}^{k\\times {r}}$ ,${\\mathbf {X}}\\in {\\mathbb {C}}^{k\\times {r}}$ , $\\tilde{{\\mathbf {Y}}}\\in {\\mathbb {C}}^{2k\\times {r}}$ , $\\tilde{{\\mathbf {X}}}\\in {\\mathbb {C}}^{r\\times {2k}}$ and $({\\mathbf {X}})^* \\in {\\mathbb {C}}^{r \\times k}$ denotes the conjugate-transpose of matrix ${\\mathbf {X}}$ .", "This is performed in order to find projection matrices ${\\mathbf {X}}_r, {\\mathbf {Y}}_r \\in {\\mathbb {C}}^{k \\times r}$ , as described in [4].", "Here, $r<n$ represents the truncation index.", "Then, the system matrices corresponding to a projected Loewner model of dimension $r$ can be computed using the truncated singular vector matrices ${\\mathbf {X}}_r$ and ${\\mathbf {Y}}_r$ : $\\begin{split}\\hat{{\\mathbf {E}}} &= -{\\mathbf {X}}_r^*{\\mathbb {L}}{\\mathbf {Y}}_r, \\ \\ \\hat{{\\mathbf {A}}} = -{\\mathbf {X}}_r^*{{{\\mathbb {L}}_s}}{\\mathbf {Y}}_r, \\\\\\hat{{\\mathbf {B}}} &= {\\mathbf {X}}_r^*{\\mathbb {V}}, \\ \\ \\hat{{\\mathbf {C}}} = {\\mathbb {W}}{\\mathbf {Y}}_r,\\end{split}$ and therefore, directly finds a state-space realization corresponding to the reduced-order system of equations ${\\left\\lbrace \\begin{array}{ll}{\\hat{\\textbf {E}}}\\dot{{\\hat{\\textbf {x}}}}(t)={\\hat{\\textbf {A}}}{\\mathbf {x}}(t)+{\\hat{\\textbf {B}}}u(t),\\\\ \\hat{y}(t) \\ \\hspace{5.12149pt}={\\hat{\\textbf {C}}}{\\hat{\\textbf {x}}}(t).\\end{array}\\right.", "}.$ The transfer function of the reduced Loewner model in (REF ) is written as ${\\hat{H}}(s) = {\\hat{\\textbf {C}}}(s {\\hat{\\textbf {E}}}- {\\hat{\\textbf {A}}})^{-1} {\\hat{\\textbf {B}}}$ , and it provides a good approximant to the original transfer function $H(s)$ .", "Then, ${\\hat{H}}(s)$ may be expanded in a pole/zero or pole/residue format.", "These values represent system invariants and can be related to the inherent dynamics.", "It is noted that the state-space realization is not unique, and that is why an extra step is required.", "More implementation details and properties of the LF procedure can be found in [4], [22]." 
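The construction summarized above is compact enough to be sketched in a few lines of Python; the following is a minimal illustration of the alternate splitting, the Loewner and shifted Loewner matrices, and the SVD-based projection of order $r$ described above, applied to an arbitrary irrational toy transfer function (conjugate data for real-valued models and stability post-processing are omitted).

    import numpy as np

    def loewner_rom(s_nodes, H_samples, r):
        mu, lam = s_nodes[0::2], s_nodes[1::2]           # left / right nodes ("alternate" split)
        v, w = H_samples[0::2], H_samples[1::2]          # left / right values
        denom = mu[:, None] - lam[None, :]
        L = (v[:, None] - w[None, :]) / denom            # Loewner matrix
        Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / denom
        Y, _, _ = np.linalg.svd(np.hstack([L, Ls]))      # left singular vectors
        _, _, Xh = np.linalg.svd(np.vstack([L, Ls]))     # right singular vectors
        Yr, Xr = Y[:, :r], Xh[:r, :].conj().T
        E, A = -Xr.conj().T @ L @ Yr, -Xr.conj().T @ Ls @ Yr
        B, C = Xr.conj().T @ v, w @ Yr
        return lambda s: C @ np.linalg.solve(s * E - A, B)

    # toy usage on an irrational (delay-type) transfer function, illustration only
    s = 1j * np.linspace(0.1, 50.0, 200)
    Hr = loewner_rom(s, np.exp(-s) / (s + 1.0), r=12)
    print(Hr(7.3j), np.exp(-7.3j) / (7.3j + 1.0))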
], [ "Analysis on the unperturbed data", "In this section, we present two numerical test cases based on sampling the transfer function explicitly derived in Section .", "We consider the SISO system with the choice of physical parameters (REF ).", "Then the scalar transfer function $H(s)$ in Lemma REF is sampled at the purely imaginary grid points $s=\\omega _\\ell $ , $\\ell =1,\\ldots ,k$ with $k=1000$ , which are equally distributed in the range of physical frequencies from $0 Hz$ to $250 Hz$ .", "The data partitioning scheme (into the left and right disjoint subsets) chosen here is the “alternate” one, previously described in Section .", "We also note that complex conjugated data is added to the process to enforce real-valued models; more details on how this is achieved can be found in [4].", "Two values of the structural damping parameter $d$ in (REF ) are considered for the numerical simulations: $d_1=0.0249$ (“large” damping) and $d_2=0.001$ (“small” damping).", "The case $d=d_1$ is depicted in Figs.", "REF –REF , for which we are fitting a Loewner model of order $r = 20$ (with a rational transfer function).", "Additionally, the case $d=d_2$ is presented in Figs.", "REF –REF ; for this case, we are fitting a Loewner model of order $r = 27$ .", "It should be noted that the Loewner framework does not automatically impose stability; post-processing methods can be applied whenever unstable poles appear, as described in [15].", "We observe that the Loewner model approximates the original transfer function with good accuracy in the whole range of test frequencies (the approximation error of order $10^{-6}$ in Fig.", "REF , and of order $10^{-5}$ in Fig.", "REF ).", "It is also clear that these approximate models preserve the stability property (the maximal real part of $\\lambda _i$ in Fig.", "REF is ${\\textrm {m}ax}({\\textrm {R}e}\\,\\lambda _i) = -5.2780 \\cdot 10^{-1}<0$ , while the maximal real part of $\\lambda _i$ in Fig.", "REF is ${\\textrm {m}ax}({\\textrm {R}e}\\,\\lambda _i) = -2.1187 \\cdot 10^{-2}<0$ ).", "Although the transfer function of the considered distributed parameter function is not rational, the decay of singular values associated with the Loewner matrices in Figs.", "REF and REF seems to indicate the opportunity to enforce rational approximation.", "More precisely, in both cases, a plateau (flat portion of the graph) is observed after a steep decay.", "In Figs.", "REF and REF , we only depict the first 50 singular values (out of 1000, which is the dimension of the Loewner matrices ${\\mathbb {L}}$ and ${{{\\mathbb {L}}_s}}$ ).", "In the first test case, the decay is faster than in the second one.", "The dimensions of the reduced-order models were chosen in accordance with this phenomenon (i.e.", "$r=20$ for the first and $r=27$ for the second).", "However, such a clear and steep decay as seen in Figs.", "REF and REF , is seldom noticed in experimental data.", "In such scenarios, the data can be perturbed, i.e., by means of noise.", "We treat this case below.", "Figure: The decay of the singular values for the augmented Loewner matrices in .Figure: The original data (transfer function samples) vs. the Loewner model fit.Figure: The approximation error.Figure: The poles of the fitted Loewner model, as eigenvalues of pencil (𝐀 ^,𝐄 ^)({\\hat{\\textbf {A}}},{\\hat{\\textbf {E}}}).Figure: The decay of the singular values for the augmented Loewner matrices in .Figure: The original data (transfer function samples) vs. 
the Loewner model fit.Figure: The approximation error.Figure: The poles of the fitted Loewner model, as eigenvalues of pencil (𝐀 ^,𝐄 ^)({\\hat{\\textbf {A}}},{\\hat{\\textbf {E}}})." ], [ "Analysis on the perturbed data (by means of artificial additive Gaussian noise)", "In this subsection, we analyze the robustness of the LF when applied to perturbed (noisy) data.", "A preliminary analysis of such endeavors was reported in [25], [12], [23] and in [35], [14].", "Similarly to the approaches in these publications, we will include additive Gaussian noise into the measurements of the transfer function $H(s)$ from Lemma REF .", "More precisely, for all $k=1000$ previous grid points $s=\\omega _\\ell $ , the new data are, for $\\ell =1,\\ldots ,k$ and $\\nu = 1,\\ldots ,4$ : $H( \\omega _\\ell )[1+\\epsilon ^{(\\nu )}(\\alpha _\\ell +\\imath \\beta _\\ell )].$ Here, the “noise power” $\\epsilon ^{(\\nu )} >0$ is chosen so that $\\epsilon ^{(\\nu )} = 10^{-\\nu }$ , while the values $\\alpha _\\ell $ and $\\beta _\\ell $ are drawn from the standard normal distribution.", "For simplicity, we use the term “noise level $\\nu $ ” (in the subsequent text and plots).", "Also, for the sake of brevity, we will restrict our analysis to the case of a smaller damping coefficient: $d = 0.001$ .", "The first numerical experiment that is shown here is concerned with the decay of singular values for Loewner matrices under the influence of noise.", "As previously pointed out in [25], [18], the effect of noise is generally reflected in the “flattening” of the singular value curve.", "This is especially valid for the “alternate” splitting, and not as much for the “half-half” splitting (as shown in [18]).", "This is precisely the phenomenon observed in Fig.", "REF .", "Additionally, there are a number of dominant singular values that are less prone to perturbations; depending on the noise levels, it is clear that this number decreases with the increase of the noise power.", "For example, when $\\nu =2$ there are 22 singular values that seem to be stagnant.", "This is hence another effect of the noise in data for LF; deciding the order of the fitted model becomes a more challenging task, and one needs to be careful to avoid over-fitting.", "Figure: The decay of singular values for the augmented Loewner matrices for different \"noise levels/powers”.Next we fix the noise level at $\\nu =2$ and record the poles of the fitted Loewner model on the noisy data.", "We compare those with the original poles, and the results are depicted in Fig.", "REF .", "As expected, most of the (dominant) poles seem to be matching well, with the remark that the noisy data introduces spurious poles.", "Figure: The poles of the fitted Loewner models (with and without noise).However, as shown in Fig.", "REF , the effects of the noise (for this level) do not seem to be drastic; the response of the Loewner model fitted to the noisy data faithfully follows the original (unperturbed) data.", "Figure: The original data vs. 
the Loewner model fit (on the noisy data for level ν=2\\nu =2) &\\& the approximation error.It is to be noted that for levels of noise higher than the one used in the previous experiment, i.e., for $\\nu =1$ , the results are significantly less accurate.", "In that case, for some perturbed data sets, the first two dominant peaks are partially or completely missed (as shown in Fig.", "REF ).", "Additionally, it was noticed that the asymptotic stability of the reduced-order Loewner model was sometimes lost (however not in the experiment reported here).", "This behavior is mostly due to the high level of the noise signal added here.", "As reported in [25], for high signal-to-noise scenarios, it is very challenging to extract all the meaningful information from the perturbed data.", "A more thorough numerical analysis based on the so-called stabilization diagrams will be left for future research endeavors.", "Figure: The original data vs. the Loewner model fit (on the noisy data for level ν=1\\nu =1) &\\& the approximation error." ], [ "Conclusion and outlook", "The contribution of this work is twofold.", "First, an analytic construction of the transfer function for an infinite-dimensional flexible structure with the Euler–Bernoulli beam and a spring-mass system has been proposed.", "Second, we have applied the Loewner framework (LF) for data-driven modeling of the considered class of flexible structures on the basis of transfer function measurements.", "The presented case study of the singular value decay for the Loewner pencil clearly indicates a possible choice of the dimension of an acceptable reduced-order model, depending on the damping parameter $d$ .", "Note that our results are not limited to deterministic measurements since the performed study evaluates the effects of noise in the LF for data sets with Gaussian perturbations as well.", "A thorough numerical analysis of the perturbed (noisy) data has been carried out.", "The preliminary results show the robustness of the method to low and moderate levels of noise, but also point out some challenges for the cases in which the data are perturbed with higher noise levels.", "For future research endeavors, we intend to tackle the following open issues: Analyze data from real experimental measurements to supplement the information attained from simulation data.", "Take into account and quantify the effects of measurement noise in the LF for such data sets (using pseudospectra theory [14]).", "Study the decay of the singular values depending on the influence of the damping parameter and on data splitting strategies.", "Extend this study to flexible structures with a different asymptotic distribution of the eigenvalues, e.g., the Timoshenko beam with attached rigid bodies [36].", "Examine the applicability of the Loewner-based reduced-order systems for the control design of the original infinite-dimensional plant." ] ]
2212.05600
[ [ "An overview of maximal distance minimizers problem" ], [ "Abstract Consider a compact $M \\subset \\mathbb{R}^d$ and $l > 0$.", "A maximal distance minimizer problem is to find a connected compact set $\\Sigma$ of the length (one-dimensional Hausdorff measure $\\H$) at most $l$ that minimizes \\[ \\max_{y \\in M} dist (y, \\Sigma), \\] where $dist$ stands for the Euclidean distance.", "We give a survey on the results on the maximal distance minimizers and related problems." ], [ "Introduction", "This work is devoted to solutions of the following maximal distance minimizer problem.", "Problem 1.1 For a given compact set $M \\subset \\mathbb {R}^n$ and $l > 0$ to find a connected compact set $\\Sigma $ of length (one-dimensional Hausdorff measure ${\\mathcal {H}}^1$ ) at most $l$ that minimizes $\\max _{y \\in M} \\mathrm {dist}\\,(y, \\Sigma ),$ where $\\mathrm {dist}\\,$ stands for the Euclidean distance.", "It appeared in a very general from in [3] and later has been studied in [11], [13].", "A maximal distance minimizer is a solution of Problem REF .", "Such sets can be considered as networks of radiating Wi-Fi cables with a bounded length arriving to each customer (for the set $M$ of customers) at the distance $r$ , where such $r$ is the smallest possible." ], [ "Class of problems", "Maximal distance minimizers problem could be considered as a particular example of shape optimization problem.", "A shape optimization problem is a minimization problem where the unknown variable runs over a class of domains; then every shape optimization problem can be written in the form $\\min F(\\Sigma ) : \\Sigma \\in A$ where $A$ is the class of admissible domains and $F()$ is the cost function that one has to minimize over $A$ .", "So for a given compact set $M$ and positive number $l\\ge 0$ let the admissible set $A$ be a set of all closed connected set $\\Sigma ^{\\prime }$ with length constraint ${\\mathcal {H}}^1(\\Sigma ^{\\prime }) \\le l$ ; and let cost function be the energy $F_M(\\Sigma )=\\max _{y \\in M} \\mathrm {dist}\\,(y,\\Sigma )$ ." ], [ "Dual problem", "Define the dual problem to Problem REF as follows.", "Problem 1.2 For a given compact set $M \\subset \\mathbb {R}^d$ and $r > 0$ to find a connected compact set $\\Sigma $ of the minimal length (one-dimensional Hausdorff measure ${\\mathcal {H}}^1$ ) such that $\\max _{y \\in M} \\mathrm {dist}\\,(y, \\Sigma ) \\le r.$ In a nondegenerate case (i.e.", "for $F_M(\\Sigma )>0$ ) the strict and dual problems have the same sets of solutions for the corresponding $r$ and $l$ (see [13]) and hence an equality is reached." 
], [ "The first parallels with average distance minimizers problem", "Maximal distance minimizers problem is similar to another shape optimization problem: average distance minimizers problem (see the survey of Antoine Lemenant [10]) and it seems interesting to compare the known results and open questions concerning these two problems.", "In the average distance minimizers problem's statement the admissible set $A$ is the same as in Maximal distance minimizers problem, but the function $F(\\Sigma _a)$ is defined as $\\int _{M} A(\\mathrm {dist}\\,(y,\\Sigma _a)) d\\phi (x) $ where $A: \\mathbb {R}^+\\rightarrow \\mathbb {R}^+$ is a nondecreasing function and $\\phi ()$ is a finite nonnegative measure with compact nonempty support in $\\mathbb {R}^n$ .", "Minimization problems for average distance and maximum distance functionals are used in economics and urban planning with similar interpretations.", "If it is required to find minimizers under the cardinality constraint $\\sharp \\Sigma \\le k$ , instead of the length and the connectedness constraints, where $k \\in \\mathbb {N}$ is given and $\\sharp $ denotes the cardinality, then the corresponding problems are referred to as optimal facility location problems." ], [ "Notation", "For a given set $X \\subset \\mathbb {R}^n$ we denote by $\\overline{X}$ its closure, by $\\mathrm {Int}\\,(X)$ its interior and by $\\partial X$ its topological boundary.", "Let $B_\\rho (O)$ stand for the open ball of radius $\\rho $ centered at a point $O$ , and let $B_\\rho (T)$ be the open $\\rho $ -neighborhood of a set $T$ i.e.", "$B_\\rho (T) := \\bigcup _{x\\in T} B_\\rho (x)$ (in other words $B_\\rho (T)$ is a Minkowski sum of a ball $B_\\rho $ centered in the origin and $T$ ).", "Note that the condition $\\max _{y \\in M} \\mathrm {dist}\\,(y, \\Sigma ) \\le r$ is equivalent to $M \\subset \\overline{B_r(\\Sigma )}$ .", "For given points $B$ , $C$ we use the notation $[BC]$ , $[BC)$ and $(BC)$ for the corresponding closed line segment, ray and line respectively." ], [ "Existence. Absence of loops. 
Ahlfors regularity and other simple properties", "For both problems the existence of solutions is proved easily: according to the classical Blaschke and Golab Theorems, the class of admissible sets is compact for the Hausdorff distance and both functionals (the maximal distance and also the average distance) are continuous for this convergence because of the uniform convergence of $x \rightarrow \mathrm {dist}\,(x, \Sigma )$ .", "Definition 1.3 A closed set $\Sigma $ is said to be Ahlfors regular if there exist constants $C_1$ , $C_2>0$ and a radius $\varepsilon _0>0$ such that $C_1\varepsilon \le {\mathcal {H}}^1(\Sigma \cap B_\varepsilon (x))\le C_2\varepsilon $ for every $x\in \Sigma $ and $\varepsilon <\varepsilon _0$ .", "In the work [13] Paolini and Stepanov proved the absence of closed loops for maximum distance minimizers and, under general conditions on $\phi $ , the absence of closed loops for average distance minimizers; the Ahlfors regularity of maximum distance minimizers and, under an additional summability condition on $\phi $ , the Ahlfors regularity of average distance minimizers.", "Gordeev and Teplitskaya [8] refine the Ahlfors constants of maximum distance minimizers to the best possible, i.e. show that ${\mathcal {H}}^1(\Sigma \cap B_\varepsilon (x)) = \mathrm {ord}\,_x\Sigma \cdot \varepsilon + o(\varepsilon )$ , where $\mathrm {ord}\,_x\Sigma \in \lbrace 1, 2, 3\rbrace $ .", "The maximal distance minimizer problem and the dual problem have the same sets of solutions (the planar case was proved earlier by Miranda, Paolini and Stepanov in [11]).", "In particular, it implies that maximal distance minimizers must have the maximum available length $l$ .", "Paolini and Stepanov also proved that average distance minimizers (under additional assumptions on $\phi $ ) have the maximum available length.", "In the work [6] the following basic results were shown.", "(i) Let $\Sigma $ be an $r$ -minimizer for some $M$ .", "Then $\Sigma $ is an $r$ -minimizer for $\overline{B_r(\Sigma )}$ .", "(ii) Let $\Sigma $ be an $r$ -minimizer for $\overline{B_r(\Sigma )}$ .", "Then $\Sigma $ is an $r^{\prime }$ -minimizer for $\overline{B_{r^{\prime }}(\Sigma )}$ , where $0 < r^{\prime } < r$ ." ], [ "Local maximal distance minimizers", "Definition 1.4 Let $M \subset \mathbb {R}^n$ be a compact set and let $r > 0$ .", "A connected compact set $\Sigma \subset \mathbb {R}^n$ is called a local maximal distance minimizer if $F_M(\Sigma ) \le r$ and there exists $\varepsilon > 0$ such that for any connected set $\Sigma ^{\prime }$ satisfying $F_M(\Sigma ^{\prime }) \le r$ and $\mathrm {diam}\,(\Sigma \triangle \Sigma ^{\prime }) \le \varepsilon $ the inequality ${\mathcal {H}}^1(\Sigma ) \le {\mathcal {H}}^1(\Sigma ^{\prime })$ holds, where $\triangle $ is the symmetric difference.", "Any maximal distance minimizer is also a local minimizer.", "Usually the properties of maximal distance minimizers are also true for local maximal distance minimizers (see [8])." ], [ "Tangent rays. 
Blow up limits in $\mathbb {R}^n$", "Definition 2.1 We say that the ray $ (ax] $ is a tangent ray of the set $ \Gamma \subset \mathbb {R}^n $ at the point $ x\in \Gamma $ if there exists a sequence of points $ x_k \in \Gamma \setminus \lbrace x\rbrace $ such that $ x_k \rightarrow x $ and $ \angle x_kxa \rightarrow 0 $ .", "In the work [8] it is proved that for every maximal distance minimizer $\Sigma $ at any point of $\Sigma $ the pairwise angles between the tangent rays are at least $2\pi / 3$ .", "Thus every point $x \in \Sigma $ has at most 3 tangent rays of $\Sigma $ .", "In works concerning average distance minimizers the notion of blow up limits is used.", "Santambrogio and Tilli in [14] proved that for any average distance minimizer $\Sigma _a$ the blow up sequence $\Sigma _\varepsilon :=\varepsilon ^{-1} (\Sigma _a \cap B_\varepsilon (x) - x)$ with $x \in \Sigma _a$ converges in $B_1(0)$ (for the Hausdorff distance) to some limit $\Sigma _0(x)$ when $\varepsilon \rightarrow 0$ , and the limit is one of the following (see Fig.", "REF , which is analogous to a picture from [10]), up to a rotation.", "Figure: NO_CAPTION", "It is clear that for maximal distance minimizers blow up limits also exist and are more or less the same: $\Sigma _0$ can be a radius, a diameter, a corner point with the angle between the segments greater than or equal to $2\pi /3$ , or the center of a regular tripod.", "Moreover, in the second and third cases (i.e. when $\psi (x)>0$ ) the point $x$ has to be energetic; see the following definition.", "Definition 2.2 A point $x \in \Sigma $ is called energetic, if for all $\rho >0$ one has $F_{M}(\Sigma \setminus B_{\rho }(x)) > F_{M}(\Sigma ).$", "Moreover, if a point $x$ of a maximal distance minimizer $\Sigma $ is energetic then there exists a point $y \in M$ (maybe not unique) such that $\mathrm {dist}\,(x, y) = r$ and $B_{r}(y)\cap \Sigma =\emptyset $ ; such $y$ is called corresponding to $x$ .", "If a point $x\in \Sigma $ is not energetic then in a sufficiently small neighbourhood it is a center of a regular tripod or a segment (and coincides there with its one-sided tangents).", "A key object in the study of the average distance problem is the pull-back measure of $\mu $ with respect to the projection onto $\Sigma _a$ , where $\Sigma _a$ is a solution of the average distance minimizer problem.", "More precisely, if $\mu $ does not charge the Ridge set (which is defined as the set of all $x \in \mathbb {R}^n$ for which the minimum distance to $\Sigma _a$ is attained at more than one point) of $\Sigma _a$ (this is the case for instance when $\mu $ is absolutely continuous with respect to the Lebesgue measure), then it is possible to choose a measurable selection of the projection multimap onto $\Sigma _a$ , i.e. a map $\pi _{\Sigma _a} : M \rightarrow \Sigma _a$ such that $d(x, \Sigma _a)=d(x, \pi _{\Sigma _a}(x))$ (this map is uniquely defined everywhere except the Ridge set).", "Then one can define the measure $\psi $ as being $\psi (A):=\mu (\pi _{\Sigma _a}^{-1}(A))$ , for any Borel set $A \subset \Sigma _a$ .", "In other words $\psi = \pi _{\Sigma _a} \sharp \mu $ .", "For maximal distance minimizers in $\mathbb {R}^n$ we can define the measure $\psi $ in a similar way, replacing $M$ by $\partial B_r(\Sigma )$ and taking the $(n-1)$ -dimensional Hausdorff measure as $\mu $ (or, accordingly, $\overline{B_r(\Sigma )}$ and the $n$ -dimensional Hausdorff measure).", "Thus Fig.", "REF is true both for 
maximal and average distance minimizers." ], [ "Properties of branching points in $\\mathbb {R}^2$", "It is known at the plane (see [8]) that for every compact set $M$ and a positive number $r$ a maximal distance minimizer can have only finite number of points with 3 tangent rays.", "At the plane it is also known (see [2]) that every average distance minimizer is topologically a tree composed by a finite union of simple curves joining by number of 3.", "Every branching point of a planar maximal distance minimizer should be the center of a regular tripod.", "If $x\\in \\Sigma \\subset \\mathbb {R}^2$ has 3 tangent rays then there exists such a neighbourhood of $x$ in which the minimizer coincides with its tangent rays.", "Id est, there exists such $\\varepsilon >0$ that $\\Sigma \\cap \\overline{B_\\varepsilon (x)}=[Ax]\\cup [Bx]\\cup [Cx]$ where $\\lbrace A,B,C\\rbrace =\\Sigma \\cap \\partial B_\\varepsilon (x)$ and $\\angle AxB = \\angle BxC = \\angle CxA =2\\pi /3$ .", "For planar average distance minimizers it is proved that any branching point admits such a neighbourhood in which three pieces of $\\Sigma $ are $C^{1,1}$ ." ], [ "Continuity of one-sided tangent rays in $\\mathbb {R}^2$", "Definition 2.3 We will say that the ray $ (ax] $ is a one-sided tangent of the set $ \\Gamma \\subset \\mathbb {R}^n $ at the point $ x \\in \\Gamma $ if there exists a connected component $\\Gamma _1$ of $\\Gamma \\setminus \\lbrace x\\rbrace $ with the property that any sequence of points $x_k \\in \\Gamma _1$ such that $x_k \\rightarrow x$ satisfies $\\angle x_kxa \\rightarrow 0$ .", "In this case we will also say that $(ax]$ is tangent to the connected component $\\Gamma _1$ .", "Lemma 2.4 Let $\\Sigma $ be a (local) maximal distance minimizer and let $x \\in \\Sigma $ .", "Let $\\Sigma _1$ be a connected component of $\\Sigma \\setminus \\lbrace x\\rbrace $ with one-sided tangent $(ax]$ (it has to exist) and let $\\bar{x} \\in \\Sigma _1$ .", "For any one-sided tangent $(\\bar{a}\\bar{x}]$ of $\\Sigma $ at $\\bar{x}$ the equality $\\angle ((\\bar{a}\\bar{x}), (ax)) = o_{|\\bar{x}x|}(1)$ holds.", "Let $(\\bar{a}\\bar{x}]$ be a one-sided tangent at $\\bar{x}$ of any connected component of $\\Sigma \\setminus \\lbrace \\bar{x}\\rbrace $ not containing $x$ .", "Then $\\angle ((\\bar{a}\\bar{x}], (ax]) = o_{|\\bar{x}x|}(1)$ .", "For planar average distance minimizers it is proved (see [10]) that away from branching points an average distance minimizer $\\Sigma _a$ is locally at least as regular as the graph of a convex function, namely that the Right and Left tangent maps admit some Right and Left limits at every point and are semicontinuous.", "More precisely, for a given parametrization $\\gamma $ of an injective Lipschitz arc $\\Gamma \\subset \\Sigma _a$ , by existence of blow up limits one can define the Left and Right tangent half-lines at every point $x \\in \\Gamma $ by $T_R(x) := x+\\mathbb {R}^+.\\lim _{h \\rightarrow 0}\\frac{\\gamma (t_0+h)-\\gamma (t_0)}{h}$ and $T_L(x) := x+\\mathbb {R}^+.\\lim _{h \\rightarrow 0}\\frac{\\gamma (t_0-h)-\\gamma (t_0)}{h}.$ Then the following planar theorem for average distance minimizers holds.", "Theorem 2.5 (Lemenant, 2011 [9]) Let $\\Gamma \\subset \\Sigma _a$ be an open injective Lipschitz arc.", "Then the Right and Left tangent maps $x \\rightarrow T_R(x)$ and $x \\rightarrow T_L(x)$ are semicontinuous, id est for every $y_0 \\in \\Gamma $ there holds $\\lim _{y \\rightarrow y_0; y<_\\gamma y_0}T_L(y)=T_L(y_0)$ and $\\lim _{y \\rightarrow y_0; 
y>_\\gamma y_0}T_R(y)=T_R(y_0)$ .", "In addition the limit from the other side exists and we have $\\lim _{y \\rightarrow y_0; y>_\\gamma y_0}T_L(y)=T_R(y_0)$ and $\\lim _{y \\rightarrow y_0; y<_\\gamma y_0}T_R(y)=T_L(y_0)$ .", "An immediate consequence of the theorem is the following corollary: Corollary 2.6 Assume that $\\Gamma \\subset \\Sigma $ is a relatively open subset of $\\Sigma $ that contains no corner points nor branching points.", "Then $\\Gamma $ is locally a $C^1$ -regular curve." ], [ "Planar example of infinite number of corner points", "Recall that each maximal distance minimizer at the plane is a finite union of simple curves.", "These curves should have continuous one-sided tangents but do not have to be $C^1$ : there exists a minimizer with infinite number of points without tangent lines.", "The following example is provided in [6].", "Fix positive reals $r$ and $R$ and let $N$ be a large enough integer.", "Consider a sequence of points $\\lbrace A_i\\rbrace _{i=1}^\\infty $ belonging to circumference $\\partial B_R(0)$ such that $N \\cdot |A_2A_1|=r$ , $|A_{i+1}A_{i+2}| = \\frac{1}{2}|A_iA_{i+1}|$ and $\\angle A_iA_{i+1}A_{i+2} > \\frac{\\pi }{2}$ for every $i \\in \\mathbb {N}$ (see Fig.", "REF ).", "Let $A_\\infty $ be the limit point of $\\lbrace A_i\\rbrace $ .", "We claim that polyline $\\Sigma = \\bigcup _{i=1}^{\\infty } A_iA_{i+1}$ is a unique maximal distance minimizer for the following $M$ .", "Let $V_0 \\in (A_1A_2]$ be such point that $|V_0A_1| = r$ ; say that $A_0 := V_0$ .", "For $i\\in \\mathbb {N}\\cup \\lbrace \\infty \\rbrace $ define $V_i$ as the point satisfying $|V_iA_i|=r$ and $\\angle A_{i-1}A_iV_{i}=\\angle A_{i+1}A_iV_{i}>\\pi /2$ .", "Finally, let $V_{\\infty + 1}$ be such point that $V_{\\infty + 1}A_\\infty \\perp OA_\\infty $ and $|V_{\\infty +1} A_\\infty | = r$ .", "Clearly $M := \\lbrace V_i\\rbrace _{i=0}^{\\infty +1}$ is a compact set.", "Figure: The example of a minimizer with infinite number of corner pointsTheorem 2.7 (Cherkashin–Teplitskaya, 2022 [6]) Let $\\Sigma $ and $M$ be defined above.", "Then $\\Sigma $ is a unique maximal distance minimizer for $M$ ." ], [ "Every $C^{1,1}$ -smooth simple curve is a minimizer", "For planar average distance minimizers Tilli proved in [15] that any $C^{1,1}$ simple curve is a minimizer for some given data.", "The same thing with a similar but much simpler explanations is true for maximal distance minimizers.", "Theorem 2.8 Let $\\gamma $ be a $C^{1,1}$ -curve.", "Then $\\gamma $ is a maximal distance minimizer for a small enough $r$ and $M=\\overline{B_r(\\gamma )}$ ." ], [ "Explicit examples for maximal distance minimizers", "Recall that Theorems REF and REF provide explicit examples." ], [ "Simple examples. Finite number of points and $r$ -neighbourhood. 
Inverse minimizers", "Here we consider Problem REF in a case when $M$ is a finite set.", "Then it is closely related with the following Steiner problem.", "Problem 3.1 For a given finite set $P = \\lbrace x_1,\\dots ,x_n\\rbrace \\subset \\mathbb {R}^n$ to find a connected set ${\\mathcal {S}t}(P)$ with the minimal length (one-dimensional Hausdorff measure) containing $P$ .", "A solution of Problem REF is called Steiner tree.", "Any maximal distance minimizer for any finite set in $\\mathbb {R}^n$ is a finite union of at most $2\\sharp M-1$ segments.", "In this case maximal distance minimizers problem comes down to connecting $r$ -neighborhoods of all the points from $M$ .", "If $\\overline{B_r(A)}$ are disjoint for every $A\\in M$ then a maximal distance minimizer is a Steiner tree connecting points of $\\partial B_r(A)$ , $A\\in M$ .", "The following observations and statements of this paragraph are from the paper [6].", "Remark 3.2 (i) Let $\\Sigma $ be a maximal distance minimizer for some $M$ and $r>0$ .", "Then $\\Sigma $ is a maximal distance minimizer for $\\overline{B_r(\\Sigma )}$ , $r$ .", "(ii) Let $\\Sigma $ be a minimizer for $\\overline{B_r(\\Sigma )}$ and $r>0$ .", "Then $\\Sigma $ is a minimizer for $\\overline{B_{r^{\\prime }}(\\Sigma )}$ and $r^{\\prime }$ , where $0 < r^{\\prime } < r$ .", "Figure: NO_CAPTIONIn all known examples a ${\\mathcal {S}t}$ with $n$ terminals is an $r$ -minimizer for a set $M$ of $n$ points and a small enough positive $r$ if and only if ${\\mathcal {S}t}$ in the unique Steiner tree for its terminals.", "Theorem 3.3 (Cherkashin–Teplitskaya, 2022 [6]) Let ${\\mathcal {S}t}$ be a Steiner tree for terminals $A = (A_1,\\dots , A_n)$ , $A_i \\in \\mathbb {R}^d$ such that every Steiner tree for an $n$ -tuple in the closed $2r$ -neighbourhood of $A$ (with respect to $\\rho $ ) has the same topology as ${\\mathcal {S}t}$ for some positive $r$ .", "Then ${\\mathcal {S}t}$ is an $r$ -minimizer for an $n$ -tuple $M$ and such $M$ is unique.", "Proposition 3.4 Suppose that ${\\mathcal {S}t}$ is a full Steiner tree for terminals $A_1,\\dots , A_n \\in \\mathbb {R}^2$ , which is not unique.", "Then ${\\mathcal {S}t}$ can not be a minimizer for $M$ being an $n$ -tuple of points.", "Fig.", "REF shows that another Steiner tree connecting the vertices of a square becomes an $r$ -minimizer for every positive $r$ .", "Figure: An example to Proposition" ], [ "Circle. 
Curves with big radius of curvature", "Theorem 3.5 (Cherkashin–Teplitskaya, 2018 [5]) Let $r$ be a positive real, $M$ be a convex closed curve with the radius of curvature at least $5r$ at every point, $\\Sigma $ be an arbitrary minimizer for $M$ .", "Then $\\Sigma $ is a union of an arc of $M_r$ and two segments that are tangent to $M_r$ at the ends of the arc (so-called horseshoe, see Fig.", "REF ).", "In the case when $M$ is a circumference with the radius $R$ , the condition $R > 4.98r$ is enough.", "Miranda, Paolini and Stepanov [11] conjectured that all the minimizers for a circumference of radius $R > r$ are horseshoes.", "Theorem REF solves this conjecture with the assumption $R > 4.98r$ ; for $4.98r \\ge R > r$ the conjecture remains open.", "Problem 3.6 Find maximal distance minimizers for a circumference of radius $4.98r>R > r$ .", "At the same time, the statement of Theorem REF does not hold for a general $M$ if the assumption on the minimal radius of curvature is omitted as we show below.", "Define a stadium to be the boundary of the $R$ -neighborhood of a segment.", "By the definition, a stadium has the minimal radius of curvature $R$ .", "Let us show that if $R < 1.75r$ and a stadium is long enough, then there is the connected set $\\Sigma ^{\\prime }$ that has the length smaller than an arbitrary horseshoe and covers $M$ .", "Figure: Horseshoe is not a minimizer for long enough stadium with R<1.75rR < 1.75r.Define $\\Sigma _0$ to be the local Steiner tree depicted in Fig.", "REF .", "Let $\\Sigma ^{\\prime }$ consist of copies of $\\Sigma _0$ , glued at points $A$ and $B$ along the stadium.", "Note that $F_M(\\Sigma ^{\\prime }) \\le r$ by the construction.", "In the case $R < 1.75r$ the length of $\\Sigma _0$ is strictly smaller than $2|AB|$ .", "Thus for a long enough stadium $\\Sigma ^{\\prime }$ has length $\\alpha L + O(1)$ , where $L$ is the length of the stadium and $\\alpha < 2$ is a constant depending on $\\Sigma _0$ and $R$ .", "On the other hand, any horseshoe has length $2L + O(1)$ .", "This example leads to the following problems.", "Problem 3.7 Find the minimal $\\alpha $ such that Theorem REF holds with the replacement of $5r$ with $\\alpha r$ .", "Problem 3.8 Describe minimizers for a given stadium." 
], [ "Rectangle", "Theorem 3.9 (Cherkashin–Gordeev–Strukov–Teplitskaya, 2021 [4]) Let $M = A_1A_2A_3A_4$ be a rectangle, $0 < r < r_0(M)$ .", "Then a maximal distance minimizer has the following topology with 21 segments, depicted in the left part of Fig.", "REF .", "The middle part of the picture contains an enlarged fragment of the minimizer near $A_1$ ; the labeled angles are equal to $\\frac{2\\pi }{3}$ .", "The rightmost part contains a much larger fragment of minimizer near $A_1$ .", "All maximal distance minimizers have length approximately $Per - 8.473981r$ , where $Per$ is the perimeter of the rectangle.", "In fact, every maximal distance minimizer is very close (in the sense of Hausdorff distance) to the one depicted in the picture.", "Figure: The minimizer for a rectangle MM with r<r 0 (M)r < r_0(M).Analogously to the stadium case one can easily show that for some sufficiently small $\\frac{|A_1A_2|}{|A_2A_3|}<1$ and some $r>0$ a minimizer should have another topology than depicted at Fig.", "REF .", "Also one may consider the following relaxation of Problem REF .", "Problem 3.10 Fix a real $a > 2r$ .", "Let $M(l)$ be the union of 2 sides of length $l$ of a rectangle $a \\times l$ and $\\Sigma (l)$ be a minimizer for $M(l)$ .", "Find $s(a) := \\lim _{l \\rightarrow \\infty } \\frac{{\\mathcal {H}}^1(\\Sigma (l))}{l}.$ If $a > 10r$ one may add up $M(l)$ to a stadium and use Theorem REF to get $s(a) = 2$ ." ], [ "Tools", "For the planar problem the notion of energetic points (which is also correct in $\\mathbb {R}^n$ ) is very useful.", "Recall that a point $x \\in \\Sigma $ is called energetic, if for all $\\rho >0$ one has $F_{M}(\\Sigma \\setminus B_{\\rho }(x)) > F_{M}(\\Sigma )$ .", "The set of all energetic points of $\\Sigma $ is denoted by $G_\\Sigma $ .", "Each minimizer $\\Sigma $ can be split into three disjoint subsets: $\\Sigma =E_{\\Sigma }\\sqcup \\mathrm {X}_{\\Sigma }\\sqcup \\mathcal {S}_{\\Sigma },$ where $X_{\\Sigma }\\subset G_\\Sigma $ is the set of isolated energetic points (i.e.", "every $x\\in X_\\Sigma $ is energetic and there is a $\\rho >0$ possibly depending on $x$ such that $B_{\\rho }(x)\\cap G_{\\Sigma }=\\lbrace x\\rbrace $ ), $E_{\\Sigma } := G_{\\Sigma }\\setminus X_{\\Sigma }$ is the set of non isolated energetic points and $S_\\Sigma :=\\Sigma \\setminus G_\\Sigma $ is the set of non energetic points also called the Steiner part.", "Note that it is possible for a (local) minimizer in $\\mathbb {R}^n$ , $n>2$ to have no non-energetic points at all.", "Moreover, in some sense, any (local) minimizer does not have non-energetic points in a larger dimension: Example 4.1 Let $\\Sigma $ be a (local) minimizer for a compact set $M \\subset \\mathbb {R}^n$ and $r > 0$ .", "Then $\\bar{\\Sigma }:=\\Sigma \\times \\lbrace 0\\rbrace \\subset \\mathbb {R}^{n + 1}$ is a (local) minimizer for $\\bar{M} = (M \\times \\lbrace 0\\rbrace ) \\cup (\\Sigma \\times \\lbrace r\\rbrace ) \\subset \\mathbb {R}^{n + 1}$ and $\\mathrm {E}_{\\bar{\\Sigma }} = \\bar{\\Sigma }$ .", "Recall that for every point $x \\in G_\\Sigma $ there exists a point $y \\in M$ (may be not unique) such that $\\mathrm {dist}\\,(x, y) = r$ and $B_{r}(y)\\cap \\Sigma =\\emptyset $ .", "Thus all points of $\\Sigma \\setminus \\overline{B_r(M)}$ can not be energetic and thus $\\overline{\\Sigma \\setminus \\overline{B_r(M)}}$ is so-called Steiner forest id est each connected component of it is a Steiner tree with terminal points on the $\\partial B_r(M)$ .", "At the plane it makes sense to 
define energetic rays.", "Definition 4.2 We say that a ray $ (ax] $ is an energetic ray of the set $ \Sigma $ with a vertex at the point $ x\in \Sigma $ if there exists a non-stabilizing sequence of energetic points $ x_k \in G_\Sigma $ such that $ x_k \rightarrow x $ and $ \angle x_kxa \rightarrow 0 $ .", "Remark 4.3 Let $\lbrace x_k\rbrace \subset G_\Sigma $ and let $x\in E_\Sigma $ be the limit point of $\lbrace x_k\rbrace $ : $x_k \rightarrow x$ .", "By the basic property of energetic points, for every point $x_k \in G_\Sigma $ there exists a point $y_k \in M$ (possibly not unique) such that $\mathrm {dist}\,(x_k, y_k) = r$ and $B_{r}(y_k)\cap \Sigma =\emptyset $ .", "In this case we will say that $y_k$ corresponds to $x_k$ .", "Let $y$ be an arbitrary limit point of the set $\lbrace y_k\rbrace $ .", "Then the set $\Sigma $ does not intersect the $r$ -neighbourhood of $y$ : $B_r(y) \cap \Sigma =\emptyset $ , the point $y$ belongs to $M$ and $y$ corresponds to $x$ .", "Let $[sx] \subset \Sigma $ be a simple curve.", "Let us define $ \mathrm {turn}\,(\breve{[sx]}) $ as the upper limit (supremum) over all sequences of points of the curve: $\mathrm {turn}\,(\breve{[sx]})=\sup _{n\in \mathbb {N}, s\preceq t^1 \prec \dots \prec t^n \prec x} \sum _{i=2}^{n} \widehat{t^i,t^{i-1}},$ where $ t ^ i $ denotes the ray of the one-sided tangent to the curve $ \breve{[st_i]} \subset \breve{[sx[} $ at the point $t_i$ , and $ {t_1, \dots , t_n}$ is a partition of the curve $ \breve{[sx[} $ in the order corresponding to the parameterization, for which $ s $ is the beginning of the curve and $ x $ is the end.", "Here the angle $ \widehat{ (t^i, t^{i+1})} \in [-\pi , \pi [$ between two rays is counted from the ray $ t^i $ to the ray $ t^{i+1} $ ; the positive direction is counterclockwise.", "Let $\breve{sx}$ lie in a sufficiently small neighbourhood of $x$ .", "Then, if $B_r(y(x))\cap \breve{[sx]} = \emptyset $ , it holds that $|\mathrm {turn}\,([sx])| < 2 \pi .$", "This property is the first one which is true in the plane and false in $\mathbb {R}^n$ with $n>2$ , and it is the main difference between the planar and non-planar cases.", "In the plane the turn is a very useful tool; see, for example, the proof of Theorem REF  [5].", "The second main difference between the plane and higher-dimensional Euclidean spaces also concerns angles: in the plane, if one knows the angles $\widehat{t^i,t^{i-1}}$ for $i =2, \ldots , k$ , then one knows the angle $\widehat{t^1,t^k}$ , which is not true for $\mathbb {R}^n$ with $n>2$ ."
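For polygonal curves the turn can be computed directly as the accumulated signed angle between consecutive edges; the following small helper (our own discrete illustration, not code from [5]) makes the definition concrete.

```python
import numpy as np

def polyline_turn(points: np.ndarray) -> float:
    """Discrete analogue of turn(): sum of signed exterior angles between
    consecutive edges of a planar polyline given as an (n, 2) array."""
    edges = np.diff(points, axis=0)
    angles = np.arctan2(edges[:, 1], edges[:, 0])   # direction of each edge
    d = np.diff(angles)
    d = (d + np.pi) % (2 * np.pi) - np.pi           # wrap increments to (-pi, pi]
    return float(d.sum())                           # positive = counterclockwise

# a quarter of a circle turns by about pi/2, regardless of the discretization
t = np.linspace(0.0, np.pi / 2, 200)
arc = np.c_[np.cos(t), np.sin(t)]
print(round(polyline_turn(arc), 3))                 # close to pi/2 = 1.571
```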
], [ "Derivation in the picture", "During this subsection $M$ is a planar convex closed smooth curve with the radius of curvature greater than $r$ .", "Lemma 4.4 (i) Let $x$ be an isolated energetic point of degree 1 (i.e.", "$Q$ is the end of the segment $[QX] \\subset \\Sigma $ ) with unique $y(Q)$ .", "Then $Q$ , $X$ and $y(Q)$ lie on the same line.", "(ii) Let $W$ be an isolated energetic point of degree 2 (i.e.", "$W$ is the end of the segments $[WZ_1]$ and $[WZ_2] \\subset \\Sigma $ ) with unique $y(W)$ .", "Then $\\angle Z_1Wy(W) = \\angle y(W)WZ_2$ .", "The following proposition describes the possible situation to apply some calculus of variation.", "Proposition 4.5 Let $y \\in M$ be a point such that $B_r(y) \\cap \\Sigma = \\emptyset $ and $\\partial {B_r(y)}$ contains energetic points $x_1$ and $x_2$ .", "Define $Y = \\partial B_r(y) \\cap M_r$ .", "Then (i) points $x_1$ and $x_2$ lie on opposite sides of the line $(yY)$ ; (ii) derivatives of length of $\\Sigma $ in neighborhoods of $x_1$ and $x_2$ when moving $y$ along $M$ are equal.", "In [7] the derivative of length of $\\Sigma $ in a neighborhood of $x$ when moving $y$ along $M$ is calculated.", "The derivative depends on the behavior of $\\Sigma $ in the neighborhood of $x$ .", "Since $M$ has large radius of curvature, $\\partial B_r(x)$ intersects $M$ in at most 2 points, so every energetic point has at most 2 corresponding $y$ .", "Thus, we have the following 4 cases." ], [ "1. $x$ has order 1, and there is unique corresponding {{formula:21885887-77c7-4451-9cc3-a76d9af7eb13}} .", "Then the derivative is equal to $\\cos \\alpha ,$ where $\\alpha = \\angle ([xy(x)) , l)$ , $l$ is a tangent ray to $M$ at point $y(x)$ , in the direction of increasing $\\gamma (x)$ ." ], [ "2. $x$ has order 2, and the unique corresponding {{formula:2e26868e-abde-451f-a196-8544d20b92d9}} .", "Since $x$ has order 2, $B_\\varepsilon (x) \\cap \\Sigma = [xz_1]\\cup [xz_2]$ for small enough $\\varepsilon > 0$ .", "Then the derivative is equal to $2\\cos \\alpha \\cos \\frac{\\angle z_1xz_2}{2},$ where $\\alpha = \\angle ([xy(x)) , l)$ , $l$ is a tangent ray to $M$ at point $y(x)$ , in the direction of increasing $\\gamma (x)$ ." ], [ "3. The degree of point $x$ is 1, and there are two distinct points {{formula:aa4ccad1-587e-478b-9a54-383b4d80b7ec}} and {{formula:eeee5b87-f530-48d8-ad7e-79d6f866e750}} .", "Define $\\alpha = \\angle xy_1(x)y_2(x) = \\angle xy_2(x)y_1(x)$ and let $\\delta $ be the angle between $y_1(x)y_2(x)$ and $M$ .", "Let $\\beta $ be the angle between $[zx]$ and the $x$ axis.", "Then the derivative is equal to $\\frac{\\cos (\\alpha + \\delta )\\sin (\\alpha +\\beta )}{\\sin (2\\alpha )}.$" ], [ "4. The degree of point $x$ is 2, and there are two distinct points {{formula:4e818b2e-7731-4617-a29f-1896f9c8630e}} and {{formula:b352107e-5041-4930-a879-0058e89499b2}} .", "The derivative is equal to $\\frac{\\cos (\\alpha + \\delta )}{\\sin (2\\alpha )} (\\sin (\\alpha +\\beta ) + \\sin (\\alpha +\\gamma )),$ where $\\beta $ and $\\gamma $ are the angles between the $x$ axis and the segments $[z_1x]$ and $[z_2x]$ , respectively, $\\alpha $ and $\\delta $ are similar to the previous case.", "If $M$ is piece-wise smooth one can also apply such type of derivation, in particular it is heavily used in the proof of Theorem REF ." 
], [ "Convexity argument", "Suppose that we fix some $M_0 \\subset M$ and consider a (possible infinite) tree $T$ which vertices are encoded by points of $M_0$ .", "Let us pick an arbitrary point from $\\overline{B_r(m)}$ for every $m \\in M_0$ and connect such points by segments with respect to $T$ .", "Consider the length $L$ of such a representation of $T$ ; note that we allow the representation to contain cycles or edges of zero length.", "Then $L$ is a convex function from $(\\mathbb {R}^d)^{M_0}$ to $\\mathbb {R}$ .", "Also if $v,u \\in \\overline{B_r(m)}$ , then $\\alpha v + (1-\\alpha )u$ also lies in $\\overline{B_r(m)}$ .", "It implies that the sets of local and global minimums of $L$ coincide and form a convex set.", "It usually means that $L$ is a unique local minimum.", "This approach allows to show that if one fix a topology of a solution, then Steiner problem has a unique solution." ], [ "$\\Gamma $ -convergence", "$\\Gamma $ -convergence is an important tool in studying minimizers based on approximation of energy.", "For Euclidean space the following definition of $\\Gamma $ -convergence can be used.", "Let $X$ be a first-countable space and $F_{n}$ : $X\\rightarrow \\overline{\\mathbb {R} }$ a sequence of functionals on $X$ .", "Then $F_{n}$ are said to $\\Gamma $ -converge to the $\\Gamma $ -limit $F$ : $X\\rightarrow \\overline{\\mathbb {R} }$ if the following two conditions hold: Lower bound inequality: For every sequence $x_{n}\\in X$ such that $x_n\\rightarrow x$ as $ n\\rightarrow +\\infty $ , $F(x)\\le \\liminf _{{n\\rightarrow \\infty }}F_{n}(x_{n})$ .", "Upper bound inequality: For every $x\\in X$ , there is a sequence $x_{n}$ converging to $x$ such that $ F(x)\\ge \\limsup _{{n\\rightarrow \\infty }}F_{n}(x_{n})$ .", "In the case of maximal distance minimizers for a given compact set $M$ and number $l>0$ we can consider a space $X$ of connected compact sets with one-dimensional Hausdorff measure at most $l$ ; the distance in $X$ is Hausdorff distance (the distance between $A, C \\in X$ is the smallest $\\rho $ such that $A \\subset \\overline{B_\\rho (C)}$ and $C \\subset \\overline{B_\\rho (A)}$ ); for $S\\in X$ let us define $F_n(S) :=F_{M_{n}}(S)=\\max _{y \\in M_n} \\mathrm {dist}\\,(y,S)$ and a sequence $x_n$ in the second condition is a solution of the dual maximal distance minimizer problem for $l>0$ and $M_n$ (id est $x_n$ minimizes $F_n()$ among all points of $X$ ), where a finite set $M_n \\subset M$ is a finite $1/n$ -network of $M$ .", "Clearly, both conditions hold as $F(x)=\\lim _{n \\rightarrow \\infty } x_n$ for every sets $x_n \\rightarrow x$ .", "Thus, as each maximal distance minimizer for a finite set should be a Steiner tree with a finite number of leaves, we get that every maximal distance minimizer is a limit of Steiner trees.", "This result is also proved in [1]).", "A relations of finite Steiner trees and maximal distance minimizer are considered in Section REF ." 
], [ "Penalized form", "Let $M$ be a given compact set.", "Let us consider a problem of minimization $F_M(S)+\\lambda {\\mathcal {H}}^1(S)$ for some $\\lambda >0$ , where $F_M(S)=\\max _{y \\in M} \\mathrm {dist}\\,(y, S)$ among all connected compact sets $S$ .", "We will call this problem $\\lambda $ -penalized.", "Clearly every set $T$ which minimizes $\\lambda $ -penalized problem for some $\\lambda $ is a maximal distance minimizer for a given data $M$ and the restriction of energy $r:=F_M(T)$ .", "Hence the solutions of this problem inherit all regularity properties of maximal distance minimizers.", "As usual in variational calculus on a restricted class, it may happen for a small variation $\\Phi _\\varepsilon (\\Sigma )$ of $\\Sigma $ , that the length constraint ${\\mathcal {H}}^1(\\Phi _\\varepsilon (\\Sigma ))\\le l$ is violated.", "Hence to compute Euler–Lagrange equation associated to the maximal distance minimizers problem a possible way is to consider first the penalized functional $F_M(S)+\\lambda {\\mathcal {H}}^1(S)$ for some constant $\\lambda $ , for which any competitor $\\Sigma $ is admissible without length constraint.", "Hence it is also make sense to consider local penalisation problem: the problem of searching such a connected compact set $S$ that ${\\mathcal {H}}^1(S)+\\lambda F_M(S) \\le {\\mathcal {H}}^1(T)+\\lambda F_M(T)$ for every connected compact $T$ with $\\mathrm {diam}\\,(S \\triangle T)< \\varepsilon $ for sufficiently small $\\varepsilon >0$ .", "The solutions of this problems also inherit properties of local maximal distance minimizers.", "Proposition 5.1 Consider $\\min _{\\Sigma \\text{ compact and connected}} F_M(\\Sigma )+\\lambda ({\\mathcal {H}}^1(\\Sigma )-l)^{+}$ for any constant $\\lambda >1$ .", "Then this problem is equivalent to maximal distance minimizers problem.", "The same as for average distance minimizers (see Proposition 23 in [10]).", "We use the fact that for a connected set $S \\setminus T_\\varepsilon $ if $S$ is a maximal distance minimizer and ${\\mathcal {H}}^1(T_\\varepsilon )=\\varepsilon $ there holds $r-F_M(S \\setminus T_\\varepsilon )\\le \\varepsilon $ ." ], [ "Lower bounds on the length of a minimizer", "The proof of the following folklore inequality can be found, for instance in [12].", "Lemma 5.2 Let $\\gamma $ be a compact connected subset of $\\mathbb {R}^d$ with ${\\mathcal {H}}^1(\\gamma ) < \\infty $ .", "Then ${\\mathcal {H}}^d(\\lbrace x \\in \\mathbb {R}^d: \\mathrm {dist}\\,(x,\\gamma ) \\le t\\rbrace ) \\le {\\mathcal {H}}^1(\\gamma ) \\omega _{d-1}t^{d-1} + \\omega _d t^d,$ where $\\omega _k$ denotes the volume of the unit ball in $\\mathbb {R}^k$ .", "The following corollary is very close to a theorem of Tilli on average distance minimizers [15].", "Corollary 5.3 Let $V$ and $r$ be positive numbers.", "Then for every set $M$ with ${\\mathcal {H}}^d(M) = V$ a maximal distance $r$ -minimizer has the length at least $\\max \\left(0, \\frac{V - \\omega _d r^d}{\\omega _{d-1}r^{d-1}} \\right).$ Theorem REF follows from the fact that for a $C^{1,1}$ -curve and small enough $r$ the inequality in Corollary REF is sharp.", "Let us provide a lower bound on the length of a minimizer in planar case.", "Proposition 5.4 Let $M$ be a planar convex set and $\\Sigma $ is an $r$ -minimizer for $M$ .", "Then ${\\mathcal {H}}^1(\\Sigma ) \\ge \\frac{{\\mathcal {H}}^1(\\partial M) - 2\\pi r}{2}.$" ] ]
2212.05607
[ [ "DOSnet as a Non-Black-Box PDE Solver: When Deep Learning Meets Operator\n Splitting" ], [ "Abstract Deep neural networks (DNNs) recently emerged as a promising tool for analyzing and solving complex differential equations arising in science and engineering applications.", "Alternative to traditional numerical schemes, learning-based solvers utilize the representation power of DNNs to approximate the input-output relations in an automated manner.", "However, the lack of physics-in-the-loop often makes it difficult to construct a neural network solver that simultaneously achieves high accuracy, low computational burden, and interpretability.", "In this work, focusing on a class of evolutionary PDEs characterized by having decomposable operators, we show that the classical ``operator splitting'' numerical scheme of solving these equations can be exploited to design neural network architectures.", "This gives rise to a learning-based PDE solver, which we name Deep Operator-Splitting Network (DOSnet).", "Such non-black-box network design is constructed from the physical rules and operators governing the underlying dynamics contains learnable parameters, and is thus more flexible than the standard operator splitting scheme.", "Once trained, it enables the fast solution of the same type of PDEs.", "To validate the special structure inside DOSnet, we take the linear PDEs as the benchmark and give the mathematical explanation for the weight behavior.", "Furthermore, to demonstrate the advantages of our new AI-enhanced PDE solver, we train and validate it on several types of operator-decomposable differential equations.", "We also apply DOSnet to nonlinear Schr\\\"odinger equations (NLSE) which have important applications in the signal processing for modern optical fiber transmission systems, and experimental results show that our model has better accuracy and lower computational complexity than numerical schemes and the baseline DNNs." 
], [ "Introduction", "Evolutionary partial differential equations arise as mathematical models for evolving in a wide range of fields in science and engineering.", "To simulate the evolution of such a real system by solving the associated evolutionary PDE is typically computationally expansive.", "One engineering application is nonlinear compensation in optical fiber [1], [2], [3], which aims to recover the clean signal from the distorted signal induced by long-distance propagation in fiber.", "Mathematically, the compensation can be obtained by solving an inversed nonlinear Schrödinger equation (NLSE).", "The challenge is how to obtain an accurate solution to the inversed NLSE with low computational complexity to meet the engineering needs.", "One classical numerical approach to solving the inversed NLSE over a long distance is the split-step Fourier method (SSFM) [4], [5], which is based on operator splitting [6].", "In one calculation step, the operator splitting scheme splits the inversed NLSE into linear and nonlinear parts, then combines the analytical solution of each part to form the final solution.", "However, the idea of splitting brings the splitting error controlled by step size.", "A smaller step size yields a smaller splitting error, nevertheless, results in more computation steps thus makes it impractical in real-time implementation [3].", "In the past two decades, various works have been proposed to reduce the computational complexity in SSFM by balancing the step size and splitting errors [7], [8], [9], [10], [11], [12], [13], [14].", "Some works apply well-designed combinations of the linear and nonlinear operators to pursue a larger step size meanwhile maintain an acceptable splitting error [9], [7], [8].", "In modified SSFM (M-SSFM) [8], they introduce a coefficient to shift the position of the nonlinear operator calculation point along with the optimization of linear operator.", "By balancing the combination between the linear and nonlinear operator, M-SSFM reduces the computational cost.", "Similarly, logarithmic SSFM (L-SSFM) [9] achieves high computational efficiency by introducing a logarithmic step-size assignment for the linear and nonlinear operators according to the exponential decay of signal power.", "Another class of methods reduce the computational complexity by modifying the composition inside linear and nonlinear operators of SSFM [10], [11], [14], [12], [13].", "For example, Correlated-DBP (CBP) [11], [14] modifies the original nonlinear operator by adding an additional correlation term with adjacent symbols.", "It considers the physical nature that the nonlinear distortion imprinted on one symbol is related to the pulse broadening of neighboring symbols, thus achieves better computation accuracy with a large step size.", "Besides that, Perturbation BP (PBP) [12], [13] applies the nonlinear perturbation analysis and adds a first-order perturbation term to nonlinear operators.", "Although those works reduce the computational complexity of SSFM, it remains challenging to fulfill the increasing demand for transmission rate and distance requirements in the engineering application.", "The last decade has seen a significant amount of research on solving partial differential equations (PDEs) using deep neural networks (DNNs) [15], [16], [17], [18].", "One of the typical DNN-based PDE solvers is differential operator approximation (DOA) [17], [19], in which the trained DNN serves as a surrogate operator for the PDE and infers the solution at the target time from 
arbitrary initial condition.", "The performance of DOA depends on the bias and variance of models rather than the step size in numerical schemes [20].", "This shows that DNNs can achieve large step-size evolution of PDEs.", "For nonlinear compensation tasks in optical fiber, some works have been proposed using the idea of DOA [21], [22], [23], [24], [25], [26].", "Among them, the “black-box\"-type models mitigate the nonlinear distortion directly from data using large amounts of parameters.", "To control the computational complexity of such models, weight pruning [21], recurrent structure [22], or shallow neural network [23] are used.", "In contrast, the SSFM-based models design the neural network with prior knowledge from the inversed NLSE [24], [25], [26] using much fewer parameters.", "They unroll the linear and nonlinear steps of SSFM as layers in a neural network.", "In the nonlinear step, the nonlinear operator of SSFM is fixed as the activation function in the network.", "Meanwhile, convolution layers with small kernel sizes replace the dense linear operators in SSFM and the parameters of the convolutions are optimized to minimize the loss function.", "A limitation of the SSFM-based models is the predetermined initialization in convolutional layers.", "Usually, those models use the truncation version of the original linear operators of SSFM as the initialization of convolution layers to accelerate the optimization.", "However, the predetermined initialization introduces extra truncation errors during training.", "Moreover, the predetermined initialization pre-specifies the step size of the operator to be approximated by each layer of the network.", "Thus it loses the flexibility to learn the adaptive step size from data and limits the network’s performance.", "In this paper, we develop an adaptive-step-size SSFM-based neural network, Deep Operator Splitting Net (DOSnet), to solve the evolutionary PDEs.", "We introduce the prior knowledge of the PDE in the design of the network by the dynamical structure, Autonomous Flow (Autoflow), which consists of several blocks with the same dimension for the input and output to mimic the evolution of the PDE.", "Compared to the standard neural networks, which transform the inputs into features in higher dimensions by large amounts of parameters, Autoflow has much fewer parameters by restricting the sizes of intermediate outputs.", "Compared to the previous SSFM-based neural networks, we do not use the predetermined initialization that restricts the step size represented by each block.", "Instead, we apply random initialization for the parameters inside Autoflow.", "Thus, each block inside Autoflow learns an adaptive step-size operator.", "Each block of Autoflow is an operator splitting block (OSB) which contains a series of linear layers and the particular nonlinear activation functions from the original PDE.", "This special activation function from the PDE reflects the properties of the PDE and increases the expressive power of the network.", "See Fig.1 for the structure of AutoFlow and OSB of our DOSnet and comparison with the structure of a standard neural network.", "Experimentally, we apply this method to several different classes of PDEs.", "We first validate the Autoflow structure on two types of linear PDEs as toy examples.", "We observe that the transition states inside Autoflow follow the true physical law in the linear models.", "For this observation, we give a mathematical explanation based on the behavior of weights of Autoflow 
during training.", "Then, we test our algorithm on the Allen-Cahn equation.", "Our result shows that DOSnet has high accuracy in predicting the solutions during the long-term evolution.", "Moreover, the intermediate outputs of Autoflow represent states on the true trajectory of the evolution of the system, which cannot be observed in standard neural networks.", "We apply our DOSnet to the nonlinear Schrödinger equation for the engineering application of nonlinear compensation in optical fiber.", "For a single-channel single-polarization 16 QAM system over optical fiber of $20 \\times 80$ km under 100 GBaud/s transmission rate, experimental results show that our model has better accuracy and lower computational complexity than numerical schemes and the baseline DNNs.", "Finally, we briefly review the classical operator splitting method [6], [27].", "In this paper, we focus on the autonomous evolutionary PDEs with decomposable operators.", "It can be generally written as $u_{t} = \\mathcal {L}u + \\mathcal {N}u,$ where $\\mathcal {L}$ and $\\mathcal {N}$ are the linear and nonlinear part of the PDE, respectively.", "With some given initial condition $u\\left(\\mathbf {x},0\\right) = u_0$ , the exact solution can be written as $u(\\mathbf {x},t) = e^{t(\\mathcal {L}+\\mathcal {N})}u_0$ .", "One numerical scheme that utilizes the decomposable property to solve Eq.", "(REF ) is operator splitting [6], [27].", "The key idea of operator splitting is to split Eq.", "(REF ) into two simpler sub-equations: $\\left\\lbrace \\begin{aligned}&u_{t} = \\mathcal {L}u \\\\&u_{t} = \\mathcal {N}u,\\end{aligned}\\right.$ and the two sub-equations are solved in closed forms on small time intervals.", "Specifically, suppose that we want to solve Eq.", "(REF ) up to time $T$ .", "The time interval $[0,T]$ is first divided into sub-intervals $0=t_{0}<t_{1}<\\dots <t_{N}=T$ such that $t_{n}-t_{n-1}=\\tau $ for all $n=1,2,\\dots , N$ .", "On each sub-interval, instead of solving the entire equation Eq.", "(REF ), one can solve the two equations in Eq.", "(REF ) successively.", "Hence the final solution $u(\\mathbf {x},T)$ is expressed as a stack of alternating compositions of linear and nonlinear evolution operators $u(\\mathbf {x},T)\\approx e^{\\tau \\mathcal {N}} e^{\\tau \\mathcal {L}} \\dots e^{\\tau \\mathcal {N}} e^{\\tau \\mathcal {L}} u_{0} ,$ The accuracy of the operator splitting depends on both the arrangement of the linear and nonlinear operators and the temporal step size $\\tau $ .", "For example, the plain splitting $e^{\\tau \\mathcal {N}} e^{\\tau \\mathcal {L}}$ has an error $O(\\tau )$ , while a more symmetric Strang splitting [6] $e^{\\frac{\\tau }{2}\\mathcal {L}} e^{\\tau \\mathcal {N}}e^{\\frac{\\tau }{2}\\mathcal {L}}$ results in an error $O(\\tau ^{2})$ .", "The challenge of the classical operator splitting is how to balance the solution accuracy and computational cost.", "Figure: Architecture of DOSnet as depicted in (a), where network structure is designed by a hierarchy of two level: in the coarse-grained level, an Autoflow structure is used to ensure that each block functions as an autonomous flow with identical input and output dimensions; whereas in the finer level inside each block, a cascade of learnable linear layer with nonlinear layers is imposed whose nonlinear functioning comes from the underlying equation.", "In contract, a standard DNN model such as the continum model of VGG19 as shown in (b) is characterized by completely flexible layers between the input and output: in 
this particular example,layers maps the original input into the higher dimensional feature space (such as 64 dim, 128 dim or higher)." ], [ "Deep Operator-Splitting Neural Network (DOSnet)", "We propose an adaptive-step-size, low computational burden Deep Operator Splitting Net (DOSnet) to solve Eq.", "(REF ) over a long time $T$ .", "The key to our method is a general dynamical architecture we call Autonomous Flow (Autoflow), describing the action of the evolution operators.", "We design this structure to reduce the size of network by restricting the intermediate outputs of network.", "Autoflow consists of multiple learnable blocks called operator splitting block (OSB) that increase the expressive power of DOSnet in modeling the operator of nonlinear PDEs.", "Below we describe our model in detail." ], [ "Dynamical Architecture: Autoflow", "The aim of the proposed Autoflow is to obtain the solution $u(\\mathbf {x},t) \\in \\mathcal {X}$ after a long time $T$ given the initial solution $u(\\mathbf {x},t=0)=u_0 \\in \\mathcal {X}$ , where $\\mathcal {X}$ is a function space of states.", "Instead of solving the Eq.", "(REF ) directly, we employ the learnable operators $\\psi _{\\mathbf {\\theta }}: \\mathcal {X}\\rightarrow \\mathcal {X}$ , parmaeterized by $\\mathbf {\\theta }$ , to approximate the true operator in Eq.", "(REF ).", "The structure of Autoflow is a composition of learnable operators $\\lbrace \\psi _{\\mathbf {\\theta }_i}\\rbrace _{i=1}^{M}$ as shown in Fig.", "REF (a), where $\\psi _{\\mathbf {\\theta }_i}$ is a neural network block with parameters $\\mathbf {\\theta }_i$ and $M$ denotes the number of blocks inside Autoflow.", "The output of Autoflow can be written as, $ \\psi _{\\mathbf {\\theta }_T}\\left(u_0\\right) = \\psi _{\\mathbf {\\theta }_M}\\circ \\psi _{\\mathbf {\\theta }_{M-1}}\\circ \\cdots \\circ \\psi _{\\mathbf {\\theta }_1}\\left(u_0\\right),$ The learnable parameters $\\mathbf {\\theta }= \\lbrace \\mathbf {\\theta }_i\\rbrace _{i=1}^{M}$ inside Autoflow can be optimized via minimizing the mean square loss $ \\mathbf {\\theta }^{\\star } = \\operatornamewithlimits{arg\\,min\\;}_{\\mathbf {\\theta }} \\mathcal {L}\\left(\\mathbf {\\theta }\\right) = \\frac{1}{K}\\sum \\limits _{i=1}^{K}\\left(\\psi _{\\mathbf {\\theta }_T}\\left(u_0^{\\left(i\\right)}\\right)-u^{\\left(i\\right)}\\left(\\mathbf {x},T\\right)\\right)^2,$ where $K$ denotes the total number of data and $u^{\\left(i\\right)}$ is the $i$ th data.", "This Autoflow architecture is inspired by the existence of the evolution flow [29] for PDEs describing physical evolution of states $u\\in \\mathcal {X}$ , where $\\mathcal {X}$ is the function space of states.", "The evolution flow underlying Eq.", "(REF ) describes a family of operators $\\lbrace \\phi ^t: \\mathcal {X}\\rightarrow \\mathcal {X}\\rbrace $ parameterized by $t\\in \\mathbb {R}$ such that for any $u\\in \\mathcal {X}$ , there is $ \\begin{aligned}\\phi ^{0}\\left(u\\right) = u_0,\\quad \\phi ^s\\left(\\phi ^t\\left(u\\right)\\right) = \\phi ^{t+s}\\left(u\\right).\\end{aligned}$ Due to the additive group law shown in Eq.", "(REF ), the true solution at time $T$ , $u\\left(\\mathbf {x},T\\right)$ can be written in the form of $ u\\left(\\mathbf {x},T\\right) = \\phi ^{\\tau _N}\\circ \\phi ^{\\tau _{N-1}}\\circ \\cdots \\circ \\phi ^{\\tau _1}\\left(u_0\\right)=\\phi ^{\\tau _1+\\tau _2+\\cdots +\\tau _N}\\left(u_0\\right) = \\phi ^T\\left(u_0\\right)$ such that $\\tau _1+\\tau _2+\\cdots +\\tau _N = T$ , where $N$ denotes the number 
of operators.", "In a traditional numerical scheme, $N$ is set as a large number; in other words, each operator $\\phi ^{\\tau _i}, i=1,\\cdots , N$ , represents the mapping between two states over a small time step.", "The smaller time step allows the approximation with lower errors.", "However, large $N$ brings extensive computational complexity.", "In contrast, if $N =1$ , the operator over a long time $T$ is highly nonlinear and thus is difficult to approximate directly.", "In this case, although a deep neural network has powerful expressivity, it still needs enormous parameters to approximate $\\phi ^T$ .", "Therefore, the desired network should have a good balance between the number of operators and the size of the networks.", "The advantage of Autoflow is that it provides a balance between computationally extensive numerical schemes and large-scale deep neural networks.", "Specifically, it is designed to mimic the additivity of evolution flow to reduce the size of the neural network.", "One key feature of Autoflow that distinguishes it from standard DNN architectures is that $\\psi _{\\mathbf {\\theta }}$ maps $u$ into a function in the same state space $\\mathcal {X}$ , i.e.", "$\\psi _{\\mathbf {\\theta }}\\left(u\\right)$ has the same dimension as $u$ .", "In a standard DNN, the operator inside each layer ofen maps the input function to a function in some space with different dimension (Fig.", "REF ), and the network approximates the operator between the input state and the target state directly.", "While in an $M$ -layer Autoflow, the output of $\\psi _{\\mathbf {\\theta }_i}, i = 1,\\cdots ,M,$ can be interpreted as an intermediate state of the evolution process, which means that $\\psi _{\\mathbf {\\theta }_i}$ approximates a particular $\\phi ^{\\tau }$ for some $\\tau $ .", "In Autoflow, the evolution time $\\tau $ is implicitly encoded by the parameter $\\mathbf {\\theta }_i$ , and is learned adaptively from data.", "The number of operators $M$ inside AutoFlow can be much smaller than $N$ in Eq.", "(REF ).", "Therefore, $\\psi _{\\mathbf {\\theta }_i}$ learns a larger time-step underlying operator, and leads to lower computational complexity than a traditional numerical scheme with small time-step.", "Furthermore, the design of intermediate output states enables fewer parameters in the network than a standard DNN." 
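To make the Autoflow idea concrete, a minimal PyTorch sketch is given below: a stack of blocks, each mapping a state to a state of the same shape, is composed to approximate the evolution flow. The block internals (a small pair of convolutions plus a user-supplied nonlinear step standing in for $e^{\eta \mathcal {N}}$) are placeholders for the operator splitting blocks described in the next subsection; all class and argument names are ours, and the snippet is an illustrative sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class OSBlock(nn.Module):
    """One learnable block psi_theta with identical input/output shape (1D state).
    `nonlinear_step(u, eta)` should implement the closed-form flow e^{eta*N} of the
    nonlinear part of the PDE and is supplied by the user."""
    def __init__(self, nonlinear_step, kernel_size=21, hidden=16):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(1, hidden, kernel_size, padding=pad)
        self.conv2 = nn.Conv1d(hidden, 1, kernel_size, padding=pad)
        self.eta1 = nn.Parameter(torch.tensor(0.1))   # learnable nonlinear step sizes
        self.eta2 = nn.Parameter(torch.tensor(0.1))
        self.nonlinear_step = nonlinear_step

    def forward(self, u):                              # u: (batch, 1, n)
        u = self.nonlinear_step(self.conv1(u), self.eta1)
        return self.nonlinear_step(self.conv2(u), self.eta2)

class AutoFlow(nn.Module):
    """Composition of M blocks; every intermediate output lives in the same state
    space as the input, mimicking the additivity of the evolution flow."""
    def __init__(self, nonlinear_step, num_blocks=5):
        super().__init__()
        self.blocks = nn.ModuleList(OSBlock(nonlinear_step) for _ in range(num_blocks))

    def forward(self, u):
        for block in self.blocks:
            u = block(u)                               # interpretable intermediate states
        return u

# a PDE-specific closed-form flow would be plugged in here; as a placeholder we use identity
model = AutoFlow(lambda u, eta: u, num_blocks=3)
out = model(torch.randn(8, 1, 200))
print(out.shape)                                       # torch.Size([8, 1, 200])
```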
], [ "Building blocks: Operator Splitting Blocks", "Each $\\psi _{\\mathbf {\\theta }_i}$ inside Autoflow is a single operator splitting block (OSB), which contains a series of linear layers $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }_i}}$ and the special nonlinear activation functions $\\psi _{\\mathcal {N}}$ from the original PDE; see Fig.", "REF (a).", "The $i$ th OSB $\\psi _{\\mathbf {\\theta }_i}$ in Autoflow is, $\\psi _{\\mathbf {\\theta }_i} = \\psi _{\\mathcal {N}_i^n}\\psi _{\\mathcal {L}_{\\mathbf {\\theta }_i}^n}\\cdots \\psi _{\\mathcal {N}_i^2}\\psi _{\\mathcal {L}_{\\mathbf {\\theta }_i}^2}\\psi _{\\mathcal {N}_i^1}\\psi _{\\mathcal {L}_{\\mathbf {\\theta }_i}^1}$ where $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }_i}^{j}}, j=1,\\cdots ,n,$ denote the convolutional layers with the learnable parameters, $\\psi _{\\mathcal {N}_i^{j}} = e^{\\eta _i^{j}\\mathcal {N}}$ are the special nonlinear activations with $\\mathcal {N}$ being the nonlinear operator in Eq.", "(REF ), $\\eta _i^{j}$ are learnable scalars, $n$ is the number of convolutional layers inside one OSB.", "The last convolutional layer in $i$ th OSB $\\psi _{\\mathbf {\\theta }_i}^n$ maps the features to the output with the same size as the input of this OSB.", "We discuss the behavior of our network with only one OSB with one pair of linear layer and nonlinear activation (i.e., n=1).", "For simplicity of notations, below we omit the superscript and subscript of $\\psi _{\\mathbf {\\theta }}$ and $\\psi _{\\mathcal {N}}$ .", "Supposing that we use this network to approximate the evolution from initial state $u_0$ to the solution $u_{T_{1}}$ , the solution at time $T_1$ can be approximated as $ u_{T_1} \\approx \\psi _{\\mathcal {N}}\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}u_{0}.$ Rather than the commonly used activation functions such as ReLU [30] and Sigmoid function [31], we choose $\\psi _{\\mathcal {N}}u = e^{T_1\\mathcal {N}}(u)$ in OSB, which is the closed-form solution of the subequation $\\frac{du}{dt} = \\mathcal {N}u$ with $\\mathcal {N}$ being the nonlinear operator from the PDE (REF ).", "The reason to use this PDE-based activation function is that it reflects the properties of the PDE.", "For example, in the Allen-Cahn equation to be discussed in Section REF , the nonlinear operator $\\mathcal {N}u = F^{\\prime }(u)$ , where $F$ is the double well potential that attains its global minimum value at $u = \\pm 1$ .", "Our PDE-based activation function $\\psi _{\\mathcal {N}}$ guarantees that the fixed points $u=\\pm 1$ are kept after the activation function is applied, which will significantly accelerate the convergence of training.", "However, the nature of splitting cause the splitting errors which are unavoidable by both the traditional numerical method in Eq.", "(REF ) and OSB.", "Unlike a traditional numberical method, OSB can further reduce the splitting error by training.", "In fact, using the Baker–Campbell–Hausdorff (BCH) formula [32], the operator that $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}$ needs to approximate is, $ \\begin{split}e^{-T_1\\mathcal {N}}e^{T_1(\\mathcal {L}+\\mathcal {N})}= e^{T_1\\left(\\mathcal {L}+\\frac{T_1}{2}[\\mathcal {N}, \\mathcal {L}]+\\mathcal {O}(T_1^2)\\right)}\\end{split}$ From Eq.", "(REF ), the error is mainly induced by the Lie bracket $[\\mathcal {N}, \\mathcal {L}] = \\mathcal {N}\\mathcal {L}-\\mathcal {L}\\mathcal {N}$ for a small $T_1$ .", "We discuss the Lie bracket by the following three cases.", "First, if $[\\mathcal {N}, \\mathcal {L}]=0$ , then 
the splitting error vanishes, and the numerical operator splitting can obtain the exact solution with the arbitrary step size.", "However, for most of the PDEs, $[\\mathcal {N}, \\mathcal {L}] \\ne 0$ .", "Secondly, when $[\\mathcal {N}, \\mathcal {L}]$ is linear, then one OSB is good enough to approximate $e^{T_1(\\mathcal {L}+\\mathcal {N})}$ .", "Finally, when $[\\mathcal {N}, \\mathcal {L}]$ is nonlinear, the capacity of the linear layer inside only one OSB is limited for the approximation of $[\\mathcal {N}, \\mathcal {L}]$ .", "Fortunately, Autoflow structure has the stacking of multiple OSBs, thus provide a nonlinear approximation for $[\\mathcal {N}, \\mathcal {L}]$ .", "In summary, inside one OSB, $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}$ is not only used to approximate the linear operator $\\mathcal {L}$ but also reduce the splitting error by providing a linear approximation for $[\\mathcal {N}, \\mathcal {L}]$ .", "The nonlinearity of $[\\mathcal {N}, \\mathcal {L}]$ can be approximate by the stacking of multiple OSBs." ], [ "Straightforward Adaptability", "Besides the task of approximating the solution at a specific time, our proposed network is also flexible and adaptable for some additional tasks.", "The design of intermediate output in Autoflow allows us to replace the default convolutional layers with weight matrices that are consistent with the properties of the PDE.", "For example, in a NLSE, we may replace $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}$ with a learnable unitary matrix [33], [34] to guarantee the stability of the long-time prediction.", "This is validated numerically in Appendix .", "Moreover, for those PDEs satisfying a certain symmetry, $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}$ with symmetric weight matrices can be employed to restrict the intermediate output of Autoflow close to an intermediate state on the actual solution trajectory." 
], [ "Validation of the AutoFlow Structure", "We first validate the proposed Autoflow structure by considering systems without nonlinearity as benchmark problems.", "Example 1: Advection equation.", "Our first example is a standard one dimensional (1D) linear advection equation $ \\left\\lbrace \\begin{array}{lr}u_t + u_x = 0, \\quad x \\in [-\\pi ,\\pi ],\\, t>0, & \\\\u\\left(x,0\\right) = \\sum _{k=1}^m c_k \\sin \\left(kx\\right)+q_k\\cos \\left(kx\\right),&\\\\u\\left(\\pi ,t\\right) = u\\left(-\\pi ,t\\right), &\\end{array}\\right.$ where $c_k, q_k \\sim \\mathcal {N}(0,\\,1)$ with $\\mathcal {N}(0,\\,1)$ being normal distribution and $m=10$ .", "We discretize the spatial domain $ [-\\pi ,\\pi ]$ into 200 uniform grid points.", "In order to predict the behavior of the solution $u$ from 0s to 0.3s, we generate 5000 data pairs of $u\\left(x,0\\right)$ and $u\\left(x,0.3\\right)$ from the analytical solution as the input and ground truth of our network, respectively.", "These 5000 data are divided into training data of 3750 and test data of 1250.", "Example 2: Diffusion equation.", "The second example is a 1D diffusion equation $ u_t = u_{xx}, \\quad x \\in [-\\pi ,\\pi ], \\, t>0,$ with the same initial and boundary conditions as those in Eq.", "(REF ).", "In this example, we examine the evolution of this system from 0s to 0.03s.", "The generation of data is the same as that in Example 1.", "In all two examples, we use a 3-block linear Autoflow.", "Our model is trained using the Adam optimizer through 100 epochs with a learning rate of $10^{-3}$ and $L_2$ regularization of $10^{-4}$ .", "Inside Autoflow, each block contains one convolutional operator of kernel size of 21.", "Constant initialization of weights $\\frac{1}{k}$ is used in our model.", "Results obtained using a 3-block linear DOSnet on these two linear examples are shown in Figs.", "REF and REF .", "We observe that DOSnet has a solid capacity to approximate the solution of the PDEs at a specific time from the results in Fig.", "REF .", "Besides, one key feature that distinguishes DOSnet from the standard DNNs is that the AutoFlow structure allows the linear DOSnet to perform as a numerical operator with a fixed step size.", "From the results in the left column of Fig.", "REF , we observe that the intermediate outputs of DOSnet characterize the transition states at the referenced time levels in the original PDEs.", "The referenced time of the exact solutions is obtained by calculating the $L_2$ losses between the intermediate outputs of the DOSnet and the exact solutions of the original PDE in the whole time trajectory and choosing the corresponding time of the exact solution with the minimum losses.", "Moreover, it shows that the referenced time matched by the intermediate outputs of DOSnet divides the whole time trajectory equally.", "Also, the profiles of the weights in the third column indicate that the weights of different blocks in DOSnet converge to be the same after training.", "Therefore, the same weight inside each block of DOSnet and the intermediate outputs with equal time steps demonstrate that the linear DOSnet performs as a numerical operator with a fixed step size.", "We provide a theoretical analysis for these behaviors of weights in Appendix .", "Figure: Predictions of a 3-block DOSnet (red dashed line) on (a) diffusion equation, (b) advection equation compared with their ground truths (blue solid line) at the target time.Figure: Intermediate outputs and weight profiles of a 3-block DOSnet on (a) diffusion 
equation, (b) advection equation.In the left column, comparisons between outputs from the intermediate blocks of DOSnet (dashed line) and the exact solutions at the referenced time (solid line) are visualized.", "In the right column, the converged weights of each block in DOSnet after training are visualized." ], [ "Application: Allen-Cahn equation", "In this subsection, we apply DOSnet to the Allen-Cahn equation, which is a reaction-diffusion equation that describes the process of phase separation and has been widely used as a phase field model for the interface dynamics [35].", "We consider a two dimensional Allen-Cahn equation with periodic boundary condition, $\\begin{split}&\\partial _t u\\left(\\mathbf {x},t\\right) - \\epsilon ^2 \\nabla ^2 u\\left(\\mathbf {x},t\\right) + [u\\left(\\mathbf {x},t\\right)]^3-u\\left(\\mathbf {x},t\\right) = 0 , \\quad \\mathbf {x}\\in [-1,1]^2,\\,t>0, \\\\&u\\left(\\mathbf {x},0\\right) = u_0\\left(\\mathbf {x}\\right), \\quad \\mathbf {x}\\in [-1,1]^2.\\end{split}$ In our calculations, the initial data $u_0$ is given by a Fourier series whose frequencies are less than 8 with random coefficients sampled from the Gaussian distribution $\\mathcal {N}\\left(0,I\\right)$ , and $\\epsilon $ is a parameters which indicates the width of region between phase separation.", "The obtained results using our DOSnet will be compared with those obtained by using the numerical method of the symmetrized Strang splitting scheme [6].", "See Appendix for the formulation of this numerical scheme being used.", "The numerically \"exact\" solutions are obtained by using this splitting scheme using a fine mesh and small time step.", "We first examine the effect of model structure on accuracy.", "The accuracy of a neural network depends on are two factors: (1) network architecture (DNN or Autoflow), (2) nonlinear activation (ReLU or OSB), out of which Autoflow and OSB are used in our DOSnet.", "Table REF shows the test errors at time $T$ and $2T$ under four configurations that are combinations of the above two factors.", "The results show that the configuration, Autoflow+OSB, achieves the lowest errors in predicting the solutions at time $T$ and $2T$ .", "We further conduct the $2^2$ full factorial design to analyze the results quantitatively and quantify how different model settings affect the network performance.", "Note that the full factorial design is an experiment used to evaluate the effect of factors and interaction between factors on the response variable.", "The factors and corresponding levels are listed in Table REF .", "We assume that the relation between the test error $y$ and each factors can be described by a regression model: $\\log (y) = q_0 + q_Ax_A + q_Bx_B + q_{AB}x_{AB}$ , where $q$ is the effect of the corresponding factor, $x_A, x_B, x_{AB}$ are the levels of factors A (network architecture), B (nonlinear activation), and AB (interaction), respectively, as listed in Table REF .", "The obtained effects at time $T$ and $2T$ are shown in Table REF .", "As the results shown in in Table REF , for the prediction at time $T$ , the interaction factor $AB$ contributes the largest effect $q_{AB} = -1.2480$ to the error $\\log (y)$ .", "That means the configuration with the level $x_{AB} = +1$ should be chosen to reduce the error.", "Therefore, it is necessary to use the combination Autoflow+OSB/ DNN+ReLU in order to give the lowest errors in the prediction at time $T$ .", "Similarly, for the inference at time $2T$ , the nonlinearity with the factor $B$ has the 
most significant effects $q_B = -1.7806$ .", "Therefore, the configuration with level $q_B = +1$ , i.e., OSB, is the best choice for the prediction in time $2T$ .", "By these two conclusions, the Autoflow+OSB combination adopted in our DOSnet is the best configuration.", "Table: Relative errors of four model settings at time TT and time 2T2T.Table: 2 2 2^2 Full factorial design.Table: Effect estimate in the 2 2 2^2 full factorial design.Next, we explore generalization of the four configurations in a longer-term evolution.", "As shown by the pipeline in Fig.REF , our model is trained from $u_0$ to $u_T$ , and then this learnt model is used to iteratively inference the evolution for time steps, i.e., $u_T$ to $u_{2T}$ , $u_{2T}$ to $u_{3T}$ , ......, and $u_{7T}$ to $u_{8T}$ , where $T = 5s$ .", "Then, we record the inference errors at $u_T, u_{2T},\\cdots ,u_{8T}$ .", "The inference errors at next 8 time steps after the trained one are shown in Fig.REF , and their visualization results in Fig.REF .", "From these results, we can observe that DOSnet always achieves the best performance in predicting the true evolution in Allen-Cahn model from $T$ to $8T$ .", "The error of the standard setting DNN+RELU exhibits exponential increasing in the long-term inference (Figure REF (4)).", "While if any one factor in model is changed (Figure REF (1,2,3), the curves of inference errors are linear, with a large drop in error, among which the Autoflow+OSB combination achieves the lowest error $0.022$ even at time $8T$ .", "Figure: Inference pipeline of 8T8 T evolution, T=5sT=5s.Figure: Relative errors at 8 time steps after the training time, with 4 different model settings.Figure: Visualization results of long-time evolution (from 0 to 8T) with 4 model settings, where T is the training time.", "The first row is the ground truth generated by a numerical method (see Appendix ) from states 0 (the initial one) to 8T.", "The other rows show long-time prediction by (1) DOSnet (contains Autoflow and OSB).", "(2) Autoflow + ReLU.", "(3) DNN+OSB.", "(4) DNN+ReLU.We also investigate the time trajectory of the evolving system using our model.", "Since Autoflow naturally guarantees that the intermediate states are interpretable, we would like to examine if these intermediate outputs present true states of the Allen-Cahn evolution, and if they are, what are the time steps between them?", "Thus, we match these intermediate outputs $O_n, n = 0,1,\\cdots , N$ , where N is the number of blocks , within a large amount of numerical results $y\\left(t_n\\right), 0 \\le t_n \\le T,$ by minimizing the errors between them.", "That is, the predicted time $t_n$ for each intermediate output is $t^\\star _n = \\operatornamewithlimits{arg\\,min\\;}_{t_j} \\Vert O_n - y\\left(t_j\\right)\\Vert _2^2, j = 1,\\cdots ,K$ , where $K$ is the number of numerical results.", "The matched results for models with 2-7 blocks obtained by 5 trials are presented in Fig.REF .", "We find that, for the models with few blocks ( 2-5 in Fig.REF (a-d)), the time steps between their intermediate outputs are nearly constant and divide the total evolution time equally.", "While with number of blocks increasing ( 6 and 7 in Fig.REF (e-f)), the total evolution time cannot be divided into shorter time steps.", "We believe that this behavior different from numeral solutions may be due to the difficulty of training in DNN-based model with deeper layers.", "The visualization of intermediate outputs of 5-block DOSnet are shown in Fig.REF .", "Visualization of 
intermediate outputs of models with other number of blocks are shown in the supplementary materials.", "Figure: Boxplot results of time-step matching using 2-7 blocks models by 5 trials.", "The bars show the interquartile range and the red lines denote the medians of 5 trial results.Figure: (a) Visualization of intermediate outputs of 5-block DOSnet.", "(b) The numerical results at the matched time.We also examine the computational complexity, we compared the trainable parameters between 5-block DOSnet and 5-block baseline DNN used in previous experments.", "Under the same kernel size of each convolution, our DOSnet has 19360 trainable parameters, which is much fewer than 96896 parameters of baseline DNN.", "This shows that the Autoflow structure in DOSnet is able to significantly reduce the computational complexity.", "Note that using a traditional numerical scheme, in order to reduce the error, the time step has to be very small, leading to a large number of steps.", "Once it is trained, our network DOSnet is able to describe the dynamics of Allen-Cahn equation with much lower computational complexity.", "Finally, we discuss the effect of number of blocks in DOSnet.", "The default number of the blocks (OSBs) in our model is set to be 5.", "This setting is empirically the best.", "Fig.REF shows the logarithmic relative errors by 5 trials for solving the Allen-Cahn equation for different number of OSBs.", "Each OSB has the same structure: two convolutions and two activation functions, and each convolution has 16 channels with kernel size of 11.", "We observed that the 5-block model achieved smallest average relative error with acceptable variance.", "Figure: Logarithmic relative errors for models with 2-7 blocks by 5 trails for solving the Allen-Cahn equation." ], [ "Application: Nonlinear Schördinger equation", "In this section, we aim to address the nonlinear compensation problem in optical communication by applying DOSnet to solve the NLSE inversely.", "Since being invented by Sir Charles K. 
Kao in the 1960s, the technology of fiber-optic communication has been rapidly developed and has become one of the foundations in the era of information [36].", "One of important applications in the signal processing for the modern optical fiber transmission systems is nonlinear compensation, in which nonlinear Schrödinger equations (NLSE) are solved inversely to recover the original signals at the transmitter from the highly distorted signal at the receiver.", "The distorted signal is caused by four factors during transmission: noise from inline optical amplifier, attenuation, dispersion, and nonlinear effect from the characteristics of fiber which can be described by NLSE.", "In order to recovery the original information of this signal, nonlinear compensation is required.", "More on the optical transmission system are given in Appendix .", "Mathematically, the forward propagation of a signal in the fiber can be described by the NLSE [37], $\\frac{\\partial u}{\\partial x} = -\\frac{\\alpha }{2} u - \\frac{i\\beta }{2}\\frac{\\partial ^{2}u}{\\partial t^{2}} + i\\gamma \\vert u \\vert ^{2}u,$ where the propagation parameters $\\alpha >0, \\beta \\in \\mathbb {R}, \\gamma \\in \\mathbb {R}$ are characteristics of the optical fiber and $i=\\sqrt{-1}$ .", "Here $u$ is a complex-valued function of $(t,x)\\in \\mathbb {R}\\times [0,X]$ , which represents the light field in the fiber whose length is $X$ .", "The simulated distorted signal at the receiver can be obtained by solving Eq.", "(REF ) with a given initial condition $u(t,x=0)=u_{0}(t)$ , which is the signal to be transmitted to a given distance $x$ .", "Notice that in this equation, the optical signal is a function of $t$ and it propagates along $x$ , which indicates the distance of propagation.", "Conversely, the backward propagation, which recoveries the clean signal at the transmitter from the distorted signal at the receiver, can be simulated by solving the NLSE (REF ) inversely, i.e., taking the negative sign of the propagation parameters $\\alpha , \\beta $ , and $\\gamma $ in Eq.", "(REF ).", "Traditional numerical methods [2], [4] for the simulation of backward propagation is based on the operator splitting of linear and nonlinear parts as two sub equations: $&\\frac{\\partial u}{\\partial x} = \\mathcal {L}u = \\frac{\\alpha }{2} u + \\frac{i\\beta }{2}\\frac{\\partial ^{2}u}{\\partial t^{2}}, \\\\&\\frac{\\partial u}{\\partial x} = \\mathcal {N}u = -i\\gamma \\vert u\\vert ^{2}u.", "$ Note that the solution of the linear PDE (REF ) is $u(t,x+\\xi ) = \\sqrt{\\frac{i}{2\\pi \\beta \\xi }}\\exp {\\left(\\frac{\\alpha \\xi }{2}+\\frac{i t^{2}}{2\\beta \\xi }\\right)} * u(t,x) $ , where $*$ denotes the convolution, and $\\xi $ is the distance of propagation.", "It describes the attenuation and dispersion of the signal along the distance of propagation $\\xi $ .", "The solution of the nonlinear PDE () is $u(t,x+\\xi ) = u(t,x)\\exp {(-i\\gamma \\xi \\vert u(t,x)\\vert ^{2})}$ , which gives the nonlinear change of phase rotation in the signal.", "In DOSnet, we use a complex-value convolution with learnable parameters instead of the analytic solution for the linear PDE (REF ).", "The nonlinear activation function in DOSnet follows the solution of the nonlinear PDE () rather than the common-used activation function in the neural network, such as ReLU, sigmoid.", "We examine the performance of our algorithm in nonlinear compensation for optical signal.", "The performance of the recovery is measured by the bit-error-rate (BER), which 
is the ratio of error bits to the total number of transmitted bits at the decision point, and is a commonly used measurement in optical transmission systems.", "A smaller BER indicates a better algorithm.", "The data we used in the experiments are described in Section REF and our model setting is described in Section REF .", "In our experiments, we compare the performance of nonlinear compensation with the classical numerical algorithm, SSFM, under launch powers from $-6$ dBm to 6 dBm.", "A larger launch power indicates a stronger nonlinear effect during transmission.", "When the launch power is low, the nonlinear effect in the NLSE (REF ) is small, and the difficulty of compensation mainly comes from the noise of the inline amplifiers.", "BER results obtained using DOSnet and comparisons with those of SSFM are shown in Fig. REF .", "From the results, we can observe that DOSnet achieves much lower BER than SSFM when the launch powers are large (0-6 dBm), i.e., the nonlinear effect is strong.", "This demonstrates that our method is better at handling the nonlinear interaction effect than traditional numerical methods.", "Note that when the launch power is low, DOSnet gives slightly higher BER than SSFM.", "This is because the linear operators in DOSnet are learned from noisy data, while the results of SSFM are obtained from the deterministic equation (REF ) without considering the noise.", "Thus the deterministic operators of SSFM are more robust to noise.", "Figure: BER (bit error rate) of the results of DOSnet and the multi-step SSFM algorithm for launch powers from -6 dBm to 6 dBm.We also examine the computational complexity in the experiments.", "We compare the results of the 2-block DOSnet (24,008 parameters) with multi-step SSFM, and show the results in Fig. REF .", "It can be seen from the results that our method achieves the same BER as the 80-step SSFM at 0 dBm.", "As the launch power increases, SSFM requires more numerical steps to obtain the same performance as DOSnet.", "With more numerical steps, the performance of SSFM converges, and when the launch power is below 6 dBm, its converged BER is still much higher than that of DOSnet.", "Therefore, DOSnet shows much better BER and lower computational complexity than SSFM from 0 dBm to 6 dBm.", "In Fig. REF , we show some qualitative results of the signals recovered by DOSnet.", "Figure: Signals recovered by DOSnet and the original signals at the transmitter (i.e., the ground truth) at 2 dBm.Finally, we compare the performance of commonly used deep learning models, such as the multilayer perceptron (MLP) [20], VGG16 [28], ResNet [38], and the bidirectional long short-term memory network (Bi-LSTM) [39], with that of DOSnet, and the results are summarized in Table REF .", "The results show that DOSnet achieves the best BER with the fewest model parameters.", "We observe that DOSnet outperforms MLP, a basic neural network, with 62.5 times fewer parameters.", "Moreover, from the results of VGG16 and ResNet18, the standard DNNs with high-dimensional features, we observe that their huge numbers of parameters do not benefit the performance in solving the inversed NLSE.", "Also, the performance of Bi-LSTM, one of the classical recurrent neural networks, is worse than that of DOSnet, even with many more parameters.", "Table: Results of DOSnet compared with baseline DNNs based on the optical transmission system described in Section with 0 dBm launch power.", "We trained those baseline DNNs using the same training data as those of DOSnet.", "M denotes a million." 
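For reference, one split step of the SSFM baseline used in these comparisons can be sketched as follows; the linear sub-step is applied in the frequency domain and the nonlinear sub-step uses the closed-form phase rotation quoted earlier. Sign conventions follow the backward-propagation sub-equations above; the grid sizes and the toy waveform are our own assumptions, and the units must be kept consistent (e.g. time in ps and distance in km for the parameter values quoted in the paper).

```python
import numpy as np

def ssfm_backward_step(u, xi, alpha, beta, gamma, omega):
    """One split step of backward propagation: linear sub-step (gain + dispersion)
    applied in the frequency domain, then the closed-form nonlinear phase rotation.
    `omega` is the angular-frequency grid matching the FFT ordering."""
    U = np.fft.fft(u)
    U *= np.exp((alpha / 2.0 - 1j * beta * omega**2 / 2.0) * xi)   # e^{xi*L}
    u = np.fft.ifft(U)
    return u * np.exp(-1j * gamma * xi * np.abs(u)**2)             # e^{xi*N}

# toy usage on a random complex waveform
n = 4096
dt_ps = 5.0                                     # 2 samples/symbol at 100 GBaud
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt_ps)  # rad/ps, consistent with beta in ps^2/km
u = (np.random.randn(n) + 1j * np.random.randn(n)) * 0.01
for _ in range(20):                             # e.g. one coarse step per 80 km span
    u = ssfm_backward_step(u, xi=80.0, alpha=0.063, beta=-21.68, gamma=1.66, omega=omega)
print(np.abs(u).max())
```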
], [ "Model setting", "Architecture: For the Allen-Cahn equation, the architecture of Autoflow contains five operator splitting blocks (OSBs) by default.", "Each OSB consists of two convolutional layers (hereinafter referred to as conv) and two activation layers (referred to as act), i.e.", "conv-act-conv-act with channel number alternating from $1-16-1$ .", "Each convolutional operator has kernel size of 11.", "The baseline DNN for solving the Allen-Cahn equation has layers: conv-act-[conv-act]*3-ac-conv, where each convolution layer converts the input into 16 channels with kernel size of 11, expect for the last convolution which maps the input into the output of one channel with kernel size of 11.", "For the NLSE, we choose a two-block DOSnet, where each block contains one convolution and one activation.", "Each convolution has 2 channels with kernel size of 3001.", "Optimizer: Adam optimizer is used for training with learning rate 0.0004 in the Allen-Cahn equation and 0.001 in the NLSE.", "The learning rate decays to its 10% for every 30 epochs.", "The batch size is 64 without use batch normalization.", "The weight initialization in networks are chosen as orthogonal initialization in NLSE and as Kaiming uniform initialization in Allen-Cahn equation.", "Network training: All trainings in this paper are based on the deep learning framework Pytorch [40] and all computation are implemented on a machine equipped with an Intel Core i9-10900X CPU and a NVIDIA GeForce RTX 2080 Super GPU." ], [ "Allen-Cahn Equation", "In the Allen-Cahn equation, we generate a dataset of 500 data pairs by Strang splitting [6] with time step $2\\times 10^{-4}$ s. The hyperparameter $\\epsilon $ in Eq.", "(REF ) is 0.02.", "Each data pair contains $u_0\\left(\\mathbf {x}\\right)$ and $u\\left(\\mathbf {x},T\\right)$ , where $T=5s$ for $\\mathbf {x}\\in [-1,1]^2$ and it is discretized into $128\\times 128$ uniform grids." 
], [ "NLSE", "The data we used in NLSE experiment is simulated signals in optical fiber of 1600 km, with transmission rate 100 GBaud/s.", "A sequence of 16-QAM signal contains 96,000 symbols with sampling rate of 4 samples per symbol (sps) is generated at the transmitter.", "Each symbol is a segment of the signal with the same length.", "The signal is transmitted through the optical fiber with 20 spans.", "The parameters in NLSE are $\\alpha =0.063, \\beta =-21.68, \\gamma =1.66$ for the fiber, which is solved by the split-step Fourier method (SSFM) [4], [5] to obtain the training data.", "Noise with 4.5 dB amplifier noise figure is added during the transmission every 80km (see Appendix for details).", "Since the goal is to recover the original signal from the distorted signal, the distorted data at the receiver is the input of our network and the undistorted data at the transmitter is the target for DOSnet is to recover.", "Thus, the data for training DOSnet are two signals with length of 384,000 samples each.", "In practice, each signal is first downsampled to 2 sps and then cropped into 24,000 segments with 6016 samples with stride of 8 samples.", "Inside each data of 6016 samples, the middle 16 samples are valid samples and the remaining 3000 samples on both side of them are used for padding.", "After passing the 2-block DOSnet where each convolutional layer with kernel size of 3001, the output of DOSnet is a predicted recovery data with 16 samples (See Fig.REF ).", "The training and validation set contain 12,000 segments, respectively.", "Figure: Singal preprocessing in NLSE example.", "A long signal after downsampling at receiver contains 192000 samples.", "This signal is cropped into 24000 segments.", "Each segment has the length of 6016 samples (16 samples are valid, the remaining samples are for padding).", "Those segments are the input of DOSnet.", "After passing through DOSnet, only 16 samples are predicted." 
], [ "Conclusions", "This paper proposes the DOSnet, an operator-splitting-based neural network, to solve evolutionary PDEs.", "The structure of the DOSnet contains two levels: on a coarse level, it is a general dynamical architecture Autoflow (autonomous flow) describing the action of the evolution operators, which consists of multiple learnable OSB (operator splitting blocks); and inside each OSB, there is a series of linear layers and particular nonlinear layers based on the nonlinear operator in the PDE to reflect its properties.", "A critical property of DOSnet is that the input and output of each block have the same dimension to mimic the evolution of the PDE.", "This feature distinguishes DOSnet from the standard DNNs: each block of DOSnet models the transition between the states in the physical evolution systems rather than mapping the input into some higher dimensional features.", "Moreover, the design of identical input and output dimensions in DOSnet vastly reduces the parameters of the neural network.", "It makes DOSnet a lightweight neural network that meets the need for efficient simulation in the engineered applications, such as the nonlinear compensation in optical fiber transmission systerms.", "We validate DOSnet on several types of operator-decomposable differential equations.", "For solving the Allen-Cahn equation, experimental results show that DOSnet achieves better accuracy in predicting the solutions than the baseline DNN.", "DOSnet also shows several unique properties that the baseline DNN cannot represent.", "First, DOSnet can `stabilize' the error increasing in the long-term evolution.", "The curve of the generalization errors of the baseline DNN in long-term evolution grows exponentially, while that of the DOSnet represents linearity with a significant drop in errors.", "Moreover, we observe that intermediate outputs of DOSnet can match the time trajectory of the evolving system, and their time steps are nearly constant.", "We also apply DOSnet to solve the inversed NLSE, which has important applications in the nonlinear compensation of optical communication.", "In this application, DOSnet has demonstrated its significant advantages in mitigating nonlinear distortion with low computation cost.", "Compared to the traditional numerical schemes, the SSFM with multiple iteration steps, a 2-block DOSnet can achieve better BER in high launched powers.", "Compared to the standard DNNs with millions of parameters, our 2-block DOSnet has a significantly better BER than theirs with at least $62.5$ times fewer parameters.", "The proposed DOSnet is developed based on operator splitting of the PDEs.", "The results and analysis in this paper also demonstrate that it is much more efficient to utilize a neural network with the help of the physical properties of the underlying PDEs rather than extracting the general high-level features with millions of parameters." ], [ "Acknowledgments", "The work of Yang Xiang was supported in part by HKUST IEG19SC04 and the Project of Hetao Shenzhen-HKUST Innovation Cooperation Zone HZQB-KCZYB-2020083." 
], [ "Error Analysis for Operator Splitting Block (OSB)", "We show error analysis for OSB in details based on the Allen-Cahn equation in main text section REF .", "The Allen-Cahn equation is $\\begin{split}\\partial _t u\\left(\\mathbf {x},t\\right)=&\\mathcal {L}u +\\mathcal {N}u \\\\=&\\epsilon ^2 \\nabla ^2 u\\left(\\mathbf {x},t\\right) + \\left(u\\left(\\mathbf {x},t\\right) - [u\\left(\\mathbf {x},t\\right)]^3\\right), \\quad \\mathbf {x}\\in \\Omega ,\\,t>0.\\end{split}$ As we have shown in main text section REF , the solution of linear and nonlinear parts in Eq.", "(REF ) can be written respectively as, $ u(\\mathbf {x},t+h) = \\mathcal {W}_h u(\\mathbf {x},t) = e^{h\\mathcal {L}}u(\\mathbf {x},t),$ $u(\\mathbf {x},t+h) = \\mathcal {S}_h u(\\mathbf {x},t) = e^{h\\mathcal {N}}u(\\mathbf {x},t),$ where the linear operator $\\mathcal {W}_h = e^{h\\mathcal {L}} $ , and the nonlinear operator $\\mathcal {S}_h=e^{h\\mathcal {N}}$ , with $h$ being the time step.", "We use the second order expansion of $\\mathcal {W}_h, \\mathcal {S}_h$ , and $\\mathcal {S}_h^{-1}$ at $t=0$ , for a small h: $ \\mathcal {W}_h u_0 = u_0 + h\\partial _{xx} u_0 + \\frac{h^2}{2} \\partial _{xxxx} u_0 + \\mathcal {O}(h^3),$ $\\mathcal {S}_h u_0 = u_0 + h (-u_0^3+u_0)+ \\frac{h^2}{2}u_0 (1-4u_0^2+3u_0^4) + \\mathcal {O}(h^3),$ $\\mathcal {S}_{h}^{-1} u_0 = u_0 - h (-u_0^3+u_0)+ \\frac{h^2}{2}u_0 (1-4u_0^2+3u_0^4) + \\mathcal {O}(h^3).$ where $u(x,0)$ is denoted as $u_0$ for simplicity.", "We understand OSB by comparison with the traditional operator splitting algorithm.", "Suppose that one OSB is used to approximate the evolution from $u_0$ to $u_{T_{1}}$ .", "Recall that in the operator splitting algorithm (Eq.", "(REF ) in main text), $u_{T_{1}}$ is estimated by a stack of compositions of $N$ linear and nonlinear operators on $u_0$ , and when $N$ is large enough, the approximation error is small.", "While in OSB, we have $u_{T_1}\\approx \\psi _{\\mathbf {\\theta }}u_0 = \\psi _{\\mathcal {N}_\\alpha }\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}u_{0}$ , where $\\mathbf {\\theta }$ and $\\alpha $ are all the learnable parameters.", "Thus, we have $ u_{T_1} = \\mathcal {S}_{h}\\mathcal {W}_{h}\\dots \\mathcal {W}_{h}\\mathcal {S}_{h}\\mathcal {W}_{h} u_{0} + \\mathcal {O}(Nh^2) \\approx \\psi _{\\mathcal {N}_\\alpha }\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}u_{0}.$ To better analyze the approximation error of the learnable layer $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}$ in the OSB, we set $\\psi _{\\mathcal {N}_\\alpha } = \\overbrace{\\mathcal {S}_{h}\\cdots \\mathcal {S}_{h}}^{N}$ .", "Then for Eq.", "(REF ) to hold, $ \\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}u_0$ can be written as $ \\begin{split}\\centering \\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}u_0 = &\\psi _{\\mathcal {N}_\\alpha }^{-1}\\overbrace{\\mathcal {S}_{h}\\mathcal {W}_{h}\\dots \\mathcal {W}_{h}\\mathcal {S}_{h}\\mathcal {W}_{h}}^{2N} u_{0}\\\\=&\\overbrace{\\mathcal {S}_{h}^{-1}\\cdots \\mathcal {S}_{h}^{-1}}^{N}\\overbrace{\\mathcal {S}_{h}\\mathcal {W}_{h}\\dots \\mathcal {W}_{h}\\mathcal {S}_{h}\\mathcal {W}_{h}}^{2N} u_{0} \\\\=& \\mathcal {W}_{Nh}u_0 - \\mathcal {S}_{h}^{-1}[\\mathcal {S}_h,\\mathcal {W}_{Nh}]\\mathcal {W}_h u_0 - \\mathcal {S}_{h}^{-1}\\mathcal {S}_{h}^{-1}[\\mathcal {S}_h,\\mathcal {W}_{(N-2)h}]\\mathcal {W}_h \\mathcal {S}_{h}\\mathcal {W}_hu_0 \\\\& - \\cdots - \\underbrace{\\mathcal {S}_{h}^{-1}\\cdots \\mathcal {S}_{h}^{-1}}_{N-1}[\\mathcal {S}_h,\\mathcal {W}_{h}]\\underbrace{\\mathcal {W}_h \\cdots 
\\mathcal {S}_{h}\\mathcal {W}_h}_{2N-1} u_0 .\\end{split}$ In Allen-Cahn equation, $\\mathcal {W}_h$ and $\\mathcal {S}_h$ are not commutative, and their Lie bracket can be calculated as $[\\mathcal {S}_h,\\mathcal {W}_h] u_0 = \\mathcal {S}_h\\mathcal {W}_h u_0 - \\mathcal {W}_h\\mathcal {S}_h u_0 = -6 u_0(\\partial _x u_0)^2h^2 + \\mathcal {O}(h^3)$ by Eq.", "(REF ) and Eq.", "(REF ).", "Furthermore, $\\left[[\\mathcal {S}_h,\\mathcal {W}_h],\\mathcal {S}_h^{-1}\\right] = \\mathcal {O}(h^3).$ Therefore, in Allen-Cahn equation, Eq.", "(REF ) gives: $\\begin{split}\\centering \\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}u_0= \\mathcal {W}_{Nh}u_0 - [\\mathcal {S}_h,\\mathcal {W}_h]\\left(\\mathcal {W}_{(N-1)h} + 2\\mathcal {W}_{(N-2)h} +\\cdots (N-1)\\mathcal {W}_h\\right)u_0 +\\mathcal {O}(h^3)\\end{split}$ Note that on the right-hand side (RHS) of Eq.", "(REF ), the first term is linear, while the second term is nonlinear since the term $[\\mathcal {S}_h,\\mathcal {W}_h] = -6 u_0(\\partial _x u_0)^2h^2 + cO(h^3)$ is nonlinear.", "In practice, $\\psi _{\\mathcal {L}_{\\mathbf {\\theta }}}$ in DOSnet is expected to be a fully linear layer.", "Therefore, besides approximating the first linear term $\\mathcal {W}_{Nh}u_0$ on the RHS of Eq.", "(REF ), the linear layer in the DOSnet provides a linear approximation for the second nonlinear term on the RHS of Eq.", "(REF )." ], [ "Stability of Long Time Evolution in Schrödinger Equation", "In this example, we validate the structure of Autoflow from another point of view, the stability of linear Autoflow in a long time evolution.", "We implement our experiment on the Schrödinger equation as, $i u_t = -u_{xx}+V\\left(x,t\\right)u,\\quad x \\in [-\\pi ,\\pi ],\\, t>0,$ with potential $V \\equiv 0$ , which describes a pure phase rotation from 0s to 0.03s.", "The initial and boundary conditions are the same as those in Eq.", "(REF ).", "The experimental result in Fig.REF shows that the Autoflow with unitary matrices can “stabilize\" the inference errors in long time evolution.", "It is noticed that the design of intermediate dimension in Autoflow promises the use of unitary matrices which respect the property of Schrödinger equation.", "In contrast, it is difficult to utilize unitary matrices in a standard Convolutional Neural Network (CNN) with higher dimensional feature space.", "This shows that our model is flexible and scalable to respect the property of original PDEs.", "Specifically, in order to guarantee the long-range stability [33], [34], we employ unitary matrices [33] in each layer of linear Autoflow to enforce the probability conservation property of the Schrodinger equation.", "We test the performance of long-time evolution on the two model settings: linear Autoflow with unitary matrices and regular linear Autoflow without unitary matrices.", "First, the model is trained to approximate the solution at time $T=0.03$ s given input solution at 0s.", "Next, we apply the trained model multiple times and inference the solutions on test data from T to 4T.", "The test results under the two model settings are shown in Fig.REF .", "The results are examined by the relative error, $\\frac{1}{N}\\sum _{i=0}^{N}\\left(\\frac{\\hat{u}(x_i,t)-u(x_i,t)}{u(x_i,t)}\\right)^2$ , where $\\hat{u}$ is the prediction of the network and $u$ is the target.", "The results show that the Autoflow without restriction shows “exponential\" errors after being applied multiple times, while the Autoflow with unitary matrices can “stabilize\" the inference errors in a long 
time evolution.", "Figure: The relative errors of linear Autoflow with unitary matrices (blue line) and regular linear Autoflow (orange line) for long-time inference." ], [ "Analysis of Behaviors of the Weights in DOSnet", "We analyze the converged weights in the linear DOSnet when it is applied to solve a linear PDE.", "In supervised learning, the dataset contains $M$ input-target pair $\\lbrace u_0^{(i)},u_T^{(i)}\\rbrace ,\\,i=1,\\cdots ,M$ .", "Consider a two-layer linear neural network[41] to learn the mappling: $f: u_0\\in \\mathbb {R}^{N_1} \\rightarrow u_T \\in \\mathbb {R}^{N_3}$ .", "The output $o = W^{32}h =W^{32}W^{21}u_0 \\in \\mathbb {R}^{N_3}$ is trained to approximate $u_T$ , where the hidden state $h = W^{21}u_0 \\in \\mathbb {R}^{N_2}$ .", "In the linear Autoflow, $N_1 = N_2 = N_3$ .", "The loss function is defined as $ L = \\sum _{i=1}^{M}\\Vert u_T^{(i)} - W^{32}W^{21}u_0^{(i)}\\Vert ^2+\\beta \\left(\\Vert W^{32}\\Vert ^2 + \\Vert W^{21}\\Vert ^2\\right),$ we explore the behavior of the weights $W^{32}$ and $W^{21}$ by analyzing the gradient descent dynamics that optimizes Eq.", "(REF ): $ \\frac{1}{\\lambda }\\frac{dW^{21}}{dt} = (W^{32})^{T}\\left(\\Sigma ^{31}-W^{32}W^{21}\\Sigma ^{11}\\right)-\\beta W^{21},$ $ \\frac{1}{\\lambda }\\frac{dW^{32}}{dt} = \\left(\\Sigma ^{31}-W^{32}W^{21}\\Sigma ^{11}\\right)(W^{21})^{T}-\\beta W^{32},$ where $t$ measures time in the unit of iterations, $\\lambda $ is the step size, $\\Sigma ^{11} = \\frac{1}{M}\\sum _{i = 1}^{M}u_0^{(i)}u_0^{(i)T}$ is the $N_1\\times N_1$ input covariance matrix, $\\Sigma ^{31} = \\frac{1}{M}\\sum _{i = 1}^{M}u_T^{(i)}u_0^{(i)T}$ is the $N_3\\times N_1$ input-output cross-covariance matrix.", "We obtain the following proposition for the behaviors of the weights in the linear Autoflow.", "This proposition holds when the linear Autoflow is applied to a general problem including solving a linear PDE.", "Note that $N_1=N_2=N_3$ in these propositions.", "Proposition 1 Assume the initial weights follow orthogonal initialization from [41], then the converged weight matrices $W^{32}$ and $W^{21}$ has the following relation, $ W^{21} = \\text{sgn}(S^{31}) (W^{32})^{T} U^{33}(V^{11})^{T},$ $ W^{21} = \\pm R\\sqrt{\\text{sgn}(S^{31})S^{31}-\\beta } (V^{11})^{T},$ $W^{32} = \\pm \\text{sgn}(S^{31}) U^{33}\\sqrt{\\text{sgn}(S^{31})S^{31}-\\beta } R^T,$ where $S^{31}$ is the diagonal matrix, $V^{11}$ and $U^{33}$ are orthogonal matrices from the singular value decomposition (SVD) of $\\Sigma ^{31}$ , and $R$ is an arbitrary orthogonal matrix.", "Consider the dynamics of weights given by Eqs.", "(REF ) and (REF ).", "Following the assumption in [41], we assume $\\Sigma ^{11}=I$ , and consider the SVD decomposition of $\\Sigma ^{31}$ : $ \\Sigma ^{31} = U^{33}S^{31}(V^{11})^{T} = \\sum _{\\alpha =1}^{N_1} s_\\alpha u_\\alpha v_\\alpha ^T,$ where $S^{31} $ is a $N_3\\times N_1$ diagonal matrix whose diagonal elements are singular values $s_\\alpha $ , $\\alpha = 1,\\cdots ,N_1$ , $s_1\\ge s_2 \\ge \\cdots \\ge s_{N_1}$ , and $V^{11}$ and $U^{33}$ are orthogonal matrices.", "By changes of variable $W^{21} = \\bar{W}^{21}(V^{11})^{T}, W^{32} = U^{33}\\bar{W}^{32}$ , Eqs.", "(REF ) and (REF ) can be rewritten as $ \\Gamma \\frac{d\\bar{W}^{21}}{dt} = (\\bar{W}^{32})^{T}\\left(S^{31}-\\bar{W}^{32}\\bar{W}^{21}\\right)-\\beta \\bar{W}^{21},$ $ \\Gamma \\frac{d\\bar{W}^{32}}{dt} = \\left(S^{31}-\\bar{W}^{32}\\bar{W}^{21}\\right)\\bar{W}^{21T}-\\beta \\bar{W}^{32},$ where $\\Gamma = \\frac{1}{\\lambda }$ .", "Denote 
$\\mathbf {a}^{\\alpha }$ as the $\\alpha \\text{-}th$ column of $\\bar{W}^{21}$ and $\\mathbf {b}^{\\alpha T}$ the $\\alpha \\text{-}th$ row of $\\bar{W}^{32}$ .", "We rewrite Eqs.", "(REF ) and (REF ) by the column vectors of the two weight matrices, $ \\Gamma \\frac{d \\mathbf {a}^{\\alpha }}{dt} = \\left(s_\\alpha -\\mathbf {a}^{\\alpha }\\cdot \\mathbf {b}^{\\alpha }\\right)\\mathbf {b}^{\\alpha } - \\sum _{\\gamma \\ne \\alpha }\\mathbf {b}^{\\gamma }\\left(\\mathbf {a}^{\\alpha }\\cdot \\mathbf {b}^{\\gamma }\\right) - \\beta \\mathbf {a}^{\\alpha },$ $ \\Gamma \\frac{d \\mathbf {b}^{\\alpha }}{dt} = \\left(s_\\alpha -\\mathbf {a}^{\\alpha }\\cdot \\mathbf {b}^{\\alpha }\\right)\\mathbf {a}^{\\alpha } - \\sum _{\\gamma \\ne \\alpha }\\mathbf {a}^{\\gamma }\\left(\\mathbf {b}^{\\alpha }\\cdot \\mathbf {a}^{\\gamma }\\right) - \\beta \\mathbf {b}^{\\alpha }.$ The energy function can be written accordingly as, $E = \\frac{1}{2\\Gamma }\\sum _\\alpha \\left(s_\\alpha -\\mathbf {a}^{\\alpha }\\cdot \\mathbf {b}^{\\alpha }\\right)^2+\\frac{1}{2\\Gamma }\\sum _{\\alpha \\ne \\gamma } \\left(\\mathbf {a}^{\\alpha }\\cdot \\mathbf {b}^{\\gamma }\\right)^2 + \\frac{\\beta }{2\\Gamma } \\sum _\\alpha \\left(\\left(\\mathbf {a}^{\\alpha }\\right)^2+\\left(\\mathbf {b}^{\\alpha }\\right)^2\\right).$ In order to decouple the interaction terms in Eqs.", "(REF ) and (REF ), following the assumption on initial conditions in [41], we assume that the initial value of $\\mathbf {a}^\\alpha $ and $\\mathbf {b}^\\alpha $ are all parallel to $\\mathbf {\\gamma }^\\alpha $ , i.e., $\\mathbf {a}^\\alpha \\mathop {//} \\mathbf {b}^\\alpha \\mathop {//}\\mathbf {\\gamma }^\\alpha $ , where $\\left\\lbrace \\mathbf {\\gamma }^\\alpha \\right\\rbrace $ is a fixed collection of $N_2$ -vectors that form an orthonormal basis.", "Under this assumption, $\\mathbf {a}^\\alpha $ and $\\mathbf {b}^\\alpha $ are in the same direction and only differ in their scalar magnitudes, and are orthogonal to each other.", "Consider the magnitude of $\\mathbf {a}^\\alpha $ and $\\mathbf {b}^\\alpha $ on $\\mathbf {\\gamma }^\\alpha $ , $a^{\\left(\\alpha \\right)} = \\mathbf {a}^\\alpha \\cdot \\mathbf {\\gamma }^\\alpha $ , $b^{\\left(\\alpha \\right)} = \\mathbf {b}^\\alpha \\cdot \\mathbf {\\gamma }^\\alpha $ .", "The dynamics of the scalar magnitudes are, $ \\Gamma \\frac{d a^{\\left(\\alpha \\right)}}{dt} = b^{\\left(\\alpha \\right)}\\left(s_\\alpha -a^{\\left(\\alpha \\right)}b^{\\left(\\alpha \\right)}\\right)-\\beta a^{\\left(\\alpha \\right)},$ $\\Gamma \\frac{d b^{\\left(\\alpha \\right)}}{dt} = a^{\\left(\\alpha \\right)}\\left(s_\\alpha -a^{\\left(\\alpha \\right)}b^{\\left(\\alpha \\right)}\\right)-\\beta b^{\\left(\\alpha \\right)}.$ The energy function is, $ E\\left(a^{\\left(\\alpha \\right)},b^{\\left(\\alpha \\right)}\\right) = \\frac{1}{2\\Gamma }\\left(s_\\alpha -a^{\\left(\\alpha \\right)}b^{\\left(\\alpha \\right)}\\right)^2+\\frac{\\beta }{2\\Gamma }\\left(a^{\\left(\\alpha \\right)2}+b^{\\left(\\alpha \\right)2}\\right)$ Thus the fixed points should satisfy $\\left\\lbrace \\begin{aligned}& a^{\\left(\\alpha \\right)}b^{\\left(\\alpha \\right)}\\pm \\beta = s_\\alpha , \\\\& a^{\\left(\\alpha \\right)} = \\pm b^{\\left(\\alpha \\right)}.\\end{aligned}\\right.$ Thus the fixed points have the following three cases: Case 1: $(a^{(\\alpha )},b^{(\\alpha )})= \\left(0,0\\right)$ Case 2: when $a^{\\left(\\alpha \\right)}=-b^{\\left(\\alpha \\right)}, a^{\\left(\\alpha \\right)}b^{\\left(\\alpha \\right)}-\\beta 
= s_\\alpha $ , two fixed points are $\\left(-\\sqrt{-s_\\alpha -\\beta },\\sqrt{-s_\\alpha -\\beta }\\right)$ , $\\left(\\sqrt{-s_\\alpha -\\beta },-\\sqrt{-s_\\alpha -\\beta }\\right)$ .", "Case 3: when $a^{\\left(\\alpha \\right)}=b^{\\left(\\alpha \\right)}, a^{\\left(\\alpha \\right)}b^{\\left(\\alpha \\right)}+\\beta = s_\\alpha $ , two fixed points are $\\left(-\\sqrt{s_\\alpha -\\beta },-\\sqrt{s_\\alpha -\\beta }\\right)$ , $\\left(\\sqrt{s_\\alpha -\\beta },\\sqrt{s_\\alpha -\\beta }\\right)$ .", "To examine the convergence of the dynamics of Eqs.", "(REF ) and (REF ), we examine the stability of the above fixed points by linear stability analysis.", "Here we set $\\beta > 0, \\Gamma = 1$ .", "In case 1, the linearization of Eqs.", "(REF ) and (REF ) is $\\frac{d}{dt}\\begin{pmatrix}\\Delta a^{\\left(\\alpha \\right)}\\\\\\Delta b^{\\left(\\alpha \\right)}\\end{pmatrix} = \\begin{pmatrix}-\\beta & s_\\alpha \\\\s_\\alpha & -\\beta \\end{pmatrix} \\begin{pmatrix} \\Delta a^{\\left(\\alpha \\right)}\\\\\\Delta b^{\\left(\\alpha \\right)}\\end{pmatrix},$ where $\\Delta a$ and $\\Delta b$ are deviations from the fixed point a and b, respectively.", "Eigenvalues of the linearized operator $\\big ({\\begin{matrix}-\\beta & s_\\alpha \\\\s_\\alpha & -\\beta \\end{matrix}}\\big )$ are $-\\beta -s_\\alpha $ and $s_\\alpha -\\beta $ .", "Thus, when $\\vert s_{\\alpha }\\vert < \\beta $ , the fixed point $(0,0)$ is linearly stable.", "For case 2, the linearized operator has the form $\\big ({\\begin{matrix} s_\\alpha & s_\\alpha +2\\left(-s_\\alpha -\\beta \\right)\\\\s_\\alpha +2\\left(-s_\\alpha -\\beta \\right) & s_\\alpha \\end{matrix}}\\big )$ .", "The eigenvalues are $-2\\beta $ and $2\\left(s_\\alpha +\\beta \\right)$ .", "When $\\left(s_\\alpha +\\beta \\right) < 0$ , i.e.", "$s_\\alpha <-\\beta $ , the fixed point $a = -b$ is linearly stable.", "For case 3, the eigenvalues of the linearized operator $\\big ({\\begin{matrix} -s_\\alpha & s_\\alpha -2\\left(s_\\alpha -\\beta \\right)\\\\s_\\alpha -2\\left(s_\\alpha -\\beta \\right) & -s_\\alpha \\end{matrix}}\\big )$ are $-2\\beta $ and $-2\\left(s_\\alpha -\\beta \\right)$ .", "Thus, when $s_\\alpha > \\beta $ , the fixed point $a = b$ is linearly stable.", "We show the dynamics of Eqs.", "(REF ) and (REF ) in the above three cases in Fig.REF , .", "Figure: Vector field defined by Eqs.", "() and ().", "(a) Case 1: vector field with β=0.5,s=0.1,Γ=1.\\beta =0.5, s=0.1, \\Gamma = 1.", "(b) Case 2: vector field with β=0.1,s=-1,Γ=1.\\beta =0.1, s=-1, \\Gamma = 1.", "(c) Case 3: vector field with β=0.1,s=1,Γ=1.\\beta =0.1, s=1, \\Gamma = 1.", "The red curves indicate s=ab±βs = ab \\pm \\beta , and the blue straight lines show a=±ba = \\pm b.", "The red points are the stable fixed points.In summary, for any $\\alpha $ , when $\\vert s_{\\alpha }\\vert > \\beta $ , the nontrivial fixed points will be linearly stable.", "Especially, $\\mathbf {a}^{\\alpha }$ = $\\text{sgn}(s_\\alpha ) \\mathbf {b}^{\\alpha }$ .", "In contrast, when $\\vert s_{\\alpha }\\vert < \\beta $ , $\\mathbf {a}^\\alpha $ and $\\mathbf {b}^\\alpha $ will converge to zero.", "In the nontrivial case, we have $\\bar{W}^{21} = \\text{sgn}(S^{31})(\\bar{W}^{32})^{T}$ .", "Furthermore, by the assumption that the columns of $\\bar{W}^{21}$ and the rows of $W^{32}$ satisfy $\\mathbf {a}^\\alpha \\mathop {//} \\mathbf {b}^\\alpha \\mathop {//}\\mathbf {\\gamma }^\\alpha , \\alpha = 1,\\cdots ,N_2$ , we can write $\\bar{W}^{21}$ and $\\bar{W}^{32}$ according to their converged 
magnitudes $(a^{(\\alpha )},b^{(\\alpha )})$ in Case 2 and 3 as $\\bar{W}^{21} = \\pm R\\sqrt{\\text{sgn}(S^{31})S^{31}-\\beta },$ $\\bar{W}^{32} = \\pm \\text{sgn}(S^{31}) \\sqrt{\\text{sgn}(S^{31})S^{31}-\\beta } R^T,$ where $R$ is the orthogonal matrix whose $\\alpha $ -th column is $\\mathbf {\\gamma }_\\alpha , \\alpha = 1,\\cdots ,N_2.$ Thus $W^{21}$ and $W^{32}$ satisfies Eqs.", "(REF )-(REF ).", "Consider a linear PDE with some symmetry properties.", "Suppose the linear PDE $\\mathcal {L}u(x,t)=0$ , where $\\mathcal {L}$ is a differential operator, is defined on a periodic spatial domain with period $L$ , and $t\\ge 0$ .", "The initial condition is $u(x,0)=g(x)$ .", "The fundamental solution $\\phi (x,x^{\\prime };t,t^{\\prime })$ of this PDE satisfies $\\begin{split}\\mathcal {L}\\phi (x,x^{\\prime };t,t^{\\prime }) &= 0, \\\\\\text{s.t.}", "\\quad \\phi (x,x^{\\prime };t^{\\prime },t^{\\prime })&=\\delta (x-x^{\\prime }),\\end{split}$ where $\\delta $ is the Dirac Delta function.", "Consider the following symmetry properties of the PDE.", "Spatial translational symmetry.", "Then we can derive $\\mathcal {L}u(x+\\delta x,t)=0$ from $\\mathcal {L}u(x,t)=0$ , where $\\delta x$ is an arbitrary distance, and the fundamental solution $\\phi $ given in Eq.", "(REF ) satisfies $\\phi (x,x^{\\prime };t,t^{\\prime }) = \\phi (x-x^{\\prime };t,t^{\\prime })$ .", "Temporal translational symmetry.", "We can derive $\\mathcal {L}u(x,t+\\delta t)=0$ from $\\mathcal {L}u(x,t)=0$ , where $\\delta t$ is an arbitrary time, and the fundamental solution $\\phi $ given in Eq.", "(REF ) satisfies $\\phi (x,x^{\\prime };t,t^{\\prime }) = \\phi (x,x^{\\prime };t-t^{\\prime })$ .", "With the spatial and temporal translational symmetry, the fundamental solution given in Eq.", "(REF ) satisfies $\\phi (x,x^{\\prime };t^{\\prime },t^{\\prime }) = \\phi (x-x^{\\prime },t-t^{\\prime })$ .", "Even parity symmetry.", "We can derive $\\mathcal {L}u(-x,t)=0$ from $\\mathcal {L}u(x,t)=0$ , and the fundamental solution $\\phi $ satisfies $\\phi (x-x^{\\prime },t-t^{\\prime }) = \\phi (x^{\\prime }-x,t-t^{\\prime })$ .", "Time-reversal symmetry.", "We can derive $\\mathcal {L}u(x,-t)=0$ from $\\mathcal {L}u(x,t)=0$ , and the fundamental solution $\\phi $ satisfies $\\phi (x-x^{\\prime },t-t^{\\prime }) = \\phi (x-x^{\\prime },t^{\\prime }-t)$ .", "The following proposition summarizes the behaviors of the weights in the linear Autoflow when it is applied to solve a linear PDE with some symmetry properties.", "Especially, under some symmetries of the PDE, the weights between different layers in the network are equal.", "Proposition 2 () Consider the initial value problem of a linear PDE with periodic boundary condition in spatial domain as described above.", "The spatial coordinate $x$ is discretized into $N_1$ uniform grid points, and the initial condition $g(x)$ of the PDE on the discrete mesh satisfies the covariance matrix $\\Sigma ^{11}=I$ .", "If this linear PDE satisfies the spatial and temporal translational symmetries, and the even parity symmetry ( or the time-reversal symmetry), then $\\Sigma ^{31}$ , the input-output cross-covariance matrix between $u(x_j,0)$ and $u(x_k,T)$ where $j,k = 1,2,\\cdots ,N_1$ , is symmetric up to the leading order of $\\Delta x$ , where $\\Delta x$ is the grid constant.", "As a result, we can specify $U^{33} = V^{11}$ , thus $W^{21}$ and $ W^{32}$ in the linear Autoflow for solving this PDE satisfy $W^{21} = \\text{sgn}(S^{31}) (W^{32})^{T}$ .", "Moreover, when $W^{32}$ is 
symmetric, we have $W^{21} = \\text{sgn}(S^{31}) W^{32}$ .", "By Eq.", "(REF ), $U^{33}$ and $V^{11}$ are determined by the input-output cross-covariance matrix $\\Sigma ^{31}$ .", "In the case being considered, the input and output data are the solutions of the linear PDE at different time levels.", "Assume that the input data is the solution of the linear PDE at initial time $t=0$ : $u\\left(x,0\\right)=g\\left(x\\right)$ , where $x$ is spatial coordinate.", "Ground truth at time T is the solution $u\\left(x, T\\right)$ .", "The cross-covariance between $u(x_j,t_1)$ and $u(x_k,t_2)$ , where $j,k = 1,2,\\cdots ,N_1$ and $t_1, t_2 \\in [0,T] $ is $\\left< u(x_j,t_1), u(x_k,t_2)\\right> = \\frac{1}{M}\\sum _{i = 1}^{M}u(x_j,t_1)^{(i)}u(x_k,t_2)^{(i)}$ , where $M$ is the number of data.", "Specifically, the cross-covariance matrix between $u(x_j,0)$ and $u(x_k,T)$ is $ \\begin{aligned}\\Sigma ^{31}_{jk} = & \\left< u(x_j,0),u(x_k,T)\\right>\\\\\\approx & \\left< g(x_j),\\sum _{\\ell =1}^{N_1}\\phi (x_\\ell ,x_k;T,0)g(x_\\ell )\\Delta x\\right> \\\\=&\\sum _{\\ell =1}^{N_1}\\phi (x_\\ell ,x_k;T,0)\\left< g(x_j),g(x_\\ell )\\right> \\Delta x \\\\=& \\sum _{\\ell =1}^{N_1} \\phi (x_\\ell ,x_k;T,0)\\delta _{jl}\\Delta x \\\\=&\\phi (x_j,x_k;T,0) \\Delta x , \\\\\\end{aligned}$ where $\\phi $ is the fundamental solution of the linear PDE.", "The assumption of $\\Sigma ^{11} = I$ , means that the initial data $g(x)$ satisfies $\\left< g(x_j),g(x_\\ell )\\right> =\\delta _{jl} = \\left\\lbrace \\begin{array}{lr} 0, \\, \\text{if} \\, j \\ne l ,\\\\ 1, \\, \\text{if} \\, j=l.", "\\end{array}\\right.$ Since the linear PDE satisfies the spatial and temporal translational symmetries, its fundamental solution has the property that $\\phi (x_j,x_k; T,0) = \\phi (x_j-x_k, T)$ .", "There are two cases in which we derive the symmetry of $\\Sigma ^{31}$ : (i) if the linear PDE satisfies the even parity symmetry, then we have $\\phi (x_j-x_k, T) = \\phi (x_k-x_j, T)$ ; (ii) if the linear PDE has time-reversal symmetry, according to the definition of cross-covariance, we have $\\left< u(x_j,0),u(x_k, T)\\right> = \\phi (x_j-x_k, T) = \\left< u(x_k, T), u(x_j,0)\\right> = \\phi (x_k-x_j,-T) = \\phi (x_k-x_j, T)$ .", "By the above conditions, we can derive that the input-output cross-covariance matrix $\\Sigma ^{31}_{jk}$ has the symmetric leading order behavior $\\phi (x_j,x_k;T,0)\\Delta x,$ where $j,k = 1,\\cdots ,N_1$ , as $\\Delta x \\rightarrow 0$ .", "Thus we can specify $U^{33} = V^{11}$ .", "Further using Eq.", "(REF ), we have $W^{21} = \\text{sgn}(S^{31}) (W^{32})^{T}$ .", "Especially, when $W^{32}$ is symmetric, i.e., $W^{32} = (W^{32})^T$ , then $W^{21} = \\text{sgn}(S^{31})W^{32}$ .", "The following proposition gives a necessary condition for the intermediate output in linear Autoflow to be a true intermediate state of the solution of the linear PDE.", "Proposition 3 If the intermediate output $h$ of the two-layer linear Autoflow is the solution of the PDE in Proposition REF , then $W^{32}$ and $W^{21}$ should be symmetric.", "If the intermediate output $h$ of the linear Autoflow is the solution of the linear PDE in Proposition REF , then the conclusion in Proposition REF holds for $h$ .", "That means the input-intermediate output cross-covariance matrix $\\Sigma ^{21} = \\frac{1}{M}\\sum _{i = 1}^{M}h^{(i)}u_0^{(i)T}$ should be symmetric.", "By specifying $R=U^{33}=V^{11}$ , $W^{32}$ and $W^{21}$ are symmetric." 
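The reduced dynamics of the scalar magnitudes $a^{(\alpha)}$ and $b^{(\alpha)}$ and the linear stability analysis above can be checked numerically in a few lines; the explicit Euler integration, step size, initial conditions, and the choice $\Gamma = 1$ below are our own, and the printed values are only approximate.

```python
import numpy as np

def gradient_flow(s, beta, a0, b0, dt=1e-2, steps=200000):
    """Integrate da/dt = b (s - a b) - beta a,  db/dt = a (s - a b) - beta b
    (Gamma = 1) with explicit Euler steps and return the final (a, b)."""
    a, b = a0, b0
    for _ in range(steps):
        da = b * (s - a * b) - beta * a
        db = a * (s - a * b) - beta * b
        a, b = a + dt * da, b + dt * db
    return a, b

# s > beta: converges to a = b = +-sqrt(s - beta); |s| < beta: converges to (0, 0).
print(gradient_flow(s=1.0, beta=0.1, a0=0.3, b0=0.2))   # approx (0.949, 0.949)
print(gradient_flow(s=0.1, beta=0.5, a0=0.3, b0=0.2))   # approx (0.0, 0.0)
```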
], [ "A Numerical Scheme of Solving Allen-Cahn Equation", "Numerically, the operator splitting method is commonly used to solve the Allen-Cahn equation (REF ).", "Denoted $u_n = u\\left(\\cdot , t_n\\right)$ , and $t_n = n\\tau $ where $\\tau $ is the time step.", "In each time step, the symmetrized Strang splitting scheme [6] for solving the Allen-Cahn equation consists the following three sub equations successively, $&\\partial _t u_1 =\\epsilon ^2 \\nabla ^2 u_1 \\,\\,\\, \\text{in}\\, \\left(t_n,t_{n+1/2}\\right), \\, u_1\\left(t_n\\right) = u_n \\rightarrow u_1^\\star = u_1\\left(t_{n+1/2}\\right) \\\\&\\partial _t u_2 =-\\left(u_2^3-u_2\\right) \\,\\,\\, \\text{in}\\, \\left(t_n,t_{n+1}\\right), \\, u_2\\left(t_n\\right) = u_1^\\star \\rightarrow u_2^\\star = u_2\\left(t_{n+1}\\right) \\\\&\\partial _t u_3 =\\epsilon ^2 \\nabla ^2 u_3 \\,\\,\\, \\text{in}\\, \\left(t_{n+1/2},t_{n+1}\\right),\\, u_3\\left(t_{n+1/2}\\right) = u_2^\\star \\rightarrow u\\left(t_{n+1}\\right) = u_3\\left(t_{n+1}\\right).", "$ The diffusion equations (REF ) and () can be solved by the fast Fourier transform (FFT) method with computational complexity $O\\left(N^2\\text{log}N\\right)$ , where $N^2$ is number of discretized grid in the two dimensional spatial domain.", "The second nonlinear equation () can be solved analytically as, $ u_2\\left(t_{n+1}\\right) = u_1^\\star \\Big {/}\\sqrt{e^{-2\\tau }+\\left(1-e^{-2\\tau }\\right)\\left(u_1^\\star \\right)^2}.$ The truncated error of this numerical scheme is $O\\left(\\tau ^2\\right)$ .", "In order to reduce the error, the time step $\\tau $ has to be very small, leading to a large number of steps.", "The FFT method in equation (REF ) and () gives the largest contribution to the computational complexity during the numerical simulation." ], [ "Optical transmission system", "Here we give a more detailed description of the optical transmission system [37] which is studied as an application of DOSnet in section REF .", "In our experiments, we only consider the single channel system.", "There are three basic components in an optical transmission system: transmitter, optical fiber, and receiver.", "In the beginning, the electrical signal is converted into optical signal at the transmitter.", "Then the optical signal propagates through the fiber for a long distance.", "Finally it arrives at the optical receiver.", "The received signal is then converted back to electrical domain and is processed by digital signal processor (DSP).", "The processed signal is used for information detection.", "The clean signal at transmitter propagate through 20 spans optical fiber and arrived at the receiver.", "Each span of fiber is 80 km and contains an inline optical amplifier (OA).", "OA induces amplified spontaneous emission noise.", "This noise, together with the three factors in the fiber, loss, dispersion, and nonlinear interaction, cause the signal to be distorted inside optical fiber.", "In order to recovery the original information of this signal, nonlinear compensation is required before the decision procedure is applied.", "The whole pipeline is shown in Fig.REF .", "Next, we describe each component of the system in details.", "Signal generated at transmitter.", "The input signal at the transmitter takes the form of $u_0(t) = \\sqrt{P_L}\\sum _{n} a_n h(t-n T_s),$ where $P_L$ is the called launch power.", "We choose launch power as 0 dBm here, i.e., $P_L = 10^{-3}$ W. 
Each $a_n\\in \\mathcal {C}$ represents a symbol and $h$ is the root-raised-cosine filter.", "Here $\\mathcal {C}:=\\lbrace \\pm (2k+1)\\pm i (2l+1)\\rbrace _{0\\le k,l\\le 1}\\subset \\mathbb {C}$ is a set of 16 grid points, called 16QAM constellation.", "The real function $h$ is called root-raised-cosine filter, which is defined by $h\\left(t\\right)=\\left\\lbrace \\begin{aligned}&\\frac{1}{T_s}\\left(1+\\rho \\left(\\frac{4}{\\pi }-1\\right)\\right), \\quad &t= 0 \\\\&\\frac{\\rho }{T_s\\sqrt{2}}[\\left(1+\\frac{2}{\\pi }\\right)\\sin \\left(\\frac{\\pi }{4\\rho }\\right)+\\left(1-\\frac{2}{\\pi }\\right)\\cos \\left(\\frac{\\pi }{4\\rho }\\right)],\\quad &t=\\pm \\frac{T_s}{4\\rho } \\\\&\\frac{1}{T_s}\\frac{\\sin [\\pi \\frac{t}{T_s}\\left(1-\\rho \\right)]+4\\rho \\frac{t}{T_s}\\cos [\\pi \\frac{t}{T_s}\\left(1+\\rho \\right)]}{\\pi \\frac{t}{T_s}[1-\\left(4\\rho \\frac{t}{T_s}\\right)^2]}, \\quad &\\text{otherwise}\\end{aligned}\\right.$ where $\\rho =0.1$ and $T_s$ is the reciprocal of the symbol-rate.", "Signal propagation at fiber.", "Optical fiber is a fundamental component in optical transmission systems.", "Our 1600 km transmission system consists of 20 spans of 80 km fibers.", "At the end of each span, optical amplifier (OA) is used to exactly recover the amplitude of the signal.", "Besides, OA also add to the signal a white Gaussian noise with zero mean and variation $\\sigma ^2 = Fh\\nu _0\\left(G-1\\right)\\Delta \\nu $ , where $h$ is the Planck's constant, $\\nu _0$ denotes the carrier frequency, and $G$ is the gain of amplifier.", "Here we choose noise figure $F=4.5$ dB, $\\Delta \\nu = 50 \\text{GHz}$ where samples per symbol (sps) is 4.", "Signal received at receiver.", "When the signal arrives at the receiver, a series of operations are performed to recover the data being transimitted.", "The incoming signal is downsampled and preamplified by the amplifier first.", "Then it passes through a matched filter with the same impulse responses at Eq.", "(REF ) to reduce the noise.", "Next, in order to mitigate the distortion caused by transimission, an efficient recovery algorithm is needed.", "Our DOSnet is proposed at this end to compensates the linear and nonlinear distortion.", "Finally, the signal is downsampled again such that each symbol contains only one sample, and a classification/decision is carried out for each sample according to its distance to the points in the standard constellation $\\mathcal {C}$ .", "Then the signal is classified into 16 grid points on the constellation.", "Furthermore, those grid points on the constellation are converted to a binary sequence by Gray code (See Fig.REF ).", "The original signal is recovered.", "Figure: 16 grids points on 16 QAM constellation and their corresponding gray codes." ] ]
2212.05571
[ [ "Current-voltage characteristics of superconductor-normal\n metal-superconductor junctions" ], [ "Abstract We develop a theory of current-voltage (I-U) characteristics for superconductor-normal metal-superconductor (SNS) junctions.", "At small voltages and sufficiently low temperatures the I-U characteristics of the junction is controlled by the inelastic relaxation time, \\tau_{in}.", "In particular, the linear conductance is proportional to, \\tau_{in}.", "In this regime the I-U characteristics can be expressed solely in terms of dependence of the density of states in the normal region, \\nu(\\chi), on the phase difference of the order parameter across the the junction.", "In contrast, at large voltages the I-U characteristics of the device is controlled by the elastic relaxation time, \\tau_{el}, which is much smaller than the inelastic one." ], [ "Introduction", "The theory of current-voltage (I-U) characteristics of superconducting weak links at relatively large voltages has been developed in many articles (see for example [1], [2], [3], [4], and references therein).", "However at small voltages the I-U characteristics exhibit interesting features which are quite different from those at large voltages, this regime attracted much less attention.", "In this article we focus on the theory of I-U characteristics of SNS junctions in this regime.", "A schematic picture of an SNS junction in which the normal metal section of the junction is sandwiched in between two s-wave superconductors, is presented in Fig.", "REF .", "The difference between the phases of the order parameter on different sides on the junction $\\chi =\\chi _{1}-\\chi _{2}$ is related to the voltage across the junction $U$ by the Josephson relation, $\\frac{d\\chi }{dt} = 2eU(t).$ The most general description of quantum systems is in terms of the statistical matrix (or many-body density matrix) $\\hat{w}$ .", "Let us represent this matrix in the basis of eigenstates for the instantaneous Hamiltonian $\\hat{H} (t) $ .", "The expectation value of the current operator, $\\langle J\\rangle = \\mathrm {tr} (\\hat{J} \\hat{w})$ , may be written as $\\langle J\\rangle = \\sum _n w_{nn}J_{nn} + \\sum _{n\\ne m} w_{nm}J_{mn}= J_{d}+J_{nd}.$ Here the first term represents the diagonal contribution to the current, and the second term represents the non-diagonal contribution.", "In particular, in thermal equilibrium, where the statistical matrix is given by the Gibbs distribution, $\\hat{w}= \\exp (-\\beta \\hat{H})/Z$ , with $Z$ being the partition function, the diagonal contribution corresponds to the equilibrium current.", "A canonical example of the diagonal component, $J_{d}$ , is the equilibrium super-current in superconductors.", "We note that in non-equilibrium situations $J_{d}$ contains both the dissipative and non-dissipative parts.", "In a situation where the statistical matrix contains non-diagonal elements, the expectation value of the current acquires a non-diagonal contribution, $J_{nd}$ .", "An example of the non-diagonal component, $J_{nd}$ , is the ohmic current in normal metals.", "In this case, according to the Kubo formula, $J_{nd}$ is related to transitions between electronic eigenstates induced by the external electric field.", "We show below that at small voltages in an SNS junctions, $J_{d}\\gg J_{nd}$ , the diagonal component of the current controls both the dissipative and non-dissipative part of the current.", "The reason for this is that the dissipative part of $J_{d}$ is proportional to the inelastic 
mean free time $\\tau _{in}$ , while $J_{nd}$ is proportional to the elastic one $\\tau _{el}$ , which is usually much shorter than $\\tau _{in}$ .", "In this regime, $J_{d}$ can be evaluated in the adiabatic approximation, and it can be expressed in terms of the phase $\\chi $ and energy $\\epsilon $ dependence of the quasi-particle density of states in the normal part of the junction $\\nu (\\epsilon ,\\chi )$ .", "The physical origin of this contribution to the current is similar to the Debye mechanism of microwave absorption in gases [5], Mandelstam-Leontovich mechanism of the second viscosity in liquids [6], the Pollak-Geballe mechanism of microwave absorption in the hopping conductivity regime [7], and the mechanism of low frequency microwave absorption in superconductors [8], [9].", "In principle, such a mechanism exists independently of the nature of electronic states in the normal region of SNS junctions.", "It is also valid in the case where the electronic state in the normal region is strongly correlated; for example, the quantum Hall states [10], [11].", "In this article, however, we restrict ourselves to the case where the exited states of the electronic liquid can be described by system of Fermionic quasi-particles.", "Figure: Qualitative representation of a) 1D SNS junction b) Bulk junction with closed boundaries c) Bulk junction with open boundaries.The I-U characteristics of SNS junctions depend on the external circuits to which they are connected.", "In what follows, we will be interested in I-U characteristics of the junctions in situations where either the voltage (voltage bias setup) or current (current bias setup) is fixed by the external circuit.", "In Figs.", "REF , REF , and REF we qualitatively summarize our results for the cases of voltage- and current-biased junctions.", "In the case of voltage-biased junctions the I-U characteristic turns out to be non-monotonic, and the maximum current $J_{max}$ is reached at $eU\\sim \\tau _{in}^{-1}$ [12].", "We will show that the value of $J_{max}$ can be significantly larger than the temperature-dependent critical current $J_{c}(T)$ , and in some cases it can be as large as the zero temperature critical current $J_{c}(0)$ .", "At even larger voltages the I-U characteristic reaches a minimum, after which the current increases with voltage.", "In the case of current-biased junctions at $J_{c}<J<J_{jump}$ the voltage monotonically increases from zero to a relatively small value, which is inversely proportional to $\\tau _{in}$ .", "Then, at $J=J_{jump}\\sim J_{max}$ the I-U characteristic exhibits a jump to a significantly higher voltage.", "The presentation below is organized as follows.", "In Sec.", "we obtain general expressions for the diagonal contribution to current in terms of the inelastic relaxation time $\\tau _{in}$ and sensitivity of quasi-particle energy levels to the change in the phase difference across the junction.", "In Sec.", "we discuss the characteristic features of the current-voltage characteristics of voltage and current-biased SNS junctions, which are caused by the presence of the long inelastic relaxation time, $\\tau _{in}$ in the system.", "In Sec.", "we apply the general formalism developed in Sec.", "to study the I-U characteristics of ballistic single channel junctions (Sec.", "REF ) and diffusive multi-channel junctions (Sec.", "REF ).", "We present our conclusions in Sec. 
.", "Finally, in we present a derivation of our general equations in Sec.", "in the diffusive regime starting from the Larkin-Ovchinnikov equations for the quasi-classical Green's functions." ], [ "Description of the dynamics of SNS junction in adiabatic approximation.\n", "Due to Andreev reflection from the normal metal-superconductor boundaries of the SNS junction, low energy ($\\epsilon < \\Delta $ ) quasi-particles are trapped inside the normal region.", "If the voltage across the SNS junction is sufficiently small, the quasi-particle energies $\\epsilon _{i}(\\chi (t))$ can be calculated in the adiabatic approximation, treating the phase difference $\\chi (t)$ as a parameter.", "At finite temperature, the quasi-particles occupying these levels move in energy space together with the levels.", "This motion creates a non-equilibrium quasi-particle distribution, which relaxes via inelastic scattering and leads to dissipation.", "There are two equivalent ways to describe this non-equilibrium distribution.", "The first is to describe the occupancy of time-dependent energy levels.", "This description is similar to the Lagrangian description of fluid dynamics.", "The second approach is to consider the electron distribution as a function of energy, in analogy to the Eulerian description of fluid dynamics.", "The Lagrangian description is convenient in the cases where individual quasi-particle energy levels are well resolved, and the Eulerian description is more suitable for systems where energy levels form a continuum.", "In order to obtain the kinetic description of non-equilibrium dynamics of the junctions it is easier to start with the Lagrangian description.", "The corresponding equations in the Eulerian approach are then obtained by a straightforward change of variables." ], [ "Lagrangian description of dynamics of SNS junctions. ", "Let us introduce the occupation number of $i_{th}$ level $n_{i}(t)$ .", "In the adiabatic approximation only scattering can change the occupation of a particular level, so the time evolution of $n_i (t)$ is controlled by the following equation, $\\frac{d n_{i}(t)}{d t}=I_{st} \\lbrace n_{i}\\rbrace .$ We will use an expression for the scattering integral in the relaxation time approximation $I_{st} = \\frac{ n_F(\\epsilon _i(t)) - n_i(t)}{\\tau _{in}},$ where $n_{F}(\\epsilon )=1/(1+\\exp (\\epsilon /T))$ is the Fermi distribution function, and we assume that the relaxation time $\\tau _{in}(T)$ depends only on the temperature.", "In general, the relaxation time approximation is valid with precision of order one.", "However, in some cases this approximation turns out to be asymptotically exact.", "In particular, this is the case when the normal part of the junction is in the diffusive limit and the temperature is larger than the Thouless energy.", "(See the corresponding discussion in Section (REF )) At $t\\gg \\tau _{in}$ the general solution of Eqs.", "(REF ),(REF ) is given by $n_i(t) = \\int ^\\infty _ 0 \\frac{d\\tau }{\\tau _{in}} e^\\frac{-\\tau }{\\tau _i} n_F (\\epsilon _i(t-\\tau )).$ The diagonal component of the current through the junction can be written as $J_{d} = 2e\\frac{\\partial E}{\\partial \\chi }= J_c(0) Y(\\chi ,0)+ 2e\\sum _{i}\\frac{\\partial \\epsilon _{i}(\\chi )}{\\partial \\chi }n_{i}.$ The first term in Eq.", "(REF ) represents the super-current through the system in the ground state.", "Here $J_{c}(0)$ is the critical current at zero temperature, and $Y(\\chi , 0)$ is a periodic function with maximum 1 and a period $2\\pi $ ." 
], [ "Eulerian description", "In the Eulerian description the quasi-particle distribution function inside the normal region is a function of energy and time $n(\\epsilon , t)$ .", "This description is convenient in the case where the energy levels are broadened on the energy scale larger than the level spacing.", "The number of levels in the system is conserved, so the density of states is therefore subject to the continuity equation in energy space $\\partial _t \\nu (\\epsilon , \\chi ) + \\partial _\\epsilon \\big ( v_{\\nu } (\\epsilon , \\chi )\\nu (\\epsilon , \\chi ) \\big ) =0,$ where $v_\\nu (\\epsilon ,\\chi )$ is the level “velocity” in energy space.", "Using Eqs.", "(REF ), (REF ) the level velocity can be expressed in the form $v_{\\nu } (\\epsilon , \\chi )= 2 e U \\cdot V_{\\nu }(\\epsilon , \\chi ),$ where $V_{\\nu } (\\epsilon ,\\chi ) = - \\frac{1}{\\nu (\\epsilon , \\chi )} \\int _{0}^{\\epsilon } d \\tilde{\\epsilon } \\frac{\\partial \\nu (\\tilde{\\epsilon }, \\chi )}{\\partial \\chi }$ characterizes the sensitivity of the energy levels to changes of $\\chi (t)$ .", "In the absence of inelastic scattering, the time evolution due to the spectral flow is described by the continuity equation $\\partial _t(\\nu n)+\\partial _\\epsilon (v_{\\nu } \\nu n) =0$ .", "Combining it with Eq.", "(REF ) for $\\nu (\\epsilon , \\chi )$ and allowing for inelastic collisions we obtain the kinetic equation $\\partial _{t} n (\\epsilon , t)+ 2 e U(t)\\cdot V_{\\nu } (\\epsilon , \\chi )\\, \\partial _\\epsilon n(\\epsilon , t) = I_{\\mathrm {in}}\\lbrace n\\rbrace .$ The expression for the current in the Eulerian description has a form $J_{d}=J_{c}(0)Y(\\chi , 0) -2e \\int _{0}^{\\infty } d \\epsilon \\nu (\\epsilon , t) n(\\epsilon , t) V_{\\nu }(\\epsilon , \\chi ).$ Introducing the integrated density of states $N(\\epsilon ,t) = \\int ^\\epsilon _0 d \\epsilon \\nu (\\epsilon ,t),$ and changing the variables from $\\epsilon $ to $N(\\epsilon ,t)$ , we can write Eq.", "(REF ) as $\\partial _t n(N,t) = \\frac{n_F(\\epsilon (N,t)) - n(N,t)}{\\tau _{in}}.$ It has a general solution given by, $n(N,t) = \\int ^\\infty _ 0 \\frac{d\\tau }{\\tau _{in}} e^\\frac{-\\tau }{\\tau _{in}} n_F (\\epsilon (N,t-\\tau )).$ Small voltage regime: The description of dissipative current presented above simplifies significantly for slow time-dependence of the phase difference, $\\dot{\\chi }(t) =2eU (t) \\ll \\tau _{in}^{-1}$ .", "In this case, to first order accuracy in $U(t)$ , the diagonal contribution to the current can be written in the form $J_{d} (t, T)=J_{c}(T)Y(\\chi (t), T)+G_{d}[\\chi (t)] U(t).$ Here the first term represents the equilibrium super-current corresponding to the instantaneous value of $\\chi (t)$ .", "It is convenient to express it as a product of the temperature dependent critical current $J_{c}(T)$ and a dimensionless periodic function periodic function of $\\chi $ of unit amplitude, $Y(\\chi , T)$ .", "For example, at large temperatures $Y(\\chi , T)\\sim \\sin \\chi $ .", "The second term in Eq.", "(REF ) describes the diagonal contribution of the dissipative current and is characterized by the “diagonal conductance” $G_{d}[\\chi (t)]$ , which depends on the instantaneous phase difference phase difference $\\chi (t)$ .", "It can be evaluated by solving Eqs.", "(REF ),  (REF ), and (REF ) to first order in $U(t)$ , then substituting the result into equation Eqs.", "(REF ), (REF ).", "This yields the following expressions for the diagonal conductance in the Lagrangian and 
Eulerian variables $G_{d}[\\chi ] & = & -4e^2 \\tau _{in} \\sum _i \\partial _\\epsilon n_F(\\epsilon _i) \\big (\\partial _\\chi \\epsilon _{i}(\\chi ) \\big )^{2} \\\\& = & -4e^2 \\tau _{in} \\int _{0}^{\\infty } d \\epsilon \\nu (\\epsilon , \\chi ) V^2_{\\nu }(\\epsilon , \\chi )\\partial _\\epsilon n_F(\\epsilon ).$ Thus, at sufficiently small voltages the diagonal contribution to the current can be expressed in terms of the phase dependent density of states $\\nu (\\epsilon ,\\chi )$ , and is proportional to the inelastic relaxation time $\\tau _{in}$ .", "The non-diagonal contribution to the current corresponds to elastic electron transfer between the superconducting banks of the junction, and may be expressed as $J_{nd} = G_{nd} U(t) $ .", "Since it is not proportional to $\\tau _{in}$ we have $G_{nd}\\ll G_{d}$ .", "Therefore, at small voltages, it is possible to neglect $J_{nd}$ compared to $J_{d}$ .", "Equations (REF )-(REF ) which describe slow dynamics of SNS junctions in terms of the $\\chi $ and $\\epsilon $ -dependence of the quasi-particle density of states $\\nu (\\epsilon ,\\chi )$ are quite general.", "They hold at relatively small voltages, where the spectrum of quasi-particles in the normal region of the junction can be calculated in the adiabatic approximation, and the quasi-particle distribution function inside the normal region is spatially uniform.", "In we present a derivation of Eqs.", "(REF ), (REF ), and (REF ) in the diffusive regime, $L\\gg l$ , using a procedure developed by Larkin-Ovchinnikov [13].", "Here $l=v_{F}\\tau _{el}$ is the elastic mean free path, $v_{F}$ is the Fermi velocity, and $L$ is the length of the junction (See Fig.", "REF )." ], [ "General features of I-U characteristics of SNS junctions", "The form of $\\chi (t)$ in a junction depends on the external circuit.", "Below we consider the I-U characteristics for two common setups: voltage-biased junction, and current-biased junction.", "We show that the existence of the long inelastic relaxation time $\\tau _{in}$ has a dramatic effect on the shape of the I-U characteristics of the junctions.", "In the voltage bias case the I-U characteristic becomes non-monotonic: it acquires an $N$ -shape, as illustrated in Fig.", "REF .", "In the current-bias case the voltage dependence on the applied current is illustrated in Fig.", "REF .", "Broadly speaking it consists of two regions: 1) At relatively small excess of the bias current over the critical current the time-averaged voltage across the junction monotonically increases from zero, while its value remains rather small (inversely proportional to the inelastic relaxation time), 2) At larger bias currents, $J \\sim J_{jump}$ , the voltage exhibits a sharp jump to a much higher value.", "This feature of the dependence of the voltage on the bias current may have important implications for the interpretation of experimental data; because of the low values of the voltage in region 1) the transition to region 2) may be mistaken for the transition from the dissipationless to the dissipative state of the junction.", "Below, we show that the shape of the I-U characteristics at low voltages can be described in terms of the phase-dependence of the quasi-particle density of states in the junction." 
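Before specializing to the two bias setups, it may be useful to see how the phase-dependent diagonal conductance $G_d[\chi]$ is evaluated in practice. The sketch below does this for the same kind of assumed single-level toy spectrum as above (again with $\hbar = k_B = e = 1$); note that $G_d[\chi]$ vanishes at the time-reversal invariant points $\chi = \pi n$, a property used in the current-bias analysis below.

```python
import numpy as np

Delta, D, T, tau_in = 1.0, 0.8, 0.3, 50.0             # assumed toy parameters

def eps(chi):                                         # assumed single trapped level
    return Delta * np.sqrt(1.0 - D * np.sin(chi / 2.0)**2)

def G_d(chi, dchi=1e-5):
    """Diagonal conductance of one level: -4 e^2 tau_in (dn_F/d eps) (d eps/d chi)^2."""
    e_val = eps(chi)
    dn_F = -np.exp(e_val / T) / (T * (1.0 + np.exp(e_val / T))**2)
    deps = (eps(chi + dchi) - eps(chi - dchi)) / (2.0 * dchi)
    return -4.0 * tau_in * dn_F * deps**2

chi = np.linspace(0.0, 2.0 * np.pi, 4001)
print(G_d(np.array([0.0, np.pi / 2.0, np.pi])))       # vanishes at chi = 0 and pi
print(np.trapz(G_d(chi), chi) / (2.0 * np.pi))        # phase average of G_d[chi]
```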
], [ "Voltage biased SNS junctions", "In the voltage bias case we define the nonlinear conductance $\\bar{G}(U)$ as $\\bar{G}(U) = \\frac{\\langle J (t) \\rangle }{U},$ where $\\langle \\ldots \\rangle $ denotes averaging over time.", "Since the phase winds at a constant rate via the Josephson relation Eq.", "(REF ), after averaging over time the non-dissipative component of the current vanishes.", "We focus on the regime of low bias voltages, where the dissipative component of the current is dominated by the diagonal contribution.", "We choose here to work in Eularian variables.", "To obtain the expression for the nonlinear conductance in this regime we substitute Eqs.", "(REF ) and (REF ) into Eq.", "(REF ).", "It is convenient to change from integration over time $\\tau $ in Eq.", "(REF ) to an integration over phase $\\phi $ , $n(N,t) =& \\frac{1}{2eU\\tau _{in}} \\int ^\\infty _0 d\\phi e^{-\\phi /2eU\\tau _{in}} n_F\\big [ \\epsilon \\big ( N, \\chi (t) - \\phi \\big ) \\big ].$ When the temperature is large as compared to the typical range of motion of the quasi-particle energy levels, we can expand the Fermi function deviations of the instantaneous quasi-particle energies from their average positions $\\langle \\epsilon (N,\\phi ) \\rangle _\\phi $ , $\\delta \\epsilon (N, \\chi ) \\equiv & \\, \\epsilon (N, \\chi ) - \\langle \\epsilon (N,\\phi ) \\rangle _\\phi .$ This yields, $n\\big (N, t \\big ) = & \\, n_F\\big [ \\langle \\epsilon (N,\\phi ) \\rangle _\\phi \\big ] \\nonumber \\\\& + \\, \\frac{\\partial _\\epsilon n_F[ \\langle \\epsilon (N,\\phi ) \\rangle _\\phi \\big ] }{2eU\\tau _{in}} \\int ^\\infty _0 d\\phi \\exp \\left( - \\frac{\\phi }{2eU\\tau _{in}}\\right)\\delta \\epsilon (N, \\chi (t) - \\phi ) .$ Expanding the periodic phase dependence of the energy of quasi-particle levels in a Fourier series, $\\delta \\epsilon \\big ( N, \\chi ) = \\sum _{k\\ne 0} C_k\\big ( N\\big ) e^{ik\\chi },$ and using Eqs.", "(REF ), (REF ) and the expression for the current Eq.", "(REF ), we obtain the following expression for the non-linear conductance $\\begin{split}\\bar{G}_d(U) =-4e^2 \\tau _{in} \\int ^\\infty _0 dN \\partial _\\epsilon n_F\\big [\\big \\langle \\epsilon \\big (N,\\chi \\big ) \\big \\rangle _\\chi \\big ]\\sum _{k \\ne 0}\\frac{k^2 }{1 + (2keU\\tau _{in})^2} | C_k(N)|^2 .\\end{split}$ At small voltages, $eU\\ll \\tau _{in}^{-1}$ , we obtain the linear conductance, $\\begin{split}\\bar{G}_d(0) =-4e^2 \\tau _{in} \\int ^\\infty _0 dN \\partial _\\epsilon n_F\\big [\\big \\langle \\epsilon \\big (N,\\chi \\big ) \\big \\rangle _\\chi \\big ]\\sum _{k \\ne 0}k^2 | C_k(N)|^2 .\\end{split}$ Comparing with Eq.", "(REF ) we see that the linear conductance can be equivalently expressed in the terms of the phase dependent conductance $G_d[\\chi ]$ introduced in Eq.", "(REF ), $\\bar{G}_d(0) = \\int _{0}^{2\\pi } \\frac{d \\chi }{ 2\\pi } G_{d} [\\chi ].$ At large voltages, $eU\\gg \\tau ^{-1}_{in}$ , Eq.", "(REF ) yields $ \\bar{G}_d(U) = - \\frac{1}{U^2\\tau _{in}}\\int ^\\infty _0 dN \\partial _\\epsilon n_F\\big [\\big \\langle \\epsilon \\big (N,\\chi \\big ) \\big \\rangle _\\chi \\big ]\\sum _{k \\ne 0} | C_k(N)|^2 .$ For a typical phase-dependence of the quasi-particle spectrum, the Fourier sums in Eqs.", "(REF ) and (REF ) are dominated by $k$ of order unity.", "In this case, nonlinear conductance at $eU\\gg \\tau ^{-1}_{in}$ can be estimated as $ \\bar{G}_d(U)\\sim \\frac{ \\bar{G}_d(0) }{(eU \\tau _{in})^{2}}.$ According to Eq.", "(REF ), at $eU\\gg \\tau 
^{-1}_{in} $ the dc current $ \\langle J\\rangle = \\bar{G}_d(U) U$ decreases as the voltage increases.", "Thus, $\\langle J\\rangle $ has a maximum at $eU\\sim \\tau ^{-1}_{in} $ .", "The maximal current, $J_{max} \\sim \\frac{\\bar{G}_{d}(0)}{e \\tau _{in}},$ can be expressed in terms of the $\\chi $ -dependence of the quasi-particle spectrum using Eqs.", "(REF ) and (REF ).", "In the Lagrangian and Eulerian variables the corresponding expressions have the form $J_{max}&\\sim & -4e \\int d \\chi \\sum _i \\partial _\\epsilon n_F(\\epsilon _i) \\big (\\partial _\\chi \\epsilon _{i}(\\chi (t)) \\big )^{2} \\nonumber \\\\&=& -4e \\int d \\chi \\int _{0}^{\\infty } d \\epsilon \\nu (\\epsilon , \\chi ) V^2_{\\nu }(\\epsilon , \\chi )\\partial _\\epsilon n_F(\\epsilon ).$ It is worth noting that, since at high temperatures the equilibrium critical current $J_{c}(T)$ is an exponentially decaying function of $T$ , the value of $J_{max}$ can be much larger than $J_{c}(T)$ , and in some cases it can be as large as the critical supercurrent at zero temperature, $J_{c}(0)$ .", "Equation (REF ), describing the decrease of the nonlinear conductance with increasing voltage, applies as long as the non-diagonal contribution to the dissipative current, $J_{nd} = G_{nd} U(t) $ , can be neglected.", "At voltages $U\\sim U_{min}\\equiv \\frac{1}{\\tau _{in}}\\bigg [\\frac{ \\bar{G}_d(0) }{ G_{nd}}\\bigg ]^{1/2} \\gg \\frac{1}{\\tau _{in}},$ the $I-U$ characteristic develops a minimum.", "At $U> U_{min}$ the dissipative current is dominated by the non-diagonal contribution $J_{nd}$ , which increases with $U$ .", "It has been studied in many articles, see for example Refs.", "[14], [15], [16], [17], [18], [19].", "The shape of the $I-U$ characteristics of voltage-biased junctions is illustrated in Fig.", "REF .", "A somewhat different mechanism of N-type I-U characteristics of weak links has been discussed in Refs.", "[3], [12], [20].", "Figure: A schematic picture of the I-U characteristic of a voltage-biased SNS junction.", "The value of $J_{max}$ can be significantly larger than the value of the equilibrium critical current of the junction $J_{c}(T)$."
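The non-monotonic (N-shaped) I-U curve described above can be reproduced with a minimal numerical sketch. It assumes, purely for illustration, a single dominant Fourier harmonic $k=1$, so that $\\bar{G}_d(U)=\\bar{G}_d(0)/[1+(2eU\\tau _{in})^2]$, sets $e=1$, and uses toy values with $\\bar{G}_d(0)\\gg G_{nd}$; factors of order unity are not tracked.

```python
# Minimal sketch of the N-shaped I-U curve of a voltage-biased SNS junction.
# Assumptions: single Fourier harmonic k = 1, so G_d(U) = G_d0/(1+(2eU tau_in)^2),
# e = 1, and illustrative values with G_d0 >> G_nd.
import numpy as np

tau_in, G_d0, G_nd = 1.0, 100.0, 1.0

U = np.linspace(1e-3, 30.0, 20000)
J = G_d0 * U / (1.0 + (2.0 * U * tau_in) ** 2) + G_nd * U   # <J> = (G_d(U) + G_nd) U

i_max = np.argmax(J[U < 2.0])            # local maximum, expected at eU ~ 1/tau_in
i_min = i_max + np.argmin(J[i_max:])     # local minimum, expected at eU ~ U_min
print(f"maximum:  <J> = {J[i_max]:6.2f} at eU = {U[i_max]:.2f}")
print(f"minimum:  <J> = {J[i_min]:6.2f} at eU = {U[i_min]:.2f} "
      f"(sqrt(G_d0/G_nd)/(2 tau_in) = {np.sqrt(G_d0 / G_nd) / (2 * tau_in):.2f})")
```

Up to factors of order unity, the printed locations agree with $eU\\sim \\tau _{in}^{-1}$ for the maximum and with $U_{min}\\sim \\tau _{in}^{-1}[\\bar{G}_d(0)/G_{nd}]^{1/2}$ for the minimum.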
], [ "I-U characteristics of current-biased junctions", "In the current bias setup, the SNS junction undergoes a transition into a resistive state when the bias current $J$ exceeds the critical current $J_c(T)$ .", "In this case the phase difference $\\chi (t)$ increases monotonically, while the voltage $U(t)$ changes periodically with time, as illustrated in Fig.", "REF .", "In the following we will be interested in the dependence of the voltage averaged over the period of oscillations, $ \\langle U(t) \\rangle $ , on the bias current $J$ .", "Qualitatively, the I-U characteristics of the current-biased SNS junctions is shown in Fig.", "REF .", "In a wide interval of bias currents $J > J_c (T) $ the average voltage on the junction is relatively small because it is inversely proportional to the the inelastic relaxation time $\\tau _{in}$ , which is the longest relaxation time in the system.", "At a higher bias current, $J \\approx J_{jump}$ , the voltage exhibits a relatively sharp jump to a much larger value.", "The magnitude of $J_{jump}$ turns out to be of the same order as the maximal current $J_{max}$ in the voltage-bias case, which is given by Eq.", "(REF ).", "We will focus on the range of bias currents $J_c(T)<J<J_{jump}$ , in which the current is dominated by the diagonal component $J_{d}$ .", "It is important to note however that according to Eqs.", "(REF ) and (REF ), at the time-reversal invariant points $\\chi = \\pi n$ , where $n$ is an integer, the sensitivity of all quasi-particle levels with respect to the phase change vanishes.", "As a result, $J_{d}$ vanishes at these points, and in some intervals near these points the bias current must be carried by the non-diagonal contribution, $J_{nd}$ .", "Thus, the phase and time periods of the oscillations can be separated into two diagonal and two non-diagonal intervals, $t_{p}=(t_{d,1} + t_{d,2} + t_{nd,1} + t_{nd,2})$ , and $2\\pi =\\chi _{d,1}+\\chi _{nd,1}+\\chi _{d,2}+\\chi _{nd,2}$ , in which the bias current is dominated by the diagonal, $J_{d}$ , or non-diagonal, $J_{nd}$ , contributions respectively.", "The relatively sharp distinction between these two intervals is possible because $ \\bar{G}_d(0) \\gg G_{nd}$ .", "The boundaries of the non-diagonal intervals $\\chi _{nd}$ can be determined from the condition that, at $\\dot{\\chi } \\sim 1/\\tau _{in}$ the bias current can be carried by the maximal diagonal contribution, $J_d \\sim G_d(\\chi )/e\\tau _{in} =J$ .", "In the vicinity of the time-reversal invariant points, $\\chi = \\pi n +\\delta \\chi $ , we have $G_d(\\chi ) \\sim \\frac{\\delta \\chi ^{2}}{2} \\left.", "\\frac{ d^2 G_{d}(\\chi )}{d \\chi ^2}\\right|_{\\chi = \\pi n} \\sim \\bar{G}_d(0) \\frac{\\delta \\chi ^{2}}{2}.$ As a result, we get the following estimate for the width of the non-diagonal phase intervals: $\\chi _{nd} \\sim \\sqrt{J/J_{jump}} $ .", "Inside the diagonal interval the phase winds at a rate of order of $eU_{d}=eJ/\\bar{G}_{d}(0)$ , whereas inside the non-diagonal interval it winds at a rate $eU_{nd}= eJ/G_{nd}$ .", "Therefore we can neglect $t_{nd}\\sim (G_{nd}/ \\bar{G}_{d}(0))(\\chi _{nd}/\\chi _{d}) t_{d}\\ll t_{d}$ in Eq.", "(REF ).", "Thus, using the Josephson relation (REF ), the average voltage can be expressed in terms of the duration of the diagonal time intervals only, $\\langle U \\rangle = \\frac{\\pi }{e t_p} \\approx \\frac{\\pi }{e \\left(t_{d,1} + t_{d,2} \\right)}.$ If the instantaneous phase is not to close to the time-reversal invariant points, $\\chi = n\\pi $ , the 
rate of change of phase is small, and the current may be expressed in terms of the instantaneous phase $\\chi $ and its derivative via Eq.", "(REF ).", "Using this relation, the duration of the diagonal time intervals may be expressed as $t_{d,i} = \\frac{1}{2e} \\int _{\\chi _{d,i}} \\frac{ G_{d}[\\chi ] d\\chi }{J - J_{c}(T)Y(\\chi ,T)},$ where $i=1,2$ , and the integration is taken over the phase interval $\\chi _{d,i}$ .", "At small excess current, $J-J_{c}(T) \\ll J_{c}(T)$ , we can expand $Y(\\chi , T)$ near its maximum at $\\chi =\\chi _{m}$ , while at $J>J_{c}$ we can neglect the second term in the denominator in Eq.", "(REF ).", "Then, using Eq.", "(REF ) we get $\\langle U(J) \\rangle ={\\left\\lbrace \\begin{array}{ll}\\sqrt{2J_c (J - J_c)}/G_{d}[\\chi _{m}] & ,J-J_{c} \\ll J_{c} ,\\\\\\sim J/ \\bar{G}_{d}(0) & ,J_{c}\\ll J\\ll J_{jump}.\\end{array}\\right.", "}$ According to Eqs.", "(REF ) and (REF ), at relatively small currents the voltage across the junction is smaller than $1/\\tau _{in}$ .", "This justifies the use of linear in $\\dot{\\chi }$ approximation for the dissipative part of the current through the junction.", "Similarly to the case of voltage-based junction, in the current-biased case the diagonal component of the current $J_{d}$ has a maximum at $U\\sim 1/e\\tau _{in}$ , which is of order of $J_{jump}$ .", "When the bias current, $J$ reaches this value, the widths of the phase intervals $\\chi _{d,i}$ shrinks to zero, and the voltage-current dependence $\\langle U(J) \\rangle $ jumps to the branch dominated by the non-diagonal contribution to the current $J_{nd}$ .", "Near $J = J_{jump}$ the non-diagonal interval covers nearly the entire phase interval from 0 to $2\\pi $ , with the exception of points near $\\chi _{max}^{(G)}$ , where $G_{d}[\\chi ]$ reaches its maximum.", "Expanding $G_{d}[\\chi ]$ near its maximum, we estimate the conductance at the edge of the diagonal interval $G_{d}[\\chi _{max}^{(G)} + \\chi _d] \\sim J_{jump} \\tau _{in} - \\bar{G}_d(0) \\chi ^2_d.$ If the current is fixed at $J = J_{jump} - \\delta J$ , then the size of the diagonal interval is given by $\\delta J \\sim \\bar{G}_d(0) \\chi ^2_d / \\tau _{in} \\sim J_{jump} \\chi _d^2$ .", "Within the width of the jump the diagonal and non-diagonal time intervals are of the same order $t_{nd} \\sim t_{d}$ , which means that $\\chi _{d} \\sim \\bar{G}_d(0) / G_{nd} $ .", "Therefore, we estimate the width of the jump to be, $\\delta J\\sim J_{jump} \\bigg ( \\frac{G_{nd}}{ \\bar{G}_{d}(0)} \\bigg )^2 \\ll J_{jump}.$ In the regime $J>J_{jump}$ the voltage on the junction $U\\sim J/G_{nd}\\gg 1/\\tau _{in}$ , the diagonal contribution to the current $J_{d}$ is suppressed, and the I-U characteristics of the junctions are controlled by the non-diagonal contribution to the current, $J_{nd}=G_{nd}U$ .", "Figure: Time dependence of voltage at a current-based SNS junction when J>J c (T)J>J_{c}(T).Figure: Schematic picture of the I-U characteristic of the current-based SNS junction." ], [ "I-U characteristics of SNS junctions in clean and diffusive regimes", "As was shown in Sec.", ", the I-U characteristics of both voltage- and current- biased SNS junctions can be characterized by the parameter $ \\bar{G}_{d}(0) \\sim G_{d}[\\chi _{m}]$ ; see Eqs.", "(REF ), (REF ), and (REF ).", "In this section we will evaluate this parameter in the cases of ballistic single channel junctions, and diffusive multi-channel junctions." 
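The low-voltage branch of the current-biased characteristic discussed above can be illustrated with the following sketch. The functional forms $G_{d}[\\chi ]\\propto \\sin ^2\\chi $ and $Y(\\chi )=\\sin \\chi $, as well as the numerical values ($e=1$, $J_c=1$), are illustrative assumptions; the sketch only reproduces the qualitative behavior, $\\langle U\\rangle \\propto \\sqrt{J-J_c}$ just above $J_c$ and $\\langle U\\rangle \\sim J/\\bar{G}_{d}(0)$ at $J_{c}\\ll J\\ll J_{jump}$.

```python
# Minimal sketch of <U>(J) on the low-voltage branch of a current-biased junction:
# <U> = pi/(e t_p) with t_p ~ (1/2e) * integral over one period of
# G_d[chi] dchi / (J - J_c Y(chi)).  Model forms G_d[chi] = G0 sin(chi)^2 and
# Y(chi) = sin(chi) and all numerical values are illustrative assumptions (e = 1).
import numpy as np

J_c, G0 = 1.0, 100.0
chi = np.linspace(0.0, 2.0 * np.pi, 200001)

def avg_U(J):
    integrand = G0 * np.sin(chi) ** 2 / (J - J_c * np.sin(chi))
    t_p = 0.5 * np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(chi))
    return np.pi / t_p

for J in [1.001, 1.01, 1.1, 1.5, 2.0, 5.0]:
    approx = np.sqrt(2.0 * J_c * (J - J_c)) / G0         # small-excess formula
    print(f"J/J_c = {J:5.3f}   <U> = {avg_U(J):9.6f}   sqrt-law: {approx:9.6f}")
```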
], [ "Clean 1D SNS junction.", "In this subsection we consider a junction, in which the normal region consists of a clean single channel metallic wire.", "We assume that the length of the wire $L$ is larger than the superconducting coherence length.", "In this case one can evaluate the quasi-particle spectrum by solving the stationary Bogoliubov-De Gennes equations in the normal metal at fixed value of $\\chi $ with appropriate boundary conditions at NS boundaries (see for example Refs.", "[21] and [22] ) $\\begin{bmatrix}\\frac{\\hat{p}^2}{2m}-\\mu & 0 \\\\0 & - \\frac{\\hat{p}^2}{2m}+\\mu \\end{bmatrix} \\begin{bmatrix}\\psi _e\\\\\\psi _h\\end{bmatrix}= \\epsilon \\begin{bmatrix} \\psi _e\\\\\\psi _h\\end{bmatrix}.$ Below we assume that the transmission coefficients of both contacts are the same and equal to $r$ .", "In the limiting cases of high and low transparency the spectrum for $\\epsilon < \\Delta $ is given by $\\epsilon ^{\\pm }_{n} (\\chi )= \\frac{v_F}{L} {\\left\\lbrace \\begin{array}{ll}\\pi (n+\\frac{1}{2}) \\pm \\frac{\\chi }{2},& r = 1, \\\\n \\pi \\pm 2 r \\sqrt{2(1 + \\cos \\chi ) }, & r \\ll 1.\\end{array}\\right.", "}$ Here $n=0,1...$ is integer, $v_{F}$ is the Fermi velocity, and the phase $\\chi $ is understood modulo $2\\pi $ .", "Below we evaluate the linear conductance $\\bar{G}_d(0)$ given by Eqs.", "(REF ) and (REF ), which can then be used to determine the values of the maximal current in the voltage-biased set up, $J_{max}$ and the current at which the transition to a high resistance state occurs in the current- biased set up, $J_{jump}$ , via Eq.", "(REF ).", "Substituting Eq.", "(REF ) into Eq.", "(REF ) we obtain an expression for the linear conductance at high temperatures $\\bar{G}_d(0) =\\frac{e^2}{\\pi } \\frac{v_F}{L}\\tau _{in} A(r), \\,\\,\\,\\,\\ T\\gg v_{F}/L,$ where $A(r) ={\\left\\lbrace \\begin{array}{ll}1 &, r=1,\\\\8r^2 &, r \\ll 1.\\end{array}\\right.", "}$ We note that the conductance of a pure single channel SNS junction, Eq.", "(REF ), exceeds the normal state conductance $Ae^{2}/\\hbar $ , by a large factor $\\frac{v_{F}}{L}\\tau _{in}\\gg 1$ .", "Substituting Eq.", "(REF ) into Eq.", "(REF ), we get $J_{jump}\\sim & J_{max} \\sim \\frac{e v_F}{L} A(r).$ The maximal current turns out to be temperature independent.", "The reason for this is that at low energies, $\\epsilon _{n}\\ll \\Delta $ , the sensitivity of the levels to a change in $\\chi $ is independent of the energy.", "It is instructive to compare value of $J_{max}$ and $J_{jump} $ in Eq.", "(REF ) with the critical current $J_{c}(T)$ .", "The latter can be obtained by substituting Eq.", "(REF ), and the equilibrium Fermi distribution function $n_{F}(\\epsilon _{i})$ into Eq.", "(REF ), see Ref. 
[21].", "$J_{c}(T) = B(r) \\frac{ev_F}{2L } {\\left\\lbrace \\begin{array}{ll}1 &, \\quad T\\ll \\frac{v_{F}}{L}, \\\\\\exp (- \\frac{2 \\pi T L}{v_F} ) &, \\quad T\\gg \\frac{v_{F}}{L}.\\end{array}\\right.}", "$ where the dimensionless coefficient $B(r)$ has the following limiting values at high and low contact transparencies, $B(r) ={\\left\\lbrace \\begin{array}{ll}1 &, r=1,\\\\r^2/ 2\\pi &, r \\ll 1.\\end{array}\\right.", "}$ Comparing Eqs.", "(REF ) and (REF ) we arrive at a somewhat surprising conclusion that at high temperatures, $T\\gg v_{F}/L$ , the values of $J_{max}$ and $J_{jump} $ are of the order of the critical current at zero temperature, $J_{max}\\sim J_{jump} \\sim J_{c}(0) \\gg J_{c}(T).$ At small temperatures, $T\\ll v_{F}/L$ , the situation depends on the value of the transmission coefficient $r$ .", "At $r=1$ the gap in the quasi-particle spectrum closes at $\\chi =0,\\pi $ .", "In this case the main contribution to $ \\bar{G}_d(0)$ comes from the interval of times where the gap is of order $T$ .", "In this case Eqs.", "(REF ), (REF ), (REF ) yield the same values for $\\bar{G}_d(0)$ , $J_{jump}$ , and $J_{max}$ as in Eqs.", "(REF ), (REF ).", "If $r<1$ , the gap in the spectrum does not close at any value of $\\chi $ .", "Therefore, at $T\\ll v_{F}/L$ the quasi-particle concentration inside the junction is exponentially low, $n_{i}\\sim \\exp (-v_{F}/LT)$ .", "In this case there are two relaxation times characterizing the dynamics of the system: the relaxation time $\\tau _{in}$ , which characterizes processes that conserve the total number of quasi-particles, and the exponentially long recombination time, $\\tau _{in,r}\\sim \\tau _{in} \\exp (v_{F}/LT)$ , which characterizes the processes changing the total number of quasi-particles.", "The two exponential factors cancel in Eq.", "(REF ), and we can estimate the low temperature linear conductance to be roughly of the same order as in the high temperature case, $\\bar{G}_d(U) \\sim e^{2} \\frac{v_F}{L} \\tau _{in}{\\left\\lbrace \\begin{array}{ll}1 & ,(eU \\tau _{in,r})\\ll 1 ,\\,\\,\\,\\,\\,\\,\\, r < 1 ,\\\\(eU\\tau _{in,r})^{-2} & ,(eU \\tau _{in,r})\\gg 1 ,\\,\\,\\,\\,\\,\\,\\, r < 1.\\end{array}\\right.", "}$ Note that since in this case $U_{max} \\sim 1/e \\tau _{in,r}\\ll 1/\\tau _{in}$ , the value of the maximum current in the adiabatic regime turns out to be smaller than its value at high temperatures by an exponentially small factor $ e^{-v_{F}/LT}$ .", "Accordingly, at small temperatures, $T\\ll v_{F}/L$ , we have $J_{max} \\ll J_{c}(0) $ ."
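The statements above about the clean single-channel junction follow directly from the Andreev spectrum quoted earlier. The short sketch below tabulates the levels and their phase sensitivity $\\partial _\\chi \\epsilon $ in the two limits; the choice $v_F/L=1$ and the value of $r$ are illustrative. The sensitivity is independent of the level index, which is the stated reason why $J_{max}$ is temperature independent.

```python
# Minimal sketch: Andreev-level spectra of the clean single-channel junction,
# eps_n^{+-}(chi) = (v_F/L) * [ pi(n+1/2) +- chi/2 ]              (r = 1)
# eps_n^{+-}(chi) = (v_F/L) * [ n*pi +- 2 r sqrt(2(1+cos chi)) ]  (r << 1).
# Units v_F/L = 1; the value of r is an illustrative example.
import numpy as np

vF_over_L, r = 1.0, 0.1

def eps_r1(n, chi, s):                 # s = +1 or -1
    return vF_over_L * (np.pi * (n + 0.5) + s * chi / 2.0)

def eps_small_r(n, chi, s):
    return vF_over_L * (n * np.pi + s * 2.0 * r * np.sqrt(2.0 * (1.0 + np.cos(chi))))

chi, h = np.pi / 3.0, 1e-6
for n in range(3):
    slope = (eps_r1(n, chi + h, +1) - eps_r1(n, chi - h, +1)) / (2 * h)
    print(f"r=1 :  n={n}  eps={eps_r1(n, chi, +1):.3f}  d eps/d chi = {slope:.3f}")
for n in range(1, 4):
    slope = (eps_small_r(n, chi + h, +1) - eps_small_r(n, chi - h, +1)) / (2 * h)
    print(f"r<<1:  n={n}  eps={eps_small_r(n, chi, +1):.3f}  d eps/d chi = {slope:.3f}")
```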
], [ "Diffusive SNS junctions.", "Let us now consider the case of a diffusive SNS junction shown in Fig.", "REF , where two sides of a diffusive metal with the dimensions $L,L_{1},L_{2}\\gg l=v_{F}\\tau _{el}$ are attached to two superconducting parts of the junction, while the other two sides are in contact with insulator.", "A general scheme of description of the kinetic phenomena in superconductors in the diffusive regime ($L\\gg l$ ) has been developed by Larkin and Ovchinnikov [13].", "It describes both diagonal $J_{d}$ and non-diagonal $J_{nd}$ parts of the current as long as $eU<\\Delta $ .", "In the appendix we review derivation Larkin-Ovchinnikov equations and show that at $eU< E_{T}$ they can be reduced to Eqs.", "(REF ) and (REF ).", "Here $E_T = \\frac{D}{L^2}$ is the Thouless energy and $D = v_F^2 \\tau _{el} / 3$ is the diffusion coefficient in the normal metal.", "The density of states in the normal metal part of the junction can be written in terms of retarded Green's function, $\\nu (\\epsilon , \\chi ) =\\int _{V} d {\\bf r} \\tilde{\\nu }(\\epsilon , {\\bf r}, \\chi )={2} \\tilde{\\nu }_N \\mathrm {Re} \\int _{V} d {\\bf r} G_0^{R} (\\epsilon , {\\bf r}, \\chi ) .", "\\color {black}$ Here the integral is taken over the normal metal region, $\\tilde{\\nu }({\\bf r})$ is the local density of states in SNS junction, and $\\tilde{\\nu }_N $ is the density of states per unit volume per spin projection of the normal state, and $G_0^{R}(\\epsilon , {\\bf r},\\chi )$ is the dimensionless semi-classical retarded Green's function [13].", "In the geometry of the SNS junction shown in Fig.", "REF b, $\\tilde{\\nu }$ depends only on the $x$ coordinate.", "Using the normalization condition for the normal and anomalous retarded Green's functions (see Eq.", "(REF ) in the appendix), we parameterize them as follows $G_0^{R}(\\epsilon ,x) = \\cos \\theta (\\epsilon ,x), \\,\\,\\ F_0^{R} = e^{i\\tilde{\\chi }(\\epsilon ,x)} \\sin \\theta (\\epsilon ,x), \\,\\,\\ F_0^{R+} = e^{-i\\tilde{\\chi }(\\epsilon ,x)} \\sin \\theta (\\epsilon ,x),$ and $\\nu (\\epsilon , \\chi ) ={ 2} \\tilde{\\nu }_N (L_1L_2) \\int ^\\frac{L}{2} _{-\\frac{L}{2}} dx \\mathrm {Re} \\big [\\cos \\theta (\\epsilon ,x) \\big ].$ Here $\\theta $ and $\\chi $ are complex.", "In the diffusive regime the dependence of $\\theta $ and $\\tilde{\\nu }(\\epsilon , \\chi , {\\bf r})$ on $x$ and the phase difference $\\chi $ can be obtained by solving the Usadel equations [23] (see Eqs.", "(REF ) and (REF ) in the Appendix).", "In the normal region, where $\\Delta ({\\bf r})=0$ they have the form $\\frac{D}{2} \\bigg (\\partial ^2_x \\theta - \\frac{1}{2} \\sin (2\\theta ) \\big (\\partial _x \\tilde{\\chi } \\big )^2 \\bigg ) = &-\\epsilon \\sin \\theta ,\\\\\\partial _x \\big (\\sin ^2\\theta \\partial _x \\tilde{\\chi }\\big )= & \\, 0.$ The boundary conditions for these equations at $\\epsilon <\\Delta $ and $r =1$ are (see Ref.", "[24]) $\\begin{split}\\theta (\\epsilon ; x=0,L ) = \\frac{\\pi }{2} ; \\,\\,\\,\\,\\,\\,\\tilde{\\chi }(\\epsilon ;x=0,L ) = \\pm \\frac{\\chi }{2}.\\end{split}$ For $r\\ll 1$ the boundary conditions have a form $\\begin{split}D\\partial _x \\theta |_{\\epsilon ; x = 0,L}= \\pm r v_F \\big ( \\cos \\theta \\big ) \\cos (\\tilde{\\chi }\\pm \\frac{\\chi }{2}) |_{\\epsilon ; x = 0,L} , \\\\D\\sin \\theta \\partial _x \\tilde{\\chi }|_{\\epsilon ; x = 0,L}= \\pm r v_F \\sin (\\tilde{\\chi }\\pm \\frac{\\chi }{2})|_{\\epsilon ; x = 0,L}.\\end{split}$ Solutions of Eqs.", "(REF ),() were investigated in 
several articles (see for example, Refs.", "[17] and [25]).", "The density of states in the normal region of SNS junctions differs from that in the normal metal only at small energies of the order of mini-gap $E_{g}$ .", "For our purposes we need only rough features of $\\epsilon $ and $\\chi $ dependencies of the density of states, $\\nu (\\epsilon , \\chi ) = {2} v\\tilde{\\nu }_N {\\left\\lbrace \\begin{array}{ll}0 & ,\\epsilon < E_g(\\chi ),\\\\h(\\epsilon , \\chi ) & ,\\epsilon - E_g(\\chi ) \\sim E_g(\\chi ), \\\\1 & ,\\epsilon - E_g(\\chi ) >> E_g(\\chi ).\\end{array}\\right.", "}$ where $h(\\epsilon , \\chi )$ is of order unity, and $v=LL_{1}L_{2}$ is the volume of the normal metal region.", "When the phase winds from 0 to $2\\pi $ the value of mini-gap changes on the order of $E_{g}(0)\\sim E_{T} {\\left\\lbrace \\begin{array}{ll}1 &, r> \\frac{l}{L}, \\\\r\\frac{L}{l} & ,r< \\frac{l}{L},\\end{array}\\right.", "}$ which implies that $\\partial _\\chi E_g(\\chi ) \\sim E_{g}(0)$ .", "Substituting Eq.", "(REF ) into Eqs.", "(REF ),(REF ) and averaging the result over the period of oscillations we can estimate the conductance of the junction as follows $\\bar{G}_d(0,T) \\sim G^{\\prime }_{N} \\tau _{in} {\\left\\lbrace \\begin{array}{ll}\\frac{E_{g}^{2}(0)}{T}& \\,\\,\\,\\ ,T>E_{g}(0), \\,\\,\\,\\,\\ U\\ll 1/\\tau _{in}, \\\\T & \\,\\,\\,\\,\\, ,T<E_{g}(0), \\,\\,\\,\\,\\ U\\ll 1/\\tau _{in}, \\\\\\end{array}\\right.", "}$ where $G^{\\prime }_{N} = e^2 E_{g}(0) \\tilde{\\nu }_{N} v$ .", "The situation at large voltages, $eU\\gg \\tau _{in}^{-1}$ , is similar to that described in Sec (REF ) for a clean one-dimensional SNS junction.", "Namely, the nonlinear conductance $ \\bar{G}_d(U)$ is reduced from its linear value, Eq.", "(REF ), by the factor $(2eU\\tau _{in})^{-2}$ .", "Thus, the I-U characteristics of a voltage-biased junction has a maximum at $eU\\sim \\tau _{in}^{-1}$ .", "The magnitude of the maximal current can be estimated as $J _{max} \\sim J_{c}(0) {\\left\\lbrace \\begin{array}{ll}\\frac{E_{g}(0)}{T} & \\,\\,\\,\\ ,T>E_{g}(0), \\,\\,\\,\\,\\ \\\\\\frac{T}{E_{g}(0)} & \\,\\,\\,\\,\\, ,T<E_{g}(0) .\\,\\,\\,\\,\\ \\\\\\end{array}\\right.", "}$ Here $J_{c}(0)=\\frac{1}{e}G^{\\prime }_{N}E_{g}(0)$ is the critical current of a diffusive SNS junction at $T=0$ .", "We note that the value of $J_{max} $ can be significantly larger than $J_{c}(T)$ .", "At even larger larger voltages the dominant contribution to the current comes from $J_{nd}$ , which is an increasing function of voltage.", "Let us consider the case $T\\gtrsim E_{g}$ .", "In this regime the part of the resistance of the junction corresponding to $J_{nd}$ is, essentially, the resistance of the sequence of the tunneling barriers and the normal metal resistances.", "It has been considered in many articles [14], [15], [16], [17], [18], [26] and [27].", "For example, if $r=1$ and $E_{g}(0) \\sim E_{T}$ , then the contribution to the current from the non-diagonal part is on the order of the current in the normal state $J=G_{N}U$ , where $G_{N}=e^{2} E_{T}\\tilde{\\nu }_{N} v$ is the conductance of normal metal part of SNS junction (see Eqs.", "(REF ), (REF ) in the appendix).", "As a result, using Eq.", "(REF ), we get an estimate for $U_{min}\\sim \\frac{ E_{T}}{(T\\tau _{in})^{1/2}}\\ll E_{T}.$" ], [ "Conclusions.", "We have shown that the I-U characteristics of SNS junctions at low temperatures and low voltages can be expressed in terms of the energy and the phase dependence of the density of states $\\nu (\\epsilon , 
\\chi )$ .", "In this case, they are controlled by the inelastic quasi-particle relaxation time $\\tau _{in}$ .", "In contrast, at large bias voltages and currents the I-U characteristics are controlled by the elastic relaxation time $\\tau _{el}$ .", "Qualitatively, our results are shown in Figs.", "REF and REF .", "An interesting aspect of the problem is that for current-biased junctions, the jump in the I-U characteristics from the low voltage to the high voltage regime (see Fig.", "REF ) occurs at the value of the current $J=J_{jump}$ , which can be significantly larger than the value of the equilibrium critical current $J_{c}(T)$ .", "Therefore, determination of the critical currents of SNS junctions may require measurements of I-U characteristics at relatively small voltages, $eU< 1/\\tau _{in}$ .", "The results presented above are valid in situations where the low energy quasi-particles are trapped inside the normal region of the junction, and the only channel of the quasi-particle relaxation is the inelastic energy relaxation.", "In a different geometry, where the normal region of the junction is open to the bulk normal metal, as shown in Fig.", "REF , there is another channel of relaxation via diffusion of quasi-particles into the bulk of the normal metal.", "In this case one can obtain an estimate for the conductance of the system by substituting $\\tau _{in}\\rightarrow \\min [\\tau ^{*}, \\tau _{in} ]$ in Eq.", "(REF ), where $\\tau ^{*}\\sim L^{2}_{1}/D$ is the time of diffusion over the length $L_{1}$ .", "Finally, it should be mentioned that the only symmetry requirement for the density of states in a time-reversal symmetric system is $\\nu (\\epsilon , \\chi , {\\bf H})= \\nu (\\epsilon , -\\chi , -{\\bf H})$ .", "Therefore, for example, in the case of non-centrosymmetric films in a parallel magnetic field $\\nu (\\epsilon , \\chi , H)\\ne \\nu (\\epsilon ,\\chi , -H)$ and $\\nu (\\epsilon , \\chi , H)\\ne \\nu (\\epsilon ,-\\chi , H)$ .", "As a result, in general, the I-U characteristics of the SNS junctions are non-reciprocal: $|J(U)|\\ne |J(-U)|$ , and $|J(H)|\\ne |J(-H)|$ ." ], [ "Acknowledgment", "This work was supported by the National Science Foundation Grant MRSEC DMR-1719797 and also in part by the Thouless Institute for Quantum Matter and the College of Arts and Sciences at the University of Washington.", "The work of BZS was funded by the Gordon and Betty Moore Foundation’s EPiQS Initiative through Grant GBMF8686." ], [ "Derivation of Eqs. 
(", "We start with the Gorkov equations for the Green's functions in Keldysh representation [28].", "We will denote matrices in Nambu space with a hat, $\\hat{A}$ , and matrices in both Nambu and Keldysh space with a check, $\\check{A}$ .", "We have chosen units such that $\\hbar = c = 1$ .", "The Green's function is defined by the following equation, $\\begin{split}\\bigg ( i \\hat{\\tau }_3 \\partial _{t_1} + \\frac{1}{2m} \\bigg ({\\bf \\nabla _{r_1}} - ie\\hat{\\tau }_3 {\\bf A}({\\bf r_1},t_1) \\bigg )^2+ \\mu \\\\+ \\hat{\\Delta }({\\bf r_1}, t_1) - e \\phi ({\\bf r_1},t_1) \\bigg )\\check{G} ({\\bf r_1}, {\\bf r_2},t_1, t_2)\\\\- (\\check{\\Sigma }\\otimes \\check{G})({\\bf r_1}, {\\bf r_2},t_1, t_2) = \\delta (r_1 - r_2) \\delta (t_1 - t_2).\\end{split}$ Here $ {\\bf A} ({\\bf r},t)$ is the vector potential, $\\hat{\\Delta }({\\bf r},t) $ is the superconducting order parameter, $\\mu $ is the chemical potential, and $\\phi ({\\bf r})$ is the scalar potential, $\\check{G} =\\begin{pmatrix}\\hat{G}^R & \\hat{G}^K \\\\0 & \\hat{G}^A\\end{pmatrix}, &&\\check{\\Sigma }=\\begin{pmatrix}\\hat{\\Sigma }^R & \\hat{\\Sigma }^K \\\\0 & \\hat{\\Sigma }^A\\end{pmatrix},$ where $\\hat{G}^{R,A,K}$ are retarded, advanced, and Keldysh Green's functions, and $\\check{\\Sigma }$ is the self energy.", "The cross operator in Eq.", "(REF ) represents a convolution, $(O_1 \\otimes O_2) ({\\bf r_1}, {\\bf r_2}, t_1, t_2) = \\int d {\\bf r} \\int dt O_1 ({\\bf r_1}, {\\bf r}, t_1, t) O_2 ({\\bf r}, {\\bf r_2}, t, t_2).$ The conjugate equation to Eq.", "(REF ) is given by $\\begin{split}\\check{G} ({\\bf r_1}, {\\bf r_2},t_1, t_2) \\bigg ( -i \\hat{\\tau }_3 \\partial _{t_2} + \\frac{1}{2m} \\bigg ({\\bf \\nabla _{r_2}} + ie\\hat{\\tau }_3 {\\bf A}({\\bf r_2},t_2) \\bigg )^2 +\\\\ \\mu + \\hat{\\Delta }({\\bf r_2}, t_2) - e \\phi ({\\bf r_2},t_2) \\bigg )\\\\- (\\check{G}\\otimes \\check{\\Sigma })({\\bf r_1}, {\\bf r_2},t_1, t_2) = \\delta (r_1 - r_2) \\delta (t_1 - t_2),\\end{split}$ where the derivatives are understood to be acting towards the left.", "These equations should be supplemented with the self-consistent equation for the order parameter $\\Delta (r,t) = \\lambda F(r,t),$ where $\\lambda $ is the electron interaction constant." 
], [ " Quasi-classical approximation for the Gorkov equations.", "Subtracting Eq.", "(REF ) from Eq.", "(REF ) gives the following equation for the Green's functions, $\\begin{split}i \\hat{\\tau }_3 \\partial _{t_1}\\check{G}({\\bf r_1}, {\\bf r_2},t_1, t_2) + i\\hat{\\partial }_{t_2} \\check{G}({\\bf r_1}, {\\bf r_2},t_1, t_2) \\hat{\\tau }_3 \\\\+\\bigg (\\frac{1}{2m} \\bigg ({\\bf \\nabla _{r_1}} - ie {\\bf A}({\\bf r_1},t_1) \\bigg )^2 + \\mu + \\hat{\\Delta }({\\bf r_1}, t_1) - e \\phi ({\\bf r_1},t_1) \\bigg ) \\check{G}({\\bf r_1}, {\\bf r_2},t_1, t_2) \\\\- \\check{G}({\\bf r_1}, {\\bf r_2},t_1, t_2) \\bigg ( \\frac{1}{2m}({\\bf \\nabla _{r_2}} + ie \\hat{\\tau }_3 {\\bf A}({\\bf r_2},t_2) )^2 + \\mu + \\hat{\\Delta }({\\bf r_2}, t_2) - e \\phi ({\\bf r_2},t_2) \\bigg ) \\\\= ( \\check{\\Sigma }\\otimes \\check{G}) ({\\bf r_1}, {\\bf r_2},t_1, t_2) -(\\check{G} \\otimes \\check{\\Sigma }) ({\\bf r_1}, {\\bf r_2},t_1, t_2) .\\end{split}$ In the limit where fields are slowly varying in space and time, we can use the quasi-classical approximation.", "Introducing the Wigner coordinates, $\\begin{aligned}r = \\frac{1}{2} (r_1 + r_2), && \\tilde{r} = r_1 - r_2 ,\\\\t = \\frac{1}{2} (t_1 + t_2), && \\tilde{t} = t_1 - t_2 ,\\end{aligned}$ Fourier transforming equation Eq.", "(REF ) over the relative position ${\\bf \\tilde{r}}$ as well as the relative time $\\tilde{t}$ , and dropping terms which are second order in derivatives, we arrive at the following equation $\\begin{split}\\frac{1}{2} \\partial _t \\big \\lbrace \\hat{\\tau }_3, \\check{G}(\\epsilon , {\\bf r},t,{\\bf p}) \\big \\rbrace - i\\epsilon \\big [ \\hat{\\tau }_3, \\check{G}(\\epsilon , {\\bf r},t,{\\bf p}) \\big ]+ \\frac{{\\bf p} }{m} \\cdot {\\bf \\nabla _r} \\check{G}(\\epsilon , {\\bf r}, {\\bf p})\\\\+ \\big [\\hat{H}({\\bf r}, t, {\\bf p}) , \\check{G}(\\epsilon ,{\\bf r}, t, {\\bf p}) \\big ]- \\frac{i}{2} \\big \\lbrace \\partial _t \\hat{H}({\\bf r}, t, {\\bf p}) , \\partial _\\epsilon \\check{G}(\\epsilon ,{\\bf r}, t, {\\bf p}) \\big \\rbrace \\\\- \\frac{e}{2m} {\\bf A}({\\bf r},t) \\cdot {\\bf \\nabla _r} \\big \\lbrace \\hat{\\tau }_3, \\check{G}(\\epsilon , {\\bf r}, t, {\\bf p}) \\big \\rbrace + \\frac{i}{2} \\big \\lbrace {\\bf \\nabla _r} \\hat{H}({\\bf r}, t, {\\bf p}), {\\bf \\nabla _p} \\check{G}(\\epsilon ,{\\bf r}, t, {\\bf p}) \\big \\rbrace \\\\= -i\\big [ \\check{\\Sigma }(\\epsilon ,{\\bf r}, t, {\\bf p}), \\check{G}(\\epsilon ,{\\bf r}, t, {\\bf p}) \\big ]\\\\+ \\frac{1}{2} \\big \\lbrace {\\bf \\nabla _r}\\check{\\Sigma }(\\epsilon ,{\\bf r},t,{\\bf p}), {\\bf \\nabla _p}\\check{G} (\\epsilon ,{\\bf r},t,{\\bf p}) \\big \\rbrace - \\frac{1}{2} \\big \\lbrace {\\bf \\nabla _p}\\check{\\Sigma }(\\epsilon ,{\\bf r},t,{\\bf p}), {\\bf \\nabla _r}\\check{G} (\\epsilon ,{\\bf r},t,{\\bf p}) \\big \\rbrace \\\\- \\frac{1}{2}\\big \\lbrace \\partial _t \\check{\\Sigma }(\\epsilon ,{\\bf r},t,{\\bf p}), \\partial _\\epsilon \\check{G} (\\epsilon ,{\\bf r},t,{\\bf p}) \\big \\rbrace + \\frac{1}{2} \\big \\lbrace \\partial _\\epsilon \\check{\\Sigma }(\\epsilon ,{\\bf r},t,{\\bf p}), \\partial _t \\check{G} (\\epsilon ,{\\bf r},t,{\\bf p}) \\big \\rbrace .\\end{split}$ Here the brackets $[\\cdot , \\cdot ]$ and $\\lbrace \\cdot , \\cdot \\rbrace $ stand for commutators and anti-commutators, and we have defined, $\\check{G}(\\epsilon , {\\bf r}, t, {\\bf p}) = \\int dt \\int d^3{\\bf r} \\check{G}({\\bf r_1}, {\\bf r_2}, t_1, t_2) e^{-i {\\bf p}\\cdot {\\bf \\tilde{r}} + i\\epsilon \\tilde{t} 
},\\\\\\hat{H}({\\bf r}, t, {\\bf p}) = \\frac{-ie}{m} {\\bf A}({\\bf r}, t)\\cdot {\\bf p} \\hat{\\tau }_3 -i \\hat{\\Delta }({\\bf r},t)+ \\frac{ie^2}{m} {\\bf A}^2({\\bf r}, t) + ie \\phi ({\\bf r},t).&$" ], [ " The diffusion approximation for Gorkov equations.", "The self-energy $\\check{\\Sigma }=\\check{\\Sigma }_{el}+\\check{\\Sigma }_{in}$ is a sum of two contributions corresponding to elastic and inelastic scattering respectively.", "In the case when the total scattering rate $\\check{\\Sigma }$ is smaller than the characteristic quasi-particle energy, it can be dropped from the equation for the retarded Green's function.", "In this case the quasi-particle momentum is a good quantum number, and one can use a conventional Boltzmann kinetic equation for quasi-particle distribution function to describe slow superconducting dynamics [29].", "In this article we will be interested in the opposite limit, where the quasi-particle momentum is not a good quantum number, and $\\tau _{el}^{-1}>E_{T}>\\tau _{in}^{-1}.", "$ We note that the Thouless energy, $E_{T}$ is a characteristic quasi-particle energy relevant to the problem.", "In this case $\\check{\\Sigma }_{in}$ still can be dropped from the equation for the retarded Green's function, however $\\check{\\Sigma }_{el}$ is the largest term in Eq.", "(REF ), and can not be neglected.", "An effective approach to describe the quasi-particle dynamics in this limit was developed in Ref.[13].", "This method is based on the fact that the elastic part of the self-energy can be expressed in terms of the Green's functions, $\\check{\\Sigma }_{el}(\\epsilon , {\\bf r}, t) = \\frac{-1}{2\\pi \\tau _{el}} \\int d^3 {\\bf p} \\check{G}(\\epsilon , {\\bf r},t, {\\bf p}),$ and thus $\\check{\\Sigma }_{el}$ does not depend on ${\\bf p}$ .", "Let us integrate Eq.", "(REF ) over $\\xi _p = \\frac{{\\bf p}^2}{2m} - \\mu $ for a fixed momentum direction $\\mathbf {n} = \\mathbf {p}/p$ .", "On length scales larger then the Fermi wave length $p_F^{-1}$ , to leading order in spacial gradients we get $\\begin{split}\\frac{1}{2} \\partial _t \\big \\lbrace \\hat{\\tau }_3, \\check{g}(\\epsilon , {\\bf r},t,{\\bf n}) \\big \\rbrace - i\\epsilon \\big [ \\hat{\\tau }_3, \\check{g}(\\epsilon , {\\bf r},t,{\\bf n}) \\big ]+ v_F {\\bf n} \\cdot {\\bf \\nabla _r} \\check{g}(\\epsilon , {\\bf r}, {\\bf n}) \\\\+ \\big [\\hat{H}({\\bf r}, t, p_F {\\bf n}) , \\check{g}(\\epsilon ,{\\bf r}, t, {\\bf n}) \\big ]- \\frac{i}{2} \\big \\lbrace \\partial _t \\hat{H}({\\bf r}, t, p_F {\\bf n}) , \\partial _\\epsilon \\check{g}(\\epsilon ,{\\bf r}, t, {\\bf n}) \\big \\rbrace \\\\= -i \\big [\\check{\\Sigma }_{el}(\\epsilon , {\\bf r},t), \\check{g} (\\epsilon ,{\\bf r}, t, {\\bf n}) \\big ]-i \\big [\\check{\\Sigma }_{in}(\\epsilon , {\\bf r},t), \\check{g} (\\epsilon ,{\\bf r}, t, {\\bf n}) \\big ],\\end{split}$ where we have defined $\\check{g}(\\epsilon ,{\\bf r}, t, {\\bf n}) = \\frac{i}{\\pi } \\int d\\xi _p \\check{G}(\\epsilon , {\\bf r}, t, {\\bf p}).$ We have introduced the factor $i/\\pi $ to have the same notation as in Ref. 
[13].", "Taking into account the normalization condition(see for example [30]), $(\\check{g} \\otimes \\check{g}) ({\\bf r}, t_1,t_2, n) = \\delta (t_1 - t_2),$ we can parameterize the Keldysh component of $\\check{g}$ as, $\\hat{g}^K(t_1,t_2, n) = (\\hat{g}^R\\otimes \\hat{f}) (r, t_1,t_2, n) - (\\hat{f} \\otimes \\hat{g}^A) (r, t_1,t_2, n) .$ Since the matrix in the Nambu space $\\hat{f}(r,t_1,t_2, {\\bf n})$ has no off-diagonal component, we can expand it as $\\hat{f} ({\\bf r}, t_1,t_2, n) =f({\\bf r},t_1,t_2, n) + \\hat{\\tau }_3 f_{1} ({\\bf r},t_1,t_2, n).$ To obtain $\\hat{g}^K(\\epsilon ,{\\bf r},t,{\\bf n})$ , we must Fourier transform Eq.", "(REF ) with respect to the relative time difference.", "To zeroth order in time derivatives, we have $\\hat{g}^K(\\epsilon ,{\\bf r}, t, {\\bf n}) = \\hat{g}^R (\\epsilon ,{\\bf r}, t, {\\bf n}) \\hat{f}(\\epsilon , {\\bf r}, t,{\\bf n}) - \\hat{f}(\\epsilon ,{\\bf r}, t, {\\bf n}) \\hat{g}^A (\\epsilon ,{\\bf r}, t,{\\bf n}).$ We can write $\\hat{g}^K$ in the form, $\\hat{g}^K(\\epsilon ,{\\bf r},t, {\\bf n}) = 2 f(\\epsilon , {\\bf r},t, {\\bf n}) \\hat{\\delta }(\\epsilon ,{\\bf r},t, {\\bf n}) + 2 f_{1}(\\epsilon ,{\\bf r},t, {\\bf n})\\hat{\\alpha }(\\epsilon ,{\\bf r},t,{\\bf n}),$ where we have defined, $2 \\hat{\\alpha }(\\epsilon ,{\\bf r},{\\bf n}) = \\hat{g}^R(\\epsilon ,{\\bf r},{\\bf n}) \\hat{\\tau }_3 - \\hat{\\tau }_3 \\hat{g}^A(\\epsilon ,{\\bf r},{\\bf n}),$ $2 \\hat{\\delta }(\\epsilon ,{\\bf r},{\\bf n}) = \\hat{g}^R(\\epsilon ,{\\bf r},{\\bf n}) - \\hat{g}^A(\\epsilon ,{\\bf r},{\\bf n}).$ In the diffusive limit, where $\\tau _{el}^{-1}$ is much larger than the typical energy scales of the problem, Greens functions are almost isotropic, and we can expand them in the spherical harmonics.", "$\\check{g}(\\epsilon ,{\\bf r}, t, {\\bf n}) = \\check{g}_0(\\epsilon ,{\\bf r}, t) + \\check{\\bf g_1} (\\epsilon ,{\\bf r}, t) \\cdot {\\bf n},\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\check{g}_0(r, t_1, t_2) \\gg \\check{\\bf g}_1(r, t_1, t_2) \\cdot n.$ It follows from the normalization condition Eq.", "(REF ), that $\\check{g}_0(\\epsilon ,{\\bf r}, t) \\check{g}_0(\\epsilon ,{\\bf r}, t) = 1, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\ \\check{\\bf g_1} (\\epsilon ,{\\bf r}, t) \\check{g}_0(\\epsilon ,{\\bf r}, t) = -\\check{g}_0(\\epsilon ,{\\bf r}, t) \\check{\\bf g_1}(\\epsilon ,{\\bf r}, t).$ Substituting Eq.", "(REF ) into (REF ), using Eq.", "(REF ) and the fact that $ \\check{\\Sigma }_{el} = \\frac{-i}{2\\tau _{el}} \\check{g}_0$ , in the linear in spacial gradients approximation we get $\\check{\\bf g_1} (\\epsilon ,{\\bf r}, t) =&-\\frac{3D}{v_F} \\bigg (\\check{g}_0(\\epsilon ,{\\bf r}, t) {\\partial _{\\bf r}} \\check{g}_0(\\epsilon ,{\\bf r}, t)\\\\ \\nonumber &\\, + \\frac{e}{2}\\partial _t {\\bf A}({\\bf r}, t) \\check{g}_0(\\epsilon , {\\bf r}, t)\\big \\lbrace \\hat{\\tau }_3,\\partial _\\epsilon \\check{g}_0(\\epsilon , {\\bf r}, t) \\big \\rbrace \\bigg ) .$ Here $\\partial _{\\bf r} = {\\bf \\nabla _r} - ie {\\bf A}({\\bf r},t)[\\hat{\\tau }_3, \\cdot ]$ is the covariant derivative.", "Substituting Eqs.", "(REF ), (REF ) into (REF ), and averaging the result over direction of ${\\bf n}$ , we get an equation for the isotropic part of the Green's functions $\\check{g}_{0}$ , $\\begin{split}\\frac{1}{2} \\partial _t \\big \\lbrace \\hat{\\tau }_3, \\check{g}_0(\\epsilon , {\\bf r},t) \\big \\rbrace - i\\epsilon \\big [ \\hat{\\tau }_3, \\check{g}_0(\\epsilon , {\\bf r},t) \\big ]-D {\\partial _{\\bf r}} \\cdot 
\\bigg (\\check{g}_0(\\epsilon ,{\\bf r}, t) {\\partial _{\\bf r}} \\check{g}_0(\\epsilon ,{\\bf r}, t) \\bigg )\\\\+ \\frac{eD}{2}\\partial _t {\\bf A} ({\\bf r},t) \\partial _\\epsilon \\big \\lbrace \\hat{\\tau }_3 , \\check{g}_0 (\\epsilon , {\\bf r}, t) {\\partial _{\\bf r}} \\check{g}_0 (\\epsilon , {\\bf r}, t) \\big \\rbrace -i\\big [ \\hat{\\Delta }({\\bf r},t), \\check{g}_0 (\\epsilon , {\\bf r}, t) ]\\\\ - \\frac{1}{2} \\big \\lbrace \\partial _t \\hat{\\Delta }({\\bf r},t), \\partial _\\epsilon \\check{g}_0 (\\epsilon , {\\bf r}, t) \\big \\rbrace + \\partial _t \\phi ({\\bf r},t) \\partial _\\epsilon \\check{g}_0 (\\epsilon , {\\bf r}, t)\\\\= -i \\big [\\check{\\Sigma }_{in}(\\epsilon , {\\bf r},t), \\check{g}_0 (\\epsilon ,{\\bf r}, t) \\big ].\\end{split}$ In the adiabatic approximation, valid when the external perturbations vary slowly compared to $\\Delta ^{-1}$ , time derivatives can be dropped in the diagonal components of Eq.", "(REF ) and we get Usadel's equations [23] in matrix form $\\begin{split}i\\epsilon \\big [ \\hat{\\tau }_3, \\hat{g}^R_0(\\epsilon , {\\bf r},t) \\big ]+D {\\partial _{\\bf r} } \\cdot \\bigg (\\hat{g}^R_0(\\epsilon ,{\\bf r}, t) \\partial _{\\bf r} \\hat{g}^R_0(\\epsilon ,{\\bf r}, t) \\bigg )\\\\ + i \\big [ \\hat{\\Delta }({\\bf r}, t), \\hat{g}^R_0(\\epsilon , {\\bf r}, t) \\big ]= 0.\\end{split}$ To get the two equations $f$ and $f_1$ , we look at the Keldysh component of Eq.", "(REF ) and take the trace in Nambu space (multiplying by a factor of $\\hat{\\tau }_3$ before taking the trace to get the second equation).", "As a result we have, $\\begin{split}\\tilde{\\nu }(\\epsilon , {\\bf r},t) \\partial _t f(\\epsilon , {\\bf r},t) + \\frac{1}{4}e D \\tilde{\\nu }_N \\partial _\\epsilon f(\\epsilon , {\\bf r},t) \\bigg ( \\partial _t {\\bf A} ({\\bf r},t) \\cdot {\\bf j_\\epsilon }({\\bf r},t)\\\\- 2\\mathrm {Tr} \\big \\lbrace \\partial _t \\hat{\\Delta }({\\bf r},t) \\hat{\\delta }(\\epsilon , {\\bf r},t) \\big \\rbrace \\bigg )- \\frac{1}{4} D\\tilde{\\nu }_N {\\bf \\nabla _r} \\cdot \\big ( \\Pi _1(\\epsilon , {\\bf r},t) {\\bf \\nabla _r} f(\\epsilon , {\\bf r},t)\\big )\\\\- \\frac{1}{4} D\\tilde{\\nu }_N {\\bf \\nabla _r} \\cdot \\big ( f_1(\\epsilon , {\\bf r},t){\\bf j_\\epsilon }({\\bf r},t) \\big )= I_1 \\lbrace f \\rbrace ,\\end{split}$ $\\begin{split}\\partial _t \\big (f_1(\\epsilon , {\\bf r},t) \\tilde{\\nu }(\\epsilon , {\\bf r},t) \\big )- \\frac{1}{4} D \\tilde{\\nu }_N {\\bf \\nabla _r}\\cdot \\big ( \\Pi _2(\\epsilon , {\\bf r},t) {\\bf \\nabla _r} f_1(\\epsilon , {\\bf r},t) \\big ) \\\\- D\\tilde{\\nu }_N {\\bf \\nabla _r} \\cdot \\big ( f(\\epsilon , {\\bf r},t) j_\\epsilon ({\\bf r},t) \\big )- \\frac{i}{2} f_1(\\epsilon , {\\bf r},t) \\mathrm {Tr} \\big \\lbrace \\big (g^R(\\epsilon , {\\bf r},t) \\\\+ g^A(\\epsilon , {\\bf r},t) \\big )\\hat{\\Delta }(\\epsilon , {\\bf r},t) \\big \\rbrace + \\frac{1}{2} \\partial _\\epsilon f(\\epsilon , {\\bf r},t) \\mathrm {Tr} \\big \\lbrace e \\partial _t \\phi \\hat{\\alpha }(\\epsilon , {\\bf r},t)- \\\\\\partial _t \\hat{\\Delta }(\\epsilon , {\\bf r},t) \\big \\rbrace = I_2 \\lbrace f_1 \\rbrace .\\end{split}$ Here $\\mathrm {Tr} $ stands for a trace in the Nambu space, and we have defined, $\\Pi _1(\\epsilon , {\\bf r},t) = \\mathrm {Tr}\\lbrace 1- \\hat{g}^A_0 \\hat{g}^R_0 \\rbrace ,$ $\\Pi _2 (\\epsilon ,{\\bf r}, t) = \\mathrm {Tr}\\lbrace 1- \\hat{\\tau }_3 \\hat{g}^A_0 \\hat{\\tau }_3\\hat{g}^R_0 \\rbrace ,$ ${\\bf j_\\epsilon } = \\mathrm {Tr} \\big \\lbrace 
\\hat{\\tau }_3 \\big ( \\hat{g}^R_0 {\\partial _{\\bf r}} \\hat{g}^R_0 - \\hat{g}^A_0 {\\partial _{\\bf r}} \\hat{g}^A_0\\big )\\big \\rbrace .$ Since $\\tau _{in}^{-1}\\ll E_{T}$ the scattering integrals $I$ and $I_{1}$ have a standard form (see Refs.", "[13] and [29]) which can be obtained by substituting Eq.", "(REF ) into the corresponding expression for $\\Sigma _{in}$ .", "They vanish when $f(\\epsilon ) = \\tanh (\\epsilon /2T)$ and $f_1 = 0$ .", "The current density can be expressed in terms of the Keldysh Green's function, ${\\bf j} ({\\bf r, t}) = -\\frac{e \\tilde{\\nu }_N v_F}{4} \\int ^\\infty _{-\\infty } d \\epsilon \\int \\frac{d\\Omega _n}{4\\pi }\\mathrm {Tr}\\big \\lbrace \\hat{\\tau }_3 \\hat{g}^K(\\epsilon ,{\\bf r},t, {\\bf n})\\big \\rbrace {\\bf n},$ where $\\int \\frac{d\\Omega _n}{4\\pi }$ indicates an integration over the direction of the momentum.", "Substituting the Keldysh component of Eq.", "(REF ) into Eq.", "(REF ) we get an expression for the current density ${\\bf j} ={\\bf j_d} + {\\bf j_{nd}}$ , where ${\\bf j_d}$ and ${\\bf j_{nd}}$ are given by, ${\\bf j_d} ({\\bf r}, t) = \\frac{eD\\tilde{\\nu }_N }{4} \\int ^\\infty _{-\\infty } d \\epsilon {\\bf j_\\epsilon } ({\\bf r}, t) f(\\epsilon ,{\\bf r}, t),$ $\\begin{split}{\\bf j_{nd}}({\\bf r}, t) = \\frac{e \\tilde{\\nu }_N D }{8}\\partial _t {\\bf A}({\\bf r}, t) \\int ^\\infty _{-\\infty } d \\epsilon \\bigg (\\Pi _2(\\epsilon ,{\\bf r}, t) {\\bf \\nabla _r} f_1(\\epsilon ,{\\bf r}, t) + \\\\\\mathrm {Tr}\\bigg \\lbrace 2f_1 \\hat{\\tau }_3 (\\hat{g}^R_0 \\partial _\\epsilon \\hat{g}^R_0- \\hat{g}^A_0 \\partial _\\epsilon \\hat{g}^A_0 )+ \\partial _\\epsilon f(1 - \\hat{g}^R_0 \\hat{g}^A_0) \\\\+ \\partial _\\epsilon f_1(1 - \\hat{\\tau }_3 \\hat{g}^R_0 \\hat{\\tau }_3 \\hat{g}^A_0)+ f \\hat{\\tau }_3 (\\hat{g}^R_0 \\hat{\\tau }_3 \\partial _\\epsilon g^R_0 - g^A_0 \\hat{\\tau }_3 \\partial _\\epsilon g^A_0 )\\\\+ \\partial _\\epsilon f \\hat{\\tau }_3 (g^R_0 \\hat{\\tau }_3 g^R_0 - \\hat{g}^R_0 \\hat{\\tau }_3 \\hat{g}^A_0)+ \\partial _\\epsilon f_1 \\hat{\\tau }_3 (1 - g_0^R g_0^A )\\bigg \\rbrace \\bigg ).\\\\\\end{split}$ The current conservation equation, ${\\bf \\nabla _r} \\cdot {\\bf j_\\epsilon }({\\bf r}, t) = 0,$ can also be derived by multiplying Eq.", "(REF ) by $\\hat{\\tau }_3$ and taking the trace in Nambu space.", "In summary, we have derived a set of equations which describe the kinetics of superconductors in the diffusive regime.", "The density of states is determined by Usadel's equation (REF ), the distribution functions are determined by (REF ) and (REF ), and the expression for the current is given by (REF ) and (REF )." 
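As a small sanity check of the angular averaging performed above, the following sketch verifies numerically that $\\langle n_i n_j\\rangle =\\delta _{ij}/3$ for directions distributed uniformly over the Fermi surface; this is the origin of the factor $1/3$ in the diffusion coefficient $D=v_F^2\\tau _{el}/3$ quoted in the main text. The Monte-Carlo estimate is, of course, only illustrative.

```python
# Monte-Carlo check of the angular average used in the diffusion approximation:
# <n_i n_j> over uniformly distributed unit vectors equals delta_ij / 3.
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(200000, 3))
n = v / np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions on the sphere
print(np.einsum('ki,kj->ij', n, n) / len(n))       # approximately identity / 3
```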
], [ "Application of the general scheme to the case of SNS Junctions.", "We consider the case where the interaction constant, and consequently the value of the order parameter inside the normal region of the SNS junction is zero.", "In this case we can use the following parametrization for the Green's functions, $\\hat{g}^R_0 = \\begin{pmatrix}G_0^R & F^R_0 \\\\-F^{R +}_0 & -G_0^R\\end{pmatrix}.$ Taking into account that the order parameter inside the normal metal region is zero, we get from Eqs.", "(REF ),(REF ) Usadel's equations in the form ${\\bf \\nabla _{\\bf r} } \\cdot \\bigg (G^R_0 {\\bf \\nabla _{\\bf r}} G_0^R - F_0^R ({\\bf \\nabla _r} + 2ie{\\bf A}) F^{R+}_0 \\bigg ) = 0,$ $i\\epsilon F^R_0 + \\frac{D}{2} \\bigg ({\\bf \\nabla _r} - 2ie{\\bf A} \\bigg ) \\cdot \\bigg (G_0^R ({\\bf \\nabla _r}- 2ie{\\bf A}) F^R_0 - F^R_0 {\\bf \\nabla _r} G^R_0 \\bigg ) =0,$ $\\big ( G_0^R \\big ) ^2 + F^R_0 F^{R+}_0 = 1 .$ We note that at $T\\ll \\Delta $ , and if $L<\\sqrt{D\\tau _{in}}$ , the distribution function $f_{1}=0$ vanishes everywhere in the sample.", "At small voltages $eU \\ll E_T$ and in the case of closed boundaries (As it us shown in Fig.", "REF ) the distribution function $f(\\epsilon , t)$ is spatially uniform.", "It is convenient to choose a gauge where $\\chi =0$ , and $\\phi = 0$ , ${\\bf p_s}({\\bf r},t) = - e {\\bf A}({\\bf r},t), && {\\bf E}({\\bf r},t) = - \\partial _t {\\bf A}({\\bf r}, t),$ where ${\\bf p_S}$ is the super-fluid momentum and ${\\bf E}$ is the electric field.", "Then Eq.", "(REF ) simplifies to $\\tilde{\\nu }(\\epsilon , {\\bf r},t) \\partial _t f(\\epsilon , t) + \\frac{1}{4}e D \\tilde{\\nu }_N \\partial _\\epsilon f(\\epsilon , t) \\partial _t {\\bf A} ({\\bf r},t) \\cdot {\\bf j_\\epsilon }({\\bf r},t)= I_1 \\lbrace f \\rbrace .$ Integrating Eq.", "(REF ) over the volume of the normal region of the junction we get $\\begin{split}\\nu (\\epsilon ,t) \\partial _t f(\\epsilon , t) - \\frac{ eD \\tilde{\\nu }_N S }{4} \\partial _\\epsilon f(\\epsilon ,t)U \\big ({\\bf j_\\epsilon }(t) \\big )_x= I_1 \\lbrace f \\rbrace ,\\end{split}$ where $S$ is the cross sectional area of the junction.", "Note that $j_\\epsilon $ must be spatially uniform in this geometry due to the fact that ${\\bf j_\\epsilon }$ can only depend on the $x$ coordinate and also has a vanishing divergence.", "Next we use the diagonal components of Eq.", "(REF ) and write $j_\\epsilon $ in the following form ${\\bf j_\\epsilon }(t) &= \\frac{-v_F}{3D}\\mathrm {Tr} \\bigg \\lbrace \\hat{\\tau }_3 \\big ( \\hat{\\bf g}^R_1(\\epsilon ,{\\bf r} ,t) - \\hat{\\bf g}^A_1(\\epsilon ,{\\bf r} ,t) \\big ) \\bigg \\rbrace \\\\ \\nonumber &= \\frac{-2v_F}{D} \\mathrm {Tr} \\bigg \\lbrace \\hat{\\tau }_3 \\int \\frac{d\\Omega _n}{4\\pi } \\mathrm {Re} \\lbrace \\hat{g}^R (\\epsilon , {\\bf r},t, {\\bf n})\\rbrace {\\bf n} \\bigg \\rbrace \\\\ \\nonumber &=\\frac{2}{m\\pi D} \\mathrm {Im} \\bigg ( \\int d^3{\\bf p} \\mathrm {Tr} \\bigg \\lbrace \\hat{\\tau }_3\\hat{G}^R (\\epsilon , {\\bf r}, t, {\\bf p}) \\bigg \\rbrace {\\bf p} \\bigg ).$ Differentiating of both sides of Eq.", "(REF ) over $\\epsilon $ and integrating over the volume we get, $\\partial _\\epsilon {\\bf j_\\epsilon }(t) = \\frac{2}{\\pi DSL} \\partial _\\epsilon \\mathrm {Im} \\bigg (\\int d^3{\\bf r} \\int d^3{\\bf p} \\mathrm {Tr} \\bigg \\lbrace \\hat{G}^R (\\epsilon , {\\bf r}, t, {\\bf p}) \\frac{\\bf p}{m}\\hat{\\tau }_3\\bigg \\rbrace \\bigg ).$ Using the fact that $\\frac{d \\hat{H}}{d {\\bf A}} = -\\frac{ie\\bf p}{m} 
\\hat{\\tau }_3 + \\frac{2ie^2}{m} {\\bf A} $ , and that ${\\bf p} \\ll e{\\bf A}$ in the quasi-classical approximation, we can write Eq.", "(REF ) in the form, $\\partial _\\epsilon {\\bf j_\\epsilon } = \\frac{2i}{e\\pi DSL} \\partial _\\epsilon \\mathrm {Im} \\bigg (\\int d^3{\\bf r} \\int d^3{\\bf p} \\mathrm {Tr} \\bigg \\lbrace \\hat{G}^R (\\epsilon , {\\bf r}, t, {\\bf p}) \\frac{d \\hat{H}}{d {\\bf A}} \\bigg \\rbrace \\bigg ) .$ To proceed further, we need to derive the following identity relating derivatives of the Green's functions.", "$\\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr} \\bigg \\lbrace \\hat{\\tau }_3 \\frac{d\\hat{G}}{d \\lambda } \\bigg \\rbrace = i\\partial _\\epsilon \\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr}\\bigg \\lbrace \\hat{G} \\frac{d \\hat{H}}{d\\lambda }\\bigg \\rbrace .$ In order to derive this identity, first consider a Hamiltonian and corresponding Green's function with some parametric dependence on $\\lambda $ , $\\hat{\\mathcal {G}}(\\epsilon , \\lambda )= \\frac{1}{i\\epsilon \\hat{\\tau }_3 - \\hat{\\mathcal {H}}(\\epsilon ,\\lambda )}.$ Here $\\hat{\\mathcal {H}}$ is the Hamiltonian with a particular impurity potential, and $\\hat{\\mathcal {G}}$ is the exact Green's function of this Hamiltonian.", "Calculating the mixed derivatives of the spectral determinant by performing the derivatives $\\partial _\\epsilon $ and $\\partial _\\lambda $ in opposite orders, we have the following relations, $\\partial _\\lambda \\partial _\\epsilon \\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr} \\bigg ( \\ln \\big ( \\hat{\\mathcal {G}}^{-1} \\big ) \\bigg )= \\partial _\\epsilon \\partial _\\lambda \\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr} \\bigg ( \\ln \\big ( \\hat{\\mathcal {G}}^{-1} \\big ) \\bigg ),$ $\\partial _\\lambda \\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr} \\big ( \\hat{\\mathcal {G}}\\partial _\\epsilon \\hat{\\mathcal {G}}^{-1} \\big )= \\partial _\\epsilon \\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr} \\bigg ( \\hat{\\mathcal {G}} \\partial _\\lambda \\hat{\\mathcal {G}}^{-1}\\bigg ),$ $\\partial _\\lambda \\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr}\\big ( \\hat{\\tau }_3 \\hat{\\mathcal {G}}\\big )= i\\partial _\\epsilon \\int d^3 {\\bf r} \\int d^3 {\\bf p} \\mathrm {Tr} \\big ( \\hat{\\mathcal {G}} \\partial _\\lambda \\hat{\\mathcal {H}}\\big ).$ Next we average Eq.", "(REF ) over impurity configurations.", "In the case where $\\partial _\\lambda \\hat{H}$ is independent of the impurity potential, we have equation (REF ).", "Using the Eqs.", "(REF ) and (REF ), in the case of $\\lambda \\equiv {\\bf A} $ , we have, $\\partial _\\epsilon {\\bf j_\\epsilon }= \\frac{2}{e\\pi DSL} \\mathrm {Im} \\bigg (\\int d^3{\\bf r} \\int d^3{\\bf p} \\mathrm {Tr} \\bigg \\lbrace \\hat{\\tau }_3 \\frac{d \\hat{G}}{d {\\bf A}} \\bigg \\rbrace \\bigg )=\\frac{2}{ e D \\tilde{\\nu }_NS } \\frac{1}{L} \\frac{d \\nu }{d {\\bf A}}.$ Integrating Eq.", "(REF ) with respect to $\\epsilon $ and using the fact that ${\\bf j_\\epsilon }$ is a spatially independent vector which points in the x-direction, we have $\\big ( {\\bf j_\\epsilon } (t) \\big )_x =\\frac{- 4}{ S D\\tilde{\\nu }_N} \\int ^\\epsilon _0 d \\tilde{\\epsilon }\\partial _\\chi \\nu (\\epsilon , t) = \\frac{ 4}{ S D\\tilde{\\nu }_N} \\nu (\\epsilon ) V_\\nu (\\epsilon ),$ where $\\chi (t) =-2e\\int ^{L/2}_{-L/2} dx {\\bf A}_x (x,t)$ is the phase difference across the junction.", "Substituting Eq.", "(REF ) into (REF ) we reproduce Eq.", "(REF ) in the 
main text.", "$\\partial _t f(\\epsilon , t) + 2 eU \\partial _\\epsilon f (\\epsilon , t) V_\\nu (\\epsilon ) = I_1 \\lbrace f\\rbrace .$" ], [ "Expression for the current", "Let us consider the equation for the diagonal current ${\\bf j_d}$ at small voltages, when the distribution function $f$ is spatially uniform.", "$J_d =\\frac{eD\\tilde{\\nu }_N S }{4 L} \\int ^\\infty _{0} d \\epsilon f(\\epsilon ) \\int ^{L/2}_{-L/2} dx \\big ( {\\bf j_\\epsilon }(t) \\big )_x .$ Substituting Eq.", "(REF ) into Eq.", "(REF ) we get $J_d & = e \\int ^\\infty _{0} d\\epsilon \\nu (\\epsilon ) f(\\epsilon ) V_\\nu (\\epsilon ) & \\nonumber \\\\& \\equiv J_c(0) Y(\\chi ,0) - e \\int ^\\infty _0 d\\epsilon \\nu (\\epsilon ) V_\\nu (\\epsilon )(1 - f(\\epsilon )).&$ Using the relationship $n(\\epsilon ) = \\frac{1}{2}\\big (1- f(\\epsilon ) \\big )$ for $\\epsilon >0 $ , we see that Eq.", "(REF ) is equivalent to the expression for the diagonal current used in the main text (see Eq.", "(REF )).", "Let us now turn to the non-diagonal contribution to the current, $j_{nd}$ .", "To linear order in ${\\bf E}$ an estimate for $j_{nd}$ can be obtained by substituting the equilibrium distributions into Eq.", "(REF ), $\\begin{split}{\\bf j}_{nd} ({\\bf r}, t) = \\frac{e \\tilde{\\nu }_N D }{2} {\\bf E}({\\bf r},t) \\int ^\\infty _{0} d \\epsilon \\bigg [\\tanh (\\epsilon /2T) \\partial _\\epsilon \\bigg ( (G^R_0)^2 + F^R_0 F^{R+}_0 \\\\- (G^A_0)^2 - F^A_0 F^{A+}_0 \\bigg )+ \\frac{2}{T}\\frac{(G^R_0)^2 + |G^R_0|^2 }{\\cosh ^2(\\epsilon /2T) }\\bigg ].\\end{split}$ The dominant contribution to the integral comes from the region where $\\epsilon \\sim T$ .", "In the case when $T \\gg E_T$ , the Green's functions are equal to the normal metal Green's functions in the relevant energy intervals.", "In this case $\\begin{split}{\\bf j}_{nd} ({\\bf r}, t) \\sim G_N {\\bf E}({\\bf r},t).\\end{split}$ Thus we have shown that the non-diagonal current is of the same order as the dissipative current in the normal state." ] ]
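The adiabatic kinetics behind the diagonal current can be checked with a short numerical sketch. In a relaxation-time form of the collision integral, $I_1\\lbrace f\\rbrace \\approx -(f-f_{eq})/\\tau _{in}$, the occupation of a level at energy $\\epsilon (\\chi (t))$ lags behind its instantaneous equilibrium value by $\\delta n\\approx -\\tau _{in}\\, d n_F(\\epsilon (\\chi (t)))/dt$ when $2eU\\tau _{in}\\ll 1$, which is the step that leads to $J_d=G_d[\\chi ]U$. The relaxation-time form, the level $\\epsilon (\\chi )$, the temperature and the units ($e=1$) are illustrative assumptions; the numerical lag agrees with the adiabatic estimate up to corrections of order $2eU\\tau _{in}$.

```python
# Minimal sketch of the adiabatic kinetics: for a phase wound as chi(t) = 2eU t,
# a level occupation obeys dn/dt = -(n - n_F(eps(chi(t))))/tau_in (relaxation-time
# form of the collision integral).  At 2eU tau_in << 1 the lag behind equilibrium
# is approximately -tau_in * d n_F(eps(chi(t)))/dt.
# The level eps(chi), T and the units (e = 1) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

T, tau_in, U = 0.5, 1.0, 0.02                      # 2 e U tau_in = 0.04 << 1
eps = lambda chi: 1.0 + 0.5 * np.cos(chi)
n_F = lambda e: 1.0 / (np.exp(e / T) + 1.0)
chi = lambda t: 2.0 * U * t

sol = solve_ivp(lambda t, n: -(n - n_F(eps(chi(t)))) / tau_in,
                (0.0, 10.0 * np.pi / U), [n_F(eps(0.0))],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.5 * sol.t[-1], sol.t[-1], 2000)  # discard the initial transient
lag = sol.sol(t)[0] - n_F(eps(chi(t)))
pred = -tau_in * np.gradient(n_F(eps(chi(t))), t[1] - t[0])
print("relative deviation from the adiabatic lag:",
      f"{np.max(np.abs(lag - pred)) / np.max(np.abs(lag)):.3f}")
```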
2212.05625
[ [ "Learning Neural Volumetric Field for Point Cloud Geometry Compression" ], [ "Abstract Due to the diverse sparsity, high dimensionality, and large temporal variation of dynamic point clouds, it remains a challenge to design an efficient point cloud compression method.", "We propose to code the geometry of a given point cloud by learning a neural volumetric field.", "Instead of representing the entire point cloud using a single overfit network, we divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code.", "The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy.", "The neural field representation of the point cloud includes the network parameters and all the latent codes, which are generated by using back-propagation over the network parameters and its input.", "By considering the entropy of the network parameters and the latent codes as well as the distortion between the original and reconstructed cubes in the loss function, we derive a rate-distortion (R-D) optimal representation.", "Experimental results show that the proposed coding scheme achieves superior R-D performances compared to the octree-based G-PCC, especially when applied to multiple frames of a point cloud video.", "The code is available at https://github.com/huzi96/NVFPCC/." ], [ "Introduction", "We are in an era when the 3D visual applications are emerging.", "Modern 3D visual capturing devices are widely deployed on autonomous cars, mobile phones, drones, etc.", "In these applications, point clouds serve as the raw representations for processing and analysis of 3D scenes.", "Due to the computation limitation at the capturing devices, it is common practices to compress and transmit captured point clouds to remote processors and store them for future analysis.", "On the other hand, in AR/VR applications, 3D contents represented by point clouds need to be compressed and delivered from servers to end users or between end users.", "Both scenarios require efficient compression schemes for point clouds.", "Due to the higher dimensionality and the sparsity in nature, 3D visual data are generally more difficult to process compared to 2D visual data, i.e.", "images and videos.", "In terms of compression, the challenges coming with point clouds are mainly twofold.", "Transform Design.", "For images and videos, time-frequency transforms like DCT and DWT are shown to perform well for transform coding.", "Different from images, where the sampling of the signal is on a dense grid, point clouds are sparsely sampled in the 3D space.", "In terms of voxel grid occupancy, even a dense point cloud is sparse from the signal point of view.", "Therefore, existing time-frequency transforms cannot be directly applied to point clouds.", "To tackle this problem, machine learning based point cloud coding schemes are developed.", "One category of learned point cloud coder follows the octree coding structure.", "Given the already coded parent nodes [5] and siblings [9], [6] in the voxel grids, the probability distribution of the occupancy in the children nodes is predicted, and directly serves entropy coding.", "The other category adopts the binary voxel grid data structure, where an 3D auto-encoder is adopted to down-sample a point cloud into compact latent representations [13].", "Although these methods achieve impressive rate-distortion performances, they show limitations when adapting to dynamic point cloud 
coding.", "Motion Compensation For conventional 2D videos, prediction and motion compensation are shown to be the effective to exploit the temporal redundancy.", "Due to the memory and computational inefficiency, point clouds are usually not represented as voxel grids but rather octrees [10].", "Such octrees vary from frame to frame and it is difficult to find the correspondences between octree nodes for motion compensation.", "Therefore, coding of point cloud videos is far more challenging than 2D videos.", "The Moving Picture Experts Group (MPEG) has developed two approaches for point cloud coding [3], where the V-PCC adopts a video codec-based method to code point cloud videos.", "Such methods rely on projections from 3D points to 2D frames during encoding, and reprojection from 2D to 3D during decoding, introducing distortion and complexity.", "However, because it can leverage the significant progress in 2D video coding over the past 30 years, and effectively exploit the temporal redundancy through motion compensation, V-PCC has substantially better performance than G-PCC applied to individual frames.", "In [2], a learned inter-frame predictive point cloud coding framework is developed, and achieves improved R-D performance over V-PCC.", "However, explicit motion compensation is still required over voxel grids.", "Figure: Overall framework of the proposed method.In this paper, we present a unified approach suitable to compress both static and dynamic point clouds, without explicit motion compensation.", "We focus on the coding of the geometry only.", "Our work is inspired by a recent advance in 3D modeling, i.e.", "Neural Radiance Field (NeRF) [7].", "Our core idea is to represent a 3D point cloud with an implicit neural field.", "Specifically, we divide the space occupied by a point cloud into subregions based on a shallow octree.", "Each subregion is associated with a latent code.", "The occupancies inside a subregion are reconstructed by its latent code and a network that is shared among all subregions.", "Both the network parameters and the latent representations are rate-distortion optimized, quantized and entropy coded.", "The collection of the network parameters and all the latent codes forms the neural field.", "To decode, we execute the network with the latent codes and the network parameters to reconstruct the point cloud.", "This method can be easily extended to code a point cloud video, by learning a single shared network for all the subregions over a group of frames.", "The benefits of our approach are three folds: We take advantage of machine learning to design non-linear transforms suitable for point cloud signals.", "Such transforms are effective to reduce the redundancy.", "The same method works on both static and dynamics point clouds, yielding more compression gains on dynamic point clouds without explicit motion estimation.", "Our approach does not rely on any dataset to train, providing more flexibility for the highly diverse point cloud data.", "Experimental results show that the proposed method achieves superior rate-distortion performance compared with G-PCC[8], the MPEG octree based point cloud codec.", "We further show that our proposed method has the potential to achieve greater improvements when coding point cloud videos.", "Neural fields, i.e.", "neural networks overfitted on a given scene, are capable of constructing implicit volumetric functions to represent 3D environments [7].", "To utilize neural fields for point cloud compression, three problems need 
to be addressed: 1) We need to represent a point cloud by a volumetric function, so that a network can be trained to fit the function and reconstruct a 3D volume.", "2) The entropy of the neural field parameters should be controllable to reach different bit-rates.", "3) With the original NeRF structure, we would need to query every voxel to reconstruct the occupancy, although a large portion of the voxels are empty.", "Such a process is time-consuming and not suitable for decoding.", "Directly converting a point cloud to the volumetric form, i.e., a voxel grid, is also memory-inefficient.", "Besides, as we would need many parameters to fully characterize the entire voxel grid, training the network and controlling the bit-rate would be difficult.", "We address these issues by constraining the volumetric space represented by the neural field, and by using a convolutional network to generate groups of points at one time.", "We first build a shallow octree from the point cloud.", "Such a shallow octree takes only a few bits to represent.", "Each octree leaf node that is 1 corresponds to a non-empty subregion (hereafter called a cube) of the original point cloud space that contains points.", "We use a neural network along with an input latent code to represent the occupancy in each non-empty cube, so that the burden of characterizing the entire volume is lightened.", "A shared network that can reconstruct all cubes of a point cloud, along with their respective latent codes, is learned to represent all the cubes.", "Unlike the original NeRF work, where the input is a user-specified viewpoint and only the network is trained to generate the 2D projection from that viewpoint, we learn both the network and the latents associated with all the cubes.", "We call the collection of the network parameters and the latents the neural field.", "The framework of the proposed approach is illustrated in Fig. REF .", "Assume the original point cloud is voxelized and represented with an $(M+N)$ -level octree.", "We take the first $M$ levels as the shallow octree $T$ .", "It describes the point cloud $\mathbf {x}$ at a coarse resolution.", "Each non-empty leaf node of $T$ is associated with a subtree of level $N$ , corresponding to a $(2^N,2^N,2^N)$ binary cube.", "For example, if the original point cloud is described by a 10-level octree, we may choose $M$ =5 and $N$ =5, so that each cube has a shape of $(32, 32, 32)$ .", "The $k$ -th cube is associated with a trainable latent code ${z}_k$ .", "Given a point cloud, the encoder learns the network parameters $\mathbf {y}$ and the latent codes $\mathbf {z}=\lbrace z_k, \forall k\rbrace $ through backpropagation using a rate-distortion loss function.", "The quantized $\mathbf {y}$ and $\mathbf {z}$ are entropy coded, and the resulting bits, together with the bits describing the shallow tree $T$ , form the coded representation of the original point cloud.", "At the decoder, we reconstruct the voxelized representation of the point cloud by feeding the latent code ${z}_k$ through a shared neural network with parameters $\mathbf {y}$ to reconstruct each non-empty cube."
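To make the cube partition concrete, the sketch below shows one way the shallow-octree split could be implemented. It is an illustrative reconstruction rather than the authors' code, and the helper names (`partition_into_cubes`, `cube_to_binary_grid`) are hypothetical.

```python
import numpy as np

def partition_into_cubes(coords, M, N):
    """Split voxelized point coordinates (integers in [0, 2**(M+N))) into
    non-empty (2**N)^3 cubes addressed by the leaves of an M-level octree.

    Returns a dict mapping each non-empty leaf index (i, j, k) at depth M
    to the local voxel coordinates it contains.
    """
    coords = np.asarray(coords, dtype=np.int64)
    leaf_idx = coords >> N           # top M bits: which shallow-octree leaf
    local = coords & ((1 << N) - 1)  # bottom N bits: position inside the cube
    cubes = {}
    for key, xyz in zip(map(tuple, leaf_idx.tolist()), local):
        cubes.setdefault(key, []).append(xyz)
    return {k: np.stack(v) for k, v in cubes.items()}

def cube_to_binary_grid(local_coords, N):
    """Rasterize the local coordinates of one cube into a (2^N)^3 occupancy grid."""
    grid = np.zeros((2 ** N,) * 3, dtype=np.float32)
    grid[local_coords[:, 0], local_coords[:, 1], local_coords[:, 2]] = 1.0
    return grid

# Example: a 10-bit point cloud split with M = N = 5 gives 32x32x32 cubes,
# each of which would be paired with its own trainable latent code z_k.
pts = np.random.randint(0, 2 ** 10, size=(1000, 3))
cubes = partition_into_cubes(pts, M=5, N=5)
grids = {k: cube_to_binary_grid(v, N=5) for k, v in cubes.items()}
```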
], [ "Neural Field Network Structure", "In this section, we describe the neural network structure for generating a non-empty cube in more detail.", "Because we want to generate a size $(2^N,2^N,2^N)$ binary cube, and we let the latent code ${z}_k$ to have a spatial dimension of $(2^L,2^L,2^L), L<N$ with $J$ channels.", "The generator consists of several convolution layers, some with transposed convolutions as the upsampling method.", "We add a latent code generator in front of the cube generator consisting of a $1\\times 1$ convolutional layer and a 3D GDN [1], followed by a rounding function.", "The input to the latent code generator $v_k$ has the same dimension as $z_k$ .", "The purpose of the $1\\times 1$ convolutional layer and the 3D GDN layer is to decorrelate the elements in $z_k$ and furthermore make each element (before rounding) have a distribution similar to Gaussian [1].", "An example network with $N=5, L=1, J=4$ is shown in Fig.", "REF .", "Notice that this is a very light neural network, thanks to the small size of the cube." ], [ "Rate Constraint Loss", "We will optimize the latent codes and the network parameters by minimizing a rate-distortion loss function, $\\centering \\small \\begin{split}{L} = {L}_{R_z} + {L}_{R_y} + \\lambda {L}_D.\\end{split}$ Different $\\lambda $ is set to achieve different R-D tradeoff.", "We first discuss how to estimate the rate terms ${L}_{R_z} $ and ${L}_{R_y}$ in the following.", "By assuming each element $z_i$ in $\\mathbf {z}$ (before quantization) follows a Gaussian distribution with mean $\\mu _z$ and scale $\\sigma _z$ , we can calculate the lower bound of the bits needed to encode the quantized $\\mathbf {z}$ , as, $\\small {L}_{R_z} = \\sum _{i} -\\log (q(z_i)),$ where $q(z_i)$ is the estimated probability of $z_i$ , given by, $\\small q(z_i) = \\phi (\\frac{z_i-\\mu _z+\\Delta /2}{\\sigma _z}) - \\phi (\\frac{z_i-\\mu _z-\\Delta /2}{\\sigma _z}),$ where $\\Delta $ equals the quantization step, and $\\phi $ is the standard normal cumulative distribution function.", "We assume each element of the network parameter $\\mathbf {y}$ before quantization also has a Gaussian distribution, with parameters $\\mu _y, \\sigma _y$ , from which we can estimate the lower bound on the rate for encoding $\\mathbf {y}$ , denoted by ${L}_{R_y}$ .", "We use a quantization step size of $\\Delta =1/16$ for the network parameters and $\\Delta =1$ for the latent code.", "During training, to estimate the bit-rate for $\\mathbf {z}$ and $\\mathbf {y}$ and make the decoder quantization resilient, we add a uniform noise $\\mathcal {U}(-\\Delta /2, \\Delta /2)$ to each element in $\\mathbf {z}$ and $\\mathbf {y}$ rather then performing quantization.", "The encoding of a point cloud is de facto the training process of the network using backpropagation, but we update both network parameters $\\mathbf {y}$ and the input $v_k, \\forall k$ (and consequently the latent code ${ z}_k, \\forall k$ ).", "In addition, we also update the distribution parameters $\\mathbf {q}=\\lbrace \\mu _z, \\sigma _z, \\mu _y, \\sigma _y \\rbrace $ .", "Since $\\mathbf {q}$ only contains a few floating point values, after the rate-distortion training, we simply transmit $\\mathbf {q}$ in its original 32-bit floating point form to the decoder.", "Then the quantized latent code $\\mathbf {z}$ for all leaf nodes and the quantized network parameters $\\mathbf {y}$ are entropy coded based on $\\mathbf {q}$ .", "Note that the parameters in the latent code generator part of the neural network 
(see Fig. REF ) are trained together with the cube generator part, but they do not need to be encoded, as the decoder only needs to apply the latent code to the cube generator part to decode each cube.", "Therefore $\mathbf {y}$ only includes the parameters in the cube generator.", "Similarly, during training, we update $v_k$ instead of $z_k$ .", "We encode the final optimal $z_k$ corresponding to the optimized $v_k$ ." ], [ "Network Parameters Initialization", "Since we need to entropy code both the network parameters and the latent representation, we need to apply rate constraints on both during the training.", "In this circumstance, the commonly used random weight initialization introduces high entropy to the parameters at the beginning, making it hard to control the bit-rate throughout the training process.", "To address this problem, we propose to separate the initialization from the coded network parameters, as illustrated in Fig. REF .", "We use a Kaiming pseudo-random initialization [4] tensor $\mathbf {p}$ , with the same shape as $\mathbf {y}$ , to initialize the network parameters.", "The initialization $\mathbf {p}$ is fixed for all point clouds and shared with the decoder.", "The actual network parameters are represented by $\mathbf {w}=\mathbf {p}+\mathbf {y}$ .", "During the training, we only update $\mathbf {y}$ .", "Because $\frac{\partial {L}}{\partial \mathbf {w}} = \frac{\partial {L}}{\partial \mathbf {y}}$ , we can use the standard backpropagation gradient to update $\mathbf {y}$ .", "By initializing $\mathbf {y}$ with zeros, we can limit its entropy throughout the entire training process.", "For the latent code, we initialize all elements with zeros, which helps to minimize the rate of the optimized latent code.", "Figure: Separation of initialization and the coded network parameters."
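The Gaussian rate estimate and the uniform-noise substitute for quantization described in the Rate Constraint Loss section can be sketched as follows. This is a hedged PyTorch illustration of the stated formulas, not the authors' implementation; it reports the rate in bits (log base 2), which is one possible reading of the $-\log$ in the text, and the variable names are ours.

```python
import torch

def rate_bits(x, mu, sigma, delta):
    """Estimated bits needed to entropy-code x after quantization with step
    `delta`, assuming each element is Gaussian(mu, sigma): evaluates
    q(x_i) = Phi((x_i - mu + delta/2)/sigma) - Phi((x_i - mu - delta/2)/sigma)
    and sums -log2 q(x_i)."""
    normal = torch.distributions.Normal(mu, sigma)
    q = normal.cdf(x + delta / 2.0) - normal.cdf(x - delta / 2.0)
    return -torch.log2(q.clamp_min(1e-9)).sum()

def noisy_quantize(x, delta):
    """Training-time proxy for quantization: add U(-delta/2, delta/2) noise."""
    return x + (torch.rand_like(x) - 0.5) * delta

# Hypothetical usage inside the training loop: delta = 1 for the latents z and
# delta = 1/16 for the coded network parameters y, as stated in the text.
# mu/sigma would be trainable distribution parameters (part of q).
z = torch.randn(64, 4, 2, 2, 2)                 # latents for 64 cubes
mu_z = torch.tensor(0.0, requires_grad=True)
log_sigma_z = torch.tensor(0.0, requires_grad=True)
L_Rz = rate_bits(noisy_quantize(z, 1.0), mu_z, log_sigma_z.exp(), 1.0)
```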
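A minimal sketch of the initialization separation for a single 3D convolution layer is shown below, assuming the fixed tensor $\mathbf{p}$ is regenerated from a seed shared with the decoder. The class name and the manual Kaiming-style scaling are our own illustrative choices.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetConv3d(nn.Module):
    """3D convolution whose effective weight is w = p + y: p is a fixed
    pseudo-random (Kaiming-scaled) tensor that both encoder and decoder can
    regenerate from a shared seed, while y is zero-initialized, trained,
    quantized, and entropy coded."""
    def __init__(self, c_in, c_out, k, seed=0):
        super().__init__()
        fan_in = c_in * k ** 3
        gen = torch.Generator().manual_seed(seed)
        p = torch.randn(c_out, c_in, k, k, k, generator=gen) * math.sqrt(2.0 / fan_in)
        self.register_buffer("p", p)                # fixed, never coded
        self.y = nn.Parameter(torch.zeros_like(p))  # coded offset, starts with zero entropy
        self.bias = nn.Parameter(torch.zeros(c_out))
        self.pad = k // 2                           # assumes an odd kernel size

    def forward(self, x):
        # dL/dw equals dL/dy, so standard backpropagation updates y directly.
        return F.conv3d(x, self.p + self.y, self.bias, padding=self.pad)
```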
], [ "Distortion Loss Function", "The distortion loss function we use is based on the binary focal loss, formulated as, $\\centering \\begin{split}&{L}_{\\text{focal}} = - \\sum _{i} \\alpha _i (1-F_i)^\\gamma \\log F_i, \\\\& F_i = {\\left\\lbrace \\begin{array}{ll}q_i &\\text{ if } y_i=1 \\\\1-q_i &\\text{ if } y_i=0\\end{array}\\right.", "}, \\alpha _i = {\\left\\lbrace \\begin{array}{ll}\\alpha &\\text{ if } y_i=1 \\\\1-\\alpha &\\text{ if } y_i=0\\end{array}\\right.", "},\\end{split}$ where $y_i \\in \\lbrace 0,1\\rbrace $ is the ground truth occupancy of voxel $i$ , and $q_i$ is the predicted probability that voxel $i$ is occupied.", "Since in the voxel grid generated from a typical point cloud, the non-empty voxels only occupy a very small portion, the original BCE loss tends to make the trained network predict all zeros.", "The focal loss is used to balance the empty and non-empty voxels during training, which applies a higher weight to the voxels that are non-empty.", "In our experiments, $\\alpha $ is set to the portion of empty voxels in the binary grids.", "However, the focal loss is designed for classification problems, where every sample with the same ground truth label has the same weight.", "In voxel grid prediction, empty voxels far away from the surface, if misclassified as occupied, introduce greater geometric distortion.", "Hence, it is intuitive to apply a higher penalty to these far-away errors.", "Following this idea, in this work, we propose the distance-weighted focal loss function, formulated as, $\\centering \\small \\begin{split}& {L}_{\\text{d}} = - \\sum _{i} \\alpha _i(1-F_i)^\\gamma D_i \\log F_i, \\text{ } D_i = \\min _{\\mathbf {p}\\in S} ||\\mathbf {p}_i - \\mathbf {p}||_2,\\end{split}$ where $\\mathbf {p}_i \\in \\mathbb {R}^3$ is the 3D coordinate of voxel $i$ and the minimization is taken over the set $S$ of all voxels that are occupied in the ground truth point cloud.", "For positions farther away from any occupied voxel in the ground truth, the loss function applies higher penalties on false positive predictions.", "In our implementation, before training, we compute the distances from every possible voxel to its nearest point in the original point cloud in advance.", "Hence no extra computation of distances is needed during the training.", "Figure: R-D curves by the proposed method and G-PCC.Since we have multiple upsampling layers in the generator network, we can have a better supervision by calculating distortion loss at multiple spatial scales, i.e.", "$(2^{N},2^{N},2^{N})$ , $(2^{N-1},2^{N-1},2^{N-1})$ and $(2^{N-2},2^{N-2},2^{N-2})$ as shown in Fig.", "REF .", "The distortion term is the summation of the distortion loss at different scales, as, $\\centering \\begin{split}& {L}_D = {L}_{D_1} + {L}_{D_2} + {L}_{D_3},\\end{split}$ where ${L}_{D_2}$ and ${L}_{D_3}$ are focal losses calculated with the down-sampled ground truth voxel grids.", "For ${L}_{D_1}$ we use the distance weighted focal loss in Eq.", "(REF )." 
], [ "Coding Procedure", "Fig.", "REF b summarizes the encoding and decoding processes.", "As shown, two bit-streams are formed by the encoder to represent a point cloud, i.e., the shallow octree $T$ and the neural field $(\\mathbf {y}, \\mathbf {z})$ .", "Since the shallow subtree $T$ has only a small number of levels, in our experiments we code them by simply traversing the octree in BFS order and write out every occupancy bit, which account for a very small portion of the total bits.", "We assume that the parameters in $\\mathbf {y}$ and/or $\\mathbf {z}$ have Gaussian distribution, with Gaussian parameters described by $\\mathbf {q}$ .", "The entropy coding will be aided with $\\mathbf {q}$ .", "In Fig.", "REF b, we use $W$ to represent the bits for $\\mathbf {y}$ and $\\mathbf {z}$ .", "In general, $\\mathbf {q}$ is content-dependent and will also need to be coded.", "As described earlier, since $\\mathbf {q}$ only contains a few floating point values, we simply use the original 32-bit floating point form to represent each value.", "The bits for $T, q$ , $\\mathbf {y}$ , and $\\mathbf {z}$ form the bit-stream for $x$ .", "At the decoder, we first entropy decode $\\mathbf {y}$ and $\\mathbf {z}$ based on $\\mathbf {q}$ , and construct the network using $\\mathbf {y}$ .", "We then enumerate all the non-empty leaf nodes of $T$ and use the decoded latent for each leaf node as the input to the network.", "The network generates occupancy probabilities for all voxels in each cube.", "The probabilities are binarized to reconstruct the point cloud." ], [ "Experimental Settings", "We conduct experiments on the 8i Voxelized Full Bodies (8iVFB) dataset (Longdress, Loot, Redandblack, Soldier), which are adopted by MPEG Common Testing Condition [11].", "The point clouds are all of bit-depth 10.", "We choose $M = 5$ and $N=5$ .", "We use a spatial dimension of $(2,2,2)$ (i.e.", "$L=1$ ) for the latent code for each leaf node.", "We compare our method with G-PCC TMC13 [8], for which the octree coding scheme is used.", "Since we generate a probability for each cube element, we threshold the probabilities to produce the reconstructed point cloud.", "In the experiment, we choose the threshold that balance the two PSNRs in determining the D1 PSNR [12] for each point cloud.", "The threshold is signaled to the decoder with 32 bits.", "We evaluate our proposed method in terms of R-D performance.", "We measure the bit-rate as bit-per-point (bpp), where the number of bits is the summation of the number needed to code the first $M$ -level octree, the network parameters, the latent representations, and the distribution parameters.", "The distortion is measured with the point-to-point error PSNR (a.k.a.", "D1 PSNR)." 
], [ "R-D Performance", "The comparison in R-D performance of the proposed method and G-PCC on single frames of the point clouds is shown in Fig.", "REF .", "For the same point cloud, we reach different R-D points by training multiple networks with different $\\lambda $ , different number of channels (i.e.", "$J$ ) in the latent code and different widths of the generator network (i.e.", "the number of output channels for the intermediate layers).", "The network settings are fixed for the same $\\lambda $ among the entire testing dataset.", "As shown, on all four point clouds, our method achieve improved R-D performance than G-PCC.", "The proposed method can be directly applied to dynamic point clouds.", "Since frames in a dynamic point cloud sequence share common geometry patterns, we can share the network parameters among all leaf nodes in all frames from a dynamic point cloud sequence.", "The bits to encode the network parameters are therefore amortized over the larger number of points.", "We further conduct the dynamic point cloud compression experiments on the Longdress and Redandblack sequences, where we code 16 successive frames in each sequence, i.e.", "longdress_vox10_1300 to longdress_vox10_1315 in Longdress, redandblack_vox10_1450 to redandblack_vox10_1465 in Redandblack.", "Since G-PCC only supports all-intra coding mode, we can take the G-PCC R-D curve as a reference to the dynamic coding scenario.", "As shown in Fig.", "REF (a, b), dynamic point cloud can significantly benefits from our method, where the neural field learns the mutually shared patterns among leaf nodes over 16 frames." ], [ "Ablation Study on Architecture and Loss Function", "To investigate the contribution by different components of the proposed scheme, we compare different models obtained with different configurations in a ablation study.", "The detailed configurations of these models are shown in Table REF .", "If D-weighted Loss is not checked, the original focal loss is used.", "If Rate Loss is not checked, the bit-rate term is removed from the loss function.", "If Init.", "Separation is not checked, the actual network parameters are coded, rather than the difference from the Kaiming initialization.", "Table: Model variations in the ablation study.The results are shown in Fig.", "REF .", "Comparing M0 to Proposed, we observe that without the rate-distortion training, the bit-rates are much higher at the same distortion level, and the lower bit-rate range cannot be reached.", "The improvement of M1 over M2 demonstrates that initialization separation enables the network to be extended to low-rate range.", "Finally, comparing M1 and Proposed, we see that the distance-weighted focal loss helps improve the rate-distortion performance significantly and consistently over the entire range." 
], [ "Conclusion", "In this paper, we introduce a novel approach to point cloud compression.", "To overcome the challenges in designing transforms and motion-compensated prediction techniques for point clouds, we utilize an iterative optimization process as the way to find a good compressive representation for a point cloud.", "Specifically, we divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code.", "The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy.", "We train the network parameters and latent codes as the representation for all the non-empty cubes of a point cloud.", "To the best of our knowledge, this is the first work to utilize learned neural fields for point cloud compression.", "We develop a series of techniques to control the bit-rates and improve the reconstruction quality.", "Experimental results show that these techniques are effective, and our method achieves a better rate-distortion performance than G-PCC.", "Even though the R-D performance of the proposed approach for static point clouds are below some of the recently published works e.g.,[14], [13], it opens a new avenue for exploration, and future research in this direction may significantly improve the performance.", "More importantly, such an approach has great potential to effectively compress point cloud videos, because it can easily exploit the inter-frame redundancy through training a shared network for multiple frames." ], [ "Acknowledgement", "This work was supported in part by an unrestricted gift from FutureWei Technologies, Inc. to support fundamental research." ] ]
2212.05589
[ [ "Robust Relationship Between Mid-latitudes CAPE and Moist Static Energy\n in Present and Future Simulations" ], [ "Abstract Convective available potential energy (CAPE), a metric associated with severe weather, is expected to increase with warming.", "Under the most widely-accepted theory, developed for strongly convective regimes, mean CAPE should rise following the Clausius-Clapeyron (C-C) relationship at 6-7%/K.", "We show here that although the magnitude of CAPE change in high-resolution model output is only slightly underestimated with simple theories, it is insufficient to describe the distributional changes, which has a down-sloping structure and is crucial for impact assessment.", "A more appropriate framework for understanding CAPE changes uses the tight correlation between CAPE and moist static energy (MSE) surplus.", "Atmospheric profiles develop appreciable CAPE only when MSE surplus becomes positive; beyond this point, CAPE increases as $\\sim$25% of the rise in MSE surplus.", "Because this relationship is robust across climate states, changes in future CAPE distributions can be well-captured by a simple scaling of present-day data using only three parameters." ], [ "Introduction", "Convective Available Potential Energy (CAPE), loosely defined as the vertically integrated buoyancy of a near-surface air parcel, is a metric closely associated with the extreme convective weather events that can cause substantial socioeconomic damages e.g.,>johnssevere1992.", "CAPE is derived from the difference between the temperature profile of a parcel rising pseudo-adiabatically from the surface and that of the background environment [16], which determines the maximum possible updraft velocity during undiluted ascent.", "In meteorology, CAPE is used to predict thunderstorm events and in particular hail [10], [13], [12].", "Studies have also used the covariate of CAPE and wind shear to explain differences in thunderstorm frequency across locations [3], [2] or across climate states [29], [5].", "Studies of CAPE in observations have tended to focus on decadal-scale trends, often finding large increases.", "For example, gettelmanmultidecadal2002 found trends equivalent to $\\sim $ 50%/K in 15 tropical radiosonde stations.", "(See SI Section S1 for a wider review.)", "Model studies of CAPE under climate change have tended to produce smaller effects.", "Several recent studies that simulate the tropics using convection-permitting models (0.2–4 km resolution) without advection, i.e.", "approximating radiative-convective equilibrium, find CAPE increases of 8%/K [17], 8%/K [22], 12%/K [26], 7%/K [25], and 6–7%/K from theory [23].", "Analyses of coarser-resolution global models have found even smaller changes in the tropical W. 
Pacific, of $\\sim $ 4.5%/K [32] and $\\sim $ 5%/K [4].", "In the mid-latitudes, changes may be larger: chenchanges2019 show $\\sim $ 10%/K over a selected region of the continental United States.", "Theoretical frameworks to explain climatological CAPE fall into two groups.", "One approach assumes that environmental profiles are fully determined by surface temperature, and predicts the background environmental temperature profile by considering the effects of convective entrainment.", "singhinfluence2013 proposed a “zero-buoyancy model” based on the assumption that entrainment makes actual in-cloud buoyancy in an ascending convective plume small relative to CAPE, and singhincreases2015 evaluated its applicability in radiative-convective equilibrium simulations.", "zhouconceptual2019 extended the model to use an ensemble of plumes.", "The zero-buoyancy concept is intended to represent convective regions such as the tropics, where environmental temperature profiles are largely set by convection, with horizontal advection playing a negligible role.", "It would not be expected to explain variations in CAPE across space or on short timescales over mid-latitudes land.", "A second approach, which may be more generally applicable, treats surface and mid-tropospheric conditions as independent variables.", "Early efforts sought to characterize empirical relationships in CAPE as a function of near-surface temperature and moisture [31], [32].", "emanuelmoist1996 (henceforth EB96) considered the moist static energy $h$ instead and described the relationship as ${\\mathrm {C}APE} \\ = \\ A \\cdot (h_s - h_m)$ where $h_s$ and $h_m$ are moist static energy (MSE) at near-surface (boundary layer) and mid-troposphere, respectively.", "The dimensionless constant $A$ in EB96 reduces to $(1 - \\overline{T}/T_s)$ , analogous to a Carnot efficiency, where $T_s$ is the near-surface temperature and $\\overline{T}$ relates to the temperature of those levels emitting radiation to space.", "In this perspective, CAPE represents the maximum possible kinetic energy that could be generated given a heat transfer of $(h_s - h_m)$ .", "Recent work has further extended on EB96 and tested applicability to mid-latitudes CAPE.", "agardclausiusclapeyron2017 (henceforth AE17) and limidlatitude2021 use a similar functional form but slightly different formulations for the slope $A$ and for the `threshold' term.", "limidlatitude2021 confirm that their model broadly captures both the spatial pattern and diurnal variation of CAPE in renalysis data over the continental United States.", "These theories do not fully predict future CAPE, since they provide no guidance on future changes in the threshold term relative to $h_s$ , i.e.", "on changes in the shape of the environmental temperature profile.", "However, because all are grounded in simple mathematical definitions – for moderately convective conditions, a linear CAPE dependence on surface MSE is a necessary consequence in any dataset where mid-tropospheric conditions are decoupled from the surface – they should provide a useful framework for understanding model-projected changes.", "In this work we use a modified formulation with a different threshold term.", "Mathematically, CAPE is proportional to the vertically integrated difference between $h_s$ and the local “saturation MSE” $h^*_z$ , neglecting the virtual temperature effect and difference in $q^*$ between parcel and environment [7], [18].", "If we assume the shape of the environmental temperature profile does not vary 
strongly with $h_s$ , the definition of CAPE can be reduced to: $CAPE = A \\cdot (h_s - h^*_{m})$ where $h^*_{m}$ is the minimum value of mid-tropospheric saturation MSE, and we term the difference $h_s - h^*_{m}$ the `MSE surplus'.", "The value of A must be determined empirically, and because its value depends on the shape of environmental profiles, it does not necessarily remain constant between climate states.", "Despite the interest in understanding potential future CAPE increases, few studies have systematically evaluated these frameworks, especially in the continental mid-latitudes where severe thunderstorm impacts are greatest.", "In this work, we diagnose CAPE relationship to surface and mid-tropospheric conditions in both observation and high-resolution convection-permitting model simulations of continental North America, to determine what aspects of the relationship are robust under climate change.", "Our goal is to quantify projected CAPE changes in the mid-latitudes and to provide a simple framework that explains them.", "The convection-permitting model output used here is a paired set of present and future dynamically downscaled simulations over continental North America from the Weather Research and Forecasting model (WRF, version 3.4.1) run at 4 km resolution.", "Both runs are described in liucontinental-scale2017, and model output is available from NCAR Research Data Archive ds612.0 [19].", "The present-day simulation (CTRL) is forced by ERA-Interim reanalysis for initial and boundary conditions; the future simulation is a pseudo-global-warming (PGW) scenario that applies a spatially varying offset to ERA-Interim based on the CMIP5 multi-model mean projection under RCP8.5.", "In both runs, spectral nudging is applied to levels above the planetary boundary layer.", "Note that hot and dry biases over the Central U.S. lead to a small underestimation of CAPE in the high tail by 6–10% [15], [30], but this bias does not necessarily affect fractional future changes.", "In this work, we use the years 2001–2012 and equivalent future period.", "For `paired' comparisons we match each profile in CTRL with its equivalent in PGW.", "We calculate surfaced-based CAPE and subset to 80 grid points that match the International Global Radiosonde Archive (IGRA) weather stations as in wangreanalyses2021.", "See SI Section S2 for spatial distribution of stations and further model validation.", "Most analyses here use observations in summertime (MJJA) only, when convection is most active, following sunevaluation2016 and rasmussenchanges2017." ], [ "Methodology", "To maintain the focus on highly convective conditions, many comparisons here involve values for profiles above the 73rd quantile in CAPE, which corresponds to CAPE $>$ 1000 J/kg in CTRL (e.g.", "Figure 3 and Figure 4, left).", "When computing linear fits, we use orthogonal distance regression (ODR) because it is most appropriate in conditions where errors in both dependent and independent variables matter.", "When computing fractional changes between CTRL and PGW climate states, we define them as $\\ln $ (PGW/CTRL)/$\\Delta $ T. See SI Section S3 for details on subsetting and averaging, and schwarzwaldchanges2021 for discussion of ODR." 
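As a concrete illustration of the quantities used below (surface MSE, the mid-tropospheric minimum of saturation MSE, the MSE surplus, and the ln-ratio fractional change), here is a minimal sketch. The thermodynamic constants and Bolton's saturation vapor pressure formula are standard approximations, and the function names are ours, not code from the study.

```python
import numpy as np

CP, G, LV = 1004.0, 9.81, 2.5e6          # J kg^-1 K^-1, m s^-2, J kg^-1

def sat_specific_humidity(T, p):
    """Saturation specific humidity (kg/kg); T in K, p in Pa (Bolton formula)."""
    Tc = T - 273.15
    es = 611.2 * np.exp(17.67 * Tc / (Tc + 243.5))
    return 0.622 * es / (p - 0.378 * es)

def mse(T, z, q):
    """Moist static energy h = c_p T + g z + L_v q, in J/kg."""
    return CP * T + G * z + LV * q

def mse_surplus(T_s, z_s, q_s, T_prof, z_prof, p_prof):
    """h_s - h*_m, where h*_m is the minimum saturation MSE in the profile
    (the 'MSE surplus' of Eq. 2)."""
    h_s = mse(T_s, z_s, q_s)
    h_star = mse(T_prof, z_prof, sat_specific_humidity(T_prof, p_prof))
    return h_s - h_star.min()

def fractional_change_per_K(x_pgw, x_ctrl, dT):
    """Fractional change defined as ln(PGW/CTRL) / dT."""
    return np.log(x_pgw / x_ctrl) / dT
```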
], [ "Synthetic profiles", "We construct five synthetic CAPE distributions to help understand the minimal information needed to realistically reproduce future distributional changes.", "All are constructed based an assumed 3.92 K surface temperature increase, the mean change for profiles above the 73rd CAPE quantile.", "(Note that this change is smaller than the 4.65 K average for the entire dataset; see SI Section S3.)", "All cases but #4 take the CTRL profiles and CAPE values as the baseline.", "One case (#1) is a simple transformation of the CTRL CAPE distribution, and three (#2–4) require re-calculating CAPE for a set of synthetic atmospheric profiles.", "See SI Section S4 for further details.", "For Clausius-Clapeyron scaling, shown for illustrative purposes only, we simply multiply each CTRL CAPE value by 1.27 ($e^{0.061 \\cdot 3.92}$ ), where 6.1%/K is the C–C change for the mean temperature of high-CAPE profiles, 301.8 K. We omit several systematic changes that largely cancel: C–C would be changed by -0.4%/K by including the projected reduction in surface RH, by -0.1%/K by treating profiles separately, and by +0.6%/K by incorporating the rise in the Level of Neutral Buoyancy (LNB).", "For the constant offset case, we add 3.92 K to each CTRL profile at each level from surface to 200 hPa, near the level of neutral buoyancy in the mean CTRL profile.", "From 200 hPa we linearly interpolate to zero change at 75 hPa.", "We also adjust surface RH by -0.9%, the mean change above the 73rd CAPE percentile.", "For the lapse rate adjustment synthetic case, we modify the constant offset procedure to also include a change in lapse rate.", "That is, we linearly interpolate between the 3.92 K surface warming and a similarly derived 200 hPa warming of 4.94 K. We apply the same surface RH adjustment as in constant offset.", "For the SO13 case, we add 3.92 K to surface temperatures and calculate a climatological mean profile using the zero-buoyancy model of singhinfluence2013.", "We use an entrainment rate of 0.62 and column RH of 0.44.", "We construct profiles in both CTRL and PGW environments, so that the theory provides a self-consistent prediction of changes." 
], [ "Changes in CAPE distributions", "We begin our analysis by asking: in mid-latitudes model projections, how much and how does CAPE change with warming?", "Over the entire dataset, mean CAPE rises 61% between CTRL and PGW, from 684 to 1103 J/kg, yielding a 10%/K increase given the mean surface temperature rise of 4.65 K (assuming incremental changes).", "The mean change may not be the most relevant metric, however, since mid-latitude CAPE distributions are zero-inflated even in the convective summertime, and the strongest temperature changes occur in conditions where CAPE is small or zero.", "An alternate approach that emphasizes changes in higher-CAPE conditions is to take an orthogonal regression to the density distribution of paired profiles in present and future runs (Figure 1, left, solid line).", "This distribution shows a clear shift upwards, even though weather systems are not identical in the two runs and the scatter is therefore large.", "The regression slope gives a CAPE increase of 45% or 8.0%/K, slightly larger than Clausius Clapeyron (6.1%/K).", "By contrast, the constant offset synthetic overpredicts CAPE increases (11.7%/K) and the SO13 theory underpredicts them (5.8%/K); see SI Section S5.1.", "Figure: (Left) Comparison of CAPE in present (CTRL) and future (PGW) model runs as a density plot of paired profiles (see Methodology), using all pairs where both have nonzero CAPE.", "Dashed line is the one-to-one line; solid line is the orthogonal regression; and dots are quantiles of the distribution (large dots, Δ\\Delta = 1% increments from 0-0.99; small dots Δ\\Delta =0.1% above 0.99).", "(Right) Quantile ratio plot, constructed by taking the ratio of future CAPE quantiles over those of present climate from actual model output(black, dots as in L. panel), and three synthetic datasets: C-C scaling (light blue), constant offset (limegreen), and SO13 (purple).", "All data are used and zeroes are included.", "For internal consistency, SO13 changes are computed relative to its own CTRL distribution; see methods for details.", "Gray horizontal line marks the mean CAPE fractional change from the orthogonal distance regression line in left panel.", "Four vertical tick bars mark the percentiles matching 1000, 2000, 3000, and 4000 J/kg (73.2%, 86.5%, 95.1%, and 98.9%, respectively).", "We begin the x-axis at 40% to omit quantiles where CTRL CAPE is zero.", "Model future CAPE changes resemble a constant offset with a small lapse rate adjustment.The orthogonal regression implicitly assumes that the change in CAPE distributions is a simple multiplicative shift.", "To test this assumption, we also show in Figure 1 a quantile regression, which compares individual quantiles of CTRL and PGW distributions.", "The future CAPE distribution is in fact narrower than in the simple multiplicative case.", "Comparing to the orthogonal regression, the lower quantiles lie above the 45% line and the most extreme quantiles ($> \\sim $ 3000 J/kg) below it (left panel, dots).", "This narrowing effect is even more clear in a plot of the quantile ratio of future vs. 
present-day CAPE (right, black); it manifests as a downward slope.", "Both the constant offset (green) and SO13 (purple) cases also show similar narrowing, despite their different mean predicted changes.", "Distributional changes in model CAPE therefore resemble an offset with a small lapse rate adjustment that lowers CAPE.", "Because the SO13 theory was developed to represent the mean profile in highly convective conditions, we also test whether it can capture the present-future CAPE change of the averaged late-afternoon (00 UTC) profile in our simulations, but the underprediction remains substantial.", "(See SI Section S5.1.)", "Changes in mid-latitudes lapse rates require a new explanatory framework." ], [ "Changes in environmental profiles", "To quantify the effect of changing environmental lapse rates on future CAPE, we examine mean CAPE in surface temperature and humidity (T–H) space following wangreanalyses2021.", "Since surface T and H uniquely define the moist adiabat on which a parcel rises, a change in CAPE for a given T,H is due only to an altered environmental profile.", "This approach allows decomposing CAPE changes into two governing factors: $f_{\mathrm{samp}}$ is the fractional change that would result from only changed surface T,H sampling (Figure REF , top row) and $f_{\mathrm{env}}$ is that resulting from only changes in environmental profiles (Figure REF , bottom row).", "Both factors are defined for CTRL CAPE $>$ 1000 J/kg conditions.", "In these model runs, increased sampling of warmer surface conditions in PGW would more than double CAPE from its CTRL value ($f_{\mathrm{samp}} \sim $ 2.2) if lapse rates did not change.", "However, CAPE contours shift strongly in the PGW run, so that warmer or wetter surface conditions are required to achieve the same CAPE.", "If T,H sampling remained the same, CAPE would fall by a third due to environmental effects alone ($f_{\mathrm{env}} \sim 0.64$ ).", "The combined effect is $f_{\mathrm{samp}} \cdot f_{\mathrm{env}} = 1.40$ , close to the 1.45 derived from the orthogonal regression in Figure 1.", "(See SI Section S5.2 for details on the calculations.)", "Figure: (Top) Density heatmap of T–H bins sampled and (bottom) of mean CAPE in each T–H bin, in CTRL (left) and PGW (right) runs during summer (MJJA).", "Bins shown are all those with 3 or more observations.", "Solid and dashed lines mark RH of 100 and 50%.", "In the bottom row, dashed/dotted lines mark CAPE contours at 2000 and 4000 J/kg (with contours cut off at RH=100% to avoid artifacts).", "Both future distributions move up and to the right.", "The PGW run samples higher maximum temperatures (top), which in fixed environmental conditions would lead to higher CAPE by $f_{\mathrm{samp}}$ = 2.2, but CAPE contours also shift (bottom), reducing CAPE changes by $f_{\mathrm{env}}$ = 0.62.", "Note that CAPE contours resemble those of moist static energy (SI Section S5.2); their future shift means that higher MSE on average is required for a given CAPE value.", "The effects seen in Figure REF do not necessarily mean there is substantial excess warming at altitude.", "Most of the environmental damping of potential CAPE increases occurs even in the constant offset case of uniform warming, because present-day environmental profiles are correlated with surface temperature.", "Since upper tropospheric temperature is relatively homogeneous, an extreme local surface temperature necessarily implies a steep lapse rate.", "Under climatological warming, surface temperatures that were
previously extreme become associated with more normal lapse rates instead.", "For this reason even the constant offset case shows an $f_{\mathrm{env}}$ of 0.77, i.e., apparent potential CAPE increases are damped by 23% by this covariance effect alone.", "(The total derived CAPE change in the constant offset case is 1.71, close to its orthogonal regression slope of 1.72.)", "Excess warming at altitude is therefore required only to explain the residual difference between the effects in PGW ($f_{\mathrm{env}}$ = 0.64) and in constant offset ($f_{\mathrm{env}}$ = 0.77).", "Changes in temperature profiles between the present and future runs are in fact very subtle.", "If the entire dataset is averaged, warming is actually greater at the surface than at altitude ($\Delta T_s = 4.65$ K and $\Delta T_{200} = 4.05$ K), an effect that would tend to amplify CAPE.", "However, as discussed in Methods, when the data are subdivided to include only conditions that can produce substantial CAPE, lapse rate changes are weakly positive ($\Delta T_s = 3.92$ K and $\Delta T_{200} = 4.94$ K).", "That is, in conditions favorable for convection, future environmental changes should slightly dampen the CAPE increase expected from surface warming alone." ], [ "CAPE-MSE framework", "It is clear that CAPE in our dataset must exhibit a strong relationship with surface MSE, since the contours of CAPE in T–H space in Figure REF are closely aligned with those of MSE.", "(See SI Section S5.2; this effect was also shown by previous papers, e.g., donnerthree-dimensional1999.)", "The relationship is in fact reasonably linear in each climate state (Figure REF , left, which shows all CAPE values $>$ 1000 J/kg), but shifts as the climate warms.", "In both CTRL and PGW model runs, the x-intercept of the fitted regression matches the mean mid-tropospheric saturation MSE to $<0.3$ %: on average, CAPE does not develop unless surface MSE ($h_s$ ) exceeds saturation MSE ($h^*_m$ ) in the mid-troposphere.", "These results suggest that the more fundamental relationship is between CAPE and MSE surplus ($h_s-h^*_m$ ), as in Equation 2.", "When CAPE is plotted against MSE surplus (Figure REF , right), the residual variance does indeed become smaller (24% vs. 31% for CTRL and 8% vs. 26% for PGW) and the intercepts become almost zero (0.67 and 1.07 kJ/kg for CTRL and PGW, respectively).", "Figure: Relationships between CAPE in N.
America summertime and MSE (left) and MSE surplus (right), for CTRL (blue, dotted) and PGW (red, solid) runs.", "Here we use all cases where CAPE is larger than 1000 J/kg.", "Lines are fitted orthogonal regressions.", "MSE surplus is calculated as $h_s - h^*_m$ , where $h^*_m$ is the minimum saturation MSE in each profile.", "Color shading increments are 1.5% for the left panel and 0.75% for the right panel.", "Medians in CAPE bins are used for the orthogonal regression to remove the role of uneven sampling across low- to high-CAPE conditions.", "Slopes of CAPE-MSE (left) are 0.249 and 0.239 for CTRL and PGW, respectively, and of CAPE-MSE surplus (right) are 0.271 and 0.270.", "The relationship between CAPE and MSE surplus is in fact sufficiently fundamental that it holds across climate states.", "Fitted slopes are nearly identical in the CTRL and PGW runs, at 0.27 (Figure REF , right).", "In this perspective, the effects of climate change reduce to only a greater sampling of conditions with high MSE surplus.", "Furthermore, the relationship between CAPE and MSE surplus is robust across other temporal and spatial comparisons as well.", "Fitted slopes and variance explained remain similar when the dataset is divided by latitude (northern vs. southern stations), by daytime vs. nighttime profiles, by anomalously warm vs. cold years, or even by winter vs. summer (SI Section S5.3).", "Using an alternative fitting method (all samples above 1000 J/kg CAPE instead of binned median values) produces smaller slopes (0.17 and 0.16 for CTRL and PGW), but they remain consistent across all comparisons.", "The fact that WRF output and observations are well described by Eq. 2 – $\mathrm{CAPE} = A \cdot (h_s - h^*_{m})$ – follows naturally if the mid-troposphere is reasonably decoupled from the surface.", "If variation in $h^*_{m}$ is uncorrelated with that in $h_s$ , a linear relationship between CAPE and MSE surplus is a straightforward mathematical consequence.", "As a partial test of this condition, we plot saturation MSE profiles for data subset by a variety of CAPE thresholds (SI Section S5.4).", "In all conditions with any appreciable CAPE ($>$ 100 J/kg), the minimum of saturation MSE in the mid-troposphere remains nearly constant across subsets, suggesting that mid-tropospheric temperature and $h^*_{m}$ are not strongly coupled to surface conditions in these mid-latitudes simulations."
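The fit described here (medians within CAPE bins followed by an orthogonal distance regression of CAPE on MSE surplus) can be outlined with scipy as below. The binning scheme, function name, and starting values are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy import odr

def fit_cape_vs_surplus(surplus, cape, cape_min=1000.0, n_bins=30):
    """ODR fit of CAPE (J/kg) against MSE surplus (J/kg) using medians within
    CAPE bins, so that uneven sampling across low- to high-CAPE conditions
    does not dominate the fit.  With both variables in J/kg the fitted slope
    is dimensionless (~0.27 in the text).  Returns (slope, intercept)."""
    keep = np.asarray(cape) > cape_min
    x, y = np.asarray(surplus)[keep], np.asarray(cape)[keep]
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_bins + 1))
    xm, ym = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (y >= lo) & (y < hi)
        if sel.sum() >= 3:
            xm.append(np.median(x[sel]))
            ym.append(np.median(y[sel]))
    data = odr.RealData(np.array(xm), np.array(ym))
    linear = odr.Model(lambda beta, xx: beta[0] * xx + beta[1])
    out = odr.ODR(data, linear, beta0=[0.25, 0.0]).run()
    return out.beta[0], out.beta[1]
```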
], [ "A simple lapse rate adjustment framework", "While theories of future CAPE based only on surface conditions do not work well in the mid-latitudes, we consider whether adding a single parameter to describe mid-tropospheric effects can yield accurate predictions of future CAPE distributions.", "As described in Section 2.3, we construct a transformation of present-day atmospheric profiles based on only 3 parameters: mean changes in surface temperature and humidity, and a separate value for warming at 200 hPa ($\\Delta T_s$ , $\\Delta $ RH, $\\Delta T_{200}$ ).", "To evaluate how well this lapse rate adjustment captures CAPE changes in actual model output, we show also results for a two-parameter transformation – the constant offset shift with RH adjustment, which uses only mean surface $\\Delta T_s$ and $\\Delta $ RH – and for reference, a simple C-C scaling applied to each individual profile.", "See Section 2.3 and SI Section S5.5 for details.", "Figure: Comparison of present and future CAPE in model output (black) and synthetics, with those built from existing theories (C–C scaling, light blue; and from this work in the bottom row (constant offset, dark orange; lapse rate adjustment, green).", "(Left): Fitted regression lines of the future CAPE-MSE relationship as in Figure .", "Model CTRL is shown for reference (dashed black).", "See SI Section S5.5 for more details, including table of slopes and x-intercepts.", "(Right) Future changes in CAPE as quantile ratio plots, with dots marking quantiles at 1% increments.", "As in Figure 1, four x-axis ticks mark 1000–4000 J/kg, and PGW/CTRL CAPE values are on the numerator/denominator.", "All the synthetic future (scatters) fractional changes are referenced to CTRL.", "CAPE-MSE instead of the CAPE-MSE surplus framework is shown because the latter requires further assumptions about how the mid-tropospheric MSE would change.", "The lapse rate adjustment synthetics best reproduce future CAPE.The three-parameter lapse rate adjustment transformation does indeed capture the characteristics of future CAPE changes (in high-CAPE conditions).", "In the CAPE-MSE perspective (Figure REF , left), it realistically captures the future relationship, both in its slope and x-intercept.", "In the quantile ratio perspective (Figure REF , right), it reproduces both the downsloping structure and the magnitude of fractional change in the high CAPE quantiles.", "On a T–H diagram, lapse rate adjustment reproduces the future CAPE contours well while other transformations produce clear discrepancies (SI Section S5.5).", "Note that in the highest CAPE conditions, future changes in model output and in lapse rate adjustment begin to approach Clausius-Clapeyron, but remain above it.", "Changes in the 99th quantile are 6.9%/K in WRF and 7.1%/K in lapse rate adjustment, while the C–C line in Figure REF is shown as a constant 6.1%/K, and would be similar even if treated more realistically.", "(See Methodology, and SI Section 5.5 for more extensive comparisons.)", "While mid-latitudes CAPE is too complex to be treated with simple scalings, a relatively straightforward 3-parameter transformation appears to reproduce its full distributional change in a future warmer climate.", "Increases in severe weather events, which are associated with high CAPE, are a substantial societal concern under global warming.", "We show here that the projected increase in mean mid-latitudes CAPE in high-resolution model output is substantially higher than in theories developed under assumptions 
appropriate for the tropics, which are close to Clausius-Clapeyron (C–C).", "The discrepancy is smaller for the most extreme conditions, but even in the highest quantiles in this analysis, model CAPE changes are over 20% above C–C.", "This difference translates to large changes in the projected occurrence of CAPE exceeding a given threshold.", "For example, incidences of summertime CAPE $>$ 2000 J/kg, a commonly-used threshold for severe weather, rise twice as much in model projections as in Clausius–Clapeyron scaling: from 13% in CTRL to over 24% in the future PGW projection, vs. to only 19% under C–C scaling.", "The midlatitudes apparently require a different framework for understanding CAPE changes than the convective tropics.", "Both the influence of advection and the strong surface diurnal variation means that mid-tropospheric values cannot be predicted from surface conditions.", "Furthermore, the wide range of surface conditions in the mid-latitudes continental U.S. mean that lapse rate effects vary spatially across the domain, with upper tropospheric warming strongest in the subtropics and lapse rates changes actually negative north of 33N (SI Section S7).", "Nevertheless, we find that future CAPE distributional changes can be well-captured by a simple synthetic transformation based only on three changes averaged over the entire domain ($\\Delta T_s$ , $\\Delta $ RH$_s$ , and either $\\Delta T_{200}$ or $\\Delta T_{650}$ ).", "These three parameters can be folded into a single metric of “MSE surplus”, the difference between surface MSE and mid-tropospheric saturation MSE.", "In the model output described here, CAPE does exhibit a strong dependence on MSE surplus, as expected: in each climate state the relationship is a straightforward mathematical consequence.", "We show here that the relationship is robust even across climate states (empirical slopes of 0.27 and 0.26 in Figure REF ) implying that atmospheric structure does not change dramatically.", "These results can be compared to prior theories based on analogies to heat engines.", "The slope $A$ can be thought of as the maximum conversion rate of MSE surplus to mechanical work.", "Similarly, theories such as EB96 treated CAPE as the maximum work possible given a flow of energy between hot and cold reservoirs, and therefore predicted a Carnot-like slope of $(1 - \\overline{T}/T_s)$ .", "This theoretical value can be derived by constructing a mean atmospheric profile (using all incidences of CAPE $>$ 1000 J/kg); our dataset yield theoretical slopes of 0.18 for both radiosonde observations and CTRL model output, similar to the 0.14 in emanuelmoist1996.", "This value is lower than the empirical slopes of Figure 3 (right), but is nearly identical to slopes derived without fitting the binned median values: 0.18 for observations and 0.17 for CTRL.", "It appears that the heat engine framework does capture some physical constraint on CAPE, though MSE surplus $(h_s - h^*_m)$ is the more fundamental regressor.", "Note that CAPE represents only the potential production of kinetic energy, not the true conversion rate, which is affected by factors that reduced efficiency below Carnot e.g.>rompsdry2008.", "Understanding how CAPE responds to CO$_2$ -induced warming is a key scientific question with significant societal consequences.", "This work suggests that in the mid-latitudes, the decoupling of surface and mid-troposphere means that changes in CAPE can be larger than predicted by theories developed for the convective tropics.", "We find 
that a simple 3-parameter transformation captures not only future mean increases in midlatitudes CAPE but their full distributional shifts.", "It does remain an outstanding question how the present-day mapping of CAPE to convective updraft velocities and extreme convective events may alter under climate change.", "However, the strong and consistent dependence of CAPE on MSE surplus provides a simple but robust framework for predicting and understanding changes in CAPE distributions.", "Acknowledgements The authors thank Dan Chavas, Tiffany Shaw, Funing Li, Zhihong Tan, and Osamu Miyawaki for constructive comments, and the National Center for Atmospheric Research (NCAR) for providing the WRF dataset.", "This work is supported by the Center for Robust Decision-making on Climate and Energy Policy (RDCEP), funded by the NSF Decision Making Under Uncertainty program, Award SES-1463644, and was completed in part with resources provided by the University of Chicago Research Computing Center.", "Data Availability Statement The 4-km WRF Convection-permitting model output is downloaded from NCAR RDA https://rda.ucar.edu/datasets/ds612.0/ (http://doi.org/10.5065/D6V40SXP).", "The IGRA radiosonde data is downloaded from https://www.ncei.noaa.gov/products/weather-balloon/integrated-global-radiosonde-archive (http://doi.org/10.7289/V5X63K0Q)." ] ]
2212.05548
[ [ "Technical Debt Management in OSS Projects: An Empirical Study on GitHub" ], [ "Abstract Technical debt (TD) refers to delayed tasks and immature artifacts that may bring short-term benefits but incur extra costs of change during maintenance and evolution in the long term.", "TD has been extensively studied in the past decade, and numerous open source software (OSS) projects were used to explore specific aspects of TD and validate various approaches for TD management (TDM).", "However, there still lacks a comprehensive understanding on the practice of TDM in OSS development, which penetrates the OSS community's perception of the TD concept and how TD is managed in OSS development.", "To this end, we conducted an empirical study on the whole GitHub to explore the adoption and execution of TDM based on issues in OSS projects.", "We collected 35,278 issues labeled as TD (TD issues) distributed over 3,598 repositories in total from the issue tracking system of GitHub between 2009 and 2020.", "The findings are that: (1) the OSS community is embracing the TD concept; (2) the analysis of TD instances shows that TD may affect both internal and external quality of software systems; (3) only one TD issue was identified in 31.1% of the repositories and all TD issues were identified by only one developer in 69.0% of the repositories; (4) TDM was ignored in 27.3% of the repositories after TD issues were identified; and (5) among the repositories with TD labels, 32.9% have abandoned TDM while only 8.2% adopt TDM as a consistent practice.", "These findings provide valuable insights for practitioners in TDM and promising research directions for further investigation." ], [ "Introduction", "Technical debt (TD) refers to delayed tasks and immature artifacts that constitute a “debt” since they may bring short-term benefits but incur extra costs of change during maintenance and evolution in the long term [11], [2].", "TD can be classified into several types from the perspective of different stages of the software development lifecycle, such as architecture TD and design TD [36], [25].", "Due to its significant impact on software quality, TD has been a hot topic attracting great attention from both academia and industry in the last decade [25], [1], [38], [21], [8], [35], [16], [39] , and many open source software (OSS) projects were used to explore various aspects of TD [37], [28], [41], [33], [42].", "However, there still lacks a comprehensive understanding on TD management (TDM) in OSS development, which penetrates the OSS community's perception of the TD concept and how TD is managed in OSS development.", "On GitHub, a proportion of issues that are explicitly tagged as TD, which are called as TD issues in this paper.", "Every TD issue is intentionally tagged with a TD label by a practitioner, which means that the practitioner actively performs TD identification, a core activity of TDM [25].", "Since TD issues are identified by the practitioners themselves and no biases of researchers are introduced, TD issues can directly reflect the practitioner's perception and management strategy of TD.", "TD issues are essentially a type of self-admitted technical debt (SATD) [12], [40].", "Therefore, TD issues can be used as an important data source to comprehensively investigate TDM in OSS development.", "The goal of this work is to explore TDM in OSS development based on TD issues.", "Specifically, we aim to investigate the trend of the adoption of TDM, the practitioners' perception on the TD concept, and how 
practitioners manage TD in OSS development.", "To this end, we conducted a large-scale empirical study on the TD issues of all OSS projects hosted on GitHub.", "The main contributions of this work are summarized as follows: (1) To our knowledge, this work is the first study that took all repositories hosted on GitHub as the search space to understand the state of TDM in practice in a large-scale OSS ecosystem.", "(2) We performed a large-scale empirical study on TDM based on 35,278 issues with TD labels (i.e., TD issues) distributed over 3,598 repositories.", "(3) We explored TD issues from the perspectives of the people involved and the management process.", "(4) We found that practitioners' awareness of TDM has been increasing over the past decade, given the relatively high growth rates of new TD issues and of new repositories adopting TDM.", "(5) We found that only 298 (8.2%) repositories adopt TDM as a consistent practice, while 1,187 (32.9%) repositories have abandoned TDM.", "(6) We found that a power-law characteristic exists in the distribution of the proportion of repositories with TD labels over the number of TD issues and in the distribution of the proportion of resolved TD issues over their open time.", "The remainder of this paper is organized as follows.", "Section discusses the related work; Section describes the study design; Section presents the study results; Section interprets the study results with the implications; Section highlights the threats to the validity of the results; and Section concludes this work with future research directions.", "Many studies have investigated TD in OSS from diverse perspectives.", "Digkas et al. looked into the evolution of TD in 66 Java projects from the Apache ecosystem [14].", "They studied three aspects, i.e., the evolution of normalized TD, the most common types of TD, and the most costly TD.", "Tan et al. also conducted a case study on the Apache ecosystem, investigating the evolution of TD remediation in Python [33].", "Li et al. performed an empirical study on 59 Apache OSS projects to investigate the characteristics of the interest of defect TD [28].", "Tsoukalas et al. carried out an empirical study on more than 100 open-source repositories from multiple OSS platforms for TD forecasting [37].", "Lenarduzzi et al. performed a case study of 33 Apache projects to explore the spread of TD and how quickly it can be fixed [23].", "Digkas et al. conducted a case study of 27 Apache projects to investigate the stability of the introduction of TD, and the correlation between the introduction of TD and the workload of the development team [13].", "The aforementioned studies took specific OSS projects as research objects to explore certain phenomena with respect to TD.", "In contrast, our work takes all repositories hosted on GitHub as the search space, and many more repositories with TD issues are included in our dataset."
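As one hedged illustration of how TD-labeled issues might be harvested at scale, the GitHub REST search API can be queried for issues that carry a given label. This sketch is ours, not the collection procedure actually used in the study (which is described in its study design section), and the list of label variants is hypothetical.

```python
import requests

API = "https://api.github.com/search/issues"

def search_td_issues(label, token=None, per_page=100, max_pages=10):
    """Query the GitHub search API for issues carrying a TD-related label.
    Note: the search API caps each query at 1000 results, so a full harvest
    would need to slice queries (e.g., by creation date) to go further."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    items = []
    for page in range(1, max_pages + 1):
        params = {"q": f'label:"{label}" is:issue',
                  "per_page": per_page, "page": page}
        resp = requests.get(API, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        items.extend(batch)
        if len(batch) < per_page:
            break
    return items

# Hypothetical label variants one might search for:
td_labels = ["technical debt", "tech debt", "technical-debt"]
```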
], [ "Technical Debt Management", "Literature reviews on technical debt management.", "A number of literature reviews have been performed to study TDM as a whole or some specific aspects of TDM.", "Li et al.", "conducted a systematic mapping study to examine the current state of TDM, including categories of TD, activities of TDM, approaches and tools for TDM, and challenges to TDM [25].", "Becker et al.", "performed a systematic literature review on trade-off decisions across time in TDM [6].", "Lenarduzzi et al.", "published a systematic literature review to understand the priority of refactoring TD over the lifespan of software, compared with developing features and fixing bugs [22].", "In the works mentioned above, the state of TDM was studied based on literature, while in our work we investigated the state of TDM based on practices in OSS repositories.", "Management of self-admitted technical debt.", "In recent years, intensive efforts have been invested to the research of a special kind of TD – self-admitted technical debt (SATD).", "Potdar and Shihab coined the concept of SATD, which is sub-optimal solutions intentionally introduced (e.g., temporary fixes) and explicitly documented using code comments [29], [32].", "Most studies on SATD investigated TD admitted in the comments of source code.", "Huang et al.", "proposed a model to predict whether a comment in software contains TD [18].", "Ren et al.", "presented a CNN-based approach to detect SATD in code comments [30].", "Yu et al.", "proposed Jitterbug to automatically identify SATD [42].", "Bavota et al.", "analyzed the diffusion, evolution, participants, and impacted quality of SATD based on commits and comments of projects [5], and their study facilitates the evaluation of the impact of TD in TDM.", "Codabux et al.", "explored the types and characteristics of TD in software systems written in R language by manually analyzing the comments in the peer-review documentation of R-language OSS packages [9].", "Zampetti et al.", "surveyed industrial developers and OSS developers to compare the practices on SATD in industry and OSS development [43].", "They found that both industrial and OSS developers share similar perceptions and actions regarding TD [43].", "Besides, industry developers were more reluctant to acknowledge TD than OSS developers.", "Unlike the aforementioned works that investigate SATD and its management based on code comments, our work studies TD and TDM based on issues.", "Technical debt management based on issues.", "There are also a few studies that used issues in issue tracking systems as a data source to investigate TD and TDM.", "Bellomo et al.", "manually examined 1,264 issues from four software projects and identified 109 issues as TD items, which were used as the dataset to explore characteristics of TD [7].", "Dai and Kruchten manually checked all issues of a commercial software project and identified 331 issues containing TD, and then used natural language processing and machine learning to automatically detect TD issues of the same project [12].", "Li et al.", "regarded manually-identified issues containing TD in issue tracking systems as SATD, and further investigated the identification and remediation of such TD issues [24].", "In contrast, Xavier et al.", "considered issues with TD labels (i.e., TD issues) as a kind of SATD, and looked into why TD issues are introduced and paid [40].", "Similar to Xavier et al.", "[40], in this work we only investigated issues that were tagged with TD labels (TD issues) 
in issue tracking systems." ], [ "Objective and Research Questions", "The objective of this study, described using the Goal-Question-Metric (GQM) approach [4], is: to analyze software issues tagged as TD for the purpose of exploration with respect to the current state of TDM in practice from the point of view of software practitioners in the context of OSS development.", "Based on the aforementioned goal, we formulated five research questions (RQs), which are described as follows: RQ1: How popular do software repositories adopt TDM?", "Rationale: With this RQ, we investigated the popularity of the adoption of TDM in OSS development, which can reflect the degree of acceptance of the TD concept and practitioners' awareness of TDM in the context OSS development.", "RQ2: What is the understanding of practitioners on the TD concept?", "Rationale: Practitioners understand the TD concept in different ways, which may influence the strategies and activities of TDM.", "The answer to this RQ is helpful to understand and improve TDM strategies.", "RQ3: How are TD issues identified?", "Rationale: TD identification is the first step of TDM.", "With this RQ, we investigated when and by whom TD issues are identified.", "The answer to this RQ enables us to evaluate practitioners' awareness of TDM and existing strategies of TDM.", "RQ4: How are TD issues resolved?", "Rationale: With this RQ, we investigated how TD issues are handled.", "In particular, the open time, characteristics, and proportion of reopens of resolved TD issues can provide us with multiple perspectives to understand the resolution process of TD issues.", "RQ5: What is the continuity of adopting TDM for the repositories with TD labels?", "Rationale: This RQ investigates whether TDM has been continuously employed for the repositories with TD labels, and to further explore the different characteristics between repositories with different levels of continuity of adopting TDM.", "First, to answer the five RQs formulated in Section REF , we needed to collect all TD issues on the whole GitHub.", "Such TD issues are included in dataset DS1, which is the main dataset of this work.", "Second, to answer RQ2, besides DS1, we also needed to collect additional data items regarding the quality attributes (QAs) affected by and specific TD types of TD issues.", "Since manual analysis of the affected QAs and TD types of TD issues requires considerable effort, we analyzed a representative subset of DS1, which is denoted as DS2.", "Besides, in order to get a deeper understanding of the features of TD issues, we needed to collect data on TD issues and non-TD issues of repositories that each includes a specific amount of TD issues.", "Such data constitute another dataset, which is denoted as DS3.", "The data items to be collected are listed in Table REF , which also provides the information on the dataset(s) containing each data item.", "Table REF shows the dataset(s) and data items used by each RQ.", "In this study, we collected TD issues that were reported between Jan. 1st, 2009 and Dec. 31st, 2020.", "Extra data items on the whole GitHub were also collected (Section REF ).", "The data collection processes for the three datasets are shown in Figure REF , in which the data collection process for the extra data is not included.", "Table: Data items to be collected for each issue.Table: The dataset(s) and data items used by each RQ.Figure: Procedure of data collection." 
], [ "Data collection for DS1", "The data collection procedure for dataset DS1 is composed of the following 5 steps: Step1: Collect TD labels.", "In GitHub, developers can use a TD label (e.g., tech-debt) to explicitly tag an issue as TD.", "To collect a complete list of candidate TD labels, we first came up with a set of equivalents (e.g., case conversion and word abbreviation) to \"technical debt\".", "Then, we used \"debt\" as a keyword to search on the issue tracking system of GitHub, and manually checked the returned issues to collect more candidate TD labels.", "We could not find more new TD labels after we read around the top 300 returned issues and stopped the reading process after we read the top 500 returned issues.", "Finally, we added labels of TD types according to the classification of TD in [25].", "The final set of candidate TD labels to search TD issues are provided in Table REF , where the case of the TD labels is ignored.", "Step 2: Download requested data.", "In this step, we downloaded TD issues and related data (e.g., labels) according to the candidate TD labels collected, using a dedicated tool that we developed.", "Step 3: Clean auto-generated data.", "ImDone, an automatic issues generating tool, can automatically generate issues according to code comments [34].", "When the text of the comment contains \"TODO\", imDone automatically generates an issue and tags it as “debt”.", "Such auto-generated issues with label “debt” are invalid: the ImDone team forked many GitHub repositories to test ImDone, producing a large number of issues with label “debt” in a short time, which damaged the true and real state and life cycle of some issues.", "In addition, since an issue was automatically tagged with a TD label by imDone, it cannot guarantee that the code comment author considered the “TODO” as TD.", "Hence, such auto-generated issues with label “debt” by imDone should be filtered out to ensure the authenticity and accuracy of the datasets.", "Step 4: Clean duplicated TD issues.", "Considering that some issues were submitted multiple times on GitHub, duplicated issues should be cleaned to ensure the uniqueness of data.", "Duplicated issues are defined as follows: if the Repo, Reporter, RepTime, IssueTitle, IssueDesc of issues are exactly the same, these issues are considered duplicated.", "Only one issue will retain in the dataset.", "Step 5: Clean issues with TD labels without the meaning of TD.", "Some issues tagged with TD labels do not have practical meaning related to TD, but have repository-specific meaning.", "For example, label \"TD\" does not refer to technical debt in repository w3c/wot.", "Therefore, such issues should be filtered out.", "Table: TD labels to be chosen." 
], [ "Data collection for DS2", "In order to explore the understanding of practitioners on the concept of TD, we built dataset DS2 in the following two steps: Step1: Sample TD issues.", "According to Israel's theory of determining sample size [20], we randomly sampled the resolved TD issues in dataset DS1, setting the margin of error as 5% and the confidence level as 99%.", "The reason for choosing resolved TD issues is that the status of such TD issues is stable.", "Step2: Analyze the sample manually.", "In this step, we manually tagged the TD type of and the QA affected by each TD issue in DS2 to answer RQ2.", "The definitions of TD types can be found in [25], and the QAs are adopted from the product QAs defined in the ISO 25010 standard [19].", "In the tagging process, if an issue involves multiple TD types or QAs, we chose the major one.", "The tagging process is described as follows: At first, two researchers made a pilot tagging on 50 TD issues to reach a consensus on the understanding of the TD types and QAs.", "All disagreements were recorded, and a third researcher joined in to discuss until the three researchers reached a consensus.", "After this step, the researchers reached an agreement on the scope and boundary of TD types and QAs.", "Later, the two researchers independently tagged a round of 50 TD issues and the Cohen's Kappa coefficient indicating the agreement between the two researchers was calculated [10].", "If the coefficient was less than 0.8, the issues with inconsistent tagging results were revisited among the three researchers until agreement was reached.", "The next round of tagging was performed until the Cohen's Kappa coefficient is greater than 0.8.", "After that, the remaining issues were divided into two parts and each was tagged by one of the two researchers independently." ], [ "Data collection for DS3", "In order to understand the characteristics of TD issues, we additionally added a dataset DS3, containing resolved TD issues and resolved non-TD issues.", "The data collection process for DS3 consists of two steps: Step1: Select TD issues from specific repositories.", "Specifically, this step is to select resolved TD issues of repositories with not less than 100 resolved TD issues from DS1.", "Step2: Select non-TD issues from specific repositories.", "To be specific, this step is to download and store all the resolved non-TD issues of the repositories mentioned in Step1 from GitHub.", "The reason for selecting such repositories is that we believed that the participants of such repositories are more likely to have mature knowledge about TD, and the status of a resolved issue is more stable than an unresolved one." ], [ "Extra data", "In order to understand the overall issues on GitHub and compare them with TD issues, we collected the number of newly reported issues on GitHub each year, the number of newly reported resolved issues on GitHub each year, the number of repositories newly adopting TDM on GitHub each year, the number of issues of each repository, the number of resolved issues of each repository.", "To understand the maintenance status of each repository, the creating time of the last pull request for each repository was collected as well.", "Since the extra data can be retrieved from GitHub by simply using the GitHub APIs, the collection process of extra data is not shown in Figure REF ." 
], [ "Popularity of adopting TDM (RQ1)", "To answer RQ1, we analyze dataset DS1 from three aspects: (1) trend of the number of repositories newly adopting TDM, (2) trend of the number of newly reported TD issues, and (3) distribution of ratio of repositories over the number of TD issues.", "The time when the first TD issue is reported for a repository is considered as the time of the repository adopting TDM.", "A repository containing one or more TD issues is considered as it adopting TDM.", "Every TD issue is intentionally tagged with a TD label by a practitioner, which means that the TD identification activity of TDM is actually performed by the practitioner of the repository.", "Thus, we consider that the practitioner actively adopts TDM for the repository even it contains only one single TD issue.", "Besides descriptive statistics, the compound annual growth rate (CAGR) was used to measure the growth trend of the number of repositories newly adopting TDM and newly reported TD issues in whole DS1.", "CAGR is defined as: $CAGR=((n_2/n_1)^{1/(y_2-y_1)}-1)\\times {100\\%}$ where $n_1$ and $n_2$ are the numbers in the years $y_1$ and $y_2$ , respectively." ], [ "Understanding on the TD concept (RQ2)", "QAs affected by TD are a central concern in TD research [25], and issue labels tagged to TD issues can reflect the characteristics of TD issues to some extent.", "Thus, we investigate the understanding of practitioners on the TD concept from two perspectives: (1) QAs affected by TD, and (2) co-occurring issue labels with TD labels.", "QAs affected by TD.", "As described in Section REF , we manually analyzed a sample dataset (i.e., DS2) of all TD issues to explore their TD types and the QAs affected.", "We then analyzed how each QA is affected by TD.", "From the QA perspective, we analyzed the TD types affecting each QA, and the extent of the impact.", "Co-occurring labels with TD labels.", "To understand characteristics of TD issues in terms of co-occurring labels, descriptive statistics were used to analyze the number of occurrences of TD labels and the number of labels accompanying TD labels.", "After obtaining all the issue labels, we further screened all the issue labels to pick out labels with clear meaning and general purposes.", "The screening process includes three steps: a) keep clear and general issues (i.e., not specific to a certain project/repository and not describing the status of the issue), b) filter out labels that occur for less than 100 times, and c) combine labels with the same meaning." 
], [ "Identification of TD issues (RQ3)", "On GitHub, we can trace the timeline of the labeled events (i.e., data item LabeledEvent) pertaining to each issue to further analyze the characteristics of TD issues.", "A LabeledEvent contains information, such as the person who tagged a label, the label name, the time of tagging the label to the issue.", "In the following, RQ3 is analyzed from three perspectives.", "(1) When TD issues are identified.", "We considered the time when a TD label is assigned to the issue as the time when TD is identified for an issue.", "We counted the difference between the reported time and the identification time for each TD issue.", "Further, TD identification time can happen at two possible points of time: a) the time when an issue is reported and b) a specific time after an issue is opened to resolve.", "(2) Who identifies TD issues.", "To understand the participants' awareness of TDM, we analyzed the participants from several viewpoints.", "First, we examined the overall distribution and characteristics of participants.", "Second, we focused on issue reporters who are a key type of participants to identify TD issues.", "GitHub defines a set of relationships between issue reporter and repository, including: COLLABORATOR, CONTRIBUTOR, FIRST_TIMER, FIRST_TIME_CONTRIBUTOR, MANNEQUIN, MEMBER, OWNER, and NONE.", "Please refer to Table REF for their detailed definitions.", "We calculated the distribution of the reporters who identified TD issues.", "(3) Phase of adopting TDM in the development lifecycle.", "For each repository, let $Period_{TD}$ denote the period between the time when the first TD issue is identified and the time when the first issue is reported, let $Period_{Full}$ denote the period between the time when the last pull request is handled and the time when the first issue is reported, and $Phase_{TD}$ is defined as $Phase_{TD}=Period_{TD}/Period_{Full} .$ Then, $Phase_{TD} \\ge 0$ , and a smaller $Phase_{TD}$ means earlier awareness of TDM for the repository.", "Table: Relationships between issue reporter and repository." 
], [ "Resolution of TD issues (RQ4)", "To answer RQ4, we made analysis in four aspects: (1) Proportion of resolved TD issues.", "For each repository in dataset DS1, we calculated the proportion of resolved TD issues over the total TD issues based on data item IsResolved (i.e., D11).", "We then calculated the distribution of the number of repositories over the proportion of resolved TD issues.", "In addition, we also calculated the proportion of TD issues resolved and the proportion of resolved issues on the whole GitHub from 2009 to 2020.", "(2) Open time of resolved TD issues.", "At first, we calculated the open time of each resolved TD issue as the difference between ResTime and RepTime of the issue.", "In the subsequent data analysis, we calculated the distribution of the proportion of resolved TD issues to all resolved TD issues in DS1 over the open time, and observed the characteristics of resolved TD issues in terms of open time; we also compared the differences in the distribution of resolved TD issues and resolved non-TD issues in DS3.", "(3) Characteristics of resolved TD issues.", "We explored the TD issue resolution process based on the characteristics of resolved issues.", "Only resolved issues were used as the data source because their status is much more stable than unresolved issues.", "On GitHub, each issue allows participants to discuss the issue by posting comments on it, in addition to the title and descriptive text provided by the reporter.", "When a participant posts a comment on the issue, a comment event (i.e., data item CmntEvent) for the issue is generated.", "Then, we investigated whether the resolved TD issue and the resolved non-TD issue differ significantly in terms of the issue content and the corresponding comment events, including four characteristics: the size of the title and description (IssueSize), the number of all CmntEvents (CmntNo), the size of the content of all CmntEvents (CmntSize), the number of the participants who added all CmntEvents (ParticipantNo).", "We ran Mann-Whitney U Tests [17] on the four issue characteristics of resolved TD issues and resolved non-TD issues in DS3, and corrected the p-value of Mann-Whitney U Tests by the Bonferroni correction in account for the multiple comparisons problem [15].", "(4) Reopen of resolved TD issues.", "On GitHub, if an issue is resolved, the status of the issue is updated to \"closed\".", "However, an issue can be changed to \"opened\" again, which means that the issue may need further discussion and fixes.", "We considered that the probability for a resolved issue to reopen reflects the quality of the previous fix of the issue.", "Thus, we calculated the probabilities of resolved TD issues and non-TD issues based on the reopened events (i.e., data item ReopenedEvent) of all issues in DS3." 
], [ "Continuity of TDM for the repositories with TD labels (RQ5)", "To better understand the continuity of TDM of repositories with TD labels, we defined three levels of TDM continuity (namely Abandoned, Unclear, and Consistent) for such repositories.", "First, for convenient description, we define several notations.", "$Period_{PT}$ denotes the period between the time when the last pull request is handled and the time when the last TD issue is reported.", "$Period_{TT}$ denotes the period between the time when the last TD issue is reported and the time when the first TD issue is reported.", "$Period_{Y}$ is a constant, and defined as: suppose that the sequence of time differences between the tagging time of each pair of adjacent TD issues of each repository is known, and $Period_{Y}$ is the 99th percentile of this sequence of time differences.", "$Count_{R}$ denotes the number of TD issues of the repository.", "$Count_{Z}$ is a constant, which denotes the 75th percentile of the number of TD issues of the repositories in DS1.", "Then, the conditions for the three TDM continuity levels for a repository are defined as follows.", "1) Abandoned: the repository has abandoned TDM, i.e., $Period_{PT} > Period_{Y}$ .", "2) Unclear: the repository does not explicitly show TDM is adopted continuously, i.e., ($Period_{PT}\\le Period_{Y}$ ) $\\wedge $ (($Period_{TT} \\le Period_{Y}$ )$\\vee $ ($Count_{R} < Count_{Z}$ )) is true.", "3) Consistent: the repository keeps adopting TDM, i.e., ($Period_{PT}\\le {Period_{Y}}$ )$\\wedge $ ($Period_{TT}> Period_{Y}$ )$\\wedge $ ($Count_{R}\\ge {Count_{Z}}$ ) is true.", "Subsequently, we analyzed the levels of all repositories in DS1, and further analyzed the differences between repositories at level Abandoned and level Consistent.", "We focused on the differences between the two levels of repositories in terms of total issues and TD issues.", "Seven characteristics for each repository were extracted: the number of total issues (RepoIssueNo), the proportion of total issues resolved (RepoResolvedIssuePro), the number of TD issue reporters (RepoTDRepoterNo), the number of TD issues (RepoTDIssueNo), the proportion of TD issues resolved (RepoResolvedTDIssuePro), the average open time of TD issues (RepoOpentimeAvg), and the average length of the comments of TD issues (RepoCmntSizeAvg).", "Furthermore, we used Mann-Whitney U Tests [17] on the characteristics to compare if there is a significant difference between the repositories of level Abandoned and those of level Consistent.", "In addition, given the problem of multiple comparisons, we performed the Bonferroni correction to correct the p-value of Mann-Whitney U tests [15]." 
], [ "Overview of the Obtained Datasets", "Following the data collection procedure described in Section REF , we obtained DS1, DS2, and DS3.", "48,915 issues were retrieved by the searches with candidate TD labels, and finally DS1 includes 35,278 TD issues collected from 3,598 repositories after data cleaning steps.", "Noticeably, among the 35,278 TD issues, there are 914 issues whose LabeledEvents cannot be retrieved from GitHub, but their TD labels can be retrieved.", "Considering the relatively large number of such issues, we kept them in the final dataset (DS1) of TD issues.", "This may influence the results related to the LabeledEvents of TD issues.", "Specifically, when analyzing TD issues related to the LabeledEvents, we used the left 34,364 TD issues excluding these 914 TD issues.", "DS2 contains 652 issues randomly extracted from DS1 according to the sampling requirements stated in Section REF .", "DS3 contains 5,202 resolved TD issues and 205,927 resolved non-TD issues of the 24 repositories with at least 100 resolved TD issues.", "The list of these repositories is available online [27].", "All the three datasets (i.e., DS1, DS2, and DS3) are also available online [26]." ], [ "RQ1: Popularity of adopting TDM", "We studied the popularity of adopting TDM in OSS development practice in the following three aspects." ], [ "Trend of the number of repositories newly adopting TDM", "Figure REF shows that the number of repositories newly adopting TDM had been lastingly increasing over the years from 2 in 2009 to 880 in 2020, and the CAGR is 73.9%.", "In the same period, the number of newly created repositories had also been continuously growing from 80,318 in 2009 to 23,807,596 in 2020, and its CAGR is 67.8%, which is little lower than the CAGR of the number of repositories newly adopting TDM.", "Figure: Distribution of the number of repositories newly adopting the TDM practice over years (RQ1)." ], [ "Trend of the number of newly reported TD issues", "Figure REF shows that the number of newly reported TD issues had been continuously increasing over the years from 3 in 2009 to 9,840 in 2020, and the CAGR is 108.7%.", "Meanwhile, the number of newly reported issues had been continuously growing from 92,552 in 2009 to 58,335,696 in 2020, and its CGAR is 79.7%, which is much lower than the CGAR of the number of newly reported TD issues.", "In addition, the CGAR of the proportion of newly reported TD issues over the total newly reported issues was also calculated, and the CGAR is 16.2%.", "However, the proportion of newly reported non-TD issues over the total newly reported issues decreases with a CGAR close to 0.0% (i.e., -0.00124%) during the 12 years.", "Figure: Distribution of the number of newly reported TD issues over years (RQ1)." 
], [ "Distribution of the ratio of repositories over the number of TD issues", "Among the total 3,598 repositories with TD labels, there are 1,084 (30.1%), 563 (15.6%), and 2,937 (81.6%) repositories that each has only one, two, and less than ten TD issues, respectively.", "Figure REF depicts the distribution of the ratio of repositories over the number of TD issues.", "This distribution obeys a power law [3], i.e., $y=bx^{-a}$ , where $y$ denotes the ratio of repositories, $x$ denotes the number of TD issues, $a$ and $b$ are constants.", "The fitted results are that $a=1.275$ and $b=0.170$ , i.e., $y=0.170x^{-1.275}$ (see Figure REF ).", "Figure: Distribution of ratio of repositories with TD labels over the number of TD issues (RQ1).Summary (RQ1): The number of repositories adopting TDM and the number of newly reported TD issues had been continuously and rapidly increasing, and their CAGRs are 73.9% and 108.7%, respectively.", "The distribution of the ratio of repositories over the number of TD issues obeys a power law." ], [ "QAs affected by TD", "During our manual analysis, we found that the existing widely-used TD classification proposed in [25] cannot fully cover all the TD issues.", "Therefore, except for the ten TD types proposed in [25], we added “Deployment TD” as a new TD type.", "Deployment TD refers to flaws that slow down software deployment or make software deployment unnecessarily complicated.", "For example, from issue #151https://github.com/konveyor/pelorus/issues/151 of project konveyor/pelorus, we can see that some warnings have affected the deployment of Kubernetes, and thus this issue contains deployment TD.", "Table REF shows the results of our manual analysis.", "For each QA, we calculated the number and percentage of which TD types it was affected by.", "In addition, for each column, we fill the cells in green according to the value of the number, in order to show more clearly the relationship between affected QAs and TD types.", "The darker the color of a cell, the larger the number in the cell.", "As shown in this table, the top three QAs most affected by TD issues are maintainability, reliability, and functional suitability, and the top three TD types most likely contained in TD issues are design TD, architectural TD, and test TD.", "In the following, we present the results on how each QA is affected by TD.", "(1) Maintainability is mainly affected by design TD, architectural TD, and test TD, accounting for 30.9%, 27.3%, and 23.8%, respectively.", "(2) Reliability is mainly affected by five types of TD, of which defect TD has the greatest impact on this QA, accounting for 34.0%.", "(3) For functional suitability, design TD is the most influential TD type, followed by architectural TD.", "(4) For usability, design TD, architectural TD, and requirements TD are the top three TD types that account for more than 10.0%.", "Among the three types of TD, design TD with 58.8% ranks in the first place.", "(5) Performance efficiency is most affected by design TD and architectural TD, accounting for 51.5% and 33.3%, respectively.", "(6) Portability is mainly affected by architectural TD, accounting for more than a half.", "Portability requires a developer to consider the relationship between the software system and the computing platforms during design.", "Thus, architectural TD has a great impact on portability.", "(7) Security is mainly affected by design TD, architectural TD, and test TD, accounting for 23.1%, 46.2%, and 15.4%, respectively.", "(8) Compatibility is mainly 
affected by infrastructure TD with 36.4%.", "Table: How each QA is affected by TD (RQ2)." ], [ "Co-occurring labels with TD labels", "Table REF lists the TD labels used in the repositories on GitHub and the number of their occurrences.", "Fifteen TD labels are used, technical debt was used most frequently by 11,787 issues, and doc debt was used most scarcely by only 12 issues.", "There are 3,974 co-occurring labels and most of them occurred just for a few times.", "After filtering, the final co-occurring labels and their numbers of the occurrences are shown in Table REF .", "As we can see, the most frequent co-occurring label is enhancement/improvement, followed by bug.", "Co-occurring labels may indicate specific characteristics of TD issues.", "For instance, some co-occurring labels are associated with the purposes (e.g., enhancement) and expectations (e.g., nicetohave) of the TD issues.", "Some co-occurring labels (e.g., testing) indicate the TD types contained by the issues, while some others (e.g., security) indicate the QAs affected by TD.", "Table: TD labels and their occurrences (RQ2).Table: Labels co-occurring with TD labels (RQ2).Summary (RQ2): Maintainability, reliability, and functional suitability are the top three QAs affected most by TD issues.", "Labels enhancement/improvement and bug are more likely associated with TD labels." ], [ "RQ3: Identification of TD issues", "We looked into the identification of TD issues from the following three perspectives: when TD issues are identified, who identifies TD issues, and phase of adopting TDM during the development lifecycle.", "As mentioned in Section REF , when we needed to analyze the labeled events of the TD issues in DS1, we excluded the 914 TD issues whose labeled events of TD labels were missing.", "Therefore, we analyzed the left 34,364 TD issues to answer RQ3 from the first two perspectives, and analyzed the full DS1 to answer RQ3 from the third perspective." ], [ "When TD issues are identified", "In dataset DS1, there are 34,480 TD labeled events that record the assignment of TD labels to 34,364 TD issues.", "Among the 34,364 TD issues, 16,967 (49.4%) were tagged with TD labels when they were reported, and the other 17,397 (50.6%) were tagged with TD labels during the issue resolution process.", "For the 17,397 TD issues, the maximum duration between the issue reporting time and the TD tagging time is 2,784 days, the minimum is 1 day, and the average is 63 days.", "In addition, the 70th percentile is 15 days, and the median is 1 day." 
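The identification-delay statistics above reduce to simple date arithmetic over the labeled events; a minimal pandas sketch is shown below, where the column names and the convention of treating a zero-day difference as "tagged when reported" are assumptions of the sketch, and the three rows are placeholder data.

```python
import pandas as pd

# Assumed columns: 'RepTime' (issue reported) and 'TDLabelTime' (TD label assigned),
# one row per TD issue with a retrievable LabeledEvent.
events = pd.DataFrame({
    "RepTime": pd.to_datetime(["2018-01-01", "2018-03-10", "2019-07-01"]),
    "TDLabelTime": pd.to_datetime(["2018-01-01", "2018-03-25", "2019-07-02"]),
})

delay_days = (events["TDLabelTime"] - events["RepTime"]).dt.days
tagged_at_report = (delay_days == 0).mean()   # share tagged when the issue was reported
tagged_later = delay_days[delay_days > 0]     # tagged during the resolution process

print(f"tagged at reporting time: {tagged_at_report:.1%}")
print(tagged_later.describe(percentiles=[0.5, 0.7]))  # min / median / 70th pct / max
```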
], [ "Who identifies TD issues", "In dataset DS1, all TD issues were identified by 4,526 participants, 1,408 (31.1%) of which identified only one TD issue, and 23 (0.5%) of which identified more than 100 TD issues.", "On average, each participant identified 7.6 TD issues.", "In addition, there were 2,438 (69.0%) repositories in which all TD issues were identified by only one participant.", "In more than 3,306 (91.9%) of all repositories had no more than 4 participants who identified TD issues.", "Among the 34,364 TD issues, 27,690 (80.5%) were identified as TD issues by their reporters, and 6,674 (19.5%) were identified as TD by other participants other than the issue reporters.", "Through analyzing the 27,690 TD issues, we found that the proportion of CONTRIBUTOR and MEMBER among the reporters of the TD issues is relatively high.", "Table REF shows the number and proportion of each relationship.", "Among them, the numbers of FIRST_TIMER, FIRST_TIME_CONTRIBUTOR, and MANNEQUIN are 0, and thus these three relationships do not appear in Table REF .", "Table: The number and proportion of each relationship between issue reporters and repositories of TD issues (RQ3)." ], [ "Phase of adopting TDM during the development lifecycle", "As shown in Figure REF , the number of repositories with TD labels tends to decrease as $Phase_{TD}$ increases.", "Notably, 35.1% of the repositories have a $Phase_{TD}$ value no more than 0.1, and more than half of the repositories have a $Phase_{TD}$ value less than or equal to 0.3.", "This means that the participants start investing effort to TDM in the early phase of the development lifecycle.", "Summary (RQ3): Around a half (49.4%) of TD issues were identified when they were reported, and the others were identified during the resolution process.", "31.1% of the participants identified only one TD issue, and 69.0% of the repositories were with all TD issues identified by only one participant.", "More than a half of the repositories adopt TDM in the early phase of the development lifecycle." 
], [ "Proportion of resolved TD issues", "The distribution of the number of repositories over the proportion of resolved TD issues is shown in Figure REF .", "The proportion of resolved TD issues is divided into multiple intervals, e.g., [0.0, 0.0], (0.0, 0.1], and [1.0, 1.0].", "The number and proportion of repositories in each interval are shown in Figure REF .", "All TD issues were resolved in 1,137 out of 3,598 (31.6%) repositories (corresponding to interval [1.0, 1.0]), and each repository contains 4.5 TD issues on average.", "None of the TD issues was resolved in 982 out of 3,598 (27.3%) repositories (corresponding to interval [0.0, 0.0]), and each repository contains 2.1 TD issues on average.", "In each of the rest 1,479 out of 3,598 (41.1%) repositories, the proportion of resolved TD issues falls into (0.0, 1.0), i.e., part of the TD issues were resolved.", "The median of the proportion of resolved TD issues for all 3,598 repositories in DS1 is 0.582.", "The total number of all resolved TD issues between 2009 and 2020 is 22,841, the total number of all reported TD issues is 35,278, and thus the proportion of resolved TD issues is 64.7%.", "In contrast, the total number of resolved issues and the total number of issues by the end of 2020 are 49,694,427 and 72,813,679, respectively, and hence the proportion of resolved issues is 68.2%, which is slightly greater than the proportion of resolved TD issues.", "Repository microsoft/vscode has 1,610 TD issues, and is the only repository that has more than 1,000 TD issues.", "The proportion of resolved TD issues in this repository is 92.7%, which is a very high proportion, especially considering that some unresolved TD issues reported recently would be resolved afterwards.", "Figure: Distribution of the number of repositories over the proportion of resolved TD issues (RQ4)." 
], [ "Open time of TD issues", "Out of all the 22,841 resolved TD issues, 2,761 (12.1%), 1,012 (4.4%), and 7,601 (33.3%) were resolved in one, two, and less than ten days, respectively.", "The median is 25 days.", "As shown in Figure REF , the distribution of the proportion of resolved TD issues over open time (day) follows a power law [3], i.e., $y=bx^{-a}$ , where $y$ denotes the ratio of resolved TD issues, $x$ denotes the open time of the resolved TD issue, $a$ and $b$ are constants.", "The fitting results are $a=1.228$ and $b=0.369$ , i.e., $y=0.369x^{-1.228}$ (see Figure REF ).", "Figure: Distribution of the proportion of resolved TD issues over open time (RQ4).Figure: Distribution of the open time of resolved TD issues (RQ4).In addition, in order to study the difference on the open time between the resolved TD issues and non-TD issues, we calculated their distributions, which show that: (1) There are discrete values of the distribution of open time for the TD issues and non-TD issues, and the distributions show a right skewed pattern.", "The maximum open time for the TD issues is 2,403 days, which is extremely long.", "(2) The inter-quartile range shows that the open time of TD issues is longer than that of non-TD issues.", "The average open time of resolved TD issues is 84.5% longer than that of resolved non-TD issues, which is 155 days for the former and 84 days for the latter.", "(3) A vast majority of TD issues are resolved in a long period of time compared to non-TD issues.", "Figure REF provides a violin plot based on the distributions of resolved TD issues and non-TD issues.", "The vertical coordinate of this plot is logarithmized (into natural logarithm) in order to ensure that the readers' attention can be focused on the core data (i.e., the inter-quartile range)." ], [ "Characteristics of resolved TD issues", "The results of Mann-Whitney U tests on the characteristics of resolved TD issues and non-TD issues are shown in Table REF , in which $Avg_{TD}$ and $Avg_{non-TD}$ denote the average values of the characteristic in the corresponding row of resolved TD issues and non-TD issues, respectively.", "It can be found that for IssueSize and ParticipantNo, their p-values are less than 0.05.", "Considering the average values of these two characteristics, it means that issue characteristics IssueSize and ParticipantNo of resolved TD issues are significantly smaller than those of resolved non-TD issues, respectively.", "On the contrary, the p-values of CmntSize and CmntNo are larger than 0.05, which means that there is no significant difference between resolved TD issues and non-TD issues with respect to characteristics CmntSize and CmntNo." 
], [ "Reopen of TD issues", "There are 24 repositories with at least 100 resolved TD issues.", "Out of the total 5,202 resolved TD issues in the 24 repositories, 326 (6.3%) resolved TD issues had been reopened before.", "In addition, for 205,927 resolved non-TD issues, 9,677 (4.7%) resolved non-TD issues had been reopened before.", "From the percentages mentioned above, it can be concluded that TD issues are more likely to be fixed for multiple times.", "Table: Results of comparison on issue characteristics (RQ4).Summary (RQ4): All TD issues were resolved in 31.6% of the repositories while no TD issues were resolved at all in 27.3% of the repositories.", "The distribution of the proportion of resolved TD issues over open time obeys a power law.", "The mean open time of resolved TD issues is 84.5% longer than that of resolved non-TD issues.", "TD issues are more likely to be reopened than non-TD issues." ], [ "Repositories with different TDM continuity levels", "After calculation, the value of $Period_{Y}$ is around 368 and the value of $Count_{Z}$ is 8.", "Following the definitions of the three continuity levels, 1,187 (32.9%) repositories with TD labels are identified as Abandoned, while only 298 (8.2%) repositories with TD labels are identified as Consistent.", "The number of repositories of level Abandoned is around 4 times as large as the number of those of level Consistent.", "In addition, the remaining repositories are identified as Unclear.", "Since too few TD issues are managed or the period for TDM is too short, it is not clear whether these repositories are consistently using or have abandoned TDM." ], [ "Characteristics of repositories with different TDM continuity levels", "We compared the characteristics of the repositories of level Consistent and repositories of level Abandoned, and the specific results of Mann-Whitney U tests on the 7 characteristics of the repositories of level Consistent or Abandoned are shown in Table REF , in which $Avg_{CR}$ and $Avg_{AR}$ denote the average values of the characteristic in the corresponding row of the repositories of level Consistent and repositories of level Abandoned, respectively.", "We found that the repositories of these two levels differ significantly within the 95% confidence interval on 6 characteristics: RepoIssueNo, RepoResolvedIssuePro, RepoTDRepoterNo, RepoTDIssueNo, RepoOpentimeAvg, and RepoCmntSizeAvg.", "Specifically, the average of these 6 characteristics are higher for repositories of level Consistent than that for repositories of level Abandoned.", "Table: Results of comparisonon repository characteristics (RQ5).Summary (RQ5): Surprisingly, only 8.2% (i.e., 298) of the repositories with TD labels adopt TDM consistently, while 32.9% (i.e., 1,187) of the repositories with TD labels have abandoned TDM.", "The repositories with consistent TDM have significantly more issues, reporters of TD issues, TD issues, and open time as well as larger size of comments and a higher proportion of resolved issues than the repositories with abandoned TDM." 
], [ "Interpretation of Study Results", "RQ1: First, the lasting and fast increase of the number of repositories with TD labels and TD issues indicates that the TD concept has been increasingly popular over the last decade.", "In other words, the OSS community on GitHub has been raising the awareness of TDM in software development.", "Second, the power-law characteristic of the distribution of ratio of repositories with TD labels over the number of TD issues indicates that only a small portion of repositories have a large number of TD issues and a large portion of repositories have only a few TD issues.", "RQ2: The results of our manual analysis on sampled TD instances indicate that 1) both internal QAs (e.g., maintainability) and external QAs (e.g., usability) are affected by TD, 2) internal QAs (maintainability in particular) are affected the most by TD.", "In addition, co-occurring labels with TD labels are extremely diverse and can carry different kinds of messages.", "They may help to identify the affected QAs, TD types, purposes, and so forth.", "RQ3: First, around a half of all the TD issues were identified when they were reported, which indicates that the reporters of such issues may have good awareness of TDM and that such issues might be relatively easy to be identified as TD.", "Second, in 91.9% of the repositories, all TD issues were identified only by no more than four participants, which means that only a really small number of participants put TDM into practice in most repositories adopting TDM.", "Third, the fact that 80.5% of the TD issues were identified by issue reporters indicates that issue reporters tend to pay more attention to the characteristics of issues.", "More than a half of the repositories with TD labels adopt TDM in the early phase of the development lifecycle, which indicates an early awareness of TDM in most repositories with TD labels.", "RQ4: First, TD issues did not receive enough attention considering that the median of the proportion of resolved TD issues for all repositories is 0.582 and no TD issues were resolved in more than a fourth of all repositories.", "Second, the power-law characteristic of the distribution of the proportion of resolved TD issues over open time indicates that most resolved TD issues experienced short open time and a small portion of resolved TD issues had long open time.", "Third, TD issues take much (i.e., 84.5%) longer to be resolved than non-TD issues.", "One potential reason is that most TD issues are not bugs and invisible to users, and thus they are usually assigned with a relatively low priority.", "Fourth, on average, a non-TD issue attracts more participants to join discussion than a TD issue.", "In other words, a non-TD issue receives more attention than a TD issue.", "Finally, on average, a TD issue is more likely to be reopened than a non-TD issue, which means that TD issues are more difficult to be cleanly fixed than non-TD issues.", "RQ5: Over the 3,598 repositories with TD labels, around a third (32.9%) of the repositories have abandoned TDM, which is partially due to the situation that only one participant is involved in TDM for 69.0% of all the repositories.", "If this only participant left the repository, the practice of TDM there is likely to be dropped.", "This is further confirmed by that the number of TD issue reporters for the repositories keeping consistent TDM is significantly larger than that for the repositories having abandoned TDM." 
], [ "Implications", "Based on the study results and their analysis, we obtained a number of implications for practitioners and researchers, respectively." ], [ "Implications for practitioners", "(1) The OSS community are embracing the TD concept, which in turn reflects the practical value of TDM.", "The TD concept is well adopted in some large repositories, such as microsoft/vscode, in the sense that hundreds and even more of TD issues were lastingly managed in such repositories.", "(2) Although the TD concept has been adopted in some OSS repositories, TDM is possible to be ignored in the sense that no TD issues were resolved in 27.3% of the repositories, and TDM can even be abandoned in the sense that 32.9% of the repositories are identified as having abandoned TDM.", "In the context of OSS development, to benefit from TDM, we recommend to pre-define TD labels for a project (or repository) and to explicitly make clear and concrete TDM policies that the participants should obey.", "In OSS development, if there are no TDM policies exerted on a project, the awareness of TDM is likely to be brought away from the project by the participants who are the only ones care about TDM when they quit the project." ], [ "Implications for researchers", "(1) As the TD practice has been increasingly popular in the OSS community, which resonates with the intensive research on TD in the last decade.", "(2) The reasons for why only one TD issue was identified in 31.1% of all the repositories and only one participant identified all TD issues in 69.0% of all the repositories, need to be further investigated.", "This can help us to understand why the TD concept is not adopted by other participants in such repositories, to evaluate the value of the TD concept in such repositories, and to explore the conditions required by well adoption of the TDM practices in such repositories.", "(3) There is in need for a deep investigation on the cost/benefit of adopting TDM in OSS development, in order to clarify the reasons why around a third (32.9%) of the repositories have abandoned TDM and only 8.2% of the repositories keep TDM as a consistent practice.", "(4) We highlight the gap between the academic research and OSS development practice on TDM.", "TDM has been a hot research topic in software engineering for more than a decade [25], [8], a large number of research papers on TDM have been published.", "Although the awareness of TDM has been increasing in the sense that the growth of the number of repositories newly adopting TDM is higher than that of newly created repositories, there are only 298 repositories keep TDM as a consistent practice on the whole GitHub.", "This demonstrates a big gap between academic research on TDM and practical use of TDM in OSS development." ], [ "Threats to Validity", "There are several threats to the validity of the study results.", "We discuss these threats according to the guidelines in [31].", "Internal validity is not discussed, since we did not study causal relationships." 
], [ "Construct validity", "Construct validity is concerned with whether the values of the variables (listed in Table REF ) we obtained are in line with the real values that we expected.", "First, considering that many projects hosted on GitHub do not record their issues on GitHub, our collected dataset is not complete.", "Second, it is possible that the keywords used to search TD issues do not cover all labels for tagging TD issues.", "However, to reduce this risk, we chose the string \"technical debt\" as well as its variable forms (e.g., word abbreviations), and selected the appropriate TD types according to the study of Li et al.", "[25].", "Admittedly, issues with the list of TD labels may not cover all issues containing TD.", "However, in order to investigate the practice of TDM performed by practitioners in the OSS development, we do not want to judge by ourselves whether an issue contains TD or not, but let the practitioner speak out.", "Therefore, we chose the labels that are directly related to TD as TD labels.", "Third, the 914 TD issues without labeled events may influence the completeness of the data.", "Dataset DS1 excluding 914 TD issues without labeled events was analyzed to partially answer RQ3.", "Considering that the 914 TD issues did not come from a small number of repositories and were not reported in a narrow time slot, the threat caused should be limited.", "Finally, the results of our manual analysis of the affected QAs and TD types of TD issues may be biased due to the differences of experience and knowledge between the researchers.", "This threat was greatly mitigated by the pilot tagging process, discussion of the disagreement of individual tagging results with a third researcher." ], [ "External validity", "External validity is concerned with the generalizability of the study results.", "We only investigated issues reported on repositories hosted on GitHub, therefore, the findings obtained in this paper are merely valid for GitHub, and may not be generalized to other OSS platforms or ecosystems.", "However, GitHub is the largest OSS platform, the repositories included in this study come from different application domains and adopt a wide range programming languages, and thus our datasets used in this study should be sufficiently representative.", "Hence, the threats to external validity should be alleviated." ], [ "Reliability", "Reliability is concerned with whether the study yields the same results when it is replicated by other researchers.", "In all the data analysis tasks, only descriptive statistics were used, and all datasets are provided on GitHub.", "Hence, the threats to reliability should be minimal." 
], [ "Conclusions and Future Work", "In this work, we conducted an empirical study on all issues with TD labels on the whole GitHub to investigate the execution of TDM in OSS development.", "The following findings were obtained: (1) the awareness of TDM in the OSS community has been arising; (2) maintainability, reliability, and functional suitability are the top three QAs affected most by TD issues, and design TD, architectural TD, and test TD are the top three TD types occurred most frequently in TD issues; (3) only one TD issue was identified in 31% of the repositories, all TD issues were identified by only one developer in 69% of the repositories, 81% of TD issues were identified by the issue reporters, 49% of TD issues were identified when the issues were reported and the other 51% of TD issues were identified during the resolution process of the issues; (4) TDM was ignored in 27.3% of the repositories after TD issues were identified; and (5) only 8.2% of the repositories keep TDM as a consistent practice, while 32.9% of the repositories have abandoned TDM.", "Based on the findings in this work, we plan (1) to conduct a broad survey on TDM in the closed source software industry to understand the state of the adoption and execution of TDM, and (2) to investigate the reasons for ignoring and abandoning TDM in the repositories that have adopted TDM." ], [ "Acknowledgment", "This work is supported by the Natural Science Foundation of Hubei Province of China under Grant No.", "2021CFB577, and the National Natural Science Foundation of China under Grant No.", "62176099 and No.", "62172311." ] ]
2212.05537
[ [ "Retire: Robust Expectile Regression in High Dimensions" ], [ "Abstract High-dimensional data can often display heterogeneity due to heteroscedastic variance or inhomogeneous covariate effects.", "Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data.", "The former is computationally challenging due to the non-smooth nature of the check loss, and the latter is sensitive to heavy-tailed error distributions.", "In this paper, we propose and study (penalized) robust expectile regression (retire), with a focus on iteratively reweighted $\\ell_1$-penalization which reduces the estimation bias from $\\ell_1$-penalization and leads to oracle properties.", "Theoretically, we establish the statistical properties of the retire estimator under two regimes: (i) low-dimensional regime in which $d \\ll n$; (ii) high-dimensional regime in which $s\\ll n\\ll d$ with $s$ denoting the number of significant predictors.", "In the high-dimensional setting, we carefully characterize the solution path of the iteratively reweighted $\\ell_1$-penalized retire estimation, adapted from the local linear approximation algorithm for folded-concave regularization.", "Under a mild minimum signal strength condition, we show that after as many as $\\log(\\log d)$ iterations the final iterate enjoys the oracle convergence rate.", "At each iteration, the weighted $\\ell_1$-penalized convex program can be efficiently solved by a semismooth Newton coordinate descent algorithm.", "Numerical studies demonstrate the competitive performance of the proposed procedure compared with either non-robust or quantile regression based alternatives." ], [ "Introduction", "Penalized least squares has become a baseline approach for fitting sparse linear models in high dimensions.", "Its focus is primarily on inferring the conditional mean of the response given the a large number of predictors/covariates.", "In many scientific applications, however, more aspects than the mean of the conditional distribution of the response given the covariates are of interest, and that the covariate effects may be inhomogeneous and/or the noise variables exhibit heavy-tailed and asymmetric tails.", "For instance, in the Job Training Partners Act studied in [1], one is more interested in the lower tail than the mean of the conditional distribution of income given a large pool of predictors.", "To capture heterogeneity in the set of covariates at different locations of the response distribution, methods such as quantile regression [21] and asymmetric least squares regression [29] have been widely used.", "The latter is known as the expectile regression, which has also been widely used in risk analysis [37], [23], [42], [9].", "We refer the reader to [21], [19], and [22] for a comprehensive overview of quantile regression, and [29] and [16] for conventional and penalized expectile regressions.", "We focus on high-dimensional regression models in which the number of covariates, $d$ , is considerably larger than the number of observations, $n$ .", "The goal is to infer the conditional distribution of the response variable $y$ given the covariates $$ based on the training data $(y_1, _1) , \\ldots , (y_n, _n) \\in \\times ^d$ .", "High-dimensional data analysis greatly benefits from the sparsity assumption—only a small number of significant predictors are associated with the response.", "This motivates the use of various convex and non-convex penalty functions so as to achieve a desirable 
trade-off between model complexity and statistical accuracy [8], [39], [12].", "The most widely used penalty functions include the $\\ell _1$ /Lasso penalty [38], the smoothly clipped absolute deviation (SCAD) penalty [11], and the minimax concave penalty (MCP) [44].", "Albeit being computationally efficient and statistically (near-)optimal under $\\ell _2$ -errors, the $\\ell _1$ penalty induces non-negligible estimation bias, which may prevent consistent variable selection.", "The selected model with a relatively small prediction error tends to include many false positives, unless stringent assumptions are imposed on the design matrix [47], [46], [34], [24].", "Folded-concave (non-convex) penalty functions, on the other hand, have been designed to reduce the bias induced by the $\\ell _1$ penalty.", "With either the $\\ell _2$ or a robust loss function, the resulting folded-concave penalized estimator is proven to achieve the oracle property provided the signals are sufficiently strong, i.e., the estimator has the same rate of convergence as that of the oracle estimator obtained by fitting the regression model with true active predictors that are unknown in practice [11], [47], [46], [27], [26].", "Due to non-convexity, directly minimizing the concave penalized loss raises numerical instabilities.", "Standard gradient-based algorithms are often guaranteed to find a stationary point, while oracle results are primarily derived for the hypothetical global minimum.", "[47] proposed a unified algorithm for folded-concave penalized estimation based on local linear approximation (LLA).", "It relaxes the non-convex optimization problem into a sequence of iteratively reweighted $\\ell _1$ -penalized subproblems.", "The statistical properties of the final iterate have been studied by [45], [15], [13] and [31] under different regression models.", "We refer to [39] and [12], and the references therein, for a comprehensive introduction of penalized $M$ -estimation based on various convex and folded-concave (non-convex) penalty functions.", "For sparse quantile regression (QR) in high dimensions, [5] studied $\\ell _1$ -penalized quantile regression process, and established the uniform (over a range of quantile levels) rate of convergence.", "To alleviate the bias induced by the $\\ell _1$ penalty, [41] proposed concave penalized quantile regression, and showed that the oracle estimator is a local solution to the resulting optimization problem.", "Via the one-step LLA algorithm, [15] proved that the oracle estimator can be obtained (with high probability) as long as the magnitude of true nonzero regression coefficients is at least of order $\\sqrt{s \\log (d)/n}$ .", "We refer to [40] for a unified analysis of global and local optima of penalized quantile regressions.", "While quantile regression offers the flexibility to model the conditional response distribution and is robust to outliers, together the non-differentiability of the check function and the non-convexity of the penalty pose substantial technical and computational challenges.", "To our knowledge, the theoretical guarantee of the convergence of a computationally efficient algorithm to the oracle QR estimator under the weak minimum signal strength condition—$\\min _{j \\in } |\\beta ^*_j| \\gtrsim \\sqrt{\\log (d)/n}$ with $= {\\rm supp}(\\beta ^*)$ —remains unclear.", "An alternative approach to explore heterogeneity and/or asymmetry in the response distribution is the expectile regression [29], which is essentially a least squares 
analogue of regression quantile estimation.", "Quantiles and expectiles are useful descriptors of the tail behavior of a distribution in the same way as the median and mean are related to its central behavior.", "They share similar properties, and as shown by [18], expectiles are exactly quantiles of a transformed version of the original distribution.", "Quantiles are naturally more dominant in the literature because they are the inverse of the distribution function and directly indicate relative frequency, whereas expectiles lack such an intuitive interpretation.", "The key advantage of expectile regression is its computational expediency (for example, via the iteratively reweighted least squares algorithm), together with the fact that the asymptotic covariance matrix can be estimated without the need to estimate the conditional density function (nonparametrically).", "Therefore, it offers a convenient and relatively efficient method of summarizing the conditional response distribution.", "In finance, the expectile (EVaR) is known to be a “coherent\" measure of risk [23], in contrast with the quantile (VaR), since it satisfies the well-known axioms introduced by [3].", "In high-dimensional sparse models, [16] considered penalized expectile regression using both convex and concave penalty functions.", "Since the expectile loss is convex and twice-differentiable, scalable algorithms, such as cyclic coordinate descent and proximal gradient descent, can be employed to solve the resulting optimization problem.", "Theoretically, the consistency of penalized expectile regression in the high-dimensional regime “$\log (d) \ll n \ll d$ \" requires sub-Gaussian error distributions [16].", "This is in strong contrast to penalized QR, the consistency of which requires no moment condition [5], [40], although certain regularity conditions on the conditional density function are still needed.", "This lack of robustness to heavy-tailedness is also observed in numerical studies.", "Since expectile regression is introduced primarily to explore the tail behavior of the conditional response distribution, its sensitivity to the tails of the error distribution, particularly in the presence of high-dimensional covariates, raises a major concern from a robustness viewpoint.", "In this work, we aim to shrink the gap between quantile and expectile regressions, specifically in high dimensions, by proposing a robust expectile regression (retire) method that inherits the computational expediency and statistical efficiency of expectile regression and is nearly as robust as quantile regression against heavy-tailed response distributions.", "The main idea, which is adapted from [35], is to replace the asymmetric squared loss associated with expectile regression by a Lipschitz and locally quadratic robust alternative, parameterized by a data-dependent parameter to achieve a desirable trade-off between bias and robustness.", "Under the low-dimensional regime “$d\ll n$ \", we provide nonasymptotic high-probability error bounds, a Bahadur representation, and a Berry-Esseen bound (normal approximation) for the retire estimator when the noise variable has bounded variance.", "In the high-dimensional sparse setting “$\max \lbrace s, \log (d) \rbrace \ll n \ll d$ \", we propose an iteratively reweighted $\ell _1$ -penalized (IRW-$\ell _1$) algorithm to obtain the penalized retire estimator, where $s$ denotes the number of significant predictors.", "The problem boils down to iteratively minimizing convex loss functions (proven to 
be locally strongly convex with high probability), solvable by (but not limited to) a semismooth Newton coordinate descent type algorithm proposed by [43].", "Theoretically, we provide explicit error bounds (in high probability) for the solution path of IRW-$\\ell _1$ .", "More specifically, we first obtain the statistical error of the $\\ell _1$ -penalized retire estimator, i.e., the first iterate of the IRW-$\\ell _1$ algorithm initialized at zero.", "We then show that the statistical error for the subsequent estimators can be improved sequentially by a $\\delta $ -fraction at each iteration for some constant $\\delta \\in (0,1)$ .", "Under a near necessary and sufficient minimum signal strength condition, we show that the IRW-$\\ell _1$ algorithm with $\\lbrace \\log (\\log d)\\rbrace $ iterations delivers an estimator that achieves the oracle rate of convergence with high probability.", "The rest of the paper is organized as follows.", "In Section , we briefly revisit the connection and distinction between quantile and expectile regressions.", "We describe the proposed method in Section , where we construct the new loss function and detail the semismooth Newton algorithm to solve the resulting optimization problem.", "The theoretical properties of the proposed estimator are presented in Section .", "Sections  and  consist of extensive numerical studies and two data applications, respectively.", "The proofs of the theoretical results are given in the Appendix." ], [ "Background and Problem Setup", "Let $y\\in $ be a scalar response variable and $=(x_1,\\ldots ,x_d)^^d$ be a $d$ -dimensional vector of covariates.", "The training data $(y_1, _1) , \\ldots , (y_n , _n)$ are independent copies of $(y,)$ .", "Given a location parameter $\\tau \\in (0,1)$ , we consider the linear model $y_i = _i^{\\beta ^*(\\tau ) + \\varepsilon _i(\\tau ),}where \\beta ^*(\\tau ) is the unknown d-dimensional vector of regression coefficients, and \\varepsilon _i(\\tau )^{\\prime }s are independent random noise.Model~(\\ref {eq:linearmodel}) allows the regression coefficients \\beta ^*(\\tau ) to vary across different values of \\tau , and thereby offers insights into the entire conditional distribution of y given .Throughout the paper, we let x_{1} = 1 so that \\beta _1^* denotes the intercept term.", "We suppress the dependency of \\beta ^*(\\tau ) and \\varepsilon (\\tau ) on \\tau whenever there is no ambiguity.$ The most natural way to relate the conditional distribution of $y$ given $$ and the parameter process $\\lbrace \\beta ^*(\\tau ), \\tau \\in (0,1)\\rbrace $ is through quantile regression, under the assumption that $F_{y_i|_i}^{-1}(\\tau ) = _i^^*(\\tau )$ , or equivalently, $\\lbrace \\varepsilon _i(\\tau ) \\le 0 \\, | _i \\rbrace = \\tau $ .", "Fitting a conditional quantile model involves minimizing a non-smooth piecewise linear loss function, $ \\varphi _\\tau (u)= u\\lbrace \\tau - \\mathbb {1}(u<0)\\rbrace $ , typically recast as a linear program, solvable by the simplex algorithm or interior-point methods.", "For the latter, [32] showed that the average-case computational complexity grows as a cubic function of the dimension $d$ , and thus, is computationally demanding for problems with large dimensions.", "Adapted from the concept of quantiles, [29] and [10] separately proposed an alternative class of location measures of a distribution, named the expectile according to the former.", "The resulting regression methods are referred to as the expectile regression or the asymmetric least 
squares regression, which are easy to compute and reasonably efficient under normality conditions.", "We start with some basic notation and facts for expectile regression.", "Let $Z \in \mathbb{R}$ be a random variable with a finite first moment, i.e., $\mathbb{E}(|Z|) < \infty $ .", "The $\tau $ -th expectile or $\tau $ -mean of $Z$ is defined as $e_\tau (Z) := \mathrm{argmin}_{u \in \mathbb{R}} \, \mathbb{E} \bigl \lbrace \eta _\tau (Z- u) - \eta _\tau (Z) \bigr \rbrace , \qquad \tau \in (0,1) ,$ where $\eta _\tau ( u ) = | \tau - \mathbb {1}(u<0) | \cdot \frac{u^2}{2} = \frac{\tau }{2} \lbrace \max (u,0)\rbrace ^2 + \frac{1-\tau }{2}\lbrace \max (-u,0)\rbrace ^2$ is the asymmetric squared/$\ell _2$ loss [29].", "The quantity $e_\tau (Z)$ is well defined as long as $\mathbb{E}|Z|$ is finite.", "When $\tau =1/2$ , it can be easily seen that $e_{1/2}(Z) = \mathbb{E}(Z)$ .", "Therefore, expectiles can be viewed as an asymmetric generalization of the mean, and the term “expectile\" stems from a combination of “expectation” and “quantile\".", "Moreover, expectiles are uniquely identified by the first-order condition $\tau \cdot \mathbb{E}( Z - e_\tau (Z) )_+ = (1-\tau )\cdot \mathbb{E}(Z - e_\tau (Z) )_-,$ where $x_+ = \max (x, 0)$ and $x_- = \max (-x, 0)$ .", "Note also that the $\tau $ -expectile of $Z$ defined in (REF ) is equivalent to Efron's $\omega $ -mean with $\omega = \tau /(1-\tau )$ [10].", "The notion of expectiles is a least squares counterpart of quantiles, and can be viewed as an alternative measure of “location\" of the random variable $Z$ .", "Respectively, the $1/2$ -expectile and the $1/2$ -quantile correspond to the mean and the median, both of which are related to the central behavior.", "In general, the $\tau $ -expectile and the $\tau $ -quantile with $\tau $ close to zero and one describe the lower and upper regions of the distribution of $Z$ , respectively.", "The latter is the point below which 100$\tau $ % of the mass of $Z$ lies, whereas the former specifies the position, say $e_\tau $ , such that the average distance from the part of $Z$ below $e_\tau $ to $e_\tau $ itself is 100$\tau $ % of the average distance between $Z$ and $e_\tau $ .", "Given independent observations $Z_1, \ldots , Z_n$ from $Z$ , the expectile location estimator is given by $\hat{e}_\tau = \mathrm{argmin}_{u \in \mathbb{R}} \sum _{i=1}^n\eta _\tau (Z_i - u)$ , which is uniquely defined due to the strong convexity of the asymmetric $\ell _2$ -loss.", "The expectile estimator $\hat{e}_\tau $ can also be interpreted as a maximum likelihood estimator for a normally distributed sample with unequal weights given to disturbances of differing signs, with a larger relative weight given to less variable disturbances [2].", "Essentially, the asymmetric squared loss $\eta _\tau (\cdot )$ is an $\ell _2$ -version of the check function $\varphi _\tau (\cdot )$ for quantile regression.", "Given training data from the linear model (REF ) subject to $e_\tau (\varepsilon _i|\mathbf{x}_i)=0$ , the expectile regression estimator [29] is defined as the minimizer of the following convex optimization problem: $\underset{\beta \in \mathbb{R}^d}{\mathrm {minimize}} ~ \frac{1}{n} \sum _{i=1}^n\eta _\tau ( y_i - \mathbf{x}_i^{\top} \beta ) ,$ which consistently estimates $\beta ^*$ when $d=o(n)$ as $n\rightarrow \infty $ .", "In particular, expectile regression with $\tau =0.5$ reduces to ordinary least squares regression." ], [ "A Class of Asymmetric Robust Squared Losses", "Despite its computational advantage over quantile regression, expectile regression (REF ) is much more sensitive to heavy-tailed distributions due to the squared 
loss component in (REF ).", "This lack of robustness is amplified in the presence of high-dimensional covariates, and therefore necessitates the development of a new class of asymmetric loss functions that preserves the robustness of the check loss to a degree.", "To this end, we construct a class of asymmetric robust loss functions that is more resistant against heavy-tailed error/response distributions.", "The main idea is to replace the quadratic component in (REF ) with a Lipschitz and locally strongly convex alternative, typified by the Huber loss [17] that is a hybrid $\\ell _1$ /$\\ell _2$ function.", "The proposed loss function, $\\ell _{\\gamma }(u)$ , contains a tuning parameter $\\gamma >0$ that is to be chosen to achieve a balanced trade-off between the robustification bias and the degree of robustness.", "At a high level, we focus on the class of loss functions that satisfies Condition REF below.", "Let $\\ell _\\gamma (u) = \\gamma ^2 \\ell (u/\\gamma )$ for $u\\in $ , where the function $\\ell : \\mapsto [0,\\infty )$ satisfies: (i) $\\ell ^{\\prime }(0)=0$ and $| \\ell ^{\\prime }(u) | \\le \\min (a_1, |u|) $ for all $u\\in $ ; (ii) $\\ell ^{\\prime \\prime }(0)=1$ and $\\ell ^{\\prime \\prime }(u) \\ge a_2$ for all $|u| \\le a_3$ ; and (iii) $|\\ell ^{\\prime }(u) - u| \\le u^2$ for all $u\\in $ , where $a_1$ , $a_2$ , and $a_3$ are positive constants.", "Condition REF encompasses many commonly used robust loss functions such as the Huber loss $\\ell (u) = \\min \\lbrace u^2/2, |u| - 1/2 \\rbrace $ [17], pseudo-Huber losses $\\ell (u) = \\sqrt{1+u^2} - 1$ and $\\ell (u) = \\log ( e^u/2 + e^{-u}/2 ) $ , smoothed Huber losses $\\ell (u) = \\min \\lbrace u^2/2 - |u|^3/6, |u|/2 - 1/6\\rbrace $ and $\\ell (u) = \\min \\lbrace u^2/2 - u^4/24, (2\\sqrt{2}/3) |u| - 1/2 \\rbrace $ , among other smooth approximations of the Huber loss [25].", "Consequently, we consider the following asymmetric robust loss $L_{\\tau , \\gamma }(u) := | \\tau - \\mathbb {1}(u<0) | \\cdot \\ell _\\gamma (u),$ where $\\ell _{\\gamma }(\\cdot )$ is subject to Condition REF .", "In Section REF , we consider the robust expectile regression (retire) estimator based on the robust loss (REF ) in the classical setting that $d<n$ .", "Its statistical properties, both asymptotic and nonasymptotic, will be given in Section REF under the so-called “many regressors\" model [6] in which the dimension $d=d_n$ is allowed to grow with $n$ subject to the constraint $d_n = o(n^a)$ for some $0<a \\le 1$ .", "To deal with high-dimensional data for which $d$ can be much larger than $n$ , we propose penalized retire estimators in Section REF with statistical guarantees (under sparsity) provided in Section REF ." 
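To make Condition REF concrete, the short sketch below (written in Python with NumPy purely for illustration; it is not the implementation used in this paper, and the function names are ours) evaluates the asymmetric robust loss $L_{\tau ,\gamma }$ in (REF ) and its derivative for the Huber choice $\ell (u) = \min \lbrace u^2/2, |u|-1/2 \rbrace $ ; the derivative $\zeta (u) = L^{\prime }_{\tau ,\gamma }(u)$ is the quantity that reappears in the algorithms and inference formulas of the following sections.
\begin{verbatim}
import numpy as np

def huber(u, gamma):
    # ell_gamma(u) = gamma^2 * ell(u / gamma) for the Huber choice of ell
    au = np.abs(u)
    return np.where(au <= gamma, 0.5 * u ** 2, gamma * au - 0.5 * gamma ** 2)

def huber_prime(u, gamma):
    # ell'_gamma(u) = u on [-gamma, gamma], +/- gamma outside
    return np.clip(u, -gamma, gamma)

def retire_loss(u, tau, gamma):
    # L_{tau, gamma}(u) = |tau - 1(u < 0)| * ell_gamma(u)
    w = np.where(u < 0, 1.0 - tau, tau)
    return w * huber(u, gamma)

def retire_score(u, tau, gamma):
    # zeta(u) = L'_{tau, gamma}(u) = |tau - 1(u < 0)| * ell'_gamma(u)
    w = np.where(u < 0, 1.0 - tau, tau)
    return w * huber_prime(u, gamma)
\end{verbatim}
Letting $\gamma $ grow (in practice, taking a very large value) recovers the asymmetric squared loss of expectile regression, while $\tau = 1/2$ recovers the usual symmetric Huber loss.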
], [ "Retire Estimator in Low Dimensions", "Given a location parameter $\\tau \\in (0,1)$ , we define the retire estimator (when $d<n$ ) as $\\hat{\\beta } = \\hat{\\beta }_\\gamma = _{\\beta \\in ^d} \\frac{1}{n} \\sum _{i=1}^n L_{\\tau , \\gamma }(y_i - _i^{ \\beta ),}where \\gamma >0 is a robustification parameter that will be calibrated adaptively from data as we detail in Section~\\ref {sec:simulate}.", "Numerically, the optimization problem (\\ref {ld estimator}) can be efficiently solved by either gradient descent or quasi-Newton methods \\cite {NW1999}, such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm that can be implemented as on option of the base function \\texttt {optim()} in \\texttt {R}.$ Recall that the population parameter $\\beta ^*$ is uniquely identified as $\\beta ^* = _{\\beta \\in ^d}~ \\lbrace L_{\\tau ,\\infty }(y-^{\\beta )\\rbrace ~\\mbox{ with }~ L_{\\tau ,\\infty } (u) := | \\tau - \\mathbb {1}(u<0) | \\cdot u^2/2.On the other hand, \\hat{\\beta } can be viewed an M-estimator of the following population parameter\\beta ^*_\\gamma := _{\\beta \\in ^d} \\lbrace L_{\\tau , \\gamma }(y-^{\\beta )\\rbrace .It is worth pointing out that \\beta ^*_\\gamma typically differs from \\beta ^* for any given \\gamma >0.", "To see this, note that the convexity of the robust loss L_{\\tau , \\gamma }:^d \\rightarrow implies the first-order condition, that is, \\lbrace |\\tau - \\mathbb {1}(y < ^{\\beta ^*_\\gamma )| \\cdot \\ell ^{\\prime }_{\\tau , \\gamma }(y-^{\\beta ^*_\\gamma )\\rbrace = \\mathbf {0}.", "On the other hand, we have e_\\tau (\\varepsilon |) = e_\\tau (y-^{\\beta ^* |) =0, implying\\lbrace | \\tau - \\mathbb {1}(\\varepsilon <0) |\\cdot \\varepsilon \\rbrace = \\mathbf {0}.Since the random error \\varepsilon given is asymmetric around zero, in general we have\\mathbf {0} \\ne \\lbrace |\\tau - \\mathbb {1}(\\varepsilon < 0)| \\cdot \\ell ^{\\prime }_{\\tau , \\gamma }(\\varepsilon )\\rbrace = \\lbrace |\\tau - \\mathbb {1}(y < ^{\\beta ^* )| \\cdot \\ell ^{\\prime }_{\\tau , \\gamma }(y-^{\\beta ^*)\\rbrace ,which in turn implies that \\beta ^* \\ne \\beta ^*_\\gamma .", "We refer to the difference \\Vert \\beta ^*_\\gamma - \\beta ^* \\Vert _2 as the robustification bias.In Section \\ref {subsec:lowdim}, we will show that under mild conditions, the robustification bias is of the order (\\gamma ^{-1}), and a properly chosen \\gamma balances bias and robustness.", "}}}}To perform statistical inference on \\beta ^*_j^{\\prime }s, we construct normal-based confidence intervals based on the asymptotic theory developed in Section~\\ref {subsec:lowd}.To this end, we first introduce some additional notation.Let \\hat{\\varepsilon }_i = y_i-_i^{ \\hat{\\beta } be the residuals from the fitted model and let _j \\in ^d be the canonical basis vector, i.e., the j-th entry equals one and all other entries equal zero.Let \\hat{\\mathbf {J}} = n^{-1}\\sum _{i=1}^n|\\tau - \\mathbb {1}(\\hat{\\varepsilon }_i < 0 )| \\cdot _i _i^{.An approximate 95\\% confidence interval for \\beta _j^* can thus be constructed as\\Bigg [ \\hat{\\beta }_j - 1.96 {\\frac{\\hat{\\sigma }(_j)}{\\sqrt{n}}},~ \\hat{\\beta }_j - 1.96 {\\frac{\\hat{\\sigma }(_j)}{\\sqrt{n}}} \\Bigg ],where\\hat{\\sigma }^2(_j) := _j^{ \\, \\hat{\\mathbf {J}}^{-1} \\Bigg [ \\frac{1}{n} \\sum _{i=1}^n\\zeta ^2(\\hat{\\varepsilon }_i) _i _i^{ \\Bigg ] \\hat{\\mathbf {J}}^{-1} _j,and \\zeta (u) = L^{\\prime }_{\\tau ,\\gamma }(u) = | \\tau - \\mathbb {1}(u<0) | \\cdot \\ell ^{\\prime }_\\gamma (u) is the 
first-order derivative of L_{\\tau ,\\gamma }(\\cdot ) given in (\\ref {retire.loss}).", "}}}\\subsection {Penalized Retire Estimator in High Dimensions}In this section, we propose the penalized retire estimator for modeling high-dimensional datawith d>n, obtained by minimizing the robust loss in (\\ref {ld estimator}) plus a penalty function that induces sparsity on the regression coefficients.As mentioned in Section~\\ref {sec:1}, the non-negligible estimation bias introduced by convex penalties (e.g., the Lasso penalty) can be reduced by folded-concave regularization when the signals are sufficiently strong, that is, the minimum of magnitudes of all nonzero coefficients are away from zero to some extent.The latter, however, is computationally more challenging and unstable due to non-convexity.", "}Adapted from the local linear approximation algorithm proposed by \\cite {ZL2008}, we apply an iteratively reweighted \\ell _1-penalized algorithm for fitting sparse robust expectile regression models with the robust loss L_{\\tau , \\gamma }(\\cdot ).", "At each iteration, the penalty weights depend on the previous iterate and the choice of a (folded) concave regularizer satisfying Condition~\\ref {def:concavepenalty} \\cite {ZZ2012} below.Some popular examples include the smoothly clipped absolute deviation (SCAD) penalty \\cite {FL2001}, the minimax concave penalty \\cite {Z2010}, and the capped-\\ell _1 penalty.", "We refer the reader to \\cite {ZZ2012} and Section~4.4 of \\cite {FLZZ2020} for more details.", "}\\begin{condition}The penalty function p_\\lambda (\\lambda >0) is of the form p_\\lambda (t) = \\lambda ^2 p_0(t/\\lambda ) for t\\ge 0, where the function p_0: _+ \\rightarrow _+ satisfies:(i) p_0(\\cdot ) is non-decreasing on [0,\\infty ) with p_0(0)=0; (ii) p_0(\\cdot ) is differentiable almost everywhere on (0,\\infty ) and \\lim _{t \\downarrow 0} p_0^{\\prime }(t) = 1; (iv) p_0^{\\prime }(t_1) \\le p_0^{\\prime }(t_2) for all t_1 \\ge t_2> 0.\\end{condition}}}Let $ ()$ be a prespecified concave regularizer that satisfies Condition~\\ref {def:concavepenalty}, and let $ p'()$ be its first-order derivative.Starting at iteration 0 an initial estimate $$\\beta $ (0)$, we sequentially solve the following weighted $ 1$-penalized convex optimization problems:{\\begin{@align}{1}{-1}\\hat{\\beta }^{(t)} \\in \\underset{\\beta \\in ^d}{\\mathrm {minimize}}~\\left\\lbrace \\frac{1}{n} \\sum _{i=1}^n L_{\\tau ,\\gamma }(y_i-_i^{\\beta )+ \\sum _{j=2}^d p^{\\prime }_\\lambda ( | \\hat{\\beta }_j^{(t-1)} | ) |\\beta _j| ,}\\right.where \\hat{\\beta }^{(t)}= (\\hat{\\beta }_1^{(t)}, \\ldots , \\hat{\\beta }_d^{(t)})^.At each iteration, \\hat{\\beta }^{(t)} is a weighted \\ell _1-penalized robust expectile regression estimate, where the weight p^{\\prime }_\\lambda ( | \\hat{\\beta }_j^{(t-1)} | ) |\\beta _j| can be viewed as a local linear approximation of the concave regularizer p_\\lambda (|\\beta _j|) around | \\hat{\\beta }_j^{(t-1)} |.", "With the trivial initialization \\hat{\\beta }^{(0)}=\\mathbf {0}, the first optimization problem~(\\ref {retire.est.convex}) (when t=1) reduces to the \\ell _1-penalized robust expectile regression because p^{\\prime }_{\\lambda }(0) =\\lambda .This iterative procedure outputs a sequence of estimates \\hat{\\beta }^{(1)},\\ldots ,\\hat{\\beta }^{(T)}, where the number of iterations T can either be set before running the algorithm or depend on a stopping criterion.Throughout this paper, we refer to the sequence of estimates \\lbrace \\hat{\\beta 
}^{(t)} \\rbrace _{t=1,\\ldots , T} given in (\\ref {retire.est.convex}) as the {\\it iteratively reweighted \\ell _1-penalized retire} estimators.We will characterize their statistical properties in Section~\\ref {subsec:theory:highd}, including the theoretical choice of T in order to obtain a statistically optimal estimator.\\end{@align}}$ We now outline a coordinate descent type algorithm, the semismooth Newton coordinate descent (SNCD) algorithm proposed by [43], to solve the weighted $\\ell _1$ -penalized convex optimization problem in ().", "Recall that the key component of the asymmetric loss function $L_{\\tau ,\\gamma } (\\cdot )$ is the robust loss $\\ell _{\\gamma }(u) = \\gamma ^2\\ell (u/\\gamma )$ .", "For convenience, we focus on the Huber loss for which $\\ell (u) = u^2/2 \\cdot \\mathbb {1}(|u| \\le 1) + (|u| - 1/2)\\cdot \\mathbb {1}(|u| >1)$ .", "The main crux of the SNCD algorithm is to combine the semismooth Newton method and the cyclic coordinate descent algorithm to iteratively update the parameter of interest one at a time via a Newton-type step until convergence.", "In the following, we provide a brief derivation of the algorithm, and defer the details to Section  of the Appendix.", "Let $L^{\\prime }_{\\tau ,\\gamma }(u)$ and $L^{\\prime \\prime }_{\\tau ,\\gamma }(u)$ be the first and second order derivatives (with respect to $u$ ) of the loss function in (REF ), respectively.", "For notational convenience, let $\\lambda _j^{(t)} = p^{\\prime }_{\\lambda }( |\\hat{\\beta }_j^{(t-1)} | )$ be the penalty weights at the $t$ -th iteration.", "Then, the Karush-Kuhn-Tucker conditions for () read $\\begin{split}{\\left\\lbrace \\begin{array}{ll}-\\frac{1}{n} \\sum _{i=1}^n L^{\\prime }_{\\tau ,\\gamma }(y_i-_i^{ \\hat{\\beta }) = 0 \\quad \\mathrm {for}~ j=1, \\\\-\\frac{1}{n} \\sum _{i=1}^n L^{\\prime }_{\\tau ,\\gamma }(y_i-_i^{ \\hat{\\beta })x_{ij} +\\lambda _j^{(t)} \\hat{z}_j= 0\\quad \\mathrm {for}~ j=2,\\dots ,d,\\\\\\hat{\\beta }_j - S(\\hat{\\beta }_j+\\hat{z}_j) = 0 \\quad \\mathrm {for}~ j=2,\\ldots ,d,}}\\end{array}\\right.where \\hat{z}_j \\in \\partial |\\hat{\\beta }_j| is a subgradient of the absolute value function, and S(u) = \\mathrm {sign}(u) \\max (|u|-1,0).", "Finding the optimum to the optimization problem~(\\ref {retire.est.convex}) is equivalent to solving the system of equations (\\ref {eq:kkt}).The latter can be done iteratively in a cyclic fashion.", "That is, at each iteration, we update the pair of parameters (\\beta _j,z_j) by solving the corresponding equations in~(\\ref {eq:kkt}) while keeping the remaining parameters fixed.Each pair of parameters is updated by a semismooth Newton step, which we detail in Section~\\ref {sec:algorithm:derivation} of the Appendix.The whole procedure is summarized in Algorithm~\\ref {Alg:general}.", "}\\begin{algorithm}[!htp]\\small \\caption { Semismooth Newton Coordinate Descent Algorithm for Solving (\\ref {retire.est.convex}) with a Huber Loss.", "}\\textbf {Input:} regularization parameter \\lambda , Huber loss tuning parameter \\gamma , and convergence criterion \\epsilon .\\\\\\textbf {Initialization:} \\hat{\\beta }^{0}=0.\\\\\\textbf {Iterate:} the following until the stopping criterion \\Vert \\hat{\\beta }^{k}-\\hat{\\beta }^{k-1}\\Vert _2 \\le \\epsilon is met, where \\hat{\\beta }^{k} is the value of \\beta obtained at the k-th iteration.\\begin{enumerate}\\item \\hat{\\beta }_1^{k+1} \\leftarrow \\hat{\\beta }_1^{k} + \\lbrace \\sum _{i=1}^n L^{\\prime }_{\\tau ,\\gamma }(y_i-_i^{ 
\\hat{\\beta }^{k})\\rbrace /\\lbrace \\sum _{i=1}^n L^{\\prime \\prime }_{\\tau ,\\gamma }(y_i-_i^{ \\hat{\\beta }^{k})\\rbrace .\\item for j=2,\\ldots ,d, update the pair of parameters (\\beta _j,z_j) as follows:\\begin{bmatrix}\\hat{\\beta }_j^{k+1} \\\\\\hat{z}_j^{k+1}\\end{bmatrix}={\\left\\lbrace \\begin{array}{ll}\\begin{bmatrix}\\hat{\\beta }_j^{k} + \\frac{ \\sum _{i=1}^n L^{\\prime }_{\\tau ,\\gamma }(y_i-_i^{\\hat{\\beta }^{k}) x_{ij} - \\lambda _j^{(t)} \\cdot \\mathrm {sign}(\\hat{\\beta }_j^{k}+\\hat{z}_j^{k} ) }{ \\sum _{i=1}^n L^{\\prime \\prime }_{\\tau ,\\gamma }(y_i-_i^{\\hat{\\beta }^k)) x_{ij}^2 } \\\\\\mathrm {sign}(\\hat{\\beta }_j^{k}+\\hat{z}_j^{k} )},& \\mathrm {if~} |\\hat{\\beta }_j^{k} + \\hat{z}_j^{k}|>1, \\\\\\begin{bmatrix} 0 \\\\(n \\lambda _j^{(t)})^{-1} \\sum _{i=1}^n L^{\\prime }_{\\tau ,\\gamma }(y_i-_i^{\\hat{\\beta }^{k})x_{ij} + (n \\lambda _j^{(t)})^{-1} \\hat{\\beta }_j^{k} \\sum _{i=1}^n L^{\\prime \\prime }_{\\tau ,\\gamma }(y_i-_i^{\\hat{\\beta }^{k})x_{ij}^2},& \\mathrm {if~} |\\hat{\\beta }_j^{k} + \\hat{z}_j^{k}|\\le 1.", "}\\end{bmatrix}}{\\textbf {}{Output:} the final iterate \\hat{\\beta }^{k}.", "}\\end{bmatrix}\\end{array}For the Huber loss, the first- and second-order derivatives of \\right.L_{\\tau ,\\gamma } (u) are\\begin{equation*}L^{\\prime }_{\\tau ,\\gamma }(u) = {\\left\\lbrace \\begin{array}{ll}- (1 - \\tau ) \\gamma , &{\\rm if }~ u < -\\gamma ,\\\\(1 - \\tau ) u, &{\\rm if }~ -\\gamma \\le u < 0,\\\\\\tau u, &{\\rm if }~ 0 \\le u \\le \\gamma ,\\\\\\tau \\gamma , &{\\rm if }~ u > \\gamma ,\\end{array}\\right.", "}\\quad \\mathrm {and}\\quad L^{\\prime \\prime }_{\\tau ,\\gamma }(u) = {\\left\\lbrace \\begin{array}{ll}1 - \\tau , &{\\rm if }~ -\\gamma \\le u < 0,\\\\\\tau , &{\\rm if }~ 0 \\le u \\le \\gamma ,\\\\0, & \\text{otherwise},\\end{array}\\right.", "}\\end{equation*}respectively.In Algorithm~\\ref {Alg:general}, the update \\hat{\\beta }_j^{k+1} involves the second derivative of the loss function, \\sum _{i=1}^nL^{\\prime \\prime }_{\\tau ,\\gamma }(y_i - _i^{\\hat{\\beta }^{k}), in the denominator.For extreme values of \\tau that are near zero or 1, the denominator may be close to zero, causing instability.To address this issue, \\cite {YH2017} implemented a continuity approximation in their \\texttt {R} package \\texttt {hqreg}, and we adopt the same technique to implement Algorithm~\\ref {Alg:general}.In particular, if the sum of second derivatives is equal to zero or the percentage of residuals with magnitude below \\gamma is less than 5\\% or n^{-1}, we instead substitute the sum of second derivatives by the quantity \\sum _{i=1}^n (| y_i - _i^{\\hat{\\beta }^{k}|)^{-1} \\mathbb {1}(| y_i - _i^{\\hat{\\beta }^{k}| > \\gamma ).Such a continuity approximation works well across all of the numerical settings that we have considered.", "}}}\\section {Theoretical Results}We provide an explicit characterization of the estimation error for the retire estimator \\hat{\\beta } defined in (\\ref {ld estimator}) (low-dimensional setting), and the sequence of penalized retire estimators \\lbrace \\hat{\\beta }^{(t)} \\rbrace _{t=1,\\ldots , T} defined in (\\ref {retire.est.convex}) (high-dimensional setting) in Sections~\\ref {subsec:lowdim} and~\\ref {subsec:theory:highd}, respectively.Our proposed estimator relies on the choice of robust loss function in Condition~\\ref {def:general.loss}.For simplicity, we focus on the Huber loss \\ell (u) = u^2/2 \\cdot \\mathbb {1}(|u| \\le 1) + (|u| - 1/2)\\cdot \\mathbb {1}(|u| >1) 
throughout our analysis, i.e., a_1=a_2=a_3=1 in Condition~\\ref {def:general.loss}, but note that similar results hold for any robust loss that satisfies Condition~\\ref {def:general.loss}.Throughout the theoretical analysis, we assume that the location measure \\tau \\in (0,1) is fixed.", "}}We first defined the empirical loss function and its gradient as\\begin{equation*}_n(\\beta ) = \\frac{1}{n} \\sum _{i=1}^nL_{\\tau ,\\gamma }(y_i - _i^) ~~\\mbox{ and}~~ \\nabla _n(\\beta ) = -\\frac{1}{n} \\sum _{i=1}^nL_{\\tau ,\\gamma }^{\\prime }(y_i - _i^) _i ,\\end{equation*}respectively.Moreover, we impose some common conditions on the random covariates and the random noise \\varepsilon for both low-and high-dimensional settings.In particular, we assume that the random covariates \\in ^d are sub-exponential and that the random noise \\varepsilon is heavy-tailed with finite second moment.", "}\\end{enumerate}\\begin{condition}Let = (^{) be a positive definite matrix with \\lambda _u \\ge \\lambda _{\\max }() \\ge \\lambda _{\\min }() \\ge \\lambda _{l} > 0 and assume that \\lambda _l = 1 for simplicity.There exists \\nu _0 \\ge 1 such that (| ^{^{-1/2}|\\ge \\nu _0\\Vert \\Vert _{2}\\cdot t)\\le e^{-t} for all t \\in and \\in ^d.For notational convenience, let \\sigma _{}^2 = \\max _{1\\le j \\le d} \\sigma _{jj}, where \\sigma _{jj} is the j-th diagonal entry of .", "}}\\begin{condition}The random noise \\varepsilon has a finite second moment, i.e.,(\\varepsilon ^2|)\\le \\sigma _{\\varepsilon }^2<\\infty .", "Moreover, the conditional \\tau -expectile of \\varepsilon satisfies [w_\\tau (\\varepsilon )\\varepsilon |]=0, where w_\\tau (u) := | \\tau - \\mathbb {1}(u<0)|.\\end{condition}\\end{condition}\\end{algorithm}\\end{split}$" ], [ "Statistical Theory for the Retire Estimator in (", "In this section, we provide nonasymptotic error bounds for the retire estimator, $\\hat{\\beta }$ , under the regime in which $n>d$ but $d$ is allowed to diverge.", "Moreover, we establish a nonasymptotic Bahadur representation for $\\hat{\\beta }-\\beta ^*$ , based on which we construct a Berry-Esseen bound for a normal approximation.", "As mentioned in Section REF , the robustification bias $||\\beta ^*_\\gamma - \\beta ^* ||_2$ is inevitable due to the asymmetry nature of error term $\\varepsilon $ .", "Let ${\\tau } =\\min (\\tau , 1-\\tau )$ , $\\bar{\\tau } = \\max (\\tau , 1-\\tau )$ , and $A_1 \\ge 1$ be a constant satisfying $(^{ ^{-1/2} ) ^4 \\le A_1^4 \\Vert \\Vert ^4_2 for all \\in ^d.", "The following proposition reveals the fact that the robustification bias scales at the rate \\gamma ^{-1}, which decays as \\gamma grows.", "}$ Assume Conditions REF , , and hold.", "Provided that $\\gamma \\ge 2 \\sigma _\\varepsilon A_1^2 \\bar{\\tau } / \\underline{\\tau }$ , we have $\\Vert ^{1/2} (\\beta ^*_\\gamma - \\beta ^* ) \\Vert _{2} \\le 2 \\gamma ^{-1} (\\bar{\\tau }/\\underline{\\tau }) \\sigma ^2_\\varepsilon .$ The key to our subsequent analysis for the retire estimator $\\hat{\\beta }$ is the strong convexity property of the empirical loss function $_n(\\cdot )$ uniformly over a local ellipsoid centered at $\\beta ^*$ with high probability.", "Let $\\kappa _1 = \\min _{|u| \\le 1} \\ell ^{\\prime \\prime }(u)$ , $\\mathbb {B}_{} (r) = \\lbrace \\delta \\in ^d: \\Vert ^{1/2} \\delta \\Vert _{2} \\le r \\rbrace $ be an ellipsoid.", "We characterize the strong convexity of $_n(\\cdot )$ in Lemma REF .", "With the aid of Lemma REF , we establish a non-asymptotic error bound for the retire estimator 
$\\hat{\\beta }$ in Theorem REF .", "Let $(\\gamma , n)$ satisfy $\\gamma \\ge 4\\sqrt{2} \\max \\lbrace \\sigma _{\\varepsilon }, 2A_1^2r \\rbrace $ and $n \\gtrsim ({\\gamma }/{r})^2(d+t)$ .", "Under Conditions REF , , and , with probability at least $1 - e^{-t}$ , we have $\\langle \\nabla _n(\\beta ) - \\nabla _n (\\beta ^*) , \\beta - \\beta ^* \\rangle \\ge \\frac{1}{2} \\kappa _1 {\\tau }\\Vert ^{1/2}(\\beta -\\beta ^*) \\Vert _{2}^2 \\mbox{~~uniformly over~~} \\beta \\in \\beta ^* + \\mathbb {B}_{}(r) .$ Assume Conditions REF , , and hold.", "For any $t>0$ , the retire estimator $\\hat{\\beta }$ in (REF ) with $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)}$ satisfies the bound $\\Vert ^{1/2} (\\hat{\\beta } - \\beta ^*) \\Vert _{2} \\le C(\\bar{\\tau }/\\underline{\\tau })\\kappa _1^{-1} \\sigma _\\varepsilon v_0 \\sqrt{\\frac{d+t}{n}},$ with probability at least $1-2e^{-t}$ as long as $n \\gtrsim d+t$ , where $C>0$ is an absolute constant.", "Theorem REF shows that under the sub-exponential design with heavy-tailed random errors with bounded second moment, the retire estimator $\\hat{\\beta }$ exhibits a sub-Gaussian type deviation bound, provided that the robustification parameter is properly chosen, i.e., $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)}$ .", "In other words, the proposed retire estimator gains robustness to heavy-tailed random noise without compromising statistical accuracy.", "The choice of $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)}$ in Theorem REF is a reflection of the bias and robustness trade-off for the retire estimator $\\hat{\\beta }$ .", "Intuitively, a large $\\gamma $ creates less robustification bias but sacrifices robustness.", "More specifically, we shall see from the proof of Theorem REF that conditioning on the event $\\lbrace \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}_{}(r_{\\rm loc}) \\rbrace $ , $\\Vert ^{1/2} (\\hat{\\beta } - \\beta ^*) \\Vert _{2} \\lesssim \\underbrace{\\frac{\\sigma ^2_\\varepsilon }{\\gamma }}_{{\\rm robustification~bias}} + \\, \\underbrace{ \\sigma _\\varepsilon \\sqrt{\\frac{ d}{n}} + \\gamma \\frac{ d}{n} }_{{\\rm statistical~error}}$ with high probability.", "Therefore, we choose $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)}$ to minimize the right-hand side as a function of $\\gamma $ .", "Next, we establish nonasymptotic Bahadur representation for the difference $\\hat{\\beta } - \\beta ^*$ .", "To this end, we need slightly stronger conditions on both the random covariates $$ and the random noise $\\varepsilon $ .", "In particular, we require that the random covariates $$ to be sub-Gaussian and that the conditional density of the random noise $\\varepsilon $ is upper bounded.", "We formalize the above into the following conditions.", "There exists $\\nu _1 \\ge 1$ such that $(| ^{ ^{-1/2}| \\ge v_1 ||||_2 t) \\le 2e^{-t^2/2} for all t \\in and \\in ^d.", "}$ Let $f_{\\varepsilon |}(\\cdot )$ be the conditional density function of the random noise $\\varepsilon $ .", "There exists $\\bar{f}_{\\varepsilon |} > 0$ such that $\\sup _{u \\in }f_{\\varepsilon |}(u) \\le \\bar{f}_{\\varepsilon |}$ almost surely (for all $$ ).", "Recall that $w_\\tau (u) = | \\tau - \\mathbb {1}(u<0)|$ and that $\\zeta (u) = L_{\\tau , \\gamma }^{\\prime }(u) = w_\\tau (u) \\ell _\\gamma ^{\\prime }(u) $ .", "Moreover, let ${\\mathbf {J}} = \\lbrace w_\\tau (\\varepsilon ) ^{ \\rbrace be the Hessian matrix.", "Theorem~\\ref {ld thm 2 bahadur representation} establishes the Bahadur representation of the retire estimator 
\\hat{\\beta }.Specifically, we show that the remainder of the Bahadur representation for \\hat{\\beta } exhibits sub-exponential tails, which we will use to establish the Berry-Esseen bound for linear projections of \\hat{\\beta }- \\beta ^* in Theorem~\\ref {ld thm 3 asym normality}.", "}$ Assume Conditions REF , , REF , and REF hold.", "For any $t>0$ , the retire estimator $\\hat{\\beta }$ given in (REF ) with $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)}$ satisfies the following nonasymptotic Bahadur representation $\\bigg \\Vert ^{-1/2} {\\mathbf {J}} (\\hat{\\beta } - \\beta ^*) - \\frac{1}{n} \\sum _{i=1}^n \\zeta (\\varepsilon _i) ^{-1/2}_i \\bigg \\Vert _2 \\le C \\sigma _\\varepsilon \\cdot {\\frac{d+t}{n}}$ with probability at least $1-3e^{-t}$ as long as $n \\gtrsim d+t$ , where $C>0$ is a constant independent of $(n,d)$ and $t$ .", "Under the same set of conditions as in Theorem REF , assume further that $( |\\varepsilon |^3 | ) \\le v_3 < \\infty $ (almost surely).", "Then, the retire estimator $\\hat{\\beta }$ in (REF ) with $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+\\log n)}$ satisfies $\\sup _{\\in ^d, z \\in } \\big | (n^{1/2} \\langle , \\hat{\\beta } - \\beta ^* \\rangle \\le \\sigma z) - \\Phi (z) \\big | \\lesssim \\frac{d + \\log n}{\\sqrt{n}},$ where $\\sigma ^2 = \\sigma ^2() := ^{ {\\mathbf {J}}^{-1} \\lbrace \\zeta ^2(\\varepsilon ) ^{ \\rbrace {\\mathbf {J}}^{-1} and \\Phi (\\cdot ) is the standard normal cumulative distribution function.", "}}$ Theorem REF shows that with a diverging parameter $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+\\log n)}$ , for any $\\in ^d$ , the linear projection of $\\hat{\\beta }-\\beta ^*$ is asymptotically normal after some standardization as long as $(n,d)$ satisfies the scaling condition $d = o(\\sqrt{n})$ ." 
], [ "Statistical Theory for the Iteratively Reweighted $\\ell _1$ -Penalized Retire Estimator ", "In this section, we analyze the sequence of estimators $\\lbrace \\hat{\\beta }^{(t)}\\rbrace _{t=1}^T$ obtained in () under the high-dimensional regime in which $d >n$ .", "Throughout the theoretical analysis, we assume that the regression parameter $\\beta ^*\\in ^d$ in model (REF ) is exactly sparse, i.e., $\\beta ^*$ has $s$ non-zero coordinates.", "Let $= \\lbrace 1\\le j \\le d: \\beta _j^*\\ne 0\\rbrace $ be the active set of $\\beta ^*$ with cardinality $||=s$ .", "Recall that ${\\tau } =\\min (\\tau , 1-\\tau )$ , $\\kappa _1 = \\min _{|u| \\le 1} \\ell ^{\\prime \\prime }(u)$ and $A_1>0$ is a constant that satisfies $(^{ ^{-1/2} ) ^4 \\le A_1^4 \\Vert \\Vert ^4_2 for all \\in ^d, where satisfies Condition \\ref {cond:covariates}.", "Similar to the low-dimensional setting, the key to our high-dimensional analysis is an event _{\\rm {rsc}} that characterizes the local restricted strong convexity property of the empirical loss function _n(\\cdot ) over the intersection of an \\ell _1-cone and a local \\ell _2-ball centered at \\beta ^* \\cite {LW2015}.Lemma~\\ref {lem:RSC} below shows that the event _{\\rm {rsc}} occurs with high probability for suitably chosen parameters.", "}$ Given radii parameters $r , L >0$ and a curvature parameter $\\kappa >0$ , define the event $_{\\rm {rsc}}(r,L,\\kappa ) = \\Bigg \\lbrace \\inf _{\\beta \\in \\beta ^* + \\mathbb {B}(r) \\cap (L)} \\frac{\\langle \\nabla _n(\\beta ) - \\nabla _n(\\beta ^*) , \\beta - \\beta ^* \\rangle }{\\Vert \\beta - \\beta ^* \\Vert _2^2 } \\ge \\kappa \\Bigg \\rbrace ,$ where $\\mathbb {B}(r) = \\lbrace \\delta \\in ^d: \\Vert \\delta \\Vert _2 \\le r \\rbrace $ is an $\\ell _2$ -ball with radius $r$ , and $(L)=\\lbrace \\delta : \\Vert \\delta \\Vert _1 \\le L\\Vert \\delta \\Vert _2 \\rbrace $ is an $\\ell _1$ -cone.", "Let the radii parameters $(r,L)$ and the robustification parameter $\\gamma $ satisfy $\\gamma \\ge 4\\sqrt{2} \\lambda _u \\max \\lbrace \\sigma _{\\varepsilon }, 2A_1^2r \\rbrace \\qquad \\mathrm {and} \\qquad n \\gtrsim ({\\sigma _{} \\nu _0 \\gamma }/{r})^2 (L^2\\log d + t).$ Then, under Conditions REF , , and , event $_{\\rm {rsc}}(r,L,\\kappa )$ with $\\kappa = \\kappa _1 {\\tau }/ 2$ occurs with probability at least $1-e^{-t}$ .", "Under the local restricted strong convexity, in Theorem REF , we provide an upper bound on the estimation error of $\\hat{\\beta }^{(1)}$ , i.e., the $\\ell _1$ -penalized retire estimator.", "Assume Conditions REF , , and hold.", "Then, the $\\ell _1$ -penalized retire estimator $\\hat{\\beta }^{(1)}$ with $\\gamma = \\sigma _{\\varepsilon }\\sqrt{{n}/{(\\log d + t)}}$ and $\\lambda \\asymp \\sqrt{{(\\log d + t)}/{n}}$ satisfies the bounds $\\Vert \\hat{\\beta }^{(1)} - \\beta ^* \\Vert _2 \\le 3(\\kappa _1 {\\tau } )^{-1} s^{1/2} \\lambda \\qquad \\mathrm {and}\\qquad \\Vert \\hat{\\beta }^{(1)} - \\beta ^* \\Vert _1\\le 12(\\kappa _1 {\\tau } )^{-1} s \\lambda ,$ with probability as least $1-3e^{-t}$ .", "Theorem REF shows that with an appropriate choice of the tuning parameters $\\gamma $ and $\\lambda $ , the $\\ell _1$ -penalized robust expectile regression satisfies exponential deviation bounds with near-optimal convergence rate as if sub-Gaussian random noise were assumed [16].", "Condition  can be further relaxed to accommodate heavy-tailed random error with finite $(1+\\phi )$ moment with $0<\\phi <1$ .", "Specifically, it can be shown that under the $\\ell 
_2$ norm, the estimation error of the $\\ell _1$ -penalized Huber regression estimator takes the form $s^{1/2} \\lbrace \\log (d)/n\\rbrace ^{\\min \\lbrace \\phi /(1+\\phi ),1/2\\rbrace }$ [35], [36].", "Similar results can be obtained for the proposed $\\ell _1$ -penalized retire estimator and we leave it for future work.", "Throughout this section, we assume that the underlying regression parameter $\\beta ^* \\in ^d$ is exactly sparse.", "In this case, iteratively reweighted $\\ell _1$ -penalization helps reduce the estimation bias from $\\ell _1$ -penalization as signal strengthens.", "For weakly sparse vectors $\\beta ^*$ satisfying $\\sum _{j=1}^d |\\beta _j^*|^q \\le R_q$ for some $0< q\\le 1$ and $R_q >0$ , [14] showed that the convergence rate (under $\\ell _2$ -norm) of the $\\ell _1$ -penalized adaptive Huber estimator with a suitably chosen robustification parameter is of order $(\\sigma \\sqrt{R_q} \\, \\lbrace \\log (d) / n \\rbrace ^{1/2 - q/4 })$ .", "Using the same argument, the results in Theorem REF can be directly extended to the weakly sparse case where $\\beta ^*$ belongs to an $L_q$ -ball for some $0<q \\le 1$ .", "For recovering weakly sparse signals, folded-concave penalization no longer improves upon $\\ell _1$ -penalization, and therefore we will not provide details on such an extension.", "Next, we establish the statistical properties for the entire sequence of estimators $\\hat{\\beta }^{(1)}, \\hat{\\beta }^{(2)},\\ldots , \\hat{\\beta }^{(T)}$ obtained from solving the convex optimization problem () iteratively.", "Let $\\Vert \\beta _{}^*\\Vert _{\\min }= \\min _{j\\in } |\\beta _j^*|$ be the smallest (in absolute value) non-zero regression coefficient.", "Under a beta-min condition, we show that the estimation error of $\\hat{\\beta }^{(1)}$ stated in Theorem REF can be refined.", "More generally, given the previous iterate $\\hat{\\beta }^{(T-1)}$ , the estimation error of the subsequent estimator, $\\hat{\\beta }^{(T)}$ , can be improved by a $\\delta $ -fraction for some constant $\\delta \\in (0,1)$ .", "Let $p_0(\\cdot )$ be a penalty function satisfying Condition .", "Under Conditions REF , and , assume there exist some constants $a_1 > a_0 >0$ such that $a_0 > {\\sqrt{5}}/(\\kappa _1 {\\tau }),~~~~ p^{\\prime }_0(a_0 ) >0, ~~~~ p^{\\prime }_0(a_1) =0.$ Assume further the minimum signal strength condition $\\Vert \\beta ^*_{} \\Vert _{\\min } \\ge (a_0+a_1) \\lambda $ and the sample size requirement $n\\gtrsim s \\log d + t$ .", "Picking $\\gamma \\asymp \\sigma _{\\varepsilon } \\sqrt{n/(s + \\log d + t)}$ and $\\lambda \\asymp \\sigma _{\\varepsilon } \\sqrt{(\\log d +t) / n}$ , we have $\\Vert \\hat{ \\beta }^{(T)} - \\beta ^* \\Vert _2\\lesssim \\delta ^{T-1} \\sigma _{\\varepsilon } \\sqrt{\\frac{s\\, (\\log d + t)}{n}} + \\frac{\\sigma _{\\varepsilon }}{1-\\delta } \\sqrt{\\frac{s+\\log d +t}{n}}, $ with probability at least $1-4e^{-t}$ .", "Furthermore, setting $T \\gtrsim \\frac{\\log \\lbrace \\log (d) + t \\rbrace }{\\log (1/\\delta )}$ , we have $\\Vert \\hat{ \\beta }^{(T)} - \\beta ^* \\Vert _2\\lesssim \\sigma _{\\varepsilon } \\sqrt{\\frac{s+\\log d +t}{n}} \\\\\\mbox{ and }~ \\Vert \\hat{ \\beta }^{(T)} - \\beta ^* \\Vert _1\\lesssim \\sigma _{\\varepsilon } s^{1/2} \\sqrt{\\frac{s+\\log d +t}{n}} $ with probability at least $1-4e^{-t}$ , where $\\delta = \\sqrt{5}/( a_0 \\kappa _1 {\\tau }) < 1$ .", "Theorem REF shows that under the beta-min condition $\\Vert \\beta ^*_{} \\Vert _{\\min } \\gtrsim \\sqrt{\\log (d)/n}$ , the 
iteratively reweighted $\\ell _1$ -penalized retire estimator $\\hat{\\beta }^{(T)}$ with $T \\asymp \\log \\lbrace \\log (d)\\rbrace $ achieves the near-oracle convergence rate, i.e., the convergence rate of the oracle estimator that has access to the true support of $\\beta ^*$ .", "This is also known as the weak oracle property.", "Picking $t= \\log d$ , we see that iteratively reweighted $\\ell _1$ -penalization refines the statistical rate from $\\sqrt{s\\log (d)/n}$ for $\\hat{\\beta }^{(1)}$ to $\\sqrt{(s+ \\log d) /n}$ for $\\hat{\\beta }^{(T)}$ .", "Theorem REF reveals the so-called weak oracle property in the sense that the regularized estimator $\\hat{ \\beta }^{(T)}$ enjoys the same convergence rate as the oracle estimator defined by regressing only on the significant predictors.", "To obtain such a result, the required minimum signal strength $\\Vert \\beta ^*_{} \\Vert _{\\min } \\gtrsim \\sqrt{\\log (d)/n}$ is almost necessary and sufficient.", "To see this, consider the linear model $y_i = _i^^*+ \\varepsilon _i$ with $\\varepsilon _i \\sim N(0, \\sigma ^2)$ independent of $_i$ , and define the parameter space $\\Omega _{s, a} = \\lbrace \\beta \\in ^d: \\Vert \\beta \\Vert _0 \\le s, \\min _{j : \\beta _j \\ne 0} |\\beta _j| \\ge a\\rbrace $ for $a>0$ .", "Under the assumption that the design matrix $\\mathbb {X} = (_1, \\ldots , _n)^^{n\\times d}$ satisfies a restricted isometry property and has normalized columns, [28] derived the following sharp lower bounds for the minimax risk $\\psi (s,a) := \\inf _{\\hat{\\beta }} \\sup _{\\beta ^* \\in \\Omega _{s,a}} \\Vert \\hat{\\beta }- \\beta ^* \\Vert _2^2$ : for any $\\epsilon \\in (0,1)$ , $\\psi (s,a) \\ge \\lbrace 1 + o(1) \\rbrace \\frac{2\\sigma ^2 s \\log (e d/s)}{n} ~\\mbox{ for any } a \\le (1-\\epsilon ) \\sigma \\sqrt{\\frac{2\\log (ed/s)}{n}} \\\\\\mbox{ and }~ \\psi (s,a) \\ge \\lbrace 1 + o(1) \\rbrace \\frac{ \\sigma ^2 s }{n} ~\\mbox{ for any } a \\ge (1 + \\epsilon ) \\sigma \\sqrt{\\frac{2\\log (ed/s)}{n}} ,$ where the limit corresponds to $s/d \\rightarrow 0$ and $s\\log (ed/s) /n \\rightarrow 0$ .", "The minimax rate $2\\sigma ^2 s \\log (ed/s)/n$ is attainable by both Lasso and Slope [4], while the oracle rate $\\sigma ^2 s/n$ can only be achieved when the magnitude of the minimum signal is of order $\\sigma \\sqrt{\\log (d/s)/n}$ .", "The beta-min condition imposed in Theorem REF is thus (nearly) necessary and sufficient, and is the weakest possible within constant factors.", "Under a stronger beta-min condition $\\Vert \\beta ^*_{} \\Vert _{\\min } \\gtrsim \\sqrt{s\\log (d)/n}$ , [16] showed that with high probability, the IRW-$\\ell _1$ expectile regression estimator (initialized by zero) coincides with the oracle estimator after three iterations.", "This is known as the strong oracle property.", "Based on the more refined analysis by [31], we conjecture that the IRW-$\\ell _1$ retire estimator $\\hat{\\beta }^{(T)}$ with $T\\asymp \\log (s \\vee \\log d)$ achieves the strong oracle property provided $\\Vert \\beta ^*_{} \\Vert _{\\min } \\gtrsim \\sqrt{\\log (d)/n}$ without the $\\sqrt{s}$ -factor." 
], [ "Simulated Data", "We evaluate the performance of the proposed IRW-$\\ell _1$ -penalized retire estimator via extensive numerical studies.", "We implement the $\\ell _1$ -penalized retire and the IRW-$\\ell _1$ -penalized retire using SCAD-based weights with $T=3$ , which we compare to three other competitive methods: (i) $\\ell _1$ -penalized Huber regression (huber); (ii) $\\ell _1$ -penalized asymmetric least squares regression (sales) proposed by [16], and (iii) $\\ell _1$ -penalized quantile regression (qr) implemented via the R package rqPen [33].", "To assess the performance across different methods, we report the estimation error under the $\\ell _2$ -norm, i.e., $\\Vert \\hat{\\beta }-\\beta ^*\\Vert _2$ , the true positive rate (TPR), and the false positive rate (FPR).", "Here, TPR is defined as the proportion of the number of correctly identified non-zeros and the false positive rate is calculated as the proportion of the number of incorrectly identified nonzeros.", "Note that huber and sales are special cases of retire by taking $\\tau = 0.5$ and $\\gamma \\rightarrow \\infty $ , respectively.", "Thus, both huber and sales can be implemented via Algorithm .", "For all methods, the sparsity inducing tuning parameter $\\lambda $ is selected via ten-fold cross-validation.", "Specifically, for methods retire, huber, and sales, we select the largest tuning parameter that yields a value of the asymmetric least squares loss that is less than the minimum of the asymmetric least squares loss plus one standard error.", "For qr, we use the default cross validation function in R package rqPen to select the largest tuning parameter that yields a value of its corresponding loss function that is the minimum of the quantile loss function.", "Both huber and $\\ell _1$ -penalized retire require tuning an additional robustness parameter $\\gamma $ .", "We propose to select $\\gamma $ using a heuristic tuning method that involves updating $\\gamma $ at the beginning of each iteration in Algorithm .", "Let $r_i^k = y_i - _i^{ \\hat{\\beta }^{k-1}, i = 1,\\ldots ,n be the residuals, where \\hat{\\beta }^{k-1} is obtained from the (k-1)-th iteration of Algorithm~\\ref {Alg:general}.Let \\tilde{r}_i^k = (1-\\tau )r_i^k \\mathbb {1}_{r_i^k \\le 0} + \\tau r_i^k \\mathbb {1}_{r_i^k > 0} be the asymmetric residuals, and let\\tilde{}^k = (\\tilde{r}_1^k,\\ldots ,\\tilde{r}_n^k)^{.", "We define \\mathrm {mad}(\\tilde{}^k) = \\lbrace \\Phi ^{-1}(0.75)\\rbrace ^{-1}\\text{median}(|\\tilde{}^k - \\text{median}(\\tilde{}^k)|) as the median absolute deviation of the asymmetric residuals, adjusted by a factor \\Phi ^{-1}(0.75).We start with setting \\gamma =\\sqrt{ n / \\log (n p)}.At the k-th iteration of Algorithm~\\ref {Alg:general}, we update the robustification parameter by\\begin{equation}\\gamma ^{k} = \\text{mad}(\\tilde{}^{k}) \\cdot \\sqrt{\\frac{n}{\\log {(n p)}}}.\\end{equation}Throughout our numerical studies, we have found that \\gamma chosen using the above heuristic approach works well across different scenarios.", "}}For all of the numerical studies, we generate the covariates $ i$ from a multivariate normal distribution $ N(0, = (jk)1j, k d)$ with $ jk = 0.5| j-k|$.", "We then generate the response variable $ yi$ from one of the following three models:\\begin{enumerate}\\item Homoscedastic model:\\begin{equation}y_i = _i^{ \\beta ^*+\\epsilon _i,}\\end{equation}\\item Quantile heteroscedastic model:\\begin{equation}y_i = _i^{ \\beta ^*+{ (0.5 |x_{id}| +0.5) }\\lbrace \\epsilon _{i} - 
F_{\\epsilon _{i}}^{-1}(\\tau ) \\rbrace ,}\\end{equation}\\item Expectile heteroscedastic model:\\end{enumerate}\\begin{equation}y_i = _i^{ \\beta ^*+{ (0.5 |x_{id}| +0.5) }\\lbrace \\epsilon _{i} - e_{\\tau }(\\epsilon _{i})\\rbrace ,}\\end{equation}where $ i$ is the random noise, $ Fi-1()$ denotes the inverse cumulative distribution function of $ i$, and $ e(i)$ denotes the inverse of the expectile function of $ i$.", "Note that under Gaussian and t-distributed noises, the two models~(\\ref {exphetmodel}) and~(\\ref {hetmodel}) are the same for $ =0.5$.We set the regression coefficient vector $$\\beta $ *= (1*, 2*,..., d*)$ as $ 1* = 2$ (intercept), $ *j ={1.8,1.6,1.4,1.2,1,-1,-1.2,-1.4,-1.6,-1.8}$ for $ j = 2,4,...,20$, and 0 otherwise.The random noise is generated from either a Gaussian distribution, $ N(0,2)$, or a $ t$ distribution with 2.1 degrees of freedom.", "For the heteroscedastic models, we consider two quantile/expectile levels $ ={0.5,0.8}$.The results, averaged over 100 repetitions, are reported in Tables~\\ref {tab:method4CV}--\\ref {tab:method5} for the moderate- ($ n=400,  d=200$) and high-dimensional ($ n=400,  d=500$) settings.$ Table REF contains results ($\\tau =0.5$ ) under the homoscedastic model with normally and $t$ -distributed noise.", "For Gaussian noise, the four $\\ell _1$ -penalized estimators have similar performance, and both the estimation error and FPR of IRW retire (with SCAD) are notably reduced.", "Under the $t_{2.1}$ noise, we see that retire gains considerable advantage over sales in both estimation and model selection accuracy, suggesting that the proposed estimator gains robustness without compromising statistical accuracy.", "Tables REF and REF show results under the quantile heteroscedastic model with the Gaussian and $t_{2.1}$ noise, respectively.", "Two quantile levels $\\tau =\\lbrace 0.5, 0.8\\rbrace $ are considered.", "We see that huber and $\\ell _1$ -penalized retire have the same performance when $\\tau =0.5$ since they are equivalent for the case when $\\tau =0.5$ .", "Moreover, IRW retire has the lowest estimation error among all methods under the Gaussian noise.", "When $\\tau =0.8$ , the performance of huber deteriorates since huber implicitly assumes $\\tau =0.5$ and there is a non-negligible bias when $\\tau =0.8$ .", "Finally, from Table REF under the expectile heteroscedastic model, we see that the proposed estimator has an even lower estimation error than that of the qr.", "We want to point out that in general, under the $t_{2.1}$ noise, the quantile regression method qr has an advantage because the quantile loss is more robust to outliers than all of the other methods.", "While qr exhibits an advantage in terms of estimation error, it is not as computationally efficient as retire, which we will show in Section REF .", "In summary, the numerical studies confirm IRW retire as a robust alternative to its least squares counterpart sales and as a computationally efficient surrogate for the penalized quantile regression approach.", "Table: Homoscedastic model () with Gaussian noise (ϵ∼N(0,2)\\epsilon \\sim N(0, 2)) and t 2.1 t_{2.1} noise (ϵ∼t 2.1 ) \\epsilon \\sim t_{2.1}).", "Estimation error under ℓ 2 \\ell _2-norm (and its standard deviation), true positive rate (TPR) and false positive rate (FPR), averaged over 100 repetitions, are reported.Table: Heteroscedastic model () with Gaussian noise (ϵ∼N(0,2)\\epsilon \\sim N(0, 2)) and quantile levels τ={0.5,0.8}\\tau = \\lbrace 0.5, 0.8\\rbrace .Table: Heteroscedastic model () with 
$t_{2.1}$ noise ($\epsilon \sim t_{2.1}$) and quantile levels τ={0.5,0.8}\tau = \lbrace 0.5, 0.8\rbrace .Table: Heteroscedastic model () with Gaussian noise (ϵ∼N(0,2)\epsilon \sim N(0, 2)) and t 2.1 t_{2.1} noise (ϵ∼t 2.1 \epsilon \sim t_{2.1}), under the τ\tau -expectile =0.8=0.8." ], [ "Timing Comparison", "In this section, we show using additional numerical studies that the proposed $\ell _1$ -penalized retire estimator has a significant computational advantage over the $\ell _1$ -penalized qr.", "We implement retire and qr using the R packages adaHuber and rqPen, respectively.", "For both methods, their corresponding sparsity regularization parameter is selected from a sequence of 50 $\lambda $ -values via ten-fold cross-validation.", "The robustification parameter $\gamma $ for retire is selected using the data-adaptive procedure described in Section REF .", "We generate the data from the homoscedastic model () with the same setup as in Section REF .", "Results, averaged over 100 independent data sets, for $n=d/2$ and $d=\lbrace 100,200,300,400,500\rbrace $ are summarized in Figure REF .", "The curves in panels (a) and (c) of Figure REF represent the estimation error (under the $\ell _2$ norm) as a function of the dimension $d$ , and the curves in panels (b) and (d) of Figure REF represent the computational time (in seconds) as a function of the dimension $d$ .", "Under the Gaussian random noise, $\epsilon \sim N(0,2)$ , the $\ell _1$ -penalized retire has slightly lower estimation error than $\ell _1$ -penalized qr, and both estimation errors decrease as $n$ and $d$ grow.", "On the other hand, the $\ell _1$ -penalized qr performs better under the $t_{2.1}$ noise since the quantile loss is more robust to outliers than the Huber-type loss.", "Computationally, the $\ell _1$ -penalized retire, implemented via the adaHuber package, exhibits a significant improvement over the $\ell _1$ -penalized qr, implemented via the rqPen package, especially when $d$ is large.", "Figure: Elapsed time for model () with $t_{2.1}$ error." 
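Since both the simulations above and the data applications that follow rely on the data-adaptive choice of the robustification parameter, we record the update rule of Section REF here as a short sketch (Python, for illustration only; the function name is ours). It is a direct transcription of the MAD-based rule: the asymmetric residuals are rescaled by their median absolute deviation, adjusted by the factor $\lbrace \Phi ^{-1}(0.75)\rbrace ^{-1}$ , and multiplied by $\sqrt{n/\log (np)}$ .
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def update_gamma(residuals, tau, n, p):
    # gamma^k = mad(asymmetric residuals) * sqrt(n / log(n * p)),
    # where mad(r) = median(|r - median(r)|) / Phi^{-1}(0.75)
    r = np.asarray(residuals, dtype=float)
    r_tilde = np.where(r <= 0, (1.0 - tau) * r, tau * r)
    mad = np.median(np.abs(r_tilde - np.median(r_tilde))) / norm.ppf(0.75)
    return mad * np.sqrt(n / np.log(n * p))
\end{verbatim}
As described in Section REF , this update is applied at the beginning of each iteration of Algorithm REF , starting from the initial value $\gamma = \sqrt{n/\log (np)}$ .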
], [ "Job Training Partners Act Data", "We analyze the Job Training Partners Act (JTPA) data, previously studied in [1], using the retire estimator proposed in Section REF .", "The JTPA began funding federal training programs in 1983, and its largest component Title II supports training for the economically disadvantaged.", "Specifically, applicants who faced \"barriers to employment,\" the most common of which were high-school dropout status and long periods of unemployment, were typically considered eligible for JTPA training.", "The services offered as a part of training included classroom training, basic education, on-the-job training, job search assistance, and probationary employment.", "In this data set, applicants who applied for training evaluation between November 1987 and September 1989 were randomly selected to enroll for the JTPA training program.", "Of the 6,102 adult women in the study, 4,088 were offered training and 2,722 enrolled in the JTPA services, and of the 5,102 adult men in the study, 3,399 were offered training and 2,136 enrolled in the services.", "The goal is to assess the effect of subsidized training program on earnings.", "Motivated by [1], we use the 30-month earnings data collected from the Title II JTPA training evaluation study as the response variable.", "Moreover, we adjust for the following covariates: individual's sex (male=1, female=0), whether or not the individual graduated high school or obtained a GED (yes=1, no=0), whether or not the individual worked less than 13 weeks in the 12 months preceding random assignment (yes=1, no=0), whether or not the individual is black (yes=1, no=0), whether or not the individual is Hispanic (yes=1, no=0), and marriage status (married=1, not married=0).", "We study the conditional distribution of 30-month earnings at different expectile levels $\\tau = \\lbrace 0.1,0.5,0.9\\rbrace $ .", "Our proposed method involves robustification parameter $\\gamma $ , which we select using the tuning method described in Section REF .", "The regression coefficients and their associated 95% confidence intervals are shown in Table REF .", "We find that covariates with positive regression coefficients for all quantile levels are enrollment for JTPA services, individual's sex, high school graduation or GED status, and marriage status.", "Black, hispanic, and worked less than 13 weeks in the past year had negative regression coefficients.", "The regression coefficients varied across the three different expectile levels we considered.", "The positive regression coefficients increase as the $\\tau $ level increases and the negative regression coefficients decrease as the $\\tau $ level increases.", "That is, for the lower expectile level of 30-month earnings, the covariates have a smaller in magnitude effect on the individual's earnings compared to the higher expectile level.", "The regression coefficient for enrollment in JTPA services was 1685.34, 2637.57, and 2714.57 at $\\tau =\\lbrace 0.1,0.5,0.9\\rbrace $ , respectively.", "The $\\tau $ -expectile of 30-month earnings for $\\tau =\\lbrace 0.1,0.5,0.9\\rbrace $ is 5068.02, 15815.29, and 32754.89 dollars, respectively.", "Compared to the expectile at the given $\\tau $ , the effect of subsidized training was larger for lower expectile levels.", "Notably, if an individual is a male, conditional on other covariates, their 30-month earnings increase by 5,005 dollars for $\\tau =0.5$ and increase by 10,311 dollars for $\\tau =0.9$ .", "From the confidence intervals, we see that all 
], [ "Childhood Malnutrition Data", "We apply the IRW $\ell _1$ -penalized retire estimator with SCAD-based weights to the childhood malnutrition data.", "This data set was previously studied in [7] and [20].", "The data are collected from the Demographic and Health Surveys (DHS) conducted regularly in more than 75 countries.", "Similar to [7], in this analysis we focus on data collected from India, with a total sample size of 37,623.", "The children studied are between the ages of zero and five.", "The goal is to study the conditional distribution of children's height in India given the following covariates: the child's age, months of breastfeeding, mother's body mass index, mother's age, mother's education in years, partner's (father's) education in years, number of dead children in the family, and multiple categorical variables including but not limited to child's sex, child's birth order, mother's employment status, family's wealth (whether the family is in the poorest, poorer, middle, or richer bracket), electricity, television, refrigerator, bicycle, motorcycle, and car.", "Additionally, interactions between the following variables were considered: child's age, months of breastfeeding, child's sex, whether or not the child was a twin, mother's BMI, mother's age, mother's years of education, father's years of education, mother's employment status, and mother's residence.", "There are a total of 75 covariates: 30 individual variables and 45 two-way interactions.", "We aim to study the conditional distribution of children's height at different expectile levels $\tau = \lbrace 0.1,0.5,0.9\rbrace $ .", "Our proposed method involves two tuning parameters, $\gamma $ and $\lambda $ .", "The robustification parameter $\gamma $ was chosen following theoretical guidance, via the tuning method described in Section REF .", "The sparsity tuning parameter $\lambda $ is selected via ten-fold cross-validation using the one-standard-error rule: we choose the largest tuning parameter whose asymmetric least squares loss is no greater than the minimum of the asymmetric least squares loss plus one standard error.", "For a fair comparison, we apply the same sparsity tuning parameter across the three expectile levels.", "This is achieved by taking the maximum of the sparsity tuning parameters selected via ten-fold cross-validation at the three expectile levels.", "The selected tuning parameter takes the value $\lambda = 0.035$ .", "The regression coefficients that are non-zero for at least one value of $\tau $ are shown in Table REF .", "There are a total of 38 non-zero coefficients.", "The regression coefficients for months of breastfeeding vary across the three expectile levels we consider.", "At $\tau = 0.1$ , the coefficient is 0.445, while at $\tau = \lbrace 0.5,0.9\rbrace $ , the coefficients are 0.397 and 0.378, respectively.", "That is, at lower expectile levels of children's height, months of breastfeeding plays a more important role in ensuring that the child is not malnourished than at higher expectile levels.", "Other variables of interest are electricity, television, and motorcycle.", "At $\tau =0.1$ and $\tau =0.9$ , their regression coefficients are 0, suggesting that access to these resources plays less of a role in a child's height at the extreme expectile levels, since access becomes a given at $\tau =0.9$ and, conversely, is largely absent at $\tau =0.1$ .", "For $\tau =0.5$ , the coefficients for electricity, television, and motorcycle are 0.647, 0.367, and 0.587, respectively, suggesting that these resources are important at the central expectile level.", "Table: Non-zero regression coefficients for the IRW $\ell _1$ -penalized retire estimator with SCAD-based weights across three expectile levels $\tau =\lbrace 0.1,0.5,0.9\rbrace $ for the childhood malnutrition data."
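As a concrete illustration of the one-standard-error rule used above for $\lambda $ , the following minimal sketch (not the authors' code) selects the largest $\lambda $ whose cross-validated asymmetric least squares loss lies within one standard error of the smallest cross-validated loss. The array `cv_loss` of fold-wise losses is a hypothetical input; producing it requires the actual fitting routine.

```python
import numpy as np

def lambda_one_se(lambdas, cv_loss):
    """One-standard-error rule: lambdas is a 1-d grid, cv_loss is (n_lambda, n_folds)."""
    lambdas = np.asarray(lambdas, dtype=float)
    mean_loss = cv_loss.mean(axis=1)
    se_loss = cv_loss.std(axis=1, ddof=1) / np.sqrt(cv_loss.shape[1])
    best = np.argmin(mean_loss)                    # lambda with the smallest mean CV loss
    eligible = mean_loss <= mean_loss[best] + se_loss[best]
    return lambdas[eligible].max()                 # largest lambda within one SE of the minimum
```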
], [ "Derivation of Algorithm ", "In this section, we provide a derivation of Algorithm  for solving () under the Huber loss $\ell (u) = u^2/2 \cdot \mathbb {1}(|u| \le 1) + (|u| - 1/2) \cdot \mathbb {1}(|u| > 1)$ .", "Under the Huber loss, the loss function $L_{\tau ,\gamma }(u)$ takes the form $\begin{split}L_{\tau , \gamma }(u) &={\left\lbrace \begin{array}{ll}(1- \tau ) \cdot (\gamma \cdot |u| - \gamma ^2/2), & u/\gamma < -1, \\(1-\tau ) \cdot u^2/2, & -1 \le u/\gamma < 0, \\\tau \cdot u^2/2, & 0 \le u/\gamma \le 1, \\\tau \cdot (\gamma \cdot |u| - \gamma ^2/2), & u/\gamma > 1.\end{array}\right. }\end{split}$", "For notational simplicity, let $\lambda _j^{(t)} = p_{\lambda }^{\prime }(|\hat{\beta }_j^{(t-1)}|)$ .", "Then, optimization problem () reduces to $\underset{\beta \in \mathbb {R}^d}{\mathrm {minimize}}~\Big \lbrace \frac{1}{n} \sum _{i=1}^n L_{\tau ,\gamma }(y_i-\mathbf {x}_i^{\top }\beta )+ \sum _{j=2}^d \lambda _j^{(t)} |\beta _j| \Big \rbrace .$ We now derive the semismooth Newton coordinate descent (SNCD) algorithm proposed by \cite {YH2017} for solving~(\ref {eq:penr}).", "The main idea is to solve the system of equations, i.e., the KKT conditions in~(\ref {eq:kkt}), iteratively in a cyclic fashion using a semismooth Newton algorithm.", "In particular, we update the parameters $\beta _1, (\beta _2,z_2),\ldots ,(\beta _d,z_d)$ iteratively, one block at a time, while keeping the others fixed.", "In the following, we derive the explicit updates for the parameters at each iteration.", "We start with the intercept term $\beta _1$ .", "Let $r_i^{k} = y_i -\mathbf {x}_i^{\top } \beta ^{k}$ and $F_1^{k}(u) = -\frac{1}{n} \sum _{i=1}^n L^{\prime }_{\tau ,\gamma } (r_i^{k} + \beta _1^{k} -u)$ .", "At the $k$ th iteration, we update the parameter $\beta _1$ as $\beta _1^{k+1} = \beta _1^{k} - \lbrace H_1^{k}(\beta _1^{k})\rbrace ^{-1} F_1^{k}(\beta _1^{k}) = \beta _1^{k} + \frac{\sum _{i=1}^n L^{\prime }_{\tau ,\gamma }( y_i -\mathbf {x}_i^{\top } \beta ^{k})}{\sum _{i=1}^n L^{\prime \prime }_{\tau ,\gamma }( y_i -\mathbf {x}_i^{\top } \beta ^{k})},$ where $H_1^{k}(u)=\frac{1}{n} \sum _{i=1}^n L_{\tau ,\gamma }^{\prime \prime } (r_i^{k}+\beta _1^{k}-u)$ is the derivative of $F_1^{k}(u)$ .", "We now derive the update for $(\beta _j,z_j)$ .", "Let $F_{j}^{k}(v_1,v_2) = \begin{bmatrix} -\frac{1}{n}\sum _{i=1}^n L^{\prime }_{\tau ,\gamma }(r_{i}^{k} + x_{ij}\beta _{j}^{k} - x_{ij}v_{1})x_{ij} + \lambda _j^{(t)} v_{2} \\ v_{1} - S(v_{1} + v_{2})\end{bmatrix}, \qquad \mathrm {for}~j=2,\ldots ,d,$ where $S(u) = \mathrm {sign}(u)\max (|u|-1,0)$ is the soft-thresholding operator.", "The derivative of $F_j^{k}(v_1,v_2)$ then takes the form \begin{equation}H_j^k (v_1,v_2) = {\left\lbrace \begin{array}{ll}\begin{bmatrix} \frac{1}{n}\sum _{i=1}^n L^{\prime \prime }_{\tau ,\gamma }(r_{i}^{k} + x_{ij}\beta _{j}^{k} - x_{ij}v_{1})x_{ij}^{2} & \lambda _j^{(t)} \\ 0 & -1\end{bmatrix} & \mathrm {if} ~|\beta _j^k + z_j^k | > 1,\\\begin{bmatrix} \frac{1}{n}\sum _{i=1}^n L^{\prime \prime }_{\tau ,\gamma }(r_{i}^{k} + x_{ij}\beta _{j}^{k} - x_{ij}v_{1})x_{ij}^{2} & \lambda _j^{(t)} \\ 1 & 0\end{bmatrix}&\mathrm {otherwise}.\end{array}\right. }\end{equation} The update for $(\beta _j,z_j)$ then takes the form $\begin{bmatrix} \beta _j^{k+1} \\ z_j^{k+1} \end{bmatrix} = \begin{bmatrix} \beta _j^{k} \\ z_j^{k} \end{bmatrix} - \lbrace H_j^k(\beta _j^{k},z_j^{k})\rbrace ^{-1}F_j^k(\beta _j^{k},z_j^{k}).$ Substituting~(\ref {eq:Hjupdate}) into the above equation yields the updates in Algorithm~\ref {Alg:general}."
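As a sanity check on the formulas above, the following is a minimal sketch (in Python, not the authors' implementation) of the ingredients of this derivation: the first two derivatives of $L_{\tau ,\gamma }$ implied by the piecewise form, the soft-thresholding operator $S$ , and one semismooth-Newton update of the intercept $\beta _1$ . It assumes the first column of the design matrix is a column of ones, so that the first coefficient is the intercept.

```python
import numpy as np

def retire_loss_derivs(u, tau, gamma):
    """First and second derivatives of L_{tau,gamma} for the Huber-type piecewise loss above."""
    w = np.where(u < 0, 1.0 - tau, tau)               # asymmetric weight |tau - 1{u < 0}|
    inside = np.abs(u) <= gamma
    d1 = w * np.where(inside, u, gamma * np.sign(u))  # L'(u): linear inside, clipped outside
    d2 = w * inside.astype(float)                     # L''(u): w on [-gamma, gamma], 0 outside
    return d1, d2

def soft_threshold(u):
    """S(u) = sign(u) * max(|u| - 1, 0), as used in the KKT system."""
    return np.sign(u) * np.maximum(np.abs(u) - 1.0, 0.0)

def intercept_newton_step(beta, X, y, tau, gamma):
    """One update of the intercept beta_1, holding the remaining coordinates fixed."""
    r = y - X @ beta                                  # residuals r_i^k
    d1, d2 = retire_loss_derivs(r, tau, gamma)
    denom = d2.sum()
    return beta[0] + d1.sum() / denom if denom > 0 else beta[0]
```

Note that when `gamma` is large enough that no residual is clipped, these updates reduce to the asymmetric least-squares (expectile) case, since the loss is then purely quadratic with weights $\tau $ and $1-\tau $ .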
0\\end{bmatrix}&\\mathrm {otherwise}.", "\\\\\\end{array}\\right.", "}\\end{equation}The update for $ (j,zj)$ then takes the form{\\begin{@align*}{1}{-1}\\begin{bmatrix} \\beta _j^{k+1} \\\\ z_j^{k+1} \\end{bmatrix} &= \\begin{bmatrix} \\beta _j^{k} \\\\ z_j^{k} \\end{bmatrix} - \\lbrace H_j^k(\\beta _j^{k},z_j^{k})\\rbrace ^{-1}F_j^k(\\beta _j^{k},z_j^{k}).\\end{@align*}}Substituting~(\\ref {eq:Hjupdate}) into the above equation yields the updates in Algorithm~\\ref {Alg:general}.$" ], [ "Preliminary Results", "Given $\\tau \\in (0,1)$ , let $\\lbrace (y_i, _i)\\rbrace _{i=1}^n$ be a sample of independent data vectors from the linear regression model in (REF ), $y_i = _i^{\\beta ^*(\\tau ) + \\varepsilon _i(\\tau ), where \\varepsilon _i(\\tau ) satisfies e_\\tau (\\varepsilon _i | _i )=0.In other words, the conditional \\tau -mean of y_i give _i is a linear combination of _i.We suppress the dependency of \\beta ^*(\\tau ) and \\varepsilon (\\tau ) on \\tau throughout the Appendix.Let w_\\tau (u) := | \\tau - \\mathbb {1}(u<0)| and let \\ell _\\gamma (u) = \\gamma ^2 \\ell (u/\\gamma ).Recall from~(\\ref {retire.loss}) that L (u) := L_{\\tau ,\\gamma }(u) = w_\\tau (u) \\ell _{\\gamma }(u) and let\\begin{equation*}_n(\\beta ) = \\frac{1}{n} \\sum _{i=1}^nL(y_i - _i^) ~~\\mbox{ and}~~ \\nabla _n(\\beta ) = -\\frac{1}{n} \\sum _{i=1}^nL^{\\prime }(y_i - _i^) _i ,\\end{equation*}where L^{\\prime }(u) = \\gamma w_\\tau (u) \\ell ^{\\prime }(u/\\gamma ) is the first-order derivative of L(u).", "}For $$\\beta $ d$, let $ ($\\beta $ ) = n($\\beta $ ) - ($\\beta $ )$,where $ ($\\beta $ ) = {n($\\beta $ )}$ is the population loss.", "Moreover, we define the quantity $ * = n($\\beta $ *) - ($\\beta $ *)$as the centered score function.Recall from Definition~\\ref {def:rsc} that $ (L)={ $\\delta $ : $\\delta $ 1 L $\\delta $ 2 }$.", "Let $ 1 :={ $\\delta $ : $\\delta $ c 1 3 $\\delta $ 1 }$.Moreover, define the symmetrized Bregman divergence $ : pp [0,)$ associated with the convex function $ n()$ evaluated at $$\\beta $ 1, $\\beta $ 2$ as{\\begin{@align}{1}{-1}(\\beta _1, \\beta _2 ) = \\langle \\nabla _n(\\beta _1) - \\nabla _n (\\beta _2) , \\beta _1 - \\beta _2 \\rangle .", "\\end{@align}}Recall from Condition~\\ref {cond:covariates} that $ u ()$, where $ = ()$.", "Also recall from Condition~\\ref {cond:randomnoise} that $ (2 | ) 2$.$ We first present some technical lemmas that are useful for proving theoretical results in the low-dimensional setting for the non-penalized retire estimator in Section REF , i.e., $\\hat{\\beta } = \\hat{\\beta }_\\gamma = _{\\beta \\in ^d} \\frac{1}{n} \\sum _{i=1}^n L_{\\tau , \\gamma }(y_i - _i^{ \\beta ).", "}$ Under Conditions REF , , and , we have $ \\Vert ^{-1/2}\\nabla (\\beta ^*) \\Vert _2 \\le \\gamma ^{-1} \\bar{\\tau } \\sigma _\\varepsilon ^2$ .", "Moreover, for any $t>0$ , $\\big |\\big | ^{-1/2} \\big \\lbrace \\nabla _n(\\beta ^*) - \\nabla (\\beta ^*) \\big \\rbrace \\big |\\big |_2 \\le 3\\bar{\\tau }v_0 \\Bigg (\\sigma _\\varepsilon \\sqrt{\\frac{2d+t}{n}} + \\gamma \\frac{2d+t}{2n} \\Bigg )$ with probability at least $1 - e^{-t}$ .", "Let $\\varepsilon $ be a real-valued random variable with $(\\varepsilon ) = \\sigma _\\varepsilon ^2$ , $|\\varepsilon |^3 = v_3 < \\infty $ , and $\\lbrace w_\\tau (\\varepsilon )\\varepsilon \\rbrace = 0$ with $w_\\tau (u) = |\\tau - \\mathbb {1}(u<0)|$ .", "Let $\\ell _\\gamma (\\cdot )$ be the Huber loss with parameter $\\gamma $ .", "We have $\\big | \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime 
}(\\varepsilon ) \\rbrace \\big | \\le {\\bar{\\tau }v_3}/{\\gamma ^2}\\mbox{~~and~~}\\underline{\\tau }^2 \\bigg ( \\sigma _\\varepsilon ^2 - {v_3}/{\\gamma } \\bigg ) \\le \\Big \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime }(\\varepsilon ) \\Big \\rbrace ^2 \\le \\bar{\\tau }^2 \\sigma _\\varepsilon ^2.$ Next, we present some technical lemmas that are useful for analyzing the high-dimensional penalized retire estimator.", "Recall that the penalized retire estimator is obtain by solving optimization problem ().", "For notational convenience, throughout the Appendix, we define the minimizer of  () as $\\hat{\\beta }^{(t)} \\in _{\\beta \\in ^{d}} \\big \\lbrace _n(\\beta ) + \\Vert ^{(t)} \\circ \\beta \\Vert _1 \\big \\rbrace , $ where $^{(t)} = (\\lambda _1^{(t)}, \\ldots , \\lambda _d^{(t)} )^ is a $ d$-dimensional vector of tuning parameters with $ j(t) = p'(| j(t-1)|)$, and $$ is the Hadamard product.", "Throughout the proof, we drop the superscript from $$\\beta $ (t)$ and $ (t)$ when the context is clear.$ The proofs of all of the technical lemmas are deferred to Section .", "Under Conditions REF , , and , we have $ \\Vert \\nabla (\\beta ^*) \\Vert _2 \\le \\gamma ^{-1} \\bar{\\tau } \\lambda _{u}^{1/2}\\sigma _{\\varepsilon }^2$ and $\\Vert \\nabla (\\beta ^*) \\Vert _\\infty \\le \\gamma ^{-1} \\bar{\\tau } \\sigma _{} \\sigma ^2_{\\varepsilon }$ .", "Moreover, for any $t\\ge 0$ , $\\Vert ^* \\Vert _\\infty = \\Vert \\nabla _n (\\beta ^*) - \\nabla (\\beta ^*)\\Vert _\\infty \\le \\nu _0 \\sigma _{} \\bar{\\tau } \\Bigg (2\\sigma _{\\varepsilon } \\sqrt{\\frac{\\log d + t }{n}}+ \\gamma \\frac{\\log d + t }{n}\\Bigg )$ holds with probability at least $1-2e^{-t}$ .", "Lemma REF reveals the proper range for the penalty level $\\lambda $ so that event $ \\lbrace \\lambda \\ge 2\\Vert \\nabla _n(\\beta ^*) \\Vert _\\infty \\rbrace $ occurs with high probability.", "Let $$ be the active set of the true regression parameter $\\beta ^*$ , and $\\mathbf {S}= (_{} _{}^{) be the s \\times s principal submatrix of .", "Denote by \\lambda _{\\max }(\\mathbf {S}) the maximal eigenvalue of \\mathbf {S}.", "Write ^* = \\nabla _n (\\beta ^*) - \\nabla (\\beta ^*).", "The next lemma provides an upper bound for the centered score ^*, projected on the true support .", "}\\begin{lemma}Under Conditions~\\ref {cond:covariates}--\\ref {cond:randomnoise}, for any t > 0, we have{\\begin{@align*}{1}{-1}\\Vert ^*_{} \\Vert _2\\le 3\\bar{\\tau } \\nu _0 \\lambda ^{1/2}_{\\max }(\\mathbf {S}) \\Bigg ( \\sigma _{\\varepsilon } \\sqrt{\\frac{2s+t}{n}} + \\gamma \\frac{2s+t}{2n} \\Bigg ),\\end{@align*}}with probability at least 1-e^{-t}.\\end{lemma}$ The following two lemmas contain some results for the solution of (REF ).", "Both lemmas are essential for the proof of Proposition .", "Let $$ be a set such that $\\subseteq \\subseteq [d]$ .", "For any $\\beta \\in ^d$ , let $\\beta _{^{{\\rm c}}}=\\bf {0}$ .", "Assume that $\\Vert _{^{{\\rm c}}} \\Vert _{\\min } > \\Vert (\\beta ) \\Vert _{\\infty }$ .", "Then, any solution $\\hat{\\beta }$ to the optimization problem (REF ) satisfies $\\Vert (\\hat{\\beta }-\\beta )_{^{{\\rm c}}} \\Vert _1 \\le \\frac{ \\big \\lbrace \\Vert \\Vert _{\\infty } + \\Vert (\\beta ) \\Vert _{\\infty } \\big \\rbrace \\Vert (\\hat{\\beta }-\\beta )_{} \\Vert _1 + \\Vert \\nabla (\\beta )\\Vert _2 \\Vert \\hat{\\beta }-\\beta \\Vert _2}{\\Vert _{^{{\\rm c}}} \\Vert _{\\min }-\\Vert (\\beta ) \\Vert _{\\infty }}.$ Let $$ be a set such that $\\subseteq \\subseteq [d]$ and 
$||=k$ .", "Let $= (\\lambda _1,\\ldots ,\\lambda _d)^{ be a vector of tuning parameters that satisfies \\Vert \\Vert _{\\infty } \\le \\lambda and \\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\ge a\\lambda for some constant a \\in (0,1] and \\lambda \\ge s^{-1/2} \\Vert \\nabla (\\beta ^*) \\Vert _2.", "Then, under the event \\lbrace a \\lambda \\ge 2\\Vert ^* \\Vert _\\infty \\rbrace , any solution \\hat{\\beta } to~(\\ref {general.lasso2}) satisfies \\hat{\\beta } \\in \\beta ^* + (L) with L=(2+2/a)k^{1/2} + 2 s^{1/2}/a.In addition, let \\kappa , r >0 satisfy r > \\kappa ^{-1}(2s^{1/2}+ {k^{1/2}a}/{2})\\lambda .Then, under the event _{\\rm {rsc}}(r,L,\\kappa ), we have{\\begin{@align*}{1}{-1}\\Vert \\hat{\\beta } - \\beta ^* \\Vert _2&\\le \\kappa ^{-1} \\big \\lbrace (2s^{1/2}+ {k^{1/2}a}/{2})\\lambda \\big \\rbrace < r.\\end{@align*}}}$" ], [ "Proof of Theorem ", "Recall from (REF ) that $\\hat{\\beta } = ~_n(\\beta )$ and from () that $(\\beta , \\beta ^*) = \\langle \\nabla _n(\\beta ) - \\nabla _n(\\beta ^*) , \\beta - \\beta ^* \\rangle $ is the symmetric Bregman divergence.", "The main idea is to establish lower and upper bounds for $(\\hat{\\beta }, \\beta ^*)$ .", "We start with obtaining a lower bound for $(\\hat{\\beta }, \\beta ^*)$ .", "Let $r_{\\rm loc} = \\gamma / (8\\sqrt{2}A_1^2)$ and define an intermediate quantity $\\hat{\\beta }_\\eta = \\eta \\hat{\\beta } + (1-\\eta )\\beta ^* $ , where $\\eta = \\sup \\lbrace \\eta \\in [0,1]: \\hat{\\beta }_\\eta \\in \\beta ^* + \\mathbb {B}_{}(r_{\\rm loc}) \\rbrace $ .", "Then $\\hat{\\beta }_\\eta \\in \\beta ^* + \\partial \\mathbb {B}_{}(r_{\\rm loc}) $ whenever $\\hat{\\beta } \\notin \\beta ^* + \\mathbb {B}_{}(r_{\\rm loc})$ , where $\\partial \\mathbb {B}_{}(r_{\\rm loc})$ is the boundary of $\\mathbb {B}_{}(r_{\\rm loc})$ .", "On the other hand, $\\hat{\\beta }_\\eta = \\hat{\\beta }$ whenever $\\hat{\\beta } \\in \\beta ^* + \\mathbb {B}_{}(r_{\\rm loc})$ .", "By an application of Lemma REF , provided that $\\gamma \\ge 4\\sqrt{2}\\sigma _\\varepsilon $ and $n \\gtrsim d+t$ , we obtain $(\\hat{\\beta }_\\eta , \\beta ^*)\\ge \\frac{1}{2} \\kappa _1 \\underline{\\tau } \\Vert \\hat{\\beta }_\\eta -\\beta ^* \\Vert _{}^2,$ with probability at least $1 - e^{-t}$ .", "Next, we proceed to obtain an upper bound of $(\\hat{\\beta }, \\beta ^*)$ .", "By an application of Lemma C.1 in [35] and the first order condition $\\nabla _n(\\hat{\\beta }) = \\mathbf {0}$ , we have $(\\hat{\\beta }_\\eta , \\beta ^*) &\\le \\eta (\\hat{\\beta }, \\beta ^*)\\\\&=\\eta \\langle -\\nabla _n(\\beta ^*) , \\hat{\\beta } - \\beta ^* \\rangle \\nonumber \\\\&\\le ||^{-1/2} \\nabla _n(\\beta ^*) ||_2 \\cdot \\Vert \\hat{\\beta }_\\eta -\\beta ^* \\Vert _{} \\nonumber \\\\&\\le \\bigg [ \\big |\\big | ^{-1/2} \\nabla (\\beta ^*) \\big |\\big |_2 + \\big |\\big |^{-1/2} \\big \\lbrace \\nabla _n(\\beta ^*) - \\nabla (\\beta ^*) \\big \\rbrace \\big |\\big |_2 \\bigg ] \\cdot \\Vert \\hat{\\beta }_\\eta -\\beta ^* \\Vert _{}.$ Combining the above upper and lower bounds in (REF ) and (REF ), applying Lemma REF , and picking $\\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)}$ , we have $\\Vert \\hat{\\beta }_\\eta -\\beta ^* \\Vert _{} \\le C(\\bar{\\tau }/\\underline{\\tau })\\kappa _1^{-1} \\sigma _\\varepsilon v_0 \\sqrt{\\frac{d+t}{n}},$ with probability at least $1 - 2e^{-t}$ as long as $n \\gtrsim d+t$ , where $C$ is an absolute constant.", "Lastly, it can be checked that with our proper choice of $\\gamma $ and $r_{\\rm loc}$ , we have $\\Vert 
\\hat{\\beta }_\\eta -\\beta ^* \\Vert _{} \\lesssim \\sigma _\\varepsilon \\sqrt{(d+t)/n} < \\sigma _\\varepsilon \\sqrt{n/ (d+t)} \\asymp r_{\\rm loc}$ .", "It immediately implies $\\hat{\\beta }_\\eta \\in \\beta ^* + \\mathbb {B}_{}(r_{\\rm loc})$ and $\\hat{\\beta }_\\eta = \\hat{\\beta }$ by construction.", "Thus (REF ) also holds when replacing $\\hat{\\beta }_\\eta $ by $\\hat{\\beta }$ ." ], [ "Proof of Theorem ", "We consider the following vector-valued random process $(\\beta ) = ^{-1/2} \\lbrace \\nabla _n(\\beta ) - \\nabla _n(\\beta ^*) \\rbrace - \\frac{1}{n} \\sum _{i=1}^n ^{-1/2} w_\\tau (\\varepsilon _i) _i _i^{ (\\beta - \\beta ^*).By the first order condition \\nabla _n(\\hat{\\beta }) = \\mathbf {0}, it can be shown that the nonasymptotic Bahadur representation in~(\\ref {ld thm 2a}) takes the form \\Vert (\\hat{\\beta })\\Vert _2.", "By the triangle inequality, we have\\Vert (\\hat{\\beta })\\Vert _2 \\le \\sup _{\\beta \\in \\beta ^* + \\mathbb {B}_{}(r)} \\Vert \\lbrace (\\beta )\\rbrace \\Vert _2+ \\sup _{\\beta \\in \\beta ^* + \\mathbb {B}_{}(r)} \\Vert (\\beta ) - \\lbrace (\\beta )\\rbrace \\Vert _2for radius r that satisfies \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}_{}(r) with high probability.It suffices to obtain upper bounds for the two terms separately.", "}We start with an upper bound on$ $\\beta $$\\beta $ * + B(r) {($\\beta $ )} 2 $.", "By the mean value theorem for vector-valued functions (Theorem 12 in Section 2 of \\cite {Pugh2015}), we obtain{\\begin{@align*}{1}{-1}\\lbrace (\\beta ) \\rbrace &=^{-1/2} \\int _0^1 \\nabla ^2 _n(\\beta ^*_t)dt(\\beta - \\beta ^*) - \\frac{1}{n} \\sum _{i=1}^n ^{-1/2} w_\\tau (\\varepsilon _i) _i _i^{ (\\beta - \\beta ^*) \\\\&=\\Big \\langle \\int _0^1 \\Big \\lbrace ^{-1/2} \\nabla ^2 _n(\\beta ^*_t) ^{-1/2} - \\frac{1}{n} \\sum _{i=1}^n w_\\tau (\\varepsilon _i) _i _i^{ \\Big \\rbrace dt , ^{1/2} (\\beta - \\beta ^*) \\Big \\rangle ,}}where \\beta ^*_t = (1-t)\\beta ^* + t\\beta for (0 \\le t \\le 1) and _i = ^{-1/2} _i.Let \\delta _t = ^{1/2}(\\beta ^*_t - \\beta ^*).Since \\beta \\in \\beta ^* + \\mathbb {B}_{}(r), we have ||\\delta _t||_2 \\le r and y_i - _i^{ \\beta ^*_t = \\varepsilon _i - \\delta _t^{ _i.", "For all \\in ^{d-1}, we obtain{\\begin{@align*}{1}{-1}&\\bigg |^{ \\Big \\lbrace ^{-1/2} \\nabla ^2 _n(\\beta ^*_t) ^{-1/2} - \\frac{1}{n} \\sum _{i=1}^n w_\\tau (\\varepsilon _i) _i _i^{ \\bigg \\rbrace \\bigg | \\\\&=\\bigg | \\frac{1}{n} \\sum _{i=1}^n ^{ \\bigg [ w_\\tau (\\varepsilon _i - \\delta _t^{ _i)\\Big \\lbrace 1 - \\mathbb {1}(|\\varepsilon _i - \\delta _t^{ _i|>\\gamma )\\Big \\rbrace _i _i^{ - w_\\tau (\\varepsilon _i) _i _i^{ \\bigg ] \\bigg | \\\\&\\le \\bigg | \\frac{1}{n} \\sum _{i=1}^n \\bigg [ (^{ _i)^2 \\Big \\lbrace w_\\tau (\\varepsilon _i - \\delta _t^{ _i) - w_\\tau (\\varepsilon _i)|_i \\Big \\rbrace \\bigg ] \\bigg |+\\bigg | \\frac{1}{n} \\sum _{i=1}^n (^{ _i)^2 w_\\tau (\\varepsilon _i - \\delta _t^{ _i) \\mathbb {1}(|\\varepsilon _i - \\delta _t^{ _i|>\\gamma ) \\bigg | \\\\&:= \\Pi _1 + \\Pi _2.", "}}}}For \\Pi _1, let f_{\\varepsilon |} be the conditional density function of \\varepsilon given , and recall that it is upper bounded by \\bar{f}_{\\varepsilon |}.", "Moreover, let m_3>0 be a constant that satisfies \\sup _{\\in ^{d-1}} |^{ ^{-1/2} | ^3 \\le m_3.We have \\big \\lbrace w_\\tau (\\varepsilon _i - \\delta _t^{ _i) - w_\\tau (\\varepsilon _i)|_i \\big \\rbrace =\\int _{-\\infty }^{\\infty } \\big \\lbrace w_\\tau (u - \\delta _t^{ _i) - w_\\tau (u) \\big 
\\rbrace f_{\\varepsilon | }(u)du\\le (\\bar{\\tau } - \\underline{\\tau }) \\bar{f}_{\\varepsilon |} |\\delta _t^{ _i|.", "Consequently,{\\begin{@align}{1}{-1}\\Pi _1 \\le \\frac{1}{n} \\sum _{i=1}^n (\\bar{\\tau } - \\underline{\\tau }) \\bar{f}_{\\varepsilon |} \\Big \\lbrace (^{ _i)^2 |\\delta _t^{ _i| \\Big \\rbrace \\le (\\bar{\\tau } - \\underline{\\tau }) \\bar{f}_{\\varepsilon |} m_3 rt.", "}}\\end{@align}}For \\Pi _2, we first note that \\mathbb {1}(|\\varepsilon _i - \\delta _t^{ _i|>\\gamma ) \\le \\mathbb {1}(|\\varepsilon _i |>\\gamma /2) + \\mathbb {1}(| \\delta _t^{ _i|>\\gamma /2).", "By an application of the Markov^{\\prime }s inequality, we obtain{\\begin{@align}{1}{-1}\\Pi _2 &\\le \\bigg | \\bar{\\tau } (^{ )^2 \\Big \\lbrace \\mathbb {1}(|\\varepsilon |>\\gamma /2) + \\mathbb {1}(| \\delta _t^{ |>\\gamma /2)\\Big \\rbrace \\bigg | \\nonumber \\\\&\\le \\bar{\\tau } \\bigg | \\bigg (\\frac{|\\varepsilon |}{\\gamma /2} \\bigg )^2 (^{ )^2 \\bigg |+\\bar{\\tau } \\bigg | \\frac{| \\delta _t^{ |}{\\gamma /2} (^{ )^2 \\bigg | \\nonumber \\\\&\\le \\frac{4\\bar{\\tau }\\sigma _\\varepsilon ^2}{\\gamma ^2}+\\frac{2 \\bar{\\tau } m_3 r}{\\gamma }.", "}}{}}}}Combining (\\ref {ld thm 2b}) and (\\ref {ld thm 2c}), we have{\\begin{@align}{1}{-1}\\sup _{\\beta \\in \\beta ^* + \\mathbb {B}_{}(r)} \\Vert \\lbrace (\\beta )\\rbrace \\Vert _2\\le \\delta (r)r :=\\bigg \\lbrace (\\bar{\\tau } - \\underline{\\tau }) \\bar{f}_{\\varepsilon |} m_3 rt+\\frac{4\\bar{\\tau }\\sigma _\\varepsilon ^2}{\\gamma ^2} + \\frac{2\\bar{\\tau } m_3 r}{\\gamma } \\bigg \\rbrace r.\\end{@align}}\\end{@align}}}Next, we obtain an upper bound for \\sup _{\\beta \\in \\beta ^* + \\mathbb {B}_{}(r)} \\Vert (\\beta ) - \\lbrace (\\beta )\\rbrace \\Vert _2.With some abuse of notation, let \\bar{}(\\delta ) = (\\beta ) - \\lbrace (\\beta )\\rbrace where \\delta = ^{1/2}(\\beta - \\beta ^*) \\in \\mathbb {B}(r).It can be checked that \\bar{}(\\mathbf {0}) = \\mathbf {0}, \\lbrace \\bar{}(\\delta )\\rbrace =\\mathbf {0}, and{\\begin{@align*}{1}{-1}\\nabla _{\\delta } \\bar{}(\\delta )&=\\frac{1}{n}\\sum _{i=1}^n \\bigg [ w_\\tau (\\varepsilon _i - \\delta ^{ _i)\\ell _\\gamma ^{\\prime \\prime }(\\varepsilon _i - \\delta ^{ _i) _i _i^{ -\\Big \\lbrace w_\\tau (\\varepsilon _i - \\delta ^{ _i)\\ell _\\gamma ^{\\prime \\prime }(\\varepsilon _i - \\delta ^{ _i) _i _i^{ \\Big \\rbrace \\bigg ] \\\\&:= \\frac{1}{n}\\sum _{i=1}^n _i.", "}}}}}For all , \\in ^{d-1} and \\lambda \\in , we see that ^{ _i = 0, |^{ _i | \\le \\bar{\\tau }|^{_i ^{_i| + \\bar{\\tau }|^{_i ^{_i| and |^{ _i |^2 \\le 2\\bar{\\tau }^2 \\big (|^{_i ^{_i|^2 + ^2|^{_i ^{_i| \\big ).", "It then follows from the elementary inequality |e^z -1-z|\\le z^2e^{|z|}/2 and bound |^{_i ^{_i| \\le \\big \\lbrace (^{_i)^2 \\big \\rbrace ^{1/2} \\big \\lbrace (^{_i)^2\\big \\rbrace ^{1/2} \\le 1 that{\\begin{@align}{1}{-1}\\exp \\Big \\lbrace \\lambda \\sqrt{n} ^{ \\nabla _{\\delta } \\bar{}(\\delta ) \\Big \\rbrace &=\\prod _{i=1}^n \\exp \\bigg \\lbrace \\frac{\\lambda }{\\sqrt{n}} ^{ _i \\bigg \\rbrace \\nonumber \\\\&\\le \\prod _{i=1}^n \\Bigg \\lbrace 1 + \\frac{\\lambda }{\\sqrt{n}} ^{ _i + \\bigg ( \\frac{\\lambda }{\\sqrt{n}} ^{ _i \\bigg )^2 e^{\\big | \\frac{\\lambda }{\\sqrt{n}} ^{ _i \\big |}/2 \\Bigg \\rbrace \\nonumber \\\\&\\le \\prod _{i=1}^n \\Bigg \\lbrace 1 + \\frac{\\lambda ^2 \\bar{\\tau }^2}{n} e^{\\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}}} \\Big ( e^{ \\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}} |^{_i ^{_i| } + |^{_i ^{_i|^2 e^{ 
\\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}} |^{_i ^{_i| } \\Big ) \\Bigg \\rbrace .", "}}}}}Here we upper-bound the components appeared in the right-hand side of (\\ref {ld thm 2f}).", "For all t>0, it follows from Cauchy-Schwarz inequality and the elementary inequality ab\\le a^2/2 + b^2/2 that{\\begin{@align*}{1}{-1}|^{_i ^{_i|^2 e^{ t |^{_i ^{_i| }&\\le (^{_i)^2 (^{_i)^2 e^{ t(^{_i)^2/2 + t(^{_i)^2/2 } \\\\&\\le \\Big \\lbrace (^{_i)^4 e^{ t(^{_i)^2} \\Big \\rbrace ^{1/2} \\Big \\lbrace (^{_i)^4 e^{ t(^{_i)^2} \\Big \\rbrace ^{1/2}.", "}}Consequently |^{_i ^{_i|^2 e^{ t |^{_i ^{_i| } \\le \\sup _{\\in ^{d-1}} (^{ _i)^4 e^{t (^{ _i)^2}, and similarly e^{ t |^{_i ^{_i| } \\le \\sup _{\\in ^{d-1}} e^{t (^{ _i)^2}.", "To further upper-bound these supremums, let \\chi := (^{)^2/(2v_1)^2.", "Recall the sub-Gaussian condition (|\\langle , ^{-1/2}\\rangle | \\ge v_1 ||||_2 t) \\le 2e^{-t^2/2}, we have (\\chi \\ge t) \\le 2e^{-2t} (i.e.", "\\chi is sub-Exponential).", "It follows that e^{\\chi } = 1 + \\int _0^{\\infty } e^t (\\chi \\ge t)dt \\le 1 + 2\\int _0^{\\infty } e^{-t} dt =3 , and{\\begin{@align*}{1}{-1}(\\chi ^2 e^\\chi ) = \\int _0^{\\infty } (t^2 + 2t)e^t (\\chi \\ge t)dt \\le 2 \\int _0^{\\infty } (t^2 + 2t)e^{-t} dt = 8.\\end{@align*}}Along with the monotonicity of exponential function, we conclude that both |^{_i ^{_i|^2 e^{ \\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}} |^{_i ^{_i| } and e^{ \\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}} |^{_i ^{_i| } can be upper-bounded by some constants C_1, C_2 respectively, uniformly over , \\in ^{d-1} , as long as |\\lambda | \\le \\sqrt{n}/(4v_1^2 \\bar{\\tau }).", "Substituting the above bounds into (\\ref {ld thm 2f}) yields,{\\begin{@align*}{1}{-1}\\exp \\Big \\lbrace \\lambda \\sqrt{n} ^{ \\nabla _{\\delta } \\bar{}(\\delta ) \\Big \\rbrace &\\le \\prod _{i=1}^n \\Bigg [ 1 + \\frac{\\lambda ^2 \\bar{\\tau }^2}{n} e^{\\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}}} \\bigg \\lbrace \\sup _{\\in ^{d-1}} e^{\\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}} (^{ _i)^2} + \\sup _{\\in ^{d-1}} (^{ _i)^4 e^{\\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}} (^{ _i)^2} \\bigg \\rbrace \\Bigg ] \\\\&\\le \\exp \\Big \\lbrace {\\lambda ^2 \\bar{\\tau }^2} e^{\\frac{|\\lambda |\\bar{\\tau }}{\\sqrt{n}}} (C_1 + C_2) \\Big \\rbrace \\\\&\\le \\exp \\bigg \\lbrace 2(C_1+C_2)\\bar{\\tau }^2 e^{-4 v_1^2} \\cdot \\frac{\\lambda ^2}{2} \\bigg \\rbrace \\mbox{~~~valid for all~~~} \\lambda ^2 \\le 2\\cdot \\frac{n}{32 \\bar{\\tau }^2 v_1^4}.", "}}}}\\end{@align*}With the above preparations, we apply Theorem A.3 in \\cite {Spokoiny2013} with v_0^2 = 2(C_1 + C_2)\\bar{\\tau }^2e^{-4v_1^2} and g^2 = n/(32\\bar{\\tau }^2 v_1^4) to yield{\\begin{@align}{1}{-1}\\sup _{\\beta \\in \\beta ^* + \\mathbb {B}_{}(r)} ||(\\beta ) - (\\beta ) ||_2\\le 12\\sqrt{C_1 + C_2} \\bar{\\tau } e^{-2v_1^2}\\sqrt{\\frac{2d+t}{n}} \\cdot r\\end{@align}}with probability at least 1 - e^{-t}, as long as n \\ge 64\\bar{\\tau }^2 v_1^4 (2d+t).", "}}}Lastly, combining (\\ref {ld thm 2e}) and (\\ref {ld thm 2g}), we have{\\begin{@align}{1}{-1}\\sup _{\\beta \\in \\beta ^* + \\mathbb {B}_{}(r)} || (\\beta ) ||_2\\le \\Bigg \\lbrace \\delta (r) + 12\\sqrt{C_1 + C_2} \\bar{\\tau } e^{-2v_1^2}\\sqrt{\\frac{2d+t}{n}} \\Bigg \\rbrace r\\end{@align}}with probability at least 1 - e^{-t}, as long as n \\ge 64\\bar{\\tau }^2 v_1^4 (2d+t).", "Recall from the proof of Theorem \\ref {ld thm 1 l2errorbound} that we have \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}_{}(r_0) with probability at least 1 - 2e^{-t} for some r_0 
\\asymp \\sigma _\\varepsilon \\sqrt{(d+t)/n}.", "Taking r = r_0 in (\\ref {ld thm 2h}) and \\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)} finishes the proof.", "}}}}}}}}}\\subsubsection {Proof of Theorem \\ref {ld thm 3 asym normality}}\\begin{proof}Let \\in ^d be an arbitrary vector, and {\\mathbf {J}} = \\lbrace w_\\tau (\\varepsilon ) ^{\\rbrace be the Hessian matrix.", "Define S_n = n^{-1/2} \\sum _{i=1}^n a_i b_i and its centered version S_n^0 = S_n - (S_n), where a_i = w_\\tau (\\varepsilon _i)\\ell _\\gamma ^{\\prime }(\\varepsilon _i) and b_i = \\langle {\\mathbf {J}}^{-1} , _i \\rangle .We first show that the centered partial sum S_n^0 is close to the quantity of interest n^{1/2} \\langle , \\hat{\\beta } - \\beta ^* \\rangle .By an application of Theorem \\ref {ld thm 2 bahadur representation} and Lemma \\ref {ld lem:huber} with \\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+t)} and t = \\log n, we obtain{\\begin{@align}{1}{-1}&\\big | n^{1/2} \\langle , \\hat{\\beta } - \\beta ^* \\rangle - S_n^0 \\big | \\nonumber \\\\&\\le n^{1/2} \\Big | \\Big \\langle ^{1/2} {\\mathbf {J}}^{-1} , ^{-1/2}{\\mathbf {J}}(\\hat{\\beta } - \\beta ^*) - \\frac{1}{n} \\sum _{i=1}^n w_\\tau (\\varepsilon _i) \\ell _\\gamma ^{\\prime }(\\varepsilon _i) ^{-1/2}_i \\Big \\rangle \\Big |+ \\big | S_n \\big | \\nonumber \\\\&\\le n^{1/2} \\big |\\big | {\\mathbf {J}}^{-1} \\big |\\big |_{} \\cdot \\big |\\big | ^{-1/2}{\\mathbf {J}} (\\hat{\\beta } - \\beta ^*) - \\frac{1}{n} \\sum _{i=1}^n w_\\tau (\\varepsilon _i) \\ell _\\gamma ^{\\prime }(\\varepsilon _i) ^{-1/2}_i \\big |\\big |_2 + n^{1/2} \\Big | \\Big [ \\langle {\\mathbf {J}}^{-1} , \\rangle \\Big \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime }(\\varepsilon )|\\Big \\rbrace \\Big ] \\Big | \\nonumber \\\\&\\le n^{1/2} \\big |\\big | {\\mathbf {J}}^{-1} \\big |\\big |_{} C \\cdot \\frac{d + \\log n}{n}+ n^{1/2} \\frac{\\bar{\\tau } v_3}{\\gamma ^2} \\Big ( \\langle {\\mathbf {J}}^{-1} , \\rangle ^2 \\Big )^{1/2}\\nonumber \\\\&\\le C_1 \\big |\\big | {\\mathbf {J}}^{-1} \\big |\\big |_{} \\frac{d + \\log n}{\\sqrt{n}},\\end{@align}}with probability at least 1 - 3n^{-1}, where C_1 = C + {\\bar{\\tau } v_3}/{\\sigma _\\varepsilon ^2}.", "}\\end{proof}Next, we show that the centered partial sum S_n^0 = n^{-1/2} \\sum _{i=1}^n (1 - )a_i b_i is approximately normally distributed.", "It follows from Berry-Esseen inequality (e.g., see \\cite {Tyurin2011}) that{\\begin{@align}{1}{-1}\\sup _{x \\in } ~\\big | (S_n^0 \\le \\textnormal {var}(S_n^0)^{1/2} x) - \\Phi (x) \\big | \\le \\frac{|a_i b_i - a_i b_i |^3}{2 \\textnormal {var}(S_n^0)^{3/2} \\sqrt{n}}.\\end{@align}}Thus, it suffices to obtain a lower bound for \\textnormal {var}(S_n^0) and an upper bound for ( |a_i b_i - a_i b_i |^3).By an application of Lemma~\\ref {ld lem:huber}, we have(a_i b_i) \\le \\bar{\\tau }v_3 || {\\mathbf {J}}^{-1} ||_{}/\\gamma ^2 and (a_i b_i)^2 \\ge \\underline{\\tau }^2 (\\sigma _\\varepsilon ^2 - 2v_3/\\gamma ) \\Vert {\\mathbf {J}}^{-1} \\Vert _{}^2 .Thus, \\textnormal {var}(S_n^0) = (a_i b_i)^2 - (a_i b_i)^2 \\ge \\Vert {\\mathbf {J}}^{-1} \\Vert _{}^2 ( \\underline{\\tau }^2 \\sigma _\\varepsilon ^2 - 2\\underline{\\tau }^2 v_3/\\gamma - \\bar{\\tau }^2v_3^2/\\gamma ^4 ).For sufficiently large \\gamma (i.e., n \\gtrsim d), we obtain the lower bound \\textnormal {var}(S_n^0)^{3/2} \\ge || {\\mathbf {J}}^{-1} ||_{}^3(\\underline{\\tau }^3 \\sigma _\\varepsilon ^3/2).Next, we proceed to obtain an upper bound for the centered third moment |a_i b_i - (a_i b_i) 
|^3.", "Recall that m_3 = \\sup _{\\in ^{d-1}}|\\langle , ^{-1/2}\\rangle |^3, we have |a_i b_i|^3 \\le \\big [ |\\langle {\\mathbf {J}}^{-1} , _i \\rangle |^3 \\big \\lbrace |w_\\tau (\\varepsilon _i) \\ell _\\gamma ^{\\prime }(\\varepsilon _i)|^3 | _i \\big \\rbrace \\big ]\\le \\bar{\\tau }^3 v_3 m_3 || {\\mathbf {J}}^{-1} ||_{}^3.", "Along with Minkowski^{\\prime }s inequality |a+b|^p \\le 2^{p-1} (|a|^p + |b|^p), we obtain |a_i b_i - a_i b_i |^3 \\le 4 \\bar{\\tau }^3 v_3 m_3 (1 + {v_3^2}/{m_3 \\gamma ^6} )|| {\\mathbf {J}}^{-1} ||_{}^3.", "Therefore |a_i b_i - a_i b_i |^3 \\le 8 \\bar{\\tau }^3 v_3 m_3 || {\\mathbf {J}}^{-1} ||_{}^3 provided that n \\gtrsim d.Substituting the above inequalities into (\\ref {ld thm 3b}), we have{\\begin{@align}{1}{-1}\\sup _{x \\in }~ \\big | (S_n^0 \\le \\textnormal {var}(S_n^0)^{1/2} x) - \\Phi (x) \\big | \\le C_2 n^{-1/2},\\end{@align}}where C_2 = 8v_3 m_3 \\bar{\\tau }^3/(\\underline{\\tau }\\sigma _\\varepsilon )^3 .", "}}}Let \\sigma ^2 = (a_i b_i)^2 = ^{ {\\mathbf {J}}^{-1} \\big [\\big \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime }(\\varepsilon ) \\big \\rbrace ^2 ^{ \\big ] {\\mathbf {J}}^{-1} .By an application of Lemma~\\ref {ld lem:huber}, we have \\underline{\\tau }^2 (\\sigma _\\varepsilon ^2 - 2v_3/\\gamma ) \\Vert {\\mathbf {J}}^{-1} \\Vert _{}^2 \\le \\sigma ^2 \\le \\bar{\\tau }^2 \\sigma _\\varepsilon ^2 || {\\mathbf {J}}^{-1} ||_{}^2.Moreover, | \\textnormal {var}(S_n^0) - \\sigma ^2| = |a_i b_i|^2 \\le \\bar{\\tau }^2 v_3^2 || {\\mathbf {J}}^{-1} ||_{}^2/\\gamma ^4.Provided that n \\gtrsim d, we obtain{\\begin{@align*}{1}{-1}\\bigg | \\frac{\\textnormal {var}(S_n^0)}{\\sigma ^2} - 1 \\bigg | \\le \\bigg (1 - \\frac{2v_3}{\\sigma _\\varepsilon ^2 \\gamma } \\bigg )^{-1} \\cdot \\frac{\\bar{\\tau }^2 v_3^2 }{\\underline{\\tau }^2 \\sigma _\\varepsilon ^2} \\cdot \\frac{1}{\\gamma ^4}\\le \\frac{2\\bar{\\tau }^2 v_3^2 }{\\underline{\\tau }^2 \\sigma _\\varepsilon ^2} \\cdot \\frac{1}{\\gamma ^4}.\\end{@align*}}An application of Lemma A.7 in the supplement of \\cite {SZ2015} indicates that{\\begin{@align}{1}{-1}\\sup _{x \\in } \\big | \\Phi (x/\\textnormal {var}(S_n^0)^{1/2}) - \\Phi (x/\\sigma ) \\big | \\le C_3 \\gamma ^{-4},\\end{@align}}where C_3 = (\\bar{\\tau } v_3 / \\underline{\\tau } \\sigma _\\varepsilon )^2.", "}}Let G \\sim (0,1).", "Applying the inequalities in (\\ref {ld thm 3a}), (\\ref {ld thm 3c}), and (\\ref {ld thm 3d}), along with the fact that for all a<b and \\sigma > 0, \\Phi (b/\\sigma ) - \\Phi (a/\\sigma ) \\le (2\\pi )^{-1/2}(b-a)/\\sigma , we obtain that for any x \\in and \\in ^d,{\\begin{@align*}{1}{-1}(n^{1/2} \\langle , \\hat{\\beta } - \\beta ^* \\rangle \\le x)&\\le \\bigg ( S_n^0 \\le x + C_1\\big |\\big | {\\mathbf {J}}^{-1} \\big |\\big |_{} \\frac{d + \\log n}{\\sqrt{n}} \\bigg ) + \\frac{3}{n} \\\\&\\le \\bigg ( \\textnormal {var}(S_n^0)^{1/2} G \\le x + C_1\\big |\\big | {\\mathbf {J}}^{-1} \\big |\\big |_{} \\frac{d + \\log n}{\\sqrt{n}} \\bigg ) + \\frac{3}{n} + \\frac{C_2}{\\sqrt{n}} \\\\&\\le \\bigg ( \\sigma G \\le x + C_1\\big |\\big | {\\mathbf {J}}^{-1} \\big |\\big |_{} \\frac{d + \\log n}{\\sqrt{n}} \\bigg ) + \\frac{3}{n} + \\frac{C_2}{\\sqrt{n}} + \\frac{C_3}{\\gamma ^4} \\\\&\\le \\bigg ( \\sigma G \\le x \\bigg ) +\\frac{C_1 ||{\\mathbf {J}}^{-1} ||_{}}{\\sqrt{2\\pi } \\sigma } \\frac{d + \\log n}{\\sqrt{n}} +\\frac{3}{n} + \\frac{C_2}{\\sqrt{n}} + \\frac{C_3}{\\gamma ^4} \\\\&\\lesssim \\bigg ( \\sigma G \\le x \\bigg ) + \\frac{d + \\log n}{\\sqrt{n}} + \\frac{1}{n} 
+ \\frac{1}{\\sqrt{n}} + \\frac{(d + \\log n)^2}{n^2},\\end{@align*}}where the last inequality follows from the fact that \\sigma \\asymp || {\\mathbf {J}}^{-1} ||_{} and taking \\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+\\log n)}.", "A similar argument leads to a series of reverse inequalities.", "Since the above bounds are independent of x and , they hold uniformly over x \\in and \\in ^d.", "}}}Putting together all the pieces, we conclude that by taking \\gamma = \\sigma _\\varepsilon \\sqrt{n/(d+\\log n)}, we have{\\begin{@align*}{1}{-1}\\sup _{\\in ^d, x \\in } \\big | (n^{1/2} \\langle , \\hat{\\beta } - \\beta ^* \\rangle \\le \\sigma x) - \\Phi (x) \\big | \\lesssim \\frac{d + \\log n}{\\sqrt{n}},\\end{@align*}}as long as n \\gtrsim d.}}}}}}\\subsubsection {Proof of Theorem \\ref {thm:l1.retire}}\\begin{proof}Let \\hat{\\beta } :=\\hat{\\beta }^{(1)} be a minimizer of~(\\ref {retire.est.convex}) with p^{\\prime }_\\lambda (0) = \\lambda , i.e., optimization problem~(\\ref {retire.est.convex}) reduces to the \\ell _1-penalized robustified expectile regression, i.e.,{\\begin{@align}{1}{-1}\\hat{\\beta } \\in \\underset{\\beta \\in ^d}{\\mathrm {minimize}}~\\left\\lbrace \\frac{1}{n} \\sum _{i=1}^n L_{\\tau ,\\gamma }(y_i-_i^{\\beta )+ \\lambda \\sum _{j=2}^d |\\beta _j| .", "}\\right.Let =\\lbrace 1,\\ldots ,d\\rbrace be the active set of \\beta ^*, i.e., the index set contains indices for which \\beta ^*_j\\ne 0.Let s=|| be the cardinality of .", "Recall the definition of the symmetric Bregman divergence in~(\\ref {eq:symmetricdivergence}).", "The main crux of the proof of Theorem~\\ref {thm:l1.retire} involves establishing upper and lower bounds for (\\hat{\\beta }, \\beta ^* ).", "We start with deriving an upper bound for (\\hat{\\beta }, \\beta ^* ).", "Throughout the proof, we write \\hat{\\delta } = \\hat{\\beta }-\\beta ^*.\\end{@align}}\\end{proof}Since \\hat{\\beta } is a minimizer of~(\\ref {retire.est.convex.l1}), we have{\\begin{@align}{1}{-1}_n(\\hat{\\beta })-_n(\\beta ^*)&\\le \\lambda (\\Vert \\beta ^* \\Vert _1-\\Vert \\hat{\\beta } \\Vert _1) \\\\&\\le \\lambda (\\Vert \\beta _^* \\Vert _1-\\Vert \\beta _^* + \\hat{\\delta }_{}\\Vert _1 - \\Vert \\hat{\\delta }_{^{{\\rm c}}} \\Vert _1) \\nonumber \\\\&\\le \\lambda ( \\Vert \\hat{\\delta }_\\Vert _1 - \\Vert \\hat{\\delta }_{^{{\\rm c}}} \\Vert _1).", "\\end{@align}}By the optimality condition of \\hat{\\beta }, we have \\langle \\nabla _n(\\hat{\\beta })+\\lambda \\hat{}, \\hat{\\beta }-\\beta ^*\\rangle \\le 0, where \\hat{} \\in \\partial \\Vert \\hat{\\beta } \\Vert _1 and \\langle \\hat{} ,\\hat{\\beta } \\rangle = \\Vert \\hat{\\beta } \\Vert _1.Thus, conditioned on the event _{\\rm {score}} := \\lbrace \\lambda \\ge 2\\Vert \\nabla _n(\\beta ^*) \\Vert _\\infty \\rbrace , (\\hat{\\beta }, \\beta ^* ) can be upper bounded by{\\begin{@align}{1}{-1}(\\hat{\\beta }, \\beta ^* ) &= \\langle \\nabla _n(\\hat{\\beta }) - \\nabla _n (\\beta ^*) , \\hat{\\beta } - \\beta ^* \\rangle \\\\&=\\langle \\nabla _n(\\hat{\\beta }) + \\lambda \\hat{} , \\hat{\\beta } - \\beta ^* \\rangle +\\langle -\\lambda \\hat{} - \\nabla _n (\\beta ^*) , \\hat{\\beta } - \\beta ^* \\rangle \\nonumber \\\\&\\le 0 + \\lambda (\\Vert \\beta ^* \\Vert _1-\\Vert \\hat{\\beta } \\Vert _1) + \\Vert \\nabla _n(\\beta ^*) \\Vert _\\infty \\cdot \\Vert \\hat{\\delta } \\Vert _1 \\nonumber \\\\&\\le \\lambda (\\Vert \\hat{\\delta }_\\Vert _1 - \\Vert \\hat{\\delta }_{^{{\\rm c}}} \\Vert _1) + \\frac{\\lambda }{2}(\\Vert \\hat{\\delta }_\\Vert _1 + 
\\Vert \\hat{\\delta }_{^{{\\rm c}}} \\Vert _1) \\nonumber \\\\&\\le \\frac{3}{2} \\lambda s^{1/2} \\Vert \\hat{\\delta } \\Vert _2.\\end{@align}}}}}\\end{@align*}We now obtain a lower bound for (\\hat{\\beta }, \\beta ^* ).", "To this end, we apply the restricted strong convexity result in Lemma~\\ref {lem:RSC}.", "First, from the proof of Lemma~\\ref {lem:RSC}, we know that the result in Lemma~\\ref {lem:RSC} is applicable for any \\beta \\in \\beta ^*+\\mathbb {B}(r)\\cap _1, for which \\hat{\\beta } does not necessarily satisfies.To this end, we define an intermediate quantity to help facilitate the proof.Let A_1 be a constant that satisfies (^{ ^{-1/2}) ^4 \\le A_1^4 \\Vert \\Vert ^4_2 for all \\in ^d and let r_{\\rm {loc}} = \\gamma /(8\\sqrt{2} \\lambda _u A_1^2 ) .Consider \\hat{\\beta }_\\eta = \\eta \\hat{\\beta } +(1-\\eta )\\beta ^*, where \\eta = \\sup \\big \\lbrace u \\in [0,1] : (1-u)\\beta ^* + u \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}(r_{\\rm {loc}}) \\big \\rbrace .Then, \\hat{\\beta }_\\eta \\in \\beta ^* + \\partial \\mathbb {B}(r_{\\rm {loc}}) whenever \\hat{\\beta } \\notin \\beta ^* + \\mathbb {B}(r_{\\rm {loc}}) where \\partial \\mathbb {B}(r_{\\rm {loc}}) is the boundary of \\mathbb {B}(r_{\\rm {loc}}), and \\hat{\\beta }_\\eta = \\hat{\\beta } whenever \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}(r_{\\rm {loc}}).Let \\hat{\\delta }_{\\eta }=\\hat{\\beta }_{\\eta }-\\beta ^*.", "It remains to show that \\hat{\\beta }_{\\eta } \\in \\beta ^* + _1, i.e., \\Vert (\\hat{\\delta }_{\\eta })_{S^c}\\Vert _1 \\le 3 \\Vert (\\hat{\\delta }_{\\eta })_{S}\\Vert _1.By convexity of _n(\\cdot ), we have{\\begin{@align}{1}{-1}_n(\\hat{\\beta })-_n(\\beta ^*) \\ge \\langle \\nabla _n(\\beta ^*), \\hat{\\delta } \\rangle \\ge -\\Vert \\nabla _n(\\beta ^*) \\Vert _\\infty \\cdot \\Vert \\hat{\\delta } \\Vert _1 \\ge -\\frac{\\lambda }{2}(\\Vert \\hat{\\delta }_\\Vert _1 + \\Vert \\hat{\\delta }_{^{{\\rm c}}} \\Vert _1) \\end{@align}}Combining (\\ref {thm_claim_2}) and (\\ref {thm_claim_3}), we have \\Vert \\hat{\\delta }_{^{{\\rm c}}} \\Vert _1 \\le 3\\Vert \\hat{\\delta }_\\Vert _1, conditioned on the event _{\\rm {score}}.Since \\eta \\hat{\\delta } = \\hat{\\delta }_\\eta , we have verified that \\hat{\\beta }_\\eta \\in \\beta ^* + \\mathbb {B}(r_{\\rm {loc}}) \\cap _1, conditioned on _{\\rm {score}}.Applying Lemma \\ref {lem:RSC} with \\beta = \\hat{\\beta }_\\eta , the following bound holds with probability at least 1-e^{-t}{\\begin{@align}{1}{-1} (\\hat{\\beta }_{\\eta }, \\beta ^* ) = \\langle \\nabla _n(\\hat{\\beta }_\\eta ) - \\nabla _n (\\beta ^*) , \\hat{\\beta }_\\eta - \\beta ^* \\rangle \\ge \\frac{1}{2} \\kappa _1 {\\tau } \\Vert \\hat{\\delta }_\\eta \\Vert ^2_2,\\end{@align}}as long as (\\gamma , n, d) satisfies \\gamma \\ge 4\\sqrt{2}\\lambda _u \\sigma _{\\varepsilon } and n \\gtrsim s\\log d + t.For notational convenience, we denote the event at (\\ref {thm_part_a_1}) as _{\\mathrm {rsc}} with (_{\\mathrm {rsc}}) \\ge 1-e^{-t}.", "}We now combine the lower and upper bounds in (\\ref {thm_part_a_3}) and (\\ref {thm_part_a_1}).Since _n(\\cdot ) is convex, by Lemma C.1 in \\cite {SZF2020} we have(\\hat{\\beta }_{\\eta }, \\beta ^* ) \\le \\eta (\\hat{\\beta }, \\beta ^* ),and thus implies that{\\left\\lbrace \\begin{array}{ll}\\Vert \\hat{\\delta }_\\eta \\Vert _2 \\le 3(\\kappa _1 {\\tau } )^{-1} s^{1/2} \\lambda ;\\\\\\Vert \\hat{\\delta }_\\eta \\Vert _1\\le 4s^{1/2} \\Vert \\hat{\\delta }_\\eta \\Vert _2\\le 12(\\kappa _1 {\\tau } )^{-1} s \\lambda 
.\\end{array}\\right.", "}We now show that with proper choice of \\lambda and \\gamma , \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}(r_{\\rm {loc}}), implying \\hat{\\beta }_\\eta = \\hat{\\beta }.Let \\gamma = \\sigma _{\\varepsilon }\\sqrt{{n}/{(\\log d + t)}}.", "By Lemma \\ref {lem:grad},\\Vert \\nabla _n (\\beta ^*) \\Vert _\\infty \\le \\bar{\\tau } \\sigma _{} \\sigma _{\\varepsilon }(3 \\nu _0 + 1) \\sqrt{{(\\log d + t)}/{n}}with probability at least 1-2e^{-t}, suggesting that \\lambda = 2c \\bar{\\tau } \\sqrt{{(\\log d + t)}/{n}} where c=\\sigma _{} \\sigma _{\\varepsilon }(3 \\nu _0 + 1).Moreover, it can be verified that \\gamma \\ge 4\\sqrt{2}\\lambda _u \\sigma _{\\varepsilon } under the scaling condition n \\gtrsim s \\log d + t.Finally, we have\\Vert \\hat{\\beta }_\\eta - \\beta ^*\\Vert _2 \\le 3(\\kappa _1 {\\tau } )^{-1} s^{1/2} \\lambda \\lesssim \\sqrt{s(\\log d + t)/n} < \\sqrt{n/ (\\log d + t )} \\asymp r_{\\rm {loc}} , i.e., \\hat{\\beta }_\\eta \\in \\beta ^* + \\mathbb {B}(r_{\\rm {loc}}).This further implies that \\hat{\\beta }=\\hat{\\beta }_\\eta by construction.Thus, we obtain the desired results{\\left\\lbrace \\begin{array}{ll}\\Vert \\hat{\\beta } - \\beta ^* \\Vert _2 \\le 3(\\kappa _1 {\\tau } )^{-1} s^{1/2} \\lambda ;\\\\\\Vert \\hat{\\beta } - \\beta ^* \\Vert _1\\le 12(\\kappa _1 {\\tau } )^{-1} s \\lambda ,\\end{array}\\right.", "}with probability \\big ( _{\\rm {score}} \\cap _{\\rm {rsc}} \\big ) \\ge 1-3e^{-t} .", "}}}}}\\subsubsection {Proof of Theorem \\ref {thm:random}}Recall that (\\beta ^*) = \\lbrace _n(\\beta ^*)\\rbrace be the population loss evaluated at \\beta ^* and let ^* = \\nabla _n(\\beta ^*) - \\nabla (\\beta ^*)be the centered score function.We first show that given an estimator at the (T-1)th iteration, \\hat{\\beta }^{(T-1)}, the estimation error of the subsequent estimator \\hat{\\beta }^{(T)} can be improved sequentially by a \\delta -fraction for some constant \\delta \\in (0,1), under a beta-min condition on \\Vert \\beta _{}^*\\Vert _{\\min }.We establish a deterministic claim in the following proposition, where we conditioned on events that are related to the local restricted strong convexity property and the gradient of the loss function, _{\\rm {rsc}}(r,L,\\kappa ) and\\lbrace p_0^{\\prime }(a_0) \\lambda \\ge 2 ^* \\rbrace , respectively.", "}}\\end{@align}}\\begin{proposition}Let p_0(\\cdot ) be a penalty function that satisfies Condition~\\ref {def:concavepenalty}.Given \\kappa >0, assume that there exists some constant a_0 > 0 such that p_0^{\\prime }(a_0) >0 and \\kappa > {\\sqrt{5}}/{(2 a_0)} .", "Let c>0 be a constant that is the solution to the equation{\\begin{@align}{1}{-1} 0.5 p^{\\prime }_0(a_0)(c^2 + 1)^{1/2} + 2 = c \\kappa a_0.\\end{@align}}Assume the beta-min condition \\Vert \\beta ^*_{} \\Vert _{\\min } \\ge a_0 \\lambda and let r^{\\rm {crude}}=ca_0 s^{1/2} \\lambda .", "Conditioned on the event _{\\rm {rsc}}(r,L,\\kappa ) \\cap \\lbrace p_0^{\\prime }(a_0) \\lambda \\ge 2\\Vert ^* \\Vert _\\infty \\rbrace withL = \\lbrace 2 +2/p_0^{\\prime }(a_0) \\rbrace (c^2 + 1)^{1/2} s^{1/2} + 2 s^{1/2}/p_{{0}}^{\\prime }(a_0),~r > r^{\\rm {crude}}, ~\\mathrm {and}~ \\lambda \\ge s^{-1/2} \\Vert \\nabla (\\beta ^* )\\Vert _2,the sequence of solutions \\hat{ \\beta }^{(1)},\\ldots ,\\hat{ \\beta }^{(T)} obtained from solving~(\\ref {retire.est.convex}) satisfies{\\begin{@align}{1}{-1} \\Vert \\hat{ \\beta }^{(T)} - \\beta ^* \\Vert _2 & \\le \\delta \\Vert \\hat{ \\beta }^{(T-1)} - \\beta ^* \\Vert _2 + \\kappa 
^{-1} \\left\\lbrace \\Vert p_\\lambda ^{\\prime }\\lbrace (|\\beta ^*_j|-a_0 \\lambda )_+\\rbrace \\Vert _2 + \\Vert ^* _{ } \\Vert _2 + \\Vert \\nabla (\\beta ^* )\\Vert _2 \\right\\rbrace ,\\end{@align}}where \\delta = \\sqrt{5}/(2a_0 \\kappa ) \\in (0,1) and z_+ = \\max (z,0).", "Furthermore, we have{\\begin{@align}{1}{-1}\\Vert \\hat{ \\beta }^{(T)} - \\beta ^* \\Vert _2 \\le { \\delta ^{T-1} } r^{\\rm {crude}} + \\lbrace (1- \\delta ) \\kappa \\rbrace ^{-1} \\left\\lbrace \\Vert p_\\lambda ^{\\prime }\\lbrace (|\\beta ^*_j|-a_0 \\lambda )_+\\rbrace \\Vert _2 + \\Vert ^*_{} \\Vert _2 + \\Vert \\nabla (\\beta ^* )\\Vert _2 \\right\\rbrace .\\end{@align}}\\end{proposition}}}}Proposition~\\ref {prop:further steps} establishes the fact that every additional iteration of the proposed iteratively reweighted method shrinks the estimation error of the solution obtained from the previous iteration by a factor of \\delta \\in (0,1), at the cost of inducing some extra terms \\Vert p_\\lambda ^{\\prime }\\lbrace (|\\beta ^*_j|-a_0 \\lambda )_+ \\Vert _2, \\Vert ^* _{ } \\Vert _2, and \\Vert \\nabla (\\beta ^* )\\Vert _2 , which can be shown to be smaller than r^{\\mathrm {crude}}.Such a phenomenon is also known as the contraction property and has been studied in different contexts \\cite {FLSZ2018,PSZ2021}.", "We refer the reader to \\cite {PSZ2021} for a detailed discussion on the various terms that appear in~(\\ref {contraction.inequality2}).For completeness, we also provide the proof of Proposition~\\ref {prop:further steps} in Section~\\ref {proof:prop:further steps}.", "}The results in Proposition~\\ref {prop:further steps} are deterministic, conditioned on some events.In the following proof of Theorem~\\ref {thm:random}, we provide an appropriate choice of the set of tuning parameters (\\lambda ,\\gamma ) such that the event _{\\rm {rsc}}(r,L,\\kappa ) \\cap \\lbrace p_0^{\\prime }(a_0) \\lambda \\ge 2\\Vert ^* \\Vert _\\infty \\rbrace holds with high probability.Moreover, we will control the shrinkage bias \\Vert p_\\lambda ^{\\prime }\\lbrace (|\\beta ^*_j|-a_0 \\lambda )_+ \\Vert _2 in~(\\ref {contraction.inequality2}) by proposing slightly stronger conditions on the minimum signal strength \\Vert \\beta ^*_{} \\Vert _{\\min } as well as the first derivative of the penalty function p_{\\lambda }(\\cdot ).", "}}}}}}\\begin{proof}The proof is based on Proposition~\\ref {prop:further steps}.We will show that under the stated conditions in Theorem \\ref {thm:random}, the events _{\\rm {rsc}}(r,L,\\kappa ) and \\lbrace p_0^{\\prime }(a_0) \\lambda \\ge 2\\Vert ^* \\Vert _\\infty \\rbrace in Proposition~\\ref {prop:further steps} hold with high probabilities.We then show that the terms p_\\lambda ^{\\prime }\\lbrace (|\\beta ^*_j|-a_0 \\lambda )_+\\rbrace \\Vert _2, \\Vert ^* _{ } \\Vert _2, and \\Vert \\nabla (\\beta ^* )\\Vert _2 can be upper bounded with high probabilities.\\end{proof}}Picking \\gamma = \\sigma _{\\varepsilon } \\sqrt{{n}/(s + \\log d + t )} and applying Lemma~\\ref {lem:grad} indicates that\\Vert \\nabla (\\beta ^*) \\Vert _2 \\le \\bar{\\tau } \\sigma _{\\varepsilon } \\lambda _u^{1/2} \\sqrt{(s + \\log d +t)/n}.and\\Vert \\nabla _n(\\beta ^*)-\\nabla (\\beta ^*) \\Vert _\\infty \\le 3 \\nu _0 \\bar{\\tau } \\sigma _{} \\sigma _{\\varepsilon } \\sqrt{{(\\log d + t) }/{n}},with probability at least 1 - 2e^{-t}.Picking \\lambda \\asymp \\sigma _{\\varepsilon }\\sqrt{(\\log d + t)/n}, we have \\lambda \\ge s^{-1/2} \\Vert \\nabla (\\beta ^*) \\Vert _2 and the event \\lbrace 
p_0^{\\prime }(a_0) \\lambda \\ge 2\\Vert ^* \\Vert _\\infty \\rbrace holds with probability at least 1 - 2e^{-t}.", "}}Next, we set \\kappa = 0.5 \\kappa _1 {\\tau } and set the constant c to be the solution to~(\\ref {choice of c}).Picking r = \\gamma / (8\\sqrt{2} \\lambda _u A_1^2), it can be shown that r \\asymp \\sigma _{\\varepsilon } \\sqrt{{n}/(s + \\log d + t )} > \\sigma _{\\varepsilon } \\sqrt{s(\\log d + t)/n} \\asymp r^{\\rm {crude}} and \\delta = \\sqrt{5}/( a_0 \\kappa _1 {\\tau }) < 1.", "Thus, setting L = \\lbrace 2 +\\frac{2}{p_{{0}}^{\\prime }(a_0) } \\rbrace (c^2 + 1)^{1/2} s^{1/2} + \\frac{2}{p_{{0}}^{\\prime }(a_0) } s^{1/2} , Lemma \\ref {lem:RSC} indicates that the event _{\\rm {rsc}}(r,L,0.5 \\kappa _1 {\\tau } ) holds with probability at least 1 - e^{-t}.", "}Moreover, by Lemma \\ref {lem:grad_l2} and the choice of \\gamma = \\sigma _{\\varepsilon } \\sqrt{{n}/(s + \\log d + t )}, we obtain{\\begin{@align}{1}{-1}\\Vert ^*_{} \\Vert _2\\lesssim \\sigma _{\\varepsilon } \\sqrt{\\frac{s+ t}{n}},\\end{@align}}with probability at least 1-e^{-t}.", "}Finally, we obtain an upper bound for the term \\Vert p_\\lambda ^{\\prime }(|\\beta ^*_| - a_0 \\lambda )_+ \\Vert _2.", "Since |\\beta ^*_j| \\ge (a_0 + a_1)\\lambda for any j\\in , we have p^{\\prime }_{\\lambda }(|\\beta ^*_j| - a_0\\lambda ) = 0.Combining the aforementioned inequalities to~(\\ref {contraction.inequality2}), we obtain{\\begin{@align*}{1}{-1}\\Vert \\hat{ \\beta }^{(T)} - \\beta ^* \\Vert _2\\lesssim \\delta ^{T-1} \\sigma _{\\varepsilon } \\sqrt{\\frac{s (\\log d + t)}{n}} + \\frac{\\sigma _{\\varepsilon }}{1-\\delta } \\sqrt{\\frac{s+\\log d +t}{n}}, \\end{@align*}}with probability at least 1-4e^{-t}.Setting T \\gtrsim \\frac{\\log \\lbrace \\log (d+ t)\\rbrace }{\\log (1/\\delta )} leads to the desired results in (\\ref {test}) and (\\ref {oracle1}).", "}\\end{@align*}}}}\\subsection {Proof of Lemmas}}\\subsubsection {Proof of Lemma~\\ref {ld lem:RSC}}\\begin{proof}The proof is a simplified version of the proof of Lemma~\\ref {lem:RSC}.In the following, we outline the slight difference of the proof.Let \\delta = ^{1/2}(\\beta - \\beta ^*) and _i = ^{-1/2}_i.Using the arguments from the beginning of the proof of Lemma~\\ref {lem:RSC} to~(\\ref {lem RSC EB}), it can be shown that \\lbrace (\\alpha )\\rbrace \\ge 3/4 provided that \\gamma \\ge 4\\sqrt{2} \\max \\lbrace \\sigma _{\\varepsilon }, 2A_1^2r \\rbrace , where (\\alpha ) is as defined in (\\ref {eq:RSC2.51}).Moreover, since d<n, by Cauchy-Schwarz inequality, we have{\\begin{@align*}{1}{-1}(\\Delta )&\\le \\frac{\\gamma }{r} \\bigg \\lbrace \\sup _{\\alpha \\in \\mathbb {B}(r)} \\frac{1}{n} \\sum _{i=1}^n\\langle e_i _i ,\\alpha \\rangle \\bigg \\rbrace \\\\&\\le \\frac{\\gamma }{rn} \\sup _{\\alpha \\in \\mathbb {B}(r)} \\Big |\\Big | \\sum _{i=1}^ne_i _i \\Big |\\Big |_2 \\cdot || \\alpha ||_2 \\\\&\\le \\frac{\\gamma }{r} \\sqrt{\\frac{d}{n}}.\\end{@align*}}Consequently, we have \\Delta \\le 1/4 with high probability provided that n \\gtrsim ({\\gamma }/{r})^2(d+t).", "Combining the above pieces finishes the proof.\\end{proof}}}}\\subsubsection {Proof of Lemma~\\ref {lem:RSC}}\\begin{proof}For notational convenience, throughout the proof we let \\delta = \\beta -\\beta ^*.Recall from Definition~\\ref {def:rsc} that \\mathbb {B}(r) = \\lbrace \\delta \\in ^d: \\Vert \\delta \\Vert _2 \\le r \\rbrace is a ball and (L)=\\lbrace \\delta : \\Vert \\delta \\Vert _1 \\le L\\Vert \\delta \\Vert _2 \\rbrace is an \\ell _1-cone.In the following proof, 
we will provide a lower bound for the symmetrized Bregman divergence (\\beta , \\beta ^* ) under the constraint \\beta \\in \\beta ^* + \\mathbb {B}(r) \\cap (L).\\end{proof}}We start by defining the events\\begin{equation}E_i(\\delta , r,\\gamma ) = \\lbrace | \\varepsilon _i | \\le \\gamma /2\\rbrace \\cap \\left\\lbrace | _i^{{} \\delta | \\le \\frac{\\gamma \\Vert \\delta \\Vert _2}{2r} }for \\right.i=1,\\ldots ,n.The symmetrized Bregman divergence can then be low bounded by\\begin{equation}\\begin{split}(\\beta , \\beta ^* ) & =\\frac{1}{n} \\sum _{i=1}^n\\bigl \\lbrace L^{\\prime }(\\varepsilon _i) - L^{\\prime } (\\varepsilon _i - _i^{{} \\delta ) \\bigr \\rbrace \\cdot _i^{{} \\delta \\\\& \\ge \\frac{1}{n} \\sum _{i=1}^n\\bigl \\lbrace L^{\\prime }(\\varepsilon _i) - L^{\\prime } (\\varepsilon _i - _i^{{} \\delta ) \\bigr \\rbrace \\cdot _i^{{} \\delta \\cdot \\mathbb {1}_{E_i(\\delta ,r,\\gamma )},}}where \\mathbb {1}_{E_i(\\delta ,r,\\gamma )} is an indicator function that takes value one when the event in (\\ref {rsc:event}) holds and zero otherwise.Thus, it suffices to obtain a lower bound on~(\\ref {RSC1}) for any \\beta \\in \\beta ^* + \\mathbb {B}(r) \\cap (L).", "}}Recall from Section~\\ref {Sec:Prelim} that L^{\\prime }(u) = \\gamma w_\\tau (u) \\ell ^{\\prime }(u/\\gamma ) with w_\\tau (u)= |\\tau - I(u<0)|.Conditioned on the event E_i(\\delta , r,\\gamma ), for any \\delta \\in \\mathbb {B}(r), we have|\\varepsilon _i| \\le \\gamma and |\\varepsilon _i - _i^{{} \\delta | \\le \\gamma /2 + \\gamma /2 = \\gamma .For notational convenience, let u_i = \\varepsilon _i and let v_i = \\varepsilon _i-_i^{ \\delta .", "Then, the term \\bigl \\lbrace L^{\\prime }(\\varepsilon _i) - L^{\\prime } (\\varepsilon _i - _i^{{} \\delta ) \\bigr \\rbrace \\cdot _i^{{} \\delta can be rewritten as \\lbrace L^{\\prime }(u_i) - L^{\\prime }(v_i) \\rbrace (u_i-v_i).In the following, we obtain a lower bound for the term \\lbrace L^{\\prime }(u_i) - L^{\\prime }(v_i) \\rbrace (u_i-v_i) for any u_i,v_i \\in [-\\gamma ,\\gamma ].Let \\kappa _1 = \\min _{|t|\\le 1} \\ell ^{\\prime \\prime }(t).", "To this end, we consider three possible cases:\\begin{enumerate}\\item [(i)] (u_iv_i=0).", "If v_i=0, we have \\lbrace L^{\\prime }(u_i) - L^{\\prime }(v_i) \\rbrace (u_i-v_i) \\ge \\gamma w_\\tau (u_i) \\lbrace \\ell ^{\\prime }(u_i/\\gamma ) - \\ell ^{\\prime }(0) \\rbrace u_i \\ge \\kappa _1 {\\tau } u_i^2, where the last inequality hold by the mean value theorem.Similarly if u_i=0, \\lbrace L^{\\prime }(u_i) - L^{\\prime }(v_i) \\rbrace (u_i-v_i) \\ge \\kappa _1 {\\tau } v_i^2.\\end{enumerate}\\item [(ii)] (u_iv_i>0).", "In this case, w_\\tau (u_i)=w_\\tau (v_i) and hence \\lbrace L^{\\prime }(u_i) - L^{\\prime }(v_i) \\rbrace (u_i-v_i) = \\gamma w_\\tau (u_i) \\lbrace \\ell ^{\\prime }(u_i/\\gamma ) - \\ell ^{\\prime }(v_i/\\gamma ) \\rbrace (u_i-v_i) \\ge \\kappa _1 {\\tau } (u_i-v_i)^2.", "}\\item [(iii)] (u_iv_i <0).", "In this case, we have either u>0, v<0 or u<0, v>0.", "For the former, \\lbrace L^{\\prime }(u_i) - L^{\\prime }(v_i) \\rbrace (u_i-v_i) = \\gamma \\lbrace \\tau \\ell ^{\\prime }(u_i/\\gamma ) - (1-\\tau ) \\ell ^{\\prime }(v_i/\\gamma ) \\rbrace (u_i-v_i) \\ge \\kappa _1 {\\tau } (u_i-v_i)^2, where the last inequality holds by the mean value theorem.", "The latter can be shown in a similar fashion.", "}Combining all three cases, we conclude that \\lbrace L^{\\prime }(u_i) - L^{\\prime }(v_i) \\rbrace (u_i-v_i)\\ge \\kappa _1 {\\tau } (u_i-v_i)^2 for all u_i, v_i\\in [-\\gamma , 
\\gamma ].", "Substituting this into (\\ref {RSC1}) yields{\\begin{@align}{1}{-1}(\\beta , \\beta ^* ) \\ge \\frac{ \\kappa _1 {\\tau } }{n} \\sum _{i=1}^n(_i^{{} \\delta )^2 \\mathbb {1}_{E_i(\\delta , r,\\gamma )}}\\end{@align}for any \\delta \\in \\mathbb {B}(r).", "}Next, we will derive a lower bound for (1/n)\\sum _{i=1}^n(_i^{{} \\delta )^2 \\mathbb {1}_{E_i(\\delta ,r, \\gamma )}, uniformly over \\delta \\in \\mathbb {B}(r).To this end, we smooth the discontinuous indicator function \\mathbb {1}_{E_i(\\delta ,r, \\gamma )} = \\mathbb {1}_{\\lbrace |_i^{{} \\delta | \\le \\gamma \\Vert \\delta \\Vert _2 /(2r) \\rbrace } \\cdot \\mathbb {1}_{\\lbrace |\\varepsilon _i| \\le \\gamma /2 \\rbrace } by a Lipschitz continuous function.", "Using similar ideas from the proof of Proposition~2 in \\cite {L2017}, for any R\\ge 0, we define the truncated squared function as\\varphi _R(u) = u^2 \\mathbb {1}(|u|\\le R/2) + (|u|-R)^2\\mathbb {1}(R/2 < |u| \\le R) , \\ \\ u \\in .It can be verified that the function \\varphi _R(\\cdot ) is R-Lipschitz continuous and satisfies the following:{\\begin{@align}{1}{-1} u^2\\mathbb {1}(|u|\\le R/2) \\le \\varphi _R(u) \\le \\min \\bigl \\lbrace u^2 \\mathbb {1}(|u| \\le R), (R/2)^2 \\bigr \\rbrace ~~\\mbox{ and }~~ \\varphi _{cR}(cu) = c^2 \\varphi _R(u)~\\mbox{ for any } c\\ge 0.\\end{@align}}It then follows from (\\ref {RSC2}) and (\\ref {RSC2.5}) that\\begin{equation}(\\beta , \\beta ^* ) \\ge \\kappa _1 {\\tau } \\,\\Vert \\delta \\Vert _2^2 \\cdot \\underbrace{ \\frac{1}{n} \\sum _{i=1}^n\\mathbb {1}(|\\varepsilon _i| \\le \\gamma /2) \\cdot \\varphi _{\\gamma /(2r)} (_i^{{{}} \\alpha ) }_{ =: (\\alpha ) }, ~~\\mbox{ where }~ \\alpha := \\delta / \\Vert \\delta \\Vert _2 \\in \\mathbb {S}^{d-1}.", "}\\end{equation}Next, we bound the random quantity (\\alpha ) from below.", "Let \\Delta = \\sup _{\\alpha \\in \\mathbb {S}^{d-1}}-(\\alpha )+\\lbrace (\\alpha )\\rbrace .Then, we have (\\alpha )\\ge \\lbrace (\\alpha )\\rbrace -\\Delta .", "It suffices to obtain a lower bound for \\lbrace (\\alpha )\\rbrace and an upper bound for the random fluctuation \\Delta .We start with obtaining a lower bound for \\lbrace (\\alpha )\\rbrace .", "}}Recall that A_1 is a constant that satisfies \\lbrace (^{ )^4\\rbrace \\le A_1^4 \\Vert \\Vert ^4_{} \\le \\lambda _u^2 A_1^4 \\Vert \\Vert ^4_2 for all \\in ^d.Applying the inequality in~(\\ref {RSC2.5}), for any \\alpha \\in \\mathbb {S}^{d-1}, we have{\\begin{@align}{1}{-1} \\lbrace (\\alpha ) \\rbrace &\\ge \\big \\lbrace (_i^{ \\alpha )^2 \\mathbb {1}(|\\varepsilon _i| \\le \\gamma /2) \\mathbb {1}(|_i^{ \\alpha | \\le \\gamma /{4r})\\big \\rbrace \\nonumber \\\\&\\ge \\Big [(_i^{\\alpha )^2 \\big \\lbrace 1-\\mathbb {1}(|_i^{\\alpha |> \\gamma / 4r) - \\mathbb {1}(|\\varepsilon _i|>\\gamma /2) \\big \\rbrace \\Big ] \\nonumber \\\\&\\ge 1-({4r}/{\\gamma })^2 (_i^{\\alpha )^4 - \\Big [(_i^{\\alpha )^2 \\big \\lbrace ({2|\\varepsilon _i|}/{\\gamma })^2|_i \\big \\rbrace \\Big ] \\nonumber \\\\&\\ge 1 - ({4r}/{\\gamma })^2 \\lambda _u^2 A_1^4 - ({2}/{\\gamma })^2 \\sigma ^2_\\varepsilon \\lambda _u}}Provided \\gamma \\ge 4 \\sqrt{2} \\lambda _u \\max \\lbrace \\sigma _\\varepsilon , 2A_1^2r \\rbrace , we obtain \\lbrace (\\alpha )\\rbrace \\ge 3/4.", "}Next, we obtain an upper bound for \\Delta = \\sup _{\\alpha \\in \\mathbb {S}^{d-1}}-(\\alpha )+\\lbrace (\\alpha )\\rbrace .Applying the inequality in (\\ref {RSC2.5}) on \\varphi _{\\gamma / (2r)}(\\cdot ), we have (\\alpha ) \\le (\\gamma / 4r)^2.", "Applying Theorem 
7.3 in \\cite {B2003} and the inequality ab \\le a^2/4 + b^2 , for any t \\ge 0, we obtain{\\begin{@align}{1}{-1} \\Delta &\\le (\\Delta ) + \\sqrt{\\frac{\\gamma ^2 t}{4r^2 n}( \\Delta )} + \\lambda _u A_1^2 \\sqrt{\\frac{2t}{n}} + \\frac{\\gamma ^2}{48r^2} \\cdot \\frac{t}{n} \\nonumber \\\\&\\le 1.25 (\\Delta ) + \\lambda _u A_1^2 \\sqrt{\\frac{2t}{n}} + \\frac{\\gamma ^2}{3r^2} \\cdot \\frac{t}{n},\\end{@align}}with probability at least 1-e^{-t}.", "}It remains to bound (\\Delta ).", "Let _i(\\alpha )= \\mathbb {1}(|\\varepsilon _i| \\le \\gamma /2) \\cdot \\varphi _{\\gamma /(2r)} (_i^{{{}} \\alpha ) and note that (\\Delta ) = \\big [ \\sup _{\\alpha \\in \\mathbb {S}^{d-1}} \\big \\lbrace -({1}/{n}) \\sum _{i=1}^n_i(\\alpha ) +({1}/{n}) \\sum _{i=1}^n_i(\\alpha ) \\big \\rbrace \\big ].", "By the symmetrization inequality for empirical process, ( \\Delta ) \\le 2\\big \\lbrace \\sup _{\\alpha \\in \\mathbb {S}^{d-1}} ({1}/{n}) \\sum _{i=1}^ne_i _i(\\alpha ) \\big \\rbrace , where e_1,\\dots ,e_n are independent Rademacher random variables.Recall that _1= \\lbrace \\delta : \\Vert \\delta _{^{{\\rm c}}} \\Vert _1 \\le 3\\Vert \\delta _{} \\Vert _1\\rbrace where =\\mathrm {supp}(\\beta ^*).", "For all \\beta \\in \\beta ^* + \\mathbb {B}(r) \\cap _1, we have \\Vert \\beta -\\beta ^* \\Vert _1 \\le 4 \\Vert (\\beta - \\beta ^*)_{} \\Vert _1 \\le 4s^{1/2}\\Vert \\beta -\\beta ^* \\Vert _2.Since _i(\\alpha ) is {\\gamma }/{(2r)}-Lipschitz, applying the Talagrand^{\\prime }s contraction principle \\cite {LT1991} and Holder^{\\prime }s inequality, we have{\\begin{@align}{1}{-1}(\\Delta )&\\le \\frac{\\gamma }{r} \\Bigg \\lbrace \\sup _{\\beta \\in \\beta ^* + \\mathbb {B}(r) \\cap _1} \\frac{1}{n} \\sum _{i=1}^n\\Big \\langle e_i _i ,\\frac{\\beta - \\beta ^*}{\\Vert \\beta -\\beta ^*\\Vert _2} \\Big \\rangle \\Bigg \\rbrace \\nonumber \\\\&\\le \\frac{\\gamma }{rn} 4s^{1/2} \\left\\Vert \\sum _{i=1}^ne_i _i \\right\\Vert _\\infty .", "\\end{@align}}}}Let S_j=\\sum _{i=1}^ne_i x_{ij} for j=1,\\dots ,d. 
It remains to bound $\mathbb{E}\Vert\sum_{i=1}^ne_i\boldsymbol{x}_i\Vert_\infty=\mathbb{E}(\max_j|S_j|)$. Since $\boldsymbol{x}$ is sub-exponential by Condition~\ref{cond:covariates}, we have $\mathbb{P}(|x_{ij}|\ge\nu_0\sigma_{jj}^{1/2}t)\le e^{-t}$. Consequently, we obtain
\begin{equation*}
\mathbb{E}(|e_ix_{ij}|^k)\le\int_0^\infty\mathbb{P}(|x_{ij}|^k\ge t)\,{\rm d}t\le k!\,\nu_0^k\sigma_{jj}^{k/2}\quad\text{for all }k\ge2.
\end{equation*}
Along with the fact that $\mathbb{E}(e_ix_{ij})=0$, for any $0\le\lambda\le(\nu_0\sigma_{\boldsymbol{x}})^{-1}$, where $\sigma_{\boldsymbol{x}}=\max_{1\le j\le d}\sigma_{jj}^{1/2}$, the moment generating function of $e_ix_{ij}$ can be upper bounded by
\begin{equation*}
\mathbb{E}\bigl(e^{\lambda e_ix_{ij}}\bigr)\le1+\sum_{k\ge2}\frac{\lambda^{k}}{k!}\,\mathbb{E}|e_ix_{ij}|^{k}\le1+\sum_{k\ge2}(\nu_0\sigma_{jj}^{1/2}\lambda)^k\le1+\frac{\nu_0^2\sigma_{\boldsymbol{x}}^2\lambda^2}{1-\nu_0\sigma_{\boldsymbol{x}}\lambda}.
\end{equation*}
Using the inequality $\log(1+x)\le x$ for all $x>0$, we have $\log\lbrace\mathbb{E}(e^{\lambda S_j})\rbrace\le\sum_{i=1}^n\log\lbrace1+\nu_0^2\sigma_{\boldsymbol{x}}^2\lambda^2/(1-\nu_0\sigma_{\boldsymbol{x}}\lambda)\rbrace\le(2n\nu_0^2\sigma^2_{\boldsymbol{x}}\lambda^2)/\lbrace2(1-\nu_0\sigma_{\boldsymbol{x}}\lambda)\rbrace$ for any $1\le j\le d$ and $0\le\lambda\le(\nu_0\sigma_{\boldsymbol{x}})^{-1}$. Consequently, $S_1,\ldots,S_d$ are sub-gamma $\Gamma_{+}(v,c)$ with $v=2n\nu_0^2\sigma^2_{\boldsymbol{x}}$ and $c=\nu_0\sigma_{\boldsymbol{x}}$. Applying Corollary 2.6 in \cite{BLM2013}, we obtain
\begin{equation}\label{lem RSC partial 3b2}
\mathbb{E}\left\Vert\sum_{i=1}^ne_i\boldsymbol{x}_i\right\Vert_\infty=\mathbb{E}\Bigl(\max_j|S_j|\Bigr)\le\sqrt{2v\log2d}+c\log2d=\nu_0\sigma_{\boldsymbol{x}}\Bigl(2\sqrt{n\log2d}+\log2d\Bigr).
\end{equation}
Combining (\ref{lem RSC partial 2}), (\ref{lem RSC partial 3b1}), and (\ref{lem RSC partial 3b2}), we obtain
\begin{equation*}
\Delta\le5s^{1/2}\,\frac{\gamma\nu_0\sigma_{\boldsymbol{x}}}{r}\Biggl(2\sqrt{\frac{\log2d}{n}}+\frac{\log2d}{n}\Biggr)+\lambda_uA_1^2\sqrt{\frac{2t}{n}}+\frac{\gamma^2}{3r^2}\frac{t}{n},
\end{equation*}
with probability at least $1-e^{-t}$. Provided that $n\gtrsim(\sigma_{\boldsymbol{x}}\nu_0\gamma/r)^2s(\log d+t)$, we have $\Delta\le1/8$ with probability at least $1-e^{-t}$. Putting all the pieces together, as long as $\gamma\ge4\sqrt{2}\lambda_u\max\lbrace\sigma_{\varepsilon},2A_1^2r\rbrace$ and $n\gtrsim(\sigma_{\boldsymbol{x}}\nu_0\gamma/r)^2(s\log d+t)$, the following bound holds uniformly over $\beta\in\beta^*+\mathbb{B}(r)\cap\mathcal{C}_1$ with probability at least $1-e^{-t}$:
\begin{equation*}
D(\beta,\beta^*)\ge\frac{1}{2}\kappa_1\underline{\tau}\,\Vert\beta-\beta^*\Vert_2^2.
\end{equation*}
The final result is obtained by replacing $4s^{1/2}$ by $L$.
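As an aside, the sub-gamma maximal inequality invoked above is easy to probe numerically. The following Monte Carlo is our own illustration, not part of the paper: it uses Laplace(1) covariates, for which the sub-exponential condition holds with $\nu_0=1/\sqrt{2}$ and $\sigma_{jj}=2$, and compares the empirical mean of $\max_j|S_j|$ with the bound $\nu_0\sigma_{\boldsymbol{x}}(2\sqrt{n\log 2d}+\log 2d)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, reps = 200, 500, 200

# Laplace(1) covariates: P(|x_ij| >= t) = exp(-t), so the sub-exponential condition
# P(|x_ij| >= nu0 * sigma_jj^{1/2} * t) <= exp(-t) holds with sigma_jj = 2, nu0 = 1/sqrt(2).
nu0, sigma_x = 1 / np.sqrt(2), np.sqrt(2)

vals = []
for _ in range(reps):
    x = rng.laplace(scale=1.0, size=(n, d))
    e = rng.choice([-1.0, 1.0], size=(n, 1))   # Rademacher signs
    S = (e * x).sum(axis=0)                     # S_j = sum_i e_i x_ij
    vals.append(np.max(np.abs(S)))

empirical = float(np.mean(vals))
bound = nu0 * sigma_x * (2 * np.sqrt(n * np.log(2 * d)) + np.log(2 * d))
print(f"E max_j |S_j| ~= {empirical:.1f}   sub-gamma bound = {bound:.1f}")
```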
\subsection{Proof of Propositions}
\subsubsection{Proof of Proposition \ref{prop:further steps}}
\begin{proof}
We start by obtaining an upper bound for $\hat{\beta}^{(1)}$, obtained by solving~(\ref{retire.est.convex}), or equivalently solving~(\ref{general.lasso2}), with the initial estimator $\hat{\beta}^{(0)}=\textbf{0}$ and $\boldsymbol{\lambda}^{(0)}=p^{\prime}_{\lambda}(\textbf{0})=(\lambda,\ldots,\lambda)^{\top}$. Conditioned on the event $\mathcal{E}_{\rm rsc}(r,L_0,\kappa)\cap\mathcal{E}_{\rm score}(\lambda)$ with $L_0=6s^{1/2}$, and from the proof of Lemma~\ref{lem:deterministic.error.bound} with parameters $r,\kappa,\lambda>0$ such that $r>2.5\kappa^{-1}s^{1/2}\lambda$, any solution $\hat{\beta}^{(1)}$ to~(\ref{general.lasso2}) satisfies
\begin{equation*}
\Vert\hat{\beta}^{(1)}-\beta^*\Vert_2\le2.5\,\kappa^{-1}s^{1/2}\lambda.
\end{equation*}

We now continue to establish an upper bound on the estimation error of the subsequent estimators $\hat{\beta}^{(t)}$ for $t\ge2$. For $t=1,2,\ldots$, we first construct a series of augmented sets
\begin{equation*}
\mathcal{A}_{t}=\mathcal{S}\cup\bigl\lbrace1\le j\le d:\lambda_j^{(t-1)}<p_0^{\prime}(a_0)\lambda\bigr\rbrace.
\end{equation*}
Let $c>0$ be a constant such that $0.5\,p_0^{\prime}(a_0)(c^2+1)^{1/2}+2=c\kappa a_0$. In the following, using mathematical induction, we will show that the cardinality of $\mathcal{A}_{t}$ can be upper bounded as
\begin{equation}\label{eq:Acard}
|\mathcal{A}_{t}|\le(c^2+1)s.
\end{equation}
For $t=1$, the inequality holds trivially, i.e., $|\mathcal{A}_1|=|\mathcal{S}|=s\le(c^2+1)s$. Now, assume that~(\ref{eq:Acard}) holds for some integer $t\ge1$. We aim to show that $|\mathcal{A}_{t+1}|\le(c^2+1)s$. To this end, we first obtain an upper bound on the cardinality of the set $\mathcal{A}_{t+1}\setminus\mathcal{S}$. Since $p^{\prime}_{\lambda}(\cdot)$ is monotonically decreasing on $\mathbb{R}^+$, by the definition of $\mathcal{A}_{t+1}$, for each $j\in\mathcal{A}_{t+1}\setminus\mathcal{S}$ we have $p^{\prime}_{\lambda}(|\hat{\beta}_j^{(t)}|)=\lambda_j^{(t)}\le p_0^{\prime}(a_0)\lambda=p^{\prime}_{\lambda}(a_0\lambda)$, which implies $|\hat{\beta}_j^{(t)}|\ge a_0\lambda$. Moreover, the monotonicity of $p^{\prime}_{\lambda}(\cdot)$ on $\mathbb{R}^+$ and the definition of $\mathcal{A}_t$ imply that $\Vert\boldsymbol{\lambda}^{(t-1)}\Vert_{\infty}=\Vert p^{\prime}_{\lambda}(|\hat{\beta}^{(t-1)}|)\Vert_{\infty}\le\Vert p^{\prime}_{\lambda}(\textbf{0})\Vert_{\infty}=\lambda$ and $\Vert\boldsymbol{\lambda}^{(t-1)}_{\mathcal{A}_{t}^{\rm c}}\Vert_{\min}\ge p_0^{\prime}(a_0)\lambda$, respectively.

Conditioned on the event $\mathcal{E}_{\rm rsc}(r,L,\kappa)\cap\lbrace p_0^{\prime}(a_0)\lambda\ge2\Vert\nabla\widehat{\mathcal{L}}_n(\beta^*)-\nabla\mathcal{L}(\beta^*)\Vert_\infty\rbrace$ with $L=\lbrace2+2/p_0^{\prime}(a_0)\rbrace(c^2+1)^{1/2}s^{1/2}+2s^{1/2}/p_0^{\prime}(a_0)$, it follows from the proof of Lemma \ref{lem:deterministic.error.bound} that
\begin{align}
\Vert\hat{\beta}^{(t)}-\beta^*\Vert_2&\le\frac{\Vert\boldsymbol{\lambda}^{(t-1)}_{\mathcal{S}}\Vert_2+\Vert\boldsymbol{\omega}^*_{\mathcal{A}_{t}}\Vert_2+\Vert\nabla\mathcal{L}(\beta^*)\Vert_2}{\kappa}\label{furstep.ineq.basic}\\
&\le\frac{\bigl\lbrace0.5\,p_{0}^{\prime}(a_0)(c^2+1)^{1/2}+2\bigr\rbrace s^{1/2}\lambda}{\kappa}=ca_0s^{1/2}\lambda=r^{\rm crude}<r,\label{furstep.ineq.1}
\end{align}
where $\boldsymbol{\omega}^*=\nabla\widehat{\mathcal{L}}_n(\beta^*)-\nabla\mathcal{L}(\beta^*)$ denotes the centered score. Along with the fact that $\beta^*_j=0$ for all $j\in\mathcal{A}_{t+1}\setminus\mathcal{S}$, we obtain
\begin{equation}\label{furstep.ineq.2.5}
|\mathcal{A}_{t+1}\setminus\mathcal{S}|^{1/2}=\Vert\mathbf{1}_{\mathcal{A}_{t+1}\setminus\mathcal{S}}\Vert_2\le\biggl\Vert\Bigl(\frac{\hat{\beta}^{(t)}}{a_0\lambda}\Bigr)_{\mathcal{A}_{t+1}\setminus\mathcal{S}}\biggr\Vert_2\le\frac{1}{a_0\lambda}\bigl\Vert(\hat{\beta}^{(t)}-\beta^*)_{\mathcal{A}_{t+1}\setminus\mathcal{S}}\bigr\Vert_2\le cs^{1/2},
\end{equation}
where the last inequality holds by applying~(\ref{furstep.ineq.1}). Therefore $|\mathcal{A}_{t+1}|=|\mathcal{A}_{t+1}\setminus\mathcal{S}|+|\mathcal{S}|\le(c^2+1)s$.
By induction, $|\mathcal{A}_{t}|\le(c^2+1)s$ holds for all $t\ge1$. Consequently,~(\ref{furstep.ineq.basic}) holds for all $t\ge1$.

We note that the upper bound~(\ref{furstep.ineq.1}) is not sharp and is mainly derived for proving~(\ref{furstep.ineq.2.5}). We now derive a sharper upper bound for $\hat{\beta}^{(t)}$ by controlling the terms $\Vert\boldsymbol{\lambda}^{(t-1)}_{\mathcal{S}}\Vert_2$ and $\Vert\boldsymbol{\omega}^*_{\mathcal{A}_{t}}\Vert_2$ more carefully. We start by providing a tighter upper bound for $\Vert\boldsymbol{\lambda}^{(t-1)}_{\mathcal{S}}\Vert_2$. For each $j\in\mathcal{S}$, we consider the following two cases: (i) if $|\hat{\beta}^{(t-1)}_j-\beta^*_j|\ge a_0\lambda$, then the inequality $\lambda_j^{(t-1)}\le\lambda\le a_0^{-1}|\hat{\beta}^{(t-1)}_j-\beta^*_j|$ holds trivially; (ii) if $|\hat{\beta}^{(t-1)}_j-\beta^*_j|<a_0\lambda$, then along with the minimal signal strength condition $\Vert\beta^*_{\mathcal{S}}\Vert_{\min}\ge a_0\lambda$ and the monotonicity of $p^{\prime}_{\lambda}(\cdot)$ on $\mathbb{R}^{+}$, we have $0\le|\beta^*_j|-a_0\lambda\le|\hat{\beta}^{(t-1)}_j|$, and thus $\lambda_j^{(t-1)}=p^{\prime}_{\lambda}(|\hat{\beta}^{(t-1)}_j|)\le p^{\prime}_{\lambda}\lbrace(|\beta^*_j|-a_0\lambda)_+\rbrace$. Combining the two cases above, we obtain
\begin{equation}\label{careful.control.1}
\Vert\boldsymbol{\lambda}^{(t-1)}_{\mathcal{S}}\Vert_2\le\bigl\Vert p^{\prime}_{\lambda}\lbrace(|\beta^*_{\mathcal{S}}|-a_0\lambda)_+\rbrace\bigr\Vert_2+a_0^{-1}\Vert(\hat{\beta}^{(t-1)}-\beta^*)_{\mathcal{S}}\Vert_2.
\end{equation}

We now obtain an upper bound for $\Vert\boldsymbol{\omega}^*_{\mathcal{A}_{t}}\Vert_2$. Since $\mathcal{A}_{t}=\mathcal{S}\cup(\mathcal{A}_{t}\setminus\mathcal{S})$, we have
\begin{align}
\Vert\boldsymbol{\omega}^*_{\mathcal{A}_{t}}\Vert_2&\le\Vert\boldsymbol{\omega}^*_{\mathcal{S}}\Vert_2+\Vert\boldsymbol{\omega}^*_{\mathcal{A}_{t}\setminus\mathcal{S}}\Vert_2\nonumber\\
&\le\Vert\boldsymbol{\omega}^*_{\mathcal{S}}\Vert_2+|\mathcal{A}_{t}\setminus\mathcal{S}|^{1/2}\Vert\boldsymbol{\omega}^*\Vert_{\infty}\nonumber\\
&\le\Vert\boldsymbol{\omega}^*_{\mathcal{S}}\Vert_2+\frac{p_0^{\prime}(a_0)}{2a_0}\Vert(\hat{\beta}^{(t-1)}-\beta^*)_{\mathcal{A}_{t}\setminus\mathcal{S}}\Vert_2\label{careful.control.2}\\
&\le\Vert\boldsymbol{\omega}^*_{\mathcal{S}}\Vert_2+\frac{1}{2a_0}\Vert(\hat{\beta}^{(t-1)}-\beta^*)_{\mathcal{A}_{t}\setminus\mathcal{S}}\Vert_2,\label{careful.control.3}
\end{align}
where (\ref{careful.control.2}) holds by applying~(\ref{furstep.ineq.2.5}) together with the conditioning $\Vert\boldsymbol{\omega}^*\Vert_\infty\le p_0^{\prime}(a_0)\lambda/2$, and (\ref{careful.control.3}) holds from the fact that $p_{0}^{\prime}(a_0)\le1$.

Putting (\ref{furstep.ineq.basic}), (\ref{careful.control.1}), and (\ref{careful.control.3}) together, and applying the inequality $\sqrt{a}+\sqrt{b/4}\le\sqrt{5(a+b)/4}$ for $a,b\ge0$, we obtain
\begin{align}\label{final.ineq.1}
\Vert\hat{\beta}^{(t)}-\beta^*\Vert_2&\le\frac{\Vert\boldsymbol{\lambda}^{(t-1)}_{\mathcal{S}}\Vert_2+\Vert\boldsymbol{\omega}^*_{\mathcal{A}_{t}}\Vert_2+\Vert\nabla\mathcal{L}(\beta^*)\Vert_2}{\kappa}\nonumber\\
&\le\frac{\bigl\Vert p^{\prime}_{\lambda}\lbrace(|\beta^*_{\mathcal{S}}|-a_0\lambda)_+\rbrace\bigr\Vert_2+\Vert\boldsymbol{\omega}^*_{\mathcal{S}}\Vert_2+\Vert\nabla\mathcal{L}(\beta^*)\Vert_2}{\kappa}+\frac{\sqrt{5}}{2a_0\kappa}\Vert(\hat{\beta}^{(t-1)}-\beta^*)_{\mathcal{A}_{t}}\Vert_{2}\nonumber\\
&\le\frac{\bigl\Vert p^{\prime}_{\lambda}\lbrace(|\beta^*_{\mathcal{S}}|-a_0\lambda)_+\rbrace\bigr\Vert_2+\Vert\boldsymbol{\omega}^*_{\mathcal{S}}\Vert_2+\Vert\nabla\mathcal{L}(\beta^*)\Vert_2}{\kappa}+\delta\Vert\hat{\beta}^{(t-1)}-\beta^*\Vert_2
\end{align}
for all $t\ge2$. The result in (\ref{contraction.inequality2}) can then be obtained by applying~(\ref{final.ineq.1}) iteratively.
\end{proof}
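The multi-stage scheme analyzed in this proposition can be summarized in a few lines of code. The sketch below is only an illustration under our own choices, not the authors' implementation: it assumes a SCAD-type derivative for $p^{\prime}_{\lambda}$, uses a plain proximal-gradient inner solver for each weighted $\ell_1$ problem, and adopts its own scaling conventions for the asymmetric Huber loss.

```python
import numpy as np

def retire_grad(beta, X, y, tau, gam):
    """Gradient of the empirical asymmetric Huber (expectile-Huber) loss."""
    r = y - X @ beta
    w = np.where(r < 0, 1 - tau, tau)          # w_tau(u) = |tau - 1(u<0)|
    psi = w * np.clip(r, -gam, gam)            # score: w_tau(u) * sign(u) * min(|u|, gam)
    return -X.T @ psi / len(y)

def weighted_lasso(X, y, lam_vec, tau, gam, beta0, n_iter=500):
    """Proximal gradient for (1/n) sum L_{tau,gam}(y_i - x_i'beta) + sum_j lam_j |beta_j|."""
    n = len(y)
    step = 1.0 / (max(tau, 1 - tau) * np.linalg.norm(X, 2) ** 2 / n)
    beta = beta0.copy()
    for _ in range(n_iter):
        z = beta - step * retire_grad(beta, X, y, tau, gam)
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam_vec, 0.0)  # soft-threshold
    return beta

def scad_deriv(t, lam, a=3.7):
    """SCAD derivative p'_lam(t): one possible folded-concave choice (an assumption here)."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

def multistage_retire(X, y, tau=0.7, gam=2.0, lam=0.1, T=3):
    d = X.shape[1]
    lam_vec = np.full(d, lam)                  # lambda^{(0)} = p'_lam(0) = (lam, ..., lam)
    beta = np.zeros(d)
    for _ in range(T):
        beta = weighted_lasso(X, y, lam_vec, tau, gam, beta)
        lam_vec = scad_deriv(beta, lam)        # lambda_j^{(t)} = p'_lam(|beta_j^{(t)}|)
    return beta

# Toy example with a sparse signal.
rng = np.random.default_rng(1)
n, d, s = 300, 100, 5
X = rng.standard_normal((n, d))
beta_star = np.zeros(d); beta_star[:s] = 1.5
y = X @ beta_star + rng.standard_normal(n)
print(np.round(multistage_retire(X, y)[:8], 2))
```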
\\beta _\\gamma ^* and the mean value theorem indicate respectively that \\nabla (\\beta _\\gamma ^*) = \\mathbf {0}, and{\\begin{@align}{1}{-1}\\delta ^{ \\nabla ^2(\\tilde{\\beta }_\\gamma ^*) \\delta &=\\langle \\nabla (\\beta ^*) -\\nabla (\\beta _\\gamma ^*), \\delta \\rangle \\nonumber \\\\&=\\langle \\nabla (\\beta ^*) , \\delta \\rangle =-\\frac{1}{n} \\sum _{i=1}^n \\lbrace w_\\tau (\\varepsilon _i) \\ell _\\gamma ^{\\prime }(\\varepsilon _i)_i^{\\delta \\rbrace ,}}where \\tilde{\\beta }_\\gamma ^* = \\lambda \\beta ^* + (1-\\lambda ) \\beta _\\gamma ^* for some 0\\le \\lambda \\le 1.\\end{@align}}\\end{proof}We start with an upper bound on the right-hand side of (\\ref {ld prop 1a}).By the fact that \\lbrace w_\\tau (\\varepsilon )\\varepsilon |\\rbrace = 0 and |\\ell ^{\\prime }(u) - u| \\le u^2, we have \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime }(\\varepsilon )|\\rbrace \\le [ \\gamma w_\\tau (\\varepsilon )\\lbrace \\ell ^{\\prime }(\\varepsilon /{\\gamma }) - {\\varepsilon }/{\\gamma } \\rbrace |] \\le \\bar{\\tau } \\sigma _\\varepsilon ^2 / \\gamma .", "Consequently{\\begin{@align}{1}{-1}\\lbrace w_\\tau (\\varepsilon _i) \\ell _\\gamma ^{\\prime }(\\varepsilon _i)_i^{\\delta \\rbrace \\le |^{ \\delta |\\cdot \\bar{\\tau } \\sigma _\\varepsilon ^2 / \\gamma \\le || ^{1/2} \\delta ||_2 \\cdot \\bar{\\tau } \\sigma _\\varepsilon ^2 / \\gamma .", "}}\\end{@align}}Next, we obtain a lower bound for \\delta ^{ \\nabla ^2(\\tilde{\\beta }_\\gamma ^*) \\delta .Let L_{\\tau ,\\infty } (\\cdot ) be the resulting asymmetric \\ell _2 loss when taking \\gamma = \\infty in L_{ \\tau , \\gamma }(\\cdot ).Moreover, let _\\infty (\\beta ) = \\lbrace L_{\\tau ,\\infty }(y - ^{ \\beta )\\rbrace .Since (\\cdot ) is convex and minimized at \\beta _\\gamma ^*, we have(\\tilde{\\beta }_\\gamma ^*) \\le \\lambda (\\beta ^*) + (1-\\lambda )(\\beta _\\gamma ^*) \\le (\\beta ^*) \\le _\\infty (\\beta ^*) \\le \\bar{\\tau }\\sigma _\\varepsilon ^2 / 2.On the other hand, by the definition of Huber loss, for all \\beta \\in ^d, we have(\\beta ) \\ge n^{-1} \\sum _{i=1}^n w_\\tau (y_i - _i^{ \\beta )(\\gamma |y_i - _i^{ \\beta | - \\gamma ^2/2)\\mathbb {1}(|y_i - _i^{ \\beta | > \\gamma ).Let \\tilde{\\varepsilon _i} = y_i - _i^{ \\tilde{\\beta }_\\gamma ^*.Combining the above inequalities, we have{\\begin{@align*}{1}{-1}\\frac{\\gamma }{n} \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i}) |\\tilde{\\varepsilon _i}| \\mathbb {1}(|\\tilde{\\varepsilon _i}|>\\gamma )\\rbrace &\\le \\frac{\\gamma ^2}{2n} \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i}) \\mathbb {1}(|\\tilde{\\varepsilon _i}|>\\gamma )\\rbrace + \\frac{\\bar{\\tau } \\sigma _\\varepsilon ^2}{2} \\\\&\\le \\frac{\\gamma }{2n} \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i}) |\\tilde{\\varepsilon _i}| \\mathbb {1}(|\\tilde{\\varepsilon _i}|>\\gamma )\\rbrace + \\frac{\\bar{\\tau } \\sigma _\\varepsilon ^2}{2},\\end{@align*}}which further implies that{\\begin{@align}{1}{-1}\\frac{1}{n} \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i})\\mathbb {1}(|\\tilde{\\varepsilon _i}|>\\gamma )\\rbrace \\le \\frac{1}{n\\gamma } \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i}) |\\tilde{\\varepsilon _i}| \\mathbb {1}(|\\tilde{\\varepsilon _i}|>\\gamma ) \\rbrace \\le \\frac{\\bar{\\tau }\\sigma _\\varepsilon ^2}{\\gamma ^2}.\\end{@align}}}}}}}Moreover, note that \\nabla ^2 (\\tilde{\\beta }_\\gamma ^*) = n^{-1} \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i})_i _i^{\\rbrace - n^{-1} 
\\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i})\\mathbb {1}(|\\tilde{\\varepsilon _i}| > \\gamma ) _i _i^{\\rbrace .", "It then follows from the Cauchy–Schwarz inequality and (\\ref {ld prop 1c}) that{\\begin{@align*}{1}{-1}\\delta ^{ \\nabla ^2 (\\tilde{\\beta }_\\gamma ^*) \\delta &=\\frac{1}{n} \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i}) (\\delta ^{ _i)^2 \\rbrace -\\frac{1}{n} \\sum _{i=1}^n \\lbrace w_\\tau (\\tilde{\\varepsilon _i})\\mathbb {1}(|\\tilde{\\varepsilon _i}| > \\gamma ) (_i^{ \\delta )^2\\rbrace \\\\&\\ge \\underline{\\tau } || ^{1/2}\\delta ||_2^2 - \\Bigg \\lbrace \\frac{1}{n} \\sum _{i=1}^n w_\\tau ^2(\\tilde{\\varepsilon _i})\\mathbb {1}^2(|\\tilde{\\varepsilon _i}| > \\gamma ) \\Bigg \\rbrace ^{1/2} \\Bigg \\lbrace \\frac{1}{n} \\sum _{i=1}^n (_i^{ \\delta )^4 \\Bigg \\rbrace ^{1/2} \\\\&\\ge \\underline{\\tau } || ^{1/2}\\delta ||_2^2 - \\Bigg \\lbrace \\frac{1}{n} \\sum _{i=1}^n \\bar{\\tau } w_\\tau (\\tilde{\\varepsilon _i})\\mathbb {1}(|\\tilde{\\varepsilon _i}| > \\gamma ) \\Bigg \\rbrace ^{1/2} \\Bigg \\lbrace \\frac{1}{n} \\sum _{i=1}^n \\langle ^{1/2}\\delta , ^{-1/2}_i \\rangle ^4 \\Bigg \\rbrace ^{1/2} \\\\&\\ge \\underline{\\tau } || ^{1/2}\\delta ||_2^2 - \\frac{\\bar{\\tau } \\sigma _\\varepsilon }{\\gamma }A_1^2|| ^{1/2}\\delta ||_2^2.", "}}Picking \\gamma \\ge 2\\sigma _\\varepsilon A_1^2 \\bar{\\tau }/\\underline{\\tau }, we have{\\begin{@align}{1}{-1}\\delta ^{ \\nabla ^2 (\\tilde{\\beta }_\\gamma ^*) \\delta \\ge \\underline{\\tau } || ^{1/2}\\delta ||_2^2 /2.", "}\\end{@align}}}}Putting together (\\ref {ld prop 1a}), (\\ref {ld prop 1b}), and (\\ref {ld prop 1d}) completes the proof.\\end{@align*}}\\section {Proof of Technical Lemmas~\\ref {ld lem:B1B2}--\\ref {lem:deterministic.error.bound}}}}\\subsection {Proof of Lemma~\\ref {ld lem:B1B2}}\\begin{proof}The proof is a simplified version of the proof of Lemmas \\ref {lem:grad} and \\ref {lem:grad_l2}, and is thus omitted.\\end{proof}}\\end{@align*}\\subsection {Proof of Lemma~\\ref {ld lem:huber}}\\begin{proof}\\end{proof}We start with obtaining an upper bound for \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime }(\\varepsilon )\\rbrace .", "Denote \\ell (\\cdot ) as the Huber loss with \\gamma =1.", "By the fact that |\\ell ^{\\prime }(u) - u|\\le |u|^3 for all u \\in , we have{\\begin{@align*}{1}{-1}\\big | \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime }(\\varepsilon ) \\rbrace \\big |=\\big | \\gamma w_\\tau (\\varepsilon ) \\big \\lbrace \\ell ^{\\prime }(\\varepsilon /\\gamma ) - (\\varepsilon /\\gamma ) \\big \\rbrace \\big |\\le \\bar{\\tau } \\gamma ^{-2} |\\varepsilon |^3 = \\bar{\\tau } \\gamma ^{-2} v_3 .\\end{@align*}}}}Turning to \\big \\lbrace w_\\tau (\\varepsilon )\\ell _\\gamma ^{\\prime }(\\varepsilon ) \\big \\rbrace ^2, note that \\ell _\\gamma ^{\\prime }(\\varepsilon )^2 = \\sigma _\\varepsilon ^2 - \\varepsilon ^2 \\mathbb {1}(|\\varepsilon | > \\gamma ) + \\gamma ^2 (|\\varepsilon | > \\gamma ).", "By Markov^{\\prime }s inequality, ( \\varepsilon ^2 - \\gamma ^2 ) \\mathbb {1}(|\\varepsilon | > \\gamma ) \\le \\gamma ^{-1} | \\varepsilon |^3 = \\gamma ^{-1} \\nu _3.", "Combining this with the fact that \\underline{\\tau }\\le w_\\tau (\\varepsilon ) \\le \\bar{\\tau } and |\\ell _\\gamma ^{\\prime }(\\varepsilon )| \\le |\\varepsilon | completes the proof.", "}\\end{@align*}}$" ], [ "Proof of Lemma ", "We start with an upper bound for the term $\\Vert \\nabla (\\beta ^*) \\Vert _2 =\\sup _{\\in \\mathbb {S}^{d-1}} \\lbrace 
L^{\\prime }(\\varepsilon ) ^{ \\rbrace .Under Condition~\\ref {def:general.loss} on \\ell (\\cdot ) and Condition~\\ref {cond:randomnoise} on the random noise \\varepsilon , we have ( \\varepsilon ^2 | ) \\le \\sigma _{\\varepsilon }^2 and |\\ell ^{\\prime }(u)-u |\\le u^2.Since [w_\\tau (\\varepsilon ) \\varepsilon |]= 0 and L^{\\prime }(\\varepsilon ) = \\gamma w_\\tau (\\varepsilon ) \\ell ^{\\prime }(\\varepsilon /\\gamma ), we have{\\begin{@align*}{1}{-1}\\big |\\lbrace L^{\\prime }(\\varepsilon )|\\rbrace \\big |\\le \\big | \\gamma \\big [ w_\\tau (\\varepsilon ) \\lbrace \\ell ^{\\prime }(\\varepsilon /\\gamma )-\\varepsilon / \\gamma \\rbrace |\\big ] \\big |\\le \\big | \\gamma ^{-1} \\big \\lbrace w_\\tau (\\varepsilon ) \\varepsilon ^2|\\big \\rbrace \\big |\\le \\gamma ^{-1} \\bar{\\tau } \\sigma _{\\varepsilon }^2.\\end{@align*}}Therefore,\\big \\lbrace L^{\\prime }(\\varepsilon ) ^{ \\big \\rbrace =\\big [ \\lbrace L^{\\prime }(\\varepsilon )|\\rbrace ^{ \\big ]\\le \\gamma ^{-1} \\bar{\\tau } \\sigma _{\\varepsilon }^2 ( {|} ^{ {|} )\\le \\gamma ^{-1} \\bar{\\tau } \\sigma _{\\varepsilon }^2 \\Vert \\Vert _{}.Taking the supremum over all \\in \\mathbb {S}^{d-1}, we have \\Vert \\nabla (\\beta ^*) \\Vert _2 \\le \\gamma ^{-1} \\bar{\\tau } \\sigma _{\\varepsilon }^2 \\lambda _{u}^{1/2}, as desired.", "}Next, we obtain an upper bound for the centered score ^* = \\nabla _n (\\beta ^*) - \\nabla (\\beta ^*)=-({1}/{n}) \\sum _{i=1}^n\\big [ L^{\\prime }( \\varepsilon _i ) _i-\\lbrace L^{\\prime }( \\varepsilon _i ) _i \\rbrace \\big ] using the Bernstein^{\\prime }s inequality.", "We start with establishing an upper bound on the kth moment of L^{\\prime }( \\varepsilon _i ) _i .Let_j \\in ^d be the canonical basis vector, i.e., the jth entry equals one and all other entries equal zero.Setting =_j in Condition~\\ref {cond:covariates} yields \\big (|x_{ij}| \\ge \\nu _0 \\sigma ^{1/2}_{jj} t \\big ) \\le e^{-t}.Therefore,\\begin{equation*}\\begin{split}|x_{ij}|^k&=\\int _{0}^{\\infty } k u^{k-1} (|x_{ij}| \\ge u)\\mathrm {d} u \\\\&=\\int _{0}^{\\infty } k \\nu _0^{k} \\sigma _{jj}^{k/2} \\Big ( |x_{ij}| \\ge \\nu _0 \\sigma _{jj}^{1/2} t \\Big ) t^{k-1}\\mathrm {d} t\\\\&\\le \\nu _0^k \\sigma _{jj}^{k/2} k \\int _{0}^{\\infty } t^{k-1} e^{-t} \\mathrm {d}t \\\\&=k!", "\\nu _0^k \\sigma _{jj}^{k/2} .\\end{split}\\end{equation*}In addition, |\\ell ^{\\prime }(u)|\\le \\min (1,|u|) for all u \\in , thus |L^{\\prime }(\\varepsilon _i)| = |\\gamma w_\\tau (\\varepsilon _i) \\ell ^{\\prime }(\\varepsilon _i/\\gamma ) | \\le \\min \\big \\lbrace \\bar{\\tau }\\gamma , \\bar{\\tau } |\\varepsilon _i| \\big \\rbrace .", "Combining the above inequalities, for all k \\ge 2 and 1 \\le j \\le d, we have{\\begin{@align*}{1}{-1}|L^{\\prime }(\\varepsilon _i)x_{ij}|^k&\\le \\Big \\lbrace (\\bar{\\tau }\\gamma )^{k-2} |x_{ij}|^k\\cdot (\\bar{\\tau }^2\\varepsilon _i^2|_i) \\Big \\rbrace \\\\&\\le \\bar{\\tau }^k \\gamma ^{k-2} \\sigma _{\\varepsilon }^2 |x_{ij}|^k \\\\&\\le \\bar{\\tau }^k \\gamma ^{k-2} \\sigma _{\\varepsilon }^2 \\nu _0^k \\sigma _{jj}^{k/2}k!", "\\\\&\\le \\frac{k!", "}{2} ( 2\\bar{\\tau }^2 \\sigma _{\\varepsilon }^2 \\nu _0^2 \\sigma ^2_{})(\\nu _0 \\bar{\\tau } \\sigma _{} \\gamma )^{k-2}.\\end{@align*}}}By Bernstein^{\\prime }s inequality, for every u>0 and j \\in \\lbrace 1, \\dots ,d \\rbrace , we obtain\\bigg |\\frac{1}{n} \\sum _{i=1}^n\\Big [ L^{\\prime }(\\varepsilon _i)x_{ij}-\\lbrace L^{\\prime }(\\varepsilon _i)x_{ij}\\rbrace \\Big ] \\bigg | \\le \\nu _0 
\\sigma _{} \\bar{\\tau } \\Bigg ( 2\\sigma _{\\varepsilon }\\sqrt{\\frac{u}{n}}+\\gamma \\frac{u}{n} \\Bigg )with probability at least 1-2e^{-u}.Applying the union bound yields{\\begin{@align*}{1}{-1}\\Vert \\nabla _n (\\beta ^*) - \\nabla (\\beta ^*)\\Vert _\\infty \\le \\nu _0 \\sigma _{} \\bar{\\tau } \\Bigg ( 2\\sigma _{\\varepsilon }\\sqrt{\\frac{u}{n}}+\\gamma \\frac{u}{n} \\Bigg )\\end{@align*}}with probability at least 1-2de^{-u}.", "We then set u = \\log d + t to reach\\begin{equation}\\Vert \\nabla _n (\\beta ^*) - \\nabla (\\beta ^*)\\Vert _\\infty \\le \\nu _0 \\sigma _{} \\bar{\\tau } \\Bigg (2\\sigma _{\\varepsilon } \\sqrt{\\frac{\\log d + t }{n}}+ \\gamma \\frac{\\log d + t }{n}\\Bigg )\\end{equation}with probability at least 1-2e^{-t}.", "}Finally, we now obtain an upper bound for \\Vert \\nabla _n (\\beta ^*) \\Vert _\\infty .", "By the triangle inequality, we have \\Vert \\nabla _n (\\beta ^*) \\Vert _\\infty \\le \\Vert \\nabla _n (\\beta ^*) - \\nabla (\\beta ^*)\\Vert _\\infty + \\Vert \\nabla (\\beta ^*) \\Vert _\\infty .", "It suffices to obtain an upper bound for \\Vert \\nabla (\\beta ^*) \\Vert _\\infty .", "We have\\Vert \\nabla (\\beta ^*) \\Vert _\\infty = \\max _j \\lbrace L^{\\prime }(\\varepsilon _i)x_{ij} \\rbrace \\le \\max _j \\big [ x_{ij} \\lbrace L^{\\prime }(\\varepsilon _i)|_i \\rbrace \\big ]\\le \\max _j ({|} x_{ij} {|} \\gamma ^{-1} \\bar{\\tau } \\sigma _{\\varepsilon }^2)\\le \\sigma _{} \\gamma ^{-1} \\bar{\\tau } \\sigma _{\\varepsilon }^2.Combining the above and~(\\ref {lemmaB1.1}), we have\\Vert \\nabla _n (\\beta ^*) \\Vert _\\infty \\le \\sigma _{} \\bar{\\tau } \\Bigg (2 \\nu _0 \\sigma _{\\varepsilon } \\sqrt{\\frac{\\log d + t }{n}} + \\nu _0 \\gamma \\frac{\\log d + t }{n} +\\gamma ^{-1} \\sigma ^2_{\\varepsilon } \\Bigg )with probability at least 1-2e^{-t}, as desired.", "}$" ], [ "Proof of Lemma ", "Recall that $^* = \\nabla _n (\\beta ^*) - \\nabla (\\beta ^*)$ .", "The goal is to obtain an upper bound for the oracle centered loss function $_{}^*$ under the $\\ell _2$ norm.", "To this end, we employ a covering argument.", "Specifically, for any $\\epsilon \\in (0,1)$ , there exists an $\\epsilon $ -net $_{\\epsilon }$ of the unit sphere in $^s$ with cardinality $| _{\\epsilon }| \\le (1+2/\\epsilon )^s$ such that $\\Vert ^*_{} \\Vert _2\\le \\frac{1}{1-\\epsilon } \\max _{\\in _{\\epsilon }} \\big \\langle -^*_{},\\big \\rangle =\\frac{1}{1-\\epsilon } \\max _{\\in _{\\epsilon }} \\frac{1}{n} \\sum _{i=1}^n\\Big [ L^{\\prime }(\\varepsilon _i) _{i}^{ - \\left\\lbrace L^{\\prime }(\\varepsilon _i) _{i}^{ \\Big ]}From Condition \\ref {def:general.loss} on the loss function \\right.\\ell (\\cdot ), we have |\\ell ^{\\prime }(u)|\\le \\min (1,|u|) for all u \\in .Thus, we have |L^{\\prime }(\\varepsilon _i)| = |\\gamma w_\\tau (\\varepsilon _i) \\ell ^{\\prime }(\\varepsilon _i/\\gamma ) | \\le \\min \\big ( \\bar{\\tau }\\gamma , \\bar{\\tau } |\\varepsilon _i| \\big ).Since is sub-exponential, by Condition~\\ref {cond:covariates},we have \\big (| ^{ |\\ge \\nu _0\\Vert \\Vert _{}\\cdot t \\big )\\le e^{-t} for all t \\in and \\in ^d.Thus, for all k \\ge 2, and by a change of variable, we obtain{\\begin{@align*}{1}{-1}\\left(\\big | L^{\\prime }(\\varepsilon _i) _{i}^{ \\big |^k&\\le \\Big \\lbrace (\\bar{\\tau }\\gamma )^{k-2} | _{i}^{ |^k \\big (\\bar{\\tau }^2 \\varepsilon _i^2|_{i} \\big ) \\Big \\rbrace \\\\&\\le \\bar{\\tau }^k \\gamma ^{k-2} \\sigma _{\\varepsilon }^2 \\big | _{i}^{ \\big |^k \\\\&\\le \\bar{\\tau }^k \\gamma 
^{k-2} \\sigma _{\\varepsilon }^2 \\int _{0}^{\\infty } kt^{k-1} \\big (| _{i}^{ | \\ge t \\big ) \\mathrm {d} t \\\\&\\le \\frac{k!", "}{2} \\Big (2\\bar{\\tau }^2 \\sigma _{\\varepsilon }^2 \\nu _0^2 \\Vert \\Vert _{\\mathbf {S}}^2 \\Big ) \\cdot \\Big ( \\bar{\\tau } \\gamma \\nu _0 \\Vert \\Vert _{\\mathbf {S}} \\Big )^{k-2}.", "}}}}Applying the Bernstein^{\\prime }s inequality with \\right.a = 2\\bar{\\tau }^2 \\sigma _{\\varepsilon }^2 \\nu _0^2 \\Vert \\Vert _{\\mathbf {S}}^2 and b=\\bar{\\tau } \\gamma \\nu _0 \\Vert \\Vert _{\\mathbf {S}} , along with the inequality \\Vert \\Vert _{\\mathbf {S}} \\le \\lambda ^{1/2}_{\\max }(\\mathbf {S})\\Vert \\Vert _2 = \\lambda ^{1/2}_{\\max }(\\mathbf {S}), we have for all x>0,{\\begin{@align}{1}{-1}\\frac{1}{n} \\sum _{i=1}^n\\Big [L^{\\prime }(\\varepsilon _i) _{i}^{ - \\lbrace L^{\\prime }(\\varepsilon _i) _{i}^{ \\rbrace \\Big ]\\le \\bar{\\tau } \\nu _0 \\lambda ^{1/2}_{\\max }(\\mathbf {S}) \\Bigg (2 \\sigma _{\\varepsilon } \\sqrt{\\frac{x}{n}} + \\gamma \\frac{x}{n} \\Bigg ),}}with probability at least 1-e^{-x}.Combining (\\ref {lem:gram_l2 part 1}) and (\\ref {lem:gram_l2 part 2}), and applying the union bound over all vectors \\in _{\\epsilon }, we have{\\begin{@align*}{1}{-1}\\Vert ^*_{} \\Vert _2 \\le \\frac{\\bar{\\tau } \\nu _0 \\lambda ^{1/2}_{\\max }(\\mathbf {S})}{1-\\epsilon } \\Bigg (2 \\sigma _{\\varepsilon } \\sqrt{\\frac{x}{n}} + \\gamma \\frac{x}{n} \\Bigg )\\end{@align*}}with probability at least 1-(1+2/\\epsilon )^se^{-x}.", "Selecting \\epsilon = 1/3 and x=2s+t, we obtain{\\begin{@align*}{1}{-1}\\Vert ^*_{} \\Vert _2\\le 3\\bar{\\tau } \\nu _0 \\lambda ^{1/2}_{\\max }(\\mathbf {S}) \\Bigg ( \\sigma _{\\varepsilon } \\sqrt{\\frac{2s+t}{n}} + \\gamma \\frac{2s+t}{2n} \\Bigg ),\\end{@align*}}with probability at least 1 - e^{-t}.\\end{@align}\\subsection {Proof of Lemma~\\ref {lem:l1cone}}\\begin{proof}Let \\hat{\\beta } be any solution to~(\\ref {general.lasso2}).", "Since~(\\ref {general.lasso2}) is convex, there exists a subgradient \\in \\partial \\Vert \\hat{\\beta } \\Vert _1 such that \\nabla _n(\\hat{\\beta })+\\circ = \\mathbf {0}.", "Thus, we have\\begin{aligned}0 &= \\langle \\nabla _n(\\hat{\\beta })+\\circ ,\\hat{\\beta } - \\beta \\rangle \\\\&=\\langle \\nabla _n(\\hat{\\beta })-\\nabla _n(\\beta ),\\hat{\\beta } - \\beta \\rangle +\\langle \\nabla _n(\\beta ) - \\nabla (\\beta ),\\hat{\\beta } - \\beta \\rangle +\\langle \\nabla (\\beta ), \\hat{\\beta } - \\beta \\rangle +\\langle \\circ ,\\hat{\\beta } - \\beta \\rangle \\\\&\\ge 0+\\langle (\\beta ),\\hat{\\beta } - \\beta \\rangle +\\langle \\nabla (\\beta ),\\hat{\\beta } - \\beta \\rangle + \\langle \\circ ,\\hat{\\beta } - \\beta \\rangle \\\\&\\ge -\\Vert (\\beta ) \\Vert _{\\infty } \\Vert \\hat{\\beta } - \\beta \\Vert _1 - \\Vert \\nabla (\\beta )\\Vert _2\\Vert \\hat{\\beta } - \\beta \\Vert _2+ \\langle \\circ ,\\hat{\\beta } - \\beta \\rangle \\end{aligned}Since \\beta _{^{{\\rm c}}}=\\bf {0}, \\Vert \\Vert _{\\infty }\\le 1, and \\langle ,\\hat{\\beta } \\rangle = \\Vert \\hat{\\beta } \\Vert _1, we can obtain a lower bound for \\langle \\circ ,\\hat{\\beta } - \\beta \\rangle as{\\begin{@align*}{1}{-1}\\langle \\circ ,\\hat{\\beta } - \\beta \\rangle &= \\langle (\\circ )_{^{{\\rm c}}}, \\hat{\\beta }_{^{{\\rm c}}}\\rangle + \\langle (\\circ )_{},(\\hat{\\beta } - \\beta )_{} \\rangle \\\\&\\ge \\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\Vert \\hat{\\beta }_{^{{\\rm c}}} \\Vert _1 - \\Vert _{} \\Vert _{\\infty } \\Vert (\\hat{\\beta } - 
\\beta )_{} \\Vert _1 \\\\&\\ge \\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\Vert (\\hat{\\beta } - \\beta )_{^{{\\rm c}}} \\Vert _1 - \\Vert \\Vert _{\\infty } \\Vert (\\hat{\\beta } - \\beta )_{} \\Vert _1.\\end{@align*}}Combining the above inequalities yields\\Vert (\\beta ) \\Vert _{\\infty } \\Vert \\hat{\\beta } - \\beta \\Vert _1 + \\Vert \\nabla (\\beta )\\Vert _2\\Vert \\hat{\\beta } - \\beta \\Vert _2 \\ge \\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\Vert (\\hat{\\beta } - \\beta )_{^{{\\rm c}}} \\Vert _1 -\\Vert \\Vert _{\\infty } \\Vert (\\hat{\\beta } - \\beta )_{} \\Vert _1.The result~(\\ref {eq:l1cone}) can then be obtained by rearranging the terms.\\end{proof}}\\end{@align*}\\subsection {Proof of Lemma~\\ref {lem:deterministic.error.bound}}\\begin{proof}The proof is similar to that of the proof of Theorem~\\ref {thm:l1.retire}.", "For some r > 0 to be specified, define an intermediate quantity \\hat{\\beta }_\\eta = \\eta \\hat{\\beta } +(1-\\eta )\\beta ^* where \\eta = \\sup \\lbrace u \\in [0,1] : (1-u)\\beta ^* + u \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}(r) \\rbrace .When \\hat{\\beta } \\in \\beta ^* + \\mathbb {B}(r), we have \\hat{\\beta }_\\eta = \\hat{\\beta }.", "On the other hand, when \\hat{\\beta } \\notin \\beta ^* + \\mathbb {B}(r), \\hat{\\beta }_\\eta lies on \\beta ^* + \\partial \\mathbb {B}(r) with \\eta <1.\\end{proof}We first show that \\hat{\\beta }_\\eta \\in \\beta ^* + \\mathbb {B}(r) \\cap (L).Since _n(\\cdot ) is convex, by an application of Lemma C.1 in \\cite {SZF2020}, we have{\\begin{@align}{1}{-1}0\\le \\langle \\nabla _n(\\hat{\\beta }_\\eta ) -\\nabla _n(\\beta ^*), \\hat{\\beta }_\\eta -\\beta ^* \\rangle \\le \\eta \\langle \\nabla _n(\\hat{\\beta })-\\nabla _n(\\beta ^*), \\hat{\\beta } - \\beta ^* \\rangle .\\end{@align}}Conditioned on the event \\lbrace a \\lambda \\ge 2\\Vert \\nabla _n(\\beta ^*)-\\nabla (\\beta ^*) \\Vert _\\infty \\rbrace and the assumption that \\Vert \\Vert _{\\infty } \\le \\lambda and \\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\ge a\\lambda , applying Lemma~\\ref {lem:l1cone}, we have}\\begin{equation*}\\begin{split}\\Vert (\\hat{\\beta }-\\beta ^*)_{^{{\\rm c}}} \\Vert _1 &\\le \\frac{ \\big \\lbrace \\Vert \\Vert _{\\infty } + \\Vert (\\beta ^*) \\Vert _{\\infty } \\big \\rbrace \\Vert (\\hat{\\beta }-\\beta ^*)_{} \\Vert _1 + \\Vert \\nabla (\\beta ^* )\\Vert _2 \\Vert \\hat{\\beta }-\\beta ^* \\Vert _2}{\\Vert _{^{{\\rm c}}} \\Vert _{\\min }-\\Vert (\\beta ^*) \\Vert _{\\infty }}\\\\&\\le \\left(1+\\frac{2}{a}\\right)\\Vert (\\hat{\\beta } - \\beta ^*)_{} \\Vert _1 + \\frac{2}{a\\lambda } \\Vert \\nabla (\\beta ^* )\\Vert _2 \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2\\end{split}\\end{equation*}By the assumption that \\lambda \\ge s^{-1/2} \\Vert \\nabla (\\beta ^* )\\Vert _2, we have\\begin{equation*}\\begin{split}\\Vert \\hat{\\beta } - \\beta ^* \\Vert _1&\\le \\left(2+2/a\\right)\\Vert (\\hat{\\beta } - \\beta ^*)_{} \\Vert _1 + \\frac{2}{a\\lambda } \\Vert \\nabla (\\beta ^* )\\Vert _2 \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2\\\\&\\le \\left(2+2/a\\right) k^{1/2} \\Vert (\\hat{\\beta }-\\beta ^*)_{}\\Vert _2+\\frac{2}{a\\lambda } \\Vert \\nabla (\\beta ^* )\\Vert _2 \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2\\\\&\\le \\left\\lbrace \\left(2+2/a\\right) k^{1/2}+2s^{1/2}/a \\right\\rbrace \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2.\\end{split}\\end{equation*}The above inequality implies that \\hat{\\beta } \\in \\beta ^* + (L) with L=(2+2/a)k^{1/2} + 2 s^{1/2}/a.Since \\hat{\\beta }_\\eta - \\beta ^*= 
\\eta (\\hat{\\beta } -\\beta ^*) and \\hat{\\beta }_\\eta \\in \\beta ^* + \\mathbb {B}(r) by construction, we have \\hat{\\beta }_\\eta \\in \\beta ^* + \\mathbb {B}(r) \\cap (L).", "Consequently, conditioned on the event _{\\rm {rsc}}(r,L,\\kappa ), we have{\\begin{@align}{1}{-1}\\langle \\nabla _n(\\hat{\\beta }_\\eta ) -\\nabla _n(\\beta ^*), \\hat{\\beta }_\\eta -\\beta ^* \\rangle \\ge \\kappa \\Vert \\hat{\\beta }_\\eta - \\beta ^* \\Vert _2 ^2.\\end{@align}}}Next we upper bound the right-hand side of (\\ref {basic.ineq}).", "Let Since \\hat{\\beta } is a solution to~(\\ref {general.lasso2}), we have{\\begin{@align*}{1}{-1}\\langle \\nabla _n(\\hat{\\beta })-\\nabla _n(\\beta ^*), \\hat{\\beta } - \\beta ^* \\rangle &=\\langle \\nabla _n(\\hat{\\beta })+\\circ ,\\hat{\\beta } - \\beta ^* \\rangle - \\langle \\circ , \\hat{\\beta } - \\beta ^* \\rangle \\\\&~~~~- \\langle \\nabla _n(\\beta ^*)- \\nabla (\\beta ^*),\\hat{\\beta } - \\beta ^* \\rangle - \\langle \\nabla (\\beta ^*),\\hat{\\beta } - \\beta ^* \\rangle \\\\&:=\\Pi _1 - \\Pi _2 - \\Pi _3 - \\Pi _4\\end{@align*}}We now obtain bounds for the terms \\Pi _1, \\dots , \\Pi _4.For \\Pi _1, since \\hat{\\beta } is a solution to~(\\ref {general.lasso2}), we have \\Pi _1 \\le 0.We now obtain a lower bound for \\Pi _2.", "Since [d]=\\cup (\\setminus ) \\cup ^{{\\rm c}}, \\beta ^*_{^{{\\rm c}}}=\\textbf {0}, \\Vert \\Vert _{\\infty }\\le 1, and \\langle ,\\hat{\\beta } \\rangle = \\Vert \\hat{\\beta } \\Vert _1, we have\\begin{equation*}\\begin{split}\\langle \\circ ,\\hat{\\beta } - \\beta ^* \\rangle &=\\langle (\\circ )_{},(\\hat{\\beta } - \\beta ^*)_{} \\rangle + \\langle (\\circ )_{\\setminus },\\hat{\\beta }_{\\setminus } \\rangle +\\langle (\\circ )_{^{{\\rm c}}},\\hat{\\beta }_{^{{\\rm c}}} \\rangle \\\\&\\ge -\\Vert _{} \\Vert _2 \\Vert (\\hat{\\beta } - \\beta ^*)_{} \\Vert _2 + \\langle _{\\setminus },| \\hat{\\beta }_{\\setminus } | \\rangle + \\langle _{^{{\\rm c}}},| \\hat{\\beta }_{^{{\\rm c}}} | \\rangle \\\\&\\ge -\\Vert _{} \\Vert _2 \\Vert (\\hat{\\beta } - \\beta ^*)_{} \\Vert _2 + 0 + \\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\Vert \\hat{\\beta }_{^{{\\rm c}}} \\Vert _1 \\\\&\\ge -\\Vert _{} \\Vert _2 \\Vert (\\hat{\\beta } - \\beta ^*)_{} \\Vert _2 +\\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\Vert (\\hat{\\beta }-\\beta ^*)_{^{{\\rm c}}} \\Vert _1.\\end{split}\\end{equation*}For \\Pi _3, it can be shown that\\begin{equation*}\\begin{split}\\langle \\nabla _n(\\beta ^*)- \\nabla (\\beta ^*),\\hat{\\beta } - \\beta ^* \\rangle & =\\langle ^*_{},\\hat{\\beta } - \\beta ^* \\rangle + \\langle ^*_{^{{\\rm c}}},\\hat{\\beta } - \\beta ^* \\rangle \\\\&\\ge -\\Vert ^*_{} \\Vert _2 \\Vert (\\hat{\\beta } - \\beta ^*)_{} \\Vert _2 - \\Vert ^* \\Vert _{\\infty } \\Vert (\\hat{\\beta } - \\beta ^*)_{^{{\\rm c}}} \\Vert _1.\\end{split}\\end{equation*}Finally, for \\Pi _4, we have\\big \\langle \\nabla (\\beta ^*),\\hat{\\beta } - \\beta ^* \\big \\rangle \\ge -\\Vert \\nabla (\\beta ^* )\\Vert _2\\Vert \\hat{\\beta } - \\beta ^* \\Vert _2.", "}Combining all of the above inequalities with \\Vert _{^{{\\rm c}}} \\Vert _{\\min } \\ge a\\lambda \\ge 2\\Vert ^* \\Vert _{\\infty }, we obtain,{\\begin{@align}{1}{-1}\\langle \\nabla _n(\\hat{\\beta })-\\nabla _n(\\beta ^*), \\hat{\\beta } - \\beta ^* \\rangle &\\le \\Big ( -\\Vert _{^{{\\rm c}}} \\Vert _{\\min } + \\Vert ^* \\Vert _{\\infty } \\Big ) \\Vert (\\hat{\\beta } - \\beta ^* )_{^{{\\rm c}}} \\Vert _1 \\nonumber \\\\&~~~~+\\Vert ^*_{} \\Vert _2\\Vert (\\hat{\\beta } - \\beta 
^* )_{} \\Vert _2 + \\Vert _{} \\Vert _2 \\Vert (\\hat{\\beta } - \\beta ^* )_{} \\Vert _2 + \\Vert \\nabla (\\beta ^* )\\Vert _2 \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2 \\nonumber \\\\&\\le \\Big ( \\Vert _{} \\Vert _2 + \\Vert ^*_{} \\Vert _2 + \\Vert \\nabla (\\beta ^* )\\Vert _2\\Big ) \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2.\\end{@align}}Putting (\\ref {basic.ineq}), (\\ref {partial.ineq.1}), and (\\ref {partial.ineq.2}) together, and using the fact that \\eta \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2 = \\Vert \\hat{\\beta }_\\eta - \\beta ^* \\Vert _2, we obtain{\\begin{@align}{1}{-1} \\kappa \\Vert \\hat{\\beta }_\\eta - \\beta ^* \\Vert _2 \\le \\Vert _{} \\Vert _2 + \\Vert ^*_{} \\Vert _2 +\\Vert \\nabla (\\beta ^* )\\Vert _2.\\end{@align}}Furthermore, under the scaling conditions, we have \\Vert _{} \\Vert _2 \\le s^{1/2}\\lambda and \\Vert ^*_{} \\Vert _2 \\le k^{1/2}a\\lambda /2.Putting these into~(\\ref {partial.ineq.3}), we obtain \\Vert \\hat{\\beta }_\\eta - \\beta ^* \\Vert _2 \\le \\kappa ^{-1} \\big \\lbrace (2s^{1/2}+ {k^{1/2}a}/{2})\\lambda \\big \\rbrace < r. Thus, \\hat{\\beta }_\\eta falls in the interior of \\beta ^* + \\mathbb {B}(r), implying that \\eta = 1 and that \\hat{\\beta }_\\eta =\\hat{\\beta }.", "This completes the proof that \\Vert \\hat{\\beta } - \\beta ^* \\Vert _2 \\le \\kappa ^{-1} \\big \\lbrace (2s^{1/2}+ {k^{1/2}a}/{2})\\lambda \\big \\rbrace .$" ] ]
2212.05562
[ [ "A subperiodic tree whose intermediate branching number is strictly less\n than the intermediate growth rate" ], [ "Abstract We construct an example of a subperiodic tree whose intermediate branching number is strictly less than the intermediate growth rate.", "This answers a question of Amir and Yang (2022) in the negative." ], [ "Introduction and main result", "There are several ways to measure the branching structure of an infinite locally finite tree.", "An important and successful one is the branching number introduced by Lyons [5].", "For instance the branching number is the critical parameter for Bernoulli percolation and homesick random walk on trees.", "However the branching number is not so effective for trees with sub-exponential growth.", "Later Collevecchio, Kious and Sidoravicius [3] introduced a branching-ruin number which works well for trees with polynomial growth.", "Inspired by these previous work, recently Amir and Yang [1] introduced the intermediate branching number and showed that it is crucial for several probability models on trees with intermediate growth rate.", "Our focus here is a special family of infinite locally finite trees—the subperiodic trees.", "For a subperiodic tree, the branching number actually equals the lower exponential growth rate—this result is due to Furstenberg [4]; see Theorem 3.8 in [6] for a proof.", "Amir and Yang [1] then asked whether the corresponding equality holds for the intermediate branching number and the intermediate growth rate on subperiodic trees.", "In the present note we construct an example of a subperiodic tree whose intermediate branching number is strictly less than its intermediate growth rate, answering their question in the negative." ], [ "Various branching numbers and growth rates of trees", "Suppose $T=(V,E)$ is an infinite locally finite tree with a distinguished vertex $o$ , which will be called the root of $T$ .", "We imagine the tree $T$ as growing upward from the root $o$ .", "For $x,y\\in V$ , we write $x\\le y$ if $x$ is on the shortest path from $o$ to $y$ ; and $T^x$ for the subtree of $T$ containing all the vertices of $y\\ge x$ .", "For a vertex $x\\in V$ we denote by $|x|$ the graph distance from $o$ to $x$ .", "For an edge $e\\in E$ , we write $e=(e^-,e^+)$ where $|e^+|=|e^-|+1$ and define $|e|=|e^+|$ .", "Write $T_n:=\\lbrace e\\in E\\colon |e|=n \\rbrace $ .", "Write $B(n)=\\lbrace x\\colon x\\in V, |x|\\le n\\rbrace $ for the ball of radius $n$ centered at $o$ .", "A cutset $\\pi $ separating $o$ and infinity is a set of edges such that every infinite path starting from $o$ must include an edge in $\\pi $ .", "For instance $T_n$ is a cutset separating $o$ and infinity for every $n\\ge 1$ .", "We write $\\Pi (T)$ for the collection of cutsets separating $o$ and infinity.", "The branching number of $T$ is defined as $\\operatorname{\\mathrm {br}}(T):=\\sup \\left\\lbrace \\lambda >0\\colon \\inf _{\\pi \\in \\Pi (T)}\\sum _{e\\in \\pi } \\lambda ^{-|e|}>0 \\right\\rbrace .$ We recommend the readers Chapter 3 of [6] for backgrounds on branching numbers.", "The lower exponential growth rate of $T$ is defined as $\\operatorname{\\mathrm {gr}}(T):=\\liminf _{n\\rightarrow \\infty }|T_n|^{1/n}.$ Note that $\\operatorname{\\mathrm {gr}}(T)$ can be rewritten in a similar form as (REF ): $\\operatorname{\\mathrm {gr}}(T)=\\sup \\left\\lbrace \\lambda >0\\colon \\liminf _{n\\rightarrow \\infty }\\sum _{e\\in T_n}\\lambda ^{-|e|}>0 \\right\\rbrace $ and $1\\le \\operatorname{\\mathrm {br}}(T)\\le 
\\operatorname{\\mathrm {gr}}(T).$ The branching-ruin number introduced by Collevecchio, Kious and Sidoravicius [3] is defined as $\\operatorname{\\mathrm {brr}}(T):=\\sup \\left\\lbrace \\lambda >0\\colon \\inf _{\\pi \\in \\Pi (T)}\\sum _{e\\in \\pi } |e|^{-\\lambda }>0 \\right\\rbrace ,$ where we use the convention of $\\sup \\emptyset =0$ .", "This branching-ruin number is a natural way to measure trees with polynomial growth rate and turned out be the critical parameter of some random processes [2] (in particular the once-reinforced random walk [3]).", "One can define a corresponding polynomial growth rate by $\\operatorname{\\mathrm {grr}}(T):=\\sup \\left\\lbrace \\lambda >0\\colon \\liminf _{n\\rightarrow \\infty }\\sum _{e\\in T_n}|e|^{-\\lambda }>0 \\right\\rbrace .$ Recently Amir and Yang [1] introduced the intermediate branching number $\\operatorname{\\mathrm {Ibr}}(T):=\\sup \\left\\lbrace \\lambda >0\\colon \\inf _{\\pi \\in \\Pi (T)}\\sum _{e\\in \\pi } \\exp \\big (-|e|^\\lambda \\big )>0 \\right\\rbrace $ and the intermediate growth rate $\\operatorname{\\mathrm {Igr}}(T):=\\sup \\left\\lbrace \\lambda >0\\colon \\liminf _{n\\rightarrow \\infty }\\sum _{e\\in T_n}\\exp \\big (-|e|^\\lambda \\big )>0 \\right\\rbrace .$ Amir and Yang [1] proved that the intermediate branching number is the critical parameter for certain random walk, percolation and firefighting problems on trees with intermediate growth, where a tree $T$ was said to be of intermediate (stretched exponential) growth if $0<\\liminf _{n\\rightarrow \\infty }\\frac{\\log \\log |B(n)|}{\\log n}\\le \\limsup _{n\\rightarrow \\infty }\\frac{\\log \\log |B(n)|}{\\log n}<1.$ We remark that these numbers $\\operatorname{\\mathrm {br}}(T),\\operatorname{\\mathrm {gr}}(T),\\operatorname{\\mathrm {brr}}(T),\\operatorname{\\mathrm {grr}}(T),\\operatorname{\\mathrm {Ibr}}(T),\\operatorname{\\mathrm {Igr}}(T)$ do not depend on the choice of the root of $T$ ." ], [ "Subperiodic trees", "We first recall the definition of subperiodic trees from p 82 of [6]; see Example 3.6 and 3.7 there for some examples of subperiodic trees.", "Definition 1.1 Let $N\\in \\lbrace 0,1,2,3,\\ldots \\rbrace $ .", "An infinite tree $T$ is called $N$ -periodic (resp., $N$ -subperiodic) if $\\forall \\,x\\in T$ there exists an adjacency-preserving bijection (resp.", "injection) $f: T^x\\rightarrow T^{f(x)}$ with $|f(x)|\\le N$ .", "A tree is periodic (resp.", "subperiodic) if there is some $N$ for which it is $N$ -periodic (resp., $N$ -subperiodic).", "As mentioned earlier $\\operatorname{\\mathrm {br}}(T)=\\operatorname{\\mathrm {gr}}(T)$ for any subperiodic tree $T$ ([6]).", "Amir and Yang [1] asked whether $\\operatorname{\\mathrm {Ibr}}(T)=\\operatorname{\\mathrm {Igr}}(T)$ for subperiodic trees with intermediate growth rate.", "Our main result gives a negative answer to their question.", "Theorem 1.2 There exists a subperiodic tree $T$ with intermediate growth rate and $\\operatorname{\\mathrm {Ibr}}(T)<\\operatorname{\\mathrm {Igr}}(T).$" ], [ "Proof of the main result", "We will prove thm: an example with Ibn less than Igr via a concrete example (see Example REF )." 
], [ "Coding by trees", "Given an integer $b\\ge 2$ , one can code a closed nonempty set $E\\subset [0,1]$ as a subtree $T=T_{[b]}(E)$ of the $b$ -ary tree $\\mathbb {T}_b$ in the following way.", "The vertices of $T$ correspond to the set of $b$ -adic intervals with nonempty intersections with $E$ , where an interval of the form $[k/b^n,(k+1)/b^n]$ for integers $k$ and $n$ is called a $b$ -adic interval of order $n$.", "We let the root of $T$ be the vertex corresponding to $[0,1]$ .", "Two such intervals are connected by an edge if and only if one interval contains the other and the orders of them differ by 1.", "See Section 1.10 and 15.2 of [6] for background on this coding.", "Note that if a point $x\\in E$ has the form of $k/b^n$ (i.e., it is the endpoint of some $b$ -adic interval), then it might correspond to two rays in $T_{[b]}(E)$ .", "This is just the fact that $x$ can be written in base $b$ with two equivalent expressions $x=\\frac{k}{b^n}=\\frac{k-1}{b^n}+\\sum _{m=n+1}^{\\infty }\\frac{b-1}{b^m}$ .", "To build a bijection between the rays of $T$ and $E$ we will view $T$ as a labelled subtree of $\\mathbb {T}_b$ and $E$ as a subset of $\\lbrace 0,1,\\ldots ,b-1\\rbrace ^{\\mathbb {N}}$ instead, where $\\mathbb {N}=\\lbrace 1,2,3,\\ldots \\rbrace $ .", "We will only consider the $b=3$ case for simplicity and view $\\mathbb {T}_3$ as a labelled tree with the root labelled as $\\emptyset $ , the three children of the root labelled $0,1,2$ respectively from left to right, and so on.", "Write ${D}(\\mathbb {T}_3)$ for the set of infinite labelled subtrees of $\\mathbb {T}_3$ which contain the root and have no leaf and write ${R}(\\mathbb {T}_3)$ for the set of labelled rays starting from the root.", "In particular ${R}(\\mathbb {T}_3)\\subset {D}(\\mathbb {T}_3)$ .", "Definition 2.1 For each element $a=(a_1,a_2,a_3,\\ldots )\\in \\lbrace 0,1,2\\rbrace ^{\\mathbb {N}}$ , we associate it with a ray $\\Phi (a)\\in {R}(\\mathbb {T}_3)$ with the $(n+1)$ -th vertex on the ray labelled as $a_1a_2\\cdots a_n$ .", "(The first vertex is just the root labelled as $\\emptyset $ .)", "Obviously $\\Phi $ is a bijection between $\\lbrace 0,1,2\\rbrace ^{\\mathbb {N}}$ and ${R}(\\mathbb {T}_3)$ .", "For a nonempty subset $E$ of $\\lbrace 0,1,2\\rbrace ^{\\mathbb {N}}$ , we code $E$ by the tree $\\Phi (E)\\in {D}(\\mathbb {T}_3)$ as the union of the rays (each ray is viewed as a labelled subtree of $\\mathbb {T}_3$ ) $\\Phi (E)=\\bigcup _{x\\in E}\\Phi (x)$ , where the union means the vertex set of $\\Phi (E)$ is the union of the vertex set of $\\Phi (x)$ and the same for the edge set.", "In particular, the map $\\Phi $ can be also viewed as a bijection between the collection of all nonempty subsets of $\\lbrace 0,1,2\\rbrace ^{\\mathbb {N}}$ and ${D}(\\mathbb {T}_3)$ .", "We also define the shift map $\\mathcal {S}:\\lbrace 0,1,2\\rbrace ^{\\mathbb {N}} \\rightarrow \\lbrace 0,1,2\\rbrace ^{\\mathbb {N}}$ by $\\mathcal {S}\\big ((a_1,a_2,a_3,\\ldots )\\big )=(a_2,a_3,a_4,\\ldots ).$ The following observation is a rephrasing of Example 3.7 in [6] in the case $b=3$ and it is crucial for our construction later.", "Observation 2.2 If a nonempty subset $E\\subset \\lbrace 0,1,2\\rbrace ^{\\mathbb {N}}$ is invariant under the shift map in the sense that $\\mathcal {S}(E)\\subset E$ , then the tree $\\Phi (E)$ is 0-subperiodic." 
], [ "The construction of our example", "We first review the 1-3 tree $T_{1,3}$ [6]: the root has two children; and $|T_n|=2^n$ ; and for each $n\\ge 1$ , the left half vertices at distance $n$ from the root will each have only 1 child, the right half will each have 3 children.", "We view $T_{1,3}$ as a labelled subtree of $\\mathbb {T}_3$ according to the following labeling rule: the root is labelled as $\\emptyset $ and if a vertex with label $a_1a_2\\cdots a_n$ has $k$ children, then its $k$ children are labelled as $a_1a_2\\cdots a_n0,\\ldots ,a_1a_2\\cdots a_n(k-1)$ respectively from left to right.", "See Figure REF for $T_{1,3}$ and its labeling.", "Figure: The 1-3 tree T 1,3 T_{1,3} and its labeling.Example 2.3 Let $T_{0}$ be the tree obtained by replacing each edge $e$ of the 1-3 tree $T_{1,3}$ by a path of length $|e|$ and view it as a subtree of $\\mathbb {T}_3$ labelled according to the labeling rule we used for $T_{1,3}$ (see Figure REF ).", "As already noted by Amir and Yang [1], the tree $T_0$ satisfies $\\operatorname{\\mathrm {Ibr}}(T_0)=0\\quad \\textnormal { and }\\quad \\operatorname{\\mathrm {Igr}}(T_0)=\\frac{1}{2}.$ However $T_0$ is not subperiodic.", "Let $E_0=\\Phi ^{-1}(T_0)$ be the set of sequences in $\\lbrace 0,1,2\\rbrace ^{\\mathbb {N}}$ coded by $T_0$ .", "Define $E_j=\\mathcal {S}(E_{j-1})$ for $j\\ge 1$ and let $\\widetilde{E}=\\bigcup _{j=0}^{\\infty }E_j$ .", "Our example is just the tree $\\widetilde{T}:=\\Phi \\big (\\widetilde{E}\\big )$ .", "Figure: The tree T 0 T_{0} and its labeling.Recall that ${D}(\\mathbb {T}_3)$ denotes the set of infinite labelled subtrees of $\\mathbb {T}_3$ which contain the root and have no leaf.", "For a vertex $v\\in V(T_0)$ labelled as $a_1a_2\\cdots a_n$ , we will view the subtree $T_0^v$ as a labelled subtree of $\\mathbb {T}_3$ rooted at $\\emptyset $ , i.e., view it as the tree $\\Phi \\big (\\mathcal {S}^{\\circ n}\\lbrace x=(x_1,x_2,x_3,\\ldots )\\colon x\\in E_0,x_i=a_i \\textnormal { for } i=1,\\ldots ,n \\rbrace \\big ) \\in {D}(\\mathbb {T}_3).$ Since $E_n=\\bigcup _{v\\in V(T_0),|v|=n}\\mathcal {S}^{\\circ n}\\lbrace x=(x_1,x_2,x_3,\\ldots )\\colon x\\in E_0,v \\textnormal { is labelled as }x_1x_2\\cdots x_n \\rbrace $ and $\\Phi $ is a bijection, we have the following equivalent description of $\\widetilde{T}$ : Observation 2.4 As labelled subtrees of $\\mathbb {T}_3$ , the tree $\\widetilde{T}$ is just the union of $T_0^v$ over all $v\\in V(T_0)$ ." 
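To make Example 2.3 and Observation 2.4 concrete, the following short Python sketch (our own illustration, truncated at a finite depth, so the counts for $\widetilde{T}$ are finite-depth approximations) builds the labelled 1-3 tree, subdivides it into $T_0$, forms level-$n$ vertices of $\widetilde{T}$ as windows of the ray prefixes, and prints the level sizes together with $\log_3|\widetilde{T}_n|/\sqrt{n}$.

```python
from math import log, sqrt

def t13_vertices(depth):
    """Label strings of the 1-3 tree T_{1,3}, level by level, up to the given depth."""
    levels = [[""]]
    level = ["0", "1"]                      # the root has two children
    for _ in range(depth - 1):
        levels.append(level)
        nxt = []
        for i, v in enumerate(level):       # left half: 1 child, right half: 3 children
            k = 1 if i < len(level) // 2 else 3
            nxt.extend(v + str(j) for j in range(k))
        level = nxt
    levels.append(level)
    return levels

def to_T0(word):
    """Subdivide: the k-th symbol a_k is followed by k-1 zeros (edge of length k)."""
    return "".join(a + "0" * k for k, a in enumerate(word))

depth = 12
rays = [to_T0(w) for w in t13_vertices(depth)[-1]]   # prefixes of the rays of T_0
L = len(rays[0])                                     # = depth*(depth+1)/2 symbols

for n in (4, 9, 16, 25, 36):
    T0_level = {w[:n] for w in rays}
    # Windows w[j:j+n] approximate the level-n vertices of the shift-closure tree T~.
    Ttilde_level = {w[j:j + n] for w in rays for j in range(L - n + 1)}
    print(f"n={n:2d}  |T0_n|={len(T0_level):5d}  |T~_n|={len(Ttilde_level):6d}"
          f"  log_3|T~_n|/sqrt(n)={log(len(Ttilde_level), 3)/sqrt(n):.2f}")
```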
], [ "The intermediate branching number and the intermediate growth rate of our example", "By construction the set $\widetilde{E}$ is invariant under the shift map.", "Thus by Observation 2.2 the tree $\widetilde{T}=\Phi \big (\widetilde{E}\big )$ is subperiodic.", "We will show that $0=\operatorname{\mathrm {Ibr}}\big (\widetilde{T}\big )< \operatorname{\mathrm {Igr}}\big (\widetilde{T}\big )=\frac{1}{2},$ which then proves Theorem 1.2.", "Proposition 2.5 For the tree $\widetilde{T}=\Phi \big (\widetilde{E}\big )$ constructed in Example REF , one has that $\lim _{n\rightarrow \infty }\frac{\log \log |B(n)|}{\log n}=\operatorname{\mathrm {Igr}}\big (\widetilde{T}\big )=\frac{1}{2}\quad \textnormal { and }\quad \operatorname{\mathrm {Ibr}}\big (\widetilde{T}\big )=0.$", "First of all, since $T_0=\Phi (E_0)$ is a subtree of $\widetilde{T}=\Phi \big (\widetilde{E}\big )$ , one has that $\liminf _{n\rightarrow \infty }\frac{\log \log |B(n)|}{\log n}\ge \operatorname{\mathrm {Igr}}\big (\widetilde{T}\big )\ge \operatorname{\mathrm {Igr}}(T_0)\stackrel{(\ref {eq: Ibn and Igr of Tzero})}{=}\frac{1}{2}.$", "On the other hand, note that $ \big |\widetilde{T}_n\big |$ equals the cardinality of the set $\lbrace (x_1,\ldots ,x_n)\colon x=(x_1,x_2,\ldots )\in \widetilde{E} \rbrace $ —the first $n$ bits of $\widetilde{E}$ .", "Also observe that a ray $\gamma $ in $T_{1,3}$ coding the sequence $(a_1,a_2,a_3,\ldots )$ becomes a ray $\gamma ^{\prime }$ in $T_0$ coding the sequence $(a_1,a_2,0,a_3,0,0,a_4,0,0,0,a_5,0,0,0,0,a_6,0,\ldots ).$", "Hence by our construction an element $a\in \widetilde{E}$ always has the form $a=\big (\underbrace{0,\ldots ,0}_{m},a_j,\underbrace{0,\ldots ,0}_{j-1},a_{j+1},\underbrace{0,\ldots ,0}_{j},a_{j+2},0,\cdots \big ),$ where $(a_1,a_2,a_3,\ldots )\in \Phi ^{-1}(T_{1,3})$ and $m\le (j-2)\vee 0$ .", "Note that there exists a constant $c>0$ such that there are at most $c\sqrt{n}+1$ nontrivial entries $a_j,a_{j+1},\ldots ,a_{j+c\sqrt{n}}$ in the first $n$ bits of $a$ .", "If $j\ge n+1$ , then there is at most one nonzero entry in the first $n$ bits, and this would contribute at most $2n+1$ to the set $\lbrace (x_1,\ldots ,x_n)\colon x=(x_1,x_2,\ldots )\in \widetilde{E} \rbrace $ .", "If $j\le n$ , then there are at most $(n-2)\vee 0\le n$ choices for $m$ —the number of zeroes before $a_j$ ; once $m$ and $j$ are fixed, the positions of $a_j,a_{j+1},\ldots ,a_{j+c\sqrt{n}}$ are fixed and each element of $\lbrace a_j,a_{j+1},\ldots ,a_{j+c\sqrt{n}}\rbrace $ has at most 3 choices, whence this would contribute at most $n^2\cdot 3^{c\sqrt{n}+1}$ to the set $\lbrace (x_1,\ldots ,x_n)\colon x=(x_1,x_2,\ldots )\in \widetilde{E} \rbrace $ .", "In sum, we have $\big |\widetilde{T}_n\big |\le 3^{C\sqrt{n}}$ for some constant $C>0$ .", "Therefore one has the other direction $\operatorname{\mathrm {Igr}}\big (\widetilde{T}\big )\le \limsup _{n\rightarrow \infty }\frac{\log \log |B(n)|}{\log n}\le \frac{1}{2}.$", "Next we proceed to show that $\operatorname{\mathrm {Ibr}}\big (\widetilde{T}\big )=0$ .", "Fixing an arbitrary $\lambda >0$ , we will show that for any $\varepsilon >0$ there exists a cutset $\pi $ of $\widetilde{T}$ such that $\sum _{e\in \pi } \exp \big (-|e|^\lambda \big )\le 2\varepsilon .$", "Since $\operatorname{\mathrm {Ibr}}(T_0)\stackrel{(\ref {eq: Ibn and Igr of Tzero})}{=}0$ , 
one has $\\operatorname{\\mathrm {Ibr}}(T_0^v)=0$ for any $v\\in V(T_0)$ .", "In particular one can choose cutsets $\\pi _v$ for $T_0^v$ (viewed as a subtree of $\\mathbb {T}_3$ rooted at $\\emptyset $ ) such that $\\sum _{v \\in V(T_0)} \\sum _{e\\in \\pi _v}\\exp \\big (-|e|^\\lambda \\big )\\le \\varepsilon .$ Since $\\widetilde{T}$ is the union of $T_0^v$ over $v\\in V(T_0)$ (Observation REF ), one might hope the set $\\bigcup _{v\\in V(T_0)}\\pi _v$ is a cutset of $\\widetilde{T}$ .", "But it might not be the case since there might exist a ray $\\gamma $ in $\\widetilde{T}$ such that its edges come from $T_0^{v_i}$ for infinitely many different $v_i$ 's and $\\gamma $ is not blocked from infinity by $\\bigcup _{v\\in V(T_0)}\\pi _v$ .", "To rescue this, we add some additional edges in the following way.", "Choose $N=N(\\lambda ,\\varepsilon )$ large enough so that $9N\\exp (-N^\\lambda )\\le \\varepsilon $ .", "Let $\\beta $ be the collection of all edges in $\\widetilde{T}_{N+1}$ with the form $(v,vj)\\colon v=(v_1v_2\\cdots v_N) \\textnormal { with at most one nonzero entry and }j=0,1,2.$ In particular $\\sum _{e\\in \\beta }\\exp \\big (-|e|^\\lambda \\big )\\le 9N\\exp (-N^\\lambda )\\le \\varepsilon .$ Now we set $\\pi =\\Big (\\bigcup _{v\\in V(T_0)}\\pi _v\\Big )\\cup \\beta .$ and claim that $\\pi $ is a cutset of $\\widetilde{T}$ .", "In fact since $\\widetilde{T}$ is just the union of $T_0^v$ over all $v\\in V(T_0)$ , we can choose $M\\ge 100N^2$ large enough so that all the edges $e$ of $\\widetilde{T}$ with $|e|\\le N$ appear in some $T_0^v$ with $|v|\\le M$ .", "Now if a ray $\\gamma $ of $\\widetilde{T}$ does not use any edge outside $\\bigcup _{v\\in V(T_0),|v|\\le M}T_0^v$ , then there must exist some $v\\in V(T_0)$ with $|v|\\le M$ such that $\\gamma $ is just a ray in $T_0^v$ .", "Hence in this case $\\gamma $ has a nonempty intersection with $\\pi _v$ .", "Otherwise $\\gamma $ must use some edge $e^{\\prime }$ of $\\widetilde{T}$ which is not in the union $\\bigcup _{v\\in V(T_0),|v|\\le M}T_0^v$ .", "By our choice of $M$ , one must have $|e^{\\prime }|>N$ and $e^{\\prime }$ is coming from some $T_0^v$ with $|v|>M\\ge 100N^2$ .", "For such a vertex $v$ , in the first $N$ levels of $T_0^v$ there is at most one vertex with three children because of the long pieces of zeroes (see (REF ) and Figure REF ).", "Therefore the edge $e^{\\prime }$ must be a descendant of some edge from the set $\\beta $ and whence $\\gamma $ has a nonempty intersection with $\\beta $ .", "Hence $\\pi $ is a cutset of $\\widetilde{T}$ .", "By our choice of $\\pi _v$ and $\\beta $ the cutset $\\pi $ satisfies (REF ).", "By (REF ) one obtains that $ \\operatorname{\\mathrm {Ibr}}\\big (\\widetilde{T}\\big )\\le \\lambda $ .", "Since this is true for any $\\lambda >0$ one has that $\\operatorname{\\mathrm {Ibr}}\\big (\\widetilde{T}\\big )=0$ ." 
], [ "Concluding remarks", "In the construction of $T_0$ we replace an edge $e$ by a path of length $f(|e|)$ where the function $f:\\mathbb {N}\\rightarrow \\mathbb {N}$ is given by $f(x)=x$ .", "If we use some other increasing functions, say $f(x)=\\lceil x^s\\rceil $ with $s\\in (0,\\infty )$ , then we can obtain a family of subperiodic trees using the procedure in Example REF so that for each $\\alpha \\in (0,1)$ there are some trees $T$ in the family with the property that $0=\\operatorname{\\mathrm {Ibr}}(T)<\\operatorname{\\mathrm {Igr}}(T)=\\alpha $ .", "We also note that there exist periodic trees $T$ with polynomial growth and satisfy $\\operatorname{\\mathrm {brr}}(T)<\\operatorname{\\mathrm {grr}}(T)$ .", "For instance consider the following lexicographically minimal spanning tree of $\\mathbb {Z}^2$ illustrated in Figure REF ; see Section 3.4 in [6] for definitions of Cayley graphs and their lexicographically minimal spanning trees.", "We don't know whether there exists a Cayley graph $G$ of a finitely generated countable group with intermediate growth and a lexicographically minimal spanning tree $T$ of $G$ such that $\\operatorname{\\mathrm {Ibr}}(T)<\\operatorname{\\mathrm {Igr}}(T)$ .", "Figure: A lexicographically minimal spanning tree of ℤ 2 \\mathbb {Z}^2.However there are no periodic trees with intermediate growth rate.", "Proposition 3.1 Suppose $T$ is an infinite periodic tree.", "Then either $\\operatorname{\\mathrm {br}}(T)>1$ or there exists an integer $d\\ge 1$ such that $\\big |B(n)\\big |=\\Theta (n^d)$ .", "Here $\\big |B(n)\\big |=\\Theta (n^d)$ means that the ratio $\\big |B(n)\\big |/n^d$ is bounded away from zero and infinity.", "We give a sketch here and leave the details to interested readers.", "First of all, the periodic tree $T$ is the directed cover of some finite directed graph $G=(V,E)$ based at some vertex $x_0\\in V$ ; see p 82-83 in [6] for a proof of this fact.", "Let $C_1,\\ldots ,C_m$ be the strongly connected components of $G$ (if for a vertex $v$ there is no directed path from $v$ to itself, then we say $v$ does not belong to any strongly connected component).", "If there exist some $C_i$ and some $v\\in V(C_i)$ such that $v$ has at least two out-going edges in $C_i$ , then it is easy to see $\\operatorname{\\mathrm {br}}(T)>1$ .", "Otherwise, each $C_i$ is either a single vertex with a self-loop, or it is a directed cycle.", "In this case one can prove $\\big |B(n)\\big |=\\Theta (n^d)$ by induction on the size of $V(G)$ (Exercise 3.30 in [6] would be a good warm-up).", "We omit the details of the induction and just point out that in this case $d=\\max \\lbrace C(\\gamma )\\colon \\gamma \\textnormal { is a self-avoiding directed path in } G \\textnormal { starting from }x_0 \\rbrace ,$ where $C(\\gamma )$ is the number of strongly connected components visited by $\\gamma $ ." ], [ "Acknowledgment", "We are grateful to Asaf Nachmias for helpful discussions." ] ]
2212.05553
[ [ "Algorithms approaching the threshold for semi-random planted clique" ], [ "Abstract We design new polynomial-time algorithms for recovering planted cliques in the semi-random graph model introduced by Feige and Kilian~\\cite{FK01}.", "The previous best algorithms for this model succeed if the planted clique has size at least \\(n^{2/3}\\) in a graph with \\(n\\) vertices (Mehta, Mckenzie, Trevisan, 2019 and Charikar, Steinhardt, Valiant 2017).", "Our algorithms work for planted-clique sizes approaching \\(n^{1/2}\\) -- the information-theoretic threshold in the semi-random model~\\cite{steinhardt2017does} and a conjectured computational threshold even in the easier fully-random model.", "This result comes close to resolving open questions by Feige and Steinhardt.", "Our algorithms are based on higher constant degree sum-of-squares relaxation and rely on a new conceptual connection that translates certificates of upper bounds on biclique numbers in \\emph{unbalanced} bipartite Erd\\H{o}s--R\\'enyi random graphs into algorithms for semi-random planted clique.", "The use of a higher-constant degree sum-of-squares is essential in our setting: we prove a lower bound on the basic SDP for certifying bicliques that shows that the basic SDP cannot succeed for planted cliques of size $k =o(n^{2/3})$.", "We also provide some evidence that the information-computation trade-off of our current algorithms may be inherent by proving an average-case lower bound for unbalanced bicliques in the low-degree-polynomials model." ], [ "Introduction", "Clique is one of the most intensely studied combinatorial problems in theoretical computer science, both in terms of its worst-case and its average-case complexity.", "It was among the first graph problems shown to be NP-complete [36].", "In fact, it turns out that for every $\\varepsilon >0$ , it is NP-hard to find cliques of size $n^{\\varepsilon }$ even in graphs that contain cliques of size $n^{1-\\varepsilon }$ [28], [61], [37].", "The most well-studied average-case counterpart is the planted clique problem [34], [43] where the goal is to recover a $k$ -clique added to an Erdős–Rényi random graph $G(n,1/2)$ .", "Such a clique is uniquely identifiable if $k \\gg 2 \\log _2 n$ .", "There are polynomial time algorithms based on rounding the second eigenvector of the adjacency matrix [1] as well as basic semidefinite programming relaxations (e.g., the Lovász theta function) [22], [24] to recover the planted clique with high probability whenever $k\\ge n^{1/2}$ .", "Closing the exponential gap between the information-theoretic threshold value of $k$ vs the threshold of the best known algorithms is a tantalizing open question and has inspired a large body of research, culminating in lower bounds against restricted classes of algorithms like statistical query algorithms [20] and sum-of-squares relaxations [4], that vastly generalize the current algorithms for this problem.", "These concrete lower bounds provide some rigorous evidence that current algorithms for planted clique are optimal." 
], [ "Fragility of algorithms.", "Unfortunately, many algorithms for the planted clique problem are fragile: a small number of adversarial changes to the input can cause the natural algorithms to break down completely.", "This includes methods based on basic statistics such as degrees of vertices or eigenvalues of the adjacency matrix that provide the strongest possible guarantees for the problem.", "Such fragility can be viewed as known algorithms overfitting to the choice of the distributional model.", "In response, a significant research effort has gone into finding algorithms resilient against even the most benign forms of adversarial modifications.", "This includes a long line of work on monotone adversary models introduced in [21] for average-case formulations of clique and coloring (i.e., community detection) [50], [49], [47].", "In the context of planted clique, such models correspond to starting from the standard planted clique input and allowing an adversary to delete any subset of edges not in the planted clique.", "Such deletions are, in principle, only helpful since the planted clique continues to be the true maximum clique in the resulting graph.", "And indeed while basic statistics and spectral methods fail in the presence of monotone adversaries, natural analyses of more resilient algorithms based on semidefinite programming [22] succeeds at the same $k=O(\\sqrt{n})$ threshold while tolerating monotone adversaries." ], [ "Semi-random model.", "A seminal work by Feige and Kilian [23] introduced the following semi-random planted-clique model following the classical work of [12] on semi-random coloring.", "Such semi-random models combine a distributional input with a monotone adversary and an adversarial choice at the same time.", "After the introduction of this model, similar semi-random models have been studied for a wide range of combinatorial optimization problems, including graph partitioning and constraint satisfaction problems.", "We refer the interested reader to the excellent survey [18].", "Definition 1.1 (Feige–Kilian semi-random planted-clique model, $\\mathsf {FK}(n,k,p)$ ) For $n,k\\in {N}$ with $k\\le n$ and $p \\in [0, 1]$ , we let $\\mathsf {FK}(n,k,p)$ be the collection of distributions over graphs with vertex set $V=[n]$ sampleable by a process of the following form: Random Generation Phase: Choose a uniformly random subset $S^*\\subseteq V$ of size $k$ and add a clique on $S^*$ to an Erdős–Rényi random graph $G(n,p)$ (which includes each possible edge independently at random with probability $p$ ), Adversarial Deletion Phase: delete an arbitrary subset of edges going out of $S^*$ adaptively (i.e., possibly depending on the previous random choices), Adversarial Addition Phase: replace the subgraph induced on $V\\setminus S^*$ by an arbitrary one, again adaptively.", "Unlike planted clique with monotone adversaries, semi-random models are far from “helpful”.", "In particular, the planted clique isn't necessarily the maximum clique in the resulting graph.", "And the adversarial choices in the generation process are known to result in significantly altered information-theoretic thresholds at which efficient algorithms can succeed for related problems such as community detection in the stochastic block model [49].", "If $p=1$ , the above model recovers the worst-case version of the clique problem.", "On the other hand, by omitting the last two steps, we recover the original planted-clique model, and by omitting only the last step we recover the planted clique 
with “helpful” monotone adversaries.", "Importantly, the last two steps are adaptive and can be chosen adversarially in response to the first step.", "In absence of adaptivity (i.e., when the last two steps are oblivious to the distributional choices), the model becomes significantly easier algorithmically.", "We write $G\\sim \\mathsf {FK}(n,k,p)$ to denote a graph sampled according to one of the distributions in $\\mathsf {FK}(n,k,p)$ .", "For particular choices of parameters $k=k(n)$ and $p=p(n)$ , our goal is to develop an algorithm that succeeds with high probability for every distribution described by $\\mathsf {FK}(n,k,p)$ ." ], [ "What does it mean for the algorithm to succeed?", "Since the graph induced on $V \\setminus S^*$ could be a worst-case hard instance for the clique problem, it is NP-hard to find a maximum clique in $G$ .", "So the goal of the algorithm is to find a clique of size $k$ in $G$ .", "For the original planted clique model (and the version with helpful monotone adversaries), we could with high probability recover the planted clique $S^*$ in $G$ .", "In contrast, in the semi-random model, this task is impossible information-theoretically because the adversary could simulate multiple disjoint copies of the distributional process in $V\\setminus S^*$ .", "Instead, we can ask the algorithm to compute a small list of (pairwise almost-disjoint) $k$ -cliques in $G$ that contains the planted clique $S^*$ .", "Such a list also allows uniquely identifying $S^*$ if, in addition, we are given a random vertex of $S^*$ as advice.", "In their work introducing this model,  [23] gave an algorithm that uses a Gaussian rounding [27] of the vector solution for the Lovász theta SDP relaxation combined with a combinatorial cleanup step to produce a correct list.", "For any $p$ such that $1-p \\ge (1+\\varepsilon ) \\ln (n)/n$ , their algorithm works if $k \\ge \\alpha n$ for some constant $\\alpha >0$ .", "Such a guarantee is essentially optimal if $1-p= \\tilde{O}(1/n)$ .", "The main focus of subsequent works has been in the case when $1-p$ is larger.", "In particular, the case of $p=1/2$ (and more generally, any constant $<1$ ) is of special interest.", "In this case, one can ideally expect polynomial time algorithms that succeed for $k\\sim \\sqrt{n}$ as in the case of average-case planted clique.", "We will focus on the case of $p=1/2$ in this introduction for the sake of clarity." ], [ "Prior work.", "Algorithms in prior works rely on rounding carefully designed semidefinite programming (SDP) relaxations.", "In the slightly easier setting that drops the adversarial deletion step from the model, Charikar, Steinhardt and Valiant [14] gave an algorithm based on a semidefinite programming relaxation for list-decodable mean estimation that succeeds whenever $k \\ge O(n^{2/3} \\log ^{1/3} (n))$ .", "Their guarantee was improved to $k \\ge O(n^{2/3})$ by Mehta, Mackenzie and Trevisan [48].", "The algorithm of [48] is based on a variant of the Lovász theta SDP (that they call “crude” or C-SDP) with an objective function that incentivizes “spread-out” vector solutions and analyzed via the Grothendieck inequality.", "They suggest (though don't prove) that their SDP should fail if $k = o(n^{2/3})$ .", "Further heightening the intrigue, Steinhardt [56] proved that if $k=o(\\sqrt{n} )$ , then it is information-theoretically impossible to identify a $O(n/k)$ -size list, indicating an information-theoretic (as opposed to computational) phase transition at $k\\sim \\sqrt{n}$ ." 
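For concreteness, here is a minimal sampler sketch for the model of Definition 1.1 at $p=1/2$. The two adversarial phases below are one simple, oblivious choice made by us for illustration (the model allows arbitrary adaptive adversaries): a monotone deletion of roughly half of the edges leaving $S^*$, and an addition phase that plants disjoint decoy $k$-cliques on $V\setminus S^*$, which is exactly the construction mentioned above that rules out unique recovery of $S^*$. The function name and parameters are ours.

```python
# Minimal sampler sketch for FK(n, k, 1/2); the concrete adversary below is our
# illustrative choice, not part of the model definition.
import numpy as np

def sample_fk(n, k, p=0.5, num_decoys=2, seed=0):
    rng = np.random.default_rng(seed)
    A = (rng.random((n, n)) < p).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                                          # G(n, p)
    S = rng.choice(n, size=k, replace=False)             # planted clique S*
    A[np.ix_(S, S)] = 1

    # adversarial deletion phase (monotone): drop a random half of cut(S*)
    out = np.setdiff1d(np.arange(n), S)
    keep = (rng.random((k, len(out))) < 0.5).astype(int)
    cut = A[np.ix_(S, out)] * keep
    A[np.ix_(S, out)] = cut
    A[np.ix_(out, S)] = cut.T

    # adversarial addition phase: replace G[V \ S*] by disjoint decoy k-cliques
    A[np.ix_(out, out)] = 0
    for i in range(num_decoys):
        block = out[i * k:(i + 1) * k]
        A[np.ix_(block, block)] = 1

    np.fill_diagonal(A, 0)
    return A, set(S.tolist())

A, S_star = sample_fk(n=300, k=40)
print(A.shape, len(S_star))
```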
], [ "Feige's open question.", "Given the apparent barrier for the basic semidefinite program at $k\\sim n^{2/3}$ , it is natural to ask: is the semi-random variant harder than the average-case planted clique problem or could there be algorithms that succeed for $k$ approaching the $O(\\sqrt{n})$ threshold?", "In his survey on semi-random models [18], Feige posed (see Section 9.3.4, Page 205) this as an outstanding open question and hoped for algorithms for semi-random planted clique matching the $k\\sim \\sqrt{n}$ threshold for the average-case variant." ], [ "Results", "In this work, we nearly resolve Feige's question and give an algorithm for the semi-random planted clique problem that works for $k$ approaching $\\sqrt{n}$ .", "Specifically, we give a scheme of algorithms, that, for any $\\varepsilon >0$ , run in time $n^{O(1/\\varepsilon )}$ and succeed in solving the semi-random planted clique problem whenever $k \\ge n^{1/2+\\varepsilon }$ : Theorem 1.2 (Main result, see Theorem REF for a detailed version) For every $\\varepsilon >0$ , there is an algorithm that, given a graph $G$ on $n$ vertices as input, computes a list of vertex subsets in time $n^{O(1/\\varepsilon )}$ satisfying the following guarantee: If $G$ is generated according $\\mathsf {FK}(n,k,1/2)$ for $k \\ge n^{1/2 + \\varepsilon }$ , then with probability at least $0.99$ the algorithm outputs a list of at most $(1+o(1))\\tfrac{n}{k}$ cliques of size $k$ such that one of them is the clique planted in $G$ .", "In particular, our algorithm manages to recover planted cliques of size $k$ approaching $\\sim \\sqrt{n}$ — the information-theoretic threshold [56] and the conjectured computational threshold even for the easier fully-random planted clique problem.", "This improves on the best known prior algorithm [48] that gives a polynomial time algorithm that succeeds whenever $k \\ge O(n^{2/3})$ .", "Our approach extends to edge probabilities $p$ beyond the choice $p=1/2$ and yields improved guarantees even when $p = 1-o(1)$ , though in that case we do not approach the information-theoretic threshold.", "Our hardness results (discussed below) show that such an outcome might be inevitable." 
], [ "Higher degree sum-of-squares vs basic SDP.", "Our algorithm relies on rounding a high-constant degree sum-of-squares relaxation (that maximizes a natural “entropy-like” objective function) of the natural integer program for finding $k$ -cliques in graphs.", "As we discuss below, this is likely necessary as for the natural certification problem (discussed below) that arises in algorithms for semirandom planted clique, the basic SDP (Lemma REF ) has a lower bound that precludes recovering $k \\ll n^{2/3}$ -size cliques.", "This is in sharp contrast to the average-case planted clique problem where no constant degree strengthening of the basic SDP allows recovering planted cliques of size $k =o(\\sqrt{n})$  [4] (i.e., asymptotically smaller than the threshold for recovery using the basic SDP).", "In fact, the best known analyses can only obtain an $n^{O(t)}$ algorithm that succeeds whenever $k \\ge O(\\sqrt{n/2^t})$  [24].", "Indeed, our work appears to be the first example in combinatorial optimization where higher constant degree sum-of-squares relaxations substantially improves on the basic SDP.", "We note that in statistical estimation (as opposed to combinatorial optimization), higher degree sum-of-squares relaxations have recently been pivotal in algorithmic applications such as robust method of moments [42], linear regression [39], [11], list-decodable learning [42], [38], [53], [54], [7], [33] and recently, settling the robust clusterability and learnability of high-dimensional Gaussian mixtures [42], [30], [6], [2], [3], [46]." ], [ "Rounding and Connection to certifying bicliques", "Our rounding algorithm is reminiscent of the “rounding by votes” strategy employed in several recent works on list-decodable learning [38], [7], [33].", "Our analysis relies on a new connection to efficient certificates (see Definition REF for a standalone definition of the problem) of upper bounds on bipartite cliques in unbalanced bipartite random graphs.", "The specific form of the certificate does not make a difference as our rounding algorithms (or our constraint system, which is simply the standard quadratic formulation of $k$ -clique) function in a blackbox way given the existence of such a certificate.", "Given a bipartite graph $H=(U,V,E)$ with $|U|=k$ , $|V|=n$ and each edge included in $H$ with probability $p$ independently, it is easy to prove by a standard application of Chernoff and union bounds that there is no $\\ell $ by $k$ bipartite clique in $H$ with $\\ell \\gg O(\\log n/(1-p))$ .", "The bipartite clique certification problem asks to find a polynomial time verifiable certificate that $H$ contains no $\\ell $ by $k$ biclique for $\\ell $ as small as possible.", "This is a variant of the more standard biclique certificate problem (see e.g.,  [19]) where both the graph and the cliques we are interested in are unbalanced.", "Our main result is based on the following primitive that certifies that a random $k$ by $n$ bipartite graphs do not contain $\\ell \\times k$ blicliques for $\\ell = n^{\\varepsilon }$ and any $k \\ge \\tilde{O}(\\sqrt{n})$ in time $n^{O(1/\\varepsilon )}$ .", "Our certificates are based on $O(1/\\varepsilon )$ -degree sum-of-squares proofs and this is necessary – we prove that for any $\\ell = o(n/k)$ , there is no degree 2 sum-of-squares (i.e., basic SDP) certificates of absence of bicliques.", "In particular, unlike the case of balanced bipartite random graphs, the unbalanced setting seems to naturally benefit from large constant degree sum-of-squares 
certificates.", "Theorem 1.3 (Informal, see Theorem REF and Lemma REF ) For every $\\varepsilon >0$ , there is an $n^{O(1/\\varepsilon )}$ time algorithm that takes input a bipartite graph $H=(U,V,E)$ with $|U| = k$ , $|V|=n-k$ and each bipartite edge included in $H$ with probability $p=1/2$ , and with probability at least $0.99$ over the draw of $H$ outputs an $n^{O(1/\\varepsilon )}$ time verifiable certificate that $H$ contains no $\\ell $ by $k$ biclique for $\\ell \\le n^{\\varepsilon }$ whenever $k \\ge \\tilde{O}(\\sqrt{n})$ .", "Further, 1) the certificate can be expressed as an $O(1/\\varepsilon )$ -degree sum-of-squares refutation of the biclique axioms (see (REF )) and 2) there does not exist a degree 2 certificate (equivalently, based on the “basic SDP”) to certify a bound of $\\ell = o(n/k)$ .", "Prior works can be effectively seen as exploiting a basic spectral certificate captured by the basic SDP relaxation (see detailed discussion in Section REF of the techniques) for upper bounding such bicliques.", "In contrast, in this work, we depart from the spectral certificates and rely on a certain simple geometric certificate based on upper bounds on the size of sets of pairwise negatively correlated vectors.", "In particular, if $\\varepsilon =O(\\log \\log n/\\log n)$ is chosen so that $\\ell = O(\\log n)$ and $k = O(\\sqrt{n \\log n})$ , the biclique refutation above translates into an algorithm for semi-random planted clique problem that works for $k \\ge O(\\sqrt{n \\log n})$ matching Steinhardt's information-theoretic lower bound [56] and the threshold for the best known efficient algorithms for planted clique up to a $\\sqrt{\\log n}$ factor." ], [ "Hardness of refuting bicliques.", "We provide some evidence that improving on our biclique certification algorithm likely requires new techniques by proving a lower bound in the low-degree polynomial model.", "The low-degree polynomial model (see [44] for a great exposition) is a restricted model for statistical distinguishing problems.", "More precisely, the model considers problems where we are given a single sample (an instance of an algorithmic problem, a graph in our case) with the promise that it is an independent sample from one of two possible distributions: $D_{\\mathrm {null}}$ — a distribution that does not admit solutions, usually a natural random model, and $D_{\\mathrm {planted}}$ — a closely related distribution that does admit solutions.", "Informally speaking, the low-degree model restricts distinguishers to thresholds of low-degree polynomials of the input.", "While low-degree polynomials might appear restricted, they capture several algorithms including power iteration, approximate message passing, and local algorithms on graphs (cf.", "[17], [26]).", "Moreover, it turns out that they are enough to capture the best known spectral algorithms for several canonical problems such as planted clique, community detection, and sparse/tensor principal component analysis [5], [32], [16], [29].", "This model arose naturally from work on constructing sum-of-squares lower bounds for the planted clique problem [5].", "It was formalized in [29] and conjectured to imply sum-of-squares lower bounds for certain average-case refutation problems.", "Subsequently, starting with [32] (see also [31]), researchers have used the low-degree polynomial method as a technique to demarcate average-case algorithmic thresholds [29], [26], [57], [60].", "In our case, $D_{\\mathrm {null}}$ will be the $B(k,n,p)$ model: bipartite graphs with 
left vertex set of size $k$ , right vertex set of size $n$ , and each bipartite edge present in the graph with probability $p$ .", "Notice that if we had any algorithm to certify the absence of $\\ell $ by $k-\\ell $ bicliques in such a graph for any value of $\\ell $ then we can distinguish between $D_{\\mathrm {null}}$ and any $D_{\\mathrm {planted}}$ supported on bipartite graphs that admit $\\ell $ by $k-\\ell $ bicliques.", "Thus the distinguishing problem is formally easier than the task of certification (aka refutation).", "Informally speaking, the low-degree polynomial considers whether a restricted class of tests, namely, tests that take thresholds of low-degree polynomials in the edge indicator variables of the input graph, distinguish whp between $D_{\\mathrm {null}}$ and $D_{\\mathrm {planted}}$ with a single sample.", "Though restricted, such tests capture distinguishing algorithms for a wide variety of statistical distinguishing problems including community detection [32], densest $k$ -subgraph, sparse/tensor PCA and beyond [29].", "In particular, we observe that for the task of distinguishing between $D_{\\mathrm {null}} = G(n,1/2)$ and $D_{\\mathrm {planted}} = G(n,1/2)$ + $k$ -clique for $k \\gg \\sqrt{n}$ , constant degree polynomial distinguishers suffice!", "Theorem 1.4 (Low-degree polynomial heuristic for biclique certification problem) Fix $\\varepsilon >0$ small enough and $n$ large enough.", "Let $D_{\\mathrm {null}} = B(k,n,1/2)$ be the distribution on $(k,n)$ -bipartite graphs where every edge is included independently with probability $p=1/2$ .", "For $k = n^{1/2+\\varepsilon }$ , there is a distribution $D_{\\mathrm {planted}}$ on $(k,n)$ -bipartite graphs containing an $\\ell $ by $k$ bipartite clique for $\\ell = n^{1/5}$ such that the norm of the degree-$O(1/\\varepsilon )$ truncated likelihood ratio between $D_{\\mathrm {planted}}$ and $D_{\\mathrm {null}}$ is $1+o(1)$ .", "Informally speaking, the above theorem asserts that statistical tests based on computing thresholds of $O(1/\\varepsilon )$ -degree polynomials fail to distinguish between $B(k,n,1/2)$ that does not admit $O(\\log n)$ by $k$ cliques for any $k \\ge O(\\sqrt{n \\log n})$ and $D_{\\mathrm {planted}}$ that contains an $n^{1/5}$ by $n^{1/2+\\varepsilon }$ size biclique.", "It turns out that the most natural planted model (plant a random $\\ell $ by $k$ clique and sample the rest of the graph independently) can be distinguished from $D_{\\mathrm {null}}$ using just degree 1 polynomials and thus does not suffice to prove the above theorem.", "Instead, we use an edge-adjusted model where the probability of sampling edges outside the biclique is reduced in order to make the degree distribution of left vertices match that of $B(k,n,p)$ .", "For $p=1/2$ , the above theorem shows that we need polynomials of degree $O(\\log n)$ in order to be able to distinguish between $D_{\\mathrm {null}}$ and bipartite graphs with $\\ell $ by $k-\\ell $ bicliques.", "Given the contrast to the planted clique problem where the corresponding distinguishing problem can be solved by constant degree polynomials, we obtain some (weak) evidence that beating the guarantees of our current certificates may require new techniques for $p=1/2$ .", "A generalization (see Theorem REF ) suggests that the degree of the polynomial required to distinguish $D_{\\mathrm {null}}$ and $D_{\\mathrm {planted}}$ with bicliques of size smaller than our current polynomial time certificates can refute requires polynomials with degree growing 
with $\\mathrm {poly}(1/(1-p))$ — a bound that is $\\mathrm {poly}(n)$ when $1-p=1/\\mathrm {poly}(n)$ ." ], [ "Techniques", "In this section, we provide a high-level overview of our algorithm for the semi-random planted clique problem.", "For simplicity of exposition, we will focus on the important case of $p=1/2$ .", "Given a graph $G$ chosen from $\\mathsf {FK}(n,k,1/2)$ , our goal is to construct a small list of candidate $k$ -cliques in $G$ such that the true planted clique $S^*$ is contained in the list (we will call such lists correct).", "Our construction will also ensure that a constant fraction of the vertices in $S^*$ do not appear in any other clique in the list.", "As a result, we can also uniquely recover $S^*$ whp when given, in addition, a uniformly random vertex in $S^*$ .", "Our algorithm and its analysis rely on the proofs-to-algorithms method (see [25], [13] for more on the usage of this method)." ], [ "Inefficient algorithm.", "Let's first find an algorithm, even if inefficient, to generate a $\\mathrm {poly}(n)$ size correct list, i.e., one that contains $S^*$ .", "Notice that simply outputting all $k$ -cliques in $G$ can lead to an exponentially large $(i.e., \\sim n^k$ ) size list since we have no control over the graph induced by $G$ on $[n] \\setminus S^*$ (e.g., consider a clique on $[n] \\setminus S^*$ ).", "Instead, we will enumerate all $k$ -cliques in $G$ that satisfy an additional property such that 1) the property is satisfied by the planted $k$ -clique on $S^*$ with high probability, and 2) every graph $G$ has at most $(1+o(1))n/k$ $k$ -cliques satisfying the property.", "This property is quite natural and asks for the bipartite graph with the $k$ -clique on the left and the rest of the vertices on the right to not contain a large unbalanced biclique.", "Recall that an $\\ell $ by $r$ biclique in a bipartite graph $H$ is a set of vertices that include $\\ell $ left vertices and $r$ right vertices such that $H$ contains all possible bipartite edges between the two sides.", "Definition 2.1 (Good $k$ -cliques) Let $G$ be a graph on $n$ vertices.", "We say that a $k$ -clique $S$ in $G$ is $\\ell $ -good if every biclique $(L,R)$ in the bipartite graph with left vertex set $S$ , right vertex set $[n] \\setminus S$ , and edge set $\\mathsf {cut}_G(S)$ satisfies $|L| \\le \\ell $ whenever $|R| \\ge 1$ and $|L|+|R|=k$ .", "The planted $k$ -clique on $S^*$ is $O(\\log n)$ -good with high probability over the draw of $\\mathsf {cut}(S^*)$ .", "Proposition 2.2 (Bipartite clique number of $\\mathsf {cut}(S^*)$ ) Let $k,n \\in {N}$ and $G \\sim \\mathsf {FK}(n,k,1/2)$ .", "Then, for large enough $n$ and a constant $c>0$ , with probability at least $0.99$ over the draw of edges in $\\mathsf {cut}(S^*)$ , for any $L \\subseteq S^*$ , $R \\subseteq [n] \\setminus S^*$ such that $(L,R)$ is a biclique in $\\mathsf {cut}(S^*)$ satisfying $|R| \\ge 1$ and $|L| + |R| = k$ , we have $|L| \\le c \\log _2 n$ .", "The proof is a simple application of the first moment method.", "Note that it is enough to argue the proposition in the absence of the monotone adversary as deleting any subset of edges in $\\mathsf {cut}(S^*)$ maintains the goodness of $S^*$ .", "The probability that $\\mathsf {cut}(S^*)$ contains all the edges between $L \\subseteq S^*$ and $R \\subseteq [n] \\setminus S^*$ is at most $2^{-(k-|L|)|L|}$ .", "Thus, the expected number of bicliques $(L,R)$ such that $|L| \\ge c \\log _2 n$ is at most $\\sum _{\\ell +r = k, \\ell \\ge c \\log _2 n} {n k-\\ell } {k \\ell } 
2^{-(k-\\ell )\\ell } \\rightarrow 0$ as $n \\rightarrow \\infty $ if $c$ is a large enough constant.", "The proposition then follows by an application of Markov's inequality.", "A simple greedy argument upper bounds the number of $\\ell $ -good $k$ -cliques if $k \\ge O(\\sqrt{n \\log n})$ .", "Proposition 2.3 (Number of good $k$ -cliques) Let $G$ be a graph on $n$ vertices.", "Then, for any $\\ell $ , if $k > 2\\sqrt{n \\ell /\\delta }$ for some $\\delta <1$ , then the number of $\\ell $ -good $k$ -cliques in $G$ is at most $(1+\\delta )n/k$ .", "Suppose not and take any $m = (1+\\delta )n/k$ such good $k$ -cliques.", "Observe that any pair of $\\ell $ -good $k$ -cliques $S,S^{\\prime }$ can only intersect in at most $\\ell $ vertices, as otherwise $\\mathsf {cut}(S)$ would contain a biclique with more than $\\ell $ left vertices.", "Thus, the $m$ good $k$ -cliques must cover at least $mk - m^2 \\ell = n + \\delta n - (4n^2/k^2)\\ell $ vertices, a number that exceeds the total number of vertices $n$ if $k > 2 \\sqrt{n \\ell /\\delta }$ .", "Propositions REF and REF immediately yield a $n^{O(k)}$ time algorithm to generate a correct list of $k$ -cliques of size $(1+\\delta )(n/k)$ .", "In fact, this algorithm can be made to run in time $n^{O(\\log n)}$ by enumerating all $c \\log _2 n$ size cliques $U$ in $G$ and adding a $k$ -clique to the list if the common neighborhood of $U$ is of size $\\ge k-|U|$ and forms a clique with $U$ ." ], [ "Efficient algorithms and biclique certificates", "In the inefficient algorithm above a key idea is the claim that $\\mathsf {cut}(S^*)$ does not have an $\\ell $ by $k-\\ell $ bipartite clique for $\\ell > O(\\log n)$ .", "Note that $\\mathsf {cut}(S^*)$ is an unbalanced (left side is much smaller than the right) $k$ by $n-k \\approx n$ bipartite graph and we proved that it does not have an unbalanced ($\\gg O(\\log n)$ vertices from the left) biclique in it.", "Key to our efficient algorithm for semi-random planted clique is an efficiently computable certificate of non-existence of unbalanced bicliques in $H$ as above (i.e., a refutation).", "Let $B(n_1, n_2, p)$ denote the distribution on bipartite graphs with $n_1$ left and $n_2$ right vertices and every bipartite edge included with probability $p$ independently.", "Let us phrase the version relevant to us formally before continuing: Definition 2.4 (Refuting unbalanced bicliques) Find an algorithm that takes input a bipartite graph $H$ on left and right vertex sets $(U,V)$ of size $|U|=k$ , $|V|=n-k$ , and also find a bound $\\ell \\in [k]$ , such that the algorithm has the following two properties: Correctness: If the algorithm outputs $s$ , then there is no $s$ by $k-s$ biclique in $H$ .", "Utility: If $H \\sim B(k,n-k,1/2)$ , then the algorithm outputs $s \\le \\ell $ with probability at least $0.99$ over the draw of $H$ .", "Remark 2.5 (From certificates to algorithms: a heuristic) In Section REF , we overview the translation of a (constant-degree sum-of-squares) certificate that the left side of any biclique in $H\\sim B(k,n-k,1/2)$ has at most $\\ell $ vertices into an algorithm for the semi-random planted $k$ -clique problem that succeeds whenever $k \\ge O(\\sqrt{n \\ell })$ .", "This matches the simple bound in Proposition REF for the “brute-force” algorithm above.", "We postpone the discussion of sum-of-squares proofs for now while noting that all certificates discussed in this section are in fact constant-degree sum-of-squares certificates.", "Observe that our simple analysis of the 
inefficient algorithm gives an $n^{O(\\log n)}$ algorithm that refutes the existence of $\\ell $ by $k$ bicliques in $B(k,n,1/2)$ with probability at least $0.99$ for $\\ell = O(\\log n)$ .", "Our goal is to find a polynomial time algorithm that succeeds for a $\\ell $ as close to $O(\\log n)$ as possible.", "The biclique refutation problem appears to be an interesting analog of refuting clique number of random (non-bipartite) graphs $G \\sim G(n,1/2)$ (that underlies algorithms for the fully-random planted clique problem) or the biclique number of $B(n,n,1/2)$ (i.e., the balanced bipartite graph).", "It can be thought of as certifying the correctness of the candidates in the list that is purportedly a solution to the semi-random planted clique problem.", "Finding solutions along with a certificate of correctness is an important goal by itself.", "For example, this is a key advantage (in addition to tolerating a monotone adversary) of the method of Feige and Krauthgamer [22] over the spectral algorithm [1] for the planted clique problem." ], [ "Basic spectral certificate", "Let us start by recalling the basic spectral certificate that underlies the algorithms for the average-case planted clique problem.", "This certificate implicitly underlies the algorithms of [48], [14].", "Our framework translates it into an algorithm for semi-random planted clique whenever $k \\gg O(n^{2/3})$ .", "Proposition 2.6 (Basic spectral certificate for clique number) In any graph $G$ , the clique number $\\omega (G) \\le 1+ \\left\\Vert A\\right\\Vert _2$ where $A$ is the $\\lbrace \\pm 1\\rbrace $ adjacency matrix of $G$ .", "If $x$ is a $\\lbrace 0,1\\rbrace $ -indicator of a $k$ -clique in $G$ , then, note that $k(k-1) = x^{\\top }Ax \\le \\left\\Vert x\\right\\Vert _2^2 \\left\\Vert A\\right\\Vert _2 = k \\left\\Vert A\\right\\Vert _2$ .", "Thus, $k \\le 1 + \\left\\Vert A\\right\\Vert _2$ for any graph $G$ .", "Thus, simply outputting the (polynomial time computable) largest singular value of $A$ gives a certificate of an upper bound on $\\omega (G)$ .", "Further, if $G \\sim G(n,1/2)$ , then standard spectral norm bounds on random symmetric $\\lbrace \\pm 1\\rbrace $ matrices imply that the algorithm whp outputs a bound of $O(\\sqrt{n})$ .", "Let's now see an analog of this method for bicliques (see Lemma REF for a general version).", "Proposition 2.7 (Basic spectral certificate for bicliques, see also Lemma REF ) Let $H$ be the $\\lbrace \\pm 1\\rbrace $ adjacency matrix of a $k$ by $n$ bipartite graph $H$ .", "For any $k$ -clique in $H$ , the number of left vertices $\\ell $ satisfies $\\ell (k-\\ell ) \\le \\left\\Vert H\\right\\Vert _2^2$ .", "Let $x,y$ be the $\\lbrace 0,1\\rbrace $ indicators of the left and right sides of a biclique in $H$ .", "Then, $\\left\\Vert x\\right\\Vert _2^2 \\left\\Vert y\\right\\Vert _2^2 = x^{\\top } Hy \\le \\left\\Vert x\\right\\Vert _2 \\left\\Vert y\\right\\Vert _2 \\left\\Vert H\\right\\Vert _2$ .", "Or, $(\\sum _i x_i)(\\sum _i y_i) = \\left\\Vert x\\right\\Vert _2^2 \\left\\Vert y\\right\\Vert _2^2 \\le \\left\\Vert H\\right\\Vert _2^2$ .", "For a random bipartite graph from $B(k,n-k,1/2)$ , the $H$ is a $k$ by $n-k$ matrix with independent random $\\lbrace \\pm 1\\rbrace $ entries.", "For such matrices, standard results (see Fact REF ) show that $\\left\\Vert H\\right\\Vert _2 \\le O(\\sqrt{k} + \\sqrt{n}) = O(\\sqrt{n})$ .", "Further, by a union bound, the degrees of all right vertices are at most $k/2 + O(\\sqrt{k \\log n})$ with high probability.", "Thus 
$k-\\ell \\ge k/4$ if $k \\gg \\log n$ .", "In that case, the above proposition shows that the spectral certificate refutes the existence of an $\\ell $ by $k-\\ell $ clique for $\\ell \\le O(n/k)$ .", "By applying the heuristic from Remark REF , we obtain an algorithm for semi-random planted clique if $k \\ge O(\\sqrt{n \\ell })$ for $\\ell = O(n/k)$ or when $k \\ge O(n^{2/3})$ matching the guarantees of [48].", "It turns out that the bound of $\\ell = O(n/k)$ based on the basic SDP/spectral relaxation is essentially tight.", "In Lemma REF , we show that the basic SDP provably fails to certify that $\\ell =o(n/k)$ .", "This shows an inherent limitation of certificates based on the basic SDP/spectral relaxations." ], [ "The Charikar-Steinhardt-Valiant approach.", "In their work on algorithms for list-decodable mean estimation [14], the authors devised a method for the analog of the semi-random planted clique problem without the monotone adversary step.", "When viewed from our vantage point of biclique refutations, their idea can be thought of as taking the $\\pm 1$ -neighborhood indicators of the right hand side of the graph and treating them as $n$ samples of a $k$ -dimensional distribution.", "An $\\ell $ by $k$ clique translates We note that the CSV approach directly applies to the semi-random planted clique model and does not actually yield a biclique certificate.", "The reason is that an $\\ell $ by $k$ -clique does not translate into non-zero mean for arbitrary bipartite graphs.", "We ignore this distinction in order to allow an intuitive comparison of their technique in the context of our work.", "into the distribution having a non-zero mean.", "Thus, one can apply (analogs of) list-decodable mean estimation algorithms [14], [41] to refute the existence of bicliques.", "The guarantees of the algorithm depend on higher directional moments of the input distribution.", "The “base case” corresponds to using just the second moments of the distribution — and this roughly relates to the use of the basic spectral certificate above.", "The higher moment variants can indeed yield improvements but this does not apply to our setting, because when seen from the vantage point of list-decodable mean estimation we have $n \\ll k^2$ samples of a $k$ -dimensional distribution — a bound not enough for the 4th moments to converge!", "Indeed, this is the key bottleneck that leads to a barrier at $k=\\tilde{O}(n^{2/3})$ for the CSV approach (and lead to Steinhardt's open question for semi-random planted clique [56])." ], [ "Improved spectral certificates", "Can we improve on the basic spectral certificate?", "We note that for related problems (e.g., densest $k$ -subgraph, random constraint satisfaction, coloring $G(n,p)$ ) we usually get no asymptotic improvement by considering spectral certificates with larger but polynomial size matrices built from the instance.", "Indeed, one can prove strong lower bounds [40], [35] that rule out such larger polynomial size certificates captured by constant degree sum-of-squares proofs." 
], [ "Neighborhood reduction.", "A natural way to improve the spectral certificate for the clique number of $G \\sim G(n,1/2)$ from Proposition REF is to cycle through all possible subsets of $t$ vertices, move to the common neighborhood of the $t$ vertices and then apply Proposition REF to the induced graph on this common neighborhood.", "This strategy yields an upper bound of $\\omega (G) \\le t+1+\\max _{S \\subseteq [n], |S|=t} \\left\\Vert A_S\\right\\Vert _2$ where $A_S$ is the adjacency matrix of the induced subgraph on the common neighborhood of $S$ .", "One can prove that $\\left\\Vert A_S\\right\\Vert _2 \\le O(\\sqrt{n/2^t})$ whp simultaneously for all $S$ of size $t$ certifying an upper bound of $O(\\sqrt{n/2^t})$ on the clique number of $\\omega (G)$ .", "Since the resulting certificate is polynomial size only when $t=O(1)$ , the improvement makes no asymptotic difference in the threshold $k$ at which polynomial time algorithms work.", "As an aside, this simple certificate happens to be optimal for the degree $t$ Lovász-Schrijver SDP hierarchy [24] applied to $G \\sim G(n,1/2)$ .", "Repeating an analogous argument in our case also yields no asymptotic improvement unless $t=\\omega (1)$ (though it does allow us to get arbitrary constant factor improvements)." ], [ "Tensoring.", "There is a natural class of “tensoring” schemes for producing improved spectral certificates that we next consider.", "Consider a bipartite graph with $\\lbrace \\pm 1\\rbrace $ adjacency matrix $H^{\\prime }$ with the same right side but the left side containing all pairs of left vertices from $H$ .", "The $((i,j),k)$ -th entry of $H^{\\prime }$ equals $H(i,k)H(j,k)$ – the “parity” or product of the $\\lbrace \\pm 1\\rbrace $ indicators of edges $(i,k)$ and $(j,k)$ in $H$ .", "$H^{\\prime }$ is a $k^2$ by $n$ matrix, and further, an $\\ell $ by $k-\\ell $ biclique in $H$ translates into an $\\ell ^2$ by $k-\\ell $ biclique in $H^{\\prime }$ .", "The basic spectral certificate from Proposition REF applied to $H^{\\prime }$ yields that $\\ell ^2 \\le O(\\left\\Vert H^{\\prime }\\right\\Vert _2^2/k)$ .", "If $H^{\\prime }$ were a matrix of independent random $\\lbrace \\pm 1\\rbrace $ entries, $\\left\\Vert H^{\\prime }\\right\\Vert _2 = O(\\sqrt{k^2}) = O(k)$ yielding $\\ell \\le O(\\sqrt{k})$ .", "Despite $H^{\\prime }$ having correlations in its entries, this optimisticEvery rectangular matrix of larger dimension $k^2$ and Frobenius norm $k\\sqrt{n}$ has a spectral norm $\\ge k$ .", "bound is essentially correct (we will omit the proof here).", "Plugging this back into our heuristic, we get an algorithm for semi-random planted $k$ -clique if $k \\ge O(\\sqrt{n \\sqrt{k}})$ or $k \\gg n^{2/3}$ , the same as before!", "That is, even though the tensoring trick gives a different asymptotic estimate, it does not lead to any improvement in the threshold for $k$ in our semi-random planted clique application.", "What happens if we “tensor the left side” $t$ times for $t >2$ ?", "An optimistic estimate such as the above yields a bound of $\\ell ^t \\le O(k^{t-1})$ or $\\ell \\le k^{1-1/t}$ – a bound that appears to degrade as we increase $t$ !", "We will omit the details here but a similarly worse bound results if we tensor the right side of $H$ instead." 
], [ "Two-sided tensoring beats the $n^{2/3}$ barrier but fails a long way off {{formula:9fd2913b-d376-40e6-a7f6-f8b536457edd}} .", "It turns out simultaneously tensoring both sides unequally helps beat the $\\ell \\le \\max \\lbrace \\sqrt{k}, n/k\\rbrace $ bound obtained via one-sided tensoring above.", "Intuitively speaking, the “optimal” two-sided tensoring attempts to make the resulting adjacency matrix as “square” in dimensions as possible.", "Formal proofs require analyzing matrices of correlated random entries using the graph matrix method devised in the context of proving sum-of-squares lower bounds in [4] and follow-ups.", "We note without further details that two-sided tensoring appears to break down at $k \\sim n^{0.61}$ ." ], [ "Our certificate: bicliques imply sets of negatively correlated vectors", "Our key idea to circumvent the bottlenecks in the natural spectral certificates above is to abandon the idea of spectral certificates altogether.", "Instead, we will show that a simpler family of “geometric” certificates for biclique numbers allows us to show $\\ell \\le n^{\\varepsilon }$ for any fixed $\\varepsilon >0$ .", "Specifically, we will show that if there is a $\\ell $ by $k-\\ell $ clique in $H$ , then one can extract $2^{\\ell }-1$ pairwise negatively correlated vectors in $n$ dimensions.", "In order to explain this connection, let us note a property of a random bipartite graph $H \\sim B(k,n,1/2)$ .", "For any subset $S \\subseteq U$ of $|S| \\le t$ vertices from the left vertex set of $H$ , let $N_S(k) = \\prod _{i \\in S} H(i,k)$ where $H$ is the $\\lbrace \\pm 1\\rbrace $ -adjacency matrix of $H$ .", "Then, $N_S$ is an $n$ dimensional vector of “parities” of $\\lbrace \\pm 1\\rbrace $ indicators of all edges from $S$ to $\\lbrace k\\rbrace $ .", "Further, in a random $H$ , every $N_S$ is nearly balanced.", "That is, by a simple Chernoff and union bound argument (see Lemma REF ), $\\vert \\sum _{i \\le n } N_S(i) \\vert \\le O(\\sqrt{nt \\log n})$ for every $S$ of size $t$ .", "Let's call a $k$ by $n$ bipartite graph $t$ -wise balanced if the above property holds.", "That is, every $N_S$ is approximately balanced for $|S| \\le t$ .", "We will now show that given an $\\ell $ by $k$ biclique in a $t$ -wise balanced graph, we can produce a set of ${\\ell t/2}$ pairwise negatively correlated vectors in $n$ dimensions.", "Proposition 2.8 (Bicliques vs negatively correlated vectors) Suppose $H$ is a $k$ by $n$ bipartite graph that is $t$ -wise balanced for some $t \\in {N}$ .", "Suppose that $H$ contains an $\\ell $ by $k-\\ell $ biclique $(L,R)$ for $k-\\ell \\ge k/4$ .", "Then, if $k \\ge O(\\sqrt{nt \\log n})$ , there exist ${\\ell t/2}$ different $(n-k+\\ell )$ -dimensional vectors $N_S^{-}$ (one for each $S\\subseteq L$ of size $t/2$ ) such that $\\langle N_S^{-}, N_T^{-}\\rangle <0$ whenever $S \\ne T$ .", "First observe that for any $S,T \\subseteq L$ of size $t/2$ , $\\langle N_S, N_T\\rangle = \\sum _{k \\le n} N_{S \\Delta T}(k) = O(\\sqrt{nt \\log n})$ where we invoked the $t$ -wise balancedness of $H$ .", "Now, without loss of generality, assume that $R$ is the set of the first $k-\\ell $ vertices on the right.", "Consider the vectors $N_S^{-}$ in $n-k+\\ell $ dimensions obtained by stripping the first $k-\\ell $ coordinates off of $N_S$ for every $S \\subseteq L$ of size $t/2$ .", "Since $S, T\\subseteq L$ , the first $k-\\ell $ coordinates contribute a $+(k-\\ell )$ to $\\langle N_S, N_T\\rangle $ .", "Thus, $\\langle N_S^{-}, N_T^{-}\\rangle \\le 
O(\\sqrt{nt \\log n}) - k/4 <0$ if $k \\ge O(\\sqrt{nt \\log n})$ .", "It is a standard fact that there can only be $d+1$ pairwise negatively correlated vectors in $d$ dimensions.", "A weaker version can be proved via a simple spectral argument: Proposition 2.9 (Spectral bound on negatively correlated vectors) Let $v_1, v_2,\\ldots ,v_N$ be $n$ -dimensional vectors of length $\\sqrt{n}$ each satisfying $\\langle v_i, v_j\\rangle \\le -r$ .", "Then $N\\le 1+ n/r$ .", "We know that $\\left\\Vert \\sum _{i\\le N} v_i\\right\\Vert _2^2 \\ge 0$ .", "On the other hand, $\\left\\Vert \\sum _{i = 1}^N v_i \\right\\Vert _2^2 = \\sum _{i = 1}^N \\left\\Vert v_i\\right\\Vert _2^2 + \\sum _{i \\ne j} \\langle v_i,v_j\\rangle \\le N n - N(N-1) r$ .", "Putting the lower and upper bound together yields that $N-1 \\le n/r$ or $N \\le 1+ n/r$ .", "Now, Proposition REF yields ${\\ell t/2}$ vectors with pairwise correlations at most $-ck$ for some constant $c>0$ if $k \\gg \\sqrt{nt \\log n}$ .", "On the other hand, Proposition REF yields that the number of such vectors can only be $1+O(n/k)$ .", "Putting these two bounds together yields that $\\ell \\lesssim (n/k)^{2/t}$ .", "Choosing $t = 1/\\varepsilon $ gives us an $n^{O(1/\\varepsilon )}$ size certificate that $\\ell $ is at most $n^{\\varepsilon }$ .", "It turns out that the above argument can be converted into a sum-of-squares refutation of the bicliques in $H$ .", "The main observation is that the step where we strip off the first $\\sim k$ coordinates off of $N_S$ can be done “within SoS” while the remaining argument is an explicit sum-of-squares proof by virtue of the above simple proposition.", "It turns out that we need some additional careful arguments to place the certificate in a usable form that we will omit for the purpose of this overview (see Remark REF ).", "The final certificate that comes out of such an argument can be “packaged” into a rather neat form: Proposition 2.10 (See Theorem REF for a precise version) Fix any integer constant $r>0$ .", "Let $H \\sim B(k,n,p)$ and let $x,y$ be indicators of vertices on the left and right vertex sets of $H$ respectively.", "If $(x,y)$ describe bipartite clique in $H$ of total size $k$ and $y \\ne 0$ , then $(\\sum _{i = 1}^k x_i)^r (\\sum _{i = 1}^n y_i) \\le O(n/k)$ ." ], [ "From biclique certificates to algorithms for semi-random planted clique", "Our algorithms use the biclique certificates discussed previously to analyze a natural rounding algorithm for the SDP relaxations of the standard $k$ -clique axioms.", "Specifically, consider the standard integer programming formulation of the $k$ -clique problem written as the quadratic polynomial system $\\mathcal {A}=\\mathcal {A}(G)$ below.", "Note that the solutions to $\\mathcal {A}(G)$ are $k$ -cliques in the given graph $G$ on vertex set $[n]$ .", "$ \\mathcal {A}(G)\\colon \\left\\lbrace \\begin{aligned}&\\forall i \\in [n]& w_i^2& =w_i \\\\&\\forall i\\in [n].&\\textstyle \\sum _{i=1}^n w_i&= k\\\\&\\forall i,j \\text{ s.t. 
}", "\\lbrace i,j\\rbrace \\notin G& w_i w_j& = 0\\end{aligned}\\right\\rbrace $ Now, finding a solution to this quadratic program is clearly NP-hard.", "So we will instead work with “sum-of-squares” SDP relaxations of this quadratic program.", "The solution to this SDP relaxation can be interpreted as a generalization of a probability distribution over solutions to the quadratic program above.", "Specifically, a degree $d$ pseudo-distribution $D$ is a relaxation of a probability distribution on $\\lbrace 0,1\\rbrace ^n$ in that the associated “mass” function can take negative values while still inheriting a non-trivial subset of properties of probability distributions.", "We will postpone the formal definition of pseudo-distribution to Section  and for now note the following relevant bits: 1) Unlike an actual probability distribution, we only get access to low-degree moments (i.e., expectations of monomials) of $D$ and thus can only compute expectations of degree $\\le d$ polynomials, 2) pseudo-distributions can assign “negative probabilities” and thus may not assign non-negative expectation to pointwise non-negative degree $d$ polynomials $f$ , but 3) degree $d$ pseudo-distributions do assign a non-negative expectation to any $f$ that is a sum of squares of degree $\\le d/2$ polynomials, and 4) a pseudo-distribution of degree $d$ satisfying $\\mathcal {A}$ satisfies all “low-degree inferrable” properties of $k$ -cliques but need not be supported on $w$ that indicate $k$ -cliques at all.", "Here, low-degree inferrable property means that for any degree $\\le d-2$ polynomial $f$ and any $\\lbrace i,j\\rbrace \\notin G$ , $\\tilde{{E}}_D [f w_i w_j] =0$ .", "A degree $d$ pseudo-distribution minimizing any convex objective in the pseudomoments $\\tilde{{E}}[\\prod _{i \\in S} w_i]$ for $|S| \\le d$ and approximately satisfying $\\mathcal {A}$ at degree $d$ can be computed in time $n^{O(d)}$ (see Section ).", "Though a pseudo-distribution is not a probability distribution over solutions to $\\mathcal {A}$ , it is still helpful for the reader to imagine it to be as such.", "How does our biclique certificates help us?", "It turns out that while degree $d$ pseudo-distributions are (far from) actual probability distributions for $d \\ll n$ , they behave so for the purpose of polynomial inequalities that can be derived from $\\mathcal {A}$ using a degree $d$ sum-of-squares proofs.", "The conclusion of our biclique certificate from Proposition REF can be written (see Theorem REF ) as a degree $O(r)$ consequence of the quadratic system $\\mathcal {B}$ (see (REF )) that identifies bicliques in bipartite graphs of total size $k$ .", "Consider the bipartite graph $\\mathsf {cut}(S^*)$ .", "Let $w_L$ be the restriction of $w$ to coordinates in $S^*$ and $w_R$ be the restriction of $w$ to coordinates outside of $S^*$ .", "Then, $\\mathcal {A}$ implies that $(w_L, w_R)$ satisfy $\\mathcal {B}$ for the bipartite graph $\\mathsf {cut}(S^*)$ .", "Since the pseudo-distribution $D$ satisfies $\\mathcal {A}$ , we can conclude that $\\tilde{{E}}_{D}\\left[ \\left(\\sum _{i \\in S^*} w_i\\right)^r \\left(\\sum _{i \\notin S^*} w_i\\right)\\right] \\le O(n/k) \\,,$ whenever the pseudo-distribution $D$ has degree at least $O(r)$ .", "Note that $S^*$ is not known to us but the above inequality forces the pseudo-distribution computed by the SDP to capture some non-trivial information about it." 
], [ "The need for coverage constraints.", "Roughly speaking, (REF ) can be interpreted as saying that the pseudo-distribution is “supported” only on those $w$ that cannot simultaneously appreciably intersect $S^*$ and $[n] \\setminus S^*$ .", "Such a fact by itself seems unhelpful.", "After all, the pseudo-distribution could completely ignore $S^*$ and focus on the “worst-case\" graph on $[n] \\setminus S^*$ .", "Given the worst-case hardness of the clique, the pseudo-distribution may not have any information about $k$ -cliques in $[n] \\setminus S^*$ and consequently the input graph.", "In order to make (REF ) useful, we must somehow “force” the pseudo-distribution to have a non-trivial mass on vertices in $S^*$ .", "Of course, we do not know $S^*$ , so how can we do it?", "It turns out that this can be accomplished by certain “max coverage\" constraints.", "Specifically, instead of finding any pseudo-distribution consistent with $\\mathcal {A}$ , we find one that minimizes $\\left\\Vert \\tilde{{E}}_{D} [w]\\right\\Vert _2^2$ .", "This is a convex function of the pseudo-distribution and thus can be minimized efficiently using the ellipsoid method.", "This objective forces the pseudo-distribution to be “spread-out”.", "Indeed, in a different language, such an objective is used also in  [48], though arguably our treatment of such an objective as a max-coverage constraint on SoS relaxations of $\\mathcal {A}$ appears to demystify the use of crude-SDP in  [48].", "We note that such a max coverage constraint is at the heart of the rounding algorithms for several problems in list-decodable learning starting with [38].", "A key consequence of the max-coverage constraint is that, by an elementary convexity argument, it implies the following proposition: Proposition 2.11 (Max-coverage pseudo-distributions) For any pseudo-distribution $D$ on $w$ satisfying $\\mathcal {A}$ of degree at least 2 and minimizing $\\left\\Vert \\tilde{{E}}_{D}[w]\\right\\Vert _2^2$ , $\\sum _{i \\in S^*} \\tilde{{E}}_{D}[ w_i ] \\ge \\frac{k^2}{n}$ .", "A rounding algorithm now falls naturally out of the above two discussions.", "We look at a $n^r$ by $n$ matrix indexed by subsets of size $r=O(1/\\varepsilon )$ on the rows and singleton vertices on the columns.", "Proposition REF allows us to conclude that the rows of this (huge) matrix corresponding to the subsets of the unknown planted clique must have a large total sum.", "As a consequence of the biclique certificates, on the other hand, we learn that when restricted to such rows, the columns corresponding to $[n] \\setminus S^*$ must have a low total contribution.", "Together these two statements allow us to use a simple greedy algorithm that selects a uniformly random row of the above matrix and takes the largest $\\sim k$ entries to recover a list containing a set of $k$ vertices that has a large constant fraction intersection with $S^*$ .", "Such a set can then be refined using a simple combinatorial “cleanup” step." ], [ "Preliminaries", "All logarithms have base 2 unless otherwise stated.", "We will use letters $G,H$ to denote graphs and also their $\\lbrace \\pm 1\\rbrace $ -entry adjacency matrices.", "We adopt the convention that $G(i,j)=1$ iff edge $\\lbrace i,j\\rbrace $ is present in the graph $G$ .", "For any $x \\in {R}^n$ and $S \\subseteq [n]$ , will use $x_S$ to denote the monomial $\\prod _{i\\in S} x_i$ .", "The bit complexity of a rational number $p/q$ is $\\lceil \\log _2 p \\rceil + \\lceil \\log _2 q \\rceil $ ." 
], [ "Sum-of-squares preliminaries", "We refer the reader to the monograph [25] and the lecture notes [13] for a detailed exposition of the sum-of-squares method and its usage in average-case algorithm design.", "A degree-$\\ell $ pseudo-distribution is a finitely-supported function $D:{R}^n \\rightarrow {R}$ such that $\\sum _{x} D(x) = 1$ and $\\sum _{x} D(x) f(x)^2 \\ge 0$ for every polynomial $f$ of degree at most $\\ell /2$ .", "We define the pseudo-expectation of a function $f$ on ${R}^d$ with respect to a pseudo-distribution $D$ , denoted $\\tilde{{E}}_{D(x)} f(x)$ , as $\\tilde{{E}}_{D(x)} f(x) = \\sum _{x} D(x) f(x)$ .", "The degree-$\\ell $ pseudo-moment tensor of a pseudo-distribution $D$ is the tensor ${E}_{D(x)} (1,x_1, x_2,\\ldots , x_n)^{\\otimes \\ell }$ with entries corresponding to pseudo-expectations of monomials of degree at most $\\ell $ in $x$ .", "The set of all degree-$\\ell $ moment tensors of degree $d$ pseudo-distributions is also closed and convex.", "Definition 3.1 (Constrained pseudo-distributions) Let $D$ be a degree-$\\ell $ pseudo-distribution over ${R}^n$ .", "Let $\\mathcal {A}= \\lbrace f_1\\ge 0, f_2\\ge 0, \\ldots , f_m\\ge 0\\rbrace $ be a system of $m$ polynomial inequality constraints.", "We say that $D$ satisfies the system of constraints $\\mathcal {A}$ at degree $r$ (satisfies it $\\eta $ -approximately, respectively), if for every $S\\subseteq [m]$ and every sum-of-squares polynomial $h$ with $\\deg h + \\sum _{i\\in S} \\max \\lbrace \\deg f_i,r\\rbrace $ , $\\tilde{{E}}_{D} h \\cdot \\prod _{i\\in S}f_i \\ge 0$ ($\\tilde{{E}}_{D} h \\cdot \\prod _{i\\in S}f_i \\ge \\Vert h \\Vert _2 \\prod _{i \\in S} \\Vert f_i \\Vert _2 $ where $\\Vert h \\Vert _2$ for any polynomial $h$ is the Euclidean norm of its coefficient vector.", "We say that $D$ satisfies (similarly for approximately satisfying) $\\mathcal {A}$ (without mentioning degree) if $D$ satisfies $\\mathcal {A}$ at degree $r$ ." ], [ "Basic facts about pseudo-distributions.", "Fact 3.2 (Hölder's inequality for pseudo-distributions) Let $f,g$ be polynomials of degree at most $d$ in indeterminate $x \\in {R}^d$ .", "Fix $t \\in {N}$ .", "Then, for any degree $dt$ pseudo-distribution $\\tilde{\\zeta }$ , $\\tilde{{E}}_{\\tilde{\\zeta }}[f^{t-1}g] \\le (\\tilde{{E}}_{\\tilde{\\zeta }}[f^t])^{\\frac{t-1}{t}} (\\tilde{{E}}_{\\tilde{\\zeta }}[g^t])^{1/t}$ .", "Observe that the special case of $t =2$ corresponds to the Cauchy-Schwarz inequality.", "The following idea of reweighted pseudo-distributions follows immediately from definitions and was first formalized and used in [10]).", "Fact 3.3 (Reweightings [10]) Let $D$ be a pseudo-distribution of degree $k$ satisfying a set of polynomial constraints $\\mathcal {A}$ in variable $x$ .", "Let $p$ be a sum-of-squares polynomial of degree $t$ such that $\\tilde{{E}}[p(x)] \\ne 0$ .", "Let $D^{\\prime }$ be the pseudo-distribution defined so that for any polynomial $f$ , $\\tilde{{E}}_{D^{\\prime }}[f(x)] = \\tilde{{E}}_{D}[ f(x)p(x)]/\\tilde{{E}}_{D}[p(x)]$ .", "Then, $D^{\\prime }$ is a pseudo-distribution of degree $k-t$ satisfying $\\mathcal {A}$ ." 
], [ "Sum-of-squares proofs.", "A sum-of-squares proof that the constraints $\\lbrace f_1 \\ge 0, \\ldots , f_m \\ge 0\\rbrace $ imply the constraint $\\lbrace g \\ge 0\\rbrace $ consists of polynomials $(p_S)_{S \\subseteq [m]}$ such that $g = \\sum _{S \\subseteq [m]} p_S \\cdot \\Pi _{i \\in S} f_i$ .", "We say that this proof has degree $\\ell $ if for every set $S \\subseteq [m]$ , the polynomial $p_S \\Pi _{i \\in S} f_i$ has degree at most $\\ell $ and write: $\\lbrace f_i \\ge 0 \\mid i \\le r\\rbrace {\\ell }{}\\lbrace g \\ge 0\\rbrace \\,.$ Fact 3.4 (Soundness) If $D$ satisfies $\\mathcal {A}$ for a degree-$\\ell $ pseudo-distribution $D$ and there exists a sum-of-squares proof $\\mathcal {A}{r^{\\prime }}{} \\mathcal {B}$ , then $D$ satisfies $\\mathcal {B}$ at degree $rr^{\\prime } +r^{\\prime }$ .", "Definition 3.5 (Total bit complexity of sum-of-squares proofs) Let $p_1, p_2, \\ldots , p_m$ be polynomials in indeterminate $x$ with rational coefficients.", "For a polynomial $p$ with rational coefficients, we say that $\\lbrace p_i \\ge 0\\rbrace $ derives $\\lbrace p\\ge 0\\rbrace $ in degree $k$ and total bit complexity $B$ if $p = \\sum _i q_i^2 + \\sum _i r_i p_i$ where each $q_i^2,r_i$ are polynomials with rational coefficients of degree at most $k$ and $k-deg(p_i)$ for every $i$ , and the total number number of bits required to describe all the coefficients of all the polynomials $q_i, r_i, p_i$ is at most $B$ .", "There's an efficient separation oracle for moment tensors of pseudo-distributions that allows approximate optimization of linear functions of pseudo-moment tensors approximately satisfying constraints.", "The degree-$\\ell $ sum-of-squares algorithm optimizes over the space of all degree-$\\ell $ pseudo-distributions that approximately satisfy a given set of polynomial constraints: Fact 3.6 (Efficient optimization over pseudo-distributions [55], [52], [51], [45]) Let $\\eta >0$ .", "There exist an algorithm that for $n, m\\in {N}$ runs in time $(n+ m)^{O(\\ell )} \\mathrm {poly}\\log 1/\\eta $ , takes input an explicitly bounded and satisfiable system of $m$ polynomial constraints $\\mathcal {A}$ in $n$ variables with rational coefficients and outputs a level-$\\ell $ pseudo-distribution that satisfies $\\mathcal {A}$ $\\eta $ -approximately." 
], [ "Basic sum-of-squares proofs.", "Fact 3.7 (Operator norm bound) Let $A$ be a symmetric $d\\times d$ matrix with rational entries with numerators and denominators upper-bounded by $2^B$ and $v$ be a vector in ${R}^d$ .", "Then, for every $\\varepsilon \\ge 0$ , ${2}{v} \\left\\lbrace v^{\\top } A v \\le \\Vert A\\Vert _2\\Vert v\\Vert ^2_2 + \\varepsilon \\right\\rbrace $ The total bit complexity of the proof is $\\mathrm {poly}(B,d,\\log 1/\\varepsilon )$ .", "Fact 3.8 (SoS Hölder's inequality) Let $f_i,g_i$ for $1 \\le i \\le s$ be indeterminates.", "Let $p$ be an even positive integer.", "Then, ${p^2}{f,g} \\left\\lbrace \\left(\\frac{1}{s} \\sum _{i = 1}^s f_i g_i^{p-1}\\right)^{p} \\le \\left(\\frac{1}{s} \\sum _{i = 1}^s f_i^p\\right) \\left(\\frac{1}{s} \\sum _{i = 1}^s g_i^p\\right)^{p-1}\\right\\rbrace \\,.$ The total bit complexity of the sos proof is $s^{O(p)}$ .", "Observe that using $p = 2$ yields the SoS Cauchy-Schwarz inequality.", "Fact 3.9 (SoS almost triangle inequality) Let $f_1, f_2, \\ldots , f_r$ be indeterminates.", "Then, ${2t}{f_1, f_2,\\ldots ,f_r} \\left\\lbrace \\left(\\sum _{i\\le r} f_i\\right)^{2t} \\le r^{2t-1} \\left(\\sum _{i =1}^r f_i^{2t}\\right)\\right\\rbrace \\,.$ The total bit complexity of the sos proof is $r^{O(t)}$ .", "Fact 3.10 (SoS AM-GM inequality, see Appendix A of [9]) Let $f_1, f_2,\\ldots , f_m$ be indeterminates.", "Then, $\\left\\lbrace f_i \\ge 0\\mid i \\le m\\right\\rbrace {m}{f_1, f_2,\\ldots , f_m} \\left\\lbrace \\left(\\frac{1}{m} \\sum _{i =1}^m f_i \\right)^m \\ge \\Pi _{i \\le m} f_i\\right\\rbrace \\,.$ The total bit complexity of the sos proof is $\\exp (O(m))$ .", "Fact 3.11 (Cancellation within sum-of-squares, Lemma 9.3 in [8]) Let $a,C$ be indeterminates.", "Then, $\\lbrace a \\ge 0\\rbrace \\cup \\lbrace a^t \\le Ca^{t-1}\\rbrace {2t}{a,C} \\left\\lbrace a^{2t} \\le C^{2t}\\right\\rbrace \\,.$ Fact 3.12 (Univariate sum-of-squares proofs) Let $p$ be a degree-$d$ univariate polynomial with rational coefficients of bit complexity $B$ such that $p(x) \\ge 0$ for every $x \\in {R}$ .", "Then, for every $\\varepsilon >0$ , there is a degree-$d$ sum-of-squares polynomial $q(x)$ with coefficients of bit complexity $O(\\mathrm {poly}(B, \\log 1/\\varepsilon ))$ such that $\\varepsilon +p(x)=q(x)$ .", "Lemma 3.13 (Simple cancellation within sum-of-squares) Let $a$ an indeterminate and $C$ some positive constant.", "Then, $\\lbrace a^2 \\le Ca \\rbrace {2}{a} \\left\\lbrace a^{2} \\le C^{2}\\right\\rbrace \\,.$ $\\lbrace a^2 \\le C \\rbrace {2}{a} \\left\\lbrace a \\le \\sqrt{C}\\right\\rbrace \\,.$ For the first claim, we have: $\\lbrace a^2 \\le Ca \\rbrace {2}{a} \\left\\lbrace a^2 \\le a^2 + (a-C)^2 = C^2+2a^2 - 2aC \\le C^2\\right\\rbrace \\,.$ For the second claim, note that it is enough to prove the claim for $C=1$ (and apply this special case to $a/C$ ).", "Using the fact that ${2}{a} \\left\\lbrace (1+a)^2 \\le 2 a^2 + 2\\right\\rbrace $ , we have: $\\lbrace a^2 \\le 1 \\rbrace {2}{a} \\left\\lbrace a = \\frac{1}{4} (a+1)^2-\\frac{1}{4}(1-a)^2 \\le \\frac{1}{2}(a^2 +1) \\le 1\\right\\rbrace \\,.$ We also need the following fact about random matrices: Fact 3.14 (Singular values of random matrices, consequence of Theorem 2.3.21 [58]) Fix any $\\varepsilon >0$ .", "Let $A$ be a $k \\times n$ matrix for $k \\le n$ with independent entries with magnitude at most $n^{0.5-\\varepsilon }$ , mean 0 and variance 1.", "Then, for large enough $n$ , with probability at least $0.99$ , $\\left\\Vert A\\right\\Vert _2 \\le 
O(\\sqrt{n})$ .", "Fact 3.15 (Singular values of rectangular random matrices,  [59]) Let $A$ be a $k$ by $n$ matrix with independent entries chosen uniformly from $\\lbrace -1,1\\rbrace $ for $k \\le n$ .", "Then, with probability at least $0.99$ over the draw of entries of $A$ , the largest singular value of $A$ is at most $O(\\sqrt{n})$ and the $k$ -th smallest singular value of $A$ is at least $\\Omega (\\sqrt{n} - \\sqrt{k-1})$ ." ], [ "Certifying biclique bounds in unbalanced random bipartite graphs", "In this section, we develop low-degree sum-of-squares certificates of upper bounds on biclique numbers for unbalanced random bipartite graphs.", "We use $H$ to denote a bipartite graph with left vertex set $U(H)$ and the right vertex set $V(H)$ .", "For a bipartite graph $H=H(U,V)$ with $U=U(H)$ and $V=V(H)$ , let $\\mathcal {B}=\\mathcal {B}(H)$ be the following system of polynomial constraints, which has as solution every biclique $(S,T)$ in $H$ of total size $k$ with $S=\\lbrace u \\in U \\mid x_u=1\\rbrace $ and $T = \\lbrace v \\in V \\mid y_v = 1\\rbrace $ : $ \\mathcal {B}(H)\\colon \\left\\lbrace \\begin{aligned}&\\forall u \\in U& x_u^2& =x_u \\\\&\\forall v \\in V& y_v^2& =y_v \\\\&&\\left(\\sum _{u \\in U} x_u\\right) + \\left(\\sum _{v \\in V} y_v\\right)&= k\\\\&\\forall u \\in U,v \\in V \\text{ s.t. }", "\\lbrace u,v\\rbrace \\notin H& x_u y_v& = 0\\end{aligned}\\right\\rbrace \\,.$ Remark 4.1 Notice that the above biclique formulation places a constraint on the total size of the clique.", "This is the natural formulation that arises in our reduction from the semirandom planted clique problem.", "Intuitively speaking, given a graph $G \\sim FK(n,k,1/2)$ , the bipartite graph we care about is $H=\\mathsf {cut}(S^*)$ where $S^*$ is the planted clique of size $k$ .", "The bicliques we care about refuting are obtained by taking an arbitrary $k$ -clique $S$ in $G$ and taking the induced biclique in $H$ with left vertices $S \\cap S^*$ and right vertices $S \\setminus S^*$ .", "In particular, notice that the total size of the biclique is $k$ and so long as $S \\ne S^*$ , the right hand side of the clique contains at least one vertex.", "For more a detailed commentary, we direct the reader to Section .", "For ease of exposition, we will present our certificates and analysis for the most important case of $p=1/2$ first and then follow it up with a generalization to arbitrary $p$ in the following subsection." 
], [ "The case of $p=1/2$", "The goal of the following theorem is to show that with high probability over the draw $H \\sim B(k,n,1/2)$ of a bipartite Erdős-Rényi random graph with edge density $p=1/2$ and $k \\le n$ , there is a degree $r$ (and thus verifiable in time $n^{O(r)}$ ) sum-of-squares certificate of (informally speaking) the fact that any $\\ell $ by $k-\\ell $ biclique with $k-\\ell >1$ satisfies $\\ell \\le O(r) (n/k)^{O(1/r)}$ .", "In particular, for any $\\varepsilon >0$ , by choosing $r=O(1/\\varepsilon )$ , we get a $n^{O(1/\\varepsilon )}$ -time verifiable certificate of the absence of $n^{\\varepsilon } \\times k-n^{\\varepsilon }$ -biclique in $H$ .", "Formally, we will prove: Theorem 4.2 (Sum-of-squares certificates for unbalanced bicliques in random bipartite graphs) Let $H=H(U,V,1/2) \\sim B(k,n,1/2)$ be a bipartite Erdős-Rényi graph with edge probability $1/2$ .", "Then, for any $r \\le O(\\frac{k^2}{n \\log n})$ , with probability at least $0.99$ over the draw of $H$ , the sizes $X = \\sum _{u \\in U} x_u$ of the sets indicated by $x$ and $y$ respectively satisfy: $\\mathcal {B}(H) {4r+2}{x,y} \\Biggl \\lbrace X^{4r} \\cdot Y\\le (1000r)^{10r}n \\left(\\frac{n}{k}\\right)^4 \\Biggr \\rbrace \\,.$ Further, the total bit complexity of the SoS proof is $n^{O(r)}$ .", "As a corollary, we obtain that whp over the choice of $H \\sim B(k,n,p)$ , for every pseudo-distribution $D$ of degree $\\ge 4r+2$ satisfying $\\mathcal {B}(H)$ , we must have $\\tilde{{E}}_{D}[X^{4r} Y] \\le (1000r)^{10r} n (n/k)^4$ .", "Remark 4.3 Observe that if $H$ contains a $\\ell $ by $(k-\\ell )$ biclique for $k-\\ell \\ge 1$ then the above theorem yields that $\\ell ^r (k-\\ell )\\le X^{r} Y\\le (1000r)^{10r} n(n/k)^4$ and thus, $\\ell \\le O(r) n^{O(1/r)}$ .", "That is, there exist degree $O(r)$ certificates of absence of $\\ell $ by $k$ bicliques in $H$ for $\\ell \\sim n^{O(1/r)}$ .", "Our proof of Theorem REF uses two simple pseudorandom properties of the graph $H$ and thus works for all graphs that satisfy these properties.", "For any $S \\subseteq U$ , let $u_S$ be a vector in $\\lbrace -1,1\\rbrace ^{|V|}$ so that for any $j$ , $u_S(j)$ is the parity of $\\lbrace \\pm 1\\rbrace $ edge indicators for edges $\\lbrace i,j\\rbrace $ as $i \\in S$ .", "Then, we will need the following $r$ -fold balancedness property that informally asks that the vector $u_S$ be nearly balanced for every subset $S \\subseteq U$ of size at most $r$ .", "In addition, we will also need that every vertex on the right side of $H$ has degree no larger than $k/2+O(\\sqrt{k \\log n})$ .", "Definition 4.4 (Balancedness) Let $H$ be a bipartite graph with left vertex set $U$ and right vertex set $V$ .", "For every $S \\subseteq U$ and $T \\subseteq V$ , let $u_S$ be the $|V|$ -dimensional vector defined by setting $u_S(v)= \\prod _{u \\in S} H(u,v)$ and $v_T$ be the analogously defined $|U|$ -dimensional vector $v_T(u) = \\prod _{v \\in T} H(u,v)$ .", "Then, we define the $r$ -fold balancedness of $H$ to be the smallest $\\Delta _r$ such that for every $S \\subseteq U$ of size $|S| \\le r$ , it holds that $|\\sum _{j \\in V} u_S(j)|\\le \\Delta _r$ .", "The following lemma verifies that the above two pseudorandom properties hold for random bipartite graphs by a simple application of Hoeffding and union bounds.", "Lemma 4.5 (Balancedness of random bipartite graphs) Let $H \\sim H(U,V,1/2)$ with $n= |U \\cup V|$ and $k=|U|$ .", "Then, for any $r \\le |U|$ , with probability at least $0.99$ over the draw of $H$ , 1) $H$ is 
$\Delta $ $r$ -fold balanced for $\Delta =O(\sqrt{r |V| \log |U|})$ , and 2) the maximum degree of a node in $V$ is $k/2 + O(\sqrt{k\log |V|})$ .", "For $S \subseteq U$ such that $|S| \le r$ , we have that $u_{S}(j)$ has mean 0 and is bounded between $-1$ and 1.", "Then, by Hoeffding's inequality, $\Pr \left[ \left| \sum _{j \in V} u_{S}(j) \right| \ge t \sqrt{|V|}\right] \le 2e^{-t^2/2} \,,$ so, by a union bound over all choices of $S$ , $\Pr \left[ \exists S \subseteq U \text{ s.t. } |S| \le r, \left| \sum _{j \in V} u_{S}(j) \right| \ge t \sqrt{|V|}\right] \le |U|^{r} \cdot 2e^{-t^2/2} \,.$ Choosing $t = O(\sqrt{r \log |U|})$ makes the right-hand side a small constant, so we have $r$ -fold balancedness $O(\sqrt{r|V| \log |U|})$ .", "The degree of a node in $V$ is a binomial random variable $\operatorname{Bin}(k, 0.5)$ , which by standard bounds is larger than $k/2+t$ with probability at most $e^{-t^2/k}$ .", "By a union bound over all nodes in $V$ , the maximum degree is larger than $k/2+t$ with probability at most $|V|e^{-t^2/k}$ , so choosing $t=O(\sqrt{k \log |V|})$ makes the probability a small constant.", "Hence, the maximum degree is $k/2+O(\sqrt{k\log |V|})$ .", "The key component of the proof of Theorem REF is the following lemma that gives a sum-of-squares certificate of an upper bound on a quantity closely related to $X^{2r}Y$ .", "Lemma 4.6 Let $H$ be a bipartite graph with left and right vertex sets $U, V$ satisfying $k=|U|\le |V| = n$ with $2r$ -fold balancedness $\Delta _{2r}$ .", "Then, $\mathcal {B}(H) {4r}{x,y} \Biggl \lbrace \left(\sum _{|S|=r} x_S\right)^{2} \sum _i y_i \le \left(\sum _{|S|=r} x_S\right) |V| + \left(\sum _{|S|=r} x_S\right)^{2} \Delta _{2r}\Biggr \rbrace \,.$ Further, the bit complexity of the SoS proof is $n^{O(r)}$ .", "Remark 4.7 (Proof Plan) In order to interpret this lemma, we suggest that the reader think of $\sum _{|S|=r} x_S \approx (\sum _i x_i)^{r} = X^{r}$ (this is formally shown to be fine in Lemma REF ).", "Then, rearranging the conclusion of the lemma yields a statement of the form $X^{2r} (Y-\Delta _{2r}) \le X^r |V|$ .", "At this point, “in real life” (as opposed to within the sum-of-squares proof system), we could reason as follows: if $Y\ge \Delta _{2r}$ then, “canceling” $X^{r}$ from both sides and “dividing through” by $(Y-\Delta _{2r})$ yields that $X^r \le |V|$ , giving us a bound on the left-hand side of the biclique as desired.", "On the other hand, if $Y < \Delta _{2r}$ , then $X \gg k/2$ , a case that can be ruled out using the upper bound on the degrees of the right vertices.", "This argument, however, is not easy to do within the low-degree sum-of-squares proof system because of the case analysis involved.", "Indeed, a similar issue arises in the context of list-decodable learning and robust clustering algorithms that rely on certifiable anticoncentration (see overview of [6] for a discussion and a general resolution, and also the discussion on the need for a priori bounds in [15]).", "In our situation, we can resolve this need using a more straightforward observation (see Lemma REF ).", "The rest of the steps above can indeed be implemented within low-degree sum-of-squares via cancellation inequalities (see, e.g., Fact REF ).", "We postpone the proof of this key lemma for the time being and first show how to use it.", "The following simple lemma uses a bound on the degree of the right vertices in $H$ in order to lower bound the LHS of the conclusion of Lemma REF .", "This will allow us to eliminate the term $\Delta _{2r} \left(\sum _{|S|=r} x_S\right)^{2}$ from the RHS of the conclusion of Lemma REF .", "Lemma 4.8 (Lower bounding the LHS of (REF )) Let $H$ be a bipartite graph on left and right vertex sets $U, V$ satisfying $k=|U|\le |V|=n-k$ with maximum degree of a node in $V$ at most $k/2+\Delta _{\ell }$ .", "Then, for any $j \in V$ , we have: $\mathcal {B}(H) {4r+2}{x,y} \Biggl \lbrace y_j \left(\sum _{S: |S|=r} x_S\right)^2 \left(\sum _{v \in V} y_v\right) \ge \left(\frac{k}{2}-\Delta _{\ell }\right) y_j \left(\sum _{S: |S|=r} x_S\right)^2 \Biggr \rbrace \,.$ Further, the bit complexity of the SoS proof is $n^{O(r)}$ .", "Every $j \in V$ has degree at most $\frac{k}{2}+\Delta _{\ell }$ .", "Thus, using the constraint system $\mathcal {B}(H)$ , we have: $\mathcal {B}(H) {2}{x,y} \left\lbrace \sum _{u \in U} x_u y_j = \sum _{u \in U:u \sim j} x_u y_j \le \left(\frac{k}{2}+\Delta _{\ell }\right) y_j\right\rbrace \,.$ Thus, $\mathcal {B}(H) {2}{x,y} \left\lbrace \sum _{u \in U} x_u = k-\sum _{v \in V} y_v\right\rbrace $ allows us to conclude: $\mathcal {B}(H) {2}{x,y} \left\lbrace \sum _{v \in V} y_v y_j \ge \left(\frac{k}{2}-\Delta _{\ell }\right) y_j\right\rbrace $ Multiplying both sides by the sum-of-squares polynomial $\left(\sum _{S: |S|=r} x_S\right)^2$ completes the proof.", "As a direct consequence of Lemma REF and Lemma REF , we obtain: Lemma 4.9 Let $H$ be a bipartite graph on left and right vertex sets $U, V$ satisfying $k=|U|\le |V|=n-k$ with $2r$ -fold balancedness $\Delta _{2r}$ and maximum degree of a node in $V$ at most $k/2+\Delta _{\ell }$ .", "Suppose further that $\frac{k}{2} - \Delta _{\ell } - \Delta _{2r} \ge \frac{k}{4}$ .", "Then, we have: $\mathcal {B}(H) {4r+2}{x,y} \Biggl \lbrace Y \left(\sum _{S: |S|=r}x_S\right)^4 \le n\left(\frac{4n}{k}\right)^4\Biggr \rbrace \,.$ Further, the bit complexity of the SoS proof is $n^{O(r)}$ .", "We first multiply both sides of the conclusion of Lemma REF with the sum-of-squares polynomial $y_j^2$ for an arbitrary $j \in V$ : $\mathcal {B}(H) {4r+2}{x,y} \Biggl \lbrace y_j\left(\sum _{|S|=r} x_S\right)^{2} \sum _i y_i \le y_j \left(\sum _{|S|=r} x_S\right) |V| + y_j \left(\sum _{|S|=r} x_S\right)^{2} \Delta _{2r}\Biggr \rbrace \,.$ Next, we use Lemma REF to replace the left-hand side by a useful lower bound: $\mathcal {B}(H) {4r+2}{x,y} \Biggl \lbrace y_j \left(\frac{k}{2}-\Delta _\ell \right) \left(\sum _{S: |S|=r} x_S\right)^2 \le y_j \left(\sum _{|S|=r} x_S\right) |V| + y_j \left(\sum _{|S|=r} x_S\right)^{2} \Delta _{2r}\Biggr \rbrace \,.$ We then move the second term on the right-hand side to the left-hand side and use that $\frac{k}{2} -\Delta _{\ell }- \Delta _{2r} \ge \frac{k}{4}$ to conclude: $\mathcal {B}(H) {4r+2}{x,y} \Biggl \lbrace y_j \left(\sum _{S: |S|=r} x_S\right)^2 \le y_j \left(\sum _{|S|=r} x_S\right) \frac{4n}{k}\Biggr \rbrace \,.$ We finally apply Lemma REF with $a = y_j (\sum _{S: |S|=r} x_S)$ , $C = \frac{4n}{k}$ , and $t=2$ to obtain: $\mathcal {B}(H) {4r+2}{x,y} \Biggl \lbrace y_j \left(\sum _{S: |S|=r}x_S\right)^4 \le y_j^4 \left(\sum _{S: |S|=r}x_S\right)^4 \le \left(\frac{4n}{k}\right)^4\Biggr \rbrace \,.$ Summing up as $j$ varies over $V$ completes the proof.", "Finally, we invoke the following simple observation that allows us to replace $\sum _{|S| =r} x_S$ by $(\sum _i x_i)^r$
: Lemma 4.10 For every $\varepsilon >0$ , there is a sum-of-squares proof with coefficients of bit complexity $O(\mathrm {poly}(|U|, \log 1/\varepsilon ))$ $ \Biggl \lbrace x_i^2 = x_i \text{ } \forall i \in U \Biggr \rbrace {r}{x} \Biggl \lbrace \frac{1}{2^r r!} \left(\sum _i x_i\right)^r - \frac{2r^r}{r!} - \varepsilon \le \sum _{|S|=r} x_S \le \frac{1}{r!} \left(\sum _i x_i\right)^r + \varepsilon \Biggr \rbrace \,.$ The following polynomial identity holds: $\sum _{|S|=r} x_S = \frac{1}{r!}\left(\sum _i x_i\right)\left(\sum _i x_i - 1\right)\cdots \left(\sum _i x_i - (r - 1)\right)$ .", "Then $\sum _{|S| = r} x_S \le \frac{1}{r!}\left(\sum _i x_i\right)^r$ and $\sum _{|S| = r} x_S \ge \frac{1}{r!}\left(\sum _i x_i - r\right)^r - \frac{r^r}{r!} \ge \frac{1}{2^r r!}\left(\sum _i x_i\right)^r - \frac{2r^r}{r!}\,,$ where in the last inequality the subtracted term makes the inequality trivial unless $\sum _i x_i \ge 2r$ , in which case we use that $\sum _i x_i -r \ge \sum _i x_i / 2$ .", "Notice that $\sum _{|S|=r} x_S$ is a univariate degree-$r$ polynomial in $\sum _i x_i$ .", "Then the inequalities $\sum _{|S| = r} x_S \le \frac{1}{r!}\left(\sum _i x_i\right)^r$ and $\sum _{|S| = r} x_S \ge \frac{1}{2^r r!}\left(\sum _i x_i\right)^r - \frac{2r^r}{r!}$ can be written as univariate polynomial inequalities $p_U(\sum _i x_i) \ge 0$ and $p_L(\sum _i x_i) \ge 0$ , respectively, with $p_U$ and $p_L$ of degree at most $r$ .", "It is easy to check that the coefficients of $p_U$ and $p_L$ have bit complexity $O(\mathrm {poly}(|U|))$ , so by Fact REF the conclusion follows.", "We can finish the proof of Theorem REF from here: [Proof of Theorem REF ] From Lemma REF , we have: $\Biggl \lbrace x_i^2 = x_i \text{ } \forall i \in U \Biggr \rbrace {r}{x} \Biggl \lbrace \frac{1}{2^r r!} \left(\sum _i x_i\right)^r - \frac{2r^r}{r!} - \varepsilon \le \sum _{|S|=r} x_S \Biggr \rbrace \,.$ Setting $\varepsilon =1$ and using that $\lbrace 0 \le a \le C\rbrace {4}{a,C} \lbrace a^4 \le C^4\rbrace $ , we have: $\Biggl \lbrace x_i^2 = x_i \text{ } \forall i \in U \Biggr \rbrace {4r}{x} \Biggl \lbrace \left(\sum _i x_i\right)^{4r} \le (100r)^{10r} + (100r)^{10r} \left(\sum _{|S|=r} x_S\right)^{4} \Biggr \rbrace \,.$ Now we want to combine this with the conclusion of Lemma REF .", "We briefly verify that we satisfy the condition $\frac{k}{2} - \Delta _{\ell } - \Delta _{2r} \ge \frac{k}{4}$ .", "We have by Lemma REF that $\Delta _{\ell } = O(\sqrt{k \log |V|}) = O(\sqrt{k \log n})$ and $\Delta _{2r} = O(\sqrt{r|V| \log |U|}) = O(\sqrt{rn \log n})$ .", "Observe that for $k \ge O(\sqrt{rn \log n})$ large enough the condition is satisfied.", "Then we have: $\mathcal {B}(H) {4r+2}{x,y} \Biggl \lbrace Y \left(\sum _i x_i\right)^{4r} \le (100r)^{10r}Y + (100r)^{10r} n\left(\frac{4n}{k}\right)^4 \Biggr \rbrace \,.$ Observing that $\mathcal {B}(H) {2}{y} \left\lbrace Y \le n\right\rbrace $ completes the proof."
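To get a quantitative feel for the two pseudorandom properties driving Theorem 4.2, the following toy experiment (illustrative parameters, far smaller than the regime of the theorem) computes the $r$-fold balancedness and the maximum right degree of a small random bipartite graph by brute force and compares them with the bounds of Lemma 4.5.

```python
import numpy as np
from itertools import combinations

# Empirical sanity check of Lemma 4.5 for a small random bipartite graph with
# +/-1 edge indicators and p = 1/2.
rng = np.random.default_rng(0)
k, n, r = 12, 200, 2
H = rng.choice([-1, 1], size=(k, n))

# r-fold balancedness: max over nonempty |S| <= r of |sum_j prod_{i in S} H[i, j]|
Delta_r = max(abs(np.prod(H[list(S), :], axis=0).sum())
              for s in range(1, r + 1)
              for S in combinations(range(k), s))
max_right_deg = ((H + 1) // 2).sum(axis=0).max()   # degree of a right vertex

print("Delta_r =", Delta_r, "vs O(sqrt(r*n*log k)) ~", round((r * n * np.log(k)) ** 0.5, 1))
print("max right degree =", max_right_deg, "vs k/2 + O(sqrt(k*log n)) ~",
      round(k / 2 + (k * np.log(n)) ** 0.5, 1))
```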
], [ "Proof of Lemma ", "We now return to the proof of Lemma REF .", "[Proof of Lemma REF ] Let us write $u_S^{\\prime }$ for the vector-valued linear function in indeterminate $y$ defined by $u_S^{\\prime }(i) = u_S(i) (1-y_i)$ .", "Then, observe that $\\mathcal {A}{2r+2}{x,y} \\left\\lbrace x_S u_S(i)y_i = x_S y_i\\right\\rbrace $ .", "In particular, $\\mathcal {A}{4r}{x,y} \\left\\lbrace \\left\\Vert x_S u_S^{\\prime }\\right\\Vert _2^2 = x_S(|V|-\\sum _i y_i)\\right\\rbrace $ .", "Further, for any $r \\in {N}$ and any $S \\subseteq U$ such that $|S|=r$ , we have: $\\mathcal {A}{4r}{x,y} \\left\\lbrace x_S \\sum _{i} u_S^{\\prime }(i) = x_S \\sum _{i} u_S(i)(1-y_i) \\le x_S \\Delta _{2r} - x_S \\sum _i y_i\\right\\rbrace \\,.$ Next, let $S,T \\subseteq U$ such that $S \\ne T$ and $|S|,|T|\\le r$ .", "Then, by noting that $u_{S}^{\\prime } \\circ u_{T}^{\\prime } = u_{S \\Delta _{2r} T}^{\\prime }$ , we have: $\\mathcal {A}{4r}{x,y} \\left\\lbrace x_S x_T \\langle u_S^{\\prime },u_T^{\\prime }\\rangle = x_{S \\cup T} \\sum _{i} u_{S \\Delta _{2r} T}(i)(1-y_i) \\le x_{S \\cup T} \\Delta _{2r} - x_{S \\cup T} \\sum _i y_i\\right\\rbrace \\,.$ Next, we have This is a spectral proof of the classical fact upper bounding the number of negatively correlated vectors in $n$ dimensions: A4rx,y { 0 |S| = r xS uS'22 = |S| = r xS uS'22 + S T xS uS', xT uT' (|S|=r xS) (|V|-i yi) + S T ( xS T2r - xS T i yi) (|S|=r xS) |V| + S,T U, |S|,|T|=r xS T 2r - S,T U, |S|,|T|=r xS T i yi = (|S|=r xS) |V| + (|S|=r xS)2 2r - (|S|=r xS)2 i yi }  .", "Rearranging gives: A4rx,y { (|S|=r xS)2 i yi (|S|=r xS) |V| + (|S|=r xS)2 2r } ." ], [ "The case of arbitrary $p$", "In this section, we generalize the certificates of Section  to general edge densities.", "The certificates use the same system of polynomial constraints $\\mathcal {B}$ as in the previous section.", "Theorem 4.11 (Sum-of-squares certificates for unbalanced bicliques in random bipartite graphs for general densities) Let $H=H(U,V,p) \\sim B(k,n, p)$ be a bipartite Erdős-Rényi with edge probability $p$ .", "Then with probability $0.99$ we obtain the following two bounds: Fix any $\\varepsilon > 0$ independently of the other parameters.", "For $p, 1-p \\ge n^{-(1-\\varepsilon )}$ , $\\mathcal {B}(H) {4}{x,y} \\left\\lbrace XY \\le O\\left(\\frac{np}{1-p}\\right) \\right\\rbrace \\,.$ For any $r$ such that $k \\ge \\max \\lbrace O(\\sqrt{r n \\log n}p^{2r}/(1-p)^{2r+1}), O((\\log n) p/(1-p))\\rbrace $ , $\\mathcal {B}(H) {4r+2}{x,y} \\left\\lbrace X^{4r}Y \\le (1000r)^{10r} n \\left(\\frac{n\\max \\lbrace p/(1-p),(1-p)/p\\rbrace ^rp^r}{k(1-p)^{r+1}}\\right)^4 \\right\\rbrace \\,.$ Here, we use the shorthand $X = \\sum _{u \\in U} x_u$ and $Y= \\sum _{v \\in V} y_v$ for the size of the sets indicated by $x$ and $y$ respectively.", "The total bit complexity of the SoS proofs is $n^{O(1)}$ and $n^{O(r)}$ , respectively.", "For the proof of Theorem REF , we will work with matrices with $p$ -biased characters as entries.", "We first define these well-studied object.", "Definition 4.12 ($p$ -biased characters and normalized adjacency matrix) Let $H$ be a bipartite graph with left vertex set $U$ and right vertex set $V$ .", "We define the $p$ -biased character corresponding to an edge $H(u, v)$ to be $H_p(u, v) = {\\left\\lbrace \\begin{array}{ll}\\sqrt{\\frac{1-p}{p}} & \\text{if } H(u, v) = 1\\,,\\\\-\\sqrt{\\frac{p}{1-p}} & \\text{if } H(u, v) = -1\\,.\\end{array}\\right.", "}$ The normalized adjacency matrix $H_p$ of the graph is matrix with the $(u,v)$ -th entry 
equal to $H_p(u,v)$ .", "Let us first analyze a simple spectral certificate (that confirms that our algorithm recovers the bounds of [48], [14] from a basic relaxation in our scheme) in order to recover the first bound above.", "Lemma 4.13 (Simple spectral certificate) Fix any $\varepsilon > 0$ .", "Under the hypothesis of Theorem REF , for any $p,1-p \ge n^{-(1-\varepsilon )}$ , we have: $\mathcal {B}(H) {4}{x,y} \left\lbrace \left(\sum _i x_i \right) \left(\sum _i y_i\right) \le \frac{p}{1-p}\left\Vert H_p\right\Vert _2^2 \le O\left(\frac{np}{1-p}\right) \right\rbrace \,.$ The total bit complexity of the SoS proofs is $n^{O(1)}$ .", "Here, $H_p$ is the normalized adjacency matrix of the bipartite graph defined in Definition REF .", "We have: $\mathcal {B}(H) {4}{x,y} \left\lbrace \frac{1-p}{p} \left(\sum _i x_i \right)^2 \left(\sum _i y_i\right)^2 = (x^{\top } H_p y)^2 \le \left\Vert x\right\Vert _2^2 \left\Vert H_p y\right\Vert _2^2 \le \left\Vert x\right\Vert _2^2 \left\Vert H_p\right\Vert _2^2 \left\Vert y\right\Vert _2^2 \right\rbrace \,.$ In the first inequality above, we used the sum-of-squares Cauchy-Schwarz inequality.", "Applying the first part of Lemma REF with $a = \left\Vert x\right\Vert _2^2 \left\Vert y\right\Vert _2^2$ : $\mathcal {B}(H) {4}{x,y} \left\lbrace \left(\sum _i x_i \right)^2 \left(\sum _i y_i\right)^2 \le \frac{p^2}{(1-p)^2} \left\Vert H_p\right\Vert _2^4 \right\rbrace \,.$ Applying the second part of Lemma REF with $a = \left\Vert x\right\Vert _2^2 \left\Vert y\right\Vert _2^2$ we obtain: $\mathcal {B}(H) {4}{x,y} \left\lbrace \left(\sum _i x_i \right) \left(\sum _i y_i\right) \le \frac{p}{1-p}\left\Vert H_p\right\Vert _2^2 \right\rbrace \,.$ Finally, notice that the entries of $H_p$ have mean 0, variance 1, and are bounded in magnitude by $\max \lbrace \sqrt{p/(1-p)}, \sqrt{(1-p)/p}\rbrace $ .", "For $p, 1-p \ge n^{-(1-\varepsilon )}$ , the bound on the entries is at most $n^{0.5-\varepsilon /2}$ .", "So we can apply Fact REF to conclude that $\left\Vert H_p\right\Vert _2 =O(\sqrt{n})$ with probability at least $0.99$ .", "The proof of the second bound in Theorem REF uses a generalization of $r$ -fold balancedness defined in terms of $p$ -biased characters.", "We call this new property $r$ -fold $p$ -balancedness.", "Definition 4.14 (Balancedness for general densities) Let $H$ be a bipartite graph with left vertex set $U$ and right vertex set $V$ .", "For every $S \subseteq U$ , we define $u_{p,S}$ to be the $|V|$ -dimensional vector defined by setting $u_{p,S}(v)= \prod _{u \in S} H_p(u,v)$ .", "Then, we say that $H$ has $2r$ -fold $p$ -right-balancedness $(\Delta , \delta )$ if, for all $S, T \subseteq U$ such that $|S|,|T| \le r$ , $|\sum _{j \in V} u_{p,S}(j) u_{p,T}(j)| \le \Delta $ .", "The following lemma verifies $r$ -fold $p$ -balancedness of random bipartite graphs, as well as an upper bound on the maximum degree of the vertices on the right-hand side.", "Lemma 4.15 (Balancedness of random bipartite graphs for general densities) Let $H \sim H(U,V,p)$ with $n= |U \cup V|$ and $k=|U|$ .", "Then with probability at least $0.99$ over the draw of $H$ , it has $2r$ -fold $p$ -right-balancedness $O(\sqrt{r |V| \log |U|}p^r/(1-p)^r)$ , and the maximum degree of a node in $V$ is $kp+O(\sqrt{kp(1-p) \log |V|})$ .", "For $S, T \subseteq U$ such that $|S|,|T| \le r$ , we have that $u_{p,S}(j) u_{p,T}(j)$ has mean 0 and is bounded between $-\left(\sqrt{\frac{p}{1-p}}\right)^{2r} = -p^r/(1-p)^r$ and $\left(\sqrt{\frac{p}{1-p}}\right)^{2r} = p^r/(1-p)^r$ .", "Then, by Hoeffding's inequality, $\Pr \left[ \left| \sum _{j \in V} u_{p,S}(j)u_{p,T}(j) \right| \ge t \sqrt{|V|} p^r / (1-p)^r\right] \le 2e^{-t^2/2} \,,$ so, by a union bound over all choices of $S$ and $T$ , $\Pr \left[ \exists S, T \subseteq U \text{ s.t. 
}", "|S|, |T| \\le r, \\left| \\sum _{j \\in V} u_{p,S}(j)u_{p,T}(j) \\right| \\ge t \\sqrt{|V|} p^r / (1-p)^r\\right] \\le |U|^{2r} \\cdot 2e^{-t^2/2} \\,.$ Choosing $t = O(\\sqrt{r \\log U})$ makes the right-hand side a small constant, so we have $2r$ -fold $p$ -right-balancedness $O(\\sqrt{r|V| \\log |U|}p^r/(1-p)^r)$ .", "The degree of a node in $V$ is a binomial random variable $\\operatorname{Bin}(k, p)$ , which by standard bounds is larger than $kp+t$ with probability at most $\\min \\lbrace e^{-t^2/(2k(1-p))}, e^{-t^2/(2kp+2t/3)}\\rbrace $ .", "By a union bound over all nodes in $V$ , the maximum degree is larger than $kp+t$ with probability at most $|V|\\min \\lbrace e^{-t^2/(2k(1-p))}, e^{-t^2/(2kp+2t/3)}\\rbrace $ , so choosing $t=O(\\sqrt{kp(1-p) \\log |V|})$ makes the probability a small constant.", "Hence, the maximum degree is $kp+O(\\sqrt{kp(1-p)\\log |V|})$ .", "The following lemma is the key component of the proof of Theorem REF , and is analogous to Lemma REF in Section .", "Lemma 4.16 Let $H$ be a bipartite graph on left vertex set $U$ and right vertex set $V$ such that $|U|\\le |V|$ with $2r$ -fold $p$ -right-balancedness $\\Delta _{2r}$ .", "Then, $\\mathcal {B}(H) {4r}{x,y} \\Biggl \\lbrace \\left(\\sum _{|S|=r} x_S\\right)^{2} \\sum _i y_i \\left(1-p\\right)^{r}/p^r \\le \\left(\\sum _{|S|=r} x_S\\right) |V| \\max \\lbrace p/(1-p),(1-p)/p\\rbrace ^r + \\left(\\sum _{|S|=r} x_S\\right)^{2} \\Delta _{2r}\\Biggr \\rbrace \\,.$ The total bit complexity of the SoS proof is $n^{O(r)}$ .", "We postpone the proof of the lemma, and combine the result with an observation analogous to that in Lemma REF .", "Lemma 4.17 (Lower bounding the LHS of (REF )) Let $H$ be a bipartite graph on left and right vertex sets $U, V$ satisfying $k=|U|\\le |V|=n-k$ with maximum degree of a node in $V$ of at most $kp+\\Delta _{\\ell }$ with $\\Delta _{\\ell } \\ge 0$ .", "Then, for any $j \\in V$ , we have: $\\mathcal {B}(H) {4r}{x,y} \\Biggl \\lbrace y_j \\left(\\sum _{S: |S|=r} x_S\\right)^2 \\left(\\sum _{v \\in V} y_v\\right) \\ge \\left(k(1-p)-\\Delta _{\\ell }\\right) y_j \\left(\\sum _{S: |S|=r} x_S\\right)^2 \\Biggr \\rbrace \\,.$ The total bit complexity of the SoS proof is $n^{O(r)}$ .", "Every $j \\in V$ has degree at most $kp+\\Delta _{\\ell }$ .", "Thus, we have using the constraint system $\\mathcal {B}(H)$ : $\\mathcal {B}(H) {2}{x,y} \\left\\lbrace \\sum _{u \\in U} x_u y_j = \\sum _{u \\in U:u \\sim j} x_u y_j \\le (kp+\\Delta _\\ell ) y_j\\right\\rbrace $ Thus, $\\mathcal {B}(H) {2}{x,y} \\left\\lbrace \\sum _{u \\in U} x_u = k-\\sum _{v \\in V} y_v\\right\\rbrace $ allows us to conclude: $\\mathcal {B}(H) {2}{x,y} \\left\\lbrace \\sum _{v \\in V} y_v y_j \\ge (k(1-p)-\\Delta _{\\ell }) y_j\\right\\rbrace $ Multiplying both sides by the sum-of-squares polynomial $\\left(\\sum _{S: |S|=r} x_S\\right)^2$ completes the proof.", "As a direct consequence of Lemma REF and Lemma REF , we obtain: Lemma 4.18 Let $H$ be a bipartite graph on left and right vertex sets $U, V$ satisfying $k=|U|\\le |V|=n-k$ with with $2r$ -fold $p$ -right-balancedness $(\\Delta _{2r}, \\delta )$ and maximum degree of a node in $V$ of at most $kp+\\Delta _{\\ell }$ .", "Suppose further that $k(1-p)^{r+1}/p^r - \\Delta _\\ell (1-p)^{r}/p^r - \\Delta _{2r} \\ge \\frac{k}{2}(1-p)^{r+1}/p^r\\,.$ Then, we have $\\mathcal {B}(H) {4r+2}{x,y} \\left\\lbrace Y \\left(\\sum _{S: |S|=r}x_S\\right)^4 \\le n \\left(\\frac{2n\\max \\lbrace p/(1-p),(1-p)/p\\rbrace ^rp^r}{k(1-p)^{r+1}}\\right)^4\\right\\rbrace \\,.$ The 
total bit complexity of the SoS proof is $n^{O(r)}$ .", "We first multiply both sides of the conclusion of Lemma REF with the sum-of-squares polynomial $y_j^2$ for an arbitrary $j \\in V$ : B(H) 4r+2x,y { yj (|S|=r xS)2 i yi (1-p)r/pr yj (|S|=r xS) |V| {p/(1-p),(1-p)/p}r + yj (|S|=r xS)2 2r } .", "Next, we use Lemma REF to replace the left-hand side in the above by a useful lower bound: B(H) 4r+2x,y { (k(1-p)-) yj (S: |S|=r xS)2 (1-p)r/pr yj (|S|=r xS) |V| {p/(1-p),(1-p)/p}r + yj (|S|=r xS)2 2r} .", "We then move the second term on the right-hand side to the left-hand side and use that $k(1-p)^{r+1}/p^r - \\Delta _\\ell (1-p)^{r}/p^r - \\Delta _{2r} \\ge \\frac{k}{2}(1-p)^{r+1}/p^r$ to conclude: $\\mathcal {B}(H) {4r+2}{x,y} \\left\\lbrace y_j \\left(\\sum _{S: |S|=r} x_S\\right)^2\\le y_j \\left(\\sum _{|S|=r} x_S\\right) \\frac{2n\\max \\lbrace p/(1-p),(1-p)/p\\rbrace ^rp^r}{k(1-p)^{r+1}}\\right\\rbrace \\,.$ We finally apply Lemma REF with $a = y_j (\\sum _{S: |S|=r} x_S)$ , $C=\\frac{2n\\max \\lbrace p/(1-p),(1-p)/p\\rbrace ^rp^r}{k(1-p)^{r+1}}$ , and $t=2$ to obtain: $\\mathcal {B}(H){4r+2}{x,y} \\left\\lbrace y_j \\left(\\sum _{S: |S|=r}x_S\\right)^4 \\le \\left(\\frac{2n\\max \\lbrace p/(1-p),(1-p)/p\\rbrace ^rp^r}{k(1-p)^{r+1}}\\right)^4\\right\\rbrace \\,.$ Summing up as $j$ varies over $V$ completes the proof.", "We now finish the proof of Theorem REF : [Proof of Theorem REF ] The first bound follow by Lemma REF .", "In the rest of the proof we focus on the second bound.", "From Lemma REF , we have: $\\left\\lbrace x_i^2 = x_i \\text{ } \\forall i \\in U \\right\\rbrace {r}{x} \\left\\lbrace \\frac{1}{2^r r!}", "\\left(\\sum _i x_i\\right)^r - \\frac{2r^r}{r!}", "- \\varepsilon \\le \\sum _{|S|=r} x_S\\right\\rbrace \\,.$ Setting $\\varepsilon =1$ and using that $\\lbrace 0 \\le a \\le C\\rbrace {4}{a,C} \\lbrace a^4 \\le C^4\\rbrace $ , we have: $\\left\\lbrace x_i^2 = x_i \\text{ } \\forall i \\in U \\right\\rbrace {4r}{x} \\left\\lbrace \\left(\\sum _i x_i\\right)^{4r} \\le (100r)^{10r} + (100r)^{10r} \\left(\\sum _{|S|=r} x_S\\right)^{4} \\right\\rbrace \\,.$ Now we want to combine this with the conclusion of Lemma REF .", "We briefly verify that we satisfy the condition $k(1-p)^{r+1}/p^r - \\Delta _\\ell (1-p)^{r}/p^r - \\Delta _{2r} \\ge \\frac{k}{2}(1-p)^{r+1}/p^r$ .", "We have by Lemma REF that $\\Delta _{\\ell }=O(\\sqrt{kp(1-p)\\log |V|}) = O(\\sqrt{k(1-p)\\log n})$ and $\\Delta _{2r}=O(\\sqrt{r|V|\\log |U|}p^r/(1-p)^r) = O(\\sqrt{rn\\log n}p^r/(1-p)^r)$ .", "Observe that for $k \\ge \\max \\lbrace O((\\log n) p/(1-p)), O(\\sqrt{r n \\log n}p^{2r}/(1-p)^{2r+1})\\rbrace $ large enough the condition is satisfied.", "Then we have: $\\mathcal {B}(H) {O(r)}{x,y} \\left\\lbrace Y \\left(\\sum _i x_i\\right)^{4r} \\le (100r)^{10r} Y + (100r)^{10r} n \\left(\\frac{2n\\max \\lbrace p/(1-p),(1-p)/p\\rbrace ^rp^r}{k(1-p)^{r+1}}\\right)^4 \\right\\rbrace \\,.$ Observing that $\\mathcal {B}(H) {2}{y} \\left\\lbrace Y \\le n\\right\\rbrace $ completes the proof.", "Finally, we complete the proof of Lemma REF .", "[Proof of Lemma REF ] Let us write $u_{p,S}^{\\prime }$ for the vector-valued linear function in indeterminate $y$ defined by $u_{p,S}^{\\prime }(i) = u_{p,S}(i) (1-y_i)$ .", "Then, observe that $\\mathcal {A}{4r}{x,y} \\left\\lbrace x_{S \\cup T} u_{p,S}(i) u_{p,T}(i) y_i = x_S y_i (1-p)^r/p^r\\right\\rbrace $ and A4rx,y {xS up,S'22 = i xS up,S(i)2 (1-yi) xS |V| {p/(1-p),(1-p)/p}r - xS i yi (1-p)r/pr} .", "Let $S,T \\subseteq U$ such that $S \\ne T$ and $|S|,|T|\\le r$ .", 
"We have: A4rx,y {xS xT uS',uT' = xS T i uS(i)uT(i)(1-yi) xS T 2r - xS T i yi (1-p)r/pr} .", "Then, we have: A4rx,y { 0 |S| = r xS uS'22 = |S| = r xS uS'22 + S T xS uS', xT uT' (|S|=r xS) (|V| {p/(1-p),(1-p)/p}r - i yi (1-p)r/pr)    + S T ( xS T 2r - xS T i yi (1-p)r/pr) (|S|=r xS) |V| {p/(1-p),(1-p)/p}r + S,T U, |S|,|T|=r xS T 2r    - S,T U, |S|,|T|=r xS T i yi (1-p)r/pr = (|S|=r xS) |V| {p/(1-p),(1-p)/p}r + (|S|=r xS)2 2r - (|S|=r xS)2 i yi (1-p)r/pr }  .", "Rearranging gives: A4rx,y { (|S|=r xS)2 i yi (1-p)r/pr (|S|=r xS) |V| {p/(1-p),(1-p)/p}r + (|S|=r xS)2 2r } ." ], [ "List-decoding semi-random planted cliques", "In this section, we describe our algorithm for list-decoding semi-random planted cliques using high-constant degree sum-of-squares relaxations.", "We will abstract out our requirement of sum-of-squares refutation of biclique numbers in random bipartite graphs in order to transparently show that the nature (or explicitness) of the certificate is irrelvant to our algorithm.", "In Section REF , we will immediately obtain our algorithmic results as a direct consequence of our certificates from the previous section and an elementary cleanup step that takes a list with an approximately correct candidate and fixes it up to a list containing the planted clique $S^*$ .", "Theorem 5.1 Fix any $t \\in {N}$ .", "There is a $n^{O(t)}$ time algorithm that takes input a graph $G$ on $n$ vertices as input with the following guarantees.", "Suppose $G$ has a clique $S^*$ of size $k$ in it.", "Suppose that the bipartite graph $H$ defined by keeping only the edges from $\\mathsf {cut}(S^*)$ in $G$ admits a $t$ -th order sum-of-squares certificate of unbalanced biclique number as below for some function $\\omega =\\omega (n,k,p)$ .", "$\\mathcal {B}(H) {O(t)}{x,y} \\left\\lbrace X^{t} Y \\le \\omega \\right\\rbrace \\,.$ Then, if $\\omega \\cdot (n/k^2)^t \\le \\delta k$ , the algorithm outputs a list of $O((n/k)^t)$ subsets, each of size at most $k/(1-2\\delta )$ such that with probability at least $0.99$ over the randomness of the algorithm there is an element $S$ of the list that satisfies $|S \\cap S^*| \\ge (1-2\\delta ) k$ .", "We will prove Theorem REF using the following natural algorithm.", "Recall the standard $k$ -clique constraint system $\\mathcal {A}$ defined earlier.", "Our rounding scheme is reminiscent of those used in rounding algorithms for list-decodable learning [38], [7], [33].", "Algorithm 5.2 (List-decoding semi-random planted cliques) algorithm]alg:list-decoding-semirandom-planted-clique Given: A graph $G$ on $n$ vertices with a clique $S^*$ of size $k$ .", "Output: A list $L \\subseteq {R}^d$ of size $O((n/k)^t)$ that contains an $S$ such that $|S \\cap S^*| \\ge (1-\\delta )k$ .", "Operation: Find a degree-$O(t)$ pseudo-distribution $D$ on $w$ satisfying the $k$ -clique axioms on $\\mathcal {A}(G)$ and minimizing $\\Vert \\tilde{{E}}_{D}[w]\\Vert _2$ .", "For every $Q \\in [n]^t$ , an ordered $t$ -tuple on $[n]$ such that $\\tilde{{E}}_D[w_Q]>0$ , let $C_Q = \\frac{\\tilde{{E}}_{D}[w_Q w]}{\\tilde{{E}}_{D}[w_Q]}$ .", "For $N= O((n/k)^t)$ repetitions, choose an ordered $t$ -tuple $Q \\in [n]^t$ with probability proportional to $\\tilde{{E}}_{D}[ w_Q]$ and add $C_Q$ to the list $\\mathcal {L}^{\\prime }$ .", "For each element $C_Q \\in \\mathcal {L}^{\\prime }$ , construct the set $S_Q = \\lbrace i \\mid C_Q(i) \\ge 1-2\\delta \\rbrace $ and add it to $\\mathcal {L}$ .", "Output $\\mathcal {L}$ .", "To analyze this algorithm, we first observe that the maximal coverage 
property (i.e., $D$ minimizing $\\left\\Vert \\tilde{{E}}_D[w]\\right\\Vert _2$ ) implies that $\\tilde{{E}}_{D}[w]$ has a non-trivial weight on the true (but unknown) $k$ -clique $S^*$ .", "This lemma is by now standard with analogous usages in the context of list-decodable learning [38], [7], [33].", "It can be proven by showing that if $\\sum _{i \\in S^*} \\tilde{{E}}_D[w_i] < k^2/n$ then one can take a “mix” $D$ with the distribution that places all its mass on $S^*$ (which does satisfy $\\mathcal {A}$ ) and produce another pseudo-distribution $D^{\\prime }$ with smaller $\\left\\Vert \\tilde{{E}}_D[w_i]\\right\\Vert _2$ .", "Lemma 5.3 (Maximal coverage implies non-trivial weight on $S^*$ , see Lemma 4.3 in [38]) Let $D$ be a pseudo-distribution of degree $\\ge 4$ satisfying $\\mathcal {A}(G)$ that minimizes $\\left\\Vert \\tilde{{E}}_D[w]\\right\\Vert _2$ .", "Then, $\\sum _{i \\in S^*} \\tilde{{E}}_D[w_i] \\ge k^2/n$ .", "As an immediate corollary, we observe the following consequence of our rounding scheme: Lemma 5.4 Let $D$ be the pseudo-distribution constructed in Step 1 of the algorithm.", "Then, in Step 3 of the algorithm, each of the chosen $t$ -tuples $Q$ satisfies $Q \\in (S^*)^t$ with probability at least $(k/n)^t$ .", "The probability that in Step 3 of the algorithm an ordered $t$ -tuple $Q$ is in $(S^*)^t$ is $\\tilde{{E}}_D[\\left(\\sum _{i \\in S^*} w_i\\right)^{t}]/k^t$ , where we used that $\\tilde{{E}}_D[\\left(\\sum _{i=1}^n w_i\\right)^t]=k^t$ .", "The proof now follows by applying Hölder's inequality for pseudo-distributions to conclude that $\\tilde{{E}}_D[\\left(\\sum _{i \\in S^*} w_i\\right)^t] \\ge \\tilde{{E}}_D [ \\sum _{i \\in S^*} w_i]^t \\ge (k^2/n)^t$ (from Lemma REF ).", "Next, we argue that for $Q \\in (S^*)^t$ chosen with probability proportional to $\\tilde{{E}}_{D}[ w_Q]$ , with probability at least $0.5$ , the corresponding $S=C_Q$ has a non-trivial intersection with $S^*$ .", "Lemma 5.5 Assume the hypothesis of Theorem REF .", "Then, in Step 4 of the algorithm, conditioned on $Q \\in (S^*)^t$ , with probability at least $0.5$ , $\\sum _{i \\in S^*} C_Q(i) \\ge (1-\\delta )k$ .", "Consider the bipartite graph $H$ formed by keeping only the edges that lie in $\\mathsf {cut}(S^*)$ in $G$ with left vertex set equal to $S^*$ and right vertex set equal to $[n] \\setminus S^*$ .", "Then, observe that $\\mathcal {A}(G) {}{} \\mathcal {B}(H)$ via the polynomial map $x_u = w_u$ for every $u \\in S^*$ and $y_v = w_v$ for every $v \\in [n] \\setminus S^*$ .", "Since $D$ satisfies $\\mathcal {A}(G)$ , the polynomial transformation above applied to $D$ gives a pseudo-distribution on $(x,y)$ that satisfies $\\mathcal {B}(H)$ .", "From the biclique certificate, we have: $\\tilde{{E}}_D[ X^tY] \\le \\omega $ .", "This yields that $\\sum _{i_1, i_2,\\ldots , i_t} \\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t} Y] \\le \\omega \\,.$ Rescaling and rewriting yields $\\frac{1}{\\tilde{{E}}_D [X^t]} \\sum _{i_1, i_2,\\ldots , i_t: \\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}] >0} \\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}] \\frac{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}Y]}{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}]} \\le \\frac{\\omega }{\\tilde{{E}}_D [X^t]} \\,.$ Using Hölder's inequality for pseudo-distributions and Lemma REF , we know that $\\tilde{{E}}_D [X^t] = \\tilde{{E}}_D [(\\sum _{i \\in S^*} w_i)^t] \\ge \\tilde{{E}}_D [\\sum _{i \\in S^*} w_i]^t \\ge (k^2/n)^t$ .", "Thus the right-hand side of the above is at most $\\omega 
(n/k^2)^t$ .", "Observe that the left-hand side can be interpreted as the expected value of the random variable $\\frac{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}Y]}{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}]}$ where each $i_1,i_2, \\ldots , i_t$ is chosen with probability equal to $\\frac{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}]}{\\sum _{i_1, i_2, \\ldots , i_t}\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}]}$ .", "For an ordered tuple $Q \\in (S^*)^t$ chosen with probability proportional to $\\tilde{{E}}_{D}[w_Q]$ , consider the $(n-k)$ -dimensional vector $\\frac{\\tilde{{E}}_{D}[w_Q y]}{\\tilde{{E}}_{D}[w_Q]}$ .", "Its $\\ell _1$ -norm is equal to $\\frac{\\tilde{{E}}_{D}[w_Q Y]}{\\tilde{{E}}_{D}[w_Q]}$ , which is also equal to the random variable $\\frac{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}Y]}{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}]}$ where each $i_1,i_2, \\ldots , i_t$ is chosen with probability equal to $\\frac{\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}]}{\\sum _{i_1, i_2, \\ldots , i_t}\\tilde{{E}}_{D}[x_{i_1} x_{i_2} \\cdots x_{i_t}]}$ .", "Thus, we have concluded that the expected value of the $\\ell _1$ -norm of $\\frac{\\tilde{{E}}_{D}[w_Q y]}{\\tilde{{E}}_{D}[w_Q]}$ is at most $\\omega (n/k^2)^t$ .", "By Markov's inequality, with probability at least $0.5$ over the choice of $Q$ , thus, the $\\ell _1$ -norm of $\\frac{\\tilde{{E}}_{D}[w_Q y]}{\\tilde{{E}}_{D}[w_Q]}$ is at most $2\\omega (n/k^2)^t$ .", "Thus, with probability at least $0.5$ over the choice of $Q$ , $\\sum _{i \\in S^*} C_Q(i) \\ge k - \\left\\Vert \\frac{\\tilde{{E}}_{D}[w_Q y]}{\\tilde{{E}}_{D}[w_Q]}\\right\\Vert _1 \\ge k- \\omega (n/k^2)^t \\ge (1-\\delta )k$ as $\\omega (n/k^2)^t \\le \\delta k$ , where we used that $\\sum _{i=1}^n C_Q(i) \\ge k$ .", "[Proof of Theorem REF ] From Lemma REF , in Step 4, we choose a $Q \\subseteq S^*$ with probability at least $(k/n)^t$ .", "Conditioned on this event happening, Lemma REF shows that $\\sum _{i \\in S^*}C_Q(i) \\ge (1-\\delta )k$ with probability at least $0.5$ .", "We call such $Q$ good.", "By averaging, for a good $Q$ , we must have that for a $(1-2\\delta )$ -fraction of $i \\in S^*$ , $C_Q(i) \\ge 1-2\\delta $ .", "Further, the total number of coordinates of $C_Q$ larger than $1-2 \\delta $ cannot be more than $k/(1-2\\delta )$ .", "Thus, $S_Q$ is a set of size at most $k/(1-2\\delta )$ such that $|S_Q \\cap S^*| \\ge (1-2\\delta ) k$ .", "The $O((n/k)^t)$ repetitions in Step 3 ensure that with probability at least $0.99$ we choose at least one good $Q$ ." 
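The rounding just analyzed can be summarized in a few lines. The sketch below is the $t=1$ special case written against degree-2 pseudo-moments; `round_list` and its arguments are illustrative names for this note, and the actual algorithm samples ordered $t$-tuples from degree-$O(t)$ moments.

```python
import numpy as np

# Sketch of Steps 2-4 of the rounding for the simplest case t = 1, given degree-2
# pseudo-moments M[i, j] = E~[w_i w_j] and first moments m[i] = E~[w_i].
def round_list(M, m, k, delta, repetitions, seed=0):
    rng = np.random.default_rng(seed)
    n = len(m)
    candidates = []
    for _ in range(repetitions):                  # repetitions ~ O((n/k)^t)
        q = rng.choice(n, p=m / m.sum())          # pick Q = (q) w.p. proportional to E~[w_q]
        if M[q, q] <= 0:
            continue
        C = M[q] / M[q, q]                        # C_Q(i) = E~[w_q w_i] / E~[w_q]
        candidates.append(set(np.flatnonzero(C >= 1 - 2 * delta)))
    return candidates
```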
], [ "Proof of main results", "We combine our biclique certificates and the rounding algorithm and a simple cleanup step to obtain the main results of our work.", "We start with two auxiliary lemmas that help us prune the list of subsets returned by the list-decoding algorithm in Theorem REF .", "Lemma 5.6 (Intersection of cliques with the planted clique) Let $S^*$ be a planted clique of size $k$ in a subgraph of size $n$ under the semi-random model.", "Then, with probability at least $1-\\frac{k}{n^2}$ , any other clique $S$ of size at least $k$ satisfies $|S\\cap S^*| \\le 3 \\frac{\\log n}{\\log 1/p}$ .", "The proof is analogous to that of Proposition REF and is an easy consequence of Chernoff and union bounds.", "Lemma 5.7 (Subsets with small intersection) Let $S_1, ..., S_m \\subseteq [n]$ with $|S_i| = k$ and $|S_i \\cap S_j| \\le \\Delta $ .", "Then, if $k \\ge \\sqrt{2n\\Delta }$ , we have $m \\le \\frac{n}{k} \\left(1+\\frac{2n\\Delta }{k^2}\\right)$ .", "By the inclusion-exclusion principle, we need $mk - \\frac{m^2}{2}\\Delta \\le n\\,.$ By inspecting the above as a quadratic equation in $m$ , we get that for $k \\ge \\sqrt{2n\\Delta }$ the equation is violated when $m > \\frac{k-\\sqrt{k^2-2n\\Delta }}{\\Delta }$ .", "We note that $\\frac{k-\\sqrt{k^2-2n\\Delta }}{\\Delta } = \\frac{k}{\\Delta }\\left(1-\\sqrt{1-\\frac{2n\\Delta }{k^2}}\\right) \\le \\frac{k}{\\Delta } \\frac{n\\Delta }{k^2} \\left(1+\\frac{2n\\Delta }{k^2}\\right) = \\frac{n}{k}\\left(1+\\frac{2n\\Delta }{k^2}\\right)$ and therefore obtain that $m \\le \\frac{n}{k} \\left(1+\\frac{2n\\Delta }{k^2}\\right)$ .", "We state now the main result.", "Theorem 5.8 (Main result) Consider a graph $G$ on $n$ vertices such that $G$ is generated according to $\\mathsf {FK}(n,k,p)$ .", "Then the following two results hold: For any $\\varepsilon >0$ small enough, there exists an algorithm that takes input $G$ , runs in time $n^{O(1/\\varepsilon )}$ , and for $k \\ge n^{1/2 + \\varepsilon }/(1-p)^{1/\\varepsilon }$ , with probability $0.99$ outputs a list of at most $(1+o(1))n/k$ $k$ -cliques such that one of them is the planted clique in $G$ .", "For any $\\varepsilon >0$ and $p,1-p\\ge n^{-(1-\\varepsilon )}$ , there exists an algorithm that takes input $G$ , runs in polynomial time, and for $k \\ge \\max \\lbrace O(n^{2/3}p^{1/3}/(1-p)^{2/3}), \\tilde{O}(n^{1/2})\\rbrace $ , with probability $0.99$ outputs a list of at most $(1+o(1))n/k$ $k$ -cliques such that one of them is the planted clique in $G$ .", "The first guarantee uses the second certificate in Theorem REF and is the main contribution of our work, and the second guarantee uses the first certificate in Theorem REF and produces a result similar to that of [48].", "We start by proving the first guarantee, which we prove separately for the case $p \\le 1/2$ and the case $p \\ge 1/2$ .", "Lemma 5.9 (First guarantee of Theorem REF , $p \\le 1/2$ ) Fix any $\\varepsilon >0$ small enough.", "There is an algorithm that takes input a graph $G$ on $n$ vertices, runs in time $n^{O(1/\\varepsilon )}$ , and provides the following guarantee: If $G$ is generated according to $\\mathsf {FK}(n,k,p)$ with $p \\le 1/2$ , for $k \\ge n^{1/2 + \\varepsilon }$ , with probability $0.99$ the algorithm outputs a list of at most $(1+o(1))n/k$ $k$ -cliques such that one of them is the planted clique in $G$ .", "For $p < 1/2$ , the second certificate in Theorem REF is the same up to constant factors as the one in Theorem REF for $p=1/2$ .", "Furthermore, for $p < 1/2$ , the range of $k$ for 
which the second certificate in Theorem REF holds is a superset of the range of $k$ under which the one in Theorem REF holds.", "Therefore, in this proof, we assume without loss of generality that $p=1/2$ , noting that all the steps in the proof continue to be valid even if in fact $p<1/2$ .", "By Theorem REF , for $k \ge O(\sqrt{t n \log n})$ , we have $\mathcal {B}(H) {4t+2}{x,y} \Biggl \lbrace X^{4t}Y \le (1000t)^{10t}n \left(\frac{n}{k}\right)^4 \Biggr \rbrace \,.$ Next, we want to apply Theorem REF with $\delta = 1/(4k)$ , which ensures that each subset of the returned list has size at most $k$ .", "For that, we need $(1000t)^{10t}n \left(\frac{n}{k}\right)^4 \left(\frac{n}{k^2}\right)^{4t} \le \frac{1}{4}\,,$ which we rewrite as $k \ge \operatorname{poly}(t) \cdot \sqrt{n} \cdot n^{\frac{3}{8t+4}}\,.$ Then we obtain a list of $O((n/k)^{4t})$ subsets, each of size at most $k$ , such that with probability at least $0.99$ the true clique $S^*$ is in the list.", "Next, we remove from the list the subsets with size different than $k$ and the subsets that are not cliques.", "By Lemma REF , any clique of size $k$ intersects $S^*$ in at most $O(\log n)$ vertices.", "Therefore, we can prune the list by iterating the following procedure: find $S, S^{\prime }$ in the list such that $|S \cap S^{\prime }| \ge O(\log n)$ and remove one of them from the list.", "By Lemma REF , the resulting list has size at most $(1+o(1))n/k$ , where we use that our choice of $k$ satisfies $\frac{2nO(\log n)}{k^2} = o(1)$ .", "We choose the smallest $t$ such that $\varepsilon \ge \frac{3.1}{8t+4}$ , which is $t=\lceil \frac{31}{80\varepsilon }-\frac{1}{2}\rceil =O(1/\varepsilon )$ .", "Then $k \ge n^{1/2+\varepsilon }$ satisfies the lower bounds on $k$ that we required in the proof.", "Finally, the time complexity of the algorithm is $n^{O(t)} = n^{O(1/\varepsilon )}$ .", "Lemma 5.10 (First guarantee of Theorem REF , $p \ge 1/2$ ) Fix any $\varepsilon >0$ small enough.", "There is an algorithm that takes input a graph $G$ on $n$ vertices, runs in time $n^{O(1/\varepsilon )}$ , and provides the following guarantee: If $G$ is generated according to $\mathsf {FK}(n,k,p)$ with $p \ge 1/2$ , for $k \ge n^{1/2 + \varepsilon }/(1-p)^{1/\varepsilon }$ , with probability $0.99$ the algorithm outputs a list of at most $(1+o(1))n/k$ $k$ -cliques such that one of them is the planted clique in $G$ .", "By the second certificate in Theorem REF , for $k \ge O(\sqrt{t n \log n} p^{2t} / (1-p)^{2t+1})$ , we have $\mathcal {B}(H) {4t+2}{x,y} \left\lbrace X^{4t}Y \le (1000t)^{10t} n \left(\frac{np^{2t}}{k(1-p)^{2t+1}}\right)^4 \right\rbrace \,.$ Next, we want to apply Theorem REF with $\delta = 1/(4k)$ , which ensures that each subset of the returned list has size at most $k$ .", "For that, we need $(1000t)^{10t} n \left(\frac{np^{2t}}{k(1-p)^{2t+1}}\right)^4 \left(\frac{n}{k^2}\right)^{4t} \le \frac{1}{4}\,,$ which we rewrite as $k \ge \operatorname{poly}(t) \cdot \sqrt{n} \cdot n^{\frac{3}{8t+4}} \cdot p^{1-\frac{4}{8t+4}} / (1-p)\,.$ Then we obtain a list of $O((n/k)^{4t})$ subsets, each of size at most $k$ , such that with probability at least $0.99$ the true clique $S^*$ is in the list.", "Next, we remove from the list the subsets with size different than $k$ and the subsets that are not cliques.", "By Lemma REF , any clique of size $k$ intersects $S^*$ in at most $O(\log n/(1-p))$ vertices, where we used that $\log 1/p = \Omega (1-p)$ for $p \ge 1/2$ .", "Therefore, we can prune the list by iterating the
following procedure: find $S, S^{\prime }$ in the list such that $|S \cap S^{\prime }| \ge O(\log n/(1-p))$ and remove one of them from the list.", "By Lemma REF , the resulting list has size at most $(1+o(1))n/k$ , where we use that our choice of $k$ satisfies $\frac{2nO(\log n/(1-p))}{k^2} = o(1)$ .", "We choose the smallest $t$ such that $\varepsilon \ge \frac{3.1}{8t+4}$ , which is $t=\lceil \frac{31}{80\varepsilon }-\frac{1}{2}\rceil =O(1/\varepsilon )$ .", "In this case, $(1-p)^{2t+1} \ge (1-p)^{1/\varepsilon }$ , because $2t+1 \le \frac{31}{40\varepsilon }+2 \le \frac{1}{\varepsilon }$ for $\varepsilon \le 0.1$ .", "Then $k \ge n^{1/2+\varepsilon }/(1-p)^{1/\varepsilon }$ satisfies the lower bounds on $k$ that we require.", "Finally, the time complexity of the algorithm is $n^{O(t)} = n^{O(1/\varepsilon )}$ .", "[Proof of first guarantee in Theorem REF ] By Lemma REF we obtain the desired result for $p \le 1/2$ when $k \ge n^{1/2+\varepsilon }$ , and by Lemma REF we obtain the desired result for $p \ge 1/2$ when $k \ge n^{1/2+\varepsilon }/(1-p)^{1/\varepsilon }$ .", "Then both results hold when $k \ge n^{1/2+\varepsilon }/(1-p)^{1/\varepsilon }$ .", "Finally, we prove the second guarantee in Theorem REF .", "[Proof of second guarantee in Theorem REF ] By the first certificate in Theorem REF , we have $\mathcal {B}(H) {4}{x,y} \left\lbrace XY \le O\left(\frac{np}{1-p}\right) \right\rbrace \,.$ Next, we want to apply Theorem REF with $\delta = (1-p)/24$ .", "For that, we need $O\left(\frac{np}{1-p}\right)\frac{n}{k^2} \le (1-p)k/24\,,$ which we rewrite as $k \ge O\left(n^{2/3}p^{1/3}/(1-p)^{2/3}\right)\,.$ Then we obtain a list of $O(n/k)$ subsets, each of size at most $k/(1-(1-p)/12) \le (1+(1-p)/6)k$ , such that with probability at least $0.99$ one of them intersects the true clique $S^*$ in at least $(1-(1-p)/12)k \ge (1-(1-p)/6)k$ vertices.", "To obtain a list that contains $S^*$ exactly, we make use of the following claim: Claim 5.11 With probability at least $0.99$ , for all subsets $S \subseteq [n]$ with $|S \cap S^*| \ge (1-\gamma )k$ and $|S| \le (1+\gamma ) k$ , every vertex $v \in S^*$ is connected to at least $(1-\gamma )k$ vertices in $S$ and every vertex $v \notin S^*$ is connected to at most $kp+O(\sqrt{kp(1-p)\log n}) +2\gamma k$ vertices in $S$ .", "The number of vertices in $S^*$ to which a vertex $v\notin S^*$ is connected is a binomial random variable $\operatorname{Bin}(k,p)$ , which by standard bounds is larger than $kp+t$ with probability at most $\min \lbrace e^{-t^2/(2k(1-p))}, e^{-t^2/(2kp+2t/3)}\rbrace $ .", "By a union bound over all such $v$ , the probability that this quantity is larger than $kp+t$ for any of them is at most $n \min \lbrace e^{-t^2/(2k(1-p))}, e^{-t^2/(2kp+2t/3)}\rbrace $ .", "Choosing $t=O\left(\sqrt{kp(1-p)\log n}\right)$ makes this probability a small constant.", "Then, with probability at least $0.99$ , no vertex $v \notin S^*$ is connected to more than $kp+O\left(\sqrt{kp(1-p)\log n}\right)$ vertices in $S^*$ .", "Then, every vertex $v \notin S^*$ is connected to at most $kp+O(\sqrt{kp(1-p)\log n})$ vertices in $S \cap S^*$ , and to at most $|S \setminus S^*| \le (1+\gamma )k-(1-\gamma )k=2\gamma k$ other vertices in $S$ .", "Furthermore, every vertex $v \in S^*$ is connected to at least $|S \cap S^*| \ge (1-\gamma )k$ vertices in $S$ .", "Consider the subset $S$ in the list for which $|S\cap S^*| \ge (1-(1-p)/6)k$ .", "By the claim applied with $\gamma =(1-p)/6$ , we have that every vertex $v \in S^*$ is connected to at least $(1-(1-p)/6)k$ vertices in $S$ , and every vertex $v \notin S^*$ is connected to at most $(p+(1-p)/3)k+O(\sqrt{kp(1-p)\log n})$ vertices in $S$ .", "We have that $(1-(1-p)/6)k > (p+(1-p)/3)k+O\left(\sqrt{kp(1-p)\log n}\right)$ when $k > O\left((\log n)p/(1-p)\right)\,.$ Therefore, we do the following: for each subset $S$ in the list, we remove from $S$ all vertices that are connected to less than $(1-(1-p)/6)k$ of the vertices in $S$ , and we add to $S$ all vertices that are connected to at least $(1-(1-p)/6)k$ of the vertices in $S$ .", "The observations above ensure that the subset $S$ for which $|S\cap S^*| \ge (1-(1-p)/6)k$ is transformed by this procedure into $S^*$ .", "After that, we remove from the list the subsets with size different than $k$ and the subsets that are not cliques.", "Then, by Lemma REF , any clique of size $k$ intersects $S^*$ in at most $O(\log n/\log 1/p)$ vertices.", "Therefore, we can prune the list by iterating the following procedure: find $S, S^{\prime }$ in the list such that $|S \cap S^{\prime }| \ge O(\log n/\log 1/p)$ and remove one of them from the list.", "By Lemma REF , the resulting list has size at most $(1+o(1))n/k$ , where we use that our choice of $k$ satisfies $\frac{2nO(\log n/\log 1/p)}{k^2} = o(1)$ .", "We note that $k \ge \max \lbrace O(n^{2/3}p^{1/3}/(1-p)^{2/3}), \tilde{O}(n^{1/2})\rbrace $ satisfies the lower bounds on $k$ that we require.", "The time complexity of the algorithm is polynomial in $n$ ." ], [ "Evidence of hardness for certifying bicliques", "In this section, we collect some evidence that suggests that improving on our guarantees for the unbalanced bipartite clique certification problem is hard.", "Our hardness results are in two settings: in the first we will prove a lower bound on the basic SDP relaxation for the problem that gives a concrete reason for the $n^{2/3}$ barrier (for $p=1/2$ ) in prior works, and in the second we will prove lower bounds in the low-degree polynomial model for hypothesis testing problems." ], [ "Lower bounds against basic SDP", "We consider the following SDP relaxation for finding large bicliques in a given bipartite graph $H(U,V,E)$ where $|U|=k$ , $|V|=n$ , and all edges go between vertices in $U$ and vertices in $V$ .", "It is equivalent to the degree 2 sum-of-squares relaxation of the biclique constraint system (REF ).", "$ \left\lbrace \begin{aligned}&\forall i,j& 0 \le X(i,j)& \le 1 \\&&\mathrm {tr}(X)&= k\\&&\left(\sum _{u \in U} X(u,u)\right)&= \ell \\&&\left(\sum _{v \in V} X(v,v)\right)&= k-\ell \\&&\sum _{u \in U,v \in V} X(u,v)&= \ell (k-\ell )\\&\forall u \in U,v \in V \text{ s.t. 
}", "\\lbrace u,v\\rbrace \\notin H& X(u,v)& = 0\\\\&& X& \\succeq 0\\end{aligned}\\right\\rbrace $" ], [ "Commentary on the SDP relaxation", "We think of the SDP solution $X$ as a matrix indexed by all the (left and right) vertices of the bipartite graph $H$ .", "We associate the first $k=|U|$ rows and columns of $X$ with the left vertices and the last $n=|V|$ rows and columns of $X$ with the right vertices of $H$ .", "If $x \\in \\lbrace 0,1\\rbrace ^U$ and $y \\in \\lbrace 0,1\\rbrace ^V$ indicate left and right subsets of vertices in a purported biclique of total size $k$ , then $X$ should be “thought of” as a relaxation of the constraints satisfied by the rank 1 matrix $(x,y)(x,y)^{\\top }$ .", "In particular, the first two constraints posit that $X$ is non-negative in all its entries and that its trace (that equals $\\left\\Vert x\\right\\Vert _2^2 + \\left\\Vert y\\right\\Vert _2^2$ and thus the total size of the biclique) is $k$ .", "The next three constraints posit that the left hand side vertices contribute $\\ell $ (corresponding to the left hand side contributing $\\ell $ vertices to the biclique), that the right hand side vertices contribute $k-\\ell $ to the trace and that $(\\sum _i x_i) (\\sum _i y_i) = \\ell (k-\\ell )$ .", "The penultimate constraint posits that if $u\\in U$ and $v \\in V$ do not have an edge between them, then,we cannot simultanesouly pick $u,v$ to be in the biclique (capturing the “biclique” constraints).", "For fixed $k,n$ , the infeasibility of the SDP for some $\\ell = \\ell (k,n)$ is equivalent to there being a degree 2 sum-of-squares certificate of the absence of $\\ell \\times k-\\ell $ bicliques in $H$ .", "We will show that the above SDP is in fact feasible whp over the draw of $H$ so long as $\\ell \\ll n/k$ .", "This corresponds to the basic SDP barrier at $k=n^{2/3}$ (a threshold obtained by balancing the above obtained trade-off – see Remark REF ) encountered in prior works on the semi-random planted clique problem.", "Lemma 6.1 (SDP lower bound for biclique certification) Let $H \\sim B(k,n,1/2)$ be a bipartite Erdős-Rényi random graph with edge probability $1/2$ .", "Then, with probability at least $0.99$ over the draw of $H$ , for any $100 \\sqrt{n} \\le k\\le n/2$ and $\\ell \\le c n/k$ for some constant $c>0$ small enough, the SDP (REF ) is feasible.", "We will prove the lemma by exhibiting an explicit solution to the SDP (REF ).", "The unbalanced setting requires a slightly more involved construction compared to the related SDP lower bounds for the clique number of $G(n,1/2)$ , where a natural shifted and scaled adjacency matrix yields a feasible solution." 
], [ "The Construction", "In order to describe our construction, it is helpful to think of the solution $X$ as being divided into $X_{top}$ , the principal $k \\times k$ block corresponding to first $k$ rows and columns, $X_{bot}$ , the principal $n \\times n$ block corresponding to the last $n$ rows and columns, and $X_{cross}$ , the $k \\times n$ off-diagonal block (and its transposed copy).", "We will set every diagonal entry of $X_{top}$ to be $\\ell /k$ and every off-diagonal entry of $X_{top}$ to be $(\\ell /k)^2$ .", "Informally, $X_{top}$ is the 2nd moment matrix of the probability distribution that chooses every vertex on the left with probability $\\ell /k$ independently.", "We describe $X_{cross}$ next.", "For every $u \\in U, v\\in V$ , we set $X(u,v)=X(v,u)=0$ if $u$ is not connected to $v$ in $H$ , and otherwise we set $X(u,v) = X(v,u)=c_1 (\\ell /n)$ for some constant $c_1>0$ to be chosen later.", "Notice that this is equivalent to setting $X_{cross} = c_1 \\frac{\\ell }{n} A$ where $A$ is the $k$ by $n$ bipartite adjacency matrix of $H$ .", "Finally, we describe $X_{bot}$ .", "This is where we need to be a little more careful.", "Let $a_1, a_2,\\ldots , a_k$ be $n$ -dimensional vectors in $\\lbrace -1,1\\rbrace ^n$ such that $a_u(v)=1$ iff $\\lbrace u,v\\rbrace $ is an edge in $H$ .", "That is, the $a_i$ s are the $\\pm 1$ neighborhood indicators of the $k$ left vertices in $H$ .", "Then, we set $X_{bot} = \\frac{k-\\ell }{n (k+1)} (\\sum _{i=1}^k a_i a_i^{\\top } + \\mathbf {1} \\mathbf {1}^{\\top })$ .", "Here $\\mathbf {1}$ is the vector of all 1 coordinates.", "Note that the diagonal entries of $X_{bot}$ exactly equal $(k-\\ell )/n$ and thus $\\mathrm {tr}(X_{bot}) = k-\\ell $ as required.", "Also note that $X_{bot}$ is low rank, as it has rank at most $k+1$ .", "We discuss now the choice of $c_1$ .", "We want to enforce $\\sum _{u \\in U,v \\in V} X(u, v) = \\ell (k-\\ell )$ , so we choose $c_1 = \\frac{\\ell (k-\\ell )}{(\\ell /n)\\sum _{u\\in U, v \\in V} A(u, v)} = \\frac{(k-\\ell )n}{\\sum _{u \\in U, v \\in V} A(u, v)}$ .", "Note that with probability at least $0.999$ we have that $\\Omega (kn) \\le \\sum _{u \\in U, v \\in V} A(u, v) \\le kn$ , so with probability at least $0.999$ we have that $c_1$ is bounded below and above by absolute constants." 
], [ "Analysis", "With probability at least $0.999$ over the draw of $H$ , $X$ immediately satisfies all the constraints except for positive semidefiniteness.", "We focus next on verifying the PSD-ness of $X$ .", "Consider any “test” vector $z \\in {R}^{k+n}$ , which we will think of as $(z_L, z_R)$ where $z_L$ is the projection of $z$ to the first $k$ coordinates (i.e., the left vertices) and $z_R$ the projection to the last $n$ coordinates (i.e., the right vertices).", "Now, $z^{\\top } X z = z_L^{\\top } X_{top} z_L + z_R^{\\top } X_{bot} z_R + 2 z_L^{\\top } X_{cross} z_R \\,.$ Let $F$ be the subspace of at most $k+1$ dimensions spanned by the $k$ rows of $A$ and the all 1s vector $\\mathbf {1}$ .", "Now, notice that $X_{cross} z_R = X_{cross} z_R^F$ where $z_R^F$ is the projection of $z_R$ to $F$ .", "Similarly, by design, $X_{bot}$ has range space equal to $F$ , so $X_{bot} z_R = X_{bot} z_R^F$ .", "Thus, WLOG, we can assume that $z_R = z_R^F$ in the following.", "Let's write $z_L = z_L^{\\parallel } + z_L^{\\prime }$ and $z_R = z_R^{\\parallel } + z_R^{\\prime }$ where $z_L^{\\parallel } = \\langle z_L, \\frac{\\mathbf {1}}{\\sqrt{n}}\\rangle \\frac{\\mathbf {1}}{\\sqrt{n}}$ is the component of $z_L$ along the all 1s direction (and similarly for $z_R^{\\parallel }$ ).", "Let $X_{cross}^{\\prime } = c_1 \\frac{\\ell }{n} (A-\\frac{1}{2} \\mathbf {1} \\mathbf {1}^{\\top })$ be the “centered” version of $X_{cross}$ .", "Also define the centered versions $X_{top}^{\\prime } = X_{top} - \\frac{\\ell ^2}{k^2} \\mathbf {1} \\mathbf {1}^{\\top }$ and $X_{bot}^{\\prime } = \\frac{k-\\ell }{n(k+1)} \\sum _{i =1}^k a_i a_i^{\\top }$ .", "Our argument is to simply “charge” the third term (which can be potentially negative) to the first two terms (that are always non-negative).", "We will use the following two standard random matrix facts (see Fact REF ) in our analysis: for $A^{\\prime } = A- \\frac{1}{2} \\mathbf {1} \\mathbf {1}^{\\top }$ we have $\\left\\Vert A^{\\prime }\\right\\Vert _2 \\le O(\\sqrt{n})$ , and the $k$ -th smallest singular value of both $A$ and $A^{\\prime }$ is at least $\\Omega (\\sqrt{n} - \\sqrt{k}) = \\Omega (\\sqrt{n})$ as $k \\le n/2$ ." 
], [ "The potentially negative terms.", "Let's work with the potentially negative terms coming from the parallel components of $z_L$ and $z_R$ .", "Observe that $(z_L^{\\parallel })^{\\top } X_{cross} z_R^{\\parallel } = \\left\\Vert z_L^{\\parallel }\\right\\Vert _2 \\left\\Vert z_R^{\\parallel }\\right\\Vert _2 \\frac{c_1}{2} \\ell + (z_L^{\\parallel })^{\\top } X_{cross}^{\\prime } z_R^{\\parallel } \\ge \\left\\Vert z_L^{\\parallel }\\right\\Vert _2 \\left\\Vert z_R^{\\parallel }\\right\\Vert _2 (\\frac{c_1 \\ell }{2} - O(\\frac{\\ell }{n}\\sqrt{n})) > 0$ .", "Since $(z_L^{\\parallel })^{\\top } X_{top} z_L^{\\parallel } +(z_R^{\\parallel })^{\\top } X_{bot} z_R^{\\parallel } >0$ we can conclude that $(z^{\\parallel })^{\\top } X z^{\\parallel }\\ge 0$ .", "Let's analyze the potentially negative terms coming from the perpendicular components of $z_L$ and $z_R$ .", "We have $|(z_L^{\\prime })^{\\top } X_{cross} z_R^{\\prime }| = |(z_L^{\\prime })^{\\top } X_{cross}^{\\prime } z_R^{\\prime }| \\le c_2 \\frac{\\ell }{n} \\sqrt{n} \\Vert z_L^{\\prime } \\Vert _2 \\Vert z_R^{\\prime } \\Vert _2 = c_2 \\frac{\\ell }{\\sqrt{n}} \\Vert z_L^{\\prime } \\Vert _2 \\Vert z_R^{\\prime } \\Vert _2$ .", "Finally, let's analyze the potentially negative terms coming from crossing the parallel and the perpendicular components of $z_L$ and $z_R$ .", "We have: $|(z_L^{\\parallel })^\\top X_{cross} z_R^{\\prime }| = |(z_L^{\\parallel })^\\top X_{cross}^{\\prime } z_R^{\\prime }| \\le \\Vert z_L^{\\parallel } \\Vert _2 \\Vert z_R^{\\prime } \\Vert _2 O(\\ell /\\sqrt{n})$ .", "Similarly, $|(z_L^{\\prime })^\\top X_{cross} z_R^{\\parallel }| \\le \\Vert z_L^{\\prime } \\Vert _2 \\Vert z_R^{\\parallel } \\Vert _2 O(\\ell /\\sqrt{n})$ ." ], [ "The square terms.", "We now compute a lower bound on the non-negative terms in (REF ).", "We have $z_L^\\top X_{top} z_L = z_L^\\top X_{top}^{\\prime } z_L + \\frac{\\ell ^2}{k^2} (z_L^{\\parallel })^\\top \\mathbf {1} \\mathbf {1}^\\top z^{\\parallel } = \\Vert z_L \\Vert _2^2 (\\frac{\\ell }{k} - \\frac{\\ell ^2}{k^2}) + \\Vert z_L^{\\parallel } \\Vert _2^2 \\frac{\\ell ^2 n}{k^2} \\ge c_3 \\Vert z_L \\Vert _2^2 \\frac{\\ell }{k}$ .", "Next, we lower bound $z_R^{\\top } X_{bot} z_R$ .", "Now, $z_R\\in F$ .", "Recall that the $k$ -th smallest singular value of $A$ and $A^{\\prime }$ is at least $\\Omega (\\sqrt{n})$ if $k \\le n/2$ with probability at least $0.999$ over the draw of the graph $H$ , and 2) the matrix $\\sum _i a_i a_i^{\\top }$ has the same eigenvalues as $4A^{\\prime }{A^{\\prime }}^{\\top }$ where, recall that $A^{\\prime } = A-\\frac{1}{2} \\mathbf {1} \\mathbf {1}^{\\top }$ .", "Together this yields that all eigenvalues of $\\sum _i a_i a_i^{\\top } + \\mathbf {1} \\mathbf {1}^{\\top }$ are at least $c_4 n$ for some constant $c_4>0$ when restricted to the subspace $F$ .", "Thus, $z_R^{\\top } X_{bot} z_R \\ge c_4 n \\left\\Vert z_R\\right\\Vert _2^2 \\frac{k-\\ell }{n (k+1)} = c_4 \\left\\Vert z_R\\right\\Vert _2^2 \\frac{k-\\ell }{k+1} \\ge c_5 \\left\\Vert z_R\\right\\Vert _2^2$ recalling that $k-\\ell > k/2$ .", "Let's now complete the charging argument.", "Let's first observe that, by the AM-GM inequality, the square terms contribute at least $c_6 \\sqrt{\\ell /k}\\Vert z_L \\Vert _2 \\Vert z_R \\Vert _2$ .", "The potentially negative term from the perpendicular components is at most $c_2 \\ell /\\sqrt{n} \\Vert z_L^{\\prime } \\Vert _2 \\Vert z_R^{\\prime } \\Vert _2$ in magnitude, and the potentially negative term from crossing the components is 
at most $\\Vert z_L^{\\parallel } \\Vert _2 \\Vert z_R^{\\prime } \\Vert _2 O(\\ell /\\sqrt{n}) + \\Vert z_L^{\\prime } \\Vert _2 \\Vert z_R^{\\parallel } \\Vert _2 O(\\ell /\\sqrt{n})$ .", "Thus, the square terms dominate as long as $\\ell \\le O(n/k)$ .", "This completes the proof." ], [ "Low-degree lower bound for $p=1/2$", "Formally, we will prove that there are distributions over bipartite graphs that admit $\\ell $ by $k-\\ell $ cliques for appropriate parameters $\\ell $ that are indistinguishable from $B(k,n,p)$ — the distribution on random bipartite graphs with left vertex set of size $k$ , right vertex set of size $n$ and each bipartite edge included to be in the graph with probability $p$ independently.", "The choice of the planted model requires a bit of care, as we soon discuss.", "We will deal with the case of $p=1/2$ and general $p$ separately for clarity of exposition.", "$D_{\\mathrm {null}} = B(k,n, 1/2)$ : the distribution on bipartite graphs $H=H(U,V)$ where $|U|=k$ , $|V|=n$ and each bipartite edge $(u,v)$ with $u \\in U$ and $v \\in V$ is included in $H$ with probability $1/2$ .", "$D_{\\mathrm {planted}} = B(k,n, \\ell , 1/2)$ : the distribution on bipartite graphs $H=H(U,V)$ where $|U|=k$ , $|V|=n$ sampled as follows.", "Choose $S$ by including each vertex from $U$ in $S$ with probability $\\ell /k$ .", "Choose $P$ by including every vertex from $V$ in $R$ with probability $(k-\\ell )/n$ .", "Finally, include each edge $(u,v)$ with $u \\in U$ and $v \\in V$ with probability $\\Pr _{D_{\\mathrm {planted}}}[(u, v) \\text{ is included}] = {\\left\\lbrace \\begin{array}{ll}1 & \\text{ if } u \\in S, v \\in P\\,,\\\\\\frac{n/2-(k-\\ell )}{n-(k-\\ell )} & \\text{ if } u \\in S, v \\notin P\\,,\\\\\\frac{1}{2} & \\text{ otherwise}\\,.\\end{array}\\right.", "}$ Remark 6.2 $D_{\\mathrm {planted}}$ is chosen so as to have a $\\ell $ by $k-\\ell $ biclique in it while having the same distribution of degrees of left vertices as in $D_{\\mathrm {null}}$ .", "This is necessary since otherwise the average degree of the left vertices gives a distinguisher between the models.", "Theorem 6.3 Fix $\\varepsilon > 0$ independent of $n$ with $\\varepsilon \\le 0.001$ .", "For $k=n^{1/2+\\varepsilon }$ and $\\ell \\le n^{1/4-0.001}$ , the norm of the degree-$\\lfloor 0.001/\\varepsilon \\rfloor $ truncated likelihood ratio between $B(k,n,\\ell , 1/2)$ and $B(k,n, 1/2)$ is $1+o(1)$ .", "On the other hand, for $k=n^{1/2+\\varepsilon }$ and all $\\ell $ , the norm of the degree-$O(1/\\varepsilon )$ likelihood ratio between $B(k,n,\\ell , 1/2)$ and $B(k,n, 1/2)$ is unbounded as $n \\rightarrow \\infty $ .", "For a bipartite graph $H=H(U,V)$ recall that $\\chi _{u,v}$ is 1 if the edge $(u, v)$ is included and $-1$ otherwise.", "For $E \\subseteq U \\times V$ define $\\chi _E = \\prod _{(u,v) \\in E} \\chi _{u,v}$ .", "Lemma 6.4 For $H$ sampled from $D_{\\mathrm {planted}}$ and $E \\subseteq U \\times V$ , let $L$ be the number of left vertices in $E$ , $R$ the number of right vertices in $E$ , and $d_1, ..., d_R$ the number of edges in $E$ incident to each of the right vertices.", "Then ${E}_{D_{\\mathrm {planted}}}[\\chi _E] = {\\left\\lbrace \\begin{array}{ll}\\left(\\frac{\\ell }{k}\\right)^L \\left(\\frac{k-\\ell }{n}\\right)^R \\left(1 + O\\left(\\frac{k-\\ell }{n}\\right)\\right)^R & \\text{ if } d_1, \\ldots , d_R > 1\\,,\\\\0 & \\text{ otherwise}\\,.\\end{array}\\right.", "}$ Conditioned on the planted biclique $(S, P)$ , the edges are independent.", "For an edge $(u, v)$ , we 
calculate ${E}_{D_{\mathrm {planted}}}[\chi _{u,v} \mid \text{planted biclique is }(S,P)] = {\left\lbrace \begin{array}{ll}1 & \text{ if } u \in S, v \in P\,,\\\frac{-(k-\ell )}{n-(k-\ell )} & \text{ if } u \in S, v \notin P \,,\\0 & \text{ otherwise}\,,\end{array}\right. }$ where for the case $u \in S, v \notin P$ , we calculated the expectation as $\frac{n/2-(k-\ell )}{n-(k-\ell )} \cdot 1 + \left(1 - \frac{n/2-(k-\ell )}{n-(k-\ell )}\right) \cdot (-1) = \frac{-(k-\ell )}{n-(k-\ell )}\,.$ We observe that if any of the left vertices in $E$ is not in the planted biclique, the conditional expectation of $\chi _E$ is zero.", "Therefore, we condition on the event that all the left vertices in $E$ are in the planted biclique, which happens with probability $\left(\frac{\ell }{k}\right)^L$ .", "Conditioned on this event, for any particular right vertex, all the edges in $E$ incident to it are independent from the other edges in $E$ .", "Let $E_i$ be the subset of edges in $E$ that are incident to the $i$ -th right vertex.", "Then ${E}_{D_{\mathrm {planted}}}[\chi _{E_i} \mid \text{planted biclique contains all left vertices in } E] = \frac{k-\ell }{n} \cdot 1 + \left(1 - \frac{k-\ell }{n}\right) \left(\frac{-(k-\ell )}{n-(k-\ell )}\right)^{d_i} = {\left\lbrace \begin{array}{ll}\frac{k-\ell }{n} \left(1 + O\left(\frac{k-\ell }{n}\right)\right) & \text{ if } d_i > 1\,,\\0 & \text{ if } d_i = 1\,,\end{array}\right. }$", "so ${E}_{D_{\mathrm {planted}}}[\chi _E \mid \text{planted biclique contains all left vertices in } E] = {\left\lbrace \begin{array}{ll}\left(\frac{k-\ell }{n}\right)^R \left(1 + O\left(\frac{k-\ell }{n}\right)\right)^R & \text{ if } d_1, \ldots , d_R > 1\,,\\0 & \text{ otherwise}\,.\end{array}\right. }$", "Therefore, overall, ${E}_{D_{\mathrm {planted}}}[\chi _E] = {\left\lbrace \begin{array}{ll}\left(\frac{\ell }{k}\right)^L \left(\frac{k-\ell }{n}\right)^R \left(1 + O\left(\frac{k-\ell }{n}\right)\right)^R & \text{ if } d_1, \ldots , d_R > 1\,,\\0 & \text{ otherwise }\,.\end{array}\right. }$ [Proof of Theorem REF ] Let $LR^{\le D}$ be the degree-$D$ truncated likelihood ratio between $D_{\mathrm {planted}}$ and $D_{\mathrm {null}}$ .", "Then, by standard results, $\left\Vert LR^{\le D} - 1\right\Vert ^2 = \sum _{0 < |E| \le D} {E}_{D_{\mathrm {planted}}}[\chi _E]^2$ , where the norm is the one induced by $D_{\mathrm {null}}$ .", "Therefore, if the right-hand side is $o(1)$ , then $\left\Vert LR^{\le D}\right\Vert $ is $1+o(1)$ , and if the right-hand side is unbounded, then $\left\Vert LR^{\le D}\right\Vert $ is also unbounded.", "Consider all $E$ with $L$ left vertices and $R$ right vertices.", "The contribution from these $E$ is, by Lemma REF , $\sum _{{E\\L \text{ left vertices}\\R \text{ right vertices}}} {E}_{D_{\mathrm {planted}}}[\chi _E]^2 = \left(\frac{\ell }{k}\right)^{2L} \left(\frac{k-\ell }{n}\right)^{2R} \left(1 + O\left(\frac{k-\ell }{n}\right)\right)^{2R} \cdot \binom{k}{L} \binom{n}{R} \operatorname{Bip}(L, R)\,,$ where $\operatorname{Bip}(L,R)$ is the number of bipartite graphs with $L$ left vertices and $R$ right vertices such that all left degrees are at least 1 and all right degrees are greater than 1.", "Consider a choice of $L$ and $R$ such that $\operatorname{Bip}(L,R) \ne 0$ .", "Because we are interested in the behavior of the sum as $n$ goes to infinity, we ignore as negligible all factors that depend only on $L$ and $R$ .", "We also approximate $k-\ell \approx k$ and $\left(1+O\left(\frac{k-\ell }{n}\right)\right)^{2R} \approx 1$ .", "Then we have $\sum _{{E\\L \text{ left vertices}\\R \text{ right vertices}}} {E}_{D_{\mathrm {planted}}}[\chi _E]^2 \lesssim \left(\frac{\ell }{k}\right)^{2L} \left(\frac{k}{n}\right)^{2R} \binom{k}{L} \binom{n}{R} \le \left(\frac{\ell }{k}\right)^{2L} \left(\frac{k}{n}\right)^{2R} k^L n^R = n^{-R}k^{2R-L}\ell ^{2L}\,.$ For $k = n^{1/2+\varepsilon }$ , the above is equal to $n^{(2R-L)\varepsilon - L/2}\ell ^{2L}$ .", "For $\ell =n^{1/4-0.001}$ , this is equal to $n^{(2R-L)\varepsilon - 
0.002L}$ .", "For $|E| \\le 0.001/\\varepsilon $ , we have $1 \\le L,R \\le 0.001/\\varepsilon $ and hence $2R-L \\le 0.002/\\varepsilon -1$ .", "Then $n^{(2R-L)\\varepsilon - 0.002L} \\le n^{-\\varepsilon }$ , which goes to zero as $n$ goes to infinity.", "Therefore, the sum of all the terms with $|E| \\le 0.001/\\varepsilon $ is $o(1)$ .", "For $|E| \\ge O(1/\\varepsilon )$ , consider the term corresponding to $L=2$ and some $R=O(1/\\varepsilon )$ .", "Note that the term satisfies $\\operatorname{Bip}(L, R) \\ne 0$ (e.g., the complete bipartite graph on 2 left vertices and $O(1/\\varepsilon )$ right vertices is a valid choice).", "For this term, $n^{(2R-L)\\varepsilon - L/2} = n^{2R\\varepsilon - 2\\varepsilon - 1} \\ge n$ for $R = O(1/\\varepsilon )$ large enough.", "Therefore, this term goes to infinity as $n$ goes to infinity, and then the same is true for the sum of all the terms." ], [ "Low-degree lower bound for general densities", "In this section, we will prove that the following two distributions on bipartite random graphs are indistinguishable by low-degree polynomials.", "$D_{\\mathrm {null}} = B(k, n, p)$ : the distribution on bipartite graphs $H=H(U,V)$ where $|U|=k$ , $|V|=n$ and each bipartite edge $(u,v)$ with $u \\in U$ and $v \\in V$ is included in $H$ with probability $p$ .", "$D_{\\mathrm {planted}} = B(k,n, \\ell , p)$ : the distribution on bipartite graphs $H=H(U,V)$ where $|U|=k$ , $|V|=n$ sampled as follows.", "Choose $S$ by including each vertex from $U$ in $S$ with probability $\\ell /k$ .", "Choose $P$ by including every vertex from $V$ in $R$ with probability $(k-\\ell )/n$ .", "Finally, include each edge $(u,v)$ with $u \\in U$ and $v \\in V$ with probability $\\Pr _{D_{\\mathrm {planted}}}[(u, v) \\text{ is included}] = {\\left\\lbrace \\begin{array}{ll}1 & \\text{ if } u \\in S, v \\in P\\,,\\\\\\frac{np-(k-\\ell )}{n-(k-\\ell )} & \\text{ if } u \\in S, v \\notin P\\,,\\\\p & \\text{ otherwise}\\,.\\end{array}\\right.", "}$ Theorem 6.5 Fix $\\varepsilon > 0$ independent of $n$ .", "Let $p \\ge 1/2$ and $q=1-p$ , and define $\\gamma $ such that $q=n^{-\\gamma }$ .", "For $\\varepsilon \\le \\gamma /2$ and $k=n^{1/2+\\varepsilon }/q^{1/2}$ and $\\ell \\le n^{1/4 - 0.001}$ , the norm of the degree-$f(1/\\varepsilon )$ truncated likelihood ratio between $B(k,n,\\ell , p)$ and $B(k,n, p)$ is $1+o(1)$ , for any function $f$ independent of $n$ .", "On the other hand, for $\\varepsilon \\ge \\gamma $ and $k=n^{1/2+\\varepsilon }/q^{1/2}$ and all $\\ell $ , the norm of the degree-$O(1/\\varepsilon )$ truncated likelihood ratio between $B(k,n,\\ell , p)$ and $B(k,n, p)$ is unbounded as $n \\rightarrow \\infty $ .", "Remark 6.6 Information-theoretically, to identify a small list in the semi-random planted clique model with a general $p$ , we need $k \\sim \\sqrt{n/q}$  [56].", "If we set $k$ to be this allegedly optimal value, then, the corresponding bipartite random graph has no $\\ell $ by $k$ -clique for $\\ell = O(\\log n)$ .", "The above theorem shows that in the low-degree polynomial model, distinguishing between the case when $\\ell = n^{\\varepsilon }$ vs $\\ell = O(\\log n)$ requires polynomials of degree $O(1/\\varepsilon )$ .", "In this section, for a bipartite graph $H=H(U,V)$ , we define $\\chi _{u,v}$ to be $\\sqrt{\\frac{1-p}{p}}$ if the edge $(u, v)$ is included and $-\\sqrt{\\frac{p}{1-p}}$ otherwise.", "For $E \\subseteq U \\times V$ define $\\chi _E = \\prod _{(u,v) \\in E} \\chi _{u,v}$ .", "Lemma 6.7 For $H$ samples from $D_{\\mathrm {planted}}$ 
and $E \\subseteq U \\times V$ , let $L$ be the number of left vertices in $E$ , $R$ the number of right vertices in $E$ , and $d_1, ..., d_R$ the number of edges in $E$ incident to each of the right vertices.", "Then ${E}_{D_{\\mathrm {planted}}}[\\chi _E] = {\\left\\lbrace \\begin{array}{ll}\\left(\\frac{\\ell }{k}\\right)^L \\left(\\frac{k-\\ell }{n}\\right)^R \\prod _{i=1}^R \\left(\\left(\\sqrt{\\frac{1-p}{p}}\\right)^{d_i}+O\\left(\\frac{k-\\ell }{n}\\frac{1-p}{p}\\right)\\right) & \\text{ if } d_1, \\ldots , d_R > 1\\,,\\\\0 & \\text{ otherwise }\\,.\\end{array}\\right.", "}$ Conditioned on the planted biclique $(S, P)$ , the edges are independent.", "For an edge $(u, v)$ , we calculate ${E}_{D_{\\mathrm {planted}}}[\\chi _{u,v} \\mid \\text{planted biclique is }(S,P)] = {\\left\\lbrace \\begin{array}{ll}\\sqrt{\\frac{1-p}{p}} & \\text{ if } u \\in S, v \\in P\\,,\\\\\\frac{-\\frac{k}{n}}{1-\\frac{k}{n}} \\sqrt{\\frac{1-p}{p}} & \\text{ if } u \\in S, v \\notin P \\,,\\\\0 & \\text{ otherwise}\\,,\\end{array}\\right.", "}$ where for the case $u \\in S, v \\notin P$ , we calculated the expectation as $\\frac{np-(k-\\ell )}{n-(k-\\ell )} \\cdot \\sqrt{\\frac{1-p}{p}} + \\left(1-\\frac{np-(k-\\ell )}{n-(k-\\ell )}\\right) \\cdot \\left(-\\sqrt{\\frac{p}{1-p}}\\right) = \\frac{-(k-\\ell )}{n-(k-\\ell )} \\sqrt{\\frac{1-p}{p}}\\,.$ We observe that if any of the left vertices in $E$ is not in the planted biclique, the conditional expectation of $\\chi _E$ is zero.", "Therefore, we condition on the event that all the left vertices in $E$ are in the planted biclique, which happens with probability $\\left(\\frac{\\ell }{k}\\right)^L$ .", "Conditioned on this event, for any particular right vertex, all the edges in $E$ incident to it are independent from the other edges in $E$ .", "Let $E_i$ be the subset of edges in $E$ that are incident to the $i$ -th right vertex.", "Then EDplanted[Ei planted biclique contains all left vertices in E] = k-n (1-pp)di + (1 - k-n) (-k-n1-k-n1-pp)di = {ll k-n ( (1-pp)di + O(k-n 1-pp)) if di > 1 , 0 if di = 1 ,.", "so EDplanted[E planted biclique contains all left vertices in E] = {ll (k-n)R i=1R ((1-pp)di+O(k-n1-pp)) if d1, ..., dR > 1 , 0 otherwise  ..", "Therefore, overall, ${E}_{D_{\\mathrm {planted}}}[\\chi _E] = {\\left\\lbrace \\begin{array}{ll}\\left(\\frac{\\ell }{k}\\right)^L \\left(\\frac{k-\\ell }{n}\\right)^R \\prod _{i=1}^R \\left(\\left(\\sqrt{\\frac{1-p}{p}}\\right)^{d_i}+O\\left(\\frac{k-\\ell }{n}\\frac{1-p}{p}\\right)\\right) & \\text{ if } d_1, \\ldots , d_R > 1\\,,\\\\0 & \\text{ otherwise }\\,.\\end{array}\\right.", "}$ [Proof of Theorem REF ] Let $LR^{\\le D}$ be the degree-$D$ truncated likelihood ratio between $D_{\\mathrm {planted}}$ and $D_{\\mathrm {null}}$ .", "Then, by standard results, $\\left\\Vert LR^{\\le D} - 1\\right\\Vert ^2 = \\sum _{0 < |E| \\le D} {E}_{D_{\\mathrm {planted}}}[\\chi _E]^2$ , where the norm is the one induced by $D_{\\mathrm {null}}$ .", "Therefore, if the right-hand side is $o(1)$ , then $\\left\\Vert LR^{\\le D}\\right\\Vert $ is $1+o(1)$ , and if the right-hand side is unbounded, then $\\left\\Vert LR^{\\le D}\\right\\Vert $ is also unbounded.", "Consider all $E$ with $L$ left vertices and $R$ right vertices.", "The contribution from these $E$ is, by Lemma REF , E L left vertices R right vertices EDplanted[E]2 = d1, ..., dR > 1 (k-)2L (k-n)2R i=1R ((1-pp)di+O(k-n1-pp))2 kL nR Bip(L, R, d1, ..., dR) , where $d_1, \\ldots , d_R$ represent the number of edges in $E$ incident to each of the right vertices, 
and $\\operatorname{Bip}(L,R, d_1, \\ldots , d_R)$ is the number of bipartite graphs with $L$ left vertices and $R$ right vertices such that all left degrees are at least 1 and the right degrees are $d_1, \\ldots , d_R$ .", "Consider a choice of $L$ and $R$ such that $\\sum _{d_1, ..., d_R} \\operatorname{Bip}(L, R, d_1, \\ldots , d_R) \\ne 0$ .", "Because we are interested in the behavior of the sum as $n$ goes to infinity, we ignore as negligible all factors that depend only on $L$ and $R$ .", "In particular, because $\\sum _{d_1, ..., d_R} \\operatorname{Bip}(L, R, d_1, \\ldots , d_R)$ can be bounded in terms of only $L$ and $R$ , we can focus on the term corresponding to $d_1=\\ldots =d_R=2$ , which maximizes the contribution of the terms $\\sqrt{\\frac{1-p}{p}}^{d_i}$ and is therefore proportional to the entire sum up to factors that depend only on $L$ and $R$ .", "We also approximate $k-\\ell \\approx k$ and $\\left(\\frac{1-p}{p}+O\\left(\\frac{k-\\ell }{n}\\frac{1-p}{p}\\right)\\right)^{2R} \\approx \\left(\\frac{1-p}{p}\\right)^{2R}$ .", "Then we have E L left vertices R right vertices EDplanted[E]2 (k)2L (kn)2R (1-pp)2R kL nR (k)2L (kn)2R (1-p)2R kL nR = n-Rk2R-L2L(1-p)2R .", "For $k = n^{1/2+\\varepsilon }/(1-p)^{1/2}$ , the above is equal to $n^{(2R-L)\\varepsilon -L/2}\\ell ^{2L}(1-p)^{R+L/2}$ .", "For $1-p=q=n^{-\\gamma }$ , this is equal to $n^{(2R-L)\\varepsilon -L/2-(R+L/2)\\gamma }\\ell ^{2L}$ .", "Finally, for $\\ell =n^{\\delta }$ , this is equal to $n^{(2R-L)\\varepsilon +2L\\delta -L/2-(R+L/2)\\gamma }$ .", "For $\\varepsilon \\le \\gamma /2$ , we have that the above is at most $n^{-L\\varepsilon +2L\\delta -L/2-L\\gamma /2}$ .", "For the exponent to be non-negative, we need $2L\\delta \\ge L/2$ , so $\\delta \\ge 1/4$ .", "In particular, for $c \\le n^{1/4-0.001}$ , the term goes to zero as $n$ goes to infinity regardless of how large $L$ is.", "Therefore, the sum of all the terms with $|E| \\le f(1/\\varepsilon )$ is $o(1)$ , for any function $f$ independent of $n$ .", "For $\\varepsilon \\ge \\gamma $ , consider the term corresponding to $L=2$ and some $R=O(1/\\varepsilon )$ .", "We have that the term is at least $n^{R\\varepsilon - L\\varepsilon + 2L\\delta - L/2 - L\\gamma /2} = n^{R\\varepsilon - 2\\varepsilon + 4\\delta - 1 - \\gamma } \\ge n$ for $R=O(1/\\varepsilon )$ large enough.", "Therefore, this term goes to infinity as $n$ goes to infinity, and then the same is true for the sum of all the terms." 
], [ "Appendix", "Proposition A.1 (Quasipolynomial brute force) The following algorithm takes a graph $G$ and in $n^{O(\\log _2 n)}$ time, outputs a list of size $\\le (n/k)(1+\\delta )$ whenever $k \\ge O(\\sqrt{n/\\delta \\log _2 n})$ .", "Further, if $G \\sim \\mathsf {FK}(n,k,1/2)$ then with probability at least $0.99$ over the draw of edges in $\\mathsf {cut}(S^*)$ in $G$ , the list output by the algorithm contains $S^*$ .", "Enumerate all $c\\log _2 n$ cliques $U$ in $G$ , If $U$ and its common neighbors form a $k$ -clique in $G$ , add it to the list, otherwise reject, Remove every $k$ -clique in the list that intersects another in the list in more than $c \\log _2 n$ vertices.", "Condition on the event that the planted clique $S^*$ is good.", "Note that if $U \\subseteq S^*$ , then, notice that the common neighborhood of $U$ must equal $S^*\\setminus U$ (as otherwise, there is a $k$ -clique that intersects $S^*$ in $\\ge c \\log _2 n$ vertices).", "Thus, $S^*$ is in the list.", "Further, step (3) ensures that the list only contains good cliques the number of which is at most $(1+\\delta )n/k$ ." ] ]
2212.05619
[ [ "Dark Energy Nature in Logarithmic $f(R,T)$ Cosmology" ], [ "Abstract The present research paper is an investigation of dark energy nature of logarithmic $f(R, T)$-gravity cosmology in a flat FLRW space-time universe.", "We have derived modified Einstein's field equations for the function $f(R, T)=R+16\\pi G\\alpha\\ln(T)$ where $R$ is the Ricci scalar curvature, $T$ is the trace of the stress energy momentum tensor and $\\alpha$ is a model parameter.", "We have solved field equations in the form of two fluid scenario as perfect-fluid and dark-fluid, where dark fluid term is derived in the form of perfect fluid source.", "We have made an observational constraints on the cosmological parameters $\\Omega_{(m)}, \\Omega_{(de)}, \\omega, \\omega^{(de)}, \\alpha$ and $H_{0}$ using $\\chi^{2}$ and $R^2$ tests with observational datasets like union 2.1 compilation and $H_{0}$.", "With these constraints we have discussed our model with deceleration parameter $q$, energy parameters $\\Omega_{(m)}, \\Omega_{(de)}$, EoS parameter $\\omega^{(de)}$ and Om diagnostic function.", "The derived $f(R, T)$ model shows a quintessence dark energy model $\\omega^{(de)}<-1$ and late-time universe approaches to $\\Lambda$CDM model." ], [ "Introduction", "The noble discovery in [1]-[15] approves the cosmic acceleration in expansion of the universe.", "The classical General Relativity (GR) predicts the expansion of the universe and it suggests that the expansion should be decelerating with time.", "But the observations in [1]-[15] suggest that the current universe has entered in a second phase of accelerated expansion which is started around redshift $z=1$ .", "Also, it is observed that approximately $70\\%$ of the total energy density of the universe is in some mysterious form called “Dark Energy\" which has high negative pressure that creates repulsive forces among the galaxies and results the accelerating expansion of the universe.", "But nobody knows actual nature of the Dark Energy.", "Einstein obtained this acceleration in his cosmological model by adding a constant term $\\Lambda $ , called “Cosmological Constant\".", "Although the “Cosmological Constant $\\Lambda -$ term\" is the best fit candidate for dark energy, but it has two problems, first is about its origin and second is fine-tuning its value with dark energy.", "To solve the dark energy problem and cosmological constant problem, in literature several modified and alternative theories of gravity to GR are presented by the cosmologists time to time but the dark energy problem is an unsolved problem till to date.", "Current studies focus on the determination of the equation of state parameter $\\omega $ (see the references [16], [17], [18], [19]) to measure the properties of dark energy component of the universe from observational data.", "The equation of state parameter $\\omega $ is defined as the ratio of pressure to the energy density of the fluid $\\omega (t)=\\frac{p}{\\rho }$ and is not necessarily constant.", "The vacuum energy having EoS $\\omega =-1$ is the simplest dark energy candidate and is equivalent to “Cosmological Constant $\\Lambda $ -term\".", "Alternatives to vacuum energy can be described by minimally coupled scalar fields, are quintessence $(\\omega > -1)$ , phantom energy $(\\omega <-1)$ and Quintom (that can across from phantom region to quintessence region as evolved) and have time dependent EoS parameter.", "Some observational constraints on limits of EoS $\\omega $ are obtained by Knop et al.", "[20] and Tegmark et 
al.", "[21] as $-1.67<\\omega <-0.62$ and $-1.33<\\omega <-0.79$ respectively.", "The latest results on limit of EoS are obtained as $-1.44<\\omega <-0.92$ at $68\\%$ confidence level in 2009 by Hinshaw et al.", "[22]; Komatsu et al.", "[23].", "However, we are not on a stage to use a constant value of $\\omega $ because we have not observational evidences which makes a distinction between constant and variable $\\omega $ .", "A large number of cosmologists, considered the equation of state parameter as a constant (Kujat et al.", "[24]; Bartelmann et al.", "[25]) with phase wise value $-1, 0, +\\frac{1}{3}$ and $+1$ for vacuum fluid, dust fluid, radiation and stiff dominated universe, respectively.", "But generally, $\\omega $ is time or redshift dependent function (Jimenez [27]; Das et al.", "[28]).", "In literature, several cosmologists ([29]-[37]) have presented cosmological models with variable EoS parameter $\\omega $ .", "A generalization of $f(R)$ gravity by including the trace $T$ of stress-energy-momentum tensor $T_{ij}$ has been proposed by Harko et al.", "[38] known as $f(R, T)$ gravity.", "The different cosmological and astrophysical aspects of $f(R, T)$ gravity have been extensively studied by several authors.", "Several authors [39] have investigated the physical and geometrical aspects of modified $f(R, T)$ cosmological models in different context.", "The accelerated expansion phase of the universe plays an important role in the dynamical history of our universe.", "Using different forms of the $f(R, T)$ gravity, Harko et al.", "[38] have constructed some FLRW modified cosmological models.", "Some generalization of $F(R)$ and $F(T)$ gravity theories are studied by Myrzakulov [40] and on the basis of this Lagrangian, he derived the field equations in $f(R, T)$ gravity and have obtained some exact solutions for the specific $F(R,T) = \\mu R + \\nu T$ function.", "After that several cosmological models are proposed in $f(R, T)$ gravity [41]-[71].", "The first logarithmic $f(R, T)$ gravity theory has been proposed by Elizalde et al.", "[72] in the form of $f(R, T)= R+ \\alpha R^2+2\\beta \\ln (T)$ in which they have studied the energy and stability conditions of the cosmological model.", "Recently Deb and Deshamukhya [73] have studied some constraints on simple form of logarithmic $f(R, T)= R+ 16\\pi G \\alpha \\ln (T)$ gravity by using dark energy parameters and Hubble constant $H_{0}$ .", "Here, we have studied the behaviour of dark energy parameters and equation of state parameters in logarithmic $f(R, T)= R-16\\pi G \\alpha \\ln (T)$ gravity with observational constraints.", "The present paper is organized as follows: Sect.", "1 is introductory, Sect.", "2 contains formulation of modified field equations for $f(R,T)= R-16\\pi G\\alpha \\ln (T)$ and its solution.", "In Sect.", "3, we have made observational constraints on energy parameters, Sect.", "4 contains discussion of results with Om diagnostic analysis.", "In last section 5 have concluding remarks." 
], [ "Field Equations for Logarithmic $f(R, T)$ -Gravity and Solution", "We consider the action for the logarithmic $f(R,T)= R-16\\pi G\\alpha \\ln (T)$ function as, $S = \\int \\sqrt{-g}\\left[\\frac{R}{16 \\pi G}-\\alpha \\ln (T)+L_m\\right]d^4x,$ where $L_m$ is the matter Lagrangian, $R$ is the Ricci scalar curvature, $T$ is the trace of the matter stress-energy momentum tensor $T_{ij}$ and $\\alpha $ is the model parameter.", "Variation of action (REF ) with respect to metric tensor $g_{ij}$ , we obtain the following field equations, $R_{ij} - \\frac{1}{2}g_{ij}R = 8 \\pi G\\left[T_{ij}+ T_{ij}^{(de)}\\right],$ where $T_{ij}^{(de)}=-\\frac{2\\alpha }{T}\\left( T_{ij} + \\frac{T}{2} g _{ij}\\ln T + \\Theta _{ij}\\right),$ where the term $\\Theta _{ij}$ , which plays a crucial role in $f(R,T)$ gravity as it contains matter Lagrangian $L_m$ , is given by $\\Theta _{ij}= g^{\\beta \\gamma } \\frac{\\delta T_{\\beta \\gamma }}{\\delta g^{ij}} = -2 T_{ij} + g_{ij}L_m - 2 \\frac{\\delta ^2 L_m}{\\delta g^{ij} \\delta .", "g^{\\beta \\gamma }}$ Clearly, depending on the nature of the matter field, the field equation for $f(R,T)$ gravity will be different.", "Now, assuming the Universe is filled with perfect fluid, the stress-energy-momentum tensor is $T_{ij}=(\\rho +p)u_{i}u_{j}-pg_{ij},$ where $\\rho $ is the energy density, $p$ is the isotropic pressure of the perfect fluid source and $u^{i}=(1, 0, 0, 0)$ is four fundamental velocity in co-moving coordinates and the matter Lagrangian density can be assumed as $L_m=-p$ .", "Now, we consider the Friedmann-Lemaitre-Robertson-Walkar (FLRW) metric in spherical coordinate for flat Universe as, $ds^2= c^{2}dt^2 - a(t)^2 [dx^2+dy^{2}+dz^{2}],$ where $a(t)$ denotes scale factor of the Universe.", "Now, assuming $8\\pi G=1~~\\&~~c=1$ in cosmological units, we get the field equations for the metric (REF ) as, $3H^2 = \\rho +\\rho ^{(de)}$ and $2\\dot{H}+3H^{2}=-p-p^{(de)}$ where $\\rho ^{(de)}=\\frac{2\\alpha (\\rho +p)}{T}-\\alpha \\ln (T),~~~~p^{(de)}=\\alpha \\ln (T)$ respectively called as dark energy density and corresponding isotropic pressure.", "Here $H$ is the Hubble parameter defined by $H=\\frac{\\dot{a}}{a}$ , and the trace $T$ of stress-energy momentum tensor is given as $T=\\rho -3p$ .", "The equation of continuity is obtained as $\\dot{\\rho }+3H(\\rho +p)+[\\dot{\\rho }^{(de)}+3H(\\rho ^{(de)}+p^{(de)})]=0$ Taking non-interacting condition $\\dot{\\rho }+3H(\\rho +p)=0,~~~~~~\\dot{\\rho }^{(de)}+3H(\\rho ^{(de)}+p^{(de)})=0$ Now, taking the equation of state (EoS) as $p=\\omega \\rho $ with $\\omega =$ constant, integrating Eq.", "(REF ), we get $\\rho =\\rho _{0}\\left(\\frac{a_{0}}{a}\\right)^{3(1+\\omega )},~~~~~~\\rho ^{(de)}=\\rho _{0}^{(de)}\\left(\\frac{a_{0}}{a}\\right)^{3(1+\\omega ^{(de)})}$ Now, from equation (REF ), we obtain $\\Omega _{(m)}+\\Omega _{(de)}=1$ where $\\Omega _{(m)}=\\frac{\\rho }{3H^{2}}$ and $\\Omega _{(de)}=\\frac{\\rho ^{(de)}}{3H^{2}}$ are respectively known as matter energy density parameter and dark energy density parameter.", "From Eqs.", "(REF ) & (REF ), we get the Hubble function as $H=H_{0}\\sqrt{\\Omega _{(m)0}\\left(\\frac{a_{0}}{a}\\right)^{3(1+\\omega )}+\\Omega _{(de)0}\\left(\\frac{a_{0}}{a}\\right)^{3(1+\\omega ^{(de)})}}$ or $H=H_{0}\\sqrt{\\Omega _{(m)0}(1+z)^{3(1+\\omega )}+\\Omega _{(de)0}(1+z)^{3(1+\\omega ^{(de)})}}$ From Eqs.", "(REF ), (REF ) & (REF ), we get the expression for deceleration parameter as $q=\\frac{1}{2}+\\frac{3}{2}\\frac{p+\\alpha \\ln (T)}{\\rho +\\frac{2\\alpha 
(\\rho +p)}{T}-\\alpha \\ln (T)}$ where $\\alpha =\\frac{\\rho _{0}^{(de)}}{\\frac{2(1+\\omega )}{1-3\\omega }-\\ln (1-3\\omega )-\\ln (\\rho _{0})}$" ], [ "Observational Constraints", "Current theoretical cosmology is focused on best-fitting of the cosmological parameters with observational cosmology.", "Hence, we have obtained the best curve of Hubble parameter $H(z)$ and apparent magnitude $m(z)$ using observational datasets $H_{0}$ , union 2.1 compilation and Pantheon datasets of SNe Ia observations by applying $\\chi ^{2}$ -test given as follows: $\\chi ^{2}=\\sum _{i=1}^{i=N}\\frac{[O_{i}-E_{i}]^{2}}{\\sigma _{i}^{2}}$ where $N$ denotes the number of data, $O_{i},~E_{i}$ represent the observed and estimated datasets respectively and $\\sigma _{i}$ denotes standard deviations." ], [ "Hubble Parameter", "The Hubble parameter $H$ is one of the important observational cosmological parameter which reveals the rate of expansion of the universe.", "We have considered 46 $H_{0}$ datasets with redshift $z$ (see the Table 1) estimated using Differential Age (DA) method by cosmologists time to time in [74]-[89] for best curve-fitting of $H(z)$ .", "Here, we have considered matter dominated universe with $\\omega =0$ , hence, the Eq.", "(REF ) becomes $H(z)=H_{0}\\sqrt{\\Omega _{(m)0}(1+z)^{3}+\\Omega _{(de)0}(1+z)^{3(1+\\omega ^{(de)})}}$ The best fit values of energy parameters are mentioned in Table 2 and the best fit curve is given by figure 1.", "Figure: The best fit curve of Hubble parameter H(z)H(z).We have considered 40 SNe Ia bined data of $m(z)$ from compilation of supernovae pantheon samples in the range of ($ 0 \\le z \\le 1.7$ ) [90], [91].", "We use the $\\chi ^{2}$ test formula to achieve the best fit curve for theoretical and empirical results.", "The expression for apparent magnitude is taken by $m(z)=16.08+5\\times \\log _{10}\\left(\\frac{H_{0}D_{L}}{0.026 c\\text{Mpc}}\\right).$ where the Luminosity distance $D_{L}$ is given by $D_{L}=c(1+z)\\int _{0}^{z}\\frac{dz}{H(z)}$ where $c$ is the velocity of light and $H(z)$ is the Hubble parameter given in Eq.", "(REF ).", "The best fit values of the energy parameters are given in Table 2 and the best fit curve is represented by the figure 2.", "Table: The best-fit values of energy parameters along two data sets SNe Ia and Hubble Parameter H(z)H(z).Figure: The best fit curve of apparent magnitude m(z)m(z).The expression for deceleration parameter $q$ is given by equation (REF ) and its geometrical behaviour is represented by figure 3.", "One can see that the $q(z)$ is an increasing function of redshift $z$ with signature flipping and it shows a transit phase universe (decelerating to accelerating phase) model.", "The transition redshift is obtained as $z_{t}=0.6455$ for Pantheon data and $z_{t}=0.7356$ for $H_{0}$ data.", "That is the matter dominated ($\\omega =0$ ) universe is in decelerating phase for $z>z_{t}$ and accelerating for $z<z_{t}$ .", "In literature, Davis et al.", "[92] have obtained the transition redshift $z_{t} \\sim 0.6 (1\\; \\sigma )$ in better agreement with the flat $\\Lambda $ CDM model ($z_{t} = (2\\Omega _{\\Lambda }/\\Omega _{m})^{\\frac{1}{3}} - 1 \\sim 0.66$ ) which is supported our model.", "The present value of the deceleration parameter is obtained $q_{0}=-0.5276$ for Pantheon data and $q_{0}=-0.5756$ for $H_{0}$ data (see Table 3) which shows that present universe is accelerating phase and is in good agreement with recent observations [1]-[15].", "Figure: The behaviour of deceleration parameter qq 
over redshift $z$.", "From equation (REF ), we can obtain $q=\frac{1}{2}+\frac{3}{2}\omega ^{(de)}\Omega _{(de)}.$ For $q<0$ , we therefore need $\Omega _{(de)}>-\frac{1}{3\omega ^{(de)}}.$ In our derived model, we have obtained for $q<0$ , $\Omega _{(de)}>0.411522634, 0.573180867$ for the two datasets and these are in good agreement with observations.", "Also, for $q=q_{0}$ , the energy parameters are $\Omega _{(m)}=0.2981\pm 0.08921$ , $\Omega _{(de)}=0.7019$ for Pantheon data and $\Omega _{(m)}=0.26535\pm 0.01254$ , $\Omega _{(de)}=0.73465$ for $H_{0}$ datasets.", "Table: The present values of cosmological parameters along the two data sets SNe Ia and Hubble parameter $H(z)$.", "The energy density parameters $\Omega _{(m)}$ and $\Omega _{(de)}$ are given by equation (REF ) and their geometrical behaviour is shown in figure 4a & figure 4b.", "One can see that as $z\rightarrow -1$ , $(\Omega _{(m)}, \Omega _{(de)})\rightarrow (0, 1)$ and this reveals that the late-time universe is dark energy dominated and approaches the $\Lambda $ CDM model, which is in good agreement with recent observations.", "In our model, the dark energy term is derived from a perfect-fluid source and this shows the importance of the model.", "The present values of the energy parameters are mentioned in Tables 2 & 3.", "The value of the model parameter $\alpha $ is estimated as $\alpha =1.26870954\times 10^{-37}$ for $H(z)$ data and $\alpha =1.21383844\times 10^{-37}$ for Pantheon data of SNe Ia, which is compatible with recent values.", "Figure: The evolution of the matter energy density parameter $\Omega _{(m)}$ and the dark energy density parameter $\Omega _{(de)}$ over redshift $z$, respectively.", "The expressions for the dark-energy density and pressure are derived in equation (REF ) and their geometrical behaviour is shown in figure 5a & figure 5b.", "One can see that as $z\rightarrow -1$ , the dark energy density $\rho ^{(de)}$ increases and the negative pressure of dark energy $p^{(de)}$ also increases.", "This shows that the present universe is dark energy dominated and this energy comes from the matter fluid source which is responsible for the acceleration in expansion.", "Figure: The evolution of the dark energy density $\rho ^{(de)}$ and the dark-fluid pressure $p^{(de)}$ over redshift $z$.", "The cosmic dark energy models can be classified through the behaviour of the Om diagnostic function [93].", "The simplest diagnostic for a spatially flat universe is given by $Om(z)=\frac{\left(\frac{H(z)}{H_{0}}\right)^{2}-1}{(1+z)^{3}-1}$ where $H(z)$ is the Hubble parameter given in Eq.", "(REF ) and $H_{0}$ is its current value.", "A negative slope of $Om(z)$ corresponds to quintessence behaviour, and a positive slope corresponds to phantom behaviour.", "A constant $Om(z)$ represents the $\Lambda $ CDM model.", "Figure: The geometrical behaviour of the $Om(z)$ function over redshift $z$.", "Figure 6 shows the geometrical behaviour of the Om diagnostic function $Om(z)$ over redshift $z$ ; its mathematical expression is given in equation (REF ) above.", "From figure 6, one can see that the slope of $Om(z)$ is negative for our model and it shows the quintessence behaviour of the model.", "Thus, the model derived in $f(R,T)=R-16\pi G\alpha \ln (T)$ gravity behaves just like a quintessence dark energy model.", "Also, this is supported by the behaviour of the dark energy EoS $\omega ^{(de)}>-1$ , as in our derived model $\omega ^{(de)}=-0.81\pm 0.22149, -0.58155\pm 0.16941$ along the two observational datasets Pantheon and $H(z)$ respectively."
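The background quantities discussed in this section are straightforward to evaluate once the best-fit parameters are inserted; the sketch below (ours, using the Pantheon values $\Omega _{(m)0}=0.2981$ and $\omega ^{(de)}=-0.81$ quoted in the text, and an assumed placeholder $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$ since the fitted $H_{0}$ is reported in Table 2 rather than reproduced here) evaluates $H(z)$, the deceleration parameter $q(z)$ and the $Om(z)$ diagnostic.

```python
import numpy as np

# Pantheon best-fit values quoted in the text; H0 (km/s/Mpc) is only a placeholder here.
H0, Om0, w_de, w_m = 70.0, 0.2981, -0.81, 0.0
Ode0 = 1.0 - Om0

def H(z):
    """H(z) = H0 sqrt(Om0 (1+z)^{3(1+w_m)} + Ode0 (1+z)^{3(1+w_de)})."""
    return H0 * np.sqrt(Om0 * (1 + z) ** (3 * (1 + w_m)) + Ode0 * (1 + z) ** (3 * (1 + w_de)))

def q(z):
    """q = 1/2 + (3/2)(w_m Omega_m(z) + w_de Omega_de(z)), with Omega_i(z) = rho_i / (3 H^2)."""
    E2 = (H(z) / H0) ** 2
    Om = Om0 * (1 + z) ** (3 * (1 + w_m)) / E2
    Ode = Ode0 * (1 + z) ** (3 * (1 + w_de)) / E2
    return 0.5 + 1.5 * (w_m * Om + w_de * Ode)

def Om_diag(z):
    """Om(z) = ((H/H0)^2 - 1) / ((1+z)^3 - 1)."""
    return ((H(z) / H0) ** 2 - 1) / ((1 + z) ** 3 - 1)

zs = np.array([0.1, 0.5, 0.65, 1.0, 2.0])
print("q(z)  :", np.round(q(zs), 3))        # changes sign near the quoted transition z_t ~ 0.65
print("Om(z) :", np.round(Om_diag(zs), 3))  # decreasing with z, the quintessence-like signature
```

Note that $q(z)$ and $Om(z)$ are independent of the assumed $H_{0}$, so the placeholder only affects the normalisation of $H(z)$ itself.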
], [ "Conclusion", "The present research paper is an investigation of dark energy nature of logarithmic $f(R, T)$ -gravity cosmology in a flat FLRW space-time universe.", "We have derived modified Einstein's field equations for the function $f(R, T)=R-16\\pi G\\alpha \\ln (T)$ where $R$ is the Ricci scalar curvature, $T$ is the trace of the stress energy momentum tensor and $\\alpha $ is a model parameter.", "We have solved field equations in the form of two fluid scenario as perfect-fluid and dark-fluid, where dark fluid term is derived in the form of perfect fluid source.", "We have made an observational constraints on the cosmological parameters $\\Omega _{(m)}, \\omega ^{(de)}$ and $H_{0}$ using $\\chi ^{2}$ test with observational datasets like Pantheon sample of SNe Ia and $H(z)$ .", "With these constraints we have discussed our model with deceleration parameter $q$ , energy parameters $\\Omega _{(m)}, \\Omega _{(de)}$ , EoS parameter $\\omega ^{(de)}$ etc.", "and Om diagnostic function.", "The main features of the derived model are as follows: The derived model shows a transit phase (decelerating to accelerating) model with present values $q_{0}=-0.5276, -0.5756$ along two observational datasets Pantheon and $H(z)$ respectively.", "The transition redshift is estimated as $z_{t}=0.6455, 0.7356$ for two data sets Pantheon and $H(z)$ respectively, which is in good agreement with recent observations [92].", "The present values of energy parameters are estimated as $\\Omega _{(m)}=0.2981\\pm 0.08921$ , $\\Omega _{(de)}=0.7019$ for Pantheon data and $\\Omega _{(m)}=0.26535\\pm 0.01254$ , $\\Omega _{(de)}=0.73465$ for $H(z)$ datasets.", "The behaviour of dark energy EoS $\\omega ^{(de)}>-1$ as in our derived model $\\omega ^{(de)}=-0.81\\pm 0.22149, -0.58155\\pm 0.16941$ along two observational datasets respectively.", "The values of model parameter $\\alpha $ are estimated as $\\alpha =1.26870954\\times 10^{-37}$ for $H(z)$ data and $\\alpha =1.21383844\\times 10^{-37}$ for Pantheon data of SNe Ia which is compatible with recent values.", "The derived $f(R, T)$ model shows a quintessence dark energy model $\\omega ^{(de)}>-1$ and late-time universe approaches to $\\Lambda $ CDM model.", "Thus, the derived cosmological model behaves as an quintessence dark energy model and the dark energy term is derived from perfect fluid source which is an interesting feature of this model." ], [ "Acknowledgement", "The author is thankful to Center for Theoretical Physics and Mathematics, IASE (Deemed to be University), Churu, Rajasthan, India for providing facilities and support where part of this work is carried out.", "The authors of this article have no conflict of interests.", "Also, this work is not supported by any type of funding sources." ], [ "Data Availability", "We have not used any data for the analysis presented in this work." ] ]
2212.05605
[ [ "Global existence for coupled reaction-diffusion equations with a balance\n law and nonlinearities with non-constant sign" ], [ "Abstract This paper aims to prove the global existence of solutions for coupled reaction diffusion equations with a balance Law and nonlinearities with a non constant sign.", "The case when one (or both) of the components of the solution is not a priori bounded is treated.", "Proofs are based on developed Lyapunov techniques." ], [ "We consider the following reaction-diffusion system $\\left\\lbrace \\begin{array}{c}\\dfrac{\\partial u}{\\partial t}-a\\Delta u=f(u,v)=-c(x)\\varphi (u,v)\\text{\\qquad \\qquad in }\\mathbb {R}^{+}\\times \\Omega , \\\\\\\\\\dfrac{\\partial v}{\\partial t}-b\\Delta v=g(u,v)=c(x)\\varphi (u,v)\\text{\\qquad \\qquad in }\\mathbb {R}^{+}\\times \\Omega ,\\end{array}\\right.", "$ with the boundary conditions $\\frac{\\partial u}{\\partial \\eta }=\\frac{\\partial v}{\\partial \\eta }=0\\text{\\qquad \\qquad on }\\mathbb {R}^{+}\\times \\partial \\Omega , $ and the initial data $u(0,x)=u_{0}(x),\\qquad v(0,x)=v_{0}(x)\\qquad \\text{in}\\;\\Omega , $ where $\\Omega $ is an open bounded domain of class $\\mathbb {C}^{1}$ in $\\mathbb {R}^{n}$ , with boundary $\\partial \\Omega ,$ $\\dfrac{\\partial }{\\partial \\eta }$ denotes the outward normal derivative on $\\partial \\Omega $ and $a$ and $b$ are positive constants.", "The initial data are assumed to be bounded on $\\Omega $ and nonnegative.", "The function $\\varphi $ $\\in C^{1}(\\mathbb {R}\\times \\mathbb {R},\\mathbb {R}^{+})$ with at most polynomial growth and satisfy $\\varphi (0,v) &=&\\ \\varphi (u,0)=0,\\text{ for a all }u\\ge 0,\\ v\\ge 0, \\\\&& $ which assures the positivity of the solution on $\\Omega $ at all time.", "The function $c(t,x)\\in C(\\mathbb {R}_{+}\\times \\Omega ,\\mathbb {R})$ is not of constant sign.", "We suppose it possesses a finite number of zeros independent of the time, that is there exist $x_{1},...,x_{k}\\in $ $\\Omega $ such that $c\\left( t,x_{1}\\right) =...=c\\left( t,x_{k}\\right) =0,\\text{ for a all }t\\ge 0, $ which is the case when $c(t,x)=c_{1}\\left( t\\right) .c_{2}\\left( x\\right) $ with $c_{1}\\left( t\\right) >0$ , for all $t\\ge 0$ .", "We denote by $\\Omega _{-}$ and $\\Omega _{+}$ the following subsets of $\\Omega $ $\\left\\lbrace \\begin{array}{c}\\Omega _{-}=\\left\\lbrace x\\in \\Omega \\text{ such that }c\\left( t,x\\right) <0,\\text{ for a all }t\\ge 0\\right\\rbrace , \\\\\\\\\\Omega _{+}=\\left\\lbrace x\\in \\Omega \\text{ such that }c\\left( t,x\\right) >0,\\text{ for a all }t\\ge 0\\right\\rbrace ,\\end{array}\\right.", "$ are independent of the time.", "The existence of a positive regular solution locally in time is classical.", "The global existence in time of a regular solution is not so obvious.", "If $c$ is a constant or is of constant sign, then global existence is immediate.", "But if $c$ changes sign, the problem become difficult.", "The following case was studied in [19]: $c(x) &<&0\\text{ in }\\left( -1,0\\right) ,\\ \\ c(0)=0,\\ \\ c(x)>0\\text{ in }\\left( 0,1\\right) , \\\\&&$ where it is shown that the solutions are locally uniformly bounded in $L_{\\infty }\\left( \\left[ 0,\\infty )\\times \\right( 0,1\\right) $ and $L_{\\infty }\\left( \\left[ 0,\\infty )\\times \\right( -1,0\\right) $ .", "The question with general $c$ is still an open question in all space dimensions (see [17]).", "In general, $L_{\\infty }\\left( \\Omega \\right) $ -blow up may occur in finite time for quadratic systems with 
nonlinearities satisfying the quasipositivity (REF ) and the control mass guarantied by the following condition $f(u,v)+g(u,v) &=&0,\\ \\text{for all }u,\\ v\\ge 0.", "\\\\&&$ Indeed, the authors in [18] showed $L_{\\infty }\\left( \\Omega \\right) $ -blow up for system (REF ) with appropriate boundary conditions.", "For this type of reaction diffusion systems only two properties hold: the positivity of the solutions is preserved for all time and the total mass of the components is uniformly controlled in time $\\int \\limits _{\\Omega }\\left( u+v\\right) dx=\\int \\limits _{\\Omega }\\left(u_{0}+v_{0}\\right) dx.", "$ This uniform control on the mass (or in mathematical terms on the L1-norm of the solution) suggests that no blow up should occur in finite time.", "It turns out that the situation is not so simple.", "This explains why so many partial results in different directions are found in the literature on this topic, and why also the general question of global existence of strong solutions even weak solutions is still open, while lots of systems arise in applications with these two natural properties.", "We recall here the main positive and negative results on global existence, together with many references, a description of the still open problems and a few new results as well.", "To show how the situation is difficult, in order to prove global existence in time of weak solutions to a class of quadratic reaction-diffusion systems for whom the reaction terms change sign, the authors in [2] forced these reactions to satisfy a Lyapunov structure of LlogL entropy type $\\left( \\log u\\right) f+\\left( \\log v\\right) g &\\le &0,\\text{ for a all }u>0,\\ v>0,.", "\\\\&& $ But this type of structure is satisfied only for nonlinearities at most quadratic.", "The reactions terms have not a constant sign means that none of the equations is good is the sense that neither $u$ nor $v$ is a priori bounded or at least bounded in some $Lp$ -space for $p$ large to permits us the application of the well known regularizing effect and deduce the global existence of strong solutions in time for problem (REF )-(REF ).", "Or to get uniform integrability of the right-hand side of (REF ) in order to obtain a good approximation of system (REF ) and then apply Dominated Convergence Theorem of integrals which with compactness properties of the heat operator give together global existence of a weak solution.", "In the case when the nonlinearities have a constant sign many results have been obtained (see N. Alikakos [1], K. Masuda [14]$,$ S. L. Hollis, R. H. Martin and M. Pierre [7] established global existence of positive solutions for system (REF ) with appropriate boundary conditions, under the conditions of the uniform boundedness of $u$ on $[0,T_{\\max }]\\times \\Omega $ and $f(r,s)+g(r,s) &\\le &C(r,s)\\left( r+s+1\\right) ,\\text{ for all }r\\ge 0\\text{and }s\\ge 0, \\\\&& $ where $C(r,s)$ is positive and uniformly bounded function defined on $\\mathbb {R}^{+}\\times $ $\\mathbb {R}^{+}$ .", "One notices that, to prove global existence for solutions to system (REF ), authors impose, in addition to (REF ), to one of the components of the reaction term the same condition.", "In the case when the nonlinearities have not a constant sign or for which one of the components $u$ or $v$ is a priori uniformly bounded.", "there is not many results: A. J. Morgan [15] Generalized the results of S. L. Hollis, R. H. Martin and M. 
Pierre [7] to show that solutions of the m-components reaction diffusion systems exist globally ($m\\ge 2$ ) where also, he imposed to $f$ and $f+g$ conditions (REF ).", "In S. Kouachi [12], we generalized the above results for two components reaction diffusion systems under the unique condition $f+Dg &\\le &C\\left[ 1+u+v\\right] ,\\text{ for a all }u\\ge 0,\\ v\\ge 0, \\\\&& $ for all positive constant $D$ sufficiently large, where $C$ is positive constant and we showed the global existence without imposing the boundedness of one of the components of the solution." ], [ "It is well known that to prove global existence of solutions to (REF )-(REF ) (see A. Friedman [3], D. Henry [5], A. Pazy [16] and F. Rothe [20]), it suffices to derive a uniform estimate of $\\left\\Vert f(u,v)\\right\\Vert _{p}$ and $\\left\\Vert g(u,v)\\right\\Vert _{p}$ on $\\left[ 0,T^{\\ast }\\right[ $ for some $p>N/2$ .", "Our aim is to apply polynomial Lyapunov functional method (see M. Kirane and S. Kouachi [8], [10] and [9], S. Kouachi and A. Youkana [13] and S. Kouachi [11] and [12]) according to the solutions $(u,v)$ of system (REF ), to carry out their L$^{p}-$ bounds and deduct their global existence.", "The nonnegativity of the solutions is preserved by application of classical results on invariant regions (see J. Smoller [21]), since the reaction (REF ) is quasi-positive, i.e.", ": $f\\left( 0,v\\right) &\\ge &0,\\ \\ \\text{\\ for all }v\\ge 0\\text{ and }g\\left(u,0\\right) \\ge 0,\\ \\ \\text{\\ for all }u\\ge 0.", "\\\\&& $ The usual norms in the spaces $\\mathbb {L}^{p}(\\Omega )$ , $\\mathbb {L}^{\\infty }(\\Omega )$ and $\\mathbb {C}\\left( \\overline{\\Omega }\\right) $ are respectively denoted by $\\left\\Vert u\\right\\Vert _{p}^{p}=\\frac{1}{\\left|\\Omega \\right|}\\int \\limits _{\\Omega }\\left|u(x)\\right|^{p}dx,$ and $\\left\\Vert u\\right\\Vert _{\\infty }=\\underset{x\\in \\Omega }{\\max }\\left|u(x)\\right|.$ Since the nonlinear right hand side of (REF ) is continuously differentiable on $\\mathbb {R}^{+}\\times $ $\\mathbb {R}^{+}$ , then for any initial data in $\\mathbb {C}\\left( \\overline{\\Omega }\\right) $ or $\\mathbb {L}^{p}(\\Omega ),\\;p\\in \\left( 1,+\\infty \\right) $ , it is easy to check directly its Lipschitz continuity on bounded subsets of the domain of a fractional power of the operator $&&\\;\\;\\left(\\begin{array}{ll}-a\\Delta & 0 \\\\0 & -b\\Delta \\end{array}\\right) .\\;\\newline \\\\&&$ Before the statement of the results, let us define for a fixed integer $p\\ge 1,$ the following polynomial functional which is of a great interest in the following $t\\mapsto L(t)=\\int \\limits _{\\Omega }H_{p}\\left( u(t,x),v(t,x)\\right) dx,$ where $H_{p}\\left( u,v\\right) =\\overset{p}{\\underset{i=0}{\\sum }}C_{p}^{i}\\theta _{i}u^{i}v^{p-i}, $ and $\\theta _{i}(x)=\\left\\lbrace \\begin{array}{c}c^{i+1}K^{i^{2}},\\ x\\in \\Omega _{-}, \\\\\\\\C^{i+1}K^{i^{2}},\\ x\\in \\Omega _{+},\\end{array}\\right.", "$ where the constants $c$ and $C$ are chosen such that the finite sequence $\\left\\lbrace \\theta _{i}\\left( x\\right) \\right\\rbrace $ is decreasing for $x\\in \\Omega _{-}$ and increasing for $x\\in \\Omega _{+}.$ That is $cK^{2p+1}<1<CK, $ and where $K$ is any positive constant satisfying $K^{2}>\\frac{\\left( a+b\\right) ^{2}}{4ab} $ Clearly, from (REF ) we have $\\frac{\\theta _{i}\\theta _{i+2}}{\\theta _{i+1}^{2}}=K^{2},\\ \\text{for all }x\\in \\Omega ,\\ i=0,1,...p-2.", "$" ], [ "The main result of the paper is the following Let $\\left( u(t,.),v(t,.", 
")\\right) $ be any positive solution of the problem (REF )-(REF ), then the functional defined by (REF ) is uniformly bounded on the interval $[0,T^{\\ast }]$ .", "We treat the special case when $\\Omega =\\left] -1,1\\right[ $ and the function $c(t,x)=c\\left( x\\right) $ independent of the time and possess a unique zero, for example $c(0)=0$ .", "Suppose $c(x)<0$ for $x\\in \\left] -1,0\\right[ $ and $c(x)>0$ for $x\\in \\left] 0,1\\right[ $ .", "The general case will be deduced easily.", "We truncate the $\\theta _{i}$ ' by setting for all positive integer $n\\ge 1$ $\\theta _{i}^{n}\\left( x\\right) &=&\\psi _{n}\\left( x\\right) \\theta _{i}\\left( x\\right) , \\\\&& $ where $\\psi _{n}:\\left] -1,1\\right[ \\mapsto \\left[ 0,1\\right] $ are the $C^{\\infty }$ functions defined by $\\psi _{n}\\left( x\\right) =\\left\\lbrace \\begin{array}{c}0,\\ \\left|x\\right|<\\tfrac{1}{2n}, \\\\\\\\1,\\ \\ \\left|x\\right|>\\tfrac{1}{n}.\\end{array}\\right.", "$ $\\nabla \\theta _{i}^{n}\\left( x\\right) =\\psi _{n}\\left( x\\right) \\nabla \\theta _{i}\\left( x\\right) +\\theta _{i}\\left( x\\right) \\nabla \\psi _{n}\\left( x\\right) =\\left\\lbrace \\begin{array}{l}0,\\ \\left|x\\right|<\\tfrac{1}{2n},\\ \\text{or }\\left|x\\right|>\\tfrac{1}{n}, \\\\\\\\\\theta _{i}\\left( x\\right) \\nabla \\psi _{n}\\left( x\\right) ,\\ \\tfrac{1}{2n}<\\ \\left|x\\right|<\\tfrac{1}{n}.\\end{array}\\right.$ Then, we easily check that the the functions $\\theta _{i}^{n}\\left( x\\right),\\ n\\ge 1$ are $C^{\\infty }\\left( \\Omega \\right) $ and satisfy the assumption (2.8).", "Let $t\\mapsto L_{n}(t)=\\int \\limits _{\\Omega }H_{p}^{n}\\left( u(t,x),v(t,x)\\right)dx, $ where $H_{p}^{n}\\left( u,v\\right) =\\overset{p}{\\underset{i=0}{\\sum }}C_{p}^{i}\\theta _{i}^{n}u^{i}v^{p-i} $ By differentiating $L_{n}$ with respect to $t$ , we get $L_{n}^{\\prime }(t) &=&\\int \\limits _{\\Omega }\\overset{p}{\\underset{i=1}{\\sum \\ }}\\left( \\;iC_{p}^{i}\\theta _{i}^{n}u^{i-1}v^{p-i}\\right) \\frac{\\partial u}{\\partial t}dx+\\int \\limits _{\\Omega }\\overset{p-1}{\\underset{i=0}{\\sum \\ }}\\left( (p-i)C_{p}^{i}\\theta _{i}^{n}u^{i}v^{p-i-1}\\right) \\frac{\\partial v}{\\partial t}dx.", "\\\\&& $ Using the fact that $iC_{p}^{i} &=&pC_{p-1}^{i-1},\\text{\\ for\\ all\\ }\\;i=1,...,p, \\\\&& $ and interchanging the index, we get $L_{n}^{\\prime }(t) &=&p\\overset{p-1}{\\underset{i=0}{\\sum }}C_{p-1}^{i}\\int \\limits _{\\Omega }u^{i}v^{p-1-i}\\left[ \\left( a\\theta _{i+1}^{n}\\Delta u+b\\theta _{i}^{n}\\Delta v\\right) +\\left( \\theta _{i+1}^{n}f(u,v)+\\theta _{i}^{n}g(u,v)\\right) \\right] dx \\\\&& \\\\&=&I_{n}+J_{n},$ where $I_{n}=p\\overset{p-1}{\\underset{i=0}{\\sum }}\\int \\limits _{\\Omega }C_{p-1}^{i}u^{i}v^{p-1-i}\\left( a\\theta _{i+1}^{n}\\Delta u+b\\theta _{i}^{n}\\Delta v\\right) dx, $ and $J_{n} &=&p\\overset{p-1}{\\underset{i=0}{\\sum }}C_{p-1}^{i}\\int \\limits _{\\Omega }\\left[ \\left( -\\theta _{i+1}^{n}+\\theta _{i}^{n}\\right) c(t,x)\\varphi (u,v)u^{i}v^{p-1-i}\\right] dx.", "\\\\&& $ By simple use of Green's formula we obtain $I_{n} &=&-p\\overset{p-1}{\\underset{i=0}{\\sum }}C_{p-1}^{i}\\int \\limits _{\\Omega }\\left[ a\\nabla \\left( \\theta _{i+1}^{n}u^{i}v^{p-1-i}\\right) \\nabla u+b\\nabla \\left( \\theta _{i}^{n}u^{i}v^{p-1-i}\\right) \\nabla v\\right] dx \\\\&& \\\\&=&I_{n}^{1}+I_{n}^{2},$ where $I_{n}^{1}=-p\\overset{p-1}{\\underset{i=0}{\\sum \\ }}C_{p-1}^{i}\\int \\limits _{\\Omega }\\left[ a\\theta _{i+1}^{n}\\nabla \\left( u^{i}v^{p-1-i}\\right) \\nabla u+b\\theta 
_{i}^{n}\\nabla \\left( u^{i}v^{p-1-i}\\right) \\nabla v\\right] dx,$ and $I_{n}^{2} &=&-p\\overset{p-1}{\\underset{i=0}{\\sum \\ }}C_{p-1}^{i}\\int \\limits _{\\Omega }\\left( u^{i}v^{p-1-i}\\right) \\left[ a\\nabla \\theta _{i+1}^{n}\\nabla u+b\\nabla \\theta _{i}^{n}\\nabla v\\right] dx.", "\\\\&&$ Using the trigonometric formula another time and interchanging the index, we get $I_{n}^{1}=-p(p-1)\\overset{p-2}{\\underset{i=0}{\\sum }}C_{p-2}^{i}\\ \\int \\limits _{\\Omega }u^{i}v^{p-2-i}\\left( a\\theta _{i+2}^{n}\\left|\\nabla u\\right|^{2}+\\left( a+b\\right) \\theta _{i+1}^{n}\\nabla u\\nabla v+b\\theta _{i}^{n}\\left|\\nabla v\\right|^{2}\\right) dx.", "$ We have $L_{n}^{\\prime }(t) &=&I_{1}^{n}+I_{2}^{n}+J_{1}^{n}.", "\\\\&&$ Then, under condition (REF ) we have $I_{1}^{n}\\le 0$ .", "Using the following formula $C_{p-1}^{i} &=&C_{p-2}^{i-1}+C_{p-2}^{i},\\ i=1,...,p-2, \\\\&&$ the integrals $I_{2}^{n}$ can be written as follows $I_{2}^{n} &=&I_{21}^{n}+I_{22}^{n}, \\\\&&$ where $I_{21}^{n} &=&-p\\int \\limits _{\\Omega }\\left\\lbrace \\left[ av\\nabla \\theta _{1}^{n}\\nabla u+bv\\nabla \\theta _{0}^{n}\\nabla v\\right] v^{p-2}\\oplus \\overset{p-2}{\\underset{i=1}{\\sum \\ }}C_{p-2}^{i}\\left[ a\\nabla \\theta _{i+1}^{n}v\\nabla u+b\\nabla \\theta _{i}^{n}v\\nabla v\\right]u^{i}v^{p-2-i}\\right\\rbrace dx \\\\&& \\\\&=&-p\\overset{p-2}{\\underset{i=0}{\\sum \\ }}\\int \\limits _{\\Omega }C_{p-2}^{i}\\left[ av\\nabla \\theta _{i+1}^{n}\\nabla u+bv\\nabla \\theta _{i}^{n}\\nabla v\\right] u^{i}v^{p-2-i}dx,$ and $I_{22}^{n} &=&-p\\int \\limits _{\\Omega }\\left\\lbrace \\overset{p-2}{\\underset{i=1}{\\sum \\ }}C_{p-2}^{i-1}\\left[ au\\nabla \\theta _{i+1}^{n}\\nabla u+bu\\nabla \\theta _{i}^{n}\\nabla v\\right] u^{i-1}v^{p-1-i}\\oplus \\left[ au\\nabla \\theta _{p}^{n}\\nabla u+bu\\nabla \\theta _{p-1}^{n}\\nabla v\\right] u^{p-2}\\right\\rbrace dx\\\\&=&-p\\overset{p-2}{\\underset{i=0}{\\sum \\ }}\\int \\limits _{\\Omega }C_{p-2}^{i}\\left[ au\\nabla \\theta _{i+2}^{n}\\nabla u+bu\\nabla \\theta _{i+1}^{n}\\nabla v\\right] u^{i}v^{p-2-i}.$ Then $I_{2}^{n}=-p\\overset{p-2}{\\underset{i=0}{\\sum \\ }}\\int \\limits _{\\Omega }C_{p-2}^{i}\\left[ a\\left( u\\nabla \\theta _{i+2}^{n}+v\\nabla \\theta _{i+1}^{n}\\right) \\nabla u+b\\left( u\\nabla \\theta _{i+1}^{n}+v\\nabla \\theta _{i}^{n}\\right) \\nabla v\\right] u^{i}v^{p-2-i}dx,$ but $\\nabla \\theta _{i}^{n}\\left( x\\right) =\\psi _{n}\\left( x\\right) \\nabla \\theta _{i}\\left( x\\right) +\\theta _{i}\\left( x\\right) \\nabla \\psi _{n}\\left( x\\right) =\\left\\lbrace \\begin{array}{l}0,\\ \\left|x\\right|<\\tfrac{1}{2n},\\ \\text{or }\\left|x\\right|>\\tfrac{1}{n}, \\\\\\\\\\theta _{i}\\left( x\\right) \\nabla \\psi _{n}\\left( x\\right) ,\\ \\tfrac{1}{2n}<\\ \\left|x\\right|<\\tfrac{1}{n}.\\end{array}\\right.$ By Young's inequality, for all $\\epsilon >0$ there exists $C_{\\epsilon }>0$ such that $\\left|\\left( u\\tfrac{\\nabla \\theta _{i+2}^{n}}{\\sqrt{\\theta _{i+2}^{n}}}+v\\tfrac{\\nabla \\theta _{i+1}^{n}}{\\sqrt{\\theta _{i+2}^{n}}}\\right) \\sqrt{\\theta _{i+2}^{n}}\\nabla u\\right|&\\le &\\epsilon (p-1)\\theta _{i+2}^{n}\\left|\\nabla u\\right|^{2}+\\tfrac{\\left|\\left( u\\tfrac{\\nabla \\theta _{i+2}^{n}}{\\sqrt{\\theta _{i+2}^{n}}}+v\\tfrac{\\nabla \\theta _{i+1}^{n}}{\\sqrt{\\theta _{i+2}^{n}}}\\right) \\right|^{2}}{4\\epsilon (p-1)}, \\\\&& \\\\\\left|\\left( u\\tfrac{\\nabla \\theta _{i+1}^{n}}{\\sqrt{\\theta _{i}^{n}}}+v\\tfrac{\\nabla \\theta _{i}^{n}}{\\sqrt{\\theta _{i}^{n}}}\\right) \\sqrt{\\theta 
_{i}^{n}}\\nabla v\\right|&\\le &\\epsilon (p-1)\\theta _{i}^{n}\\left|\\nabla v\\right|^{2}+\\dfrac{\\left|\\left( u\\tfrac{\\nabla \\theta _{i+1}^{n}}{\\sqrt{\\theta _{i}^{n}}}+v\\tfrac{\\nabla \\theta _{i}^{n}}{\\sqrt{\\theta _{i}^{n}}}\\right) \\right|^{2}}{4\\epsilon (p-1)}.$ That is $\\left|\\left( u\\tfrac{\\nabla \\theta _{i+2}^{n}}{\\sqrt{\\theta _{i+2}^{n}}}+v\\tfrac{\\nabla \\theta _{i+1}^{n}}{\\sqrt{\\theta _{i+2}^{n}}}\\right) \\sqrt{\\theta _{i+2}^{n}}\\nabla u\\right|&\\le &\\epsilon (p-1)\\theta _{i+2}^{n}\\left|\\nabla u\\right|^{2}+\\tfrac{\\left|\\left( u\\sqrt{\\theta _{i+2}}+v\\tfrac{\\theta _{i+1}}{\\sqrt{\\theta _{i+2}}}\\right)\\nabla \\sqrt{\\psi _{n}}\\right|^{2}}{\\epsilon (p-1)}, \\\\&& \\\\\\left|\\left( u\\tfrac{\\nabla \\theta _{i+1}^{n}}{\\sqrt{\\theta _{i}^{n}}}+v\\tfrac{\\nabla \\theta _{i}^{n}}{\\sqrt{\\theta _{i}^{n}}}\\right) \\sqrt{\\theta _{i}^{n}}\\nabla v\\right|&\\le &\\epsilon (p-1)\\theta _{i}^{n}\\left|\\nabla v\\right|^{2}+\\dfrac{\\left|\\left( u\\tfrac{\\theta _{i+1}}{\\sqrt{\\theta _{i}}}+v\\sqrt{\\theta _{i}}\\right) \\nabla \\sqrt{\\psi _{n}}\\right|^{2}}{\\epsilon (p-1)}.$ Since the $\\theta _{i}$ 's are positive and uniformly bounded on $\\overline{\\Omega }$ and since from (REF ) $\\underset{\\epsilon \\rightarrow 0}{\\lim }\\frac{\\theta _{i}^{n}\\theta _{i+2}^{n}\\left( 1-\\epsilon \\right) ^{2}}{\\left( \\theta _{i+1}^{n}\\right)^{2}}>\\frac{\\left( a+b\\right) ^{2}}{4ab},\\ i=0,1,...p-2.$ This gives $-p(p-1)\\overset{p-2}{\\underset{i=0}{\\sum }}C_{p-2}^{i}\\ \\int \\limits _{\\Omega }u^{i}v^{p-2-i}\\left( a\\theta _{i+2}^{n}\\left( 1-\\epsilon \\right) \\left|\\nabla u\\right|^{2}+\\left( a+b\\right) \\theta _{i+1}\\nabla u\\nabla v+b\\theta _{i}\\left( 1-\\epsilon \\right) \\left|\\nabla v\\right|^{2}\\right) dx\\le 0$ and $I_{1}^{n}+I_{2}^{n} &\\le &\\dfrac{p}{\\epsilon }\\overset{p-2}{\\underset{i=0}{\\sum }}C_{p-2}^{i}\\ \\int \\limits _{\\tfrac{1}{2n}<\\ \\left|x\\right|<\\tfrac{1}{n}}\\left\\lbrace \\left( u\\sqrt{\\theta _{i+2}}+v\\tfrac{\\theta _{i+1}}{\\sqrt{\\theta _{i+2}}}\\right) ^{2}+\\left( u\\tfrac{\\theta _{i+1}}{\\sqrt{\\theta _{i}}}+v\\sqrt{\\theta _{i}}\\right) ^{2}\\right\\rbrace \\left|\\nabla \\sqrt{\\psi _{n}}\\right|^{2}u^{i}v^{p-2-i}dx, \\\\&&$ Since the $\\theta _{i}$ 's and theirs derivatives are uniformly bounded on $\\overline{\\Omega },$ we get $I_{1}^{n}+I_{2}^{n}\\le C_{\\epsilon ,n}\\int \\limits _{\\tfrac{1}{2n}<\\ \\left|x\\right|<\\tfrac{1}{n}}\\overset{p-2}{\\underset{i=0}{\\sum }}C_{p-2}^{i}\\ \\left\\lbrace u^{2}+v^{2}\\right\\rbrace u^{i}v^{p-2-i}dx.$ The polynomial under the sign of integration in the above inequality is of degree $p$ with positive coefficients, then $I_{1}^{n}+I_{2}^{n} &\\le &C_{\\epsilon ,n}\\int \\limits _{\\tfrac{1}{2n}<\\ \\left|x\\right|<\\tfrac{1}{n}}\\overset{p-2}{\\underset{i=0}{\\sum }}C_{p-2}^{i}\\ \\left\\lbrace u^{2}+v^{2}\\right\\rbrace u^{i}v^{p-2-i}dx.", "\\\\&&$ Now $J^{n} &=&p\\overset{p-1}{\\underset{i=0}{\\sum }}\\int \\limits _{\\Omega }\\left[\\left( -\\theta _{i+1}^{n}+\\theta _{i}^{n}\\right) c(t,x)\\varphi (u,v)C_{p-1}^{i}u^{i}v^{p-1-i}\\right] dx.", "\\\\&&$ Since the finite sequence $\\left\\lbrace \\theta _{i}\\left( x\\right) \\right\\rbrace $ is decreasing for $x\\in \\left] -1,0\\right[ $ and increasing for $x\\in \\left] 0,1\\right[ $ and the function is $c(t,x)\\le 0$ for $x\\in \\left] -1,0\\right[ $ and $c(t,x)\\le 0$ for $x\\in \\left] 0,1\\right[ $ , then $J^{n}\\le 0$ .", "Finally, we have $L_{n}^{\\prime }(t)\\le C_{\\epsilon ,n}\\int 
\\limits _{\\tfrac{1}{2n}<\\ \\left|x\\right|<\\tfrac{1}{n}}\\left( u+v\\right) ^{p}dx\\le C_{\\epsilon ,n}\\int \\limits _{\\tfrac{1}{2n}<\\ \\left|x\\right|<\\tfrac{1}{n}}\\left( u+v\\right) ^{p}dx,$ which gives $L_{n}(t)\\le L_{n}\\left( 0\\right) +\\overset{t}{\\int \\limits _{0}}L(t)dt,\\ 0\\le t<T_{\\max }.", "$ We have $L_{n}\\left( 0\\right) =\\int \\limits _{\\Omega }\\left[ \\overset{p}{\\underset{i=0}{\\sum }}C_{p}^{i}\\theta _{i}^{n}u_{0}^{i}v_{0}^{p-i}\\right] dx,$ and $\\theta _{i}^{n}\\left( x\\right) =\\psi _{n}\\left( x\\right) \\theta _{i}\\left(x\\right) \\le \\theta _{i}\\left( x\\right) \\text{ in }\\left[ -1.1\\right] ,$ then $L_{n}\\left( 0\\right) \\le L(0), $ then $L_{n}(t)\\le L\\left( 0\\right) +\\overset{t}{\\int \\limits _{0}}L(t)dt,\\ 0\\le t<T_{\\max }.$ Define, for all integer $n\\ge 2$ , the function $X_{n}(x)=\\left\\lbrace \\begin{array}{l}1,\\ \\ \\tfrac{1}{n}\\le \\left|x\\right|\\le 1, \\\\\\\\0,\\ \\ \\left|x\\right|<\\tfrac{1}{n}\\end{array}\\right.$ Put $G_{n}(x)=X_{n}(x)H_{p}^{n}\\left( x\\right) ,$ then $L_{n}(t)\\ge \\int \\limits _{\\Omega }G_{n}(x)dx=:\\int \\limits _{\\Omega }X_{n}(x)H_{p}\\left( x\\right) dx,$ then (REF ) becomes $\\int \\limits _{\\Omega }X_{n}(x)H_{p}\\left( x\\right) dx\\le L\\left( 0\\right) +\\overset{t}{\\int \\limits _{0}}L(t)dt,\\ 0\\le t<T_{\\max }.$ But from (REF ), we have $c\\le \\theta _{i}\\left( x\\right) \\le CK\\text{ for all }x\\in \\text{ }\\left[-1.1\\right] ,$ then $c\\int \\limits _{\\Omega }X_{n}(x)\\left( u+v\\right) ^{p}dx\\le L\\left( 0\\right)+CK\\overset{t}{\\int \\limits _{0}}\\int \\limits _{\\Omega }\\left( u+v\\right)^{p}dxdt,\\ 0\\le t<T_{\\max }.$ This shows that the functions $X_{n}(x)\\left( u+v\\right) ^{p}$ are uniformly integrable on $\\Omega $ and bounded by $\\left( u+v\\right) ^{p}$ for all fixed time $0\\le t<T_{\\max }$ .", "But $X_{n}(x)\\left( u+v\\right) ^{p}$ converges to $\\left( u+v\\right) ^{p}$ a.e.", "in $\\Omega $ , then the Dominated Convergence Theorem gives $\\underset{n\\rightarrow \\infty }{\\lim }\\int \\limits _{\\Omega }X_{n}(x)\\left(u+v\\right) ^{p}dx=\\int \\limits _{\\Omega }\\left( u+v\\right) ^{p}dx,\\text{ forall fixed }0\\le t<T_{\\max }.$ Consequently we get $c\\int \\limits _{\\Omega }\\left( u+v\\right) ^{p}dx\\le L\\left( 0\\right) +CK\\overset{t}{\\int \\limits _{0}}\\int \\limits _{\\Omega }\\left( u+v\\right)^{p}dxdt,\\ 0\\le t<T_{\\max }.$ Then Gronwall's inequality gives $\\int \\limits _{\\Omega }\\left( u+v\\right) ^{p}dx\\le \\tfrac{L\\left( 0\\right) }{c}\\exp \\left( \\tfrac{CK}{c}\\right) t,$ that is $\\int \\limits _{\\Omega }\\left( u+v\\right) ^{p}dx\\le \\tfrac{1}{c}e^{CKt}\\int \\limits _{\\Omega }\\left( u_{0}+v_{0}\\right) ^{p}dx,\\ 0\\le t<T_{\\max }.$ This end the proof of the Theorem.", "All solutions of problem (REF )-(REF ) with uniformely bounded positive initial data in $\\Omega $ are global in time.", "Since, the function $c(x)$ is bounded on $\\Omega $ and the function $\\varphi (u,v)$ is of polynomial growth, the reactions are $L_{p}\\left( \\Omega \\right) $ for all $p>1$ which gives, from the preliminary remarks the global existence of the solution." ] ]
2212.05623
[ [ "Electrodynamics under Action of Null Cosmic Strings" ], [ "Abstract A method to study electromagnetic (EM) effects generated by a straight null cosmic string moving in classical EM fields is suggested.", "The string is shown to induce an additional EM field which can be described as a solution to homogeneous Maxwell equations with initial data set on a null surface, the string event horizon, where the string world-sheet belongs to.", "The initial data ensure the required holonomy of the string space-time caused by the gravity of the string.", "This characteristic initial value problem is used to study interaction of plane waves with null strings and perturbations by the strings of the Coulomb fields of electric charges.", "It is shown that parts of an incoming EM wave crossing the string horizon from different sides of the string are refracted with respect to each other and leave behind the string a wedge-like region of interference.", "A null string moving near an electric charge results in two effects: it creates a self-force of the charge and induces a pulse of EM radiation traveling away from the charge in the direction close to trajectory of the string." ], [ "Introduction", "Cosmic strings [1], [2] are hypothetical astrophysical objects which might have been produced in the early Universe.", "Cosmic strings yield a variety of physical effects, such as lensing effects, the Kaiser-Stebbins effect [3], which results in imprints of string's motion on cosmic microwave background [4].", "The cusps of tensile strings emit strong beams of high-frequency gravitational waves [5] which may contribute to the stochastic gravitational background.", "These effects are potentially observable [6],[7] and one expects that experimental evidences of cosmic strings would be an important step toward understanding physics at very high energies.", "Cosmic strings which appear as a result of the Kibble mechanism [1] are also called tensile strings since they have a non-vanishing tension and nonzero rest mass per unit length.", "A relatively less studied class of cosmic strings are null cosmic strings which are one-dimensional objects whose points move along trajectories of light rays, orthogonally to strings [8].", "The origin of null cosmic strings may be related to physics of fundamental strings at the Planckian energies [9]-[12].", "Equivalent names of null strings are massless strings [13] or tensionless strings, to distinguish them from the tensile strings.", "Like tensile strings null cosmic strings create holonomies of spacetime.", "The holonomies are null rotations belonging to the parabolic subgroup of the Lorentz group [13], [14].", "The group parameter of the holonomies is determined by the string energy per unit length.", "Possible astrophysical and cosmological effects of null cosmic strings [14], [15], [16], such as deviations of light rays and trajectories of particles in the gravitational field of strings as well as scattering of strings by massive sources, are similar to that of the tensile strings.", "A distinctive feature of null strings is their optical properties.", "The strings behave as one-dimensional null geodesic congruences characterized by a complex scalar which is determined by an analogue of the Sachs' optical equation [17].", "The analysis shows that world-sheets of null strings develop caustics which accumulate large amounts of energy [16].", "The main purpose of the present paper is to describe new electromagnetic (EM) effects generated by a straight null string in 
locally Minkowski space-time.", "On the technical side our aim is to define classical electrodynamics on space-times with null holonomy.", "It should be noted that field theories in the background geometry of a straight tensile string have been studied earlier in numerous publications in a reference frame where the string is at rest and the corresponding space-time has conical singularities, see e.g.", "[18]-[27] among the pioneering papers.", "Field theories in the presence of null strings are a relatively unexplored research area.", "An approach to physical effects caused by null strings has been suggested in [14].", "In the string space-time the null holonomy transformations have fixed points on the string world-sheet which belongs to a null hypersurface ${\cal H}$ , the string event horizon.", "The idea of [14] is to set, for matter crossing the string horizon, “initial” data on ${\cal H}$ to ensure the required holonomy transformations.", "The approach has been used in [14], [15] to describe the Kaiser-Stebbins effect caused by null cosmic strings.", "In the present paper, we extend the method of [14] to study observable effects of classical EM fields generated by null cosmic strings.", "The paper is organized as follows.", "In Sec.", "we describe the holonomy method of [14] with the focus on free field theories.", "Finding solutions to wave equations in a free field theory on a space-time of a null string, in the domain above ${\cal H}$ , is equivalent to solving an initial value problem with initial data on ${\cal H}$ determined by incoming data.", "The initial data are null rotated so as to ensure the required holonomy.", "In the theory of hyperbolic second order partial differential equations (PDE) such an initial value (Cauchy) problem is called the characteristic initial value problem [28] since the standard pair of initial data are not independent on $\cal H$ .", "We discuss in detail how the initial data should be chosen in the case of scalar field theory and in Maxwell's theory, where gauge constraints should be taken into account.", "We also demonstrate that observers crossing $\cal H$ do not see discontinuities in the stress-energy tensor of the fields.", "In the next Sections we apply this method to study different EM effects induced by null strings.", "Scattering of monochromatic plane electromagnetic waves on null strings is considered in Sec .", "The null cosmic string cuts the wave front into two waves and changes their directions on $\cal H$ relative to each other.", "That is, the string horizon acts as a refractive medium.", "As a result, the string leaves behind a wedge-like region, an interference wedge, where the refracted waves interfere.", "An observer inside the interference wedge sees two waves, as if they come from different sources.", "This property is a manifestation of the lensing effect.", "The interference is also analogous to the creation of overdensities of matter by cosmic strings.", "In Sec.", "we define EM fields created by charges in the presence of a straight null string.", "The solution of the Maxwell equations above the string horizon ${\cal H}$ is studied in detail for a single point charge.", "We show that in addition to the standard Coulomb field of the charge the string generates a rapidly changing EM field which acts as a self-force on the charge.", "At large times the additional field looks like a pulse of EM radiation traveling away from the charge in a direction close to the trajectory of the string.", "We calculate numerically the energy density and the energy flow of the pulse and show that its 
duration and peak is determined by the impact parameter between the string and the charge.", "Other applications of our results are discussed in Sec.", ".", "The suggested method can be used to describe classical field effects on gravitational shock-wave backgrounds.", "Section is a summary.", "Our analysis is based on exact solution for the Cauchy problem for a scalar plane wave.", "The derivation of this solution and the properties of the Green function with a delta-function source on $\\cal H$ can be found in Sec.", ".", "Some details regarding the homogeneous solution for the Coulomb potential are given in Sec.", "." ], [ "Coordinate conditions on the string horizon", "We consider a straight cosmic string which is stretched along $z$ -axis and moves along $x$ -axis in $R^{1,3}$ .", "It is convenient to use the light-cone coordinates $v=t+x$ , $u=t-x$ , where the metric is $ds^2=-dv du +dy^2+dz^2~~.$ The string world-sheet can be defined by equations $u=y=0$ .", "The parabolic subgroup of the Lorentz transformations (null rotations), $(x^{\\prime })^\\mu =M^\\mu _{~\\nu }\\left(\\lambda \\right)x^\\nu $ , acts on $u,v,y,z$ coordinates in $R^{1,3}$ as follows: $u^{\\prime }=u~~,~~v^{\\prime }=v+2\\lambda y+\\lambda ^2u~~,~~y^{\\prime }=y+\\lambda u~~,~~z^{\\prime }=z~~,$ where $\\lambda $ is some real parameter.", "Transformation of a vector is $V_u^{\\prime }=V_u-\\lambda V_y+\\lambda ^2 V_v~~,~~V_v^{\\prime }=V_v~~,~~V_y^{\\prime }=V_y-2\\lambda V_v~~,~~V_z^{\\prime }=V_z~~,$ or $V^{\\prime }_\\mu =M_{\\mu }^{~\\nu }(\\lambda )V_\\nu $ , where $M_{\\mu }^{~~\\nu }=\\eta _{\\mu \\mu ^{\\prime }}\\eta ^{\\nu \\nu ^{\\prime }}M^{\\mu ^{\\prime }}_{~~\\nu ^{\\prime }}$ .", "For a null string with the world-sheet $u=y=0$ a parallel transport of a vector $V$ along a closed contour around the string results in a null rotation, $V^{\\prime }=M(\\omega )V$ with $\\omega $ defined as, see [13], $\\omega \\equiv 8\\pi GE~~.$ The world-sheet is a fixed point set of (REF ).", "The hypersurface $u=0$ is the event horizon of the string.", "We denote it by ${\\cal H}$ .", "The holonomy method suggested in [14] is to set initial data on the string horizon.", "To determine these data the string space-time is decomposed onto two parts: $u<0$ , and $u>0$ .", "We call trajectories of particles and light rays at $u<0$ and $u>0$ ingoing and outgoing trajectories, respectively.", "To describe outgoing trajectories, one introduces two types of coordinate charts: $R$ - and $L$ -charts, with cuts on the horizon either on the left ($u=0, y<0$ ) or on the right ($u=0, y>0$ ) to the string, respectively.", "The initial data on the string horizon are related to the ingoing data via null rotations (REF ) taken at $u=0$ .", "For brevity the right ($u=0, y>0$ ) and the left ($u=0, y<0$ ) parts of $\\cal H$ will be denoted as ${\\cal H}_+$ and ${\\cal H}_-$ , respectively.", "For the $R$ -charts the cut is along ${\\cal H}_-$ .", "The coordinate transformations on the string horizon and the initial data for the outgoing trajectories are $x_+^\\mu =\\bar{x}^\\mu \\mid _{{\\cal H}_+}~~,~~u_+^\\mu =\\bar{u}^\\mu \\mid _{{\\cal H}_+}~~,$ $x_-^\\mu =M^\\mu _{~\\nu }\\left(\\omega \\right)\\bar{x}^\\nu \\mid _{{\\cal H}_-}~~,~~u_-^\\mu =M^\\mu _{~\\nu }\\left(\\omega \\right)\\bar{u}^\\nu \\mid _{{\\cal H}_-}~~.$ where $\\bar{x}^\\mu $ , $\\bar{u}^\\mu $ are the coordinates and velocities of the corresponding ingoing trajectory when it reaches the horizon.", "It follows from (REF ) that the coordinate transformations (REF ) 
at $y<0$ are reduced to a shift of a single coordinate: $v_-=\\bar{v}+2\\omega y~~,~~y<0~~.$ Thus, on the $R$ -charts the 'right' trajectories ($y>0$ ) behave smoothly across the horizon, while the `left' trajectories ($y<0$ ) are shifted along the $v$ coordinate and change their direction under the null rotation.", "The descriptions based on $R$ - or $L$ -charts are equivalent.", "The choice of a chart is a matter of convenience depending on the observer's trajectory.", "$L$ -charts are dual to $R$ -charts.", "They are smooth everywhere except the right cut on the horizon, ${\\cal H}_+$ .", "The shift of the coordinate now is $v_+=\\bar{v}-2\\omega y~~,~~y>0~~.$ 'Right' outgoing trajectories experience null rotations like in (REF ), (REF ) where $\\omega $ should be replaced with $-\\omega $ .", "The reason why descriptions in terms of $R$ - and $L$ -charts are equivalent is that only relative transformations of 'left' and 'right' outgoing trajectories have physical and geometrical meanings." ], [ "Characteristic Cauchy problem for scalar fields", "The above holonomy method can be extended to describe classical field theories, or fibre bundles over the null string geometry.", "In this Section we consider non-interacting scalar fields $\\phi $ with equation $(\\Box -m^2)\\phi (x)=j(x)~~,$ $\\Box =\\partial _\\mu \\partial ^\\mu =-4\\partial _u \\partial _v+\\partial ^2_y+\\partial ^2_z~~,$ where $j(x)$ is an external source.", "It is a second order hyperbolic PDE which allows a well-posed Cauchy problem on initial space-like hypersurfaces.", "The initial data include fields and their first time derivatives.", "Since we need solutions of (REF ) above $\\cal H$ , $u>0$ , it is natural to consider the initial value problem with $\\cal H$ as the initial hypersurface.", "The point is that $\\cal H$ is null, and it is a characteristic surface [28] of (REF ), where standard Cauchy data, fields and their first derivatives, are not independent.", "A solution of (REF ) can be fixed just by the value of the field $\\hat{\\phi }({\\bf x})=\\phi (x)\\mid _{\\cal H}$ , where ${\\bf {x}}\\equiv (v,y,z)$ .", "An analogue of a time derivative, $\\chi ({\\bf {x}})=\\partial _u \\phi (x)$ at $u=0$ , can be expressed from (REF ) as $\\chi ({\\bf {x}})=\\frac{1}{4}\\int _{-\\infty }^v dv^{\\prime } \\left[(\\partial _y^2+\\partial _z^2-m^2)\\hat{\\phi }({\\bf x}^{\\prime })-j({\\bf x}^{\\prime })\\right]+f(y,z)~~,$ where ${\\bf {x}}^{\\prime }\\equiv (v^{\\prime },y,z)$ .", "Asymptotic properties of $\\chi ({\\bf {x}})$ at future or past null infinities depend on an arbitrary function $f(y,z)$ in (REF ).", "This function can be fixed or eliminated by requiring appropriate behavior of $\\hat{\\phi }({\\bf x})$ and $\\chi ({\\bf {x}})$ at null infinities.", "Under these conditions $\\chi $ is fixed by $\\hat{\\phi }$ and $j$ on $\\cal H$ .", "To take into account the holonomy of the string space-time the initial data on ${\\cal H}_+$ and ${\\cal H}_-$ should be considered separately.", "We denote them as $\\phi (x)\\mid _{{\\cal H}_\\pm }=\\hat{\\phi }_\\pm ({\\bf x})~~.$ In the $R$ -chart, the continuity of solutions across $\\cal H$ is ensured by the following transition conditions: $\\hat{\\phi }_+({\\bf x})=\\bar{\\phi }({\\bf x})\\mid _{{\\cal H}_+}~~,~~\\hat{\\phi }_-({\\bf x})=\\bar{\\phi }(\\bar{\\bf x})\\mid _{{\\cal H}_-}~~,$ $\\bar{\\bf x}={\\bf x}-2\\omega y{\\bf {q}}$ where $\\bar{\\phi }$ is the value on $\\cal H$ of the ingoing field at $u<0$ , and $q^i=\\delta ^i_v$ .", "We also require 
continuity of the current in the r.h.s.", "of (REF ), $j({\\bf x})\\mid _{{\\cal H}_+}=\\bar{j}({\\bf x})~~,~~j({\\bf x})\\mid _{{\\cal H}_-}=\\bar{j}(\\bar{\\bf x})~~.$ Conditions (REF ) are analogous to conditions (REF ), (REF ) for trajectories of particles and light rays.", "According to (REF ) a left observer with `ingoing' coordinates $\\bar{\\bf {x}}$ on ${\\cal H}_-$ will be shifted to a coordinate $\\bar{\\bf {x}}+2\\omega y{\\bf {q}}$ .", "The transition condition (REF ) means that the left observer measures in the new coordinates the same value of the field and the current.", "Let us show that vector field $V_\\mu =\\partial _\\mu \\phi $ has the required holonomy when going around the string world-sheet.", "To this aim it is enough to demonstrate that 'outgoing' and 'ingoing' data on $\\cal H$ , in the $R$ -chart, are related as $V_{\\mu }({\\bf x})=\\bar{V}_\\mu (x)\\mid _{{\\cal H}_+}~~,~~V_{\\mu }({\\bf x})=M_{\\mu }^{~\\nu }(\\omega )\\bar{V}_\\nu (\\bar{x})\\mid _{{\\cal H}_-}~~.$ Here $M_{\\mu }^{~\\nu }(\\omega )$ are defined in (REF ).", "It is easy to see that (REF ) is fulfilled on ${\\cal H}_-$ for $v$ and $z$ components.", "For the $y$ component one finds $\\partial _y\\phi (x)\\mid _{{\\cal H}_-}=(\\partial _{\\bar{y}}-2\\omega \\partial _{\\bar{v}})\\bar{\\phi }(\\bar{\\bf x})\\mid _{{\\cal H}_-}~~,$ in agreement with (REF ).", "The result for $\\chi ({\\bf {x}})=\\partial _u \\phi (x)$ at ${\\cal H}_-$ follows from (REF ) and (REF ) which imply $\\partial _v\\chi ({\\bf {x}})=\\frac{1}{4}\\left[(\\partial _y^2+\\partial _z^2-m^2)\\phi ({\\bf x})-j({\\bf x})\\right]=$ $\\frac{1}{4}\\left[((\\partial _{\\bar{y}}-2\\omega \\partial _{\\bar{v}})^2+\\partial _{z}^2-m^2)\\bar{\\phi }(\\bar{\\bf x})-\\bar{j}(\\bar{\\bf x})\\right]=\\partial _{\\bar{v}}\\left(\\bar{\\chi }(\\bar{\\bf {x}})-\\omega \\partial _{\\bar{y}}\\bar{\\phi }(\\bar{\\bf x})+\\omega ^2 \\partial _{\\bar{v}}\\bar{\\phi }(\\bar{\\bf x})\\right)~~,$ where $\\bar{\\chi }(\\bar{\\bf {x}})=\\partial _u \\bar{\\phi }(\\bar{x})$ at ${\\cal H}_-$ .", "That is, under corresponding conditions at null infinities, $\\chi ({\\bf {x}})=\\bar{\\chi }(\\bar{\\bf {x}})-\\omega \\partial _{\\bar{y}}\\bar{\\phi }(\\bar{\\bf x})+\\omega ^2 \\partial _{\\bar{v}}\\bar{\\phi }(\\bar{\\bf x})~~.$ This coincides with null rotation of the $u$ component in (REF ).", "It is clear that relation (REF ) is a consequence of relativistic invariance of equation of motion (REF ).", "The stress-energy tensor of the fields $T_{\\mu \\nu }$ is constructed of $\\phi $ and $\\partial _\\mu \\phi $ .", "Since $T_{\\mu \\nu }(x)=M_{\\mu }^{~\\alpha }(\\omega )M_{\\nu }^{~\\beta }(\\omega )\\bar{T}_{\\alpha \\beta }(\\bar{x})\\mid _{{\\cal H}_-}~~,$ it has the required holonomy which belongs to tensor representation of the Lorentz group.", "Transition conditions (REF ) guarantee that left observers with 4-velocities $u_o$ do not see discontinuities in quantities like $u_o^\\mu \\partial _\\mu \\phi $ or $u_o^\\mu u_o^\\nu T_{\\mu \\nu }$ when crossing the string horizon.", "We can now formulate the characteristic initial value problem: – the solutions $\\phi (x)$ of hyperbolic type PDE (REF ) are looked for in the domain $u>0$ ; – the initial data are set on null hypersurface $\\cal H$ ($u=0$ ) and consist of a single variable, the value of the field $\\hat{\\phi }({\\bf x})$ , appropriate asymptotic conditions at null infinities are assumed; – the initial data $\\hat{\\phi }({\\bf x})$ are determined by `incoming' solution $\\bar{\\phi 
}(\\bar{x})$ in the domain $u<0$ with the help of transition conditions (REF ) which are synchronized with coordinate conditions (REF ), (REF ).", "A solution to (REF ),(REF ) can be written as $\\phi (x)=\\phi _I(x)+\\phi _H(x)~~.$ Here $\\phi _I(x)$ is a particular solution to inhomogeneous equation (REF ) in $R^{1,3}$ taken at $u>0$ .", "We denote by $\\hat{\\phi }_{I,\\pm }$ the corresponding data of $\\phi _I(x)$ on ${\\cal H}_\\pm $ .", "The field $\\phi _H(x)$ is a solution to a homogeneous problem $(\\Box -m^2)\\phi _H(x)=0~~,~~\\phi _H(x)\\mid _{{\\cal H}_\\pm }=\\hat{\\phi }_{H,\\pm }({\\bf x})~~,~~\\hat{\\phi }_{H,\\pm }({\\bf x})=\\hat{\\phi }_{\\pm }({\\bf x})-\\hat{\\phi }_{I,\\pm }({\\bf x})~~.$ The Cauchy data in (REF ) are chosen so that to ensure the required data (REF ) for $\\phi (x)$ .", "The solution to (REF ) can be written as $\\phi _H(x)=\\int _{y^{\\prime }>0}d{\\bf x^{\\prime }}D(u,{\\bf x-x^{\\prime }})\\hat{\\phi }_{H,+}({\\bf x}^{\\prime })+\\int _{y^{\\prime }<0}d{\\bf x^{\\prime }}D(u,{\\bf x-x^{\\prime }})\\hat{\\phi }_{H,-}({\\bf x}^{\\prime })~~,$ where the $D$ -function is the solution to the following problem: $(\\Box -m^2) D(x)=0~~,~~D(u,{\\bf x})\\mid _{u=0}=\\delta ^{(3)}({\\bf x})~~.$ It is convenient to assume that $\\phi _I$ is the solution when the string is absent.", "Then physical effects generated by the null string are related to $\\phi _H$ .", "It is this homogeneous part we study in next Sections in different physical situations." ], [ "Characteristic Cauchy problem for a Maxwell field", "Our primary interest is electromagnetic fields on space-time of a null cosmic string.", "The standard initial value problem for the Maxwell equations in Minkowsky space-time, $\\partial _\\mu F^{\\mu \\nu }=j^\\nu ~~,$ $F_{\\mu \\nu }=\\partial _\\mu A_\\nu - \\partial _\\nu A_\\mu $ , $\\partial j=0$ , with initial space-like hypersurface $t=0$ is determined by the following initial data: $A_i\\mid _{t=0}=a_i~~,~~F_{0i}\\mid _{t=0}=\\pi _i~~,~~i=x,y,z~~.$ The pairs $a_i,\\pi _i$ are canonical coordinates and momenta.", "The gauge symmetry imposes the constraint $\\partial _i \\pi _i=-j_0\\mid _{t=0}~~,$ which leaves 2 independent momenta.", "Also the gauge transformations, $\\delta A_\\mu =\\partial _\\mu \\lambda $ , allow one to exclude one of coordinates $a_i$ .", "Therefore there are only 4 independent initial data.", "For the subsequent analysis, it is convenient to use the Lorentz gauge condition $\\partial A=0$ since it is invariant under holonomy transformations on $\\cal H$ and can be imposed globally on cosmic string space-time.", "The Maxwell equations are reduced to $\\Box A_\\mu =j_\\mu ~~,~~\\partial A=0~~.$ The initial data for (REF ) are $A_\\mu \\mid _{t=0}=a_\\mu ~~,~~\\dot{A}_\\mu \\mid _{t=0}=p_\\mu ~~.$ The data $a_0$ and $p_0$ are not independent: $p_0$ is fixed by the gauge condition, while $a_0$ is determined by constraint (REF ).", "The gauge freedom of (REF ), $\\delta A_\\mu =\\partial _\\mu \\lambda $ , $\\Box \\lambda =0$ , that implies transformations of the initial data, $\\delta a_i =\\partial _i \\lambda , \\quad \\delta a_0=\\dot{\\lambda }, \\quad \\delta p_i =\\partial _i \\dot{\\lambda }, \\quad \\delta p_0=\\triangle \\lambda ,$ leaves 4 independent initial data.", "Consider now the characteristic initial value problem for Maxwell equations with initial hypersurface $\\cal H$ ($u=0$ ).", "The canonical coordinates and momenta can be determined by using the Hamilton-Jacobi method from the variation of the Maxwell 
action with $\\cal H$ as a boundary, $A_b\\mid _{\\cal H}=a_b~~,~~F^{ub}\\mid _{\\cal H}=\\pi _b~~,~~b=v,y,z~~.$ The data are subject to the constraint $\\partial _v \\pi _v+\\partial _x \\pi _x+\\partial _y \\pi _y=-j^u\\mid _{\\cal H}~~.$ The momenta $\\pi _y=2F_{vy}$ , $\\pi _z=2F_{vz}$ are completely determined by the initial data $a_b$ .", "If these momenta are known, $\\pi _v=4F_{uv}$ is fixed by (REF ).", "Since there is the gauge freedom $\\delta a_b =\\partial _a \\lambda $ in definition of $\\pi _y,\\pi _z$ , the initial value problem on $\\cal H$ requires 2 independent data, twice less than that in the initial problem on space-like hypersurface.", "If we fix the Lorentz gauge the initial data for (REF ) can be formulated as $A_\\mu \\mid _{\\cal H}=a_\\mu ~~,$ where $a_u$ should be determined by the gauge condition.", "The remaining gauge freedom then leaves 2 independent data.", "Constraint (REF ) follows from the gauge condition and equation for the $v$ component.", "We go now to Maxwell equations on string space-time.", "Let $\\bar{A}_\\mu $ be a solution to (REF ) with an electric current $\\bar{j}^\\mu $ in the region $u<0$ .", "We continue to work in the $R$ -chart.", "By analogy with (REF ), the following conditions ensure continuity of the current across $\\cal H$ : $j^\\mu ({\\bf x})\\mid _{{\\cal H}_+}=\\bar{j}^\\mu ({\\bf x})~~,~~j^\\mu ({\\bf x})\\mid _{{\\cal H}_-}=M^{\\mu }_{~\\nu }(\\omega )\\bar{j}^\\nu (\\bar{\\bf x})~~,$ where $j^\\mu $ is defined at $u>0$ , and $\\bar{\\bf x}={\\bf x}-2\\omega y{\\bf {q}}$ .", "The conservation law, $\\partial j=0$ , is invariant with respect to null rotations.", "Together with (REF ) it implies that the electric charge $Q$ defined on null surfaces $u=C$ , $Q=\\int _{u=C}d\\Sigma _\\mu j^\\mu (u,{\\bf x})=\\int _{u=C} j^u(u,{\\bf x})dv dy dz~~$ does not change when crossing $\\cal H$ .", "The characteristic initial value problem for EM fields at $u>0$ includes field equations (REF ) and initial data (REF ) set on $\\cal H$ .", "The data are determined as $a_{b}({\\bf x})=\\bar{a}_b({\\bf x})\\mid _{{\\cal H}_+}~~,~~a_{b}({\\bf x})=M_{b}^{~c}(\\omega )\\bar{a}_c(\\bar{\\bf x})\\mid _{{\\cal H}_-}~~,$ $\\bar{a}_b ({\\bf x})=\\bar{A}_b (x)\\mid _{{\\cal H}}~~,~~b=v,y,z~~.$ Note that $M_{b}^{~u}=0$ , see (REF ).", "The initial data for the $u$ -component is determined by the gauge condition $\\partial A=0$ .", "By taking into account (REF ) and the fact that $a_v$ is invariant under null rotations one finds that $\\partial _v a_u=\\partial _{\\bar{v}}(\\bar{a}_u-\\omega \\bar{a}_y+\\omega ^2 \\bar{a}_v)~~.$ This relation holds on ${\\cal H}_-$ and is consistent with the transformation law $a_u({\\bf x})=M_u^{~\\mu } \\bar{a}_\\mu (\\bar{\\bf x})$ .", "By using this one can demonstrate the transition conditions for the Maxwell tensor on ${\\cal H}_-$ $F_{\\mu \\nu }(x)=M_{\\mu }^{~\\alpha }(\\omega )M_{\\nu }^{~\\beta }(\\omega )\\bar{F}_{\\alpha \\beta }(\\bar{x})\\mid _{{\\cal H}_-}~~,$ in accord with the holonomy of the space-time.", "As is explained in Sec.", "REF it is convenient to look for a solution to (REF ), (REF ) in the form: $A_\\mu (x)=A_{I,\\mu }(x)+A_{H,\\mu }(x)~~,$ where $A_{I,\\mu }$ is a particular solution to (REF ) and $A_{H,\\mu }$ is a solution to a homogeneous characteristic initial value problem $\\Box A_{H,\\mu }=0~~,~~\\partial A_H=0~~,~~A_{H,b} (x)\\mid _{{\\cal H}}=a_{H,b}({\\bf x})~~.$ If we assume that $A_{I,\\mu }$ coincides with the solution in the absence of the string, that is $A_{I,\\mu 
}=\\bar{A}_{\\mu }$ on ${\\cal H}_+$ , the initial data in (REF ) become $a_{H,b}({\\bf x})\\mid _{{\\cal H}_+}=0~~,~~a_{H,b}({\\bf x})\\mid _{{\\cal H}_-}=M_{b}^{~c}(\\omega )\\bar{a}_c(\\bar{\\bf x})-a_{I,b}({\\bf x})~~,$ where $a_{I,b}({\\bf x})=A_{I,b} (x)\\mid _{{\\cal H}}=\\bar{a}_b({\\bf x})$ ." ], [ "Refraction of EM waves on the string horizon", "The first type of physically interesting effects is related to scattering of electromagnetic waves on null strings.", "Consider monochromatic plane waves which have the standard form before scattering on the string (in the region $u<0$ ): $\\bar{A}_\\mu (\\bar{x})=\\Re ~(\\bar{E}_\\mu ~e^{i \\bar{k}\\cdot \\bar{x}})~~,$ where $\\bar{E}_\\mu $ is some complex polarization vector, $\\bar{k}^\\mu \\bar{E}_\\mu =0$ .", "As earlier, we denote the incoming data with the bar.", "Other types of electromagnetic waves can be treated as a superposition of plane monochromatic waves.", "We are dealing with (REF ) when $j=0$ .", "This simplifies the choice of data on $\\cal H$ .", "On the R-chart there is no transformation of the part of the wave crossing ${\\cal H}_+$ .", "If (REF ) are applied to (REF ) on ${\\cal H}_-$ one concludes that the wave leaves ${\\cal H}_-$ with the transformed momentum $k_-^\\mu =M^\\mu _{~\\nu }\\left(\\omega \\right)\\bar{k}^\\nu ~~.$ The transformed momentum $k_-$ is introduced to satisfy the condition $\\bar{k}\\cdot \\bar{x}\\mid _{{\\cal H}_-}=k_-\\cdot x_-$ , where $x_-$ are defined by (REF ).", "The initial data (REF ), (REF ) and the gauge $\\partial A=0$ then imply the following conditions: $E^+_{\\mu }=\\bar{E}_\\mu ~~,~~E^-_{\\mu }=M_{\\mu }^{~\\nu }(\\omega )\\bar{E}_\\nu ~~$ for the polarization vectors of waves which leave ${\\cal H}_+$ and ${\\cal H}_-$ , respectively.", "For the right observers the wave from ${\\cal H}_-$ changes its energy and looks refracted.", "If $E$ and $\\vec{k}$ are, respectively, the energy and the momentum of the incoming wave, the refraction angle $\\varphi _{\\mbox{\\tiny {refr}}}$ and the energy of the refracted wave are $\\cos \\varphi _{\\mbox{\\tiny {refr}}}={(\\vec{k}_-\\vec{k}) \\over E_-E}={1 \\over EE_-}\\left[E^2+{\\omega ^2 \\over 2}(Ek^x-(k^x)^2)+\\omega Ek^y\\right]$ $E_-=\\left(1+{\\omega ^2 \\over 2}\\right)E-{\\omega ^2 \\over 2}k^x+\\omega k^y~~.$ The refraction is absent only for the waves traveling along the string axis $z$ when $k^x=k^y=0$ .", "To study physical effects caused by the refraction of waves on $\\cal H$ we need to solve (REF ) with the Cauchy data (REF ).", "Since the problem is homogeneous its solution is given by (REF ).", "In case of the monochromatic plane waves each component of $A_\\mu $ can be treated as a scalar wave.", "If one ignores the effects related to polarizations, the basic features of the scattering problem can be understood by studying a scalar field theory with equation $\\Box \\phi =0~~.$ Suppose that the scalar field $\\phi $ behaves at $u<0$ as $\\bar{\\phi }(\\bar{x})=e^{i \\bar{k}\\cdot \\bar{x}}~~.$ In the domain $u>0$ the scattered wave (REF ) is a superposition, $\\phi (x)=\\int _{y^{\\prime }>0}d{\\bf x^{\\prime }}D(u,{\\bf x-x^{\\prime }})e^{i k_+\\cdot x^{\\prime }}\\mid _{u^{\\prime }=0}+\\int _{y^{\\prime }<0}d{\\bf x^{\\prime }}D(u,{\\bf x-x^{\\prime }})e^{i k_-\\cdot x^{\\prime }}\\mid _{u^{\\prime }=0}$ $\\equiv \\phi _+(x)+\\phi _-(x)~~,$ where $k_+=\\bar{k}$ , $k_-=M(\\omega )\\bar{k}$ .", "To represent solutions $\\phi _\\pm (x)$ we introduce the following dimensionless functions: $f(k,x)=u k_y+2 k_v 
y~~,~~g(k,x)=\\frac{f^2(k,x)}{4k_v u}~~,$ $f_\\pm =f(k_\\pm ,x)~~,~~g_\\pm =g(k_\\pm ,x)~~.$ After some algebra one gets, see Sec.", ", $\\phi _\\pm (x)=[\\theta (\\pm f_{\\pm })+\\varepsilon (\\pm f_{\\pm }) G(g_\\pm )]\\exp (i k_{\\pm } x)~~,~~k_v>0$ $\\phi _\\pm (x)=[\\theta (\\mp f_{\\pm })+\\varepsilon (\\mp f_{\\pm }) G^*(-g_\\pm )]\\exp (i k_{\\pm } x)~~,~~k_v<0~~.$ where $\\theta $ and $\\varepsilon $ are the step and the sign functions, respectively.", "The complex factor $G(g)$ is defined, for $\\Re ~g>0$ , as $G(g)=-\\frac{e^{i\\pi /4}}{\\pi }\\int _{0}^{\\infty }\\frac{dt}{t^2+i}e^{-g(t^2+i)}=-\\frac{1}{2}\\mbox{Erfc}(\\sqrt{i g})~~.$ By using (REF ) -(REF ) one can check that $\\Box \\phi _\\pm =0$ at $u>0$ .", "The $G$ -factor has the following expansions at small and large $g$ : $G(g)&=&-\\frac{1}{2}+\\frac{e^{i\\pi /4}}{\\sqrt{\\pi }} g^{1/2}\\left(1-\\frac{i}{3}g+\\dots \\right), \\quad g\\rightarrow 0,\\\\G(g)&=&\\frac{e^{ i \\pi /4}}{2\\sqrt{\\pi }}\\, e^{-i g} g^{-1/2}\\left(e^{ i \\pi /2} -\\frac{1}{2 g} +\\dots \\right), \\quad g\\rightarrow \\infty .$ As a consequence of (), $G$ vanishes as $u\\rightarrow 0$ , and $\\phi _\\pm $ satisfy the required boundary conditions.", "One can also check with the help of (REF ) that solutions (REF ), (REF ) are continuous across the surfaces $f_\\pm =0$ .", "Due to the presence of the $G$ -factor the scattered wave is not monochromatic near the string world-sheet, $u\\rightarrow 0$ , $g\\rightarrow 0$ , in a 'near-field zone'.", "In a 'far-field zone', $g\\gg 1$ , the wave has a simple form.", "For instance, for $k_v>0$ , it is $\\phi (x)=\\theta (f_{+})\\exp (i k_{+} x)+\\theta (-f_{-})\\exp (i k_{-} x)+\\phi _{\\mbox{\\tiny {tail}}}(x)~~,$ $\\phi _{\\mbox{\\tiny {tail}}}(x)=\\varepsilon (f_{+}) {1 \\over \\sqrt{4\\pi g_+}}e^{i k_{+} x+i\\varphi _+}+\\varepsilon (-f_{-}) {1 \\over \\sqrt{4\\pi g_-}}e^{i k_{+} x+i\\varphi _-}+O(g_\\pm ^{-3/2})~~,$ where $\\varphi _{\\pm }=g_{\\pm }+\\pi /4$ .", "Contributions $\\phi _{\\mbox{\\tiny {tail}}}(x)$ are 'tails' whose amplitudes decay as $g_\\pm ^{-1/2}$ .", "As follows from (REF ), $|g_\\pm |\\sim L/\\lambda $ where $\\lambda $ is a wave length and $L$ is a distance related to position of the observer with respect to the string trajectory.", "Physical effects in the far-field zone are interesting for the distant observers.", "Right observers crossing ${\\cal H}_+$ interpret (REF ) as a refraction of the left wave on ${\\cal H}_-$ .", "The surfaces $f_{\\pm }(x)=0$ determine boundaries of diffraction of right and left parts of the wave behind the string.", "The normal vectors $n_{\\pm }$ to these surfaces $df_{\\pm }=n_{\\mu }dx^{\\mu }~~,$ are orthogonal to the wave vectors $(n_\\pm \\cdot k_\\pm )=0~~.$ The surfaces $f_{\\pm }(x)=0$ intersect at the string world-sheet.", "The domains of the diffraction overlap.", "In the overlap region, $f_+>0$ , $f_-<0$ , $u>0$ , the left and right waves interfere since the wave vectors $k_\\pm $ are related by the nontrivial null rotation, $k_-=M(\\omega )k_+$ .", "Thus, the null string leaves behind an interference wedge.", "This physical effect is similar to the effect of massive and null strings which leave behind the regions of overdensities of non-relativistic matter.", "Figure: The energy density of a real scalar field is shown for the string at the moment t 0 t_0.", "The string is stretched along the zz axis, orthogonal to plane of the Figure, and is located at x=t 0 =2x=t_0=2, y=0y=0; ω=1/2\\omega =1/2.", "For the left figure: k v >0k_v>0, 
k -y =k y -2ωk v >0k_{-y}=k_y-2\\omega k_v>0.", "For the right figure: k y =0k_y=0, cosφ int =(1+ω 2 ) -1/2 \\cos \\phi _{int}=(1+\\omega ^2)^{-1/2}.To demonstrate the existence of the overlap region we fix the moment $t=t_0$ , put $k_+=k$ , and suppose that $k_y>0$ , $k_{-y}=k_y-2\\omega k_v>0$ .", "In coordinates $x$ and $y$ conditions $f_+>0$ , $f_-<0$ , $u>0$ look as $x_-(y)<x<x_+(y)~~,~~x<t_0~~,$ $x_+(y)=t_0+{2k_v y\\over k_y}~~,~~x_-(y)=t_0+{2k_v y \\over k_y-2\\omega k_v}~~.$ It is clear that conditions (REF ) hold for $y<0$ .", "The angle of the interference wedge, $\\varphi _{\\mbox{\\tiny {intf}}}$ , can be defined as the angle between the lines $x=x_\\pm (y)$ $\\cos \\varphi _{\\mbox{\\tiny {intf}}}={k_y(k_y-2\\omega k_v)+4k_v^2\\over (k_y^2+4k_v^2)^{1/2}((k_y-2\\omega k_v)^2+4k_v^2)^{1/2}}~~.$ One can see that $\\varphi _{\\mbox{\\tiny {intf}}}=O(\\omega ^2)$ at small $\\omega $ .", "The interference wedge exists at $k_y=0$ when $\\cos \\varphi _{\\mbox{\\tiny {intf}}}=(1+\\omega ^2)^{-1/2}$ .", "To illustrate this effect we evaluate the energy density $T_{00}(x)$ , as measured by right observers for a real scalar field $\\phi (x)=\\Re \\bigl (\\phi _+(x) +\\phi _-(x)\\bigr )$ , where $\\phi _{\\pm }$ are defined by (REF ).", "Fig.", "REF shows $T_{00}(x)$ at the moment $t_0=2$ in the $(x,y)$ plane orthogonal to the string for the string parameter $\\omega =0.5$ .", "All coordinates are given in dimensionless units (multiplied by $k_v=1$ ).", "We now return to solutions for EM waves.", "Solutions for incoming waves of the form $e^{-ik\\cdot x}$ can be obtained from (REF ), (REF ) by changing sign of the momentum.", "As one can check by using (REF ), (REF ) the solution is the complex conjugate of the solution for $e^{ik\\cdot x}$ .", "With these remarks scattered EM waves look (for $k_v>0$ ) as follows: $A^{\\pm }_\\mu (x)=\\Re \\left\\lbrace \\left[\\theta (\\pm f_{\\pm })+\\varepsilon (\\pm f_{\\pm }) G(g_\\pm )\\right]~E^\\pm _{\\mu }\\exp (i k_{\\pm } x)\\right\\rbrace ~~,$ where the polarization vectors $E^\\pm _{\\mu }$ are defined by (REF ).", "The interpretation of these results is the following.", "An observer inside the interference wedge sees the two waves with the same polarizations.", "The waves appear to come from distant sources which move with respect to each other.", "This property is a combination of two effects known for moving cosmic strings, the lensing effect and the creation of overdensities of matter." 
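As a simple numerical illustration of the refraction described above (a minimal sketch of our own; the incoming momentum, the value of $\omega$ and the covariant components used for the wedge angle are sample assumptions), one can apply the null rotation $M(\omega)$ directly to the light-cone components of a photon momentum and compare the result with the closed-form expressions for $E_-$, the refraction angle and the $k_y=0$ limit of the interference wedge angle:

import numpy as np

# Minimal sketch (sample momentum and omega are assumptions): null rotation of an
# incoming photon momentum versus the closed-form refraction formulas quoted above.
omega = 0.5
kx, ky, kz = 0.3, 0.4, 0.0
E = np.sqrt(kx**2 + ky**2 + kz**2)            # massless dispersion, E = |k|

# contravariant light-cone components, u = t - x, v = t + x
ku, kv, kyy, kzz = E - kx, E + kx, ky, kz
# parabolic (null) rotation with parameter omega
ku_p, kv_p = ku, kv + 2 * omega * kyy + omega**2 * ku
ky_p, kz_p = kyy + omega * ku, kzz

E_m, kx_m = 0.5 * (ku_p + kv_p), 0.5 * (kv_p - ku_p)
cos_refr = (kx * kx_m + ky * ky_p + kz * kz_p) / (E * E_m)

E_m_formula = (1 + omega**2 / 2) * E - omega**2 / 2 * kx + omega * ky
cos_formula = (E**2 + omega**2 / 2 * (E * kx - kx**2) + omega * E * ky) / (E * E_m)
print(np.isclose(E_m, E_m_formula), np.isclose(cos_refr, cos_formula))
print("refraction angle [deg]:", np.degrees(np.arccos(cos_refr)))

# interference wedge angle at k_y = 0 reduces to arccos(1/sqrt(1 + omega^2))
kv_cov, ky_cov = 1.0, 0.0
num = ky_cov * (ky_cov - 2 * omega * kv_cov) + 4 * kv_cov**2
den = np.hypot(ky_cov, 2 * kv_cov) * np.hypot(ky_cov - 2 * omega * kv_cov, 2 * kv_cov)
print(np.isclose(num / den, 1 / np.sqrt(1 + omega**2)))

Both comparisons should return True for any on-shell input momentum.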
], [ "EM field of a point charge near null string", "Consider a null string which moves near a point electric charge.", "Let $a$ be an impact parameter between the string and the charge.", "Without loss of the generality we suppose that the charge is at rest at a point with coordinates $x_e=z_e=0,y_e=a>0$ .", "As we see, the string creates EM self-force which may change the velocity of the charge.", "We neglect this effect in the considered approximation and assume that position of the charge remains fixed.", "The corresponding current in (REF ) is $\\bar{j}^\\mu (x)=e\\delta ^{(3)}(\\vec{x}-\\vec{x}_e) u^\\mu $ , with $u^\\mu =\\delta ^\\mu _0$ .", "The incoming field below $\\cal H$ is $\\bar{A}_\\mu (x)=-{e \\over 4\\pi }{\\delta _\\mu ^0 \\over \\sqrt{x^2+(y-a)^2+z^2}}~~,$ $x=(v-u)/2$ .", "Since the considered particle moves freely and crosses ${\\cal H}_+$ , in the $R$ -chart the 4-velocity of the particle is continuous.", "So do components of the current, $j_\\mu (x)=\\bar{j}_\\mu (x)~~,~~u>0~~.$ Therefore the inhomogeneous part of the solution (REF ) is taken as $A_{I,\\mu }(x)=\\bar{A}_\\mu (x)~~.$ The homogeneous part is the solution to (REF ) with the following initial data: $a_{H,b}({\\bf x})\\mid _{{\\cal H}_+}=0~~,~~a_{H,b}({\\bf x})\\mid _{{\\cal H}_-}=M_{b}^{~c}(\\omega )\\bar{a}_c(\\bar{\\bf x})-\\bar{a}_b({\\bf x})~~,$ $\\bar{a}_b=- {e \\over 8\\pi }{\\delta _b^v \\over \\sqrt{v^2/4+(y-a)^2+z^2}}~~.$ The homogeneous solution at $u>0$ can be represented as, see details in Sec.", ", $A_{H,\\mu } (x)=-{e \\over 8\\pi ^3} \\int d\\Omega ~\\Re \\left[{b_\\mu \\over x^\\nu ~m_\\nu +ia\\varepsilon }\\right]~~,$ where integration goes over a unit sphere $S^2$ , with coordinates $\\Omega =(\\theta ,\\varphi )$ and standard measure $d\\Omega =\\sin \\theta d\\theta d\\varphi $ .", "The notations used in (REF ) are the following: $b_v=-\\frac{1}{2}\\cos \\varphi \\left(g^{-1}(\\Omega ,\\omega )-g^{-1}(\\Omega ,0)\\right)~~,~~b_y=\\omega \\cos \\varphi g^{-1}(\\Omega ,\\omega )~~,~~b_z=0~~,$ $g(\\Omega ,\\omega )=e^{i\\theta }+\\omega \\sin \\theta \\cos \\varphi ~~,~~\\varepsilon =2\\sin ^2\\theta \\cos \\varphi ~~.$ The vector $m_\\mu $ is null, $m^2=0$ , $m_u=1-\\sin ^2\\theta \\cos ^2\\varphi ~,~m_v=\\sin ^2\\theta \\cos ^2\\varphi ~,~m_y=\\sin 2\\theta \\cos \\varphi ~~,~~m_z=\\sin ^2\\theta \\sin 2\\varphi ~.$ The $A_u$ component is defined by the gauge condition $\\partial A=0$ , which is equivalent to $b_\\mu m^\\mu =0$ and yields: $b_u={m_yb_y-2m_u b_v \\over 2m_v}~~.$ One can use (REF ) to calculate electric, $\\vec{E}^H$ , and magnetic, $\\vec{H}^H$ , fields created by the null string, $\\vec{E}^H=\\vec{\\partial }A_{H0}-\\partial _0\\vec{A}_H~~,~~\\vec{H}^H=[\\vec{\\partial }\\times \\vec{A}_H]~~.$ The total EM field above $\\cal H$ is $\\vec{H}=\\vec{H}^H$ , $\\vec{E}=\\vec{E}_C+\\vec{E}^H$ , where $\\vec{E}_C$ is the Coulomb field of the charge in the absence of the string.", "As is known [29], a point charge, which is at rest near a massive straight string, experiences a self-force acting in the direction away from the string, $F\\sim G \\mu e^2/r^2$ , where $\\mu $ is the tension of the string and $r$ is the distance between the charge and the string.", "One of the analogous effects is a self-force of the charge in the presence of the null string.", "The self-force is determined by the electric field induced by the string, $\\vec{F}(t,x_e)=e\\vec{E}^H(t,x_e)$ .", "This force at short times is $F\\sim \\omega e^2/a^2\\sim GE e^2/a^2$ , which is analogous to the 
self-force created by a massive string.", "At large times $t\gg a$ the self-force vanishes.", "Numerical simulations at large times for the potential $A_{H,0}$ and the corresponding electric and magnetic fields are shown in Fig. 2 and Fig. 3, respectively.", "We take coordinates in (REF ) as $x^0=t,x^i=rn^i$ , where $n^i$ is a unit vector, and study the potential and the fields as functions of the arguments $r/t$ and $a/t$ , at different fixed $t$ .", "The component $E^H_y$ behaves similarly to $E^H_x$ .", "Components $E_z^H$ , $H_x^H$ , $H_y^H$ are negligibly small.", "As follows from these results, the perturbation of the EM field induced by the null string behaves as an EM pulse which moves away from the charge in different directions.", "The width of the pulse is determined by the impact parameter $a$ .", "Such pulses can be specific experimental signatures of null strings moving near charged objects.", "Figure: The vector potential $A_{H,0}$ for $t/a=10, 50, 100, 200$ (from bottom to top) is plotted as a function of $r/t$ .", "The largest amplitude of a pulse corresponds to $t/a=10$ (dashed curve).", "Here $\omega =1$ , $a=1$ , and $e=4\pi $ .", "The observer coordinates are $(x=r\cos \pi /6, y=r\sin \pi /6, z=0)$ .", "The string horizon is at $r/t=1.154$ .", "Figure: The dimensionless functions $t^2 E^H_x$ and $t^2 H^H_z$ are plotted for $t/a=10$ (dashed), $50,\, 100, \,200$ .", "The observer's coordinates are $(x=r\cos \pi /6, y=r\sin \pi /6, z=0)$ , $\omega =1$ , $a=1$ , and $e=4\pi $ .", "The energy of the EM field inside the sphere of the radius $R$ with the center at the point $x^i=0$ is $E(R,t)=-\int _{r<R} d^3x T^0_0~~,$ where $T^\mu _\nu $ is the stress-energy tensor of the EM field, $T^\mu _\nu =-F^{\mu \alpha }F_{\nu \alpha }+\delta ^\mu _\nu \frac{1}{4} F^{\alpha \beta }F_{\alpha \beta }~~.$ The total energy density of the EM field, $T_{00}$ , as measured in the frame of reference where the charge is at rest, is shown in Fig. REF .", "Figure: The electromagnetic energy density of the system \"charge + null string\" is shown in logarithmic scale at $t=1,\,10,\,100$ .", "The energy density is computed in the plane $z=0$ , the string moves from left to right, $\omega =1$ , $a=1$ , $e=1$ .", "The string horizon is the right vertical side of the figures.", "The conservation law implies that $\partial _t E(R,t)=\int d\Omega ~R^2 S(t,R,\Omega )~~,$ $S(t,R,\Omega )=T_0^r(t,R,\Omega )=-F_{0i}F^{ri}~~.$ Numerical results for the distribution of the energy flow $S(t,R,\Omega )$ at large times $t\sim r$ and $r \gg a$ are presented in Fig. 5.", "The simulation shows that the maximal pulse follows the string.", "Figure: The Mollweide projection of the EM energy flow for $t/a=100$ and $R/a=99.6$ (left), $99.8$ (right).", "Here $\omega =1$ , $a=1$ , $e=1$ .", "The equator corresponds to the $z=0$ plane.", "The string position is the central vertical line in the projection plane.", "The maximum of the pulse is located on the light cone $r=t$ , where $r$ is approximately the distance to the charge.", "The reason is that the denominator in the integral in (REF ) has a minimum on the surface $x^\mu m_\mu =0$ which is a light cone with a future directed null normal vector $m$ ."
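For readers who want to reproduce the pulse profiles qualitatively, the following sketch evaluates the angular integral (REF ) for the time component, using $A_{H,0}=A_{H,u}+A_{H,v}$ for the coordinates $u=t-x$, $v=t+x$ (this is our own illustration; the midpoint rule, the grid sizes and the observation point are assumptions and do not correspond to the numerics behind the figures):

import numpy as np

# Quadrature sketch for the string-induced potential A_{H,0} (illustrative only).
e, omega, a = 4 * np.pi, 1.0, 1.0

def A_H0(t, x, y, z, n_th=400, n_ph=400):
    th = (np.arange(n_th) + 0.5) * np.pi / n_th
    ph = (np.arange(n_ph) + 0.5) * 2 * np.pi / n_ph
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    st, cp = np.sin(TH), np.cos(PH)

    g_w = np.exp(1j * TH) + omega * st * cp          # g(Omega, omega)
    g_0 = np.exp(1j * TH)                            # g(Omega, 0)
    b_v = -0.5 * cp * (1 / g_w - 1 / g_0)
    b_y = omega * cp / g_w
    m_u, m_v = 1 - st**2 * cp**2, st**2 * cp**2
    m_y, m_z = np.sin(2 * TH) * cp, st**2 * np.sin(2 * PH)
    b_u = (m_y * b_y - 2 * m_u * b_v) / (2 * m_v)    # from the gauge condition b.m = 0
    eps = 2 * st**2 * cp

    u, v = t - x, t + x
    xm = u * m_u + v * m_v + y * m_y + z * m_z       # x^nu m_nu
    integrand = np.real((b_u + b_v) / (xm + 1j * a * eps)) * st   # includes sin(theta)
    return -e / (8 * np.pi**3) * integrand.sum() * (np.pi / n_th) * (2 * np.pi / n_ph)

t, r = 10.0, 8.0
print(A_H0(t, r * np.cos(np.pi / 6), r * np.sin(np.pi / 6), 0.0))

Near the points of the sphere where $x^\nu m_\nu$ and $\varepsilon$ vanish simultaneously the integrand is nearly singular, so the angular grid may have to be refined to obtain stable values.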
], [ "Fields on gravitational shock wave backgrounds", "The metric of space-time of a straight null string which is stretched along the $z$ -axis and moves along the $x$ -axis is known to be of the following form: $ds^2=-dv du - \\omega |y| \\delta (u)du^2+dy^2+dz^2~~,$ where $\\omega $ is defined by (REF ).", "This string space-time is locally flat [30], except the world-sheet, where the $uu$ component of the Ricci tensor has a delta-function singularity.", "The delta-function in (REF ) appears as a result of the null holonomy [13], [14].", "Geometry (REF ) is a particular example of gravitational shockwave backgrounds: $ds^2=-dv du +f(y)\\delta (u)du^2+\\sum _i dy_i^2~~, ~~i=1,...n~~.$ Shockwaves (REF ) are exact solutions of the Einstein equations sourced by a stress energy tensor localized at $u=0$ and having the only non-vanishing $uu$ component.", "Another example of (REF ) is the Aichelburg-Sexl solution corresponding to a gravitational field generated by a massless particle.", "As has been suggested by Penrose [31], to deal with (REF ) one should cut $R^{1,n}$ along the hypersurface $u=0$ into two copies, shift (supertranslate) the $v$ -coordinate of the upper copy ($u>0$ ) to $v-f(y)$ and glue the copies again.", "The shift of the $v$ -coordinate can be also determined by working with a delta-function-like potential in wave-equations [34].", "This potential is generated by the $uu$ -component of (REF ).", "In the case of null strings the Penrose prescription implies that $f(y)=-\\omega |y|$ .", "By following Sec.", "REF , consider a coordinate chart where the cut goes over the entire surface $u=0$ and choose the Penrose coordinate transformations: $v=\\bar{v}+\\omega |y|~~,~~u=0~~.$ According to (REF ), on the left cut the coordinates are null rotated by the 'angle' $\\omega /2$ , while on the right with $-\\omega /2$ .", "Since the relative null rotation on the right and left cuts is by angle $\\omega $ , condition (REF ) is equivalent to (REF ) or (REF ).", "Field theory near shockwaves is an interesting research subject.", "Previous results include calculations of the S-matrix for scattering scalar waves on shockwaves created by massless particles [32], [33] and on generic gravitational shockwaves (REF ), see [34], [35].", "In the last decade the interest to shockwaves has been related to black hole formation in high energy particle collisions.", "Our approach can be used to describe solutions of hyperbolic PDE wave equations on gravitational shockwave backgrounds.", "It is convenient to write the Penrose transition condition for coordinates as $x^\\mu =(\\bar{x}^\\mu -\\zeta ^\\mu (\\bar{x}))\\mid _{u=0}~~,~~\\zeta ^\\mu =\\delta ^\\mu _v f(y)$ Suppose $|f(y)|\\ll 1$ .", "Then the transition condition for a field $\\phi $ , which generalizes (REF ), is $\\hat{\\phi }(x)=(\\phi (x)+{\\cal L}_\\zeta \\phi (x))\\mid _{u=0}~~,$ where $\\phi (x)$ is the value of the field in the absence of the shockwave, and ${\\cal L}_\\zeta \\phi $ is the Lie derivative of the field generated by the vector field in (REF ).", "Solutions to field equations are determined by a Cauchy problem with conditions (REF ) at $u=0$ and can be constructed in the same way as for EM fields in the presence of a null string." 
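The transition condition (REF ) can be checked symbolically; the short sketch below (an illustration under the small-$f$ assumption, with an incoming plane wave and a sample profile $f(y)$ chosen only for definiteness) verifies that $\phi+{\cal L}_\zeta\phi$ coincides with the exactly glued data $\phi(v+f(y),y,z)$ up to terms of second order in the supertranslation profile:

import sympy as sp

# Illustrative check (our own sketch; the plane-wave data and the sample profile
# are assumptions): phi_hat = phi + L_zeta phi is the O(f) linearization of the
# Penrose gluing phi_hat(v, y, z) = phi(v + f(y), y, z) on u = 0.
v, y, z, eps = sp.symbols("v y z epsilon", real=True)
kv, ky, kz = sp.symbols("k_v k_y k_z", real=True)

f = eps * y**2                                   # sample supertranslation profile
phi = sp.exp(sp.I * (kv * v + ky * y + kz * z))  # incoming data on u = 0

exact = phi.subs(v, v + f)                       # exactly glued ("shifted") data
linear = phi + f * sp.diff(phi, v)               # phi + L_zeta phi, zeta = f(y) d_v

# the difference starts at second order in the profile
print(sp.simplify(sp.series(exact - linear, eps, 0, 2).removeO()))   # -> 0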
], [ "Summary and perspectives", "In this work we suggested a method to describe free classical fields on gravitational background of a straight null string and, more generally, on shockwave backgrounds.", "Applications of the method have been focused on scalar and electromagnetic fields.", "We described two new physical effects: scattering of plane electromagnetic waves by null strings and generation of EM fields by null strings passing by near point charges.", "Both effects can be used in astrophysical observations to search for null cosmic strings.", "There are several avenues where the method can be applied.", "By proceeding with EM phenomena it is interesting to study effects induced by null cosmic strings near objects which possess strong EM fields, such as black holes and neutron stars.", "One can extend the results of Sec.", "to linearized Einstein equations and study gravitational perturbations generated by null cosmic strings moving near massive bodies.", "Like in case of EM fields we expect pulses of gravitational radiation and gravitational self-force effects generated by the strings.", "It should be pointed out that the suggested method allows one to introduce different Green's functions on null string space-times, and, therefore, to pave a way to quantum field effects related to null strings.", "We are planning to return to some of this topics in forthcoming works." ], [ "Acknowledgments", "The authors are grateful to V. Tainov for the help with some technicalities and numerical simulations.", "This research is supported by Russian Science Foundation grant No.", "22-22-00684, https://rscf.ru/project/22-22-00684/." ], [ "The $D$ -function, solution for scattered scalar wave", "We derive the $D$ -function for the massless field, $m=0$ in (REF ), by using the following representation: $D(u,{\\bf x})={1 \\over (2\\pi )^3}\\int d^3 p \\, e^{ip\\cdot x}~~,$ where $p\\cdot x=p_{\\mu }x^{\\mu }=p_u u+p_v v +p_y y +p_z z$ , $d^3p=dp_vdp_ydp_z$ $p_u=\\frac{p_y^2+p_z^2}{4p_v}~~.$ It is convenient to slightly shift the integration contour over $p_v$ in (REF ) to the lower part of the complex $p_v$ -plane to ensure convergence of the integrals over $p_y$ and $p_z$ .", "A simple formula $D(x)={1 \\over \\pi }{\\partial \\over \\partial v}\\delta (x^2)~~,$ where $x^2=x^\\mu x_\\mu =-uv+y^2+z^2$ , can be easily derived when one first integrates over $p_y$ , $p_z$ and then over $p_v$ .", "It is instructive to check that (REF ) satisfies (REF ).", "A direct calculation yields $\\Box D(x)={4 \\over \\pi u}(3\\delta ^{\\prime \\prime }(x^2)+x^2\\delta ^{\\prime \\prime \\prime }(x^2))~~.$ If $f(r)$ is a test function on the line, $\\int dr f(r) (3\\delta ^{\\prime \\prime }(r)+r\\delta ^{\\prime \\prime \\prime }(r))=-\\lim _{r\\rightarrow 0} (rf^{\\prime \\prime \\prime }(r))~~.$ Therefore the right hand side of (REF ) vanishes for functions which are analytical at $x^2=0$ .", "Define now the functional $F[u,\\chi ]=\\int d^3x D(u,{\\bf x})\\chi ({\\bf x})~~$ acting, say, on a $L^2$ space of test funtions $\\chi ({\\bf x})$ .", "We need to prove that $F[0,\\chi ]=\\chi (0)$ .", "By virtue of (REF ) $F[u,\\chi ]=-{1 \\over \\pi }\\int d^3x \\delta (x^2)\\partial _v\\chi ({\\bf x})~~.$ If $\\rho ,\\varphi $ are polar coordinates in $z,y$ plane $F[u,\\chi ]=-{1 \\over \\pi }\\int dv \\rho d \\rho d\\varphi ~ \\delta (\\rho ^2-uv)\\partial _v\\chi (v,\\rho ,\\varphi )=-{1 \\over 2\\pi }\\int _0^\\infty dv \\int _0^{2\\pi }d\\varphi ~\\partial _v\\chi (v,\\sqrt{uv},\\varphi )~~.$ Since ${d \\over 
dv}\\chi (v,\\sqrt{uv},\\varphi )=\\partial _v\\chi (v,\\sqrt{uv},\\varphi )+\\frac{1}{2}\\sqrt{\\frac{u}{v}}\\partial _\\rho \\chi (v,\\sqrt{uv},\\varphi )$ $F[u,\\chi ]=\\chi (0)+{\\sqrt{u} \\over 4\\pi }\\int _0^\\infty \\frac{dv}{\\sqrt{v}} \\int _0^{2\\pi }d\\varphi ~\\partial _\\rho \\chi (v,\\sqrt{uv},\\varphi )~~.$ The last term in the right hand side of (REF ) vanishes as $u\\rightarrow 0$ , if $\\chi $ is analytical at $\\rho =0$ .", "We now present derivation of basic formulas (REF ), (REF ) for scattering by the null string of a scalar wave.", "The $\\phi _\\pm $ parts of the scattered wave (REF ) are defined by (REF ).", "One gets with the help of (REF ) $\\phi _{\\pm }(u, {\\bf x})=\\frac{1}{(2\\pi )^3}\\int d^3p\\, e^{i {\\bf p}{\\bf x} }\\exp \\left({i \\frac{p_y^2+p_z^2}{4 p_v}u}\\right)\\int d^3 x^{\\prime }\\, e^{-i {\\bf p} {\\bf x^{\\prime }}}\\,\\hat{\\phi }_{\\pm }({\\bf x^{\\prime }})\\theta (\\pm y^{\\prime })~~,$ where ${\\bf p} {\\bf x^{\\prime }}=p_vx^v+p_yy+p_zz$ and $\\hat{\\phi }_{\\pm }({\\bf x})=e^{ik_\\pm \\cdot x}\\mid _{{\\cal H}_\\pm }~~,~~k^\\mu _+=\\bar{k}^\\mu ~~,~~k_-^\\mu =M^\\mu _{~\\nu }\\left(\\omega \\right)\\bar{k}^\\nu ~~.$ To simplify notations we put $\\bar{k}^\\mu =k^\\mu $ in what follows.", "One can perform integration in (REF ) first over ${\\bf x^{\\prime }}$ , then over $p_z$ and $p_v$ and get $\\phi _{\\pm }(u,{\\bf x})=\\pm I_\\pm (k,x) \\exp i\\left(k_v v+k_z z+\\frac{k_z^2}{4k_v}u-\\frac{k_v}{u}y^2\\right)~~,$ $I_\\pm (k,x)={1 \\over 2\\pi i}\\int \\limits _{-\\infty }^{\\infty }\\frac{ d p \\,e^{i\\frac{p^2}{4uk_v}}}{p- (f_\\pm \\pm i \\epsilon )}~~,$ where $p=up_y$ , and $f_\\pm =f(k_\\pm ,x)$ are defined in (REF ).", "Factors $I_\\pm $ appear as a result of integration over $y$ .", "It is convenient to rewrite (REF ) by changing contour in the complex $p$ -plane.", "Since $u>0$ , the integration contour over $p$ can be rotated by the angle $\\pi /4$ either counter-clockwise, if $k_v>0$ , or clockwise, if $k_v<0$ .", "This yields $&&I_\\pm (k,x)=\\pm \\left[\\theta (k_v)\\theta (\\pm f_\\pm )+\\theta (-k_v)\\theta (\\mp f_\\pm )\\right]e^{i\\frac{f_{\\pm }}{4 k_v u}} \\\\[6pt]&&+{\\theta (k_v) \\over 2\\pi i}\\int _{-\\infty }^{\\infty }{dt \\over t-e^{-i\\pi /4}f_\\pm }\\exp \\left(-{t^2 \\over 4 k_v u}\\right)+{\\theta (-k_v) \\over 2\\pi i}\\int _{-\\infty }^{\\infty }{dt \\over t-e^{i\\pi /4}f_\\pm }\\exp \\left({t^2 \\over 4k_v u}\\right)~~.\\nonumber $ The first two terms in (REF ) appear when the rotation of the contour meets poles.", "Eqs.", "(REF )-(REF ) follow from (REF ) if one replaces $t$ in the integrals with $f_\\pm t$ and takes into account that $k_u=(k_y^2+k_z^2)/{4k_v}$ ." 
], [ "Homogeneous solution for point electric charge", "Here we present computations for Sec.", ".", "Define $f(v,y,z)=\\theta (-y) {e \\over 4\\pi }{1 \\over \\sqrt{v^2/4+(y-a)^2+z^2}}~~,$ By using (REF ), (REF ) the Cauchy data (REF ) can be written as $a_{H,v}({\\bf x})=-\\frac{1}{2}(f(v-2\\omega y, y, z)-f(v,y,z))~~,$ $a_{H,y}({\\bf x})=\\omega f(v-2\\omega y, y, z)~~,~~a_{H,z}({\\bf x})=0~~.$ Let $\\Phi _\\omega (u, {\\bf x})$ be a solution at $u>0$ of the following problem: $\\Box \\Phi _\\omega (u, {\\bf x})=0~~,~~\\Phi _\\omega (0, {\\bf x})=f(v-2\\omega y, y, z)~~.$ Then the homogeneous part of non-zero components of the vector-potential are $A_{H,v}(x)=-\\frac{1}{2}(\\Phi _\\omega (x)-\\Phi _0(x))~~,$ $A_{H,y}(x)=\\omega \\Phi _\\omega (x)~~.$ The solution to (REF ) can be written as $\\Phi _\\omega (x)={1 \\over (2\\pi )^{3/2}}\\int d^3k~ e^{ik \\cdot x}~\\tilde{f}_\\omega ({\\bf k})~~,$ $\\tilde{f}_\\omega ({\\bf k})={1 \\over (2\\pi )^{3/2}}\\int d^3x e^{-i{\\bf k}{\\bf x}}f(v-2\\omega y, y, z)~~,$ where $d^3x=dvdydz$ , and we use the same notations as in Sec.. A straightforward computation yields $\\tilde{f}_\\omega ({\\bf k})={e \\over (2\\pi )^3}{e^{-a\\sqrt{(2k_v)^2+k_z^2}} \\over \\sqrt{(2k_v)^2+k_z^2}\\left( \\sqrt{(2k_v)^2+k_z^2}-i(k_y+2\\omega k_v)\\right)}~~.$ It is convenient to introduce in (REF ) spherical coordinates in the momentum space, $2k_v=k\\sin \\theta \\cos \\varphi $ , $k_z=k\\sin \\theta \\sin \\varphi $ , $k_y=k\\cos \\theta $ .", "By taking into account (REF ) the integration over $k$ in (REF ) can be performed, $\\Phi _\\omega (x)=-{e \\over 8\\pi ^3} \\int _0^{2\\pi } d\\varphi \\int _0^{\\pi } \\sin \\theta d\\theta ~{1 \\over g(\\Omega ,\\omega )}~{1 \\over x^\\mu \\,m_\\mu +ia\\varepsilon }~~,$ see notations (REF ), (REF ).", "Equations (REF )-(REF ), (REF ) imply (REF ),(REF )." ] ]
2212.05564
[ [ "Robust Estimation and Inference for Expected Shortfall Regression with\n Many Regressors" ], [ "Abstract Expected Shortfall (ES), also known as superquantile or Conditional Value-at-Risk, has been recognized as an important measure in risk analysis and stochastic optimization, and is also finding applications beyond these areas.", "In finance, it refers to the conditional expected return of an asset given that the return is below some quantile of its distribution.", "In this paper, we consider a recently proposed joint regression framework that simultaneously models the quantile and the ES of a response variable given a set of covariates, for which the state-of-the-art approach is based on minimizing a joint loss function that is non-differentiable and non-convex.", "This inevitably raises numerical challenges and limits its applicability for analyzing large-scale data.", "Motivated by the idea of using Neyman-orthogonal scores to reduce sensitivity with respect to nuisance parameters, we propose a statistically robust (to highly skewed and heavy-tailed data) and computationally efficient two-step procedure for fitting joint quantile and ES regression models.", "With increasing covariate dimensions, we establish explicit non-asymptotic bounds on estimation and Gaussian approximation errors, which lay the foundation for statistical inference.", "Finally, we demonstrate through numerical experiments and two data applications that our approach well balances robustness, statistical, and numerical efficiencies for expected shortfall regression." ], [ "Introduction", "Expected shortfall (ES), also known as superquantile or conditional Value-at-Risk, has been recognized as an important risk measure with versatile applications in finance [3], [55], management science [12], [23], operations research [54], [53], and clinical studies [31].", "For example, in finance, expected shortfall refers to the expected return of an asset or investment portfolio conditional on the return being below a lower quantile of its distribution, namely its Value-at-Risk (VaR).", "In their Fundamental Review of the Trading Book [7], [8], the Basel Committee on Banking Supervision confirmed the replacement of VaR with ES as the standard risk measure in banking and insurance.", "Let $Y$ be a real-valued random variable with finite first-order absolute moment, $|Y | <\\infty $ , and let $F_Y$ be its cumulative distribution function.", "For any $\\alpha \\in (0, 1)$ , the quantile and ES at level $\\alpha $ are defined as $Q_{\\alpha }(Y) = F_{Y}^{-1}(\\alpha ) = \\inf \\lbrace y \\in : F_{Y}(y) \\ge \\alpha \\rbrace ~\\mbox{ and }~{\\rm ES}_{\\alpha }(Y) = \\lbrace Y | Y \\le Q_{\\alpha }(Y) \\rbrace ,$ respectively.", "If $F_Y$ is continuous, the $\\alpha $ -level ES is equivalently given by (see, e.g., Lemma 2.16 of [45]) ${\\rm ES}_\\alpha (Y) = \\frac{1}{ \\alpha } \\int _0^\\alpha Q_u(Y) \\,{\\rm d} u.", "$ For instance, in socioeconomics applications, $Y$ is the income and ${\\rm ES}_\\alpha (Y)$ can be interpreted as the average income for the subpopulation whose income falls below the $\\alpha $ -quantile of the entire population.", "We refer the reader to Chapter 6 of [59] and [52] for a thorough discussion of ES and its mathematical properties.", "With the increasing focus on ES and its desired properties as a risk measure, it is natural to examine the impact of a $p$ -dimensional explanatory vector $X$ , on the tail behavior of $Y$ through ES.", "One motivating example is the Job Training Partnership Act 
(JTPA), a large publicly-funded training program that provides training for adults with barriers to employment and out-of-school youths.", "The goal is to examine whether the training program improves future income for adults with low-income earnings [13], for which quantile regression-based approaches have been proposed [1], [19].", "For example, the 0.05-quantile of the post program income refers to the highest income earning of those who have the 5% lowest income among the entire population, while the 0.05-ES concerns the average income earning within this subpopulation and therefore is more scientifically relevant in the JTPA study.", "Compared to the substantial body of literature on quantile regression, extant works on ES estimation and inference in the presence of covariates are scarce.", "We refer to [57], [14], [37], [41] and [43] for nonparametric conditional ES estimation, and more recently to [22], [50] and [5] in the context of (semi-)parametric models.", "As suggested in [50], this is partly because regulatory interest in ES as a risk measure is only recent, and also due to the fact that this measure is not elicitable [29].", "Let $$ be a class of distributions on $^d$ .", "We say that a statistical functional $\\theta : \\rightarrow D$ with $D\\subseteq ^p$ ($p\\ge 1$ ) is elicitable relative to the class $$ if there exists a loss function $\\rho : ^d \\times ^p\\rightarrow $ such that $\\theta (F) = _{\\theta \\in D} _{Z \\sim F} \\rho (Z , \\theta )$ for any $F\\in $ .", "Here $ _{Z \\sim F}$ means that the expectation is taken with respect to the random variable $Z$ that follows the distribution $F$ .", "For example, the mean is elicitable using the $L_2$ -loss, and the median is elicitable using the $L_1$ -loss.", "Although the ES is not elicitable on its own, it is jointly elicitable with the quantile using a class of strictly consistent joint loss functions [28].", "Based on this result, [22] and [50] proposed a joint regression model for the conditional $\\alpha $ -level quantile and ES of $Y$ , given the covariates $X \\in ^p$ .", "In this work, we focus on (conditional) linear joint quantile-ES models: $Q_\\alpha ( Y |X ) = X^^* ~~\\mbox{ and }~~ {\\rm ES}_\\alpha (Y|X) = X^^* .", "$ Equivalently, we have $ \\varepsilon =Y-X^^* $ and $\\xi =Y-X^^* $ , where the conditional $\\alpha $ -quantile of $\\varepsilon $ and the conditional $\\alpha $ -level expected shortfall of $\\varepsilon $ , given $X \\in ^p$ , are zero.", "More generally, one may allow the quantile and the ES models to depend on different covariate vectors $X_q$ and $X_e$ , respectively.", "In this case, the conditional $\\alpha $ -quantile and $\\alpha $ -ES of $\\varepsilon $ and $\\xi $ , respectively given $X=(X_q^ X_e^^, are assumed to be zero.$ To jointly estimate $\\beta ^*$ and $\\theta ^*$ , [22] and [50] considered an $M$ -estimator, defined as the global minimum of any member of a class of strictly consistent joint loss functions over some compact set [28].", "The joint loss function, which will be specified in (REF ), is non-differentiable and non-convex.", "[22] employed the derivative-free Nelder-Mead algorithm to minimize the resulting non-convex loss, which is a heuristic search method that may converge to non-stationary points.", "From a statistical perspective, they further established consistency and asymptotic normality for the global minima.", "However, from a computational perspective, finding the global minimum of a non-convex function is generally intractable: approximating the 
global minimum of a $k$ -times continuously differentiable function $f:^p \\rightarrow $ to $\\epsilon $ -accuracy requires at least as many as $(1/\\epsilon )^{p/k}$ evaluations (ignoring problem-dependent constants) of the function and its first $k$ -derivatives [46].", "The lack of differentiability makes this problem even more challenging numerically.", "To mitigate the computational effort, [5] proposed a two-step procedure by first estimating the quantile parameters via standard quantile regression, followed by least squares regression with generated response variables.", "Although being computationally efficient, the ensuing estimator is sensitive to heavy-tailed error distributions due to the use of $L_2$ -loss for fitting possibly highly skewed data in the second step; see Section REF for a rigorous statement.", "In this paper, we propose a novel two-stage method for joint quantile and expected shortfall regression under model (REF ), with a particular focus on the latter.", "Compared to existing approaches, our proposed method is robust against heavy-tailed errors without compromising statistical efficiency under light-tailed distributions.", "Computationally, our method can be implemented via fast and scalable gradient-based algorithms.", "The main contributions of this manuscript are summarized as follows.", "Our method is built upon the recent approach to quantile-ES regression via a two-step procedure [5], for which a general non-asymptotic theory has yet to be established.", "We first establish such a finite-sample theoretical framework when the dimension of the model, $p$ , is allowed to increase with the number of observations, $n$ .", "Specifically, we provide explicit upper bounds, as a function of $(n, p)$ , on the estimation error (under $L_2$ -risk) and (uniform) Gaussian approximation errors for the two-step ES estimator.", "We further construct asymptotically valid (entrywise) confidence intervals for the ES parameters.", "The dominant computation effort of this two-step procedure is the QR regression fit in stage one.", "We thus recommend using the convolution-smoothed QR method [27], solvable by fast first-order algorithms that are scalable to very large-scale problems [32].", "Our non-asymptotic theory allows the dimension $p$ to grow with the sample size, and hence paves ways for analyzing series/projection estimators under joint nonparametric quantile-ES models [11] and penalized estimators under high-dimensional sparse models.", "The standard two-step estimator is a least squares estimator with generated response variables, and therefore is sensitive to the tails of the distribution of $Y$ .", "To achieve sub-Gaussian deviation bounds when the (conditional) distribution of $Y|X$ only has Pareto-like tails, we propose a robust ES regression method that applies adaptive Huber regression [64] in the second step.", "The idea is to use a diverging robustification parameter $\\tau = \\tau (n, p) >0$ for bias-robustness tradeoff.", "To choose this hyper-parameter in practice, we employ a recently developed data-driven mechanism [62], inspired by the censored equation approach introduced in [30].", "We also develop efficient algorithms to compute both standard and robust two-step ES estimators under additional constraints that the fitted ES does not exceed the fitted quantile at each observation; see Section A of the Appendix.", "Numerically, we compare both the two-step estimator and the proposed robust variant with the joint $M$ -estimator of [22] on large synthetic 
datasets generated from a location-scale model with both light- and heavy-tailed error distributions.", "The joint $M$ -estimator is computed by the R package esreg [10].", "The proposed robust two-step procedure can be implemented by a combination of R packages quantreg [39] or conquer [33] and adaHuber [49].", "We demonstrate through numerical experiments and a real data example that the proposed robust ES regression approach achieves satisfying statistical performance, a higher degree of robustness (against heavy-tailed data), and superior computational efficiency and stability.", "Notation: For any two vectors $u=(u_1, \\ldots , u_k)^ and $ v=(v1, ...,vk)k$, we define their inner product as $ uv̰ = u, v = j=1k uj vj$.We use $ p$ $ (1p )$ to denote the $ p$-norm in $ k$: $ u p = ( i=1k | ui |p )1/p$ for $ p1$ and $ u = 1ik |ui|$.For $ r>0$, define the Euclidean ball and unit sphere in $ k$ as $ B(r) = Bk(r) = { u k: u 2 r }$ and $ Sk-1 = { u k: u 2 = 1 }$, respectively.", "Given a positive semi-definite matrix $ A kk$ and $ u k$, let $ u A := A1/2 u 2$.", "We write $ BA(r) = { u k: u A r }$ and $ BA(u, r) = u + BA(r)$.", "Given an event/subset $$, $ 1()$ or $ 1$ denotes the zero-one indicator function for $$.", "For two real numbers $ a$ and $ b$, we write $ a b = {a, b}$ and $ a b = {a, b}$.", "For two sequences $ {an}n 1$ and $ { bn}n 1$ of non-negative numbers, we write $ an bn$ if $ an C bn$ for some constant $ C > 0$ independent of $ n$, $ an bn$ if $ bn an$, and $ an bn$ if $ an bn$ and $ an bn$.$" ], [ "The Joint Regression Framework", "Assume we observe a sequence of data vectors $\\lbrace (Y_i, X_i) \\rbrace _{i=1}^n$ , where $Y_i \\in $ is the response variable, and $X_i \\in ^p$ is a $p$ -dimensional vector of explanatory variables (covariates).", "For some fixed probability level $\\alpha \\in (0 , 1)$ , denote the conditional $\\alpha $ -level quantile and ES of $Y_i$ given the covariates $X_i$ as $Q_{\\alpha }(Y_i| X_i )$ and ${\\rm ES}_\\alpha ( Y_i| X_i )$ , respectively.", "For the latter, we adhere to the definition $ {\\rm ES}_\\alpha ( Y_i| X_i ) = \\lbrace Y_i | Y_i\\le Q_\\alpha (Y_i | X_i ), X_i \\rbrace $ .", "We consider the joint regression framework introduced in [22] for modeling the conditional quantile and expected shortfall.", "For some probability level $\\alpha \\in (0 , 1)$ , assume that $Q_\\alpha ( Y_i| X_i ) = X_i^^* , \\qquad {\\rm ES}_\\alpha ( Y_i | X_i ) = X_i^^* , $ where $\\beta ^*, \\theta ^* \\in ^p$ are the unknown true underlying parameters for quantile and ES, respectively.", "[28] explained that quantile and ES are jointly elicitable, and proposed a class of strictly consistent joint loss functions for quantile and ES estimation.", "Let $G_1$ be an increasing and integrable function, and let $_2$ be a three times continuously differentiable function such that both $_2$ and its derivative $G_2 = _2^{\\prime }$ are strictly positive.", "The proposed joint loss function in [28] takes the form $S(\\beta , \\theta ; Y, X) & = \\lbrace \\alpha - \\mathbb {1}( Y \\le X^) \\rbrace \\lbrace G_1(Y) - G_1(X^) \\rbrace \\\\&~~~~ + \\frac{ G_2( X^) }{\\alpha } \\big \\lbrace \\underbrace{ \\alpha X^ \\theta - \\beta ) - ( Y - X^) \\mathbb {1} (Y\\le X^) }_{=: \\, S_0(\\beta , \\theta ; Y, X) } \\big \\rbrace - _2(X^) .", "\\nonumber $ This general form also includes the joint loss function proposed by [2] by taking $G_1(x) = - (W/2) x^2$ for some $W\\in $ and $_2(x) = \\alpha x^2/2$ .", "In the regression framework with a fixed number of covariates, 
[22] established the consistency and asymptotic normality of the $M$ -estimator $(\\widetilde{\\beta }^ \\widetilde{\\theta }^^, defined as{\\begin{@align}{1}{-1}\\begin{pmatrix}\\widetilde{\\beta }\\\\\\widetilde{\\theta }\\end{pmatrix} \\in _{\\beta ,\\theta \\in \\, \\Theta } \\frac{1}{n} \\sum _{i=1}^nS(\\beta , \\theta ; Y_i, X_i) , \\end{@align}}where $ p$ is the parameter space, assumed to be compact, convex, and has nonempty interior.", "The main challenge of the aforementioned approach is that the objective function in (\\ref {def:M-est}) is non-differentiable and non-convex for any feasible choice of the functions $ G1$ and $ G2$ \\cite {FZ2016}.Note from definition (\\ref {ES.def1}) that the expected shortfall depends on the quantile, not vice versa.", "The estimation and inference of $ *$ is thus the main challenge.", "It is, however, infeasible to estimate a single regression model for ES through $ M$-estimation, that is, by minimizing some strictly consistent loss function \\cite {DB2019}.$ In the joint regression framework, if the main goal is to estimate and forecast ES, then $\\beta ^*$ can be naturally viewed as a nuisance parameter.", "Motivated by the idea of using Neyman-orthogonal scores to reduce sensitivity with respect to nuisance parameters [48], [18], [5] proposed a two-stage procedure that bypasses non-convex optimization problems.", "In the first stage, an estimate $\\hat{\\beta }$ of $\\beta ^*$ is obtained via standard quantile regression.", "The second step employs an orthogonal score with fitted thresholding quantiles to estimate $\\theta ^*$ .", "The key observation is as follows.", "Define the function $\\psi _0(\\beta , \\theta ; X) & = \\lbrace S_0( \\beta , \\theta ; Y, X ) | X \\rbrace \\\\& = \\alpha X^- ( Y\\le X^| X ) ( Y | Y\\le X^, X ) + \\big \\lbrace (Y\\le X^| X ) - \\alpha \\big \\rbrace X^, \\nonumber $ where $S_0$ is given in (REF ).", "Under model (REF ), we have $\\psi _0(\\beta ^*, \\theta ^*; X)=0$ almost surely over $X$ .", "Let $F_{Y | X}$ be the conditional distribution function of $Y$ given $X$ .", "Provided $F_{Y|X}$ is continuously differentiable, taking the gradient with respect to $\\beta $ on both sides of the above equality yields $\\partial _\\beta \\psi _0(\\beta , \\theta ; X) = \\lbrace F_{Y|X}(X^) -\\alpha \\rbrace X , ~\\mbox{ for any } \\beta , \\theta \\in ^p.$ We hence refer to the following property $\\partial _\\beta \\psi _0(\\beta , \\theta ; X) \\big |_{\\beta = \\beta ^* } = \\lbrace F_{Y|X}(X^^* ) -\\alpha \\rbrace X = 0 $ as Neyman orthogonality." 
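The orthogonality property can also be seen numerically: approximating the expectation by a large simulated sample, the Jacobian of $\beta \mapsto \lbrace S_0(\beta , \theta ^*; Y, X) X\rbrace $ is essentially zero at $\beta = \beta ^*$ but not elsewhere. The sketch below uses a Gaussian location model and central finite differences; the model, sample size, and step size are assumptions of this illustration, not part of the paper.

```python
# Monte Carlo illustration (not from the paper) of the Neyman orthogonality property:
# the Jacobian of beta -> E{ S_0(beta, theta*; Y, X) X } is (numerically) zero at
# beta = beta*, but not at other values of beta. The Gaussian location model,
# alpha = 0.1, the sample size, and the finite-difference step are assumptions of
# this sketch only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, alpha = 200_000, 0.1
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
Y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)

z = norm.ppf(alpha)
beta_star = np.array([1.0 + z, 2.0])                     # alpha-quantile coefficients
theta_star = np.array([1.0 - norm.pdf(z) / alpha, 2.0])  # alpha-ES coefficients

def score(beta, theta=theta_star):
    """Empirical counterpart of E{ S_0(beta, theta; Y, X) X }."""
    u = Y - X @ beta
    s0 = alpha * (X @ (theta - beta)) - u * (u <= 0)
    return (X * s0[:, None]).mean(axis=0)

def jacobian(beta, h=0.05):
    """Central finite-difference Jacobian of score(.) with respect to beta."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (score(beta + e) - score(beta - e)) / (2.0 * h)
    return J

print(np.round(jacobian(beta_star), 3))                         # ~ 0: orthogonality at beta*
print(np.round(jacobian(beta_star + np.array([0.3, 0.0])), 3))  # visibly non-zero elsewhere
```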
], [ "Two-Step ES Estimation via Neyman Orthogonal Score", "We start with a detailed overview of the two-step approach proposed by [5] using the Neyman orthogonal score (REF ) under the joint model (REF ).", "In Section REF , we will develop a non-asymptotic (finite-sample) theory for the two-step ES estimator, $\\hat{\\theta }$ , under the regime in which $p$ is allowed to increase with the sample size $n$ .We further develop asymptotic normality results for individual coordinates, or more generally linear projections, of $\\hat{\\theta }$ , in the increasing-dimension regime “$p^2/ n = o(1)$ \".", "Our non-asymptotic results and techniques pave the way for analyzing high-dimensional sparse quantile-ES models.", "The first step involves computing the standard QR estimator of $\\beta ^*$ : $\\hat{\\beta }\\in _{\\beta \\in ^p} \\frac{1}{n} \\sum _{i=1}^n\\rho _\\alpha ( Y_i - X_i^),$ where $\\rho _\\alpha (u) = \\lbrace \\alpha - \\mathbb {1} (u<0) \\rbrace u$ is the check function [40].", "The second step is motivated by the orthogonal score $S_0$ in (REF ).", "Specifically, let $\\hat{}( \\beta , \\theta ) = (1/n) \\sum _{i=1}^nS^2_i( \\beta , \\theta )$ be the joint empirical loss with $S_i (\\beta , \\theta ) := S_0(\\beta , \\theta ; Y_i, X_i) = \\alpha X_i^- \\mathbb {1}(Y_i\\le X_i^) Y_i + \\lbrace \\mathbb {1}(Y_i\\le X_i^) - \\alpha \\rbrace X_i^.", "$ Given $\\hat{\\beta }$ obtained from the first step, the ES estimator $\\hat{\\theta }$ of $\\theta ^*$ is computed as $\\hat{\\theta }\\in _{\\theta \\in ^p} \\hat{}\\,( \\hat{\\beta }, \\theta ).", "$ For any $\\beta $ fixed, the function $\\theta \\mapsto \\hat{}( \\beta , \\theta )$ is convex with gradient and Hessian given by $\\partial _\\theta \\hat{}( \\beta , \\theta ) = \\frac{2 \\alpha }{n}\\sum _{i=1}^nS_i (\\beta , \\theta ) X_i ~~\\mbox{ and }~~ \\partial ^2_\\theta \\hat{}( \\beta , \\theta ) = \\frac{2 \\alpha ^2 }{n} \\sum _{i=1}^nX_i X_i^$ respectively.", "By the first-order condition, the ES regression estimator $\\hat{\\theta }$ satisfies the moment condition $ \\partial _\\theta \\hat{}( \\hat{\\beta }, \\hat{\\theta }) = 0$ , and has a closed-form expression $\\hat{\\theta }= \\hat{\\beta }+ \\bigg ( \\sum _{i=1}^nX_i X_i^)^{-1} \\frac{1}{\\alpha } \\sum _{i=1}^n( Y_i - X_i^\\beta ) X_i \\mathbb {1}(Y_i\\le X_i^\\beta ) , $ provided that $\\mathbb {X} = (X_1, \\ldots , X_n)^^{n\\times p}$ is full-rank.", "When $p$ is large, we suggest using the convolution-smoothed quantile regression (conquer) estimator [27], [32] in the first step, which can be computed by fast and scalable gradient-based algorithms.", "Given a smoothing parameter/bandwidth $h>0$ , the conquer estimator $\\hat{\\beta }_h$ minimizes the convolution smoothed loss function $ \\hat{}_h(\\beta ) = (1/n)\\sum _{i=1}^n\\rho _{\\alpha , h}(Y_i - X_i^)$ with $\\rho _{\\alpha , h}(u) = (\\rho _\\alpha * K_h) (u) = \\int _{-\\infty }^\\infty \\rho _\\alpha (u) K_h(v-u) \\, {\\rm d} v, $ where $K_h(u) := (1/h) K(u/h)$ for some non-negative kernel function $K$ , and $*$ is the convolution operator.", "We refer to [27] and [32] for more details, including both asymptotic and finite-sample properties of $\\hat{\\beta }_h$ when $p$ is fixed and growing (with $n$ ) as well as the bandwidth selection.", "Define $p \\times p$ matrices $\\Sigma =(X X^ ~~\\mbox{ and }~~ \\Omega = ( \\omega ^2 X X^$ with $\\omega := (Y - X^^*) \\mathbb {1}(Y \\le X^^*) + \\alpha X^\\beta ^* - \\theta ^*)$ satisfying $(\\omega | X) = 0$ under model (REF ).", "Provided that $p=p_n$ 
satisfies $p^2/n \\rightarrow 0$ , we will show in Theorem REF that $\\hat{\\theta }_j$ is asymptotically normal: $\\frac{ \\alpha \\sqrt{n} (\\hat{\\theta }_j - \\theta ^*_j )}{\\sqrt{( \\Sigma ^{-1} \\Omega \\Sigma ^{-1} )_{jj}}} \\xrightarrow{} (0, 1) ~\\mbox{ as }~ n,p \\rightarrow \\infty .$ As a direct implication, an asymptotically valid entrywise confidence interval for $\\theta ^*$ can be constructed as follows.", "Recall that $(\\hat{\\beta }, \\hat{\\theta })$ is the joint quantile-ES regression estimators given in (REF ) and (REF ), respectively.", "Define the estimated “residuals\" as $\\hat{\\varepsilon }_i = Y_i - X_i^\\beta ~~\\mbox{ and }~~ \\hat{\\omega }_i = \\hat{\\varepsilon }_i \\mathbb {1}(\\hat{\\varepsilon }_i \\le 0) + \\alpha X_i^\\hat{\\beta }- \\hat{\\theta }) .", "$ We then use the sample analog of $\\Sigma $ and a plug-in estimator of $\\Omega $ : $\\hat{\\Sigma }= \\frac{1}{n} \\sum _{i=1}^nX_i X_i^ \\quad \\hat{\\Omega }= \\frac{1}{n} \\sum _{i=1}^n\\hat{\\omega }_i^2 X_i X_i^ $ Consequently, we construct (approximate) 95% confidence interval for each coefficient as $\\bigg [ \\hat{\\theta }_j - \\frac{ 1.96}{\\alpha \\sqrt{n}} ( \\hat{\\Sigma }^{-1} \\hat{\\Omega }\\hat{\\Sigma }^{-1} )^{1/2}_{jj} , \\, \\hat{\\theta }_j + \\frac{1.96}{\\alpha \\sqrt{n}} ( \\hat{\\Sigma }^{-1} \\hat{\\Omega }\\hat{\\Sigma }^{-1} )^{1/2}_{jj} \\bigg ] , \\ \\ j= 1,\\ldots , p.$" ], [ "Motivation", "The two-step estimator $\\hat{\\theta }$ given in (REF ) is essentially a least squares estimator (LSE) with generated response variables.", "While the two-step procedure is computationally efficient and enjoys nice asymptotic properties, due to the use of the least squares type loss, it is sensitive to outliers or heavy-tailed data that is ubiquitous in various areas such as climate, insurance claims, and genomics data.", "In particular, heavy-tailedness has become a well-known stylized fact of financial returns and stock-level predictor variables [20].", "Since the expected shortfall is a quantity that describes the tail behavior of a distribution, it is important to construct an estimator that is robust to the power-law or Pareto-like tails.", "To motivate the need for a robust ES estimator, we start with the non-regression setting in which $X_i \\equiv 1$ .", "The two-step ES estimator (REF ) can then be simplified as $\\hat{{\\rm ES}}_\\alpha = \\frac{1}{\\alpha n} \\sum _{i=1}^nY_i \\mathbb {1}\\lbrace Y_i\\le \\hat{Q}_\\alpha \\rbrace + \\hat{Q}_\\alpha \\lbrace 1 - \\hat{F}(\\hat{Q}_\\alpha ) / \\alpha \\rbrace , $ where $\\hat{F}$ is the empirical CDF of $Y$ and $\\hat{Q}_\\alpha = \\hat{F}^{-1}(\\alpha )$ is the sample quantile.", "The estimator $\\hat{{\\rm ES}}_\\alpha $ (REF ) coincides with the ES estimate (4) in [9], although the latter is motivated differently by the following property: ${\\rm ES}_\\alpha (Y) = (Y) - \\frac{1}{\\alpha } \\min _{\\beta \\in } \\rho _\\alpha (Y - \\beta ) .$ Since $| \\hat{F}(\\hat{Q}_\\alpha ) - \\alpha | \\le 1/n$ , up to higher-order terms, $\\hat{{\\rm ES}}_\\alpha $ equals $(\\alpha n )^{-1} \\sum _{i=1}^nY_i \\mathbb {1}\\lbrace Y_i\\le \\hat{Q}_\\alpha \\rbrace $ which, by the consistency of sample quantiles, is first-order equivalent to the “oracle\" ES estimator $\\hat{{\\rm ES}_\\alpha ^{{\\rm ora}} } := (\\alpha n )^{-1} \\sum _{i=1}^nY_i \\mathbb {1}\\lbrace Y_i\\le Q_\\alpha (Y)\\rbrace $ .", "Since the truncated variable $Y_i\\mathbb {1}\\lbrace Y_i\\le Q_\\alpha (Y)\\rbrace $ can be highly left-skewed with heavy 
tails, the corresponding empirical mean is sensitive to the (left) tails of the distribution of $Y$ , and hence lacks robustness against heavy-tailed data.", "Specifically, let $X_1, \ldots , X_n$ be i.i.d. random variables with mean $\mu $ and variance $\sigma ^2>0$ .", "When $X_i$ is sub-Gaussian (i.e., $\log \mathbb {E}\lbrace e^{\lambda (X_i - \mu )}\rbrace \le \lambda ^2 \sigma ^2/2$ for any $\lambda \in \mathbb {R}$ ), it follows from the Chernoff bound [17] that $\mathbb {P}\big \lbrace | \bar{X}_n - \mu | \ge \sigma \sqrt{ 2 \log (2/\delta ) /n} \big \rbrace \le \delta , \ \ {\rm valid~for~any~} \delta \in (0,1 ).$", "In other words, the sample mean $\bar{X}_n =(1/n) \sum _{i=1}^n X_i$ satisfies the sub-Gaussian deviation bound.", "On the other hand, the following proposition provides a lower bound for the deviations of the empirical mean $(1/n) \sum _{i=1}^n Y_i \mathbb {1}\lbrace Y_i\le Q_\alpha (Y)\rbrace $ when the distribution of $Y$ is the least favorable among all heavy-tailed distributions with mean zero and variance $\sigma ^2$ .", "For any value of the standard deviation $\sigma >0$ and any probability level $\delta \in (0, e^{-1}]$ , there exists some distribution with mean zero and variance $\sigma ^2$ such that for any $\alpha \in (0,1)$ , the i.i.d. sample $\lbrace Y_i\rbrace _{i=1}^n$ of size $n$ drawn from it satisfies $\mathbb {P}\bigg [ \frac{1}{n} \sum _{i=1}^n Y_i \mathbb {1}( Y_i \le Q_\alpha ) - \mathbb {E}\lbrace Y \mathbb {1}( Y \le Q_\alpha )\rbrace \le - \sigma \sqrt{\frac{1}{\delta n }} \cdot \frac{ 1- e\delta }{ \sqrt{2 e} } \bigg ] \ge \delta , $ as long as $n\ge e\delta /\alpha $ , where $Q_\alpha =Q_\alpha (Y)$ is the $\alpha $ -th quantile of $Y$ .", "Together, the upper and lower bounds (REF ) and (REF ) show that the worst case deviations of the empirical mean are suboptimal when the underlying distribution is heavy-tailed (as opposed to having Gaussian-like thin tails).", "If $Y$ follows a heavy-tailed distribution, such as the $t$ - or Pareto distributions, then the left-truncated variables $Z_i := Y_i \mathbb {1}\lbrace Y_i \le Q_\alpha (Y) \rbrace $ have not only heavy but also asymmetric tails.", "In this case, the empirical mean $(\alpha n)^{-1} \sum _{i=1}^n Z_i$ can be a sub-optimal estimator of ${\rm ES}_\alpha (Y)$ ." 
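The following stylized Monte Carlo illustrates the point in the intercept-only case ($X\equiv 1$): with Student-$t$ data the plain estimator in (REF ) inherits the heavy left tail of $Z_i$, while Huberizing the second step (the estimator introduced in the next subsection, specialized to $X\equiv 1$) trims the rare extreme observations. The degrees of freedom, sample size, and rule-of-thumb choice of the truncation level $\tau $ are assumptions of this sketch; the closed-form Student-$t$ expected shortfall is used as ground truth.

```python
# Stylized Monte Carlo (an illustration, not a result from the paper): with heavy-tailed
# Student-t data, the plain estimator in (REF) is compared with a Huberized second step
# (the robust estimator of the next subsection specialized to X = 1). The degrees of
# freedom, sample size, and the rule-of-thumb truncation level tau are assumptions of
# this sketch; the closed-form Student-t expected shortfall serves as ground truth.
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(1)
alpha, nu, n, B = 0.1, 2.1, 2000, 2000
q_true = t_dist.ppf(alpha, nu)
es_true = -t_dist.pdf(q_true, nu) * (nu + q_true**2) / ((nu - 1.0) * alpha)

def huber_location(z, tau, iters=30):
    """IRLS for the Huber M-estimate of location."""
    m = np.median(z)
    for _ in range(iters):
        w = np.minimum(1.0, tau / np.maximum(np.abs(z - m), 1e-12))
        m = np.sum(w * z) / np.sum(w)
    return m

err_plain, err_huber = [], []
for _ in range(B):
    y = rng.standard_t(nu, size=n)
    qhat = np.quantile(y, alpha)
    ind = y <= qhat
    theta_plain = np.mean(y * ind) / alpha + qhat * (1.0 - np.mean(ind) / alpha)  # eq. (REF)
    z = y * ind - (ind - alpha) * qhat       # Z_i such that S_i(qhat, theta) = alpha*theta - Z_i
    sig = np.std(np.minimum(y - qhat, 0.0))
    tau = sig * np.sqrt(n / (1.0 + np.log(n)))               # rule-of-thumb robustification
    theta_huber = huber_location(z, tau) / alpha
    err_plain.append(theta_plain - es_true)
    err_huber.append(theta_huber - es_true)

for name, e in (("plain ", err_plain), ("huber ", err_huber)):
    e = np.abs(np.asarray(e))
    print(name, "RMSE = %.3f,  97.5%% quantile of |error| = %.3f"
          % (np.sqrt(np.mean(e**2)), np.quantile(e, 0.975)))
```

The exact numbers depend on the simulation settings; the comparison is only meant to illustrate the bias-robustness tradeoff discussed below.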
], [ "Robust Estimation and Inference via the Adaptive Huber Regression", "To robustify the ES regression estimator (REF ) in the presence of skewed heavy-tailed observations, we utilize the idea of adaptive Huber regression in [64].", "For some $\\tau >0$ , the Huber loss [36] takes the form $\\ell _\\tau (u ) = {\\left\\lbrace \\begin{array}{ll}u^2 /2 & ~\\mbox{ if }~ |u | \\le \\tau , \\\\\\tau | u | - \\tau ^2 /2 &~\\mbox{ if }~ |u| > \\tau .\\end{array}\\right.", "}$ We propose a robust/Huberized ES regression estimator defined as $\\hat{\\theta }_\\tau \\in _{\\theta \\in ^p} \\frac{1}{n} \\sum _{i=1}^n\\ell _\\tau ( S_i(\\hat{\\beta }, \\theta ) ) , $ where $S_i(\\hat{\\beta }, \\theta )$ is as defined in (REF ), and $\\tau >0$ is a robustification parameter that should be calibrated adaptively from data.", "To see this, we consider the oracle Huber ES estimator defined as: $\\hat{\\theta }^{{\\rm ora}}_\\tau \\in _{\\theta \\in ^p} \\frac{1}{n} \\sum _{i=1}^n\\ell _\\tau ( S_i( \\beta ^* , \\theta ) ) = _{\\theta \\in ^p} \\frac{1}{n} \\sum _{i=1}^n\\ell _\\tau ( Z_i - \\alpha X_i^) , $ where $Z_i = (Y_i - X_i^^*) \\mathbb {1} (Y_i \\le X_i^^* ) + \\alpha X_i^^*$ .", "For any $\\tau >0$ , $\\hat{\\theta }^{{\\rm ora}}_\\tau $ is an $M$ -estimator of its population counterpart $\\theta ^*_\\tau = _{\\theta \\in ^p} \\lbrace \\ell _\\tau ( Z_i - \\alpha X_i^) \\rbrace .$ Let $\\psi _\\tau (t) =\\ell _\\tau ^{\\prime }(t) = (t) \\min ( |t|, \\tau )$ be the Huber's score function.", "By the convexity of the Huber loss, $\\theta ^*_\\tau $ must satisfy the first-order condition $\\lbrace \\psi _\\tau ( Z_i - \\alpha X_i^^*_\\tau ) X_i \\rbrace = 0$ .", "On the other hand, define the ES deviations $\\omega _i = Z_i - \\alpha X_i^^*$ , satisfying $(\\omega _i | X_i) = 0$ and $(\\omega _i) =0$ .", "Since the conditional distribution of $\\omega _i$ given $X_i$ is asymmetric, in general we have $\\lbrace \\psi _\\tau ( Z_i - \\alpha X_i^^*) X_i \\rbrace = \\lbrace \\psi _\\tau (\\omega _i) X_i \\rbrace \\ne 0$ , which in turn implies that $\\theta ^*_\\tau \\ne \\theta ^*$ .", "We thus refer to their difference under the $\\ell _2$ -norm, $\\Vert \\theta ^*_\\tau - \\theta ^* \\Vert _2$ , as the robustification bias.", "Proposition REF provides an upper bound for the robustification bias, which depends on $\\tau $ and some moment parameter.", "In particular, $\\tau $ needs to diverge for the robustification bias to diminish.", "Assume that $\\varepsilon := Y - X^^*$ satisfies $\\textnormal {var}_X\\lbrace \\varepsilon \\mathbb {1} (\\varepsilon \\le 0) \\rbrace \\le \\sigma ^2$ almost surely for some constant $\\sigma ^2$ , and that $\\kappa _4 = \\sup _{u\\in \\mathbb {S}^{p-1}} \\langle u, \\Sigma ^{-1/2} X \\rangle ^4 < \\infty $ , where $\\Sigma = (X X^ $ is positive definite.", "Then, for any $\\tau \\ge 2 \\kappa _4^{1/4} \\sigma $ , we have $\\Vert \\theta ^*_\\tau - \\theta ^* \\Vert _\\Sigma \\le 2 \\sigma ^2 / (\\alpha \\tau )$ .", "In Section REF , we investigate the finite-sample properties of the robust ES estimator $\\hat{\\theta }_\\tau $ obtained via (REF ) and (REF ): our results include a deviation inequality for $\\Vert \\hat{\\theta }_\\tau - \\theta ^*\\Vert _\\Sigma $ (Theorem REF ), the Bahadur representation (Theorem REF ), and a Berry-Esseen bound for linear projections of $\\hat{\\theta }_\\tau $ and $\\hat{\\theta }^{{\\rm ora}}_\\tau $ (Theorem REF ).", "With a properly chosen $\\tau $ that is of order $\\tau \\asymp \\sigma \\sqrt{ n/p}$ , we will show that 
$\\alpha \\Vert \\hat{\\theta }_\\tau - \\theta ^*\\Vert _\\Sigma \\lesssim \\sigma \\sqrt{ p / n}$ with high probability.", "Moreover, for any deterministic vector $a\\in ^p$ , the standardized statistic $\\alpha \\sqrt{n} \\langle a, \\hat{\\theta }_\\tau - \\theta ^* \\rangle / \\varrho _{a}$ converges in distribution to $(0, 1)$ , where $\\varrho ^2_{a } = a^^{-1} \\Omega \\Sigma ^{-1} a$ and $\\omega = (Y - X^^*)\\mathbb {1}(Y \\le X^^*) + \\alpha X^\\beta ^* - \\theta ^*)$ .", "Our theoretical analysis reveals two attractive properties of the adaptive Huberized ES estimator $\\hat{\\theta }_\\tau $ : (i) the non-asymptotic deviation upper bounds for $\\hat{\\theta }_\\tau $ are much smaller in order than those for $\\hat{\\theta }$ at any given confidence level, and (ii) the asymptotic relative efficiency of $\\hat{\\theta }_\\tau $ to $\\hat{\\theta }$ is one.", "Moreover, Theorem REF shows that the two-step robust estimator (with estimated conditional quantiles) is asymptotically equivalent to the oracle Huberized estimator (REF ) (assuming $\\beta ^*$ were known).", "This further justifies the usefulness of the Neyman orthogonal score, which makes the QR estimation error first-order negligible.", "Consistent estimators of $\\Sigma $ and $\\Omega = ( \\omega ^2 X X^$ are useful for statistical inference.", "Given the pair of quantile-ES regression estimators $(\\hat{\\beta }, \\hat{\\theta }_\\tau )$ , with slight abuse of notation we use $\\hat{\\varepsilon }_i$ and $\\hat{\\omega }_i$ to denote the fitted QR and ES residuals as in (REF ) except with $\\hat{\\theta }$ replaced by $\\hat{\\theta }_\\tau $ .", "As discussed in Section REF , a naive estimate of $\\Omega $ is $\\hat{\\Omega }= (1/n) \\sum _{i=1}^n\\hat{w}_i^2 X_i X_i^.", "In the presence of heavy-tailed errors $ i$, even the ``oracle\" estimate $ = (1/n) i=1nwi2 Xi Xi performs poorly and tends to overestimate.", "Motivated by Huber regression, we further propose a simple truncated estimator of $\\Omega $ given by $\\hat{\\Omega }_\\gamma = \\frac{1}{n} \\sum _{i=1}^n\\psi _\\gamma ^2(\\hat{\\omega }_i) X_i X_i^ $ where $\\gamma = \\gamma (n,p) >0$ is a second robustification parameter.", "Consequently, we construct approximate 95% robust confidence intervals for $\\theta ^*_j$ 's as $\\bigg [ \\hat{\\theta }_{\\tau , j } - \\frac{ 1.96}{\\alpha \\sqrt{n}} ( \\hat{\\Sigma }^{-1} \\hat{\\Omega }_\\gamma \\hat{\\Sigma }^{-1} )^{1/2}_{jj} , \\, \\hat{\\theta }_{\\tau , j } + \\frac{1.96}{\\alpha \\sqrt{n}} ( \\hat{\\Sigma }^{-1} \\hat{\\Omega }_\\gamma \\hat{\\Sigma }^{-1} )^{1/2}_{jj} \\bigg ] , \\ \\ j= 1,\\ldots , p .$ The convergence rate of $\\hat{\\Omega }_\\gamma $ with a suitably chosen $\\gamma $ will be discussed in Section REF .", "As discussed previously, the robustification parameter $\\tau $ plays an important role in balancing the bias and robustness (against heavy-tailed error distributions).", "The former is due to the asymmetric nature of the ES residual $\\omega = \\varepsilon \\mathbb {1}(\\varepsilon \\le 0) + \\alpha X^\\beta ^* - \\theta ^*)$ with $\\varepsilon = Y - X^^*$ .", "Under a second moment assumption, that is, $\\textnormal {var}_X\\lbrace \\varepsilon \\mathbb {1}(\\varepsilon \\le 0) \\rbrace \\le \\sigma ^2$ (almost surely) for some $\\sigma >0$ , Theorem REF shows that $\\tau $ should be of order $\\sigma \\sqrt{n/ p}$ so that the resulting ES estimator $\\hat{\\theta }_\\tau $ satisfies sub-Gaussian deviation bounds.", "To choose the tuning parameter $\\tau $ in practice, we employ a 
recently developed data-driven mechanism [62].", "This method is inspired by the censored equation approach proposed by [30], which was originally introduced as a proof technique for deriving robust weak convergence theory for self-normalized sums.", "Given an initial QR estimator $\\hat{\\beta }$ , define the generated response variables $\\hat{Z}_i =(Y_i - X_i^\\beta ) \\mathbb {1} (Y_i \\le X_i^\\beta ) + \\alpha X_i^\\beta , \\ \\ i =1 ,\\ldots , n .$ The proposed $\\tau $ -calibration procedure is iterative, starting at iteration 0 with an initial estimate $\\theta ^{ 0 } = \\hat{\\theta }$ , which is the two-step ES estimator given in (REF ) or equivalently (REF ).", "At iteration $t = 0, 1, 2, \\ldots $ , it solves a censored equation to update its estimate $\\theta ^t \\in ^p$ , producing $\\theta ^{ t +1 } \\in ^p$ .", "The procedure involves two steps.", "Using the current estimate $\\theta ^t$ , compute the ES “residuals\" $\\omega ^t_i = \\hat{Z}_i - \\alpha X_i^^t$ .", "Let $\\tau ^t >0$ be the solution to the equation $\\frac{1}{n} \\sum _{i=1}^n\\frac{ ( | \\omega ^t_i | \\wedge \\tau )^2}{\\tau ^2} = \\frac{p + \\log n }{n} .$ By Proposition 3 of [62], this equation has a unique solution provided that $\\sum _{i=1}^n\\mathbb {1}(| \\omega ^t_i | >0) > p + \\log n$ .", "Compute the updated estimate $\\theta ^{t+1} \\in _{\\theta \\in ^p} \\sum _{i=1}^n\\ell _{\\tau ^t} (\\hat{Z}_i - \\alpha X_i^)$ .", "This convex optimization problem can be solved via either the iteratively reweighted least squares (IRLS) algorithm or the Barzilai-Borwein gradient descent method [6].", "Repeat the above two steps until convergence or until the maximum number of iterations is reached." ], [ "Statistical Theory", "Throughout this section, we write $X=(x_1, \\ldots , x_p)^^p$ with $x_1 \\equiv 1$ so that $\\beta ^*_1$ and $\\theta ^*_1$ denote the intercepts.", "Without loss of generality, we assume that the random predictors $x_2, \\ldots , x_p$ have zero means, that is, $\\mu _j = (x_j)=0$ for $j=2,\\ldots , p$ .", "This makes the later sub-Gaussian assumption more reasonable; see Condition (REF ).", "Otherwise, we set $\\widetilde{X} = (1, \\tilde{x}_2, \\ldots , \\tilde{x}_p)^ (1, x_2 - \\mu _2, \\ldots , x_p-\\mu _p)^.", "With this notation, the joint model (\\ref {joint.reg.model}) becomes{\\begin{@align*}{1}{-1}Q_\\alpha ( Y | X ) = \\tilde{\\beta }^*_0 + \\sum _{j=2}^p \\tilde{x}_j \\beta ^*_j , \\qquad {\\rm ES}_\\alpha ( Y | X ) = \\tilde{\\theta }^*_0 + \\sum _{j=2}^p \\tilde{x}_j \\theta ^*_j,\\end{@align*}}where $ *1 = *1 + j=2p j *j$ and $ *1 = *1 + j=2p j *j$.", "The sub-Gaussian assumption can then be imposed on $ X$, and our analysis naturally applies to $ {(Yi , Xi ) }i=1n$.$" ], [ "Two-step Joint Quantile and ES Regression", "Recall the two-step $\\alpha $ -ES estimator $\\hat{\\theta }$ in (REF ).", "In this section, we establish the theoretical properties of $\\hat{\\theta }$ under the regime in which $p$ is allowed to grow with $n$ .", "We start with some conditions on the covariates and the conditional distribution of $Y$ given $X$ .", "The conditional CDF $F_{\\varepsilon | X}$ of $\\varepsilon := Y - X^^*$ given $X$ is continuously differentiable and satisfies $| F_{\\varepsilon | X}(t) -F_{\\varepsilon | X}(0) | \\le f |t|$ for all $t\\in $ .", "Moreover, the negative part of $\\varepsilon $ , denoted by $\\varepsilon _- = \\varepsilon \\wedge 0$ , satisfies $\\textnormal {var}_X(\\varepsilon _-) \\le \\sigma ^2 ~\\mbox{ almost surely (over } X),$ where $\\textnormal 
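Before turning to the theory, the two-step procedure with the adaptive Huber second step and the data-driven $\tau $-calibration described above can be summarized in a few lines of code. The first stage below uses statsmodels' QuantReg (the paper recommends convolution-smoothed QR for large problems), the Huber step is solved by IRLS, and the simulated design, loop counts, and tolerances are assumptions of this sketch rather than the authors' implementation.

```python
# Minimal sketch of the two-step procedure with the adaptive Huber second step and the
# data-driven tau-calibration described above. The first stage uses statsmodels'
# QuantReg (the paper recommends convolution-smoothed QR for large problems), the Huber
# step is solved by IRLS, and the simulated design, loop counts, and tolerances are
# assumptions of this sketch rather than the authors' implementation.
import numpy as np
from scipy.optimize import brentq
import statsmodels.api as sm

def solve_tau(w, p):
    """Solve (1/n) sum_i min(w_i^2, tau^2) / tau^2 = (p + log n) / n for tau."""
    n = len(w)
    rhs = (p + np.log(n)) / n
    g = lambda tau: np.mean(np.minimum(w**2, tau**2)) / tau**2 - rhs
    lo = np.min(np.abs(w[np.abs(w) > 0]))
    hi = np.sqrt(np.mean(w**2) / rhs)          # root when no observation is truncated
    if g(lo) <= 0:
        return lo
    return hi if g(hi) >= 0 else brentq(g, lo, hi)

def huber_wls(X, Zhat, alpha, tau, theta, iters=50):
    """IRLS for argmin_theta sum_i huber_tau(Zhat_i - alpha * x_i' theta)."""
    for _ in range(iters):
        r = Zhat - alpha * (X @ theta)
        w = np.minimum(1.0, tau / np.maximum(np.abs(r), 1e-12))
        Xw = X * w[:, None]
        theta = np.linalg.solve(alpha**2 * X.T @ Xw, alpha * Xw.T @ Zhat)
    return theta

def adaptive_huber_es(X, Zhat, alpha, theta0, outer=20):
    """Alternate between calibrating tau and re-fitting the Huberized ES regression."""
    theta = theta0.copy()
    for _ in range(outer):
        tau = solve_tau(Zhat - alpha * (X @ theta), X.shape[1])
        theta = huber_wls(X, Zhat, alpha, tau, theta)
    return theta, tau

# Example on simulated data (all settings are placeholders):
rng = np.random.default_rng(0)
n, p, alpha = 5000, 5, 0.1
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
Y = X @ np.ones(p) + rng.standard_t(df=3, size=n)

beta_hat = sm.QuantReg(Y, X).fit(q=alpha).params             # first-stage QR fit
eps = Y - X @ beta_hat
Zhat = eps * (eps <= 0) + alpha * (X @ beta_hat)             # generated responses
theta_ls = np.linalg.solve(alpha * X.T @ X, X.T @ Zhat)      # plain two-step estimate
theta_rob, tau = adaptive_huber_es(X, Zhat, alpha, theta_ls)
print(np.round(theta_ls, 3), np.round(theta_rob, 3), round(float(tau), 2))
```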
{var}_X$ denotes the conditional variance given $X$ .", "The random covariate vector $X\\in ^p$ is sub-Gaussian, that is, there exists some (dimension-free) constant $\\upsilon _1 \\ge 1$ such that $(| u^W̰ | \\ge \\upsilon _1 t ) \\le 2 e^{-t^2/2 }$ for all $t\\ge 0$ and $u\\in \\mathbb {S}^{p-1}$ , where $W = \\Sigma ^{-1/2} X$ and $\\Sigma = (X X^$ is positive definite.", "Let $\\kappa _l = \\sup _{u\\in \\mathbb {S}^{p-1}} | u^W̰|^l$ for $l\\ge 1$ .", "Several remarks are in order.", "Condition REF states that the negative part of the QR residual $\\varepsilon =Y - X^^*$ has bounded (conditional) variance.", "For convenience, we assume $\\sigma $ is a constant in the technical analysis.", "More generally, one may assume $\\varepsilon =\\sigma (X) \\eta $ , where $\\sigma :^p \\rightarrow (0, \\infty )$ is a positive function on $^p$ (not necessarily bounded), and $\\eta $ is independent of $X$ satisfying $\\textnormal {var}(\\eta \\mathbb {1} (\\eta \\le 0) ) \\le \\sigma ^2$ .", "In this case, we only need an additional moment assumption on $\\sigma (X)$ , say $\\lbrace \\sigma (X)^4\\rbrace $ is bounded.", "Condition REF is mainly used to guarantee that population and empirical quantities (e.g., the objective function or the gradient function) are uniformly close to each other in a compact region.", "It can be replaced by a boundedness assumption, which will lead to similar results.", "For example, $X= (x_1, \\ldots , x_p)^ is compactly supported with either $ XCX$ or $ -1/2 X 2 BX$, where $ CX$ is an absolute constant and $ BX$ is usually proportional to $ p$.$ Assume Conditions REF and REF hold.", "Conditioned on the event $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\rbrace $ for some $r_0>0$ , and for any $\\delta \\in (0, 1/3]$ , the two-step $\\alpha $ -ES estimator $\\hat{\\theta }$ satisfies that, with probability at least $1-3 \\delta $ , $\\alpha \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma \\le 2 \\sigma \\sqrt{\\frac{ p}{n\\delta }} + f \\kappa _3 r_0^2 + C_1 \\upsilon _1^2 \\sqrt{\\frac{p + \\log (1/\\delta ) }{n}} \\cdot r_0 $ as long as $n \\ge C_2 \\kappa _4\\lbrace p + 2 \\log (2/\\delta ) \\rbrace $ , where $C_1, C_2>0$ are absolute constants.", "From (REF ) we see that the first-stage QR estimation error enters the final convergence rate through higher-order terms, that is, $r_0^2 + r_0 \\sqrt{p/n}$ .", "This is a direct consequence of the Neyman orthogonality condition that $\\partial _\\beta \\lbrace S_i(\\beta , \\theta ^* ) \\rbrace = 0$ .", "In a joint (linear) quantile and ES regression framework, next we provide the explicit convergence rate, as a function of $n$ and $p$ , for the QR estimator under standard regularity conditions.", "The conditional density function of $\\varepsilon $ given $X$ , denoted by $f_{\\varepsilon |X}$ , exists and is continuous on its support.", "Moreover, there exist constants ${f}, l_0 > 0$ such that $ f_{\\varepsilon |X}(0) \\ge {f}$ and $|f_{\\varepsilon |X}(t) - f_{\\varepsilon |X}(0) | \\le l_0 |t|$ for all $t\\in $ almost surely (over $X$ ).", "Assume Conditions REF and REF hold.", "For any $t\\ge 0$ , the QR estimator $\\hat{\\beta }$ given in (REF ) satisfies that, with probability at least $1-e^{-t}$ , $\\Vert \\hat{\\beta }- \\beta ^* \\Vert _\\Sigma \\le C_1 {f}^{-1} \\sqrt{\\frac{p+t}{n}}$ as long as $n \\ge C_2 l_0^2 {f}^{-4} (p+t)$ , where $C_1, C_2>0$ are constants depending only on $\\upsilon _1$ .", "Together, Theorem REF and Proposition REF show that with probability at least $1- 
\\delta $ , the two-step ES estimator $\\hat{\\theta }$ satisfies the bound $\\alpha \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma \\lesssim \\sigma \\sqrt{\\frac{ p}{n \\delta }} + \\frac{f}{ {f}^2} \\frac{p+\\log (1/\\delta )}{n}$ for all sufficiently large $n\\gtrsim \\max ( 1, l_0^2 {f}^{-4} ) \\lbrace p+ \\log (1/\\delta )\\rbrace $ .", "Using the $_{}(1)$ notation, the previous non-asymptotic bound immediately implies $\\alpha \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma = _{} ( \\sqrt{p/n})$ .", "Furthermore, from the proof of Theorem REF we see that $\\alpha ( \\hat{\\theta }- \\theta ^* ) = \\bigg ( \\frac{1}{n} \\sum _{i=1}^nX_i X_i^)^{-1} \\Bigg \\lbrace \\frac{1}{ n} \\sum _{i=1}^n\\omega _i X_i + _{} \\bigg (\\frac{p}{n} \\bigg ) \\Bigg \\rbrace , $ where $\\omega _i = \\varepsilon _i\\mathbb {1}(\\varepsilon _i\\le 0) + \\alpha X_i^\\beta ^* - \\theta ^*)$ .", "Because of the Neyman orthogonality, the QR estimation error is first-order negligible, and therefore does not affect the asymptotic distribution of $\\hat{\\theta }$ .", "When $p$ is fixed, applying the multivariate CLT to the linear term $(1/n)\\sum _{i=1}^n\\omega _i X_i$ in (REF ), we have $\\sqrt{n} ( \\hat{\\theta }- \\theta ^*) \\xrightarrow{} \\big ( 0 , \\, \\alpha ^{-2} \\Sigma ^{-1} \\Omega \\Sigma ^{-1} \\big ) ~\\mbox{ as } n \\rightarrow \\infty ,$ where $\\Omega = \\lbrace X X^_X ( \\varepsilon _- ) \\rbrace $ with $\\varepsilon _- = \\varepsilon \\wedge 0 $ .", "In comparison, consider the “oracle\" ES estimator $\\hat{\\theta }^{{\\rm ora}}$ , defined as $\\hat{\\theta }^{{\\rm ora}} \\in _{\\theta \\in ^p} \\frac{1}{n} \\sum _{i=1}^n( Y_i - X_i^)^2 \\mathbb {1}(Y_i\\le X_i^^* ) .", "$ As shown in [22], $\\sqrt{n} ( \\hat{\\theta }^{{\\rm ora}} - \\theta ^* ) \\xrightarrow{} ( 0 , \\Sigma ^{-1} \\Omega ^*_\\alpha \\Sigma ^{-1} )$ as $n\\rightarrow \\infty $ , where $\\Omega ^*_\\alpha = \\alpha ^{-1} \\lbrace X X^_X( \\varepsilon | \\varepsilon \\le 0 ) \\rbrace $ .", "By a straightforward calculation, we find that the two asymptotic covariance matrices $\\alpha ^{-2} \\Omega $ and $\\Omega ^*_\\alpha $ are closely connected through the identity $\\alpha ^{-2} \\Omega =\\Omega ^*_\\alpha + \\frac{1 - \\alpha }{\\alpha } \\big \\lbrace X X^X, \\beta ^* - \\theta ^* \\rangle ^2 \\big \\rbrace ,$ which also quantifies the efficiency gap between the two-step estimator and the oracle.", "In an increasing dimensional regime that $p=p_n\\rightarrow \\infty $ and $p =o(\\sqrt{n})$ as $n\\rightarrow \\infty $ , we further establish two Berry-Esseen bounds for linear projections of $\\hat{\\theta }$ .", "Define the ES residual $\\omega = \\varepsilon _- - _X (\\varepsilon _-) = \\varepsilon _- + \\alpha X^\\beta ^* - \\theta ^*)$ , such that $\\Omega = ( \\omega ^2 X X^$ .", "In addition to Conditions REF –REF , assume there exist constants ${\\sigma }, \\alpha _3>0$ such that $ \\textnormal {var}_X ( \\varepsilon _- ) \\ge {\\sigma }^2 ~\\mbox{ and }~_X ( | \\varepsilon _- |^3 ) \\le \\alpha _3 ~\\mbox{ almost surely over $X$} .$ Let $G=(G_1, \\ldots , G_p)^^p$ be a centered Gaussian random vector with covariance matrix $(G) = {\\rm Corr}( \\omega \\Sigma ^{-1} X )$ .", "Then we have $& \\sup _{t\\ge 0} \\Bigg | \\bigg ( \\max _{1\\le j\\le p} \\bigg | \\frac{ \\alpha \\sqrt{n} ( \\hat{\\theta }_j - \\theta _j^* ) }{ \\sqrt{(\\Sigma ^{-1} \\Omega \\Sigma ^{-1} )_{jj}} } \\bigg | \\le t \\bigg ) - \\bigg ( \\max _{1\\le j\\le p} |G_j| \\le t \\bigg ) \\Bigg | \\nonumber \\\\& \\le C_1 \\big \\lbrace 
\\rho _0^{-3/2} (\\log n)^{3/2} + \\rho _0^{-1} (\\log n)^{7/2} \\big \\rbrace \\frac{\\alpha _3 \\, p^{3/4} }{{\\sigma }^3 \\sqrt{n}} + C_2 ( f /{f}^2 \\vee \\alpha _3^{1/3})(\\log p)^{1/2} \\frac{p+\\log n}{{\\sigma } \\sqrt{n}} , $ where $\\rho _0 := \\lambda _{\\min }((G)) \\in (0, 1)$ , and $C_1, C_2>0$ are constants depending only on $\\upsilon _1$ .", "Under the same settings as Theorem REF , we have $\\sup _{a\\in ^p, \\, t\\in } \\big | \\big \\lbrace \\alpha \\sqrt{n} \\, a^ \\hat{\\theta }- \\theta ^* ) / \\varrho _a \\le t \\big \\rbrace - \\Phi (t) \\big | \\lesssim ( f /{f}^2 \\vee \\alpha _3^{1/3}) \\frac{p+\\log n}{ {\\sigma } \\sqrt{n}}, $ where $\\varrho _a^2 = a^^{-1} \\Omega \\Sigma ^{-1} a$ .", "From an asymptotic view, Theorem REF shows that any linear combination of the coordinates of $\\alpha \\sqrt{n}(\\hat{\\theta }- \\theta ^*)$ converges in distribution to the correspondent linear combination of $(0, \\Sigma ^{-1} \\Omega \\Sigma ^{-1})$ under array asymptotics $n, p \\rightarrow \\infty $ and the growth condition $p^2 = o(n)$ .", "This constraint is as expected because known multivariate central limit theorems do no apply when $p^2/n \\rightarrow \\infty $ .", "[51] constructed a counterexample showing that a general central limit theorem cannot hold if $p^2 / n \\rightarrow \\infty $ .", "On the other hand, the best known growth condition on $p$ that ensures the asymptotic normality of linear combinations of the standard QR estimator $\\hat{\\beta }$ is $p^3 (\\log n)^2 = o(n)$ [63], [34].", "That is, for any given (deterministic) vector $a \\in ^p$ , $\\sqrt{n} \\langle a, \\hat{\\beta }- \\beta ^* \\rangle \\xrightarrow{} \\big ( 0, \\tau (1-\\tau ) a^^{-1} \\Sigma \\Xi ^{-1} a \\big )$ as $n \\rightarrow \\infty $ subject to $p^3 (\\log n)^2 = o(n)$ , where $\\Xi = \\lbrace f_{\\varepsilon | X}(0) X X^$ and $f_{\\varepsilon | X}$ denotes the conditional density function of $\\varepsilon $ given $X$ .", "As discussed in [34], the order of $p$ , as an integrated part of the design conditions, crucially depends on the smoothness of the loss function, or equivalently, the score function.", "When $p$ is fixed, this together with the Cramér–Wold theorem implies $\\alpha \\sqrt{n}(\\hat{\\theta }- \\theta ^*) \\rightarrow (0, \\Sigma ^{-1} \\Omega \\Sigma ^{-1})$ in distribution as $n\\rightarrow \\infty $ ." 
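In practice these normal approximations are used through the plug-in intervals based on the sandwich matrix $\hat{\Sigma }^{-1} \hat{\Omega }\hat{\Sigma }^{-1}$; a compact sketch is given below. Passing a finite truncation level $\gamma $ replaces $\hat{\omega }_i$ by $\psi _\gamma (\hat{\omega }_i)$ and reproduces the robust variance estimate $\hat{\Omega }_\gamma $. The inputs are assumed to come from a two-step fit as sketched earlier; the function itself is not taken from the paper's code.

```python
# Compact sketch of the entrywise confidence intervals for theta* based on the plug-in
# sandwich matrix Sigma_hat^{-1} Omega_hat Sigma_hat^{-1}; passing a finite gamma
# truncates the fitted ES residuals as in Omega_hat_gamma and gives the robust
# intervals. Inputs (X, Y, beta_hat, theta_hat, alpha) are assumed to come from a
# two-step fit as sketched earlier; the function itself is not from the paper's code.
import numpy as np
from scipy.stats import norm

def es_confidence_intervals(X, Y, beta_hat, theta_hat, alpha, level=0.95, gamma=None):
    n = X.shape[0]
    eps = Y - X @ beta_hat
    w = eps * (eps <= 0) + alpha * (X @ (beta_hat - theta_hat))   # fitted ES residuals
    if gamma is not None:
        w = np.clip(w, -gamma, gamma)                             # psi_gamma(w): robust Omega
    Sigma = X.T @ X / n
    Omega = (X * (w**2)[:, None]).T @ X / n
    V = np.linalg.solve(Sigma, np.linalg.solve(Sigma, Omega).T)   # Sigma^{-1} Omega Sigma^{-1}
    se = np.sqrt(np.diag(V)) / (alpha * np.sqrt(n))
    zval = norm.ppf(0.5 + level / 2.0)
    return np.column_stack([theta_hat - zval * se, theta_hat + zval * se])

# Usage (with objects from the earlier simulated example):
#   cis = es_confidence_intervals(X, Y, beta_hat, theta_rob, alpha)
```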
], [ "Robust ES Regression", "In this section, we provide non-asymptotic upper bounds on $\\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _2$ for the Huberized two-step ES estimator $\\hat{\\theta }_\\tau $ defined in (REF ).", "Moreover, we establish a non-asymptotic Bahadur representation for $\\hat{\\theta }_\\tau $ , which is the key step towards a Berry-Esseen-type bound for Gaussian approximation.", "Assume Conditions REF and REF hold.", "For any $t>0$ , let $r_0>0$ be such that $r_0 \\lesssim \\sigma $ and $f r_0^2 \\lesssim \\sigma \\sqrt{(p+t) / n}$ .", "Then, the two-step robust $\\alpha $ -ES ($0<\\alpha \\le 1/2$ ) estimator $\\hat{\\theta }_\\tau $ with $\\tau \\asymp \\sigma \\sqrt{n/(p+t)}$ satisfies that, with probability at least $1-3 e^{-t}$ conditioned on the event $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\rbrace $ , $\\alpha \\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma \\le C_1 \\sigma \\sqrt{\\frac{p+t}{ n}} + C_2 \\bigg ( \\sqrt{\\frac{p+t}{n}} r_0 + f r_0^2 \\bigg )$ provided the sample size obeys $n \\ge C_3( p + t)$ , where $C_1$ –$C_3$ are positive constants depending only on $\\upsilon _1$ .", "[Bias-robustness tradeoff] The choice of $\\tau $ stated in Theorem REF is a reflection of the bias-robustness tradeoff.", "As discussed in Section REF , the robust estimator $\\hat{\\theta }_\\tau $ can be viewed as an $M$ -estimator of $\\theta ^*_\\tau = _{\\theta } \\lbrace \\ell _\\tau (Z_i - \\alpha X_i^) \\rbrace $ , which differs from the true ES regression coefficient $\\theta ^*$ due to the asymmetry of ES “residuals\" $\\omega _i = Z_i - \\alpha X_i^^*$ .", "Consider the decomposition $\\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma \\le \\underbrace{ \\Vert \\hat{\\theta }_\\tau - \\theta _\\tau ^* \\Vert _\\Sigma }_{{\\rm robustification~bias}} + \\underbrace{ \\Vert \\theta ^*_\\tau - \\theta ^* \\Vert _\\Sigma }_{{\\rm robust~estimation~error}} .$ As long as $\\tau \\gtrsim \\sigma $ under Condition REF , by Proposition REF , we have $\\alpha \\Vert \\hat{\\theta }_\\tau - \\theta _\\tau ^* \\Vert _\\Sigma \\le 2 \\sigma ^2 / \\tau $ .", "Moreover, conditioned on the event $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\rbrace $ , we have $\\alpha \\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma \\lesssim \\sigma \\sqrt{\\frac{p+t}{n}} + \\tau \\frac{p+t}{n} + \\frac{\\sigma ^2}{\\tau } + r_0 \\bigg ( \\sqrt{\\frac{p+t}{n}} + \\frac{\\sigma }{\\tau } \\bigg ) + r_0^2$ with high probability.", "We thus choose $\\tau \\asymp \\sigma \\sqrt{n/(p+t)}$ to minimize the upper bound as a function of $\\tau $ .", "Recall from Proposition REF that with probability at least $1-n^{-1}$ , $\\Vert \\hat{\\beta }- \\beta ^* \\Vert _\\Sigma \\lesssim \\sqrt{(p + \\log n) / n}$ as long as $n\\gtrsim p + \\log n$ .", "Combining the proof of Theorem REF with a discretization argument, we can obtain a more general result that holds for a range of $\\tau $ values.", "In addition to Condition REF , assume $_X ( | \\varepsilon _- |^k ) \\le \\alpha _k$ almost surely (over $X$ ) for some $k>2$ .", "Then, for all $\\tau $ satisfying $\\sigma \\lesssim \\tau \\lesssim \\sigma \\sqrt{n/(p+\\log n)}$ , the corresponding ES regression estimator $\\hat{\\theta }_\\tau $ satisfies with probability $1-Cn^{-1}$ that $\\alpha \\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma \\lesssim \\sigma \\sqrt{\\frac{p+\\log n}{n}} + \\frac{\\alpha _k}{\\tau ^{k-1}} + \\text{higher order terms}$ holds uniformly 
over $\\tau $ , where the “higher order terms\" stem from the first-step quantile regression estimation error.", "In other words, a data-adaptive choice of $\\tau $ within the aforementioned range can be used.", "To achieve tight (finite-sample) concentration bounds, the order of the robustification parameter $\\tau =\\tau (n,p)$ should be no larger than $\\sqrt{n/(p+\\log n)}$ .", "On the other hand, $\\tau $ should exhibit a sufficiently fast growth so that the bias term $\\sigma \\tau ^{-1}$ or $\\alpha _k \\tau ^{1-k}$ (if higher-order moments are bounded) decays as fast as the stochastic error.", "For any $\\delta \\in (0,1)$ , the robust estimator $\\hat{\\theta }_\\tau $ with $\\tau \\asymp \\sigma \\sqrt{n/(p+\\log (1/\\delta ))}$ satisfies with probability at least $1-\\delta $ conditioned on $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\rbrace $ that $\\alpha \\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma \\lesssim \\sigma \\sqrt{\\frac{p+\\log (1/\\delta ) }{n}} + \\sqrt{\\frac{p+\\log (1/\\delta )}{n}} r_0 + f r_0^2 .$ The above bound is proportional to $\\log (1/\\delta )$ as opposed to the bound in (REF ) which is proportional to $1/\\delta $ .", "This indicates that the Huberized estimator is much more robust to heavy tails from a non-asymptotic perspective: when the regression error only has finite variance, the worst case deviations of $\\hat{\\theta }$ are much larger than that of $\\hat{\\theta }_\\tau $ .", "To achieve a tight deviation bound at $1-\\delta $ confidence level for any given $\\delta \\in (0, 1)$ , Theorems REF suggests that the robustification parameter $\\tau =\\tau (n,p)$ should be of order $\\sigma \\sqrt{n / (p + \\log \\delta ^{-1} )}$ , where $\\sigma ^2>0$ is an upper bound on the (conditional) variance of $\\varepsilon _-= \\min \\lbrace Y - X^^*, 0\\rbrace $ .", "Since $\\sigma $ is typically unknown in practice, a rule of thumb is to replace it by the sample standard deviation of the negative QR residuals $\\varepsilon _{i,-} = \\min \\lbrace Y_i - X_i^\\beta , 0 \\rbrace $ , denoted by $\\hat{\\sigma }$ , where $\\hat{\\beta }$ is the first-stage QR estimator.", "By taking $\\hat{\\tau }= \\hat{\\sigma }\\sqrt{n/(p+ \\log \\delta ^{-1} )}$ , the resulting estimator is also location and scale equivariant.", "Recall that in Section REF , we describe a slightly more sophisticated data-driven method for choosing $\\tau $ , which is adapted from that in [62].", "All the numerical studies in Section  are based on this tuning scheme because it consistently outperforms the aforementioned rule of thumb.", "Unlike the standard two-step estimator $\\hat{\\theta }$ , its robust counterpart $\\hat{\\theta }_\\tau $ does not have a closed-form expression.", "As the main building block toward deriving Gaussian approximation results, the next theorem provides a non-asymptotic Bahadur representation for $\\hat{\\theta }_\\tau $ with explicit error bounds that depend on $(n, p)$ and the first-stage QR estimation error.", "Assume the same conditions as in Theorem REF .", "For any $t>0$ , the $\\alpha $ -ES estimator $\\hat{\\theta }_\\tau $ with $\\tau \\asymp \\sigma \\sqrt{ n/(p+t)}$ satisfies that, with probability at least $1- 6 e^{-t}$ conditioned on $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\rbrace $ , $\\bigg \\Vert \\alpha \\Sigma ^{1/2} ( \\hat{\\theta }_\\tau - \\theta ^* ) - \\frac{1}{n} \\sum _{i=1}^n\\psi _\\tau (\\omega _i) W_i \\bigg \\Vert _2 \\lesssim \\sigma \\frac{p+t}{ n} + f r_0^2 + 
r_0 \\sqrt{\\frac{p\\log n + t}{n}} $ as long as $n \\gtrsim p + t$ , where $W_i = \\Sigma ^{-1/2} X_i$ .", "Finally, we have the following Gaussian approximation result, which bounds the Kolmogorov distance between the distribution of the standardized statistic $\\alpha \\sqrt{n}\\, a^\\hat{\\theta }_\\tau - \\theta ^*) / \\varrho _{a } $ and the standard normal distribution uniformly over all (deterministic) vectors $a \\in ^p$ , where $\\varrho _{a }^2 = a^^{-1} \\Omega \\Sigma ^{-1}a $ is the same as in Theorem REF .", "A similar conclusion applies to the oracle robust estimate $\\hat{\\theta }^{{\\rm ora}}_\\tau $ (REF ).", "The following theorem shows that the two-step robust estimator obtained via (REF ) and (REF ) is asymptotically equivalent to the oracle Huberized estimator (REF ) (assuming $\\beta ^*$ were known).", "Under the conditions of Theorem REF , the robust $\\alpha $ -level ($\\alpha \\in (0, 1/2]$ ) ES estimator $\\hat{\\theta }_\\tau $ with $\\tau \\asymp \\sigma \\sqrt{ n/(p+\\log n )}$ satisfies $\\sup _{a\\in ^p, \\, t\\in } \\, & \\big | \\big ( \\alpha \\sqrt{n} \\langle a, \\hat{\\theta }_\\tau - \\theta ^* \\rangle / \\varrho _{a } \\le t \\big ) - \\Phi (t) \\big | \\nonumber \\\\& \\lesssim \\frac{\\alpha _3 }{ {\\sigma }^3 } \\sqrt{\\frac{p+\\log n}{n}} + ( f /{f}^2 \\vee \\alpha _3^{1/3} ) \\frac{ p \\sqrt{\\log n} + \\sqrt{p} \\log n }{ {\\sigma } \\sqrt{n}} .", "$ Moreover, the oracle Huberized ES estimator $\\hat{\\theta }^{{\\rm ora}}_\\tau $ (REF ) with the same $\\tau $ satisfies $\\sup _{a\\in ^p, \\, t\\in } \\, & \\big | \\big ( \\alpha \\sqrt{n} \\langle a, \\hat{\\theta }^{{\\rm ora}}_\\tau - \\theta ^* \\rangle / \\varrho _{a } \\le t \\big ) - \\Phi (t) \\big | \\lesssim \\frac{\\alpha _3 }{ {\\sigma }^3 } \\sqrt{\\frac{p+\\log n}{n}} .", "$ The above Gaussian approximation result lays the theoretical foundation for the statistical inference problems of testing the linear hypothesis $H_0: a^^* = c_0$ versus $H_1: a^^* \\ne c_0$ and constructing confidence intervals for $a^^*$ , where $a\\in ^p$ and $c_0 \\in $ are predetermined.", "Given the joint quantile and ES regression estimates $(\\hat{\\beta }, \\hat{\\theta }_\\tau )$ , let $\\hat{\\Omega }_\\gamma $ be the truncated estimator of $\\Omega = ( \\omega ^2 X X^$ defined in (REF ) with $\\gamma =\\gamma (n,p)>0$ denoting a second robustification parameter.", "Then, we consider the robust test statistic $T_a = \\alpha \\sqrt{n} ( a^\\theta _\\tau - c_0) / \\hat{\\varrho }_{a, \\gamma }$ for testing $H_0:a^^* = c_0$ , and the (approximate) 100$(1- \\beta )$ % confidence interval $ a^\\theta _\\tau \\pm z_{\\beta /2} \\hat{\\varrho }_{a, \\gamma } /(\\alpha \\sqrt{n})$ for $a^^*$ , where $\\hat{\\varrho }_{a, \\gamma }^2 := a^\\Sigma ^{-1} \\hat{\\Omega }_\\gamma \\hat{\\Sigma }^{-1} a$ is a robust variance estimator and $z_{\\beta /2}$ is the upper $(\\beta /2)$ -quantile of $(0,1)$ .", "In view of Theorem REF , the validity of the above normal-based confidence construction for $a^^*$ (with a prespecified $a \\in ^p$ ) depends on the consistency of the robust variance estimate $\\hat{\\varrho }_{a, \\gamma }^2$ .", "We will show in the proof of Theorem REF that with probability at least $1-1/n$ , $\\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2 \\lesssim \\sqrt{(p+\\log n)/n}$ and thus $\\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2 = _{}( \\sqrt{(p+\\log n)/n})$ .", "The consistency of the truncated estimator $ \\hat{\\Omega 
}_\\gamma $ , on the other hand, is more subtle in the increasing-$p$ regime as it involves both estimated regression coefficients $\\hat{\\beta }$ and $\\hat{\\theta }_\\tau $ .", "To account for the estimation errors $\\Vert \\hat{\\beta }- \\beta ^* \\Vert _\\Sigma $ and $\\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma $ , define for each pair of radius parameters $r_0, r_1>0$ the subset $\\Theta (r_0, r_1) = \\big \\lbrace (\\beta , \\theta )\\in ^p \\times ^p : \\Vert \\beta -\\beta ^*\\Vert _\\Sigma \\le r_0, \\, \\alpha \\Vert \\theta - \\theta ^* \\Vert _\\Sigma \\le r_1 \\big \\rbrace .", "$ Conditioning on the event $\\lbrace (\\hat{\\beta }, \\hat{\\theta }_\\tau ) \\in \\Theta (r_0, r_1) \\rbrace $ for some $r_0$ and $r_1$ that determine the convergence rates of $\\hat{\\beta }$ and $\\hat{\\theta }_\\tau $ , respectively, we have $\\Vert \\Sigma ^{-1/2} ( \\hat{\\Omega }_\\gamma - \\Omega ) \\Sigma ^{-1/2} \\Vert _2 \\le \\sup _{ (\\beta , \\theta ) \\in \\Theta (r_0, r_1) } \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\psi ^2_\\gamma ( \\omega _i(\\beta , \\theta )) W_i W_i^ \\Sigma ^{-1/2} \\Omega \\Sigma ^{-1/2} \\bigg \\Vert _2 ,$ where $w_i(\\beta , \\theta ) = (Y_i - X_i^) \\mathbb {1}(Y_i \\le X_i^) + \\alpha X_i^\\beta - \\theta )$ and $W_i = \\Sigma ^{-1/2} X_i$ .", "The problem then boils down to controlling the supremum of $\\Vert (1/n) \\sum _{i=1}^n\\psi ^2_\\gamma ( \\omega _i(\\beta , \\theta )) W_i W_i^ ( \\omega _i^2 W_i W_i^ \\Vert _2$ over $(\\beta , \\theta )$ in a local neighborhood of $(\\beta ^*, \\theta ^*)$ .", "In addition to Conditions REF –REF , assume that $ _X ( | \\varepsilon _- |^3 ) \\le \\alpha _3 ~\\mbox{ almost surely over $X$} ~~\\mbox{ and }~~ \\max _{1\\le i\\le n} \\Vert W_i \\Vert _2 \\le C_0 \\sqrt{p}$ for some constants $\\alpha _3 , C_0 >0$ .", "Conditioning on $\\lbrace (\\hat{\\beta }, \\hat{\\theta }_\\tau ) \\in \\Theta (r_0, r_1) \\rbrace $ for any predetermined $r_0, r_1>0$ , it holds with probability at least $1-2/n$ that $& \\Vert \\Sigma ^{-1/2} ( \\hat{\\Omega }_\\gamma - \\Omega ) \\Sigma ^{-1/2} \\Vert _2 \\\\& \\lesssim \\max \\lbrace \\alpha _3^{1/2} , (\\sqrt{p}r)^{3/2} \\rbrace \\sqrt{\\gamma \\frac{ p \\log n}{n}} + \\gamma ^2 \\frac{ p \\log n}{n} + \\frac{\\alpha _3}{\\gamma } + ( \\sigma + r ) r$ as long as $n\\gtrsim p+\\log n $ , where $r = r_0+r_1$ .", "From Proposition REF and Theorem REF we see that the regression estimates $\\hat{\\beta }$ and $\\hat{\\theta }_\\tau $ with $\\tau \\asymp \\sigma \\sqrt{n/(p+\\log n)}$ satisfy with probability at least $1-C n^{-1}$ that $\\Vert \\hat{\\beta }- \\beta ^* \\Vert _\\Sigma \\le r_0\\asymp \\frac{1}{ {f}} \\sqrt{\\frac{p + \\log n}{n}} ~\\mbox{ and }~ \\alpha \\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma \\le r_1 \\asymp \\sigma \\sqrt{\\frac{p+\\log n}{n}} + \\frac{f}{ {f}^2}\\frac{p+\\log n}{n} .$ The corresponding event $\\lbrace (\\hat{\\beta }, \\hat{\\theta }_\\tau ) \\in \\Theta (r_0, r_1) \\rbrace $ holds with high probability, and $r = r_0 + r_1= ( \\sqrt{(p+\\log n)/n})$ .", "Furthermore, by taking $\\gamma \\asymp \\lbrace \\alpha _3 n / (p \\log n) \\rbrace ^{1/3}$ in Theorem REF , we conclude that up to multiplication constants depending on $(\\alpha _3, \\sigma , f, {f})$ , $\\Vert \\Sigma ^{-1/2} ( \\hat{\\Omega }_\\gamma - \\Omega ) \\Sigma ^{-1/2} \\Vert _2 \\lesssim \\bigg ( \\frac{p \\log n}{n} \\bigg )^{1/3}$ with high probability as long as $n\\gtrsim p^2$ .", "This ensures the consistency of $\\hat{\\varrho }^2_{a, \\gamma 
}$ , i.e.", "$| \\hat{\\varrho }^2_{a, \\gamma } / \\varrho ^2_a - 1| = o_{}(1)$ , under the constraint $p^2 = O(n)$ as $n\\rightarrow \\infty $ .", "The bounded covariates assumption $\\max _{1\\le i\\le n} \\Vert W_i \\Vert _2 \\le C_0 \\sqrt{p}$ in (REF ) is only imposed for technical convenience.", "In fact, under Condition REF , combining Theorem 2.1 in [35] with the union bound we have that with probability at least $1-1/n$ , $\\max _{1\\le i\\le n} \\Vert W_i \\Vert _2\\le C \\upsilon _0 \\sqrt{p+\\log n}$ for some absolute constant $C>1$ .", "We can modify the proof of Theorem REF to remove the bounded covariates assumption in (REF ) by using a truncation argument, namely that replacing $W_i$ by $W_i\\mathbb {1}( \\Vert W_i \\Vert _2 \\le C_0 \\sqrt{p+\\log n} )\\rbrace $ for each $1\\le i\\le n$ ($C_0 = C \\upsilon _0$ )." ], [ "Nonparametric Expected Shortfall Regression", "In this section, we consider nonparametric models for joint quantile and expected shortfall regression.", "For a predetermined quantile level $\\alpha \\in (0,1)$ , the goal is to estimate the unknown (conditional) quantile and expected shortfall functions $f^*_q(x) = Q_\\alpha (Y| X=x)$ and $f^*_e(x) = {\\rm ES}_\\alpha (Y | X=x)$ , with an emphasis on the latter.", "By (REF ), $f^*_q$ and $f^*_e$ can be identified as $f^*_q = _{f_q} \\rho _\\alpha ( Y - f_q(X) ) ~~\\mbox{ and }~~ f^*_e = _{f_e } \\lbrace Y - f_e(X) \\rbrace ^2 \\mathbb {1}_{\\lbrace Y \\le f^*_q(X) \\rbrace } .$ Motivated by the two-step procedure developed under joint linear models, in the following we propose a nonparametric ES estimator using the series regression method [25], [4], [47].", "Such a nonparametric estimate is carried out by regressing the dependent variable on an asymptotically growing number of approximating functions of the covariates, and therefore is closely related to the estimator define in (REF ) under the so-called many regressors model [11], that is, the dimension $p=p_n$ is allowed to grow with $n$ .", "The idea of series estimation is to first approximate $f^*_q$ and $f^*_e$ by their “projections” on the linear spans of $m_1$ and $m_2$ series/basis functions, respectively, and then fit the coefficients using the observed data.", "Specifically, we approximate functions $f^*_q$ and $f^*_e$ by linear forms $U(x)^$ and $V(x)^$ , where $U(x)= (u_1(x), \\ldots , u_{m_1}(x) )^~\\mbox{ and }~~ V(x)= (v_1(x), \\ldots , v_{m_2}(x) )^$ are two vectors of series approximating functions of dimensions $m_1$ and $m_2$ .", "Here both $m_1$ and $m_2$ may increase with $n$ .", "We thus define the vectors of quantile and ES series approximation coefficients as $\\beta ^* \\in _{\\beta \\in ^{m_1}} \\rho _\\alpha (Y - U(X)^) ~~\\mbox{ and }~~\\theta ^* \\in _{\\theta \\in ^{m_2}} \\lbrace Y - V(X)^\\rbrace ^2 \\mathbb {1}_{\\lbrace Y \\le f^*_q(X) \\rbrace } .", "$ Given independent observations $(Y_i, X_i)$ , $1\\le i\\le n$ from $(Y, X) \\in \\times $ with $$ denoting a compact subset of $^p$ , we write $U_i = U(X_i) \\in ^{m_1} ~~\\mbox{ and }~~ V_i = V_i(X_i) \\in ^{m_2}.$ Extending the two-step approach described in Section REF , we first define the (conditional) quantile series estimator of $f^*_q(x) = Q_\\alpha (Y| X=x)$ [11]: $\\hat{f}_q(x) = U(x)^\\beta , \\ \\ x\\in , ~\\mbox{ where }~ \\hat{\\beta }= \\hat{\\beta }_{m_1} \\in _{\\beta \\in ^{m_1}} \\frac{1}{n} \\sum _{i=1}^n\\rho _\\alpha ( Y_i - U_i^) .", "$ With nonparametrically generated response variables $\\hat{Z}_i := \\lbrace Y_i - \\hat{f}_q(X_i) \\rbrace \\mathbb 
{1} \\lbrace Y_i \\le \\hat{f}_q(X_i ) \\rbrace + \\alpha \\hat{f}_q(X_i)$ , the second-stage ES series estimator is given by $\\hat{f}_e(x) = V(x)^\\theta , \\ \\ x \\in , ~\\mbox{ where }~ \\hat{\\theta }= \\hat{\\theta }_{m_2} \\in _{\\theta \\in ^{m_2} } \\frac{1}{n} \\sum _{i=1}^n(\\hat{Z}_i - \\alpha V_i^)^2 .", "$ Commonly used series functions with good approximation properties include B-splines, polynomials, Fourier series and compactly supported wavelets.", "We refer to [47] and [16] for a detailed description of these series functions.", "In the context of quantile regression, [16] established the consistency and rate of convergence at a single quantile index.", "More recently, [11] developed large sample theory for quantile series coefficient process, including convergence rate and uniform strong approximations.", "The choice of the parameter $m_1$ , also known as the order of the series estimator, is crucial for establishing the balance between bias and variance.", "Note that the quantile series estimator $\\hat{f}_q$ in (REF ) has been well studied by [11].", "Because the number of regressors increases with the sample size, conventional central limit theorems are no longer applicable to capture the joint asymptotic normality of the regression coefficients.", "The growing dimensionality is the primary source of technical complication.", "Our theoretical analysis under the joint linear model (REF ), which leads to novel non-asymptotic high probability bounds, can be used as a starting point for studying the two-step nonparametric ES series estimator $\\hat{f}_e$ defined in (REF ).", "Of particular interest is to develop a uniform inference procedure for the conditional ES function $f^*_e$ and its theory.", "That is, at a given confidence level $1-\\gamma $ , we aim to construct a pair of functional estimates $[\\hat{f}^L_e, \\hat{f}^U_e]$ from $\\lbrace (Y_i, X_i)\\rbrace _{i=1}^n$ such that $\\big \\lbrace \\hat{f}^L_e(x) \\le f^*_e(x) \\le \\hat{f}^U_e(x) ~\\mbox{ for all } x \\in \\big \\rbrace \\rightarrow 1 - \\gamma , ~\\mbox{ as }~ n \\rightarrow \\infty .$ Since a significant amount of additional work is still needed, including explicit characterizations of the ES series approximation error and the impact of first-stage nonparametric QR estimation error, we leave a rigorous theoretical investigation of $\\hat{f}_e$ to future work.", "Although we have only focused on series methods, there are other nonparametric techniques which offer superior empirical and theoretical performance.", "Among those, deep neural networks have stood out as a promising tool for nonparametric estimation, from least squares, logistic to quantile regressions [58], [26], [60].", "It is practically useful to construct deep learning implementations of two-step estimators, and statistically important to deliver valid inference on finite-dimensional parameters following first-step estimation (of both quantile and ES functions) using deep learning.", "A detailed investigation of these problems is beyond the present scope but of future interest." 
], [ "Monte Carlo Experiments", "In this section, we assess the numerical performance of the proposed method for fitting expected shortfall regression.", "For its R implementation, we first obtain a QR estimate via the quantreg library [39], and in step two use the adaHuber library [49] to solve (REF ) with the robustification parameter selected adaptively as described in Section REF .", "[R code for fitting $\\alpha $ -level ES regression to data ${\\rm x}\\in ^{n\\times p}$ and ${\\rm y}\\in ^n$ ]   library(quantreg) library(adaHuber) qr_fit  <- rq(y$\\sim $ x, tau=alpha, method=`pfn') fit_q   <- x %*% qr_fit$coef[-1] + qr_fit$coef[1] ynew       <- (y - fit_q) * (y <= fit_q) / alpha + fit_q es_fit  <- adaHuber.reg(x, ynew, method=`adaptive') coef_e  <- es_fit$coef We compare the proposed two-step adaptive Huber ES estimator (2S-AH) to several competitors: (i) the joint regression estimate (joint) via FZ loss minimization, implemented by the R library esreg with the default option [10]; (ii) the two-step least squares estimator (REF ) (2S-LS), and (iii) the oracle two-step “estimator” (2S-oracle).", "Recall that the two-step procedure first obtains a QR estimator $\\hat{\\beta }$ via either standard [40] or smoothed QR regression [32], and subsequently computes the ES estimator based on fitted quantile thresholds $\\lbrace X_i^{\\beta } \\rbrace _{i=1}^n$ .", "The oracle method refers to the two-step ES estimate based on the true quantile thresholds $\\lbrace X_i^^* \\rbrace _{i=1}^n$ .", "In our simulation studies, we first generate $\\gamma ^*= (\\gamma _1^*, \\ldots , \\gamma _p^*)^ and $ *=( 1*, ..., p*) independently, where $\\gamma _j^*$ 's are independent Rademacher random variables and $\\eta _j^*\\sim _{{\\rm i.i.d.}}", "0.5 \\cdot {\\rm Bernoulli}(1/2)$ .", "Data are then generated from the heteroscedastic model $Y_i = X_i^^* + X_i^^* \\cdot \\varepsilon _i,$ where $X_i = (X_{i1}, \\ldots , X_{ip} )^ with $ Xij i.i.d.", "Unif(0,1.5)$, and the random noise $ i$ follows one of the following two distributions: (i) standard normal distribution, and (ii) $ t$-distribution with $ >2$ degrees of freedom ($ t$).", "Given $ *$ and $ *$, the true quantile and expected shortfall regression coefficients are$$\\beta ^* = \\gamma ^* + \\eta ^* \\cdot Q_{\\alpha }(\\varepsilon ) \\qquad \\mathrm {and}\\qquad \\theta ^* = \\gamma ^* + \\eta ^* \\cdot \\mathrm {ES}_{\\alpha }(\\varepsilon ),$$where $ Q()$ and $ ES()$ are the $$-level quantile and expected shortfall of $$, respectively.$ We first set the dimension $p=20$ and sample size $n = \\lceil {50 p/\\alpha }\\rceil $ , where the quantile level $\\alpha $ takes values in $\\lbrace 0.05,0.1,0.2\\rbrace $ .", "Simulation results on the relative $\\ell _2$ -error $ \\Vert \\hat{\\theta }-\\theta ^*\\Vert _2/\\Vert \\theta ^*\\Vert _2$ , averaged over 200 replications, are reported in Tables REF and REF under the $(0,1)$ and $t_{2.5}$ noise model, respectively.", "All four methods have very similar performance across different quantile levels in the normal model, while in the presence of heavy-tailed errors, the proposed robust method achieves consistently more favorable performance.", "This demonstrates that the use of adaptive Huber regression (in stage two) gains robustness against heavy-tailed errors without compromising statistical efficiency when the error distribution is light-tailed.", "In a more extreme setting where $\\alpha =0.01$ , Figure REF shows the boxplots of squared $\\ell _2$ -errors for three ES estimates (2S-LS, 2S-AH and 
joint) under the normal and $t_3$ models.", "Although the 2S-LS estimator is easy to compute, it is more sensitive to heavy-tailedness than the joint estimator obtained via FZ loss minimization.", "We further compare the proposed method with the joint regression approach in terms of computational efficiency.", "The computation times in seconds, averaged over 50 independent replications, for the two methods with growing $(n, p)$ subject to $n = \lceil 50 p/\alpha \rceil $ ($\alpha \in \lbrace 0.05, 0.1, 0.2\rbrace $ ) are reported in Figure REF .", "These numerical results show that our R implementation of the robust two-step method can be faster than the esreg library for the joint regression approach by several orders of magnitude.", "To shed some light on the drastic difference in numerical efficiency between the two methods, note that the joint regression approach [22] relies on the Nelder-Mead simplex method, which is sensitive to the starting values and is not guaranteed to converge to a local minimum.", "The convergence of the Nelder-Mead method is already very slow for large-scale problems because it is a direct search method based on function comparison.", "Due to this sensitivity to starting values, [22] proposed to re-optimize the model (several times) with the perturbed parameter estimates as new starting values.", "This explains, to some extent, the fast increase in runtime of esreg as both $n$ and $p$ grow.", "The function in quantreg that fits linear QR is coded in Fortran, and thus is very fast in larger problems.", "The computation of adaptive Huber regression is based on the Barzilai-Borwein gradient descent method [6], implemented via RcppArmadillo [24] in adaHuber.", "Next, we construct entrywise (approximate) 95% confidence intervals (CIs) for the expected shortfall regression parameter $\theta ^*$ .", "The CIs for the two-step estimators are based on (REF ) (non-robust) and (REF ) (robust), and we use the default option in the esreg package to implement [22]'s method.", "To evaluate the accuracy and reliability of the CIs, we compute the empirical coverage probability and interval width based on 500 independent replications, averaged over the $p$ slope coefficients.", "Results for $p=20$ and $n = \lceil 50 p/\alpha \rceil $ ($\alpha \in \lbrace 0.05,0.1,0.2\rbrace $ ) are reported in Tables REF and REF .", "Once again, all three methods perform similarly under normal errors, whereas under $t_{2.5}$ errors the robust approach gives the narrowest CIs while maintaining the desired coverage level.", "Together, the results in Tables REF and REF demonstrate the robustness of the proposed method, as indicated by the theoretical investigations in Section REF .", "Table: Mean relative $\ell _2$-error $\Vert \hat{\theta }-\theta ^*\Vert _2/\Vert \theta ^*\Vert _2$ (and standard error), averaged over 200 replications, when $\varepsilon _i \sim t_{2.5}$ , $p = 20$ , $n = \lceil 50 p/\alpha \rceil $ and $\alpha \in \lbrace 0.05,0.1,0.2\rbrace $ . Table: Empirical coverage probability and mean width (based on 500 replications) of 95% confidence intervals averaged over $p=20$ variables when $n = \lceil 50 p/\alpha \rceil $ , $\alpha \in \lbrace 0.05,0.1,0.2\rbrace $ and $\varepsilon _i \sim t_{2.5}$ . Figure: $t_3$ model with $(p,n)=(5, 10000)$ . Figure: $t_{2.5}$ noise, $\alpha =0.2$ . Table: Empirical coverage probability and mean 
width (based on 500 replications) of 95% confidence intervals averaged over $p=20$ variables when $n = \lceil 50 p/\alpha \rceil $ , $\alpha \in \lbrace 0.05,0.1,0.2\rbrace $ and $\varepsilon _i \sim \mathcal {N}(0, 1)$ . Table: Mean relative $\ell _2$-error $\Vert \hat{\theta }-\theta ^*\Vert _2/\Vert \theta ^*\Vert _2$ (and standard error), averaged over 200 replications, when $\varepsilon _i \sim \mathcal {N}(0,1)$ , $p = 20$ , $n = \lceil 50 p/\alpha \rceil $ and $\alpha \in \lbrace 0.05,0.1,0.2\rbrace $ ." ], [ "Data Application I: Health Disparity", "Iron deficiency is one of the most common nutritional deficiencies worldwide and one of the leading causes of anemia [15].", "Being able to detect iron deficiency is essential in medical care for patients with inflammation, infection, or chronic disease.", "It is also important in preventive care, since iron deficiency can be a sign of a more serious illness such as gastrointestinal malignancy [56].", "One measure of iron deficiency that has proven to be useful is the soluble transferrin receptor (sTRP), a carrier protein for transferrin [44].", "A high value of sTRP indicates iron deficiency.", "The scientific goal here is to assess whether there is any disparity in sTRP levels among four ethnic groups: Asian, Black, Mexican American, and White.", "To this end, we analyze a dataset obtained from the National Health and Nutrition Examination Survey from 2017 to 2020 (pre-COVID).", "In this dataset, the response variable sTRP was measured for female participants ranging in age from 20 to 49 years.", "The covariates of interest are three dummy variables that correspond to Asian, Mexican American and Black, using White as the baseline.", "We adjust for demographic variables such as age, education level, and healthy diet throughout our analysis.", "For simplicity, we remove all participants with missing values on the covariates; the final dataset consists of $n=1689$ observations and $p=7$ covariates.", "As an exploratory analysis, in Figure REF we plot the quantile curves of sTRP measurements at levels from 50% to 99% for each of the four ethnic groups.", "In this dataset, the sTRP values range from 1.24 to 35.1 mg/L.", "We note that the normal range for females is between 1.9 and 4.4 mg/L [38], and values much higher than 4.4 mg/L indicate severe iron deficiency.", "We see from Figure REF that the majority of the population have sTRP levels within the normal range.", "However, there are large disparities between Black and the other three ethnic groups, reflected in the higher quantiles of the marginal distributions of sTRP.", "To quantify the statistical significance of the aforementioned disparity, we fit robust expected shortfall regression at $\alpha = 0.75$ (upper tail), with the robustification parameter tuned by the procedure described in Section REF .", "This is equivalent to fitting the proposed 2S-AH method at level $1-\alpha $ (see Section ) after flipping the signs of both the response and the covariates.", "We also implement the standard quantile regression at level $\alpha $ .", "Table REF reports the estimated coefficients and the associated 95% confidence intervals for the three indicator covariates on the ethnic groups Asian, Mexican American and Black, using White as a baseline.", "We see that both the quantile and robust expected shortfall regression methods are able to detect a health disparity between 
Black and White.", "Specifically, the estimated robust ES regression coefficient and 95% CI (in parentheses) is 3.03 (1.88, 4.19), versus its QR counterpart of 0.86 (0.37, 1.35).", "With quantile regression (at level 0.75), we do not observe a statistically significant health disparity between Asian and White.", "In contrast, 2S-AH detects a health disparity between Asian and White, with estimated coefficient 2.34 (0.59, 4.09).", "We also see that quantile regression detects a health disparity between Mexican American and White, but the effect size is close to zero.", "In summary, ES regression complements QR and can be a more effective tool for detecting health disparities, especially when they occur only in the tail of the conditional distribution.", "Figure: The soluble transferrin receptor levels (mg/L) versus quantile levels (ranging from 0.5 to 0.99) for the female population in four different ethnic groups: Asian, Black, Mexican American and White.", "The orange horizontal dashed line indicates the upper bound of the normal range (1.9–4.4 mg/L) for transferrin receptor among females. Table: The estimated regression coefficients (and 95% confidence intervals) for three dummy variables: Asian, Black and Mexican American, using White as the baseline.", "Results of the upper-tail robust ES regression method 2S-AH and the standard quantile regression at quantile level $\alpha =0.75$ are reported." ], [ "Data Application II: Job Training Partnership Act", "We consider the Job Training Partnership Act (JTPA) study, a publicly funded program that provides training for adults with the goal of improving their earnings.", "Specifically, we focus on the Title II subprogram of the JTPA study, which is mainly offered to adults with barriers to employment and to out-of-school youths.", "This dataset was previously analyzed in [13].", "It consists of 30-month accumulated earnings for 6102 females and 5102 males, together with 16 covariates related to the demographics of the individuals, such as age, race, and an indicator of whether the individual received JTPA training.", "After removing individuals with zero income, there are 4576 males and 5296 females.", "Our goal is to assess the effect of JTPA training on participants' earnings, with an emphasis on the employed low-income population, for both the male and female subgroups.", "To this end, we fit an expected shortfall regression model using the proposed robust method with $\alpha \in \lbrace 0.05, 0.1, 0.2\rbrace $ .", "The robustification parameter $\tau $ is selected automatically via the procedure described in Section REF .", "Specifically, we regress the 30-month accumulated earnings on JTPA training to assess its effect on low-income individuals, adjusting for whether the individual completed high school, race, Hispanic/non-Hispanic, marital status, working less than 13 weeks in the past year, and age.", "We report the estimated regression coefficient for the binary variable JTPA training and its associated 95% confidence intervals.", "The results are summarized in Table REF .", "From Table REF , we see that the 95% confidence intervals for the robust method do not contain zero for all $\alpha \in \lbrace 0.05,0.1,0.2\rbrace $ .", "This indicates that JTPA training is statistically significant in improving earnings for the low-income population.", "Specifically, for the male subpopulation, the estimated ES effects of JTPA training are 283, 552, and 1093 dollars at levels $0.05$ , $0.1$ , 
and $0.2$ , respectively.", "To further assess whether the estimated effects are scientifically meaningful, we compute the average 30-month accumulated earnings below the quantile levels $0.05$ , $0.1$ , and $0.2$ for the male subgroup, which are 214, 566, and 1496 dollars, respectively.", "We find that JTPA training doubles the average income for individuals with income below the 0.05 and 0.1 quantile levels, and becomes less effective for individuals with higher income.", "Similar findings are also observed for the female subgroup.", "Table: The estimated regression coefficient of the binary predictor JTPA training (and its 95% confidence interval) for the proposed robust method and the standard quantile regression at quantile level $\alpha \in \lbrace 0.05,0.1,0.2\rbrace $ .", "Results are rounded to the closest integer." ], [ "Conclusion and Discussions", "This paper considers expected shortfall regression under a joint quantile and ES model recently proposed in [22] and [50].", "The existing approach is based on a joint $M$-estimator, defined as the global minimizer of any member of a class of strictly consistent joint loss functions [28] over some compact set.", "Since the loss function is non-differentiable and non-convex, the computation of such a joint $M$-estimator is intrinsically difficult, especially when the dimensionality is large.", "To circumvent the aforementioned challenge, [5] proposed a two-step procedure for estimating the joint quantile and ES model based on Neyman-orthogonalization: the first step involves fitting the quantile regression, and the second step employs the Neyman-orthogonal scores to estimate the ES parameters.", "Due to the use of the least squares method in the second step, the resulting estimator is sensitive to heavy-tailed error distributions.", "To address the robustness and computation concerns simultaneously, we propose a robust two-step method that applies adaptive Huber regression [64] in the second step.", "The key is the use of a diverging robustification parameter to balance the bias-robustness tradeoff, tuned by a convenient data-driven mechanism.", "The proposed method can be efficiently implemented using a combination of the R packages quantreg/conquer and adaHuber.", "We establish a finite-sample theoretical framework for this two-step method, including deviation bounds, Bahadur representations and (uniform) Gaussian approximations, in which the dimension of the model, $p$ , may depend on and increase with the sample size, $n$ .", "Robust confidence intervals/sets are also constructed.", "Numerical experiments further demonstrate that the proposed robust ES regression approach achieves satisfactory statistical performance, a high degree of robustness against heavy-tailed data, and superior computational efficiency and stability.", "Through a real data application to the Job Training Partnership Act study, we show that ES regression complements QR as a useful tool for exploring heterogeneous covariate effects on the average tail behavior of the outcome.", "Although we restrict attention to (joint) linear models in this work, our non-asymptotic theory and the underpinning techniques pave the way for analyzing (i) series/projection estimators under joint nonparametric quantile-ES models, and (ii) penalized estimators under high-dimensional sparse quantile-ES models.", "We leave these extensions to future research.", "One limitation of our data analysis for the JTPA study is that we do not account for potential selection bias.", "Specifically, as pointed out 
by [1], out of all subjects that were assigned to participate in the training program, only approximate 60% of them (compliers) actually committed to the training program.", "These individuals may simply have higher motivation in improving their earnings, and thus, the training status is likely positively correlated with potential income earnings.", "Generalizing the proposed method to estimating complier expected shortfall treatment effect, using an instrumental variable approach previously considered in [1], is another direction for future research.", "The ES regression methods considered in this paper are suited for a fixed quantile level $\\alpha \\in (0, 1)$ , independent of the sample size.", "For extreme quantiles satisfying $\\alpha =\\alpha _n \\rightarrow $ 0 or 1 as $n\\rightarrow \\infty $ , both the FZ loss minimization method (see (REF ) and ()) and two-step procedures perform poorly because observations become scarce at that level, i.e., $\\alpha n$ is not large enough.", "In fact, if dimension $p$ is fixed, Theorem REF and Theorem REF imply that the two-step ES regression estimates, robust and non-robust, are consistent if $\\alpha _n^2 n \\rightarrow \\infty $ as $n\\rightarrow \\infty $ .", "In the case where $\\alpha _n^2 n = O(1)$ , these methods are no longer useful and one may need to resort to extreme value theory [21], [61], which provides the statistical tools for a feasible extrapolation into the tail of the variable of interest.", "A more detailed discussion on modeling the extremes are deferred to Section B of the Appendix." ], [ "Expected Shortfall Regression without Crossing", "Recall the joint loss function $S(\\beta , \\theta ; Y, X)$ and the score function $S_0(\\beta , \\theta ; Y, X)$ given in (REF ).", "Under model (REF ), both the joint $M$ -estimator [22] and the two-step procedure [5] rely on the moment conditions $_X ( Y \\le X^^* ) = \\alpha ~\\mbox{ and }~ _X ( Y - X^^*) \\mathbb {1}( Y \\le X^^* ) = \\alpha X^\\theta ^* - \\beta ^*) .$ The latter follows from the fact that $_X\\lbrace Y | Y\\le Q_\\alpha \\rbrace = \\frac{_X \\lbrace Y \\mathbb {1}( Y\\le Q_\\alpha )\\rbrace }{\\alpha } = \\frac{_X ( Y - Q_\\alpha ) \\mathbb {1} ( Y\\le Q_\\alpha ) }{\\alpha } + Q_\\alpha ,$ where $Q_\\alpha = Q_\\alpha (Y|X)$ is the conditional $\\alpha $ -quantile of $Y$ given $X$ .", "By definition, the quantile and expected shortfall satisfy a monotonicity condition.", "At the population level, it holds under model (REF ) that $X_i^^* = {\\rm ES}_\\alpha (Y_i |X_i) \\le Q_\\alpha (Y_i|X_i) = X_i^^*, \\ \\ i = 1,\\ldots , n. 
$ This requirement, however, is not necessarily satisfied by any of the estimators described in Sections REF and REF .", "Even when both models (for quantile and ES) are correctly specified, for a dataset of modest size, the variability of the estimates may be large enough to upset the above inequality constraints enjoyed by their population counterparts.", "A large dataset may also contain scarce observations with fitted ES larger than fitted quantiles.", "To obtain non-crossing conditional quantile and ES regression estimates, in the following we propose a constrained two-step method and its robust counterpart by directly incorporating the constraints in (REF ) to the optimization programs.", "Let $\\hat{\\beta }\\in ^p$ be the QR estimator of $\\beta ^*$ , and set $\\hat{Z}_i = (Y_i - X_i^\\beta ) \\mathbb {1}(Y_i \\le X_i^\\beta ) + \\alpha X_i^\\beta , \\ \\ i = 1,\\ldots , n.$ In the second step, we define the non-crossing robust ES estimator $\\hat{\\theta }^{{\\rm nc}}_\\tau $ as a solution to the constrained Huber loss minimization problem $ \\begin{split}{\\rm minimize}_{\\theta \\in ^p} & ~~~~ \\frac{1}{n} \\sum _{i=1}^n\\ell _\\tau (\\hat{Z}_i - \\alpha X_i^) \\\\{\\rm subject~to} &~ ~~~ X_i^\\le X_i^\\beta , \\, i =1 ,\\ldots , n,\\end{split}$ where $\\ell _\\tau $ ($\\tau >0$ ) is the Huber loss.", "For simplicity, we denote $\\hat{\\theta }^{{\\rm nc}}= \\hat{\\theta }^{{\\rm nc}}_\\infty $ as the two-step ES estimator without crossing.", "For any given $\\beta \\in ^p$ , define the subset $_n(\\beta ) \\subseteq ^{ p}$ as $_n(\\beta ) = \\lbrace \\theta \\in ^p : X_i^\\le X_i^, \\, i=1 , \\ldots , n \\rbrace ,$ which is a polyhedron and thus is convex.", "Under the joint model (REF ), the true ES regression coefficient $\\theta ^*$ lies in the interior of $_n(\\beta ^*)$ , that is, $ X_i^^* < X_i^^* $ for all $i=1,\\ldots , n$ .", "Let $\\hat{\\beta }$ and $\\hat{\\theta }$ be the quantile and ES regression estimates given in (REF ) and (REF ), respectively.", "By Proposition REF , Theorem REF and Remark REF , we have $\\max _{1\\le i\\le n} | X_i^\\hat{\\beta }- \\beta ^* ) | \\le \\Vert \\hat{\\beta }- \\beta ^* \\Vert _\\Sigma \\cdot \\max _{1\\le i\\le n} \\Vert \\Sigma ^{-1/2} X_i \\Vert _2 = _{}\\bigg (\\frac{p + \\log n}{\\sqrt{n}} \\bigg ) , \\\\\\max _{1\\le i\\le n} | X_i^\\hat{\\theta }- \\theta ^* ) | \\le \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma \\cdot \\max _{1\\le i\\le n} \\Vert \\Sigma ^{-1/2} X_i \\Vert _2 = _{}\\bigg (\\frac{p + \\log n}{\\alpha \\sqrt{n}} \\bigg ) .$ Provided $\\delta := \\min _{1\\le i\\le n} X^i(\\beta ^* - \\theta ^*)$ is bounded away from zero, and under the condition $p^2 = o(n)$ as $n\\rightarrow \\infty $ , we see that with probability approaching one the ES estimate $\\hat{\\theta }= _\\theta \\sum _{i=1}^n\\ell _\\tau (\\hat{Z}_i - \\alpha X_i^) $ maintains the proper ordering at each of the data points, that is, $\\hat{\\theta }\\in _n(\\hat{\\beta })$ .", "This further implies $\\lbrace \\hat{\\theta }= \\hat{\\theta }^{{\\rm nc}} \\rbrace \\rightarrow 1$ as $n \\rightarrow \\infty $ .", "The same conclusion also applies to their robust counterparts.", "Therefore, the non-crossing estimator obtained via (REF ) is asymptotically equivalent to the vanilla ES estimator.", "Note that problem (REF ) with $\\tau =\\infty $ can be cast as a linearly constrained quadratic program ${\\rm QP}(C, d, A, b)$ that is of the form ${\\rm minimize}_{\\theta \\in ^p} ~ \\frac{1}{2} \\theta ^C̰ \\theta + d^~~~{\\rm ~subject~to}~~ A 
\\theta \\le b, $ where $C = (\\alpha ^2/n) \\sum _{i=1}^nX_i X_i^^{p \\times p}$ , $d = -(\\alpha /n) \\sum _{i=1}^n\\hat{Z}_i X_i \\in ^p$ , $A= (X_1, \\ldots , X_n)^^{n\\times p}$ and $b= ( X_1^\\beta , \\ldots , X_n^\\beta )^^n$ .", "This can be solved efficiently by the dual method [72] implemented in the quadprog package.", "To compute $\\hat{\\theta }^{{\\rm nc}}_\\tau $ for any given $\\tau >0$ , in principle one can use CVXR [71], an R library for disciplined convex optimization, to solve the inequality constrained optimization problem (REF ) via generic solvers.", "In fact, CVXR applies to any user-specified convex objective function, and therefore offers a high degree of flexibility.", "For the Huber loss in particular, such a generic toolbox is blind to the problem structure and tend to be much slower than standard quadratic programming solvers when the sample size and/or dimension are large.", "In the following, we take advantage of the special structure of the Huber loss and propose a tailored combination of the iteratively reweighted least squares (IRLS) algorithm and quadratic programming.", "For any $\\tau >0$ , it can be shown that the linearly constrained Huber loss minimization problem (REF ) is equivalent to (see Exercise 4.5 in [66]) $\\begin{split}{\\rm minimize}_{\\theta \\in ^p , w_1,\\ldots , w_n \\in } & ~~~~ \\sum _{i=1}^n\\bigg \\lbrace \\frac{(\\hat{Z}_i - \\alpha X_i^)^2 }{w_i + 1} + \\tau ^2 w_i \\bigg \\rbrace \\\\{\\rm subject~to} &~ ~~~ X_i^\\le X_i^\\beta , w_i \\ge 0 , i =1,\\ldots , n.\\end{split}$ This problem can be interpreted as a (constrained) weighted least squares problem, which naturally motivates an iteratively reweighted least squares algorithm as follows.", "Starting at iteration 0 with an initial estimate $\\theta ^0$ , say $\\theta ^0 = \\hat{\\theta }^{{\\rm nc}}$ , we repeat the following two steps until convergence.", "(i) At iteration $t=0, 1, 2, \\ldots $ , compute the residuals $\\lbrace \\omega ^t_i = \\hat{Z}_i - \\alpha X_i^^t \\rbrace _{i=1}^n$ .", "Let $\\tau ^t>0$ be the solution to the equation $ \\sum _{i=1}^n( | \\omega ^t_i | \\wedge \\tau )^2/\\tau ^2 = p + \\log n$ .", "(ii) Compute the weight of the $i$ th residual as $w_i^t= ( | \\omega _i^t | / \\tau ^t - 1 ) \\mathbb {1} ( | \\omega _i^t | > \\tau ^t )$ .", "Then use quadprog to solve the constrained weighted least squares problem ${\\rm minimize}_{\\theta \\in ^p} \\frac{1}{n} \\sum _{i=1}^n\\frac{(\\hat{Z}_i - \\alpha X_i^)^2}{1 + w_i^t} ~~{\\rm subject~to} ~ A \\theta \\le b$ to obtain $\\theta ^{t+1}$ , where the matrix $A$ and vector $b$ are as in (REF ).", "Note that this is a linearly constrained quadratic program, denoted by ${\\rm QP}(C^t, d^t, A, b)$ with $C^t = \\frac{\\alpha ^2}{n} \\sum _{i=1}^n\\frac{1}{1+w_i^t} X_i X_i^~\\mbox{ and }~ \\quad d^t = - \\frac{\\alpha }{n} \\sum _{i=1}^n\\frac{\\hat{Z}_i}{1+w_i^t}X_i .$ We end this section with an additional simulation study to demonstrate the effectiveness of non-crossing ES estimation in finite samples.", "We generate random samples $\\lbrace (Y_i, X_i) \\rbrace _{i=1}^n$ from the heteroscedastic error model $Y_i = 2 + X_i^^* + X_i^^* \\cdot \\varepsilon _i $ , where $X_i \\in ^p$ consists of independent $\\mathrm {Unif}(0, 2)$ components, $\\eta ^* = (0.5, 0.5, 0, \\ldots , 0)^^p$ and $\\varepsilon _i$ follows either the standard normal distribution or the $t_{2.5}$ -distribution.", "In the normal error case, we fix some $\\gamma ^* \\in ^p$ , generated from the uniform distribution on the unit sphere 
$\\mathbb {S}^{p-1}$ ; in the $t_{2.5}$ -error case, we set $\\gamma ^*\\in \\sqrt{5}\\, \\mathbb {S}^{p-1}$ .", "Figure REF shows the boxplots of squared $\\ell _2$ -errors for four different two-step ES regression estimates ($L_2$ , non-crossing $L_2$ , Huber and non-crossing Huber) at quantile level $\\alpha =0.1$ under the $(0,1)$ and $t_{2.5}$ -error models.", "The non-crossing estimates, computed by the IRLS-QP algorithm, achieve consistently more favorable performance.", "For algorithmic comparisons, from Figure REF we see that the proposed IRLS-QP algorithm shows significant improvement over CVXR implementation in computational efficiency while achieves the same level of statistical accuracy.", "The average runtime in seconds is $0.41$ versus $7.38$ in the normal model with $(n, p) = (5000, 10)$ , and $1.19$ versus $25.94$ in the $t_{2.5}$ model with $(n, p )= (8000, 10)$ .", "The reference machine for the above experiments is an iMac with a 3.7 GHz 6-Core Intel Core i5 processor and 32 GB of RAM.", "Figure: t 2.5 t_{2.5} model with (n,p)=(10000,10)(n,p)=(10000, 10)Figure: t 2.5 t_{2.5} model with (n,p)=(8000,10)(n,p)=(8000, 10)" ], [ "Modeling Extremes", "A commonly used approach for the estimation of extreme conditional quantiles models extremes by fitting a fully parametric model, such as generalized extreme value distribution or generalized Pareto distribution (GPD), where the location, shape and scale parameters are allowed to depend on covariates either parametrically or nonparametrically.", "For example, the CDF of the GPD with positive shape and scale parameters $\\xi $ and $\\sigma $ is $G_{\\xi , \\sigma }(x) = 1- (1+\\xi x / \\sigma )^{-1/\\xi }$ for $x\\ge 0$ ; see, e.g.", "Definition 7.16 in [45].", "Let $Y$ represent some loss random variable, and assume that for some high threshold $u$ , its excess distribution over $u$ , denoted by $F_u(x) = (Y-u \\le x \\, | Y > u) ~~\\mbox{ for }~ x \\ge 0 ,$ satisfies $F_u(x) = G_{\\xi , \\sigma }(x)$ for some $\\xi >0$ and $\\sigma >0$ .", "Following (7.17)–(7.19) in [45], we have that for $\\alpha \\ge \\alpha _0 = F(u)$ and $\\xi <1$ , the $\\alpha $ -quantile and upper $\\alpha $ -ES of $Y$ can be written as $ {\\left\\lbrace \\begin{array}{ll} \\vspace{5.69046pt}Q_\\alpha (Y) = Q_{\\alpha _0}(Y) + \\frac{\\sigma }{\\xi } \\big \\lbrace \\big ( \\frac{1-\\alpha _0}{1-\\alpha } \\big )^{ \\xi } - 1 \\big \\rbrace , \\\\{\\rm ES}^+_\\alpha (Y) = \\frac{1}{1-\\alpha } \\int _\\alpha ^1 Q_u(Y) {\\rm d}u = \\frac{Q_\\alpha (Y)}{1-\\xi } + \\frac{\\sigma - \\xi Q_{\\alpha _0}(Y) }{1- \\xi },\\end{array}\\right.", "}$ where the threshold $u$ is chosen as an intermediate quantile $Q_{\\alpha _0}(Y)$ at level $\\alpha _0 \\in (0, 1)$ close to 1 but fixed.", "It follows immediately that $\\lim _{\\alpha \\rightarrow 1} {\\rm ES}^+_\\alpha (Y) / Q_\\alpha (Y) = (1-\\xi )^{-1}$ .", "In the presence of covariates, we have a conditional version of (REF ): ${\\left\\lbrace \\begin{array}{ll} \\vspace{5.69046pt}Q_\\alpha (Y|X=x) = Q_{\\alpha _0}(Y|X=x) + \\frac{\\sigma (x)}{\\xi (x)} \\big \\lbrace \\big ( \\frac{1-\\alpha _0}{1-\\alpha } \\big )^{ \\xi (x)} - 1 \\big \\rbrace , \\\\{\\rm ES}^+_\\alpha (Y|X=x) = \\frac{Q_\\alpha (Y|X=x)}{1-\\xi (x)} + \\frac{\\sigma - \\xi (x) Q_{\\alpha _0}(Y|X=x) }{1- \\xi (x)} .\\end{array}\\right.", "}$ This shows that the estimation of ${\\rm ES}^+_\\alpha (Y|X=x)$ essentially depends on that of the extreme quantile $Q_\\alpha (Y|X=x)$ , which further requires estimates of the intermediate quantile 
$Q_{\\alpha _0}(Y|X=x)$ and of the conditional GPD parameters $\\xi (x)$ and $\\sigma (x)$ .", "For a more complete review of extreme quantile regression that dates back to [69], we refer to [81] and the references therein." ], [ "Expected Shortfall Autoregression: A Numerical Study", "In this section, we conduct additional empirical investigations in serial dependent settings where the covariates include lagged values of the response.", "Let $\\lbrace U_t\\rbrace _{t\\ge 1}$ be a sequence of i.i.d.", "$\\mathrm {Unif}(0,1)$ random variables, and $\\lbrace Z_t\\rbrace _{t\\ge 1}$ a sequence of $p$ -vectors of covariates that are independent of $\\lbrace U_t\\rbrace $ .", "Motivated by [73], we consider the following data generating mechanism $Y_t = \\beta _0(U_t) + \\sum _{j=1}^{q} \\beta _j(U_t) Y_{t-j} + Z_{t-1}^(U_t), $ where $\\theta _j: [0, 1] \\rightarrow $ ($0\\le j\\le q$ ) and $\\gamma : [0, 1]\\rightarrow ^p$ ($p\\ge 1$ ) are unknown functions.", "Provided the right-hand side of (REF ) is increasing in $U_t$ , for any $\\alpha \\in (0, 1)$ it holds $Q_\\alpha (Y_t | _{t-1} ) = \\beta _0(\\alpha )+ \\sum _{j=1}^{q} \\beta _j(\\alpha ) Y_{t-j} + Z_{t-1}^(\\alpha ) , $ where $_t$ is the $\\sigma $ -field generated by $\\lbrace (Y_s, Z_s)\\rbrace _{s\\le t}$ .", "Combining (REF ) with (), we further obtain the conditional expected shortfall of $Y_t$ given $_{t-1}$ as ${\\rm ES}_\\alpha (Y_t | _{t-1} ) = \\theta _0(\\alpha )+ \\sum _{j=1}^{q} \\theta _j(\\alpha ) Y_{t-j} + Z_{t-1}^(\\alpha ) , $ where $\\theta _j (\\alpha ) = \\alpha ^{-1}\\int _0^\\alpha \\beta _j(u) {\\rm d}u $ , $0\\le j\\le q$ and $\\eta (\\alpha ) = \\alpha ^{-1} \\int _0^\\alpha \\gamma (u) {\\rm d}u$ .", "Write $X_t = (1, Y_{t-1} , \\ldots , Y_{t-q} , Z_{t-1}^^^{p+q+1}$ .", "The above models can be expressed in a more compact form $Q_\\alpha (Y_t | _{t-1} )= X_t^_\\alpha ^* , \\quad {\\rm ES}_\\alpha (Y_t | _{t-1} ) = X_t^_\\alpha ^* , $ where $\\beta _\\alpha ^* = (\\beta _0(\\alpha ) , \\beta _1(\\alpha ), \\ldots , \\beta _q(\\alpha ), \\gamma (\\alpha )^^ and $ * = (0() , 1(), ..., q(), ().", "In particular, model (REF ) with $\\gamma \\equiv 0$ is named the QAR$(q)$ model by [73].", "We thus refer to (REF ) with $\\eta \\equiv 0$ as the ESAR$(q )$ model.", "To estimate $\\beta (\\alpha ) : = ( \\beta _0(\\alpha ), \\beta _1(\\alpha ), \\ldots , \\beta _q(\\alpha ) )^ at each $ (0 ,1)$ under the QAR$ (q)$ model, \\cite {KX2006} used the standard quantile regression estimate$$\\hat{\\beta }(\\alpha ) \\in _{ b \\in ^{q+1} } \\sum _{t=1}^T \\rho _\\alpha ( Y_t - X_t^b̰) ~\\mbox{ with }~ X_t = (1, Y_{t-1}, \\ldots , Y_{t-q})^$ and proved its asymptotic normality under certain distributional assumptions.", "For the estimation and inference of $\\theta (\\alpha ) : = ( \\theta _0(\\alpha ), \\theta _1(\\alpha ), \\ldots , \\theta _q(\\alpha ) )^under the ESAR$ (q)$ model (\\ref {es.ar}) with $ 0$, it is natural to consider one of the following two-step estimates:{\\begin{@align}{1}{-1}{\\left\\lbrace \\begin{array}{ll}\\hat{\\theta }_{{\\rm FZ}} (\\alpha ) \\in _{\\theta \\in ^{q+1}} \\frac{1}{T} \\sum _{t=1}^{T} S(\\hat{\\beta }(\\alpha ) , \\theta ; Y_t, X_t) , \\\\\\hat{\\theta }_{{\\rm LS}} (\\alpha ) = _{\\theta \\in ^{q+1}} \\frac{1}{T} \\sum _{t=1}^{T} S^2_0(\\hat{\\beta }(\\alpha ) , \\theta ; Y_t, X_t) , \\\\\\hat{\\theta }_{{\\rm AH}} (\\alpha ) = _{\\theta \\in ^{q+1}} \\frac{1}{T} \\sum _{t=1}^{T} \\ell _\\tau (S_0(\\hat{\\beta }(\\alpha ) , \\theta ; Y_t, X_t) ) ,\\end{array}\\right.}", 
"\\end{@align}}where $ S(, ; Y , X)$ and $ S0(, ; Y , X)$ are defined in (\\ref {def:rho}), and $$ denotes the Huber loss.$ In the following, we report a Monte Carlo experiment conducted to examine the performance of three ES estimation procedures under serial dependence, which are the joint regression method via FZ loss minimization [22], [50], two-step least squares method and two-step adaptive Huber method.", "The data $\\lbrace (Y_t , Z_t ) \\rbrace _{t=1}^{T+20}$ in this experiment were generated from model (REF ) with $q=p=1$ , $Z_t \\sim _{{\\rm i.i.d.}}", "{\\rm Unif}(0,1)$ and $\\beta _0(u) = F^{-1}(u), \\quad \\beta _1(u) = a_0 + a_1 u , \\quad \\gamma (u) = b_0 + b_1 u ~\\mbox{ for } u \\in (0, 1),$ where $F$ is the CDF of either the standard normal distribution (normal model) or $t_\\nu $ -distribution with $\\nu $ degrees of freedom ($t_\\nu $ model).", "Here we require $a_0, a_1, b_1 >0$ and $a_0 + a_1 \\le 1$ .", "The first 20 observations were discarded to allow for a reasonably long burn-in period.", "At quantile level $\\alpha \\in (0, 1)$ , the correspondent ES regression coefficients in (REF ) are $\\theta _0(\\alpha ) = \\frac{1}{\\alpha } \\int _0^\\alpha F^{-1}(u) {\\rm d}u , \\quad \\theta _1(\\alpha ) = a_0 + 0.5 a_1 \\alpha , \\quad \\gamma (\\alpha ) = b_0 + 0.5 b_1 \\alpha .", "$ Specifically, we set model parameters $(a_0, a_1) = (0.5, 0.5)$ , $(b_0, b_1) = (0.95, 0.5)$ , $\\nu = 3.5$ and consider the quantile level $\\alpha = 0.05$ .", "The sample size $T$ is 1000 in the normal model and 1500 in the $t_{3.5}$ model.", "Figure REF shows the boxplots of squared $\\ell _2$ -errors (over 1000 replications) for three ES regression estimates, two-step least squares (2S-LS), two-step adaptive Huber (2S-AH) and FZ’s joint estimate (FZ), at quantile level 5%.", "The mean squared errors (MSEs) of these three methods are 0.1153, 0.1099 and 0.1227 in the normal model, and 0.9996, 0.6229 and 1.0089 in the $t_{3.5}$ model.", "We also note that the total number of crossings (over $T$ observations and 1000 replications) of the FZ method is 256 in the normal model but increases to 1338 in the $t_{3.5}$ model.", "Without non-crossing constraints, the total numbers of crossings are 22 and 729 for two-step adaptive Huber versus 92 and 1641 for two-step least squares.", "To summarize, the two-step robust method remains to be statistically and numerically preferable for serially dependent data generated from a quantile autoregression process [73].", "Figure: t 3.5 t_{3.5} modelUnder the same settings, Table REF reports the empirical coverage probability and average width (in parenthesis) of the 95% normal-based confidence intervals described in Section REF and Section REF for the two slope coefficients in (REF ).", "Because these methods rely crucially on the asymptotic normality of the estimator when observations are independent, the covariance estimation part suffers from a bias due to serial correlations and leads to undercoverage in confidence intervals.", "To fix this, research into the asymptotic analysis (as $T\\rightarrow \\infty $ with $q$ fixed) of the estimators in () under the QAR framework is underway.", "This could also bring additional interests to practitioners.", "Table: Empirical coverage probability and average width (in parenthesis), based on 1000 replications, of the 95% normal-based confidence intervals () and () for the two slope coefficients in () with α=5%\\alpha =5\\%." 
], [ "Supporting lemmas", "We first introduce some basic notations that will be frequently used in the proof.", "For $i=1,\\ldots , n$ , define the standardized covariates and quantile regression residuals as $W_i = \\Sigma ^{-1/2} X_i ~~\\mbox{ and }~~ \\varepsilon _i = Y_i - X_i^^* , $ respectively.", "For $\\beta \\in ^p$ , define ${\\left\\lbrace \\begin{array}{ll}Z_i(\\beta ) = (Y_i - X_i^) \\mathbb {1}( Y_i \\le X_i^) + \\alpha X_i^, \\ \\ Z_i = Z_i(\\beta ^*), \\\\\\omega _i(\\beta ) = Z_i(\\beta ) - \\alpha X_i^^* , \\ \\ \\omega _i = \\omega _i(\\beta ^*),\\end{array}\\right.", "}$ and for some initial estimator $\\hat{\\beta }$ of $\\beta ^*$ , write $\\hat{Z}_i = Z_i(\\hat{\\beta }) , \\quad \\hat{\\omega }_i = \\omega _i(\\hat{\\beta }) .$ Then, the second-stage robust ES estimator can be equivalently defined as $\\hat{\\theta }_{ \\tau } \\in _{\\theta \\in ^p} \\bigg \\lbrace \\hat{}_\\tau (\\theta ) := \\frac{1}{n} \\sum _{i=1}^n\\ell _\\tau \\big ( \\hat{Z}_i - \\alpha X_i^\\big ) \\bigg \\rbrace , $ which is a Huber's $M$ -estimator with generated response variables.", "Let $\\psi _\\tau (u) = \\ell _\\tau ^{\\prime }(u) = \\min \\lbrace \\max ( - \\tau , u), \\tau \\rbrace $ denote the score function, which is 1-Lipschitz continuous and differentiable except at $\\pm \\tau $ .", "To control the estimation error $\\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma $ , the keys are an upper bound for the $\\ell _2$ -norm of the score $\\nabla \\hat{}_\\tau (\\theta ^* ) = - \\frac{\\alpha }{n} \\sum _{i=1}^n\\psi _\\tau \\big ( \\hat{Z}_i - \\alpha X_i^^* \\big ) X_i= - \\frac{\\alpha }{n} \\sum _{i=1}^n\\psi _\\tau \\big ( Z_i (\\hat{\\beta }) - \\alpha X_i^^* \\big ) X_i ,$ and the restricted strong convexity property of $\\hat{}_\\tau (\\cdot )$ in a neighborhood of $\\theta ^*$ .", "Conditioned on the event $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)\\rbrace $ , we have $\\Vert \\nabla \\hat{}_\\tau (\\theta ^* ) \\Vert _{\\Sigma ^{-1}} & \\le \\alpha \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\bigg \\Vert \\frac{1 }{n} \\sum _{i=1}^n\\psi _\\tau \\big ( \\underbrace{ (Y_i - X_i^) \\mathbb {1}( Y_i \\le X_i^) + \\alpha \\langle X_i, \\beta - \\theta ^* \\rangle }_{= \\, \\omega _i(\\beta ) } \\big ) W_i \\bigg \\Vert _2 \\\\& \\le \\alpha \\underbrace{ \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n(1 - ) \\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta ) \\big ) - \\psi _\\tau (\\omega _i) \\big \\rbrace W_i \\bigg \\Vert _2 }_{\\lesssim _{} \\, r_0 \\sqrt{\\frac{p}{n}} {\\rm ~(Lemma~\\ref {lem:first-order.error})} } \\\\&~~~~~ + \\alpha \\underbrace{ \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\big \\Vert \\psi _\\tau \\big ( \\omega _i(\\beta ) \\big ) W_i \\big \\Vert _2 }_{\\lesssim \\,r_0^2 + r_0/\\tau {\\rm ~(Lemma~\\ref {lem:approximate.neyman})} } + \\, \\alpha \\underbrace{ \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n(1-) \\psi _\\tau (\\omega _i) W_i \\bigg \\Vert _2 }_{\\lesssim _{} \\, \\sqrt{\\frac{p}{n}} + \\frac{\\tau p}{n} {\\rm ~(Lemma~\\ref {lem:score.bound})}} .$ Assume Conditions REF and REF hold.", "Then, for any $r_0>0$ and $t\\ge 1/2$ , $\\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n(1 - ) \\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta ) \\big ) - \\psi _\\tau (\\omega _i) \\big \\rbrace W_i \\bigg \\Vert _2 \\le C_1 \\upsilon _1^2 \\sqrt{\\frac{p+ t}{n}} \\cdot 
r_0$ holds with probability at least $1-e^{-t}$ as long as $n \\ge C_2(p+t )$ , where $C_1, C_2>0$ are absolute constants.", "Assume Conditions REF and REF hold.", "For any $\\tau >0$ and $r_0 >0$ , $\\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\big \\Vert \\lbrace \\psi _\\tau ( \\omega _i(\\beta ) ) W_i \\rbrace \\big \\Vert _2 \\le \\frac{ \\sigma ^2 }{\\tau } + \\frac{1}{2} \\kappa _3 ( f + 1/\\tau ) r_0^2 + \\frac{\\sigma }{ \\tau } r_0 , $ where $\\omega _i(\\beta )$ is given in (REF ).", "Moreover, $\\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\big \\Vert \\lbrace \\psi _\\tau ( \\omega _i(\\beta ) ) W_i \\rbrace \\big \\Vert _2 \\le \\frac{ \\sigma ^2 }{\\tau } + \\frac{\\kappa _3}{2} f r_0^2 + (\\sigma ^2 + \\kappa _3 \\sigma r_0 + \\kappa _4 r_0^2/3) \\frac{r_0}{\\tau ^2} .", "$ Assume Conditions REF and REF hold.", "For any $t > 0$ , it holds with probability at least $1-e^{-t}$ that $\\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n(1 -) \\psi _\\tau (\\omega _i) W_i \\bigg \\Vert _2 \\le C_0 \\upsilon _1 \\bigg ( \\sigma \\sqrt{\\frac{p + t}{n}} + \\tau \\frac{p + t}{n} \\bigg ) ,$ where $C_0 > 0$ is an absolute constant.", "The following lemma provides a form of the restricted strong convexity for the empirical Huber loss $\\hat{}_\\tau (\\cdot )$ with estimated response variables.", "Assume Conditions REF and REF hold.", "For any pair of radii $r \\ge r_0 >0$ , let the robustification parameter $\\tau $ satisfy $\\tau ^2 \\ge 32 \\lbrace \\kappa _4 (r^2 + 4 r_0^2) + \\sigma ^2 \\rbrace $ .", "Then, conditioned on the event $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\rbrace $ , we have that with probability at least $1-e^{-t}$ , $\\langle \\nabla \\hat{}_\\tau (\\theta ) - \\nabla \\hat{}_\\tau (\\theta ^*) , \\theta - \\theta ^* \\rangle \\ge \\frac{\\alpha ^2}{4} \\Vert \\theta - \\theta ^* \\Vert _\\Sigma ^2$ holds uniformly over $\\theta \\in \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ as long as $n\\gtrsim (\\tau / r)^2(p+t)$ .", "Assume Conditions REF and REF hold.", "Then, for any $r >0$ and $t\\ge 1/2$ , $\\sup _{ \\theta \\in \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau ( Z_i - \\alpha X_i^) & - \\psi _\\tau (Z_i - \\alpha X_i^^*) \\big \\rbrace W_i + \\alpha \\Sigma ^{1/2} (\\theta - \\theta ^*) \\bigg \\Vert _2 \\\\& \\le C_3 \\upsilon _1^2 \\sqrt{\\frac{p+ t}{n}} \\cdot r + ( \\sigma ^2 + \\kappa _4 r^2 /3 ) \\frac{r}{\\tau ^2}$ holds with probability at least $1-e^{-t}$ as long as $n \\ge C_4(p+t )$ , where $C_3, C_4>0$ are absolute constants.", "For every $\\gamma \\in ^p$ , define the quantile loss difference $\\hat{D}(\\gamma ) = \\hat{}(\\beta ^* + \\gamma ) - \\hat{}(\\beta ^*)$ and its population counterpart $D(\\gamma ) = \\hat{D}(\\gamma )$ , where $\\hat{}(\\beta ) = (1/n) \\sum _{i=1}^n\\rho _\\alpha (Y_i - X_i^)$ .", "Assume Condition REF holds.", "For any $r>0$ and $t\\ge 0$ , the bound $\\sup _{\\gamma \\in \\mathbb {B}_\\Sigma (r) } \\lbrace D(\\gamma ) - \\hat{D}(\\gamma ) \\rbrace \\le C \\upsilon _1 \\sqrt{\\frac{p+t}{n}} \\cdot r $ holds with probability at least $1- e^{-t}$ , where $C>0$ is an absolute constant.", "To prove Theorem REF , the following convexity lemma (see Lemma C.1 in [80]) will be needed.", "We reproduce it here for the sake of readability.", "Let $f:^p\\rightarrow $ be a differentiable convex function, and define the corresponding symmetrized Bregman divergence $D_f(\\beta _1, \\beta _2) = \\langle 
\\nabla f(\\beta _2) - \\nabla f(\\beta _1), \\beta _2 - \\beta _1 \\rangle $ for $\\beta _1, \\beta _2 \\in ^p$ .", "Then, for any $\\beta , \\delta \\in ^p$ and $\\lambda \\in [0, 1]$ , $D_f( \\beta _\\lambda , \\beta ) \\le \\lambda \\cdot D_f(\\beta _1 , \\beta )$ , where $\\beta _\\lambda =\\beta + \\lambda \\delta $ and $\\beta _1 = \\beta + \\delta $ ." ], [ "Proof of Proposition ", "The proof is adapted from that of Proposition 6.2 in [67] with certain adjustments.", "Given $\\eta >0$ and $n\\ge 1$ , let $Y$ follow a discrete distribution with support $\\lbrace - n \\eta , 0, n \\eta \\rbrace $ , satisfying $(Y = n \\eta ) = (Y = - n \\eta ) = \\frac{\\sigma ^2}{2 n^2 \\eta ^2 }.$ It is easy to see that $(Y)=0$ and $(Y^2) = \\sigma ^2$ .", "Provided $n \\eta > \\sigma / \\sqrt{2\\alpha } $ , we have $Q_\\alpha (Y) = 0$ .", "Let $Z_1, \\ldots , Z_n$ be independent copies of $Z= Y \\mathbb {1}\\lbrace Y \\le Q_\\alpha (Y) \\rbrace = Y \\mathbb {1}( Y \\le 0 )$ with mean $(Z) = - n \\eta \\cdot \\frac{\\sigma ^2}{2 n^2 \\eta ^2} = -\\frac{\\sigma ^2 }{2 n \\eta } .$ Then we have $& \\bigg \\lbrace \\frac{1}{n}\\sum _{i=1}^n(Z_i - Z_i) \\le - \\eta + \\frac{\\sigma ^2}{2 n \\eta } \\bigg \\rbrace \\\\& =\\bigg ( \\frac{1}{n}\\sum _{i=1}^nZ_i \\le - \\eta \\bigg ) \\ge \\bigg \\lbrace \\sum _{i=1}^nY_i \\mathbb {1}(Y_i \\le 0) = - n \\eta \\bigg \\rbrace \\\\& = n \\cdot \\frac{\\sigma ^2}{2 n^2 \\eta ^2} \\bigg ( 1- \\frac{ \\sigma ^2}{2 n^2 \\eta ^2} \\bigg )^{n-1} = \\frac{\\sigma ^2 }{2 n \\eta ^2 } \\bigg ( 1- \\frac{ \\sigma ^2}{2 n^2 \\eta ^2} \\bigg )^{n-1}.$ Taking $\\eta =c_n \\sigma /\\sqrt{2n\\delta }$ with $c_n = (1- e \\delta /n)^{(n-1)/2 }$ and $0<\\delta \\le e^{-1}$ , we obtain that $\\frac{1}{n}\\sum _{i=1}^n(Z_i - Z_i) \\le - \\frac{ c_n \\sigma }{ \\sqrt{2n\\delta }} + \\frac{ \\sigma }{ c_n } \\sqrt{\\frac{\\delta }{2n}}$ holds with probability at least $\\delta $ .", "Note that $c_n\\in (e^{-1/2} , 1]$ for all $n\\ge 1$ , the right-hand side is further bounded from above by $- \\sigma \\sqrt{\\frac{1}{2e n\\delta }} +\\sigma \\sqrt{\\frac{e \\delta }{2n}} = -\\frac{\\sigma (1- e\\delta )}{\\sqrt{2 e n \\delta }} .$ This proves the claimed bound.", "$\\Box $" ], [ "Proof of Proposition ", "To begin with, define the ES response variable $Z= \\varepsilon \\mathbb {1}( \\varepsilon \\le 0) + \\alpha X^^*$ and residual $\\omega = Z - \\alpha X^^*$ , where $ \\varepsilon = Y - X^^*$ satisfies $_X\\lbrace \\varepsilon \\mathbb {1}( \\varepsilon \\le 0) \\rbrace = \\alpha X^\\theta ^* - \\beta ^*)$ .", "Moreover, define the bias vector $h_\\tau = \\theta _\\tau ^* - \\theta ^*$ , and population loss functions $_\\tau (\\theta ) = \\ell _\\tau ( Z - \\alpha X^)$ and $(\\theta ) = \\frac{1}{2} ( Z - \\alpha X^)^2$ .", "Using the optimality of $\\theta ^*_\\tau $ and the mean value theorem, we have $\\nabla _\\tau (\\theta ^*_\\tau )= 0$ and $\\int _0^1 \\langle h_\\tau , \\nabla ^2 _\\tau ( \\theta ^* + t h_\\tau ) h_\\tau \\rangle {\\rm d} t & = \\langle \\nabla _\\tau (\\theta ^*_\\tau ) - \\nabla _\\tau (\\theta ^*) , \\theta ^*_\\tau - \\theta ^* \\rangle \\nonumber \\\\&= - h_\\tau ^_\\tau (\\theta ^*) = \\alpha \\lbrace \\psi _\\tau (\\omega ) X^h̰_\\tau \\rbrace .$ Since $(\\omega | X) =0$ , we have $ _X (\\omega ) - _X \\lbrace \\psi _\\tau (\\omega ) \\rbrace = _X [ \\lbrace \\omega - \\tau (\\omega ) \\rbrace \\mathbb {1}( |\\omega | > \\tau )]$ .", "This together with the fact $_X(\\omega ^2) = \\textnormal {var}_X( \\varepsilon \\mathbb {1}( \\varepsilon \\le 
0))$ implies $| \\lbrace \\psi _\\tau (\\omega ) X^h̰_\\tau \\rbrace | \\le \\frac{1}{\\tau } ( \\omega ^2 | X^h̰_\\tau | ) \\le \\frac{\\sigma ^2}{\\tau } | X^h̰_\\tau | \\le \\frac{\\sigma ^2}{\\tau } \\Vert h_\\tau \\Vert _\\Sigma .", "$ Throughout the proof, we assume $\\tau \\ge 2 \\kappa _4^{1/4}\\sigma \\ge 2 \\sigma $ .", "For the left-hand side of (REF ), write $\\omega (t)= Z - \\alpha X^\\theta ^* + t h_\\tau ) = \\omega - t \\alpha X^h̰_\\tau $ and note that $\\langle h_\\tau , \\nabla ^2 _\\tau ( \\theta ^* + t h_\\tau ) h_\\tau \\rangle = \\alpha ^2 \\lbrace \\mathbb {1} ( |\\omega (t) | \\le \\tau ) (X^h̰_\\tau )^2 \\rbrace = \\alpha ^2 \\Vert h_\\tau \\Vert _\\Sigma ^2 - \\alpha ^2 \\lbrace \\mathbb {1} ( | \\omega (t) | > \\tau ) (X^h̰_\\tau )^2 \\rbrace .$ By Markov's inequality, $& \\lbrace \\mathbb {1} ( | \\omega (t) | > \\tau ) (X^h̰_\\tau )^2 \\rbrace \\le \\frac{1}{\\tau ^2} (\\omega - t \\alpha X^h̰_\\tau )^2(X^h̰_\\tau )^2 \\\\& \\le \\frac{\\sigma ^2}{\\tau ^2} \\Vert h_\\tau \\Vert _\\Sigma ^2 + \\frac{(\\alpha t)^2}{\\tau ^2} (X^h̰_\\tau )^4 \\le \\frac{\\sigma ^2}{\\tau ^2} \\Vert h_\\tau \\Vert _\\Sigma ^2 + \\frac{ \\kappa _4}{\\tau ^2} (\\alpha t)^2 \\Vert h_\\tau \\Vert _\\Sigma ^4.$ Putting these two observations together, we obtain that $\\int _0^1 \\langle h_\\tau , \\nabla ^2 _\\tau ( \\theta ^* + t h_\\tau ) h_\\tau \\rangle {\\rm d} t \\ge \\alpha ^2 \\Vert h_\\tau \\Vert _\\Sigma ^2 \\cdot \\bigg ( \\frac{3}{4} - \\frac{ \\kappa _4}{3 \\tau ^2} \\alpha ^2 \\Vert h_\\tau \\Vert _\\Sigma ^2 \\bigg ).$ Now set $r_\\tau = \\Vert h_\\tau \\Vert _\\Sigma $ .", "Then, combining the above inequality with (REF ) and (REF ) yields $\\alpha r_\\tau \\bigg ( \\frac{3}{4} - \\frac{ \\kappa _4}{3 \\tau ^2} \\alpha ^2 r_\\tau ^2 \\bigg ) \\le \\frac{\\sigma ^2}{\\tau }.", "$ Assume at the moment that $\\alpha r_\\tau \\le r^* := \\sqrt{3/(4 \\kappa _4)} \\cdot \\tau $ .", "Hence, it follows immediately from (REF ) that $\\alpha r_\\tau \\le 2 \\sigma ^2 / \\tau $ , as claimed.", "It remains to show that $\\alpha r_\\tau > r^*$ cannot be the case.", "Otherwise, if $\\theta ^*_\\tau $ satisfies $r_\\tau = \\Vert \\theta ^*_\\tau - \\theta ^*\\Vert _\\Sigma > r^*/\\alpha $ , then let $\\lambda = r^*/(\\alpha r_\\tau ) \\in (0, 1)$ and $\\widetilde{\\theta }_\\tau = (1-\\lambda ) \\theta ^* + \\lambda \\theta ^*_\\tau $ , so that $\\widetilde{r}_\\tau := \\Vert \\widetilde{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma = \\lambda r_\\tau = r^*/\\alpha $ .", "By Lemma REF , $\\langle \\nabla _\\tau (\\widetilde{\\theta }_\\tau ) - \\nabla _\\tau (\\theta ^*) , \\widetilde{\\theta }_\\tau - \\theta ^* \\rangle & \\le \\lambda \\cdot \\langle \\nabla _\\tau (\\theta ^*_\\tau ) - \\nabla _\\tau (\\theta ^*) , \\theta ^*_\\tau - \\theta ^* \\rangle \\\\& = \\langle -\\nabla _\\tau (\\theta ^*) , \\widetilde{\\theta }_\\tau - \\theta ^* \\rangle \\le \\frac{\\sigma ^2}{\\tau } \\alpha \\widetilde{r}_\\tau ,$ where the last inequality follows from (REF ).", "Arguing as above, it can be similarly shown that $\\langle \\nabla _\\tau (\\widetilde{\\theta }_\\tau ) - \\nabla _\\tau (\\theta ^*) , \\widetilde{\\theta }_\\tau - \\theta ^* \\rangle \\ge ( \\alpha \\widetilde{r}_\\tau )^2 \\cdot \\bigg ( \\frac{3}{4} - \\frac{\\kappa _4}{3\\tau ^2} ( \\alpha \\widetilde{r}_\\tau )^2 \\bigg ) = \\frac{1}{2}( \\alpha \\widetilde{r}_\\tau )^2 .$ Together, the above upper and lower bounds imply $\\alpha \\widetilde{r}_\\tau \\le 2\\sigma ^2 / \\tau $ .", "This contradicts the assumption 
$\\tau \\ge 2 \\kappa _4^{1/4}\\sigma $ , thus completing the proof of the proposition.", "$\\Box $" ], [ "Proof of Proposition ", "Let $\\hat{}(\\beta ) = (1/n) \\sum _{i=1}^n\\rho _\\alpha (Y_i - X_i^)$ and $(\\beta ) = \\hat{}(\\beta )$ be the sample and population quantile loss functions.", "For every $\\gamma \\in ^p$ , define $\\hat{D}(\\gamma ) = \\hat{}(\\beta ^* + \\gamma ) - \\hat{}(\\beta ^*), \\quad D(\\gamma ) = (\\beta ^* + \\gamma ) - (\\beta ^*) , \\quad R(\\gamma ) = D(\\gamma ) - \\langle \\nabla (\\beta ^*) , \\gamma \\rangle ,$ and note that $\\nabla (\\beta ) = \\lbrace F_{Y_i|X_i}(X_i^) - \\alpha \\rbrace X_i$ .", "Since $\\nabla (\\beta ^*)=0$ , we have $\\hat{D}(\\gamma ) = \\langle \\nabla (\\beta ^*) , \\gamma \\rangle + R(\\gamma ) - \\lbrace D(\\gamma ) - \\hat{D}(\\gamma ) \\rbrace = R(\\gamma ) - \\lbrace D(\\gamma ) - \\hat{D}(\\gamma ) \\rbrace .", "$ To bound $R(\\gamma )= (\\beta ^* + \\gamma ) - (\\beta ^*)$ , note that the population Hessian $\\nabla ^2 (\\beta )= \\lbrace f_{\\varepsilon _i |X_i} ( \\langle X_i, \\beta - \\beta ^* \\rangle ) X_i X_i^$ satisfies for any $\\gamma \\in ^p$ and $t\\in [0, 1]$ that $& \\langle \\gamma , \\nabla ^2 (\\beta ^* + t \\gamma ) \\gamma \\rangle = \\big \\lbrace f_{\\varepsilon _i |X_i} ( t X_i^) (X_i^)^2 \\big \\rbrace \\\\& = \\big \\lbrace f_{\\varepsilon _i |X_i} (0) (X_i^)^2 \\big \\rbrace + \\big \\lbrace f_{\\varepsilon _i |X_i} ( t X_i^) - f_{\\varepsilon _i |X_i}(0) \\big \\rbrace (X_i^)^2 \\\\& \\ge {f} \\, \\Vert \\gamma \\Vert _\\Sigma ^2 - l_0 t \\cdot | X_i^|^3 \\ge {f}\\, \\Vert \\gamma \\Vert _\\Sigma ^2 - l_0 \\kappa _3 t \\cdot \\Vert \\gamma \\Vert _\\Sigma ^3,$ where we used the Lipschitz continuity of $f_{\\varepsilon | X}(\\cdot )$ in the first inequality.", "This, together with the fundamental theorem of calculus, implies $R(\\gamma ) & = (\\beta ^* + \\gamma ) - (\\beta ^*) = \\int _0^1 \\langle \\nabla (\\beta ^* + s\\gamma ) - \\nabla (\\beta ^*) , \\gamma \\rangle {\\rm d} s \\\\& = \\int _0^1 \\int _0^1 s \\langle \\gamma , \\nabla ^2 (\\beta ^* + t s \\gamma ) \\gamma \\rangle {\\rm d} s {\\rm d} t \\ge \\frac{1}{2}{f}\\,\\Vert \\gamma \\Vert _\\Sigma ^2 - \\frac{1}{6} l_0 \\kappa _3 \\Vert \\gamma \\Vert _\\Sigma ^3 .$ For some $r_0>0$ to be determined, Lemma REF states that, with probability at least $1-e^{-t}$ , $\\sup _{\\gamma \\in \\mathbb {B}_\\Sigma (r_0) } \\lbrace D(\\gamma ) - \\hat{D}(\\gamma ) \\rbrace \\le C \\upsilon _1 r_0 \\sqrt{\\frac{p+t}{n}} .$ Together, the previous two inequalities and (REF ) show that, for any $\\gamma \\in \\partial \\mathbb {B}_\\Sigma (r_0)$ , i.e.", "$\\Vert \\gamma \\Vert _\\Sigma = r_0$ , $\\hat{D}(\\gamma ) \\ge \\frac{r_0}{2} \\Bigg ( {f} r_0 -\\frac{1}{3} l_0 \\kappa _3 r_0^2 - 2 C \\upsilon _1 \\sqrt{\\frac{p+t}{n}} \\Bigg ) .", "$ In view of (REF ), we choose $r_0 = 4 C \\upsilon _1 {f}^{-1} \\sqrt{(p+t)/n}$ and let the sample size $n$ satisfy $ {f}^2 >\\frac{8}{3}C l_0 \\kappa _3\\upsilon _1 \\sqrt{(p+t)/n}$ .", "Then, with probability at least $1-e^{-t}$ , $\\hat{D}(\\gamma ) >0$ for all $\\gamma \\in \\partial \\mathbb {B}_\\Sigma (r_0)$ .", "On the other hand, let $\\hat{\\gamma }= \\hat{\\beta }- \\beta ^*$ .", "Hence, $\\hat{D}(\\hat{\\gamma }) \\le 0$ by the optimality of $\\hat{\\beta }$ .", "Finally, using Lemma 9.21 in [83] and the convexity of $\\hat{}(\\cdot )$ , we have $\\hat{\\gamma }\\in \\mathbb {B}_\\Sigma (r_0)$ , proving the claim.", "$\\Box $" ], [ "Proof of Theorem ", "To begin with, note that the two-step ES 
regression estimator $\\hat{\\theta }$ defined in (REF ) satisfies the first-order condition $0 = \\frac{1}{n} \\sum _{i=1}^nS_i( \\hat{\\beta }, \\hat{\\theta }) X_i = \\frac{\\alpha }{n} \\sum _{i=1}^nX_i X_i^\\hat{\\theta }- \\theta ^*) + \\frac{1}{n} \\sum _{i=1}^nS_i( \\hat{\\beta }, \\theta ^* ) X_i ,$ which implies $\\hat{\\theta }- \\theta ^* = \\bigg ( \\frac{\\alpha }{n} \\sum _{i=1}^nX_i X_i^)^{-1} \\frac{-1}{n}\\sum _{i=1}^nS_i( \\hat{\\beta }, \\theta ^* ) X_i = \\bigg ( \\frac{\\alpha }{n} \\sum _{i=1}^nX_i X_i^)^{-1} \\frac{1}{n}\\sum _{i=1}^n\\omega _i( \\hat{\\beta }) X_i , $ where $\\omega _i(\\cdot )$ is defined in (REF ).", "Recall that $W_i = \\Sigma ^{-1/2} X_i$ and $(W_i W_i^ = {\\rm I}_p$ .", "Then, it follows from (REF ) that $\\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma = \\Bigg \\Vert \\bigg ( \\frac{\\alpha }{n} \\sum _{i=1}^nW_i W_i^)^{-1} \\frac{1}{n}\\sum _{i=1}^n\\omega _i( \\hat{\\beta }) W_i \\Bigg \\Vert _2 \\le \\frac{\\Vert (\\alpha n )^{-1} \\sum _{i=1}^n\\omega _i( \\hat{\\beta }) W_i \\Vert _2 }{\\lambda _{\\min }((1/n) \\sum _{i=1}^nW_i W_i^} .", "$ To bound $\\lambda _{\\min }( (1/n) \\sum _{i=1}^nW_i W_i^$ from below, using Theorem 1.1 in [77] we obtain that for a sufficiently large sample size $n\\ge C_0 \\kappa _4 \\lbrace p + 2\\log (2/\\xi )\\rbrace $ , $\\Bigg \\lbrace \\lambda _{\\min }\\bigg ( \\frac{1}{n} \\sum _{i=1}^nW_i W_i^) \\ge \\frac{1}{2} \\Bigg \\rbrace \\ge 1- \\xi .", "$ To upper bound $\\Vert (1/n) \\sum _{i=1}^n\\omega _i( \\hat{\\beta }) W_i \\Vert _2$ conditioned on the event $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)\\rbrace $ , consider the decomposition $\\frac{1}{n}\\sum _{i=1}^n\\omega _i( \\hat{\\beta }) W_i = \\frac{1}{n}\\sum _{i=1}^n(1-) \\lbrace \\omega _i( \\hat{\\beta }) - \\omega _i \\rbrace W_i + \\lbrace \\omega _i(\\beta ) W_i \\rbrace \\big |_{\\beta =\\hat{\\beta }} + \\frac{1}{n} \\sum _{i=1}^n\\omega _i W_i ,$ where $\\omega _i = \\omega _i(\\beta ^*)$ satisfying $(\\omega _i | X_i)=0$ .", "It follows that $\\bigg \\Vert \\frac{1}{n}\\sum _{i=1}^n\\omega _i( \\hat{\\beta }) W_i \\bigg \\Vert _2 & \\le \\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)} \\Bigg \\Vert \\frac{1}{n}\\sum _{i=1}^n(1-) \\lbrace \\omega _i( \\beta ) - \\omega _i \\rbrace W_i \\Bigg \\Vert _2 \\nonumber \\\\& ~~~~~~ + \\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)} \\big \\Vert \\lbrace \\omega _i(\\beta ) W_i \\rbrace \\big \\Vert _2 + \\Bigg \\Vert \\frac{1}{n}\\sum _{i=1}^n\\omega _i W_i \\Bigg \\Vert _2 \\nonumber \\\\&= : \\Xi _1 + \\Xi _2 + \\Xi _3 .", "$ From the proof of Lemma REF we see that the stated bound remains valid if $\\tau = \\infty $ for which $\\psi _\\infty (t) = t$ .", "It follows that with probability at least $1- \\xi $ , $\\Xi _1 \\le C_1 \\upsilon _1^2 r_0 \\sqrt{\\frac{p+ \\log (1/\\xi )}{n}} .$ For $\\Xi _2 = \\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)} \\Vert \\lbrace \\omega _i(\\beta ) W_i \\rbrace \\Vert _2$ , following the proof of Lemma REF it can be similarly shown that $\\Xi _2 \\le f\\kappa _3 r_0^2 /2$ .", "Turning to $\\Pi _3$ , note that $\\Xi _3^2 = (1/n^2) \\sum _{i=1}^n( \\omega _i^2 \\Vert W_i \\Vert _2^2 ) \\le \\sigma ^2 p /n$ .", "By Markov's inequality, we have that for any $\\xi \\in (0, 1)$ , $\\Bigg (\\Xi _3 \\ge \\sigma \\sqrt{\\frac{ p }{n \\xi }} \\Bigg ) \\le \\frac{\\Xi _3^2}{ \\sigma ^2 p / (n \\xi )} \\le \\xi .$ Substituting these bounds into (REF ), we find that with probability at least $1-2\\xi $ , $\\bigg 
\\Vert \\frac{1}{n}\\sum _{i=1}^n\\omega _i( \\hat{\\beta }) W_i \\bigg \\Vert _2 \\le \\sigma \\sqrt{\\frac{ p}{n \\xi }} + \\frac{1}{2} f \\kappa _3 r_0^2 + C_1 \\upsilon _1^2 \\sqrt{\\frac{p + \\log (1/\\xi ) }{n}} \\cdot r_0 .$ Combining this with (REF ) and (REF ) proves the claim (REF ).", "$\\Box $" ], [ "Proof of Theorem ", "Step 1.", "A high-level Gaussian approximation result for $\\max _{1\\le j\\le p} |\\hat{\\theta }_j - \\theta ^*_j|$.", "From (REF ) we obtain $\\Sigma ^{1/2} \\, \\alpha (\\hat{\\theta }- \\theta ^* ) - \\frac{1}{n} \\sum _{i=1}^n\\omega _i W_i= \\frac{1}{n} \\sum _{i=1}^n\\lbrace \\omega _i(\\hat{\\beta }) - \\omega _i \\rbrace W_i - ( \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p ) \\Sigma ^{1/2} \\alpha ( \\hat{\\theta }- \\theta ^* ) ,$ and hence $& \\bigg \\Vert \\alpha (\\hat{\\theta }- \\theta ^*) - \\frac{1}{n} \\sum _{i=1}^n\\omega _i \\Sigma ^{-1} X_i \\bigg \\Vert _\\Sigma \\\\& \\le \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\lbrace \\omega _i(\\hat{\\beta }) - \\omega _i \\rbrace W_i \\bigg \\Vert _2 + \\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2 \\cdot \\alpha \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma .$ By condition (REF ), $\\Omega = ( \\omega ^2 X X^ \\succeq {\\sigma }^2 \\Sigma $ .", "Thus, $\\Vert u \\Vert _{\\Sigma \\Omega ^{-1} \\Sigma } = \\sqrt{ u^\\Omega ^{-1} \\Sigma u } \\le {\\sigma }^{-1} \\Vert u \\Vert _\\Sigma $ for any $u\\in ^p$ .", "Moreover, observe that for any invertible matrix $A \\in ^{p\\times p}$ and $u\\in ^p$ , $\\Vert u \\Vert _A = \\sqrt{u^A̰ u } = \\max _{v \\in \\mathbb {S}^{p-1} } \\frac{|u^v̰| }{\\sqrt{v^A̰^{-1} v}} \\ge \\max _{1\\le j\\le p} \\frac{| u_j | }{ { \\sqrt{ (A^{-1})_{jj} }}}.$ Putting together the pieces, we conclude that $& \\max _{1\\le j\\le p} \\Bigg | \\frac{ \\alpha ( \\hat{\\theta }_j - \\theta _j^* ) - n^{-1} \\sum _{i=1}^n\\psi _{ij} }{ \\sqrt{(\\Sigma ^{-1} \\Omega \\Sigma ^{-1} )_{jj}} } \\Bigg | \\nonumber \\\\& \\le \\bigg \\Vert \\frac{1}{ {\\sigma } n } \\sum _{i=1}^n\\lbrace \\omega _i(\\hat{\\beta }) - \\omega _i \\rbrace W_i \\bigg \\Vert _2 + \\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2 \\cdot \\frac{\\alpha }{ {\\sigma }} \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma , $ where $\\psi _{ij} := \\omega _i ( \\Sigma ^{-1} X_i )_j$ satisfies $(\\psi _{ij} )=0$ and $(\\psi _{ij}^2) = (\\Sigma ^{-1} \\Omega \\Sigma ^{-1})_{jj}$ .", "Given two sequences $\\eta _{1n}, \\eta _{2n} >0$ , define the events $_{1n} & = \\Bigg \\lbrace \\bigg \\Vert \\frac{1}{ n } \\sum _{i=1}^n\\lbrace \\omega _i(\\hat{\\beta }) - \\omega _i \\rbrace W_i \\bigg \\Vert _2 \\le \\eta _{1n} \\Bigg \\rbrace , \\\\_{2n} & = \\big \\lbrace \\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2 \\cdot \\alpha \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma \\le \\eta _{2n} \\big \\rbrace .$ Moreover, let $G=(G_1, \\ldots , G_p)^ be a centered Gaussian vector with covariance matrix$ (G) = Corr( -1 X )$.", "Define the Gaussian approximation error (under the Kolmogorov distance) for the maximum statistic $ 1jp |n-1/2 i=1nij | / (-1 -1 )jj $ as{\\begin{@align}{1}{-1}\\Delta _{n, p} : = \\sup _{t\\ge 0} \\Bigg | \\bigg ( \\max _{1\\le j\\le p} \\bigg | \\frac{n^{-1/2} \\sum _{i=1}^n\\psi _{ij}}{ \\sqrt{(\\Sigma ^{-1} \\Omega \\Sigma ^{-1} )_{jj} } } \\bigg | \\le t \\bigg ) - \\bigg ( \\max _{1\\le j\\le p} |G_j| \\le t \\bigg ) \\Bigg | .", "\\end{@align}}For the Gaussian maximum $ 1jp |Gj| = 1jp (Gj, 
-Gj)$, it follows from Nazarov^{\\prime }s inequality \\cite {N2003} that for any $ >0$,{\\begin{@align}{1}{-1}\\sup _{t\\ge 0, \\,\\epsilon >0 } \\frac{1}{\\epsilon } \\bigg ( t-\\epsilon \\le \\max _{1\\le j\\le p} |G_j| \\le t+\\epsilon \\bigg ) \\le \\sqrt{2 \\log (2p)} + 2.", "\\end{@align}}Together, inequalities (\\ref {distribution.approximation.1})--(\\ref {distribution.approximation.3}) imply{\\begin{@align}{1}{-1}& \\sup _{t\\ge 0} \\Bigg | \\bigg ( \\max _{1\\le j\\le p} \\bigg | \\frac{ \\alpha \\sqrt{n} ( \\hat{\\theta }_j - \\theta _j^* ) }{ \\sqrt{(\\Sigma ^{-1} \\Omega \\Sigma ^{-1} )_{jj}} } \\bigg | \\le t \\bigg ) - \\bigg ( \\max _{1\\le j\\le p} |G_j| \\le t \\bigg ) \\Bigg | \\nonumber \\\\& \\le \\Delta _{n, p} + \\big \\lbrace \\sqrt{2 \\log (2p)} + 2 \\big \\rbrace \\frac{\\sqrt{n}}{{\\sigma } } (\\eta _{1n} + \\eta _{2n} ) + (_{1n}^{\\rm c} ) + (_{2 n}^{\\rm c} ) .", "\\end{@align}}$ Step 2.", "Control the Gaussian approximation error $\\Delta _{n, p}$ .", "By Condition REF and (REF ), $\\rho _0 = \\lambda _{\\min }\\big ( {\\rm Corr}( \\omega \\Sigma ^{-1} X ) \\big ) \\ge \\frac{ \\lambda _{\\min }(\\Sigma ^{-1} \\Omega \\Sigma ^{-1} ) }{\\max _{1\\le j\\le p} (\\Sigma ^{-1} \\Omega \\Sigma ^{-1} )_{jj} } \\ge ( {\\sigma } / \\sigma )^2 \\frac{ \\lambda _{\\min }(\\Sigma ^{-1} )}{\\max _{1\\le j\\le p} (\\Sigma ^{-1} )_{jj}} .$ Applying Theorem 1 of [74] yields the following Berry-Esseen-type bound $\\Delta _{n, p} \\le C \\frac{ \\rho _0^{-3/2} (\\log n)^{1/2} \\log p + \\rho _0^{-1} (\\log n)^{3/2} (\\log p)^2}{\\sqrt{n}} \\bigg \\lbrace \\max _{1\\le j\\le p} |\\psi _{ij} /\\varrho _j |^3 \\bigg \\rbrace , $ where $C>1$ is a universal constant and $\\varrho ^2_j := (\\Sigma ^{-1} \\Omega \\Sigma ^{-1} )_{jj}$ .", "Noting that $\\omega _i = \\varepsilon _{i, -} - _{X_i} ( \\varepsilon _{i, -})$ with $ \\varepsilon _{i, -} = \\min \\lbrace \\varepsilon _i, 0 \\rbrace \\le 0$ , we have $|\\omega _i|^3 \\le \\max \\lbrace | \\varepsilon _{i, -} |^3, | _{X_i} \\varepsilon _{i, -} |^3 \\rbrace \\le | \\varepsilon _{i, -} |^3 + _{X_i} |\\varepsilon _{i, -}|^3$ .", "From condition (REF ) it follows that $& \\big \\lbrace \\max \\nolimits _{1\\le j\\le p} |\\psi _{ij} /\\varrho _j |^3 \\big \\rbrace \\nonumber \\\\& \\le \\big \\lbrace | \\varepsilon _{i,-} |^3 \\max \\nolimits _{1\\le j\\le p} |(\\Sigma ^{-1} X_i)_j / \\varrho _j |^3 \\big \\rbrace + \\big \\lbrace _{X_i} |\\varepsilon _{i, -} |^3 \\max \\nolimits _{1\\le j\\le p} |(\\Sigma ^{-1} X_i)_j / \\varrho _j |^3 \\big \\rbrace \\nonumber \\\\& \\le 2 \\alpha _3 \\big \\lbrace \\max \\nolimits _{1\\le j\\le p} |(\\Sigma ^{-1} X_i)_j / \\varrho _j |^3 \\big \\rbrace .", "$ For $p$ arbitrary non-negative random variables $U_1, \\ldots , U_p$ , it holds for any $q \\ge 1$ that $\\big ( \\max \\nolimits _{1\\le j\\le p} U_j \\big ) \\le \\big \\lbrace \\big ( \\max \\nolimits _{1\\le j\\le p} U_j^q \\big ) \\big \\rbrace ^{1/q} \\le p^{1/q} \\max \\nolimits _{1\\le j\\le p} ( U_j^q )^{1/q} ,$ provided $( U_j^q)$ 's are finite.", "Applying this moment inequality with $U_j = |(\\Sigma ^{-1} X_i)_j / \\varrho _j |^3$ and $q = 4/3$ , we obtain $\\big \\lbrace \\max \\nolimits _{1\\le j\\le p} |(\\Sigma ^{-1} X_i)_j / \\varrho _j |^3 \\big \\rbrace \\le p^{3/4} \\max \\nolimits _{1\\le j\\le p} \\big \\lbrace |(\\Sigma ^{-1} X_i)_j / \\varrho _j |^4 \\big \\rbrace ^{3/4} .$ For each $1\\le j\\le p$ , let ${\\rm e}_j = (0, \\ldots , 1,\\ldots , 0)^^p$ be the canonical basis vector whose $j$ -th entry equals 1 and 
remaining entries are 0, such that $(\\Sigma ^{-1} X_i)_j = (\\Sigma ^{-1/2} {\\rm e}_j)^W̰_i $ .", "Then, Condition REF ensures that $(\\Sigma ^{-1} X_i)_j^4 = \\lbrace (\\Sigma ^{-1/2} {\\rm e}_j)^W̰_i \\rbrace ^4 \\le \\kappa _4 \\Vert \\Sigma ^{-1/2} e_j \\Vert _2^4$ .", "On the other hand, using (REF ) we have $\\varrho _j^2 = {\\rm e}_j^^{-1}\\Omega \\Sigma ^{-1} {\\rm e}_j \\ge {\\sigma }^2 {\\rm e}_j^^{-1} {\\rm e}_j = {\\sigma }^2 \\Vert \\Sigma ^{-1/2} {\\rm e}_j \\Vert _2^2$ .", "Substituting these bounds into the above inequality yields $\\big \\lbrace \\max \\nolimits _{1\\le j\\le p} |(\\Sigma ^{-1} X_i)_j / \\varrho _j |^3 \\big \\rbrace \\le p^{3/4} ( \\kappa _4 / {\\sigma }^4 )^{3/4} = p^{3/4} \\kappa _4^{3/4} / {\\sigma }^3 .$ Combining with the earlier bounds (REF ) and (REF ), we conclude that $\\Delta _{n, p} \\le 2 C\\kappa _4^{3/4 } \\frac{ \\alpha _3}{{\\sigma }^3} \\lbrace \\rho _0^{-3/2} (\\log n)^{1/2} \\log p + \\rho _0^{-1} (\\log n)^{3/2} (\\log p)^2 \\rbrace \\frac{p^{3/4}}{\\sqrt{n}} .", "$ Step 3.", "Control the events $_{1n}$ and $_{2n}$ .", "For any $\\xi \\in (0, 1)$ , recall from the proof of Theorem REF that conditioned on $ _{1n} $ , $\\alpha \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma \\le \\frac{1}{ \\lambda _{\\min }( \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} ) } \\Bigg ( \\eta _{1n} + \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\omega _i W_i \\bigg \\Vert _2 \\Bigg ) .$ Under the moment condition (REF ), applying Theorem 3.1 in [70] to $\\Vert \\sum _{i=1}^n\\omega _i W_i \\Vert _2$ , we obtain that for any $t>0$ , $\\Bigg ( \\bigg \\Vert \\sum _{i=1}^n\\omega _i W_i \\bigg \\Vert _2 \\ge 2 \\bigg \\Vert \\sum _{i=1}^n\\omega _i W_i \\bigg \\Vert _2 + t \\Bigg ) \\le e^{-t^2/(3 n \\upsilon ) } + C_0 n \\frac{ \\Vert \\omega _i W_i \\Vert _2^3}{t^3} ,$ where $\\upsilon = \\sup _{u\\in \\mathbb {S}^{p-1}} (\\omega _i W_i^ṵ)^2 \\le \\sigma ^2$ and $C_0>0$ is an absolute constant.", "Similar to (REF ), we bound the third moment as $\\Vert \\omega _i W_i \\Vert _2^3 \\le 2 \\alpha _3 \\Vert W_i \\Vert _2^3 \\le 2 \\kappa _3 \\alpha _3 \\cdot p^{3/2} $ .", "Re-organizing the constants, it follows that with probability $1-2\\xi $ , $\\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\omega _i W_i \\bigg \\Vert _2 \\le 2 \\sigma \\sqrt{\\frac{p}{n}} + \\max \\bigg \\lbrace \\sigma \\sqrt{\\frac{3\\log (1/\\xi )}{n}} , (2 C_0 \\kappa _3 \\alpha _3)^{1/3} \\frac{p^{1/2}}{n^{2/3}\\xi ^{1/3}} \\bigg \\rbrace .$ To bound $\\lambda _{\\min }( \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} )$ and $\\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2$ , it follows from Theorem 1 and Example 1 in [84] that with probability at least $1- \\xi $ , $\\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2 \\lesssim \\upsilon _1^2 \\sqrt{\\frac{p+ \\log (1/\\xi )}{n}}$ and hence $\\lambda _{\\min }( \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} ) \\ge 1/2$ as long as $n\\gtrsim p + \\log (1/\\xi )$ .", "To control the event $ _{1n} $ , combining Proposition REF with Lemmas REF and REF we obtain that with probability at least $1- 2\\xi $ , $\\Vert \\hat{\\beta }-\\beta \\Vert _\\Sigma \\le r_0 \\asymp {f}^{-1} \\sqrt{ (p + \\log (1/\\xi )) / n }$ and $\\bigg \\Vert \\frac{1}{ n } \\sum _{i=1}^n\\lbrace \\omega _i(\\hat{\\beta }) - \\omega _i \\rbrace W_i \\bigg \\Vert _2 & \\lesssim \\upsilon _1^2 \\sqrt{\\frac{p + \\log (1/\\xi ) }{n}} r_0 + \\kappa _3 f \\,r_0^2$ as long as $n\\gtrsim p+ \\log (1/\\xi )$ .", "Under 
Condition REF we have $\\kappa _3 \\le \\sqrt{\\kappa _4} \\lesssim \\upsilon _1^2$ .", "Ignoring constant factors that depend only on $\\upsilon _1$ , we choose $\\eta _{1n} \\asymp \\frac{ f}{{f}^2 } \\frac{p+ \\log (1/\\xi ) }{ n} , \\quad \\eta _{2n} \\asymp \\sqrt{\\frac{p+ \\log (1/\\xi ) }{n}} \\bigg \\lbrace \\eta _{1n} + \\sigma \\sqrt{\\frac{p + \\log (1/\\xi ) }{n }} + \\alpha _3^{1/3} \\frac{ p^{1/2} }{ n^{2/3} \\xi ^{1/3} } \\bigg \\rbrace ,$ so that $ (_{1n}^{\\rm c} ) + (_{2 n}^{\\rm c} ) \\le 5 \\xi $ .", "In view of (), we take $\\xi = (p^{3/2} / n)^{1/2} <1$ to minimize $ \\sqrt{n} \\eta _{2n} + \\xi $ as a function of $\\xi $ , implying $\\sqrt{n \\log p} \\, (\\eta _{1n} + \\eta _{2n} ) + (_{1n}^{\\rm c} ) + (_{2 n}^{\\rm c} )\\lesssim ( f /{f}^2 \\vee \\alpha _3^{1/3} ) \\sqrt{ \\log p} \\, \\frac{ p + \\log n }{ \\sqrt{n} } .$ Combining this with () and (REF ) proves the claim (REF ).", "$\\Box $" ], [ "Proof of Theorem ", "Recall from the proof of Theorem REF that with probability at least $1- 6\\xi $ , $& \\bigg \\Vert \\alpha \\Sigma ^{1/2} (\\hat{\\theta }- \\theta ^*) - \\frac{1}{n} \\sum _{i=1}^n\\omega _i W_i \\bigg \\Vert _2 \\\\& \\le \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\lbrace \\omega _i(\\hat{\\beta }) - \\omega _i \\rbrace W_i \\bigg \\Vert _2 + \\Vert \\Sigma ^{-1/2} \\hat{\\Sigma }\\Sigma ^{-1/2} - {\\rm I}_p \\Vert _2 \\cdot \\alpha \\Vert \\hat{\\theta }- \\theta ^* \\Vert _\\Sigma \\\\& \\le \\eta _{1n} (\\xi ) + \\eta _{2n} (\\xi )$ as long as $n\\gtrsim p + \\log (1/\\xi )$ , where ${\\left\\lbrace \\begin{array}{ll}\\eta _{1n} (\\xi ) \\asymp \\frac{ f}{{f}^2} \\frac{p+ \\log (1/\\xi ) }{ n} , \\\\\\eta _{2n}(\\xi ) \\asymp \\sqrt{\\frac{p+ \\log (1/\\xi ) }{n}} \\big \\lbrace \\eta _{1n}(\\xi ) + \\sigma \\sqrt{\\frac{p + \\log (1/\\xi ) }{n }} + \\alpha _3^{1/3} \\frac{ p^{1/2} }{ n^{2/3} \\xi ^{1/3} } \\big \\rbrace .\\end{array}\\right.}", "$ For any deterministic vector $a\\in ^p$ , define the partial sum $S_a = n^{-1/2} \\sum _{i=1}^n\\omega _i\\langle a, \\Sigma ^{-1}X_i \\rangle $ , satisfying $(S_a)=0$ and $\\textnormal {var}(S_a) = \\varrho _a^2 = a^^{-1} \\Omega \\Sigma ^{-1}a$ .", "Hence, with probability at least $1-6 \\xi $ , $| \\alpha \\sqrt{n} \\langle a, \\hat{\\theta }- \\theta ^* \\rangle - S_a | \\le \\Vert a \\Vert _{\\Sigma ^{-1}} \\sqrt{n} \\lbrace \\eta _{1n} (\\xi ) + \\eta _{2n}(\\xi ) \\rbrace .", "$ Next, applying the Berry-Esseen inequality (see, e.g.", "[78]) to $S_a$ yields $\\sup _{t\\in } | ( S_a \\le t ) - \\Phi (t/ \\varrho _a) | \\le \\frac{|\\omega _i \\langle a, \\Sigma ^{-1}X_i \\rangle |^3 }{2 \\varrho _a^3 \\sqrt{n} }.$ From moment condition (REF ) it follows that $ \\varrho _a^2 \\ge {\\sigma }^2 \\Vert a \\Vert ^2_{\\Sigma ^{-1}}$ , and by Condition REF , $|\\omega _i \\langle a, \\Sigma ^{-1}X_i \\rangle |^3 \\le 2 \\alpha _3 |\\langle a, \\Sigma ^{-1}X_i \\rangle |^3 \\le 2 \\kappa _3 \\alpha _3 \\cdot \\Vert a \\Vert _{\\Sigma ^{-1}}^3.$ Putting these bounds together and applying the Berry-Esseen inequality, we obtain $\\sup _{t\\in } | ( S_a \\le t ) - \\Phi (t/ \\varrho _a ) | \\le \\frac{ \\kappa _3 \\alpha _3}{{\\sigma }^3 \\sqrt{n}} .", "$ Let $G\\sim (0,1)$ .", "To complete the proof, applying (REF ) and (REF ) we see that $& ( \\alpha \\sqrt{n} \\langle a, \\hat{\\theta }-\\theta ^* \\rangle \\le t ) \\\\& \\le \\big \\lbrace S_a \\le t + \\Vert a \\Vert _{\\Sigma ^{-1}} \\sqrt{n} ( \\eta _{1n} + \\eta _{2n} ) \\big \\rbrace + 6 \\xi \\\\& \\le \\big \\lbrace \\varrho _a G \\le t + \\Vert 
a \\Vert _{\\Sigma ^{-1}} \\sqrt{n} ( \\eta _{1n} + \\eta _{2n} ) \\big \\rbrace + 6\\xi + \\frac{ \\kappa _3 \\alpha _3}{{\\sigma }^3 \\sqrt{n}} \\\\& \\le \\Phi (t/ \\varrho _a ) + 6 \\xi + \\frac{ \\kappa _3 \\alpha _3}{{\\sigma }^3 \\sqrt{n}} + \\frac{1}{\\sqrt{2\\pi } {\\sigma }} \\sqrt{n} ( \\eta _{1n} + \\eta _{2n} )$ for any $t\\in $ .", "A lower bound can be obtained by the same argument.", "In view of the order of $\\eta _{1n}(\\xi ), \\eta _{2n}(\\xi )$ stated in (REF ), we choose $\\xi = (p^{3/2} / n)^{1/2}$ so that $\\xi + \\frac{\\sqrt{n}}{{\\sigma }} ( \\eta _{1n} + \\eta _{2n} ) \\lesssim \\frac{f}{{f}^2} \\frac{p + \\log n}{{\\sigma } \\sqrt{n}} + \\sigma \\frac{p + \\log n}{{\\sigma } \\sqrt{n}} + \\alpha _3^{1/3} \\frac{p^{1/4} (p+\\log n)^{1/2} }{ {\\sigma } \\sqrt{n} }$ This proves the claimed bound (REF ).", "$\\Box $" ], [ "Proof of Theorem ", "For simplicity, we write $\\hat{\\theta }= \\hat{\\theta }_\\tau $ throughout the proof.", "For some $r\\ge r_0>0$ to be determined (in the end of the proof), we construct an intermediate “estimator\" $\\widetilde{\\theta }= (1-\\lambda _r ) \\theta ^* + \\lambda _r \\hat{\\theta }$ , where $\\lambda _r = \\sup \\lbrace \\lambda \\in [0, 1] : \\theta ^* + \\lambda (\\hat{\\theta }- \\theta ^*) \\in \\mathbb {B}_\\Sigma (r/\\alpha )\\rbrace $ .", "If $\\hat{\\theta }\\in \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ , $\\lambda _r = 1$ and hence $\\widetilde{\\theta }= \\hat{\\theta }$ ; otherwise if $\\hat{\\theta }\\notin \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ , $\\lambda _r$ is strictly less than 1 and $\\widetilde{\\theta }$ will lie at the boundary of $\\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ , i.e.", "$\\alpha \\Vert \\widetilde{\\theta }- \\theta ^* \\Vert _\\Sigma = r$ .", "Since the loss function $\\theta \\mapsto \\hat{}_\\tau (\\theta )$ is convex and continuously differentiable, by the convexity lemma—Lemma REF , and the first-order optimality condition that $\\nabla \\hat{}_\\tau ( \\hat{\\theta }) = 0$ , we have $\\langle \\nabla \\hat{}_\\tau (\\widetilde{\\theta }) - \\nabla \\hat{}_\\tau ( \\theta ^*) , \\widetilde{\\theta }- \\theta ^* \\rangle & \\le \\lambda _r \\cdot \\langle \\nabla \\hat{}_\\tau (\\hat{\\theta }) - \\nabla \\hat{}_\\tau ( \\theta ^*) , \\hat{\\theta }- \\theta ^* \\rangle \\nonumber \\\\& = \\lambda _r \\cdot \\langle - \\nabla \\hat{}_\\tau ( \\theta ^*) , \\hat{\\theta }- \\theta ^* \\rangle \\le \\Vert \\nabla \\hat{}_\\tau ( \\theta ^*) \\Vert _{\\Sigma ^{-1} } \\Vert \\widetilde{\\theta }- \\theta ^* \\Vert _\\Sigma .", "$ Let $\\psi _\\tau (u) = \\ell _\\tau ^{\\prime }(u)$ be the score function.", "Conditioned on the event $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)\\rbrace $ , we see that $\\Vert \\nabla \\hat{}_\\tau (\\theta ^* ) \\Vert _{\\Sigma ^{-1}} & \\le \\alpha \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\bigg \\Vert \\frac{1 }{n} \\sum _{i=1}^n\\psi _\\tau \\big ( \\underbrace{ (Y_i - X_i^) \\mathbb {1}( Y_i \\le X_i^) + \\alpha \\langle X_i, \\beta - \\theta ^* \\rangle }_{= \\, \\omega _i(\\beta ) {\\rm ~by~(\\ref {def:Zi})}} \\big ) W_i \\bigg \\Vert _2 \\\\& \\le \\alpha \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n(1 - ) \\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta ) \\big ) - \\psi _\\tau (\\omega _i) \\big \\rbrace W_i \\bigg \\Vert _2 \\\\&~~~~~ + \\alpha \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\big \\Vert \\psi _\\tau 
\\big ( \\omega _i(\\beta ) \\big ) W_i \\big \\Vert _2 + \\alpha \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n(1-) \\psi _\\tau (\\omega _i) W_i \\bigg \\Vert _2 ,$ where $\\omega _i = \\omega _i(\\beta ^*) = \\varepsilon _i \\mathbb {1}(\\varepsilon _i\\le 0) +\\alpha X_i^\\beta ^* - \\theta ^*)$ .", "Applying Lemmas REF , REF and REF , we obtain that with probability at least $1 - 2 e^{-t}$ , $\\alpha ^{-1} \\Vert \\nabla \\hat{}_\\tau (\\theta ^* ) \\Vert _{\\Sigma ^{-1}}& \\le \\underbrace{ C_0 \\upsilon _1 \\bigg ( \\sigma \\sqrt{ \\frac{p+t }{n}} + \\tau \\frac{p+t}{n} \\bigg ) }_{{\\rm variance~upper~bound}} + \\underbrace{ \\frac{\\sigma ^2}{\\tau } }_{{\\rm robustification~bias~upper~bound}} \\\\&~~~~~+ \\underbrace{ C_1 \\upsilon _1^2 \\sqrt{\\frac{p+t}{n}} \\, r_0 + \\frac{\\sigma }{\\tau } r_0 + \\frac{1}{2} \\kappa _3 ( f + 1/\\tau ) r_0^2 }_{{\\rm accumulated~estimation~error~bound}} ,$ where $C_0, C_1 >0 $ are absolute constants.", "For the left-hand side of (REF ), it follows from Lemma REF and the fact $\\widetilde{\\theta }\\in \\mathbb {B}_{\\Sigma }(\\theta ^*, r/\\alpha )$ that, with probability at least $1-e^{-t}$ conditioned on $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)\\rbrace $ , $\\langle \\nabla \\hat{}_\\tau (\\widetilde{\\theta }) - \\nabla \\hat{}_\\tau ( \\theta ^*) , \\widetilde{\\theta }- \\theta ^* \\rangle \\ge \\frac{\\alpha ^2}{4} \\Vert \\widetilde{\\theta }- \\theta ^* \\Vert _\\Sigma ^2$ as long as $\\tau ^2 \\ge 32 \\lbrace \\kappa _4 (r^2 + 4r_0^2) + \\sigma ^2 \\rbrace $ and $n\\gtrsim (\\tau /r)^2 (p+t)$ .", "With $\\tau \\asymp \\sigma \\sqrt{ n/(p+t)}$ , putting these upper and lower bounds together and applying (REF ), we obtain $\\alpha \\Vert \\widetilde{\\theta }- \\theta ^* \\Vert _\\Sigma \\le C_3 \\upsilon _1 \\sigma \\sqrt{\\frac{p+t }{n}} + C_4 \\upsilon _1^2 \\sqrt{\\frac{p+t}{n}} r_0 +2 \\kappa _3( f + 1/\\tau ) r_0^2 .$ To complete the proof, we choose $r \\asymp \\tau /\\kappa _4^{1/2}$ so that $r\\ge r_0$ under the sample size requirement $ n\\gtrsim p+t$ .", "Note further that $\\kappa _3 \\le \\kappa _4^{1/2} \\lesssim \\upsilon _1^2$ .", "Under the stated upper bounds on $r_0$ and with $\\tau \\asymp \\sigma \\sqrt{ n/(p+t)}$ , it holds with probability at least $1- 3 e^{-t}$ that $\\alpha \\Vert \\widetilde{\\theta }- \\theta ^* \\Vert _\\Sigma \\le C_3 \\upsilon _1 \\sigma \\sqrt{\\frac{ p+t }{n}} + C_5 \\upsilon _1^2 \\bigg ( \\sqrt{\\frac{p+t}{n}} r_0 + f r_0^2 \\bigg ) .", "$ Provided $n\\gtrsim p+t$ , we are guaranteed that $\\alpha \\Vert \\widetilde{\\theta }- \\theta ^* \\Vert _\\Sigma < r$ (with high probability), that is, $\\widetilde{\\theta }$ falls in the interior of $\\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ .", "By its construction in the first paragraph of the proof, we must have $\\hat{\\theta }= \\widetilde{\\theta }$ so that the above bound (REF ) also applies to $\\hat{\\theta }$ , which completes the proof.", "$\\Box $" ], [ "Proof of Theorem ", "Define the vector-valued random process $\\Pi _n(\\beta , \\theta ) = \\frac{1}{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta , \\theta ) \\big ) - \\psi _\\tau \\big ( \\omega _i(\\beta ^*, \\theta ^*) \\big ) \\big \\rbrace W_i + \\alpha \\Sigma ^{1/2}(\\theta - \\theta ^*) ,$ for $\\beta , \\theta \\in ^p$ , where $\\omega _i(\\beta , \\theta ) = (\\varepsilon _i - \\Delta _i) \\mathbb {1}(\\varepsilon _i \\le \\Delta _i) + \\alpha X_i^\\beta - \\theta )$ and $\\Delta _i = \\Delta _i(\\beta ) = X_i^\\beta - 
\\beta ^*)$ .", "Note that the two-step ES estimator $\\hat{\\theta }_\\tau $ satisfies the first-order condition $\\nabla \\hat{}_\\tau (\\hat{\\theta }_\\tau ) = (-1/n) \\sum _{i=1}^n\\psi _\\tau ( \\omega _i(\\hat{\\beta }, \\hat{\\theta }_\\tau ) ) X_i = 0$ .", "The key is then to bound the supremum $\\sup _{ (\\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\Vert \\Pi _n(\\beta , \\theta ) \\Vert _2$ .", "For any $\\epsilon \\in (0, r)$ , there exists an $\\epsilon $ -net $(\\epsilon , r) = \\lbrace \\theta _1, \\ldots , \\theta _N\\rbrace $ of $ \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ with $N\\le (1+2r/\\epsilon )^p$ .", "For any $(\\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ given, there exists some $1\\le j\\le N$ such that $\\alpha \\Vert \\theta - \\theta _j \\Vert _\\Sigma \\le \\epsilon $ .", "Recall that $\\psi _\\tau (\\cdot )$ is 1-Lipschitz continuous, we have $\\Vert \\Pi _n(\\beta , \\theta ) - \\Pi _n(\\beta , \\theta _j) \\Vert _2 & = \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta , \\theta ) \\big ) - \\psi _\\tau \\big ( \\omega _i(\\beta , \\theta _j) \\big )\\big \\rbrace W_i + \\alpha \\Sigma ^{1/2}(\\theta - \\theta _j ) \\bigg \\Vert _2 \\\\& \\le \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta , \\theta ) \\big ) - \\psi _\\tau \\big ( \\omega _i(\\beta , \\theta _j) \\big )\\big \\rbrace W_i \\bigg \\Vert _2 + \\epsilon \\\\& \\le \\sup _{u\\in \\mathbb {S}^{p-1}} \\frac{1}{n} \\sum _{i=1}^n| W_i^ṵ \\cdot \\alpha X_i^\\theta - \\theta _j) | + \\epsilon \\\\& \\le \\sup _{u\\in \\mathbb {S}^{p-1}} \\bigg ( \\frac{1}{n} \\sum _{i=1}^n\\langle W_i, u\\rangle ^2 \\bigg )^{1/2} \\cdot \\bigg ( \\frac{\\alpha ^2}{n} \\sum _{i=1}^n\\langle X_i, \\theta -\\theta _j\\rangle ^2 \\bigg )^{1/2} + \\epsilon \\\\& \\le \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^nW_i W_i^\\Vert _2 \\cdot \\epsilon + \\epsilon ,$ which further implies $& \\sup _{(\\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\Vert \\Pi _n(\\beta , \\theta ) \\Vert _2 \\nonumber \\\\& \\le \\max _{1\\le j\\le N} \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\Vert \\Pi _n(\\beta , \\theta _j ) \\Vert _2 + \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^nW_i W_i^\\Vert _2 \\cdot \\epsilon + \\epsilon .", "$ We first bound $\\max _{1\\le j\\le N} \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\Vert \\Pi _n(\\beta , \\theta _j ) \\Vert _2$ .", "By the triangle inequality, $\\Vert \\Pi _n(\\beta , \\theta _j ) \\Vert _2 & \\le \\underbrace{ \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta , \\theta _j) \\big ) - \\psi _\\tau \\big ( \\omega _i(\\beta ^*, \\theta _j ) \\big ) \\big \\rbrace W_i \\bigg \\Vert _2 }_{=:\\, \\Lambda _1(\\beta , \\theta _j ) } \\\\&~~~~~~ + \\underbrace{ \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta ^*, \\theta _j) \\big ) - \\psi _\\tau \\big ( \\omega _i(\\beta ^*, \\theta ^*) \\big ) \\big \\rbrace W_i + \\alpha \\Sigma ^{1/2}(\\theta _j - \\theta ^*) \\bigg \\Vert _2 }_{=:\\, \\Lambda _2 (\\theta _j) } ,$ and hence $\\max _{1\\le j\\le N} \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\Vert \\Pi _n(\\beta , \\theta _j ) \\Vert _2 \\le \\max 
_{1\\le j\\le N} \\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^* , r_0) } \\Lambda _1(\\beta , \\theta _j ) + \\max _{1\\le j\\le N} \\Lambda _2(\\theta _j) .", "\\nonumber $ Following the proof of Lemma REF , and noting that $_{X_i} \\lbrace \\omega _i(\\beta ^*, \\theta _j) \\rbrace = \\alpha X_i^\\theta ^* - \\theta _j)$ , it can be similarly shown that $\\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^* , r_0)} \\big \\Vert \\big \\lbrace \\psi _\\tau \\big ( \\omega _i(\\beta , \\theta _j) \\big ) - \\psi _\\tau \\big ( \\omega _i(\\beta ^*, \\theta _j ) \\big ) \\big \\rbrace W_i \\big \\Vert _2 \\le \\frac{1}{2} \\kappa _3 ( f + 1/\\tau ) r_0^2 + (\\sigma + \\kappa _3 \\epsilon ) \\frac{ r_0 }{ \\tau } .$ Then, applying Lemmas REF and REF to $\\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\Lambda _1(\\beta , \\theta _j )$ for each $j$ , and taking the union bound over $j=1,\\ldots , N$ , we obtain that with probability at least $1-e^{-t}$ , $\\max _{1\\le j\\le N} \\sup _{\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\Lambda _1(\\beta , \\theta _j ) \\le C_1 \\upsilon _1^2 \\sqrt{\\frac{p \\log (3 r/\\epsilon ) + t}{n}} r_0 + \\frac{1}{2} \\kappa _3 ( f + 1/\\tau ) r_0^2 + (\\sigma + \\kappa _3 \\epsilon ) \\frac{ r_0 }{ \\tau }.", "\\nonumber $ Moreover, it follows from Lemma REF that with probability at least $1-e^{-t}$ , $\\max _{1\\le j\\le N} \\Lambda _2(\\theta _j) \\le C_2 \\upsilon _1^2 \\sqrt{\\frac{p + t}{n}} \\cdot r + ( \\sigma ^2 + \\kappa _4 r^2 /3 ) \\frac{r}{\\tau ^2}$ Together, the previous three bounds imply that with probability at least $1-2 e^{-t}$ , $& \\max _{1\\le j\\le N} \\sup _{ \\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) } \\Vert \\Pi _n(\\beta , \\theta _j ) \\Vert _2 \\nonumber \\\\& \\le C_3 \\upsilon _1^2 \\bigg \\lbrace r_0 \\sqrt{\\frac{p \\log (3 r/\\epsilon ) + t}{n}} + r \\sqrt{\\frac{p + t}{n}} \\,\\bigg \\rbrace \\nonumber \\\\& ~~~~~ + \\frac{1}{2} \\kappa _3 ( f + 1/\\tau ) r_0^2 + (\\sigma + \\kappa _3 \\epsilon ) \\frac{ r_0 }{ \\tau } +( \\sigma ^2 + \\kappa _4 r^2 /3 ) \\frac{ r}{\\tau ^2} .", "$ Turning next to $\\Vert (1/n) \\sum _{i=1}^nW_i W_i^_2$ , from Example 1 in [84] we see that $\\Vert (1/n) \\sum _{i=1}^nW_i W_i^ {\\rm I}_p \\Vert _2 \\le 20\\upsilon _1^2\\sqrt{(4p+t)/n}$ with probability at least $1-e^{-t}$ as long as $n\\ge 4p+t$ .", "Provided that $n\\gtrsim p + t$ , we have $\\Vert (1/n) \\sum _{i=1}^nW_i W_i^_2 \\le 2$ .", "Combining this with (REF ) and (REF ), and taking $\\epsilon =r/n$ , we conclude that with probability at least $1-3e^{-t}$ , $& \\sup _{ (\\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\Vert \\Pi _n(\\beta , \\theta ) \\Vert _2 \\nonumber \\\\ & \\lesssim \\bigg ( \\sqrt{\\frac{p \\log n + t}{n}} + f r_0 + \\frac{\\sigma }{\\tau } \\bigg ) \\cdot r_0 + \\bigg ( \\sqrt{\\frac{p+t}{n}} + \\frac{ \\sigma ^2 + r^2}{\\tau ^2} + \\frac{r_0}{n \\tau } \\bigg ) \\cdot r $ as long as $n\\gtrsim p + t$ .", "In view of Theorem REF , the two-step ES estimator $\\hat{\\theta }_\\tau $ with $\\tau \\asymp \\sigma \\sqrt{ n /(p+t)}$ satisfies the bound $\\alpha \\Vert \\hat{\\theta }_\\tau - \\theta ^* \\Vert _\\Sigma \\lesssim \\sigma \\sqrt{(p + t)/n}$ with probability at least $1-3e^{-t}$ conditioned on $\\lbrace \\hat{\\beta }\\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\rbrace $ .", "This together with (REF ) proves (REF ).", "$\\Box $" ], [ "Proof of Theorem ", "To begin with, applying Proposition REF and 
Theorem REF with $t=\\log n$ we obtain that with probability at least $1-7 n^{-1}$ , $\\Vert \\hat{\\beta }- \\beta ^* \\Vert _\\Sigma \\le r_0 \\asymp {f}^{-1} \\sqrt{(p+\\log n)/n}$ and $\\bigg \\Vert \\alpha \\Sigma ^{1/2} ( \\hat{\\theta }_\\tau - \\theta ^* ) - \\frac{1}{n} \\sum _{i=1}^n\\psi _\\tau (\\omega _i) W_i \\bigg \\Vert _2 \\le r_1 \\asymp \\sigma \\frac{p+ \\log n}{ n} + \\frac{f}{{f}^2} \\frac{(p\\log n)^{1/2} (p + \\log n)^{1/2}}{ n} .$ For any deterministic vector $a \\in ^p$ , define $S_a= n^{-1/2} \\sum _{i=1}^n\\psi _\\tau (\\omega _i) \\langle a, \\Sigma ^{-1} X_i \\rangle $ and $S_a^0 = S_a - (S_a)$ .", "Noting that $|\\psi _\\tau (t) - t | \\le |t|^{1+q} /\\tau ^q$ for any $q >0$ , and similar to (REF ), we have $| \\psi _\\tau (\\omega _i) \\langle a, \\Sigma ^{-1} X_i \\rangle | & \\le \\frac{1}{\\tau ^2} \\big ( |\\omega _i|^3 |\\langle \\Sigma ^{-1/2} a, W_i \\rangle | \\big ) \\\\& \\le \\frac{2\\alpha _3 }{\\tau ^2} |\\langle \\Sigma ^{-1/2} a, W_i \\rangle | \\le \\frac{ 2 \\alpha _3 }{\\tau ^2} \\Vert a \\Vert _{\\Sigma ^{-1}} .$ Hence, with probability at least $1-7 n^{-1}$ , $| \\alpha \\sqrt{n} \\langle a, \\hat{\\theta }_\\tau - \\theta ^* \\rangle - S_a^0 | \\le \\Vert a \\Vert _{\\Sigma ^{-1}} \\sqrt{n} \\big (r_1 + 2 \\alpha _3 \\tau ^{-2} \\big ) .", "$ Next, define $\\xi _i = \\psi _\\tau (\\omega _i) \\langle a, \\Sigma ^{-1} X_i \\rangle $ , so that $S_a^0 = n^{-1/2} \\sum _{i=1}^n(\\xi _i - \\xi _i) $ .", "The Berry-Esseen inequality (see, e.g.", "[78]) states that $\\sup _{t\\in } \\big | \\big \\lbrace S_a^0 \\le \\textnormal {var}(S_a^0)^{1/2} t \\big \\rbrace - \\Phi (t) \\big | \\le \\frac{|\\xi _i - \\xi _i |^3}{2 \\textnormal {var}(\\xi _i)^{3/2} \\sqrt{n} }.$ Note that the mean satisfies $| \\xi _i | \\le \\tau ^{-1} ( \\omega _i^2 |\\langle a, \\Sigma ^{-1} X_i \\rangle | ) \\le \\Vert a \\Vert _{\\Sigma ^{-1}} \\sigma ^2/ \\tau $ .", "For the second moments, recall that $\\varrho _{a, \\tau }^2 = \\lbrace \\psi _\\tau ( \\omega _i ) \\langle a, \\Sigma ^{-1} X_i \\rangle \\rbrace ^2 ~\\mbox{ and }~ \\varrho _a^2 = ( \\omega _i \\langle a, \\Sigma ^{-1} X_i \\rangle )^2 = a^^{-1} \\Omega \\Sigma ^{-1} a$ with $\\Omega = ( \\omega ^2 X X^$ , satisfying $0 & \\le \\varrho _a^2 - \\varrho _{a, \\tau }^2 \\le \\omega _i^2 \\mathbb {1}(|\\omega _i| >\\tau ) \\langle a, \\Sigma ^{-1} X_i \\rangle ^2 \\nonumber \\\\& \\le \\frac{2 \\alpha _3 }{\\tau } \\langle a, \\Sigma ^{-1} X_i \\rangle ^2 = \\frac{2 \\alpha _3}{\\tau } \\Vert a \\Vert _{\\Sigma ^{-1}}^2 .", "\\nonumber $ Moreover, $ \\varrho _a^2\\ge {\\sigma }^2 \\Vert a \\Vert _{\\Sigma ^{-1}}^2 $ according to (REF ).", "Provided $\\tau \\ge 2\\max \\lbrace 2 \\alpha _3 / {\\sigma }^2 , \\sigma ^2 /{\\sigma } \\rbrace $ , it follows that $\\textnormal {var}(\\xi _i) = \\varrho ^2_{a, \\tau } - ( \\xi _i )^2 \\ge \\Vert a \\Vert _{\\Sigma ^{-1}}^2 \\big ( {\\sigma }^2 - 2 \\alpha _3 /\\tau - \\sigma ^4 /\\tau ^2 \\big ) \\ge \\Vert a \\Vert _{\\Sigma ^{-1}}^2 \\frac{{\\sigma }^2}{4}.", "$ Similarly, using the inequality $|\\psi _\\tau (t)| \\le |t|$ we obtain $|\\xi _i|^3 \\le |\\omega _i \\langle a, \\Sigma ^{-1} X_i\\rangle |^3 \\le 2 \\alpha _3 \\, | \\langle a, \\Sigma ^{-1} X_i \\rangle |^3 \\le 2 \\kappa _3 \\alpha _3 \\, \\Vert a \\Vert _{\\Sigma ^{-1}}^3 .$ Putting these bounds together leads to $\\sup _{t\\in } \\big | \\big \\lbrace S_a^0 \\le \\textnormal {var}(S_a^0)^{1/2} t \\big \\rbrace - \\Phi (t) \\big | \\le C_1 \\frac{ \\kappa _3 \\alpha _3 + (\\sigma ^2 /\\tau 
)^3}{{\\sigma }^3 \\sqrt{n}} .", "$ Note further that $| \\textnormal {var}(S_a^0) - \\varrho _{a }^2 | \\le \\Vert a \\Vert _{\\Sigma ^{-1}}^2 ( 2 \\alpha _3 / \\tau + \\sigma ^4 / \\tau ^2)$ .", "This together with (REF ) implies $\\sup _{t\\in } \\big | \\Phi (t/ \\textnormal {var}(S_a^0)^{1/2}) - \\Phi (t/ \\varrho _{a } ) \\big | \\le \\frac{C_2}{ {\\sigma }^2} \\bigg ( \\frac{\\sigma ^4}{\\tau ^2} + \\frac{\\alpha _3}{\\tau } \\bigg ) .", "$ Here both $C_1, C_2>0$ are absolute constants.", "Let $G\\sim (0, 1)$ .", "Combing (REF ), (REF ) and (REF ) we conclude that, for any $t\\in $ , $& \\big ( \\alpha \\sqrt{n} \\langle a, \\hat{\\theta }_\\tau -\\theta ^* \\rangle \\le t \\big ) \\\\& \\le \\big \\lbrace S_a^0 \\le t + \\Vert a \\Vert _{\\Sigma ^{-1}} \\sqrt{n} \\big (r_1 + 2 \\alpha _3 \\tau ^{-2} \\big ) \\big \\rbrace + 7n^{-1} \\\\& \\le \\big \\lbrace \\textnormal {var}(S_a^0)^{1/2} G \\le t + \\Vert a \\Vert _{\\Sigma ^{-1}} \\sqrt{n} \\big (r_1 + 2 \\alpha _3 \\tau ^{-2} \\big ) \\big \\rbrace + 7 n^{-1} + C_1 \\frac{ \\kappa _3 \\alpha _3 + (\\sigma ^2 /\\tau )^3}{{\\sigma }^3 \\sqrt{n}} \\\\& \\le \\big \\lbrace \\varrho _{a } G \\le t + \\Vert a \\Vert _{\\Sigma ^{-1}} \\sqrt{n} \\big (r_1 + 2 \\alpha _3 \\tau ^{-2} \\big ) \\big \\rbrace + 7 n^{-1} + C_1 \\frac{ \\kappa _3 \\alpha _3 + ( \\sigma ^2 /\\tau )^3}{{\\sigma }^3 \\sqrt{n}} + \\frac{C_2}{ {\\sigma }^2} \\bigg ( \\frac{\\sigma ^4}{\\tau ^2} + \\frac{\\alpha _3}{\\tau } \\bigg ) \\\\& \\le ( \\varrho _{a } G \\le t) + 7 n^{-1} + C_1 \\frac{ \\kappa _3 \\alpha _3 + ( \\sigma ^2 /\\tau )^3}{{\\sigma }^3 \\sqrt{n}} + \\frac{C_2}{ {\\sigma }^2} \\bigg ( \\frac{\\sigma ^4}{\\tau ^2} + \\frac{\\alpha _3}{\\tau } \\bigg ) + \\frac{\\Vert a \\Vert _{\\Sigma ^{-1}} }{\\sqrt{2\\pi } \\varrho _a} \\sqrt{n} \\big (r_1 + 2\\alpha _3 \\tau ^{-2} \\big ) \\\\& \\le ( \\varrho _{a } G \\le t) + 7 n^{-1} + C_1 \\frac{ \\kappa _3 \\alpha _3 + ( \\sigma ^2 /\\tau )^3}{{\\sigma }^3 \\sqrt{n}} +\\frac{C_2}{ {\\sigma }^2} \\bigg ( \\frac{\\sigma ^4}{\\tau ^2} + \\frac{\\alpha _3}{\\tau } \\bigg ) + \\frac{ \\sqrt{n} }{\\sqrt{2\\pi } {\\sigma }} \\big (r_1 + 2\\alpha _3 \\tau ^{-2} \\big ).$ Moreover, similar arguments lead to a lower bound for $ ( \\alpha \\sqrt{n} \\langle a, \\hat{\\theta }_\\tau -\\theta ^* \\rangle \\le t )$ .", "This proves (REF ) by noting that $\\tau \\asymp \\sigma \\sqrt{\\frac{ n}{p + \\log n}} ~\\mbox{ and }~ r_1 \\asymp \\sigma \\frac{p+ \\log n}{ n} + \\frac{f}{{f}^2} \\frac{(p\\log n)^{1/2} (p + \\log n)^{1/2}}{n}$ Next we consider the oracle Huberized ES estimator $\\hat{\\theta }^{{\\rm ora}}_\\tau = _\\theta \\sum _{i=1}^n\\ell _\\tau (Z_i - \\alpha X_i^)$ .", "Note that $\\omega _i = Z_i - \\alpha X_i^^*= \\varepsilon _i \\mathbb {1}(\\varepsilon _i\\le 0) - _{X_i} \\lbrace \\varepsilon _i \\mathbb {1}(\\varepsilon _i\\le 0) \\rbrace $ and hence $( \\omega _i^2 | X_i) \\le \\sigma ^2$ .", "Applying Theorem 2.1 in [68] we obtain that for any $t>0$ , the oracle estimator $\\hat{\\theta }^{{\\rm ora}}_\\tau $ with $\\tau \\asymp \\sigma \\sqrt{n/(p + t)}$ satisfies $\\alpha \\Vert \\hat{\\theta }^{{\\rm ora}}_\\tau - \\theta ^* \\Vert _\\Sigma \\le \\sigma \\sqrt{\\frac{p+t}{n}} ~~\\mbox{ and }~~\\bigg \\Vert \\alpha \\Sigma ^{1/2} ( \\hat{\\theta }^{{\\rm ora}}_\\tau - \\theta ^* ) - \\frac{1}{n} \\sum _{i=1}^n\\psi _\\tau (\\omega _i ) W_i \\bigg \\Vert _2 \\lesssim \\sigma \\frac{p+t}{n}$ with probability at least $1-3e^{-t}$ as long as $n\\gtrsim p+t$ .", "The Berry-Esseen bound (REF ) can then be 
proved using similar arguments as above.", "$\\Box $" ], [ "Proof of Theorem ", "Recall that $\\omega _i(\\beta , \\theta ) = ( Y_i - X_i^) \\mathbb {1}(Y_i \\le X_i^) + \\alpha X_i^\\beta - \\theta )$ for $\\beta , \\theta \\in ^p$ , and $\\omega _i = \\omega _i( \\beta ^*, \\theta ^*) = \\varepsilon _i \\mathbb {1}(\\varepsilon _i \\le 0) + \\alpha X_i^\\beta ^* - \\theta ^*)$ .", "Given $r_0, r_1>0$ , our goal is to bound $\\sup _{ \\Vert \\beta - \\beta ^* \\Vert _\\Sigma \\le r_0, \\atop \\alpha \\Vert \\theta - \\theta ^* \\Vert _\\Sigma \\le r_1 } \\big \\Vert \\hat{V}_\\gamma (\\beta , \\theta ) - V_\\gamma (\\beta ^*, \\theta ^* ) \\big \\Vert _2~\\mbox{ and }~ \\Vert V_\\gamma (\\beta ^*, \\theta ^*) - V \\Vert _2 ,$ where $\\hat{V}_\\gamma (\\beta , \\theta ) = \\frac{1}{n} \\sum _{i=1}^n\\psi _\\gamma ^2( \\omega _i( \\beta , \\theta ) ) W_i W_i^ \\quad V_\\gamma (\\beta , \\theta ) = \\hat{V}_\\gamma (\\beta , \\theta ) ~\\mbox{ and }~ V = \\Sigma ^{-1/2} \\Omega \\Sigma ^{-1/2} .$ Noting that $|\\psi _\\tau ^2(\\omega _i) - \\omega _i^2 | = |(\\omega _i^2 - \\gamma ^2) \\mathbb {1} (|\\omega _i| > \\gamma ) | \\le \\gamma ^{-1} |\\omega _i|^3$ , similarly to (REF ) we obtain $\\Vert V_\\gamma (\\beta ^*, \\theta ^*) - V \\Vert _2 \\le \\gamma ^{-1} \\sup _{ u \\in \\mathbb {S}^{p-1} } \\lbrace |\\omega _i |^3 (W_i^ṵ)^2 \\rbrace \\le 2 \\alpha _3 \\gamma ^{-1}.$ Without loss of generality, we assume $C_0=1$ for brevity in the rest of the proof, that is, $\\max _{1\\le i\\le n} \\Vert W_i \\Vert _2 \\le \\sqrt{p}$ .", "For some $\\epsilon _0 \\in (0, r_0)$ and $\\epsilon _1 \\in (0, r_1)$ to be determined, there exist $\\epsilon _0$ -net $\\lbrace \\beta _1, \\ldots ,\\beta _{N_0} \\rbrace \\subseteq \\mathbb {B}_\\Sigma ( \\beta ^*, r_0)$ and $(\\epsilon _1/\\alpha )$ -net $\\lbrace \\theta _1 , \\ldots ,\\theta _{N_1} \\rbrace \\subseteq \\mathbb {B}_\\Sigma ( \\theta ^*, r_1/\\alpha )$ such that $N_0 \\le (1 + 2r_0/\\epsilon _0)^p$ and $N_1 \\le (1 + 2r_1/\\epsilon _1)^p$ .", "For any $(\\beta , \\theta ) \\in \\mathbb {B}_\\Sigma ( \\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma ( \\theta ^*, r_1/\\alpha )$ , there exist some $1\\le j\\le N_0$ and $1\\le k\\le N_1$ such that $\\Vert \\beta - \\beta _j \\Vert _\\Sigma \\le \\epsilon _0$ and $\\alpha \\Vert \\theta - \\theta _k \\Vert _\\Sigma \\le \\epsilon _1$ .", "Consequently, $& \\Vert \\hat{V}_\\gamma (\\beta , \\theta ) - V_\\gamma (\\beta ^*, \\theta ^* ) \\Vert _2 \\nonumber \\\\& \\le \\underbrace{ \\Vert \\hat{V}_\\gamma (\\beta , \\theta ) - \\hat{V}_\\gamma (\\beta _j, \\theta _k ) \\Vert _2}_{{\\rm discretization~error}} + \\underbrace{ \\Vert \\hat{V}_\\gamma (\\beta _j, \\theta _k ) - V_\\gamma (\\beta _j, \\theta _k ) \\Vert _2}_{{\\rm stochastic~error}} + \\underbrace{ \\Vert V_\\gamma (\\beta _j, \\theta _k ) - V_\\gamma (\\beta ^*, \\theta ^* ) \\Vert _2 }_{{\\rm approximation~error}} .", "\\nonumber $ For the discretization error term, we have $& \\Vert \\hat{V}_\\gamma (\\beta , \\theta ) - \\hat{V}_\\gamma (\\beta _j, \\theta _k ) \\Vert _2 \\\\& \\le \\sup _{u, v \\in \\mathbb {S}^{p-1} } \\frac{1}{n} \\sum _{i=1}^n| \\psi ^2_\\gamma ( \\omega _i(\\beta , \\theta ) ) - \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k) ) | \\cdot | W_i^ṵ \\cdot W_i^v̰ |$ Following the proof of Lemma REF and since $\\sup _u | \\psi _\\gamma (u) | \\le \\gamma $ , it holds $& | \\psi ^2_\\gamma ( \\omega _i(\\beta , \\theta ) ) - \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k) ) | \\nonumber \\\\& \\le | \\psi 
^2_\\gamma ( \\omega _i(\\beta , \\theta ) ) - \\psi ^2_\\gamma (\\omega _i(\\beta , \\theta _k) ) |+ | \\psi ^2_\\gamma ( \\omega _i(\\beta , \\theta _k) ) - \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k) ) | \\nonumber \\\\& \\le 2 \\gamma \\lbrace | \\alpha X_i^\\theta - \\theta _k) | + | X_i^\\beta - \\beta _j ) | \\rbrace \\le 2 \\gamma (\\epsilon _0 + \\epsilon _1)\\sqrt{p} .", "\\nonumber $ Substituting this into the above inequality yields $& \\Vert \\hat{V}_\\gamma (\\beta , \\theta ) - \\hat{V}_\\gamma (\\beta _j, \\theta _k ) \\Vert _2 \\nonumber \\\\& \\le 2\\gamma \\sup _{u, v \\in \\mathbb {S}^{p-1} } \\frac{1}{n} \\sum _{i=1}^n\\lbrace | \\alpha X_i^\\theta - \\theta _k) | + | X_i^\\beta - \\beta _j ) | \\rbrace \\cdot | W_i^ṵ \\cdot W_i^v̰ | \\nonumber \\\\& \\le 2 \\gamma ( \\epsilon _0 + \\epsilon _1 ) \\sqrt{p} \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^nW_i W_i^\\Vert _2 .", "$ For the random matrix $(1/n) \\sum _{i=1}^nW_i W_i^, it follows from Theorem~1 in \\cite {Z2022} that with probability at least $ 1-1/n$,$ (1/n) i=1nWi Wi Ip 2 12 (p+n) / n$ and thus $ (1/n) i=1nWi Wi2 2$ provided that $ np+n$.$ Next we jump to the approximation error.", "Write $\\Delta _{ij} = X_i^\\beta _j - \\beta ^*) ~\\mbox{ and }~ \\Theta _{ik} = \\alpha X_i^\\theta _k - \\theta ^*)$ satisfying $|\\Delta _{ij}| \\le \\sqrt{p} r_0$ and $|\\Theta _{ik}| \\le \\sqrt{p} r_1$ .", "Note that $ |\\omega _i(\\beta _j, \\theta _k) - \\omega _i | \\le |\\Delta _{ij} | + | \\Theta _{ik} | $ .", "By the Lipschitz continuity of $\\psi _\\gamma $ , that is, $|\\psi _\\gamma (u) - \\psi _\\gamma (v) | \\le |u-v|$ , and the fact that $|\\psi _\\gamma (u) | \\le |u|$ , we have $ | _{X_i} \\lbrace \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k)) \\rbrace - _{X_i} \\lbrace \\psi ^2_\\gamma (\\omega _i ) \\rbrace | \\le _{X_i } \\lbrace (2 |\\omega _i| + |\\Delta _{ij} | + | \\Theta _{ik} | ) ( |\\Delta _{ij} | + | \\Theta _{ik} | ) \\rbrace .\\nonumber $ In particular, Condition REF implies $_{X_i} (|\\omega _i |) \\le \\sigma $ .", "Consequently, $& \\Vert V_\\gamma (\\beta _j, \\theta _k ) - V_\\gamma (\\beta ^*, \\theta ^* ) \\Vert _2 \\nonumber \\\\& \\le \\sup _{u\\in \\mathbb {S}^{p-1} } | _{X_i} \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k)) -_{X_i} \\psi _\\gamma ^2 (\\omega _i) | (W_i^ṵ )^2 \\nonumber \\\\& \\le \\sup _{u\\in \\mathbb {S}^{p-1} } \\lbrace (2 |\\omega _i| + |\\Delta _{ij} | + | \\Theta _{ik} | ) ( |\\Delta _{ij} | + | \\Theta _{ik} | ) (W_i^ṵ )^2 \\rbrace \\nonumber \\\\& \\le 2 \\kappa _3 \\sigma ( r_0 + r_1 ) + \\kappa _4( r_0 + r_1 )^2 .", "$ It remains to control the stochastic error term $\\Vert \\hat{V}_\\gamma (\\beta _j, \\theta _k ) - V_\\gamma (\\beta _j, \\theta _k ) \\Vert _2$ .", "Via a standard covering argument, there exists a $(1/4)$ -net $$ of the unit sphere with $| | \\le 9^p$ such that $\\Vert \\hat{V}_\\gamma (\\beta _j, \\theta _k ) - V_\\gamma (\\beta _j, \\theta _k ) \\Vert _2 \\le 2 \\max _{u \\in } \\bigg | \\frac{1}{n} \\sum _{i=1}^n(1-) \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k)) ( W_i^ṵ)^2 \\bigg | .$ Given $u\\in $ and for $k=2, 3, \\ldots $ , note that $| \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k)) ( W_i^ṵ)^2 |^k \\le \\gamma ^{2(k-2)}\\lbrace \\psi ^4_\\gamma (\\omega _i(\\beta _j, \\theta _k)) ( W_i^ṵ)^{2k} \\rbrace $ and $& _{X_i} \\lbrace \\psi ^4_\\gamma (\\omega _i(\\beta _j, \\theta _k)) \\rbrace \\le \\gamma _{X_i} \\lbrace |\\omega _i(\\beta _j, \\theta _k) |^3 \\rbrace \\\\&\\le 6 \\gamma \\lbrace _{X_i} ( |\\omega 
_i|^3) + ( |\\Delta _{ij} | + | \\Theta _{ik} | )^3 \\rbrace \\lesssim \\gamma \\lbrace \\alpha _3 + ( \\sqrt{p} r_0 + \\sqrt{p} r_1)^3 \\rbrace .$ Moreover, the sub-Gaussianity of $W_i$ implies $( W_i^ṵ / \\upsilon _1 )^{2k} & = 2k \\int _0^\\infty ( | W_i^ṵ / \\upsilon _1 | \\ge t) {\\rm d} t \\le 4k \\int _0^\\infty t^{2k-1} e^{-t^2/2} {\\rm d} t \\le 2^{k+1} k!.$ Applying Bernstein's inequality, we find that for any $z\\ge 0$ , $\\bigg | \\frac{1}{n} \\sum _{i=1}^n(1-) \\psi ^2_\\gamma (\\omega _i(\\beta _j, \\theta _k)) ( W_i^ṵ)^2 \\bigg | \\lesssim \\sqrt{ \\lbrace \\alpha _3 + ( \\sqrt{p} r_0 + \\sqrt{p} r_1)^3 \\rbrace \\frac{\\gamma z}{n}} + \\frac{ \\gamma ^2 z}{n}$ with probability at least $1-2 e^{-z}$ .", "Taking the union bound over $u\\in $ and $(j, k) \\in \\lbrace 1, \\ldots , N_0\\rbrace \\times \\lbrace 1, \\ldots , N_1\\rbrace $ , and setting $z = \\log (9^p)+y$ , we conclude that with probability at least $1-2 N_0 N_1 e^{-y}$ , $\\max _{1\\le j\\le N_0 \\atop 1\\le k\\le N_1 }\\Vert \\hat{V}_\\gamma (\\beta _j, \\theta _k ) - V_\\gamma (\\beta _j, \\theta _k ) \\Vert _2 \\lesssim \\sqrt{ \\lbrace \\alpha _3 + ( \\sqrt{p} r_0 + \\sqrt{p} r_1)^3 \\rbrace \\gamma \\frac{p+y}{n}} + \\gamma ^2\\frac{p+y}{n} .", "$ Finally, we choose $\\epsilon _0 = r_0/n$ and $\\epsilon _1 = r_1/n$ such that $N_0 N_1 \\le (2n+1)^{2p}$ .", "Combining (REF )–(REF ) and taking $y = (2p+1) \\log (2n+1)$ prove the claimed bound.", "$\\Box $" ], [ "Proof of Lemma ", "With a change of variable $\\delta = \\Sigma ^{1/2} (\\beta - \\beta ^*)$ for $\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)$ , write $r_i(\\delta ) = \\omega _i(\\beta ) = \\phi (\\varepsilon _i - W_i^) + \\alpha W_i^+ \\alpha X_i^\\beta ^* - \\theta ^*)$ where $\\phi (t) := t\\mathbb {1}(t\\le 0)$ .", "Moreover, define the $^p$ -valued random process $\\mathcal {R}(\\delta ) = (1/n) \\sum _{i=1}^n(1-) \\lbrace \\psi _\\tau (r_i(\\delta )) - \\psi _\\tau (r_i(0)) \\rbrace W_i$ , satisfying $\\mathcal {R}(0) = 0$ and $(\\delta ) = 0$ .", "The goal is to bound the supremum $\\sup _{\\delta \\in \\mathbb {B}(r_0)} \\Vert (\\delta ) \\Vert _2$ .", "Since both $\\psi _\\tau (\\cdot )$ and $\\phi (\\cdot )$ are Lipschitz continuous that have derivatives $\\psi _\\tau ^{\\prime }(t) = \\mathbb {1} (|t| \\le \\tau )$ and $\\phi ^{\\prime }(t) = \\mathbb {1}(t\\le 0)$ almost everywhere, respectively, the stochastic process $\\mathcal {R}(\\delta ) $ is absolutely continuous.", "To apply Theorem A.3 in [79], in the following we show that its gradient $\\nabla (\\delta ) = (1/n) \\sum _{i=1}^n\\lbrace w_i(\\delta ) W_i W_i^ w_i(\\delta ) W_i W_i^$ has bounded exponential moments, where $w_i(\\delta ) := \\psi _\\tau ^{\\prime } \\big ( r_i(\\delta ) \\big ) \\big \\lbrace \\alpha - \\mathbb {1}(\\varepsilon _i \\le W_i^) \\big \\rbrace $ satisfies $|w_i(\\delta )| \\le 1 - \\alpha $ .", "For any $u, v \\in \\mathbb {S}^{p-1}$ and $|\\lambda | \\le \\sqrt{n} / 4 $ , using the inequality $| e^u - 1 - u | \\le u^2 e^{|u|} /2$ we obtain $& \\exp \\big \\lbrace \\lambda \\sqrt{n} u^(\\delta ) v / \\upsilon _1^2 \\big \\rbrace \\nonumber \\\\& = \\prod _{i=1}^n \\Bigg [ 1 + \\frac{\\lambda ^2 }{2 \\upsilon _1^4 n } e^{ | \\lambda | / ( \\upsilon _1^2\\sqrt{n} ) } \\big \\lbrace w_i(\\delta ) W_i^ṵ W_i^v̰ - w_i(\\delta ) W_i^ṵ W_i^v̰ \\big \\rbrace ^2 e^{ |\\lambda | |W_i^ṵ W_i^v̰| / ( \\upsilon _1^2 \\sqrt{n} ) } \\Bigg ] \\nonumber \\\\& \\le \\prod _{i=1}^n \\Bigg [ 1 + \\frac{ \\lambda ^2 e^{1/4}}{\\upsilon _1^4 n } \\lbrace w_i(\\delta ) 
W_i^ṵ W_i^v̰ \\rbrace ^2 e^{ |W_i^ṵ W_i^v̰| / (2\\upsilon _1)^2 } \\nonumber \\\\&~~~~~~~~~~~~~~~~+ \\frac{\\lambda ^2 e^{1/4} }{\\upsilon _1^4 n } \\lbrace w_i(\\delta ) W_i^ṵ W_i^v̰ \\rbrace ^2 e^{ |W_i^ṵ W_i^v̰| / (2\\upsilon _1)^2 } \\Bigg ] \\nonumber \\\\& \\le \\prod _{i=1}^n \\Bigg \\lbrace 1 + \\frac{\\lambda ^2 e^{1/4}}{\\upsilon _1^4 n } ( W_i^ṵ W_i^v̰ )^2 e^{ |W_i^ṵ W_i^v̰| / (2\\upsilon _1)^2 } + \\frac{\\lambda ^2 e^{1/4} }{\\upsilon _1^4 n } e^{ |W_i^ṵ W_i^v̰| / (2\\upsilon _1)^2 } \\Bigg \\rbrace .", "$ For each $u\\in \\mathbb {S}^{p-1}$ , define the non-negative random variable $\\chi _u = ( W_i^ṵ)^2 / (2 \\upsilon _1)^2$ .", "From Condition REF we see that $ ( \\chi _u \\ge t ) \\le 2 e^{-2t}$ for any $t \\ge 0 $ .", "A standard calculation shows that $( e^{\\chi _u} ) = 1 + \\int _0^\\infty e^t (\\chi _u \\ge t ) {\\rm d} t \\le 3 ~\\mbox{ and }~( \\chi _u^2 e^{\\chi _u} ) = \\int _0^\\infty (t^2 + 2t) e^t ( \\chi _u \\ge t ) {\\rm d} t \\le 8 .$ Taking the supremum over $u\\in \\mathbb {S}^{p-1}$ yields $\\sup _{u \\in \\mathbb {S}^{p-1}} e^{(W_i^ṵ)^2/(2\\upsilon _1 )^2 } \\le 3~\\mbox{ and }~\\sup _{u \\in \\mathbb {S}^{p-1}} (W_i^ṵ)^4 e^{(W_i^ṵ)^2/(2 \\upsilon _1 )^2 } \\le 128 \\upsilon _1^4 .$ Substituting these exponential moment bounds into (REF ), and by Hölder's inequality, we obtain that for any $\\delta \\in ^p$ and $\\lambda \\in $ satisfying $|\\lambda | \\le \\sqrt{n} / 4$ , $\\sup _{ u , v \\in \\mathbb {S}^{p-1} } \\exp \\big \\lbrace \\lambda \\sqrt{n} u^(\\delta ) v/ \\upsilon _1^2 \\big \\rbrace \\le \\prod _{i=1}^n \\Bigg ( 1 + 128 e^{1/4} \\frac{\\lambda ^2 }{ n } + 3e^{1/4} \\frac{\\lambda ^2 }{ n } \\Bigg ) \\le e^{ C_0^2 \\lambda ^2 /2 } ,$ where $C_0 = e^{1/8} \\sqrt{262}$ .", "This verifies condition (A.4) in [79] with ${\\rm g} = \\frac{\\sqrt{n}}{4\\sqrt{2} }$ .", "Therefore, applying Theorem A.3 therein to the process $\\lbrace \\sqrt{n} (\\delta ) / \\upsilon _1^2 , \\delta \\in \\mathbb {B}(r_0)\\rbrace $ we have that, with probability at least $1-e^{-t}$ ($t \\ge 1/2$ ), $\\sup _{\\delta \\in \\mathbb {B}(r_0) } \\Vert (\\delta ) \\Vert _2 \\le 6 C_0 \\upsilon _1^2 \\sqrt{\\frac{4p + 2t }{n}} \\cdot r_0$ as long as $n\\ge 64 (2p+ t)$ .", "This establishes the claim.", "$\\Box $" ], [ "Proof of Lemma ", "Note that $\\Vert \\lbrace \\psi _\\tau ( \\omega _i(\\beta ) ) W_i \\rbrace \\Vert _2 = \\sup _{u\\in \\mathbb {S}^{p-1} } | \\lbrace \\psi _\\tau ( \\omega _i(\\beta ) ) W_i^ṵ \\rbrace |$ and $\\psi _\\tau (t) = t \\mathbb {1}(|t|\\le \\tau ) + \\tau (t) \\mathbb {1}(|t| >\\tau )$ .", "Recall that the conditional CDF $F=F_{\\varepsilon _i | X_i}$ of $\\varepsilon _i$ given $X_i$ is continuously differentiable with $f=F^{\\prime }$ .", "Let $_{X_i}$ be the conditional expectation given $X_i$ .", "For $\\beta \\in ^p$ and $u\\in \\mathbb {S}^{p-1}$ , define $\\Delta _i = \\Delta _i(\\beta ) = X_i^\\beta - \\beta ^*)$ and $& E_i(\\beta ) = _{X_i} \\psi _\\tau \\big ( \\omega _i (\\beta ) \\big ) \\\\& = \\int _{-\\infty }^{\\Delta _i} \\psi _\\tau \\big ( t - \\Delta _i + \\alpha X_i^\\beta - \\theta ^*) \\big ) f(t) {\\rm d} t+ \\int _{\\Delta _i}^\\infty \\psi _\\tau \\big ( \\alpha X_i^\\beta - \\theta ^* ) \\big ) f(t) {\\rm d}t .$ Since $\\psi _\\tau (\\cdot )$ is absolutely continuous and has a derivative $\\psi ^{\\prime }_\\tau (t) = \\mathbb {1}(|t| \\le \\tau )$ almost everywhere, by the fundamental theorem of Lebesgue integral calculus we have $E_i(\\beta ) - E_i(\\beta ^*) = \\int _0^1 \\langle \\nabla E_i \\big ( \\beta ^* + 
t(\\beta -\\beta ^*) \\big ) , \\beta - \\beta ^* \\rangle {\\rm d} t ,$ where $& \\nabla E_i(\\beta ) \\\\& = \\int _{-\\infty }^{\\Delta _i} \\psi ^{\\prime }_\\tau \\big ( t - \\Delta _i + \\alpha X_i^\\beta - \\theta ^*) \\big ) f(t) {\\rm d} t \\cdot (\\alpha -1) X_i + f(\\Delta _i) \\psi _\\tau \\big ( \\alpha X_i^ \\beta - \\theta ^* ) \\big ) \\cdot (\\alpha -1) X_i \\\\& ~~~~~~ + \\psi _\\tau ^{\\prime }\\big (\\alpha X_i^\\beta - \\theta ^* ) \\big ) \\lbrace 1- F(\\Delta _i) \\rbrace \\cdot \\alpha X_i - f(\\Delta _i) \\psi _\\tau \\big ( \\alpha X_i^\\beta - \\theta ^* ) \\big ) \\cdot (\\alpha - 1 ) X_i \\\\& = (\\alpha -1) F(\\Delta _i) X_i + \\alpha \\lbrace 1- F(\\Delta _i) \\rbrace X_i + _{X_i} \\mathbb {1}\\lbrace | \\omega _i(\\beta ) | > \\tau \\rbrace \\lbrace \\mathbb {1}(\\varepsilon _i \\le \\Delta _i) - \\alpha \\rbrace X_i \\\\& = \\lbrace \\alpha - F(\\Delta _i) \\rbrace X_i + _{X_i} \\mathbb {1}\\lbrace | \\omega _i(\\beta ) | > \\tau \\rbrace \\lbrace \\mathbb {1}(\\varepsilon _i \\le \\Delta _i) - \\alpha \\rbrace X_i .$ For $t\\in [0, 1]$ , write $\\beta _t = \\beta ^* + t(\\beta -\\beta ^*)$ so that $X_i^\\beta _t - \\beta ^*) = t \\Delta _i$ and $\\langle \\nabla E_i \\big ( \\beta ^* + t(\\beta -\\beta ^*) \\big ) , \\beta - \\beta ^* \\rangle = \\lbrace \\alpha - F(t\\Delta _i) \\rbrace \\Delta _i + _{X_i} \\mathbb {1}\\lbrace | \\omega _i(\\beta _t) | > \\tau \\rbrace \\lbrace \\mathbb {1}(\\varepsilon _i \\le t \\Delta _i) - \\alpha \\rbrace \\Delta _i .$ By Condition REF , $| \\alpha - F(t\\Delta _i) | \\le f \\cdot t | \\Delta _i |$ almost surely.", "Moreover, $_{X_i} \\mathbb {1}\\lbrace | \\omega _i(\\beta _t) | > \\tau \\rbrace | \\mathbb {1}(\\varepsilon _i \\le t \\Delta _i) - \\alpha | \\le \\frac{1-\\alpha }{\\tau } _{X_i} | \\omega _i(\\beta _t)|.", "$ Observe that $| \\omega _i(\\beta _t) | & \\le | (\\varepsilon _i - t\\Delta _i) \\mathbb {1}(\\varepsilon _i \\le t \\Delta _i) -\\varepsilon _i \\mathbb {1}(\\varepsilon _i \\le 0) + \\alpha t \\Delta _i | + | \\omega _i (\\beta ^*) | \\\\& \\le | \\omega _i | + {\\left\\lbrace \\begin{array}{ll}|\\varepsilon _i \\mathbb {1}(0<\\varepsilon _i \\le t \\Delta _i) + t \\Delta _i \\lbrace \\alpha - \\mathbb {1}(\\varepsilon _i \\le t \\Delta _i ) \\rbrace | & {\\rm ~if }~ \\Delta _i \\ge 0 \\\\|t \\Delta _i \\lbrace \\alpha - \\mathbb {1}(\\varepsilon _i \\le t \\Delta _i) \\rbrace - \\varepsilon _i \\mathbb {1} (t\\Delta _i < \\varepsilon _i \\le 0 ) | & {\\rm ~if }~ \\Delta _i < 0\\end{array}\\right.}", "\\\\& \\le | \\omega _i | + t | \\Delta _i | ,$ thus implying $_{X_i} | \\omega _i(\\beta _t)| = _{X_i} | \\omega _i | + t | \\Delta _i | \\le \\big ( _{X_i} \\omega _i^2 \\big )^{1/2} + t | \\Delta _i | \\le \\sigma + t | \\Delta _i | .$ Substituting this into (REF ) yields $_{X_i} \\mathbb {1}\\lbrace | \\omega _i(\\beta _t) | > \\tau \\rbrace | \\mathbb {1}(\\varepsilon _i \\le t \\Delta _i) - \\alpha | \\le \\tau ^{-1} \\big ( \\sigma + t | \\Delta _i | \\big )$ .", "Putting together the pieces, we conclude that for any $\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)$ , $& \\big | \\big \\lbrace E_i(\\beta ) - E_i(\\beta ^*) \\big \\rbrace W_i^ṵ \\big | \\nonumber \\\\&\\le \\int _0^1 \\big \\lbrace f \\cdot t \\Delta _i^2 + \\tau ^{-1} \\big ( \\sigma + t | \\Delta _i | \\big ) \\cdot | \\Delta _i | \\big \\rbrace \\cdot | W_i^ṵ | \\, {\\rm d} t \\nonumber \\\\& \\le \\frac{1}{2} \\kappa _3 (f + 1/\\tau ) r_0^2 + \\sigma \\frac{r_0}{ \\tau } .", "$ Note also that $_{X_i} \\mathbb 
{1}\\lbrace | \\omega _i(\\beta _t) | > \\tau \\rbrace \\le \\tau ^{-2} ( \\sigma ^2 + 2 \\sigma t |\\Delta _i| + t^2 \\Delta ^2_i )$ , which in turn implies $& \\big | \\big \\lbrace E_i(\\beta ) - E_i(\\beta ^*) \\big \\rbrace W_i^ṵ \\big | \\nonumber \\\\& \\le \\frac{\\kappa _3}{2} f r_0^2 + (\\sigma ^2 + \\kappa _3 \\sigma r_0 + \\kappa _4 r_0^2/3) \\frac{r_0}{\\tau ^2} .$ Finally, for $\\Vert \\psi _\\tau (\\omega _i) W_i \\Vert _2 =\\sup _{u\\in \\mathbb {S}^{p-1} } |\\lbrace \\psi _\\tau (\\omega _i) W_i^ṵ\\rbrace |$ , note that $_{X_i} (\\omega _i) = 0$ , $_{X_i} (\\omega _i^2) \\le \\sigma ^2$ and $|\\psi _\\tau (t) - t| = (|t| - \\tau ) \\mathbb {1}(|t| > \\tau )\\le \\tau ^{-1} t^2$ .", "Therefore, $| \\lbrace \\psi _\\tau (\\omega _i) W_i^ṵ \\rbrace | \\le \\tau ^{-1} \\big ( \\omega _i^2 |W_i^ṵ| \\big ) \\le \\tau ^{-1} \\sigma ^2$ .", "Combining this with (REF ) and (REF ) proves the claims (REF ).", "and (REF ), respectively.", "$\\Box $" ], [ "Proof of Lemma ", "We apply a standard covering argument and Bernstein's inequality to bound the $\\ell _2$ -norm of the centered random vector $ (1/n) \\sum _{i=1}^n(1-) \\psi _\\tau (\\omega _i ) W_i$ .", "For any $\\epsilon \\in (0, 1)$ , there exists an $\\epsilon $ -net $_\\epsilon $ of $\\mathbb {S}^{p-1}$ with $|_\\epsilon | \\le (1 + 2/\\epsilon )^p$ such that $\\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^n(1-) \\psi _\\tau ( \\omega _i) W_i \\bigg \\Vert _2 \\le \\frac{1}{1 - \\epsilon } \\max _{u\\in _\\epsilon } \\frac{1}{n} \\sum _{i=1}^n(1-) \\psi _\\tau ( \\omega _i ) W_i^ṵ .", "$ Recall that $| \\psi _\\tau ( \\omega _i ) | \\le \\tau $ and $_{X_i} \\psi ^2_\\tau ( \\omega _i ) \\le _{X_i} (\\omega _i^2) \\le \\sigma ^2$ .", "To bound the higher-order moments, Condition REF ensures that for each $k\\ge 3$ , $| W_i^ṵ |^k \\le 2 \\upsilon _1^k k \\int _0^\\infty t^{k-1} e^{-t^2/2} {\\rm d} t = \\upsilon _2^k k \\Gamma (k/2)$ , where $\\Gamma (\\cdot )$ is the Gamma function and $\\upsilon _2 = \\sqrt{2} \\upsilon _1$ .", "If $k=2l$ for some $l\\ge 2$ , $| W_i^ṵ |^k \\le 2 \\upsilon _2^k (k/2)!", "\\le 2 \\upsilon _2^k k!/2^k$ ; and if $k=2l+1$ for some $l\\ge 1$ , $| W_i^ṵ |^k \\le \\upsilon _2^{k} k \\Gamma (l + 1/2) = \\sqrt{\\pi } \\upsilon _2^k \\frac{ k (2l)!", "}{4^l l!}", "= 2 \\sqrt{\\pi } \\upsilon _2^k \\frac{k!", "}{2^k l!", "}.$ Putting together the pieces, we obtain that $| \\psi _\\tau ( \\omega _i ) W_i^ṵ |^2 \\le \\sigma ^2$ and $| \\psi _\\tau ( \\omega _i ) W_i^ṵ |^k & \\le \\tau ^{k-2} \\lbrace |W_i^ṵ|^k _{X_i} ( \\omega _i^2 ) \\rbrace \\le \\tau ^{k-2} \\sigma ^2 ( |W_i^ṵ|^k ) \\\\& \\le \\frac{k!", "}{2} \\cdot \\sqrt{\\pi } \\upsilon _2^2 \\sigma ^2 \\cdot (\\upsilon _2 \\tau /2)^{k-2} , \\ \\ k\\ge 3.$ By applying Bernstein's inequality and the union bound, we find that with probability at least $1 - e^{-t}$ , $\\max _{u\\in _\\epsilon } \\frac{1}{n} \\sum _{i=1}^n(1-) \\psi _\\tau (\\omega _i ) W_i^ṵ \\le 2 \\upsilon _2 \\sigma \\sqrt{\\frac{ p\\log (1+2/\\epsilon ) + t }{n}} + \\upsilon _2 \\tau \\frac{ p\\log (1+2/\\epsilon ) + t }{2 n } .$ Combining this with (REF ), and taking $\\epsilon = 1/2$ , we establish the claimed tail bound.", "$\\Box $" ], [ "Proof of Lemma ", "For $\\beta , \\theta \\in ^p$ , define the joint loss function $\\hat{}_\\tau (\\beta , \\theta ) = (1/n) \\sum _{i=1}^n\\ell _\\tau \\big ( Z_i ( \\beta ) - \\alpha X_i^\\big )$ , and $\\Delta _i = \\Delta _i(\\beta ) = X_i^ \\beta - \\beta ^* )$ .", "Note that $| Z_i( \\beta ) - \\alpha X_i^^* | & \\le | (\\varepsilon _i - 
\\Delta _i) \\mathbb {1}(\\varepsilon _i \\le \\Delta _i) -\\varepsilon _i \\mathbb {1}(\\varepsilon _i \\le 0) + \\alpha \\Delta _i | + | \\omega _i | \\\\& \\le | \\omega _i | + {\\left\\lbrace \\begin{array}{ll}|\\varepsilon _i \\mathbb {1}(0<\\varepsilon _i \\le \\Delta _i) +\\Delta _i \\lbrace \\alpha - \\mathbb {1}(\\varepsilon _i \\le \\Delta _i ) \\rbrace | & {\\rm ~if }~ \\Delta _i \\ge 0 \\\\| \\Delta _i \\lbrace \\alpha - \\mathbb {1}(\\varepsilon _i \\le \\Delta _i) \\rbrace - \\varepsilon _i \\mathbb {1} (\\Delta _i < \\varepsilon _i \\le 0 ) | & {\\rm ~if }~ \\Delta _i < 0\\end{array}\\right.}", "\\\\& \\le | \\omega _i | + | \\Delta _i | .$ For each $i=1,\\ldots , n$ , define the event $_i(\\beta , \\theta ) & = \\big \\lbrace |\\omega _i | \\le \\tau /4 \\big \\rbrace \\cap \\big \\lbrace | X_i^\\beta - \\beta ^*) | \\le \\tau /4 \\big \\rbrace \\cap \\big \\lbrace | X_i^\\theta - \\theta ^*) |/ \\Vert \\theta - \\theta ^* \\Vert _\\Sigma \\le \\tau /(2 r ) \\big \\rbrace , \\nonumber $ such that conditioning on $_i(\\beta , \\theta )$ with $\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^* , r_0)$ and $\\theta \\in \\mathbb {B}_\\Sigma (\\theta ^* , r / \\alpha )$ , $| Z_i ( \\beta ) - \\alpha X_i^| \\le | Z_i( \\beta ) - \\alpha X_i^^* | + \\alpha | X_i^\\theta - \\theta ^* ) | \\le \\frac{ \\tau }{4} + \\frac{ \\tau }{4} + \\frac{ \\tau }{2} = \\tau .$ Consequently, $& \\langle \\partial _\\theta \\hat{}_\\tau (\\beta , \\theta ) - \\partial _\\theta \\hat{}_\\tau ( \\beta , \\theta ^*) , \\theta - \\theta ^* \\rangle \\nonumber \\\\& = \\frac{\\alpha }{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau \\big ( Z_i ( \\beta ) -\\alpha X_i^^* \\big ) - \\psi _\\tau \\big ( Z_i ( \\beta ) - \\alpha X_i^\\big ) \\big \\rbrace X_i^\\theta - \\theta ^* ) \\nonumber \\\\& \\ge \\frac{\\alpha }{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau \\big ( Z_i ( \\beta ) - \\alpha X_i^^* \\big ) - \\psi _\\tau \\big ( Z_i (\\beta ) - \\alpha X_i^\\big ) \\big \\rbrace X_i^\\theta -\\theta ^*) \\mathbb {1}_{_i(\\beta ,\\theta ) } \\nonumber \\\\& \\ge \\frac{\\alpha ^2 }{n} \\sum _{i=1}^n\\langle X_i, \\theta - \\theta ^* \\rangle ^2 \\mathbb {1}_{_i(\\beta , \\theta ) } .", "$ For any $R>0$ , define the functions $\\varphi _R(t) = t^2 \\mathbb {1}(|t| \\le R/2) + ( t - (t) R)^2 \\mathbb {1}(R/2 < |t| \\le R ) \\\\\\mbox{ and }~ \\phi _R(t) = \\mathbb {1}(|t|\\le R/2) + \\lbrace 2 - (2t/R) (t) \\rbrace \\mathbb {1}( R/2 < |t| \\le R ) ,$ which are smoothed proxies of $t\\mapsto t^2 \\mathbb {1} (|t| \\le R)$ and $t\\mapsto \\mathbb {1} (|t| \\le R)$ , respectively.", "Moreover, $\\varphi _R(\\cdot )$ is $R$ -Lipschitz continuous and satisfies (i) $t^2 \\mathbb {1}(|t| \\le R/2) \\le \\varphi _R(t) \\le t^2 \\mathbb {1}(|t| \\le R)$ and (ii) $\\varphi _{cR}(c t) = c^2 \\varphi _R(t) $ for any $c>0$ ; $\\phi _R(\\cdot )$ is $(2/R)$ -Lipschitz continuous and satisfies $\\mathbb {1}(|t| \\le R/2) \\le \\phi _R(t) \\le \\mathbb {1}(|t| \\le R)$ .", "For $\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^* , r_0)$ and $\\theta \\in \\mathbb {B}_\\Sigma (\\theta ^*, r / \\alpha )$ , consider the following reparametrizations $\\gamma = \\Sigma ^{1/2} (\\beta - \\beta ^*) \\in \\mathbb {B}(r_0) ~\\mbox{ and }~ \\delta = \\Sigma ^{1/2}(\\theta -\\theta ^*) / \\Vert \\theta - \\theta ^* \\Vert _\\Sigma \\in \\mathbb {S}^{p-1}$ throughout the rest of the proof.", "Then $\\langle \\partial _\\theta \\hat{}_\\tau (\\beta , \\theta ) - \\partial _\\theta \\hat{}_\\tau ( \\beta , \\theta ^*) , \\theta - \\theta ^* \\rangle 
& \\ge \\frac{\\alpha ^2 }{n} \\sum _{i=1}^n\\chi _i \\cdot \\varphi _{ \\tau \\Vert \\theta - \\theta ^* \\Vert _\\Sigma / (2r) } (\\langle X_i, \\theta -\\theta ^* \\rangle ) \\phi _{\\tau /4 }(W_i^) \\nonumber \\\\& = \\alpha ^2 \\Vert \\theta - \\theta ^* \\Vert _\\Sigma ^2 \\cdot \\underbrace{ \\frac{1}{n} \\sum _{i=1}^n\\chi _i \\cdot \\varphi _{ \\tau / (2r) } ( W_i^) \\phi _{\\tau /4 }(W_i^) }_{=: \\, G_n(\\beta , \\theta ) }, $ where $\\chi _i = \\mathbb {1} \\lbrace |\\omega _i | \\le \\tau /4 \\rbrace $ .", "In the following, we bound $G_n(\\beta , \\theta ) - G_n(\\beta , \\theta )$ and $G_n(\\beta , \\theta )$ , respectively.", "Noting that $\\varphi _R(t) \\ge t^2 \\mathbb {1}(|t| \\le R/2)$ and $\\phi _R(t) \\ge \\mathbb {1}(|t|\\le R/2)$ , we have $G_n(\\gamma , \\delta )& \\ge (W_i^)^2 \\mathbb {1} \\lbrace | W_i^| \\le \\tau /(4r) \\rbrace \\mathbb {1} \\lbrace | W_i^| \\le \\tau / 8 \\rbrace \\mathbb {1} \\lbrace | \\omega _i | \\le \\tau / 4 \\rbrace \\nonumber \\\\& = ( W_i^)^2 - ( W_i^)^2 \\mathbb {1} \\lbrace | W_i^| > \\tau /(4r) \\rbrace - ( W_i^)^2 \\mathbb {1} \\lbrace | W_i^| > \\tau / 8 \\rbrace \\nonumber \\\\&~~~~~~ - ( W_i^)^2 \\mathbb {1} \\lbrace | \\omega _i | > \\tau /4 \\rbrace \\nonumber \\\\& \\ge 1 - \\Big ( \\frac{4 r}{\\tau } \\Big )^2 (W_i^)^4 - \\Big ( \\frac{8}{\\tau } \\Big )^2 ( W_i^)^2 ( W_i^)^2 - \\Big ( \\frac{4}{\\tau } \\Big )^2 (\\omega _i W_i^)^2 \\nonumber \\\\& \\ge 1 - \\kappa _4 ( 4r / \\tau )^2 - \\kappa _4 (8 r_0 / \\tau )^2 - (4 \\sigma /\\tau )^2 \\nonumber \\\\& \\ge \\frac{1}{2}, $ where the last inequality holds if $\\tau ^2 \\ge 32 \\lbrace \\kappa _4 (r^2 + 4 r_0^2) + \\sigma ^2 \\rbrace $ .", "Next, consider the supremum $\\Lambda _n = \\sup _{ ( \\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\lbrace - G_n(\\beta , \\theta ) + G_n(\\beta , \\theta ) \\rbrace $ .", "For each pair $(\\beta , \\theta )$ , write $g_{\\beta , \\theta } ( X_i, \\varepsilon _i) = \\chi _i \\varphi _{\\tau /(2 r)} (W_i^) \\phi _{\\tau /4} (W_i^) $ , so that $\\Lambda _n = \\sup _{ ( \\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\frac{1}{n}\\sum _{i=1}^n\\lbrace g_{\\beta , \\theta } ( X_i, \\varepsilon _i) - g_{\\beta , \\theta } ( X_i, \\varepsilon _i) \\rbrace .$ Since $0 \\le \\varphi _R(t) \\le \\min \\lbrace (R/2)^2, t^2 \\rbrace $ and $0\\le \\phi _R(t) \\le 1$ for any $t\\in $ , $0 \\le g_{\\beta , \\theta } ( X_i, \\varepsilon _i) \\le (\\tau / 4r )^2 ~\\mbox{ and }~ g^2_{\\beta , \\theta } ( X_i, \\varepsilon _i) \\le \\kappa _4 .$ Applying Theorem 7.3 in [65], we have that for any $t\\ge 0$ , $\\Lambda _n \\le \\Lambda _n + ( \\Lambda _n )^{1/2} \\frac{\\tau }{2 r} \\sqrt{\\frac{t}{n}} + (2 \\kappa _4 )^{1/2} \\sqrt{\\frac{t}{n}} + \\Big (\\frac{\\tau }{4r} \\Big )^2 \\frac{t}{3n} $ with probability at least $1-e^{-t}$ .", "To bound $\\Lambda _n$ , using symmetrization techniques and by the connection between Gaussian and Rademacher complexities (see, e.g.", "Lemma 4.5 in [75]), we see that $\\Lambda _n \\le \\sqrt{2\\pi } \\cdot \\bigg \\lbrace \\sup _{ ( \\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\mathbb {G}_{ \\beta , \\theta } \\bigg \\rbrace ,$ where $\\mathbb {G}_{\\beta , \\theta } = (1/n) \\sum _{i=1}^ng_i \\chi _i \\varphi _{\\tau /(2 r)} (W_i^) \\phi _{\\tau /4} (W_i^)$ and $g_i$ 's are independent standard 
normal random variables.", "Let $^*$ be the conditional expectation given $\\lbrace (X_i, \\varepsilon _i ) \\rbrace _{i=1}^n$ .", "Note that $\\lbrace \\mathbb {G}_{\\beta , \\theta } \\rbrace $ is a (conditional) Gaussian process.", "For $(\\beta , \\theta ), (\\beta ^{\\prime }, \\theta ^{\\prime }) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ , define $(\\gamma ^{\\prime }, \\delta ^{\\prime })$ accordingly, and consider the decomposition $\\mathbb {G}_{\\beta , \\theta } - \\mathbb {G}_{\\beta ^{\\prime }, \\theta ^{\\prime } } &= \\mathbb {G}_{\\beta , \\theta } - \\mathbb {G}_{\\beta , \\theta ^{\\prime } } + \\mathbb {G}_{ \\beta , \\theta ^{\\prime } } - \\mathbb {G}_{\\beta ^{\\prime }, \\theta ^{\\prime } } \\\\& = \\frac{1}{n} \\sum _{i=1}^ng_i \\chi _i \\phi _{\\tau /4} (W_i^) \\big \\lbrace \\varphi _{\\tau /(2r)} ( W_i^) - \\varphi _{\\tau /(2r)} ( W_i^^{\\prime }) \\big \\rbrace \\\\&~~~~~ + \\frac{1}{n} \\sum _{i=1}^ng_i \\chi _i \\varphi _{\\tau /(2r)} ( W_i^’) \\big \\lbrace \\phi _{\\tau /4 } (W_i^) - \\phi _{\\tau /4} (W_i^^{\\prime }) \\big \\rbrace .$ Note that $^*(\\mathbb {G}_{\\beta , \\theta } - \\mathbb {G}_{\\beta ^{\\prime }, \\theta ^{\\prime } })^2 \\le 2 ^*(\\mathbb {G}_{\\beta , \\theta } - \\mathbb {G}_{\\beta , \\theta ^{\\prime } } )^2 + 2 ^* (\\mathbb {G}_{ \\beta , \\theta ^{\\prime } } - \\mathbb {G}_{\\beta ^{\\prime }, \\theta ^{\\prime } } )^2$ .", "By the Lipschitz properties of $\\varphi _R$ and $\\phi _R$ , $^* (\\mathbb {G}_{\\beta , \\theta } - \\mathbb {G}_{\\beta , \\theta ^{\\prime } })^2 \\le \\frac{1}{n^2} \\sum _{i=1}^n\\big \\lbrace \\varphi _{\\tau /(2r)} ( W_i^) - \\varphi _{\\tau /(2r)} ( W_i^^{\\prime }) \\big \\rbrace ^2 \\le \\Big (\\frac{\\tau }{2r} \\Big )^2 \\frac{1}{n^2} \\sum _{i=1}^n\\lbrace W^i ( \\delta - \\delta ^{\\prime } ) \\rbrace ^2$ and $^* (\\mathbb {G}_{\\beta , \\theta ^{\\prime } } - \\mathbb {G}_{\\beta ^{\\prime } , \\theta ^{\\prime } })^2 \\le \\frac{1}{n^2} \\sum _{i=1}^n\\Big ( \\frac{\\tau }{4r}\\Big )^4 \\Big ( \\frac{8}{\\tau } \\Big )^2 \\lbrace W_i^\\gamma - \\gamma ^{\\prime }) \\rbrace ^2 = \\Big ( \\frac{\\tau }{2 r^2 }\\Big )^2 \\frac{1}{n^2} \\sum _{i=1}^n\\lbrace W_i^\\gamma - \\gamma ^{\\prime }) \\rbrace ^2 .$ Define another (conditional) Gaussian process $\\lbrace \\mathbb {Z}_{\\beta ,\\theta }\\rbrace $ as $\\mathbb {Z}_{\\beta ,\\theta } = \\frac{\\sqrt{2} \\tau }{2 r} \\cdot \\frac{1}{n} \\sum _{i=1}^ng_i^{\\prime } W_i^+ \\frac{\\sqrt{2} \\tau }{2 r^2 } \\cdot \\frac{1}{n} \\sum _{i=1}^ng_i^{\\prime \\prime } W_i^,$ where $g_1^{\\prime }, g_1^{\\prime \\prime }, \\ldots , g_n^{\\prime }, g_n^{\\prime \\prime }$ are independent standard normal random variables that are independent of all the other variables.", "From the above calculations we have $^* (\\mathbb {G}_{\\beta , \\theta } - \\mathbb {G}_{\\beta ^{\\prime }, \\theta ^{\\prime } })^2 \\le ^* (\\mathbb {Z}_{\\beta , \\theta } - \\mathbb {Z}_{\\beta ^{\\prime }, \\theta ^{\\prime } })^2$ .", "Then, applying Sudakov-Fernique’s Gaussian comparison inequality (see, e.g.", "Theorem 7.2.11 in [82]), we obtain $\\bigg \\lbrace \\sup _{ ( \\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\mathbb {G}_{ \\beta , \\theta } \\bigg \\rbrace \\le ^* \\bigg \\lbrace \\sup _{ ( \\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\mathbb {Z}_{ \\beta , \\theta 
} \\bigg \\rbrace ,$ which remains valid if $^*$ is replaced by $$ .", "For the latter, it is easy to see that $& \\bigg \\lbrace \\sup _{ ( \\beta , \\theta ) \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0) \\times \\mathbb {B}_\\Sigma (\\theta ^*, r / \\alpha ) } \\mathbb {Z}_{ \\beta , \\theta } \\bigg \\rbrace \\nonumber \\\\& \\le \\frac{\\sqrt{2} \\tau }{2 r} \\bigg ( \\sup _{ \\delta \\in \\mathbb {S}^{p-1} } \\frac{1}{n} \\sum _{i=1}^ng_i^{\\prime } W_i^\\bigg ) + \\frac{\\sqrt{2} \\tau }{2r^2 } \\bigg \\lbrace \\sup _{ \\gamma \\in \\mathbb {B}(r_0) } \\frac{1}{n} \\sum _{i=1}^ng_i^{\\prime \\prime } W_i^\\bigg \\rbrace \\nonumber \\\\& \\le \\frac{\\sqrt{2}}{2} \\frac{\\tau }{r} \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^ng_i^{\\prime } W_i \\bigg \\Vert _2 + \\frac{\\sqrt{2}}{2} \\frac{\\tau r_0}{r^2} \\bigg \\Vert \\frac{1}{n} \\sum _{i=1}^ng_i^{\\prime \\prime } W_i \\bigg \\Vert _2 \\nonumber \\\\& \\le \\frac{\\tau }{r} \\bigg ( \\frac{1}{2} + \\frac{r_0 }{2 r} \\bigg ) \\sqrt{\\frac{2 p}{n}} \\le \\frac{\\tau }{r} \\sqrt{\\frac{2p}{n}} ,$ where the last step uses the condition that $r_0 \\le r$ .", "Together, (REF ) and (REF ) imply that with probability at least $1-e^{-t}$ , $\\Lambda _n \\le \\frac{5}{4} \\Lambda _n + (2 \\kappa _4)^{1/2} \\sqrt{\\frac{t}{n}} + \\Big ( \\frac{\\tau }{r} \\Big )^2 \\frac{t}{3n } \\le \\frac{1}{4}$ as long as $n\\gtrsim (\\tau /r)^2 (p+t)$ .", "Combined with (REF ), this further implies that with high probability, $G_n(\\beta , \\theta ) = G_n(\\beta , \\theta ) - \\big \\lbrace G_n(\\beta , \\theta ) - G_n(\\beta , \\theta ) \\big \\rbrace \\ge \\frac{1}{2} - \\frac{1}{4} = \\frac{1}{4}$ holds uniformly over $\\beta \\in \\mathbb {B}_\\Sigma (\\beta ^*, r_0)$ and $\\theta \\in \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ .", "Substituting this into (REF ) establishes the claim.", "$\\Box $" ], [ "Proof of Lemma ", "The proof is based on similar arguments to those employed in the proof of Lemma REF .", "With slight abuse of notation, define the vector random process $(\\theta ) = \\frac{1}{n} \\sum _{i=1}^n\\big \\lbrace \\psi _\\tau (Z_i - \\alpha X_i^) - \\psi _\\tau (Z_i - \\alpha X_i^^* ) \\big \\rbrace W_i + \\alpha \\Sigma ^{1/2} (\\theta -\\theta ^* ) , \\ \\ \\theta \\in ^p ,$ where $Z_i = \\varepsilon _i \\mathbb {1} (\\varepsilon _i \\le 0 ) + \\alpha X_i^^*$ .", "By the mean value theorem for vector-valued functions, $(\\theta ) & = \\big \\lbrace \\psi _\\tau (Z_i - \\alpha X_i^) W_i \\big \\rbrace - \\big \\lbrace \\psi _\\tau (Z_i - \\alpha X_i^^*) W_i \\big \\rbrace + \\alpha \\Sigma ^{1/2}(\\theta -\\theta ^*) \\\\& = \\alpha \\Sigma ^{1/2} (\\theta -\\theta ^*) - \\alpha \\int _0^1 \\psi _\\tau ^{\\prime }(Z_i - \\alpha X_i^_t ) {\\rm d} t \\cdot W_i X_i^\\theta - \\theta ^*) \\\\& = \\alpha \\Bigg \\lbrace {\\rm I}_p - \\int _0^1 \\psi _\\tau ^{\\prime }(Z_i - \\alpha X_i^_t) {\\rm d} t \\cdot W_i W_i^\\rbrace \\Sigma ^{1/2}(\\theta - \\theta ^* ) \\\\& = \\int _0^1 \\big \\lbrace \\mathbb {1}\\big ( | Z_i - \\alpha X_i^_t | > \\tau \\big ) W_i W_i^\\rbrace \\, {\\rm d} t \\cdot \\alpha \\Sigma ^{1/2}(\\theta - \\theta ^* )$ where $\\theta _t = ( 1-t) \\theta ^* + t\\theta $ .", "By Markov's inequality, $& _{X_i}\\big (| Z_i - \\alpha X_i^_t | > \\tau \\big ) \\le \\tau ^{-2} _{X_i} | Z_i - \\alpha X_i^_t |^2 \\\\& = \\tau ^{-2} _{X_i} | \\omega _i - \\alpha t X_i^\\theta -\\theta ^*) |^2 = \\tau ^{-2} _{X_i}(\\omega _i^2) + ( t /\\tau )^2 |\\alpha X_i^\\theta -\\theta ^* ) |^2 .$ Putting the above two observations together and 
applying Condition REF , we obtain that for any $\\theta \\in \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha )$ , $& \\Vert (\\theta ) \\Vert _2 = \\sup _{u \\in \\mathbb {S}^{p-1} } \\lbrace u^(\\theta ) \\rbrace \\nonumber \\\\& \\le \\frac{\\sigma ^2}{\\tau ^2} \\alpha \\Vert \\theta - \\theta ^* \\Vert _\\Sigma + \\frac{\\alpha ^3}{3\\tau ^2} \\sup _{u \\in \\mathbb {S}^{p-1} } \\big \\lbrace | X_i^\\theta - \\theta ^* )|^3 |W_i^ṵ | \\big \\rbrace \\le ( \\sigma ^2 + \\kappa _4 r^2 /3 ) \\frac{r}{\\tau ^2} .", "$ Turning to $(\\theta ) - (\\theta )$ , consider the change of variable $\\delta = \\alpha \\Sigma ^{1/2}(\\theta -\\theta ^*)$ , and define the centered process $_0(\\delta ) = (\\theta ) - (\\theta ) = (1/n) \\sum _{i=1}^n(1- ) \\lbrace \\psi _\\tau ( \\omega _i - W_i^) - \\psi _\\tau (\\omega _i ) \\rbrace W_i$ .", "Since $_0(\\delta ) $ is absolutely continuous, following the proof of Lemma REF we can show that its gradient $\\nabla _0(\\delta ) = (-1/n) \\sum _{i=1}^n(1-) \\psi _\\tau ^{\\prime }( \\omega _i - W_i^) W_i W_i^ has bounded exponential moments.", "That is, for any $ p$ and $ || n/4$,{\\begin{@align*}{1}{-1}\\sup _{u, v\\in \\mathbb {S}^{p-1} } \\exp \\big \\lbrace \\lambda \\sqrt{n} u^_0(\\delta ) v / \\upsilon _1^2 \\big \\rbrace \\le e^{ C_0^2 \\lambda ^2 / 2} ,\\end{@align*}}where $ C0>0$ is an absolute constant.", "Applying Theorem~A.3 of \\cite {S2013} to the process $ { 0() / 12 , B(r)}$, we obtain that for any $ t 1/2$,{\\begin{@align*}{1}{-1}\\Bigg \\lbrace \\sup _{\\theta \\in \\mathbb {B}_\\Sigma (\\theta ^*, r/\\alpha ) } \\Vert (\\theta ) - (\\theta ) \\Vert _2 \\ge C_1 \\upsilon _1^2 \\sqrt{\\frac{p+t}{n}} \\cdot r \\Bigg \\rbrace \\le e^{-t}.\\end{@align*}}Combining this with (\\ref {mean.Rn.ubd}) proves the claim.", "\\Box $" ], [ "Proof of Lemma ", "For each sample $Z_i = (X_i, Y_i)$ and $\\gamma \\in ^p$ , define the loss difference $r(\\gamma ; Z_i) = \\rho _\\alpha (Y_i - X_i^\\beta ^* + \\gamma )) - \\rho _\\alpha (Y_i - X_i^^*) = \\rho _\\alpha (\\varepsilon _i - X_i^) - \\rho _\\alpha (\\varepsilon _i)$ , so that $\\hat{D}(\\gamma ) = (1/n) \\sum _{i=1}^nr(\\gamma ; Z_i)$ .", "By the Lipschitz continuity of $\\rho _\\alpha (\\cdot )$ , it is easy to see that $r(\\gamma ; Z_i)$ is $\\bar{\\alpha }$ -Lipschitz continuous in $X_i^$ , where $\\bar{\\alpha }= \\max (\\alpha , 1-\\alpha )$ .", "Given $r>0$ , define the random variable $\\Delta (r) = \\sqrt{n} \\sup _{\\gamma \\in \\mathbb {B}_\\Sigma (r)} \\lbrace D(\\gamma ) -\\hat{D}(\\gamma ) \\rbrace /( 4 \\upsilon _2 \\bar{\\alpha }r)$ , where $\\upsilon _2 = \\sqrt{2}\\upsilon _1$ .", "For any $s > 0$ , using Chernoff's inequality gives $\\big \\lbrace \\Delta (r) \\ge s \\big \\rbrace \\le \\exp \\Bigg [ - \\sup _{\\lambda \\ge 0} \\big \\lbrace \\lambda s - \\log e^{\\lambda \\Delta (r) } \\big \\rbrace \\Bigg ] .", "$ The key is to bound the exponential moment $e^{\\lambda \\Delta (r) }$ .", "Applying first Rademacher symmetrization, and then the Ledoux-Talagrand contraction inequality (see, e.g.", "(4.20) in [75]), we obtain $e^{\\lambda \\Delta (r) } & \\le \\exp \\Bigg \\lbrace 2\\lambda \\sup _{\\gamma \\in \\mathbb {B}_\\Sigma (r) } \\frac{1}{4\\upsilon _2 \\bar{\\alpha }r \\sqrt{n} } \\sum _{i=1}^ne_i \\cdot r(\\gamma ; Z_i) \\Bigg \\rbrace \\\\& \\le \\exp \\Bigg \\lbrace \\frac{\\lambda }{2 \\upsilon _2 r} \\sup _{\\gamma \\in \\mathbb {B}_\\Sigma (r) } \\frac{1}{ \\sqrt{n} } \\sum _{i=1}^ne_i X_i^\\Bigg \\rbrace \\le \\exp \\Bigg ( \\frac{\\lambda }{2\\upsilon _2 }\\bigg \\Vert 
\\frac{1}{ \\sqrt{n} } \\sum _{i=1}^ne_i W_i \\bigg \\Vert _2 \\Bigg ) ,$ where $e_1, \\ldots , e_n$ are independent Rademacher random variables.", "Moreover, there exists a $(1/2)$ -net $$ of $\\mathbb {S}^{p-1}$ with $||\\le 5^p$ such that $\\Vert \\sum _{i=1}^ne_i W_i \\Vert _2 \\le 2 \\max _{u \\in } \\sum _{i=1}^ne_i W_i^ṵ$ .", "Recall from the proof of Lemma REF that $( e_i W_i^ṵ )^k ={\\left\\lbrace \\begin{array}{ll}0 & \\mbox{ if $k$ is odd} \\\\( W_i^ṵ)^k & \\mbox{ if $k$ is even}\\end{array}\\right.", "}\\le {\\left\\lbrace \\begin{array}{ll}0 & \\mbox{ if $k$ is odd} \\\\\\upsilon _2^k \\cdot k \\Gamma (k/2) & \\mbox{ if $k$ is even}\\end{array}\\right.}", ".$ Hence, for any $\\lambda \\ge 0$ , $& e^{\\lambda e_i W_i^ṵ/ \\upsilon _2 } = 1 + \\frac{\\lambda ^2}{2} (W_i^ṵ/\\upsilon _2)^2 + \\sum _{k=3}^\\infty \\frac{\\lambda ^k}{k!}", "( e_i W_i^ṵ /\\upsilon _2)^k \\\\& \\le 1 + \\frac{\\lambda ^2}{2} + \\sum _{l=2}^\\infty \\frac{\\lambda ^{2l}}{(2l)!}", "2 l \\cdot (l-1)!", "\\le 1 + \\frac{\\lambda ^2}{2}+ \\sum _{l=2}^\\infty \\frac{(\\lambda ^2/\\sqrt{2})^l}{l!}", "\\le e^{ \\lambda ^2/\\sqrt{2} } .$ This further implies $e^{\\lambda \\Delta (r) }& \\le \\exp \\Bigg ( \\frac{\\lambda }{2 \\upsilon _2 }\\bigg \\Vert \\frac{1}{\\sqrt{n}} \\sum _{i=1}^ne_i W_i \\bigg \\Vert _2 \\Bigg ) \\le \\exp \\Bigg ( \\max _{u\\in } \\frac{\\lambda }{ \\sqrt{n} } \\sum _{i=1}^ne_i W_i^ṵ /\\upsilon _2 \\Bigg ) \\\\& \\le \\sum _{u \\in } \\prod _{i=1}^n e^{\\lambda e_i W_i^ṵ / ( \\upsilon _2 \\sqrt{n}) }\\le \\sum _{u \\in } \\prod _{i=1}^n e^{\\lambda ^2/(\\sqrt{2} n) } \\le 5^p \\cdot e^{\\lambda ^2/\\sqrt{2} } .$ Substituting this into (REF ), we obtain that $\\big \\lbrace \\Delta (r) \\ge s \\big \\rbrace \\le \\exp \\Bigg \\lbrace - \\sup _{\\lambda \\ge 0} \\bigg ( \\lambda s -\\frac{\\lambda ^2}{\\sqrt{2}} \\bigg ) + p \\log (5) \\Bigg \\rbrace = \\exp \\Bigg \\lbrace p \\log (5) - \\frac{s^2}{2 \\sqrt{2}} \\Bigg \\rbrace .$ Finally, taking $s^2 = 2\\sqrt{2} \\lbrace \\log (5) p + t\\rbrace $ proves (REF ).", "$\\Box $" ] ]
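The bounds in the preceding proofs repeatedly use three elementary properties of the truncation function: the 1-Lipschitz property $|\\psi _\\tau (u) - \\psi _\\tau (v)| \\le |u-v|$, the contraction $|\\psi _\\tau (u)| \\le |u|$, and the truncation-bias bound $|\\psi _\\tau (t) - t| = (|t|-\\tau )\\mathbb {1}(|t|>\\tau ) \\le \\tau ^{-1}t^2$. The following minimal numerical sketch is not part of the original proofs; it spells out the definition as we read it from the partly garbled text, $\\psi _\\tau (t) = t\\,\\mathbb {1}(|t|\\le \\tau ) + \\tau \\,{\\rm sgn}(t)\\,\\mathbb {1}(|t|>\\tau )$ (interpreting the garbled "(t)" as ${\\rm sgn}(t)$), and checks the three inequalities on random inputs.

```python
import numpy as np

def psi_tau(t, tau):
    """Huber-type truncation: psi_tau(t) = t for |t| <= tau, tau*sign(t) otherwise."""
    return np.sign(t) * np.minimum(np.abs(t), tau)

rng = np.random.default_rng(0)
tau = 1.5
u = rng.normal(scale=3.0, size=100_000)
v = rng.normal(scale=3.0, size=100_000)

# 1-Lipschitz continuity: |psi(u) - psi(v)| <= |u - v|
assert np.all(np.abs(psi_tau(u, tau) - psi_tau(v, tau)) <= np.abs(u - v) + 1e-12)
# contraction: |psi(u)| <= |u|
assert np.all(np.abs(psi_tau(u, tau)) <= np.abs(u) + 1e-12)
# truncation bias: |psi(t) - t| = (|t| - tau) 1(|t| > tau) <= t^2 / tau
assert np.all(np.abs(psi_tau(u, tau) - u) <= u**2 / tau + 1e-12)
print("all three inequalities hold on this sample")
```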
2212.05565
[ [ "SIPGI: an interactive pipeline for spectroscopic data reduction" ], [ "Abstract SIPGI is a spectroscopic pipeline for the data reduction of optical/near-infrared data acquired by slit-based spectrographs.", "SIPGI is a complete spectroscopic data reduction environment retaining the high level of flexibility and accuracy typical of the standard \"by-hand\" reduction methods but with a significantly higher level of efficiency.", "This is obtained exploiting three main concepts: 1) a built-in data organiser to classify the data, together with a graphical interface; 2) the instrument model (analytic description of the main calibration relations); 3) the design and flexibility of the reduction recipes: the number of tasks required to perform a complete reduction is minimised, preserving the possibility to verify the accuracy of the main stages of data-reduction process.", "The current version of SIPGI manages data from the MODS and LUCI spectrographs mounted at the Large Binocular Telescope (LBT) with the idea to extend SIPGI to support other through-slit spectrographs." ], [ "Introduction", "SIPGI [1] is a complete spectroscopic reduction environment for optical and near-IR through-slit spectra developed for the astronomical community and now publicly released.", "Data reduction is usually a time-consuming process; this pipeline was designed to perform a quick data reduction, while mantaining a high level of flexibility and accuracy.", "SIPGI works with both optical and near-IR data.", "It features a graphical interface and many tools that strongly improve the data reduction experience.", "The version released to the community is customised for the data acquired with the two couples of through-slit spectrographs @LBT, MODS1 and MODS2 in the optical band and LUCI1 and LUCI2 in the near-IR.", "However, the code minimises the dependence from the specific spectrograph, confining the information on the instrument to specific parts of the code; this makes the pipeline adaptable in the future to any through-slit (and possibly fibers) spectrograph." ], [ "The main SIPGI concepts", "The high efficiency and flexibility can be simultaneously achieved by exploiting three concepts: the Graphical Interface, the Instrument Model and the Recipes organisation.", "SIPGI inherits them from VIPGI [2], the pipeline we developed and used for the data reduction of the main extragalactic spectroscopic surveys carried out with the optical spectrograph VIMOS@VLT (e.g.", "VVDS, zCOSMOS, VUDS, VIPERS, VANDELS)." 
], [ "The Graphical Interface and the Data Organiser", "[width=1.0]SIPGIflowsmall.pdflabelflowchartSIPGI flowchart.", "SIPGI ingests all the raw frames needed for the data reduction (calibration and scientific files) and, thanks to the Data Organiser, it recognises and classifies them according to the keywords in their header (Fig.", ").", "All the data are grouped in data-sets (Fig.", "A), collections of data with the same characteristics.", "The user can easily browse through different data-sets, divided in calibration (acquired through/without slit) and scientific data-sets (target and standard star) and, within each data-set, through the different reduction units (Fig.", "B), defined by the instrumental configuration they have been observed with, i.e.", "the mask, camera, grating and dichroic used during the observations.", "[width=0.95]GraphIntnumberssmall.pdflabelGraphIntSIPGI Graphical Interface.", "Data are organised in data-sets (A), containing one or more reduction units (B).", "The recipes can be run from Reduction Tab (C) and their parameters files can be accessed and modified by the user (D, example for the Preliminary Reduction).", "The Analysis Tab (E) offers tools and utilities for checking mid and final products.", "The frames are renamed to be easily recognisable (e.g.", "\"sc\" for scientific, \"lp\" for lamp, \"ff\" for flat etc.).", "The graphical interface allows to run the recipes by pressing the buttons displayed in the Reduction Tab (Fig.", "C).", "The Parameter Files menu gives a quick access to the parameters file for each recipe (Fig.", "D, see section REF ).", "The Analysis Tab (Fig.", "E) features several graphical tools to check the quality of the raw frames, mid-products and final products of the data reduction and specifically designed utilities to verify the accuracy of the calibrator files produced during the data reduction." ], [ "The Instrument model", "The Instrument Model is an analytical description of the main calibration relations necessary to obtain rectified spectra from the observations performed with a spectrograph.", "It depends only on the instrument configuration (i.e.", "grating/filter/dichroic) and is therefore mask-independent.", "It has been calibrated on real data and it is provided with SIPGI.", "It provides the first guesses to the spectra location (the position of the slit edges in the frames) and to the wavelength calibration.", "Since the distortions in the frames can change on a night basis, the first guesses need to be updated on the specific set of data.", "This is done through the use of the Adjust First Guess tool on a set of calibration frames specifically acquired for the program: the user can visualise the model, represented by a grid of ds9 regions, on a real lamp frame (Fig. 
)", "and find the best match between the real data and the first guesses of the spectra location (vertical lines) and wavelength calibration (horizontal lines).", "The concept of the instrument model is what really lightens the task of calibrating the data.", "If the instrument is stable, the first guesses are already an excellent approximation of the real distortions in the frame and the user needs to provide only small, if any, adjustments.", "Even in case of instability, the graphical tool makes it much less painful to find an agreement between the analytical description and the real distortions.", "Furthermore, the mask-independence of the model is a key-point in terms of time-saving: when multiple masks are being used in an observing program, the same calibrators produced for one of them can be, in principle, reused for the others.", "[width=0.5]Adjust1copy.pdflabelAdjustAdjust First Guess tool.", "The graphical representation of the Instrument Model, i.e.", "of the first guesses to the spectra location (vertical lines) and wavelength calibration (horizontal lines), is superimposed to a lamp frame of the data set to be reduced." ], [ "The recipes organisation", "The efficiency of SIPGI is achieved through the design of the recipes.", "Each recipe performs many tasks, reducing the time needed for completing the data reduction.", "Four calibration recipes produce the master frames used to infer spectra location, wavelength and flux calibration, while three reduction recipes apply the solutions provided by the master frames to the scientific frames (Fig.", ").", "Each recipe has many parameters that can be set through the Parameter Files Menu (Fig.", "D).", "This allows the user to highly customised their data reduction to meet their scientific goals.", "We decided to separate the reduction recipes in three different steps (see Fig.", "): 1. a first one removing the instrument signature, performing a bias (or dark) level subtraction, the flat-fielding and the cosmic rays cleaning (Preliminary Reduction); 2. the main step of the reduction, producing the 2D and 1D rectified, wavelength and flux calibrated spectra for individual frames (Reduce Observations); 3. the recipe combining individual frames in a final product (Combine Observations).", "This architecture allows the user to obtain mid-products, that can be checked through many tools expressly designed within SIPGI and, possibly, treated with customised routines if desired." 
], [ "Software architecture and performances", "SIPGI provides a GUI written in Python, used to organise data, to run the reduction recipes (written in C) and to check reduction results with a set of interactive graphical tools.", "The interaction between Python and C is obtained using the SWIG wrapper.", "The reliability of SIPGI is attested by its extensive use for the data reduction of all the data acquired with LUCI and MODS at LBT, during the italian time, in the last ten years, whose products were used in $\\sim $ 80 referred publications.", "For all the standard configurations with both the instruments, SIPGI provides wavelength calibration with an accuracy better than 1/5 of pixel in 95 per cent of the cases.", "The typical rms in the flux calibration is 0.4% (0.5%) in the regions not affected by telluric absorption for MODS (LUCI) data and 2% (5%) in regions affected by telluric absorptions.", "SIPGI execution performances obviously depend on the computer hardware exploited and on the kind of data being reduced.", "However, for a standard 3h-observations set of data in binocular mode (i.e.", "MODS1+MODS2 or LUCI1+LUCI2) we estimate an average execution time of three hours.", "Further information on SIPGI, the download page, a manual and cookbook, and a series of video tutorial can be found at http://pandora.lambrate.inaf.it/sipgi/.", "The users can contact us at the help-desk lbt-italia-spec@inaf.it." ] ]
2212.05580
[ [ "Estimation of the photon production rate using imaginary momentum\n correlators" ], [ "Abstract The thermal photon emission rate is determined by the spatially transverse, in-medium spectral function of the electromagnetic current.", "Accessing the spectral function using Euclidean data is, however, a challenging problem due to the ill-posed nature of inverting the Laplace transform.", "In this contribution, we present the first results on implementing the proposal of directly computing the analytic continuation of the retarded correlator at fixed, vanishing virtuality of the photon via the calculation of the appropriate Euclidean correlator at imaginary spatial momentum.", "We employ two dynamical O(a)-improved Wilson fermions at a temperature of 250 MeV." ], [ "Introduction", "Ultrarelativistic heavy ion collisions have been shown to produce a novel state of matter, the quark-gluon plasma (QGP) [1].", "Electromagnetic probes – photons and dileptons – may escape the plasma carrying unaltered information about it, since they do not interact with the QGP via the strong interaction.", "Characterizing real photons according to their sources, we distinguish direct and decay photons, the latter coming from the decay of final state hadrons, while direct photons are produced during the heavy ion collision, before the freeze-out [2], [3].", "At low transverse momentum, the direct photon signal receives a dominant contribution from thermal photons coming from the QGP.", "Calculating the thermal photon rate of the QGP is a challenging task.", "As the coupling of QCD decreases for high energies according to asymptotic freedom, the calculation of the thermal photon rate is possible by using perturbative methods [4], [5].", "These weak-coupling results, however, become reliable only at sufficiently high temperatures.", "At strong couplings, the AdS/CFT correspondence allows for the calculation of the thermal photon rate as well as other transport coefficients in e.g.", "$\\mathcal {N}=4$ supersymmetric Yang–Mills theory, which shares certain common features with QCD and is often used for comparison [6].", "Using lattice QCD, one can effectively simulate QCD at strong coupling and can also access temperatures which are close to the chiral crossover temperature.", "However, lattice simulations are performed in Euclidean spacetime and the analytic continuation of the correlation functions to Minkowskian spacetime via an inverse Laplace transformation is a notoriously difficult, ill-posed problem  [7], [8], [9].", "Recent lattice QCD studies addressing the determination of the thermal photon rate are Refs.", "[10], [11], [12].", "In order to retrieve relevant information for estimating the thermal photon rate, in this contribution we explore a novel method for the extraction of the thermal photon rate from Euclidean lattice QCD data  [13]." 
], [ "Probing the photon rate using imaginary spatial momentum correlators", "We begin with the definition of the spectral function of the electromagnetic current, $\\rho _{\\mu \\nu }(\\omega ,{\\bf k}) = \\int \\mathrm {d}^4 x\\, e^{i(\\omega t - {\\bf k}{\\bf x})}\\,\\langle [J_\\mu ^{\\rm {em}}(x), J_\\nu ^{\\rm {em}}(0)^\\dagger ] \\rangle ,$ where the electromagnetic current is $J_\\mu ^{\\rm {em}}(x) = \\sum _{\\rm {f}} Q_{\\rm {f}} \\bar{\\psi }_f(x) \\gamma _\\mu \\psi _f(x)$ , $Q_f$ being the charge of quark with flavor $f$ , and the time evolution is given in Minkowskian time by $J_\\mu ^{\\rm {em}}(x)= e^{IH t} J_\\mu ^{\\rm {em}}(0) e^{-IH t}$ .", "The thermal photon emission rate per unit volume of the QGP, $\\mathrm {d}\\Gamma _\\gamma (\\omega )/{\\mathrm {d}\\omega }$ , can be determined at leading order in the electromagnetic coupling constant as [14]: $\\frac{\\mathrm {d}\\Gamma _\\gamma (\\omega )}{\\mathrm {d}\\omega } = \\frac{\\alpha _\\mathrm {em}}{\\pi } \\,\\frac{\\omega \\sigma (\\omega )}{e^{\\omega /T}-1} + \\mathcal {O}(\\alpha _\\mathrm {em}^2),$ where $\\sigma (\\omega ) \\equiv \\rho _T(\\omega , k=\\omega )$ and $\\rho _T(\\omega , k) = \\frac{1}{2}(\\delta ^{ij} - k^i k^j/k^2) \\rho _{ij}(\\omega ,{\\bf k})$ is the transverse channel spectral function.", "The dispersion relation which relates the spatially transverse Euclidean correlator at imaginary momentum $k=I\\omega _n$ to the spectral function at vanishing virtuality is $H_E(\\omega _n) = -\\frac{\\omega _n^2}{\\pi } \\int _0^\\infty \\frac{\\mathrm {d}\\omega }{\\omega }\\frac{\\sigma (\\omega )}{\\omega ^2+\\omega _n^2}$ and has been derived in Ref. [13].", "Here, $\\omega _n=2 n \\pi T$ is the $n$ th Matsubara-frequency, and $H_E$ is the Euclidean transverse channel current-current correlator evaluated at imaginary spatial momentum, HE(n) GET(n,k=i n) = - 0dx0 d3 x  eIn x0  en x3 J1(x) J1(0) .", "The short-distance convergence properties of this correlator can be analyzed by expanding $e^{I\\omega _n x_0}$ and recalling that the short-distance behavior of the current-current correlator starts with $1/x^6$ .", "Within the continuum theory, the correlator vanishes in the vacuum, but this property is lost at finite lattice spacing due to the lack of Lorentz symmetry.", "The property can be restored by subtracting a correlator with the same short-distance behavior and – in order to not to alter the continuum limit –, that vanishes in the continuum.", "One can achieve this either by subtracting the vacuum lattice correlator obtained at the same bare parameters [13] or by subtracting a thermal lattice correlator having the same momentum inserted into a spatial direction [15].", "Since the latter option does not require additional simulations at $T=0$ , we proceed using the following estimator HE(sub)(n) = - 0dx0 d3 x  ( eIn x0 - eIn x2 )  en x3 J1(x) J1(0) = - -  dx3  en x3 [ Gs(n,x3) - Gns(n,x3) ], where in the second line we introduced the static ($G_{\\rm {s}}$ ) and non-static ($G_{\\rm {ns}}$ ) transverse screening correlators, involving $e^{I\\omega _n x_2}$ and $e^{I\\omega _n x_0}$ , respectively.", "By performing the subtraction as in Eq.", "(), the contribution of the unit operator cancels and the resulting expression is integrable.", "Moreover, the estimator given in Eq.", "() vanishes in the vacuum.", "By inserting the imaginary as well as the real spatial momentum to different combinations of directions and then averaging, we increased the statistics for the evaluation of this 
observable.", "The static and non-static screening correlators are shown in Fig.", "REF .", "In the following we omit the upper index '(sub)' referring to 'subtracted' from our lattice estimator.", "Figure: The conserved-conserved (CC) renormalized non-static and static screening correlators in the first Matsubara sector on our finest ensemble called X7 ($a \\sim 0.033$ fm)." ], [ "Lattice setup", "In order to calculate the screening correlators which enter the expression () for $H_E$ , we used three ensembles generated at the same temperature in the high-temperature phase ($T \\sim 250$ MeV).", "We employ two flavors of O($a$ )-improved dynamical Wilson fermions and the plaquette gauge action.", "The zero-temperature pion mass in our study is around $m_\\pi \\approx 270$ MeV, and the lattice spacings are in the range of 0.033–0.05 fm.", "We use the isovector vector current instead of the electromagnetic current, whereby disconnected contributions are absent.", "The Euclidean correlators at imaginary spatial momentum have been measured using the local as well as the conserved discretizations of the currents both at source and sink, resulting in a total of four different discretizations (local-local, conserved-conserved, local-conserved and conserved-local).", "The two mixed discretizations are not independent; they can be transformed into each other using Cartesian coordinate reflections.", "After averaging these appropriately, we therefore had three different discretizations of the correlators.", "At each ensemble we had around 1500–2000 configurations and 64 point sources per configuration.", "We renormalized the local-local and the mixed correlators by multiplying by $Z_V^2$ or $Z_V$ , respectively.", "We took the corresponding value for $Z_V$ from Ref. 
[16].", "Figure: Left panel: Measurement history of the non-static screening correlator at $xT=0.833$.", "The outliers show up as spikes in the data.", "Right panel: Truncation stability for the non-static screening correlator.", "The data points corresponding to the trimmed data have been shifted slightly to the right to improve visibility.", "We occasionally encountered results for the correlator at a certain Euclidean distance lying several standard deviations off from the mean value at that distance, which we identified as outliers, see the left panel of Fig.", "REF .", "These occurred more frequently at larger Euclidean separations.", "These outliers increased the statistical error and also modified the mean to some extent, see Fig.", "REF , right panel.", "We eliminated these outliers by using robust statistics [17].", "First, we prepared a distribution of results at each Euclidean distance, then removed the data points belonging to the lower and upper $\\gamma $ % of that distribution.", "We varied $\\gamma $ between 0.5 and 4 when making these cuts and found that the error estimation as well as the calculation of the mean is more stable this way.", "In the final analysis we used $\\gamma =1$ , but when fewer than 10 data points were detected outside five times the interquartile range from the mean, we only applied trimming with $\\gamma =0.5$ .", "We show an example for the conserved-conserved correlator at our finest ensemble, X7, in the right panel of Fig.", "REF .", "At short distances of the correlator, this approach did not influence the results, because outliers occurred there only very rarely.", "At intermediate distances, i.e.", "around $xT\\sim $ 0.7–1.3, the effect of this method was again not significant.", "At large distances, however, the errors were reduced by a factor of around 2–6 when omitting the tails of the distributions.", "Since we believe that these outliers are not likely to have a physical origin, their exclusion should not influence the validity of the extracted physical results.", "This is indeed what we found when analyzing the data without truncating: we obtained consistent final results but with larger errors." 
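As a concrete illustration of the trimming procedure described in this section, the sketch below is purely illustrative and not the analysis code used for this work: it removes the lower and upper $\\gamma $ per cent of the distribution of correlator measurements at each Euclidean distance before averaging. It assumes the measurements are stored as a two-dimensional array (measurements $\\times $ distances) and quotes only a naive standard error; in practice the error would be propagated with a jackknife or bootstrap over configurations.

```python
import numpy as np

def trimmed_mean_and_err(samples, gamma=1.0):
    """
    samples : array of shape (n_meas, n_dist) with the correlator measurements
              per configuration/source (rows) and Euclidean distance (columns).
    gamma   : percentage removed from each tail of the distribution at every
              distance (gamma=1.0 keeps the central 98% of the measurements).
    """
    lo = np.percentile(samples, gamma, axis=0)
    hi = np.percentile(samples, 100.0 - gamma, axis=0)
    kept = np.where((samples >= lo) & (samples <= hi), samples, np.nan)
    n_kept = np.sum(~np.isnan(kept), axis=0)
    mean = np.nanmean(kept, axis=0)
    err = np.nanstd(kept, axis=0, ddof=1) / np.sqrt(n_kept)   # naive error only
    return mean, err
```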
], [ "Modeling the tail of the screening correlators", "Since the integral of Eq.", "() receives contributions from large distances as well, we need to get good control over the screening correlators at large distances, if we aim at a precise determination of $H_E$ .", "The screening correlators have a representation in terms of energies and amplitudes of screening states in the following form [13]: $G_{\\rm {ns}}(\\omega _r,x_3) \\overset{x_3 \\ne 0}{=}\\sum _{n=0}^{\\infty } |A_{\\rm {ns},n}^{(r)}|^2 e^{-E_{\\rm {ns},n}^{(r)} |x_3|}.$ A similar expression holds for the static correlator.", "The low-lying screening spectrum can be studied using weak-coupling methods as well [18].", "The lowest energy of a screening state in a given Matsubara sector with frequency $\\omega _r$ is often called the screening mass and is denoted by $E^{(r)}_0$ .", "In order to get a better handle on the asymptotic behavior of the screening correlators and avoid the enhancement of the error on $H_E$ coming from fluctuations present in the actual correlators at large distances, we performed single-state fits on the tails of the correlators using the above representation translated to a form corresponding to a periodic lattice, namely $G_{\\rm {ansatz}}(\\omega _r,x_3) = |A_0^{(r)}|^2 \\cosh \\big [E^{(r)}_0 (x_3-L/2)\\big ],$ $L$ being the spatial length of the lattice.", "While the single-state fits describe the actual data well, i.e.", "with good $\\chi ^2$ - and p-values, the identification of the plateau region was not clear in several cases, although we performed a thorough scan using all possible fit ranges having different starting points and different lengths with 6$a$ –11$a$ .", "Besides fitting, we also determined the \"effective mass\" using two consecutive correlator datapoints, by solving $\\frac{G(\\omega _r,x_3+a)}{G(\\omega _r,x_3)}= \\frac{\\cosh \\big [m_{\\rm {eff}} (x_3+a-L/2)\\big ]}{\\cosh \\big [m_{\\rm {eff}} (x_3-L/2)\\big ]}$ for $m_{\\rm {eff}}$ .", "The effective masses are in quite good agreement with the fitted masses, but also do not show a clear plateau as $x_3$ increases, see Fig.", "REF , left panel.", "Therefore, we decided to choose three representatives from a histogram built by assigning Akaike-weights [19], [20] to all the fitted masses that we obtained.", "We propagate the median as well as the values near the 16th and 84th percentiles to the later steps of the analysis.", "When proceeding this way for the non-static as well as for the static screening correlators, we obtain $3\\times 3=9$ possibilities for modeling the tail of the integrand of Eq.", "() on a given ensemble.", "We calculated $H_E$ using all these nine combinations for the tail, sorted the results and then chose the median, the values near the 16th and near the 84th percentile after assigning uniform weights for these slightly different values of $H_E$ .", "Thus for each ensemble we had three representative values of $H_E$ that went into the next step of the analysis, which was the continuum extrapolation.", "We note that by modeling the tail of the non-static and static screening correlators by doing single-state fits, we could reduce the errors by a factor of around 2.5 on our coarsest ensemble.", "The transition to the modelled tail has been introduced smoothly by using a step function and we investigated the effect of choosing different switching points, $x_w$ , in the range $x_w T$ =0.8–1.2.", "We found that the results were essentially stable against these choices." 
], [ "Continuum extrapolation", "The three discretized correlators allowed us to perform a correlated simultaneous continuum extrapolation of $H_E$ .", "We used a linear ansatz in $a^2$ when extrapolating to the continuum.", "As discussed in the previous section, in order to have a more precise value we modelled the tail of the integrand needed to evaluate $H_E$ .", "Since this way at each ensemble and for each discretization we had three representative values of $H_E$ , we used these in all possible combinations when performing the continuum limit.", "These gave a total of $(3^3)^3=19683$ different continuum extrapolations of which we built an AIC-weighted histogram to estimate the systematic error.", "A representative continuum extrapolation as well as the histogram are shown in Fig.", "REF , left and right panel, respectively.", "We also performed separate continuum extrapolations of the short- as well as of the long-distance contributions.", "In the case of the short-distance contribution, we simply integrated using the trapezoid formula and the systematic error of the continuum extrapolation has been estimated by omitting one of the discretizations at the coarsest ensemble.", "By shifting the Akaike-weighted histogram of the continuum extrapolated values of the long-distance contribution with the central value of the continuum extrapolated short-distance contribution, we observe that it is consistent with the continuum extrapolated values of the total $H_E$ (Fig.", "REF , right panel)." ], [ "Comparisons", "For the purpose of comparison, it is worth calculating the imaginary part of the retarded correlator at the lightcone in the free theory as well as in strongly coupled $\\mathcal {N}=4$ super Yang–Mills theory using the AdS/CFT correspondence.", "This has been done in Ref. [13].", "In the free theory, $|H_E|/T^2 = 0.5$ in the first Matsubara sector, while in $\\mathcal {N}=4$ super Yang–Mills theory, $|H_E|/T^2 \\approx 0.75$ .", "When normalizing with the temperature, the lattice result we obtained, $0.670(6)_{\\rm {stat}}(2)_{\\rm {sys}}$ , is between these two values.", "Using another normalization might also be interesting.", "When dividing by the static susceptibility, $\\chi _s$ , of the relevant theories, we obtain in the first Matsubara sector the following results: $[|H_E|/\\chi _s]^{\\rm {(free)}}= 0.5$ in the free theory (because $\\chi _s^{\\rm {(free)}}/T^2=1$ ), $[|H_E|/\\chi _s]^{\\rm {(SYM)}} \\approx 0.67$ in $\\mathcal {N}=4$ super Yang–Mills theory with $N_c=3$ .", "On the lattice, we determined the static susceptibility in Ref.", "[12] to be $\\chi _s^{\\rm {(lat)}}/T^2=0.882(11)_{\\rm {stat}}(19)_{\\rm {sys}}$ .", "Using this normalization, the lattice result is $[|H_E|/\\chi _s]^{\\rm {(lat)}} \\approx 0.76$ that is about 13% larger than the value in $\\mathcal {N}=4$ SYM theory." 
], [ "Conclusions and outlook", "In this contribution, we calculated Euclidean correlators at imaginary spatial momentum, that are related to the thermal photon emission rate according to Eq.", "(REF ).", "We focused on the correlator evaluated at the first non-vanishing Matsubara frequency.", "In order to improve the predictive power of our result we modelled the tail of the screening correlators occuring in the integrand for our primary quantity $H_E$ .", "We were able to describe the data with single-state fits with good p-values.", "We performed a simultaneaous correlated continuum extrapolation of the three lattice discretizations of the imaginary momentum correlator using three thermal ensembles.", "The result we obtained is in the same ballpark as obained in the free theory or in $\\mathcal {N}=4$ supersymmetric Yang–Mills theory.", "Depending on the normalization it could be between these two, or larger than these results.", "As noted in Ref.", "[13], the knowledge of $H_E(\\omega _n)$ for all $n>n_0$ would enable one to determine the spectral function uniquely by Carlson's theorem.", "Following a similar route for analyzing $H_E$ in the second Matsubara-sector, however, revealed that the signal will soon become noisy and the uncertainty of $H_E(\\omega _{n=2})$ is therefore much larger.", "At present our determination of $H_E(\\omega _{n=2})$ is compatible within errors with the $H_E(\\omega _{n=1})$ result.", "We note, however, that since $\\sigma (\\omega )>0$ , $|H_E(\\omega _r)|$ has to be larger than $|H_E(\\omega _n)|$ if $r>n$  [13], [15].", "Note however that without taking the absolute value, the ordering is $H_E(\\omega _r) < H_E(\\omega _n)$ , when $\\omega _r>\\omega _n$ , since $H_E$ is negative.", "In order to have a reliable calculation of $H_E(\\omega _{n \\ge 2})$ , one has to implement algorithmic improvements and/or devise other operators which could help to constrain the long-distance behavior of the screening correlators.", "The exploration of these directions is left for future work." ], [ "Acknowledgements", "This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program through Grant Agreement No.", "771971-SIMDAMA, as well as by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Cluster of Excellence “Precision Physics, Fundamental Interactions and Structure of Matter” (PRISMA+ EXC 2118/1) funded by the DFG within the German Excellence strategy (Project ID 39083149).", "T.H.", "is supported by UK STFC CG ST/P000630/1.", "The generation of gauge configurations as well as the computation of correlators was performed on the Clover and Himster2 platforms at Helmholtz-Institut Mainz and on Mogon II at Johannes Gutenberg University Mainz.", "We have also benefitted from computing resources at Forschungszentrum Jülich allocated under NIC project HMZ21.", "For generating the configurations and performing measurements, we used the openQCD [21] as well as the QDP++ packages [22], respectively.", "0pt plus 0.3ex" ] ]
2212.05622
[ [ "Clustered Formation of Massive Stars within an Ionized Rotating Disk" ], [ "Abstract We present ALMA observations with a 800 au resolution and radiative-transfer modelling of the inner part ($r\\approx6000$ au) of the ionized accretion flow around a compact star cluster in formation at the center of the luminous ultra-compact (UC) HII region G10.6-0.4.", "We modeled the flow with an ionized Keplerian disk with and without radial motions in its outer part, or with an external Ulrich envelope.", "The MCMC fits to the data give total stellar masses $M_\\star$ from 120 to $200~M_\\odot$, with much smaller ionized-gas masses $M_\\mathrm{ion-gas} = 0.2$ to $0.25~M_\\odot$.", "The stellar mass is distributed within the gravitational radius $R_g\\approx 1000$ to 1500 au, where the ionized gas is bound.", "The viewing inclination angle from the face-on orientation is $i = 49$ to $56~\\deg$.", "Radial motions at radii $r > R_g$ converge to $v_{r,0} \\approx 8.7$ km/s, or about the speed of sound of ionized gas, indicating that this gas is marginally unbound at most.", "From additional constraints on the ionizing-photon rate and far-IR luminosity of the region, we conclude that the stellar cluster consists of a few massive stars with $M_\\mathrm{star} = 32$ to $60~M_\\odot$, or one star in this range of masses accompanied by a population of lower-mass stars.", "Any active accretion of ionized gas onto the massive (proto)stars is residual.", "The inferred cluster density is very large, comparable to that reported at similar scales in the Galactic Center.", "Stellar interactions are likely to occur within the next Myr." ], [ "Introduction", "Stars with masses greater than 30 $M_\\odot $  are rare among the stellar population in galaxies, but they play a main role in shaping the dynamical evolution of the interstellar medium through feedback from their radiation, ionization, and supernovae [27].", "Since these very massive stars are mostly formed in dense stellar clusters, their violent death may lead to the formation of black holes or neutron-star binary systems that give rise to gravitational-wave events [8].", "Massive protostars must accrete more than 30 $M_\\odot $  to become early O-type stars, but they begin core hydrogen burning well before reaching their final masses [40], [14].", "This poses a serious challenge to the current paradigm that massive stars form through accretion of molecular gas.", "The detection of resolved molecular disks around protostars more massive than $20~M_\\odot $ shows that this scenario is valid for the formation of individual stars up to about a few tens of solar masses [18], [15], [47], [39], [33]; however, it likely breaks down for the case of the formation of more massive stars, especially in clustered environments [9].", "In a clustered scenario, and after the protostars have reached several tens of solar masses, the amount of EUV photons is large enough to ionize the otherwise molecular accretion flow(s) and the clump in which they reside [20], [21], [41], [43], [25].", "This raises several outstanding questions: How does star formation proceed in such environment?", "Is it through an ionized accretion disk?", "Does this disk surround a single or a multiple stellar system?", "When does active (proto)stellar accretion stop due to the increasing photoionizing feedback?", "We present the first spatially resolved observations of an accretion flow in ionized gas around a dense star cluster in formation.", "Our observations were performed with the Atacama Large 
Millimeter/submillimeter Array (ALMA) in the 1.3 mm continuuum and hydrogen $30\\alpha $ recombination line emission.", "The target is G10.6-0.4 (hereafter G10.6), a luminous ($L_\\mathrm {FIR} \\approx 3\\times 10^6~L_\\odot $ ) and spatially concentrated [29] massive star formation region located at 4.95 kpc from the Sun [46].", "The molecular gas in the central pc-scale clump of G10.6 presents infall and rotation motions toward an inner, flattened ultracompact Hii region [11], [24], [49], [32], [2].", "A characteristic “bullseye” velocity pattern in the most redshifted molecular absorption against the central UC Hii region reveals that the molecular infall is coupled with rotation [49].", "By equating the observed molecular infall speed to free fall, [49] estimated a central stellar mass of $M_\\star \\sim 150$ .", "[22] reported a velocity gradient in the H$66\\alpha $ recombination line within the central 0.05 pc.", "Those authors proposed that the central ionized gas is an accretion flow feeding a nascent cluster of massive stars." ], [ "Observations", "Observations of G10.6 with ALMA (project ID: 2015.1.00106.S, PI Q. Zhang) were carried out in four execution blocks (EBs) between 2016 and 2017.", "The total on-source integration time was 2.2 hours.", "The two sessions executed on 2016 September 9 and 10 had 36 12-m antennas in the array with physical baselines from 15 m to 3144 m. The remaining sessions were observed in 2017 July 19 had 42 12-m antennas with baselines from 19 m to 3697 m. The precipitable water vapor (PWV) in the atmosphere ranged between 0.46 to 0.50 mm during the observations.", "System temperatures varied from 49 to 75 K. Quasars J1832-2039, J1924-2914 and J1733-1304 were used as gain, bandpass, and flux calibrators, respectively.", "The digital correlator was configured to the FDM mode for four spectral windows, centered at 217.9 GHz, 220.0 GHz, 232.0 GHz and 233.9 GHz, respectively.", "Each spectral window has an effective bandwidth of 1.875 GHz, divided into 1910 channels, providing a uniform spacing of 0.976 MHz per spectral channel.", "The pointing center of G10.6 was $\\alpha $ (J2000) = 18h 10m 28.7s, $\\delta $ (J2000) = $-19^\\circ 55^{\\prime } 49.1^{\\prime \\prime }$ .", "The calibration of the visibilities was performed in Common Astronomy Software Applications package [36] using the pipeline script supplied by the ALMA observatory.", "The calibrated visibilities were Fourier transformed and `cleaned' using tclean and Briggs weighting of visibilities with robust parameter of 0.5.", "Spectral channels free of significant line emission are used to construct continuum data.", "Continuum emission was subtracted from the calibrated visibilities to produce the spectral line visibilities.", "The rms noise in the continuum image is about 35 $\\mu $ Jy beam$^{-1}$ .", "The rms noise in the line free channels of the H30$\\alpha $ image cube is 0.5 mJy beam$^{-1}$ per 1.5 km s$^{-1}$  channel, with a beam of $0^{\\prime \\prime }.15 \\times 0^{\\prime \\prime }.13$ , $\\mathrm {P.A.}", "= -65.8 \\deg $ ." 
], [ "Overview", "The ALMA observations with a physical resolution of 792 au ($0.16 $ at a distance of 4950 pc) resolve the 1.3 mm continuum and H${30}\\alpha $ emission within the inner 6000 au radius of the star-forming clump.", "Figure REF presents an overview.", "The top-left panel shows CH$_3$ OH emission imaged with the Submillimeter Array.", "The molecular envelope of a $\\sim 0.3$ pc radius flattens toward its center, most notably in the warmer (blue) CH$_3$ OH transition [32].", "An evacuated bipolar cavity is also seen.", "The emission from complex organic molecules (COMs) in the inner 0.05 pc was recently studied by [28] using ALMA.", "Those authors found highly structured emission in the form of localized hot cores accompanied by bright, extended emission.", "The overall COM emission presents a flattened X-shaped morphology with a brightness dip toward the H${30}\\alpha $ peak (see Fig.", "REF , CH$_3$ CN panel).", "In contrast, the 1.3-mm continuum peaks at the central dip of the COM emission and matches well the H${30}\\alpha $ emission (Fig.", "REF , bottom panels).", "The continuum emission is almost entirely due to the free-free radiation in the region that we analyze.", "The average continuum intensity within a stripe of 1 length and 1 beamwidth height is 0.045 Jy beam$^{-1}$ (40 K), whereas the free-free continuum expected from the H30$\\alpha $ intensity in the same area, using equation (14.29) in [45], is 0.044 Jy beam$^{-1}$ .", "Contributions from warm dust to the continuum become important at the location of hot cores in the periphery of the central ionized structure [30], [28].", "This flattened ionized emission within 0.05 pc radius has been interpreted as the innermost part of a cluster accretion flow which transitions from being molecular to ionized at its center [24], [22].", "The LSR systemic velocity of the H$30\\alpha $ line is determined to be $V_\\mathrm {sys} \\approx 0.5$ km s$^{-1}$ .", "The H$30\\alpha $ panel of Fig.", "REF shows the directional cut at $\\mathrm {PA} = 307.8^\\circ $ (East of North) used for the modelling.", "Figure REF shows the resulting position-velocity (P-V) diagram.", "High-velocity emission characteristic of Keplerian rotation is seen on either side from the position center, reaching velocities up to $\\pm \\sim 40$ km s$^{-1}$  from $V_\\mathrm {sys}$ in the inner few $10^3$  au.", "This kinematic structure is in direct contrast to that of classical H ii regions, in which the photoionized gas at $10^4$ K exerts an outward pressure that drives an unimpeded expansion.", "We rather interpret our observations in the context of H ii regions still dominated by the gravity of the embedded stars [19], [20], [21].", "In these models the ionized gas within the gravitational radius $ R_g = GM_*/c_s $ (where $c_s$ is the sound speed of the ionized gas) remains bound, and can continue to accrete onto the stars." 
], [ "Radiative Transfer and Model Fitting", "We use RADMC-3D [3] to model the 1.3-mm free-free continuum and the H$30\\alpha $ emission in non-LTE.", "We use the version that allows to calculate recombination lines as presented in [42].", "The physical 3D model grids are created with sf3dmodels [16] and manipulated with the tools within that package to be compatible with RADMC-3D.", "The observational data sets that we model are a 1D intensity cut of the 1.3-mm continuum and the 2D P-V diagram of the H${30}\\alpha $ emission.", "Both cuts are centered at ICRS RA $=$ 18h 10m 28.652s, Dec $=$ $-19$ d 55m 49.66s, $\\mathrm {PA} = 307.8^\\circ $ .", "The length of the P-V and continuum cuts are 2.32(11480 au) and 0.72(3560 au), respectively, and their width is 0.16.", "The continuum cut was truncated because the emission deviates from a single power law beyond the defined length.", "This is probably due to a real change in the density profile of the ionized disk, or due to contributions from dust emission at larger radii (see Section REF ).", "The continuum intensity cut is only used in the model fitting to constrain the distribution of the electron density $n_e$ .", "We consider fully ionized hydrogen gas ($n_e = n_{ion}$ ).", "The kinematic structure is as follows.", "A Keplerian rotating disk.", "The simplest model is a flared Keplerian disk as in [44].", "The disk electron density $n_e$ goes as: $ n_e(R,z)= n_0 \\biggl [ \\frac{R}{R_0} \\biggr ]^p \\exp [-z^2 / 2H^2],$ where $R$ is the polar radius normalized at $R_0 = 10$ au, $z$ is the distance from the midplane, and $H(R) \\propto R$ is the scale height.", "For simplicity, we fix $H$ to have a linear dependency with radius, but we have verified that steeper radial variations $H(R) \\propto R^{1.25}$ or $\\propto R^{1.5}$ only affect the results minimally.", "The normalization of $H$ is selected such that at the gravitational radius $H(R_g) = R_g$ , in accordance with the physical expectation that within $R_g$ the ionized gas is bound [13], [21], [50].", "These assumptions leave only two free parameters for the density description in Equation REF : its normalization $n_0 = n_e(R=10~\\mathrm {au})$ , and its power-law index $p$ .", "The circular velocity in the Keplerian model is defined as: $v(R) = \\biggl [ \\frac{GM_\\star }{R} \\biggr ]^{0.5},$ where $G$ is the gravitational constant and $M_\\star $ is the central stellar mass.", "This kinematical model has two free parameters: $M_\\star $ and the inclination angle $i$ , where $i=90 \\deg $ means an edge-on view.", "A Keplerian disk with outer radial motions.", "The next level of complexity is to include radial spherical motions to the disk, which are vector-added to the Keplerian rotating disk.", "This adds a third kinematical free parameter $v_{r,0}$ .", "A Keplerian disk with an Ulrich envelope.", "We also considered a model consisting of a Keplerian disk surrounded by an Ulrich-type envelope [52], [37].", "This model has been previously used to interpret the kinematics of G10.6 at $\\sim 0.1$ pc scales [22], and is a widely-used option to interpret rotating-infalling envelopes which settle into a rotationally supported disk [23], [16].", "By construction, the radial motions in the outer part of this model are inward.", "We run models with four free parameters: the stellar mass $M_\\star $ , the viewing inclination angle $i$ , the accretion rate of the Ulrich envelope $\\dot{M}_\\mathrm {env}$ , and an adimensional scaling factor $A$ which controls the density contrast between the 
inner disk and the outer envelope.", "ccccccccccc[t] 1 0pt Model Fitting 2lDensity profile 2lKeplerian disk 3cKeplerian $+$ radial 4cKeplerian $+$ Ulrich $n_0$ $p$ $M_\\star $ $i$ $M_\\star $ $i$ $v_\\mathrm {r,0}$ $M_\\star $ $i$ $\\dot{M}_\\mathrm {env}$ $A$ $[10^{12}~\\mathrm {cm}^{-3}]$ $[\\mathrm {M_\\odot }]$ $[\\deg ]$ $[\\mathrm {M_\\odot }]$ $[\\deg ]$ [km s$^{-1}$ ] $[\\mathrm {M_\\odot }]$ $[\\deg ]$ $[10^{-5}~\\mathrm {M_\\odot ~yr}^{-1}]$ $5.5\\pm 0.1$ $-0.67\\pm 0.01$ $194.0\\pm 0.8$ $51.2\\pm 0.1$ $127.9\\pm 0.8$ $55.5\\pm 0.1$ $8.72\\pm 0.04$ $187.8\\pm 0.9$ $48.9\\pm 0.2$ $3.33\\pm 0.02$ $5.29\\pm 0.03$ Continuum fit for density profile parameters: density normalization $n_0$ and index $p$ .", "H$30\\alpha $ fit for kinematical parameters: Keplerian model – stellar mass $M_\\star $ , viewing inclination angle $i$ ; Keplerian model with external radial motions – $M_\\star $ , $i$ , and radial velocity $v_\\mathrm {r,0}$ ; Keplerian model with external Ulrich envelope – $M_\\star $ , $i$ , accretion rate of the envelope $\\dot{M}_\\mathrm {env}$ , disk density scaling factor $A$ .", "The statistical errors ($\\pm 1\\sigma $ ) from the MCMC fitting are defined to contain $68.2~\\%$ of the values.", "Table REF summarizes the free parameters of the different modelling scenarios.", "Figures REF , REF , and REF in the Appendix show diagnostic plots for each of the best-fit models.", "The modelling and fitting procedure is as follows: Homogenize the continuum and line data to a common circular beam of $\\mathrm {FWHM} = 0.16$ (792 au), with a pixel size of $0.08$ .", "Nyquist sampling was chosen to avoid overfitting the data.", "Extract the observational continuum intensity and P-V cuts as previously described.", "Create a 3D model with sf3dmodels [16] and export it to the grid format of RADMC-3D [3], [42].", "Use RADMC-3D to calculate the radiative transfer of the free-free continuum and non-LTE H$30\\alpha $ emission of the previously computed model.", "Convolve the model images to the same resolution and pixel size of the observations.", "Then extract the model continuum and P-V cuts.", "Compare observations and model, iterating over the model parameter space using the MCMC sampler implemented in emcee [5].", "The log-likelihood function that is maximized is defined as: $ \\chi ^2 = -0.5 \\sum \\biggl ( \\frac{I_\\mathrm {data} - I_\\mathrm {model}}{\\sigma _\\mathrm {noise}} \\biggr )^2,$ where the summation is over the 9 pixels of the continuum cut or the $29\\times 60=1740$ pixels of the H${30}\\alpha $ P-V image, depending on the respective maximizationNote that the number of independent measurements is half of the number of pixels.", "We have verified that changing the number of pixels per beam does not affect the results.. 
To get a conservative estimate of the noise in the fitted data, for the line modelling we measure the rms in the P-V diagram $\\sigma _\\mathrm {noise} = 1.5$ mJy beam$^{-1}$ .", "Note that it is larger than the rms noise measured in a single channel in a region free of emission (see Section ), since it contains contributions from channel-to-channel variations and correlated noise due to interferometric sidelobes.", "For the continuum modelling we set $\\sigma _\\mathrm {noise} = 0.5$ mJy beam$^{-1}$ .", "The continuum modelling is used to find the two free parameters of the disk density profile: $n_0 = 5.5\\times 10^{12}$ cm$^{-3}$ , $p=-0.67\\approx -2/3$ (see Table REF ).", "These two density parameters are then fixed in the three different kinematical models of the H$30\\alpha $ line.", "Figure REF shows the results for the continuum and line modelling.", "The best-fit models give similar residuals because their overall properties are similar (see figures in the Appendix).", "Figure REF shows the posterior probability “corner” plots for the different types of models.", "The statistical errors are very small ($\\sim 1~\\%$ ) and should be considered as strict lower limits.", "Our consideration of different scenarios shows that the main source of uncertainty is in the model assumptions.", "Figure: Results from the radiative-transfer model fitting.Top row, left panel: continuum modelling.", "The top cut shows the ALMA data and the bottom cut shows the best-fit power law model for the electron density.Top row, center and right panels: Keplerian disk best-fit and residuals.Middle row: ALMA H30α30\\alpha position-velocity diagram, along with the best fit Keplerian model with radial motions and residuals.Bottom row: same as middle row for the Keplerian disk with Ulrich envelope.Residuals are defined as observational data minus model.For the case of a purely Keplerian model, the MCMC runs converge to $[M_\\star , i] = [194.0~M_\\odot , 51.2~\\deg ]$ .", "For the scenario of a Keplerian disk with radial motions, we initially set the radius at which radial motions start to the gravitational radius of the purely Keplerian model: $r_0 = R_g(M_\\star = 194~M_\\odot ) = 2320$ au.", "The resulting central mass is consistently smaller ($\\approx 130~M_\\odot $ ) when radial motions are included compared to the purely Keplerian case, therefore we updated the fixed value of $r_0$ to 1550 au, or about $R_g$ for a central mass of $130~M_\\odot $ .", "The resulting values for the free parameters of this fit are $[M_\\star , i, v_{r,0}] = [127.9~M_\\odot , 55.5 \\deg , 8.72$ km s$^{-1}$ ] (see Table REF ).", "The residuals from this model are slightly improved because radial motions help to produce a larger velocity spread at radii beyond $\\sim 1000$ au (see Fig.", "REF ).", "We have verified that this result is insensitive to our selection of $r_0$ .", "Including $r_0$ as a fourth free parameter gives consistent results: $[M_\\star , i, v_{r,0}, r_0] = [123.3~M_\\odot , 56.4~\\deg , 8.0$ km s$^{-1}$$, 972$ au].", "The assumption that radial motions start at the gravitational radius given by our simple prescription might be only approximately valid, since radiation pressure onto ions could decrease the radius at which outward radial motions start [51].", "Interestingly, the results presented in Table REF where $v_{r,0}$ was restricted to be positive (i.e., outward radial motions) are identical if $v_{r,0}$ is restricted to be negative (inward motions).", "Therefore, the modelling cannot distinguish between 
these two scenarios because the H30$\\alpha $ emission is optically thin.", "The arguments for the selection of $r_0 \\sim R_g$ suggest an outward interpretation, but the infalling molecular gas in the exterior suggests otherwise (see Section ).", "It is also remarkable that the fitted radial motions $v_{r,0}$ are identical to the sound speed of ionized gas ($c_s = 8.6$ km s$^{-1}$ ) at the assumed electron temperature $T_e=9000$ K. This means that, if unbound, the ionized gas in the disk has not yet accelerated to its terminal supersonic expansion [6].", "Finally, the results for the kinematical models of a Keplerian disk with an external Ulrich envelope are $[M_\\star , i, \\dot{M}_\\mathrm {env}, A] = [187.8~M_\\odot , 48.9~\\deg ,3.3\\times 10^{-5}~M_\\odot $ yr$^{-1}, 5.29]$ (see Table REF ).", "The obtained stellar mass and inclination angle are consistent with the values of the disk-only models.", "Figure: Corner plots of posterior probability distributions of the model parameters fitted by maximizing the log-likelihood χ 2 \\chi ^2 function defined in Equation .The top-left panels show the parameters labeled as “Density profile” in Table .The top-right panels correspond to the “Keplerian disk” modelling.The bottom-left panels are for the “Keplerian ++ radial” models.The bottom-right panels denote the “Keplerian ++ Ulrich” models." ], [ "Discussion", "Our results make the strongest case to date for the central stellar mass of the G10.6 massive star forming clump to be very large, $M_\\star = 120$ to $200~M_\\odot $ .", "The modelling results are consistent with the evidence presented by [49], who used NH$_3$ inversion lines with a resolution similar to ours, but originating in the molecular infall zone exterior to the central ionized disk, to infer a central mass $M_\\star \\sim 150$ M$_\\odot $ .", "From our kinematical modelling, this mass is distributed within a radius $< 1000$ to 1500 au, since this is the part of the ionized structure that is dominated by Keplerian rotation.", "Moreover, the mass has to be (proto)stellar, because the amount of ionized gas needed to reproduce the continuum and H$30\\alpha $ brightness is relatively small.", "The best-fit models give a total amount of ionized gas of only 0.2 to $0.25~M_\\odot $ within the entire domain of length 11480 au.", "Besides constraining the central mass of the cluster, three main questions remain: $(i)$ what is the distribution of (proto)stellar masses?", "; $(ii)$ what is the accretion stage of these (proto)stars?", "; and $(iii)$ what is the fate of this compact cluster?", "For question $(i)$ , the ionizing-photon rate $N_\\mathrm {Ly}$ inferred from the free-free continuum gives independent constraints on the most massive stars.", "Our models give a range $N_\\mathrm {Ly} = 6.1$ to $6.6\\times 10^{48}$ s$^{-1}$ , whereas the spherical estimation by [49] gives a significantly larger $N_\\mathrm {Ly} = 2\\times 10^{50}$ s$^{-1}$ .", "The main reason for this discrepancy is the rapid density decrease in the vertical direction in our models, which are tailored to fit the averaged midplane cuts of the flattened ionized structure.", "The UC Hii region is far from sphericalThe spherical power law with a density index $n \\propto r^{-1.5}$ assumed by [49] also implies an ionized-gas mass $0.5~M_\\odot $ , about $\\times 2$ larger than in our models., therefore the true value for $N_\\mathrm {Ly}$ should be somewhere in between, although probably closer to our estimate.", "Using the stellar calibrations of [34], the range 
$N_\\mathrm {Ly} = 10^{49}$ to $10^{50}$ s$^{-1}$ is equivalent to the output of one Zero Age Main Sequence (ZAMS) star with mass $M_\\star \\approx 32$ to $\\gtrsim 60~M_\\odot $ .", "The corresponding stellar luminosity is $L_\\star \\approx 2\\times 10^5$ to $ 1\\times 10^6~L_\\odot $ , which is well within the bolometric luminosity of the G10.6 region reported by [29], $L_\\mathrm {FIR} \\approx 3\\times 10^6~L_\\odot $ .", "The kinematical and luminosity constraints are satisfied by a number of stellar-mass arrangements, e.g., three stars of $50~M_\\odot $ each provide enough gravity with a total $N_\\mathrm {Ly} \\approx 1\\times 10^{50}$ s$^{-1}$ .", "In the scenario of having $N_\\mathrm {Ly}$ closer to our lower estimate $\\sim 1\\times 10^{49}$ s$^{-1}$ , one star with $M_\\star \\approx 35~M_\\odot $ would be enough to provide the ionizing photons, but the kinematical constraints would require the presence of an unknown number or lower-mass stars.", "Assuming that this unseen stellar cluster follows a [26] IMF, we calculate that the median cluster mass corresponding to a median maximum stellar mass $M_\\mathrm {\\star ,max} = 35~M_\\odot $ is $M_\\mathrm {cl} \\approx 500~M_\\odot $ , subject to significant stochasticity.", "We conclude that it is unlikely that a cluster sampling the full IMF is forming in such a reduced volume (see below), but an unseen group of lower-mass stars might coexist with the ionizing sources.", "Further observations at higher angular resolution from the radio to the mid-IR with the JWST are needed to clarify the distribution of (proto)stellar masses.", "For question $(ii)$ , the current accretion stage depends on the interpretation of the ionized-gas kinematics.", "While the evidence for infall and rotation in the molecular gas beyond the central ionized structure is strong [49], [31], our modelling shows that the radial motions in the ionized gas – regardless of their direction – become important starting somewhere in between $r \\sim 1000$ to 1500 au, or about $r \\sim R_g$ .", "This suggests outward radial motions in the ionized gas.", "This scenario would be similar to that of an ionized disk wind surrounded by collapsing molecular gas.", "A few cases of ionized disks have been reported around young massive stars [12], [35], [10], [17], but for objects that are less massive and luminous.", "Since the aforementioned objects are less embedded, their most likely interpretation is that of a remnant ionized disk without active accretion to the central star.", "The situation in G10.6 could be different.", "Even if the mass reservoir in the ionized disk is smaller than a solar mass, exterior to it exists a reservoir of infalling molecular gas that surpasses the central stellar mass beyond a radius of $\\sim 0.1$ pc, and reaches up to $M_\\mathrm {gas}\\sim 2500$ M$_\\odot $ at a radius of 0.5 pc [31].", "The molecular gas could replenish the ionized reservoir and the observed boundary of the ionized and molecular emission will depend on the asymmetric interactions of the ionizing photons and the surrounding molecular gas [41], [7].", "Under the disk plus Ulrich envelope scenario, the timescale for the replenishment of available $0.2~M_\\odot $ of ionized gas is $t_\\mathrm {rep} = M_\\mathrm {ion-gas}/\\dot{M}_\\mathrm {env} \\approx 6000$ yr, or $\\times 10$ shorter than the expected star-formation timescale.", "Therefore, we conclude that even if replenishment of ionized gas occurs, it can provide at most a few $M_\\odot $ of fresh material.", "Any 
active accretion ought to be in a residual stage.", "For question $(iii)$ , the mass density of this young and compact cluster is large.", "Taking the upper and lower limits in mass ($M_\\star = 120$ to $200~M_\\odot $ ) and radius ($r_\\star = 1000$ to 1500 au), we estimate $\\rho _\\star \\sim 7.4\\times 10^7$ to $4.2\\times 10^8$ $M_\\odot $ pc$^{-3}$ .", "Constraining the stellar number density is challenging because of the aforementioned lack of knowledge of the lower-mass stellar population, but a number of stars in the range 5 to 20 translates into $n_\\star \\sim 1\\times 10^7$ pc$^{-3}$ .", "Our lower limit to $\\rho _\\star $ is slightly larger than the value derived by [48] for the innermost 0.01 pc of the nuclear star cluster around Sgr A$^\\ast $ , $\\rho _\\mathrm {GC} = 2.6\\times 10^7$ $M_\\odot $ pc$^{-3}$ .", "However, the Galactic Center cluster is composed of $\\sim 200$ solar-type stars.", "Our loose constraints on the number densities indicate that stellar interactions are likely to occur within the next Myr [38], [53]." ], [ "Conclusions", "Using ALMA, we report the first kinematically resolved observations of an ionized rotating disk around a forming star cluster.", "The target is the luminous star formation region G10.6-0.4.", "We use radiative transfer models of the 1.3-mm free-free continuum and H$30\\alpha $ line emission to constrain the density and velocity structure within a radius of 6000 au.", "Our preferred best-fit model is that of an ionized Keplerian disk with radial motions beyond a radius $R_g \\sim 1000$ to 1500 au, which corresponds to the radius within which the ionized gas is expected to be bound.", "The central stellar mass is robustly constrained to be in the range $M_\\star = 120$ to $200~M_\\odot $ .", "The ionized-gas mass is only $M_\\mathrm {ion-gas} = 0.2$ to $0.25~M_\\odot $ .", "The viewing inclination angle from face-on is in the range $i = 49$ to $56~\\deg $ .", "The fitted radial motions $v_{r,0} = 8.7$ km s$^{-1}$  correspond exactly to the sound speed ($c_s=8.6$ km s$^{-1}$ ) of ionized gas at $T_e=9000$ K, indicating that the outer ionized gas is barely unbound at most.", "From constraints on the amount of ionizing photons and FIR luminosity, we conclude that there are either a few massive stars with $M_\\star = 32$ to $60~M_\\odot $ , or one such massive star accompanied by an unknown number of lower-mass stars.", "Any active accretion of ionized gas onto the (proto)stars is mostly residual.", "The inferred cluster density is large, which suggests that stellar interactions are likely to occur within the next Myr .", "ALMA, SMA CASA [36], Astropy [1], sf3dmodels [16], RADMC-3D [3], emcee [5], corner [4], spectral-cube (https://spectral-cube.readthedocs.io), pvextractor (https://pvextractor.readthedocs.io), IMF (https://github.com/keflavich/imf) This paper makes use of the following ALMA data: ADS/JAO.ALMA#2015.1.00106.S.", "ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile.", "The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.", "The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. R.G.-M. and C.C.-G. 
acknowledge support from UNAM-PAPIIT projects IN108822 and IG101321, and from CONACyT Ciencia de Frontera project ID: 86372.", "R.G-M also acknowledges support from the AAS Chrétien International Research Grant.", "H.B.L.", "is supported by the National Science and Technology Council (NSTC) of Taiwan (Grant Nos.", "108-2112-M-001-002-MY3, 111-2112-M-001-089-MY3, and 110-2112-M-001-069).", "A.G. acknowledges support from NSF AAG 2008101 and NSF CAREER 2142300." ], [ "Diagnostic plots", "In this Appendix we show diagnostic plots for the best-fit models of each of the considered scenarios: purely Keplerian disk (Fig.", "REF ), Keplerian disk with radial motions starting at $r_0$ (Fig.", "REF ), and Keplerian disk with an external Ulrich-type envelope (Fig.", "REF ).", "Figure: Diagnostic plots for best fit of purely Keplerian model.", "The top-left panel shows the midplane ion density n ion =n e n_{ion} = n_e, along with a cut in which the velocity field is measured.", "The top-right panel shows cartesian velocity components in the midplane z=0z=0 cut.", "The bottom-left panel shows the midplane density profile.", "The bottom-right panel shows the scale-height HH profile.Figure: Same as Fig.", ", but for the case of the case of a Keplerian disk with external radial motions.Figure: Same as Fig.", ", but for the case of the case of a Keplerian disk with an exterior Ulrich envelope.", "Note that the Ulrich envelope has a divergence at the radius in which it settles into a disk , , which we smooth by averaging a few neighboring grid points." ] ]
2212.05633
[ [ "Intrinsic control of interlayer exciton generation rate in van der Waals\n materials via Janus layers" ], [ "Abstract We demonstrate the possibility of engineering the optical properties of transition metal dichalcogenide heterobilayers when one of the constitutive layers has a Janus structure.", "This has important consequences for the charge separation efficiency.", "We investigate different MoS$_2$@Janus layer combinations using first-principles methods including electron-hole interactions (excitons) and exciton-phonon coupling.", "The direction of the intrinsic electric field from the Janus layer modifies the electronic band alignments and, consequently, the energy separation between interlayer exciton states -- which usually have a very low oscillator strength and hence are almost dark in absorption -- and bright in-plane excitons.", "We find that in-plane lattice vibrations strongly couple the two states, so that exciton-phonon scattering may be a viable generation mechanism for interlayer excitons upon light absorption.", "In particular, in the case of MoS$_2$@WSSe, the energy separation of the low-lying interlayer exciton from the in-plane exciton is resonant with the transverse optical phonon modes (40 meV).", "We thus identify this heterobilayer as a prime candidate for efficient electron-hole pair generation with efficient charge carrier separation." ], [ "Introduction", "The weak dielectric screening of two-dimensional semiconductors allows for the formation of strongly bound electron-hole pairs (excitons) upon light absorption, leading to peculiar optical properties such as discrete excitonic peaks with strong photoluminescence response and layer-dependent exciton modulation [1], [2], [3], [4].", "In this regard, two-dimensional transition metal dichalcogenides (TMDs) are exemplary since, due to the quasi 2D confinement and weak dielectric screening, they host excitons with binding energies of hundreds of meV [5], [6], [7], [8].", "Therefore, this class of materials represents an important testbed for the physics of light-matter interaction [8], as evidenced by a wealth of observed phenomena including exciton/polariton condensation [9], [10], [11], the realization of advanced optoelectronic and nanophotonic devices [12], [13], [14], [15], [16], as well as valleytronics.", "[17], [18] Peculiarly, heterobilayer (HBL) structures with different TMD layers generally host interlayer (IL) excitons (where electron and hole forming the exciton reside in different layers) with a static electric dipole moment.", "Earlier reports showed that the type-II band alignment of the constituent monolayers causes the IL exciton to be the lowest-energy excitation in the HBL absorption spectrum despite having smaller binding energy than in-plane (IP) excitons  [19], [20], [21], [22], [23].", "These IL excitons have a lifetime almost 100 times longer than the more commonly observed in-plane (intralayer – IP) excitons — where electron and hole reside in the same layer [24], [25] — and are also at the forefront of current research.", "For example, their ultrafast formation dynamics is investigated [26], [27], along with their role in valleytronics [28], [29] and charge transfer.", "Ovesen et al.", "[30] have suggested a scenario for the formation of the IL excitons in HBL TMDs: the excitation of the IP exciton due to light absorption is followed by the tunneling of holes into a finite-momentum state of the opposite layer which can then relax to the ground state of the IL exciton via 
phonon-scattering.", "Therefore, the energy-momentum dependence and exciton energy offsets in bilayer structures are crucial for the formation of IL excitons in HBLs.", "These excited-state features also strongly depend on structural degrees of freedom such as layer separation and stacking.", "By these means the engineering of optical transition strengths, energies, and selection rules have been previously demonstrated [31], [32], [33], [34], [35], notably by the application of external electric fields[36], [37], [21], [38].", "However, the electric field does not need to be external: the addition of a so-called “Janus” layer[39] can provide a strong intrinsic electric field.", "Janus monolayers have been demonstrated by experimental studies.", "Qin et al.", "[40] have recently reported an in situ growth process which results in Janus TMD monolayers with high structural and optical quality.", "A first theoretical prediction of the impact of the intrinsic electric field, with a possible reordering of IP and IL excitons, was done recently for Janus-bilayers[41].", "In this paper, we propose that combining a non-Janus TMD monolayer with a Janus monolayer – thus creating a TMD@JTMD heterostructure – allows for reliable tuning of the relative order of the IP and IL excitons, in turn leading to an efficient pathway for IP-to-IL conversion via exciton-phonon scattering.", "Experimentally, Trivedi et al.", "[42] have already shown the controllable room temperature fabrication and optical characterization of TMD@JTMD and JTMD@JTMD HBL crystals.", "The tuning of the intrinsic net out-of-plane electric dipole moment in these materials, with the large corresponding piezoelectric effect [39], is also promising for light-energy/electricity interconversions and valley-contrasting physics [43], [44], [45], [46].", "More specifically, we investigate the optoelectronic properties of MoS$_2$ @MoSSe, MoS$_2$ @MoSeS, MoS$_2$ @WSSe, and MoS$_2$ @WSeS (as shown in Fig.", "REF a-d) using first-principles, many-body perturbation theory techniques including quasiparticle corrections, electron-hole interactions and exciton-phonon coupling [47], [48], [49], [50], [51], [52], [53], [54].", "We demonstrate that the direction of the intrinsic electric field polarization of the Janus layer changes both the band alignments and the energy separation of the lowest-energy IL and IP excitonic levels.", "In addition, we calculate the scattering strength of the IP-to-IL exciton transitions mediated by optical phonons at the zone center, in order to address the generation mechanisms of the charge-separated and long-lived IL excitons as schematically explained in Fig.", "REF e. 
Figure: (a) to (d): Schematic representation of the investigated TMD@JTMD heterobilayer structures (only the most stable configurations are reported).", "The TMD layer (bottom) is always MoS 2 _2.", "The JTMD layer (top) is either MoSSe or WSSe, while the TMD@JTMD interface can be either S-S or S-Se.", "Moreover, the two layers can be stacked in different configurations denoted AA ' ^\\prime and AB.", "(e): Schematic representation of the phonon-assisted IP →\\rightarrow IL exciton transition in the case of MoS 2 _2@WSSe.The calculated results for the geometric and electronic properties are reported in Table REF .", "The obtained in-plane lattice parameters are in excellent agreement with the reported values[55], [56] and do not depend on the stacking order.", "However, the out-of-plane structural parameters $d_{C}$ (interlayer distance between closest chalcogen layers) and $d_{M}$ (interlayer distance between metal atom layers) change with the stacking type.", "These stacking-dependent structural changes do not remarkably affect electronic structures, as seen in the calculated band gaps and exciton energies also reported in Table REF .", "Among all the considered systems (see Supplementary Information [57] for more details), the difference in total energies between the various structures is of the order of a few meV, with the so-called AA-stacked structures clearly being less energetically favored due to the in-line chalcogen-chalcogen interlayer interaction.", "Therefore, within the limit of computational error, either the AB or AA$^{\\prime }$ stackings can lead to stable structures depending on the orientation of the Janus layer.", "At variance with the stacking order, the polarization direction of the Janus layer (namely S-Mo-S$\\leftrightarrow $ S-Metal-Se or S-Mo-S$\\leftrightarrow $ Se-Metal-S, with a different interaction at the layers interface) has a notable effect on both structural and electronic properties.", "In particular, the energy difference between direct and indirect band gaps, as well as the exciton binding energies, change remarkably.", "For instance, MoS$_2$ @MoSSe(WSSe) HBLs are distinctly indirect band gap materials, whereas MoS$_2$ @MoSeS(WSSe) are direct band gap materials within the limits of thermal fluctuations." 
], [ "Excitonic properties", "The results for quasiparticle band structures (in the G$_0$ W$_0$ approximation) and the absorption spectra including electron-hole interactions for AA$^{^{\\prime }}$ and AB stacked HBLs are displayed in Fig.", "REF (a-d) (left: excitonic optical absorption spectrum; right: quasiparticle band structure in the vicinity of the K point of the hexagonal Brillouin zone (BZ); see Supplementary Information [57] for all the considered materials).", "The band structures include the projections of the electronic wave functions onto atomic orbitals localized on the constituent layers: the red and blue colors represent electronic states mostly localized on the Janus and on the TMD monolayer, respectively.", "Electron-hole interactions were included via the solution of the Bethe-Salpeter equation (BSE) from first principles.", "[58] The investigated structures differ in stacking geometry (AA$^{\\prime }$ vs AB), transition metal in the Janus layer (Mo vs W) and orientation of the Janus layer, controlling the polarization direction (S$\\leftrightarrow $ S vs S$\\leftrightarrow $ Se interface between the two layers).", "All the HBLs present some common optoelectronic features, as well as some notable differences.", "The electronic states around the $K$ points in the quasiparticle band structures are completely confined on one of the two layers.", "In particular, the four highest valence states comprise two pairs of spin-orbit-split bands, with one pair localised on the TMD layer (blue color) and the other pair on the Janus layer (red color).", "The same is true for the four lowest conduction states.", "In addition, the minimum energy transition at the $K$ point always connects a valence state on the Janus layer to a conduction state on the TMD one.", "As a consequence of the band structure shape, the low-energy region of the excitonic absorption spectra is always dominated by the IP and IL excitons originated from single-particle transitions in the bands around $K$ .", "Another common excitonic feature is that the low-lying IP states always originate from electronic transition between the TMD valence and conduction bands (blue to blue in the band graph), therefore they are always localised on the TMD layer, as shown in two examples in Fig.", "REF e-g.", "Figure: Excitonic properties of TMD@JTMD HBLs.", "Top panels (a to d): optical absorption spectra and quasiparticle band structures (zoomed-in to the vicinity of the K point in the BZ, see the supplementary materials for the full band structures) of the investigated systems.", "The absorption spectrum is proportional to the imaginary part of the excitonic dielectric function ε 2 \\varepsilon _2, which is the plotted quantity on the left panels.", "The band color (right panels) represents the projections of the electronic wave function onto its atomic orbitals components localized either in the JMTD or in the TMD layers (blue: mostly TMD state, red: mostly Janus state).", "The calculations are performed on the stacking configurations with minimal energy and are: (a) MoS 2 _2@MoSeS in AA ' ^\\prime stacking, (b) MoS 2 _2@MoSSe in AB stacking, (c) MoS 2 _2@WSeS in AA ' ^\\prime stacking and (d) MoS 2 _2@WSSe in AB stacking.", "The most important single-particle transitions forming the IL (Janus ↔\\leftrightarrow MoS 2 _2), IP (MoS 2 _2 ↔\\leftrightarrow MoS 2 _2) excitons are labeled in the band plots, while the energy of the resulting exciton states is emphasized in the absorption plots.Bottom panels (e to h): Wave-function 
intensity plot in real space for excitons corresponding to (e) IP in MoS 2 _2@MoSSe, (f) IL in MoS 2 _2@MoSSe, (g) IP in MoS 2 _2@WSSe, and (h) IL in MoS 2 _2@WSSe.", "The plot is obtained by fixing the hole in a position consistent with its band orbital character and plotting the resulting electron distribution.The four HBLs however differ remarkably with respect to the nature of the quasiparticle band gap, which is direct at $K$ for the AA$^\\prime $ stackings with S-Se interface (Fig.", "REF (a) and (c)), while indirect from $\\Gamma $ to $K$ for the AB stackings with S-S interface (Fig.", "REF (b) and (d)).", "Another important difference is due to the effect of the intrinsic electric field arising from the Janus monolayer, which shifts the TMD bands with respect to the Janus ones in both valence and conduction states.", "The direction of the shift depends on the direction of the field, i.e., on the orientation of the Janus layer.", "More specifically, in the HBLs with S/Se interface the TMD bands are shifted away from the Janus ones, leading to a substantial energy separation.", "When the Janus dipole orientation is reversed, instead, the TMD bands are shifted closer to the Janus ones becoming almost degenerate with them.", "These changes at the quasiparticle level determine the most important optical feature for charge transfer efficiency: the energy difference – and thus likelihood of transition – between the IL and IP excitonic states, which varies considerably between the systems.", "Intriguingly, the localization of the IL excitons changes depending on the stacking and JTMD layer types.", "In MoS$_2$ @MoSSe, the exciton is formed by electronic transitions from the TMD layer to the JTMD one (blue to red in the band graph), while in all other cases the hole is on the JTMD layer and the electron on the TMD one (red to blue in the band graph).", "This difference in spatial localization can be seen in Fig.", "REF f-g. 
MoS$_2$ @MoSeS (AA$^{\\prime }$ stacking, S-Se interface).", "The geometry of this system is represented in Fig.", "REF (a), the quasiparticle/BSE results in Fig.", "REF (a).", "This HBL is a direct gap material with a gap of 1.80 eV.", "The indirect band gap sits 0.01 eV above.", "Note that at the DFT-PBE level this energy ordering is reversed as seen in Table REF , which shows that the quasiparticle correction to the Kohn-Sham energies is not just a rigid shift of the bands owing to the $k$ -dependence of the $GW$ self-energy.", "The ordering of the bands at the $K$ point indicates a type-II character for this HBL at the quasiparticle level, as previously observed for MoS$_2$ @WS$_2$ and MoSe$_2$ @WSe$_2$ HBLs.", "[22], [19], [20], [23] The energy of the IL exciton which forms via the transitions from the valence band maximum (VBM) to conduction band minimum (CBM) is 1.38 eV.", "The energy difference between the lowest energy IL and the first in-plane exciton, IP, is 0.44 eV, which is too large for a one-phonon scattering process to enable the direct transitions from the optically excited IP to the IL state.", "MoS$_2$ @MoSSe (AB stacking, S-S interface).", "The geometry of this system is represented in Fig.", "REF (b) and the quasiparticle/BSE results are shown in Fig.", "REF (b).", "Contrary to the previous case, here we have an indirect gap semiconductor at both the DFT-PBE (0.93 eV) and G$_0$ W$_0$ (1.84 eV), respectively.", "Because of the direction of the intrinsic electric field, the TMD and Janus bands overlap at the CBM and have very similar band energies at the VBM.", "This leads to a situation where the electron-hole interaction strength of the IL and IP excitons become the determining factor for the energy ordering of these excitons in the absorption spectrum.", "In MoS$_2$ @WS$_2$ and MoSe$_2$ @WSe$_2$ HBLs it has been shown that the binding energiesHere, the binding energy of an IP (IL) exciton is defined as the difference between the exciton energy and the lowest-energy single-particle IP (IL) transition.", "of IL excitons are approximately 100 meV lower than the ones of IP excitons  [22].", "The case of the MoS$_2$ @MoSSe HBL, here, is similar: the IP exciton (1.83 eV) has approximately 70 meV higher binding energy than the first IL exciton (1.90 eV) as reported in Table REF .", "Due to this, the IP exciton automatically becomes the lowest energy exciton in the absorption spectrum as shown in Fig.", "REF (b).", "The spatial distribution of the excitonic wave functions for this system is shown in Fig.", "REF e and f for the IP and IL states, respectively.", "The energy separation between the IP and the IL exciton is around 70 meV.", "This is not ideal because the charge-separated state is not energetically favored, and even a thermal population of the IL exciton would be very tiny.", "MoS$_2$ @WSeS (AA$^{\\prime }$ stacking, S-Se interface).", "The geometry of this system is represented in Fig.", "REF (c) and the quasiparticle/BSE results are shown in Fig.", "REF (c).", "Similar to the Mo case, this is a direct band gap material with a quasiparticle gap of 1.57 eV (see TableREF ).", "Again, orbital projections indicate the type-II character of the electronic bands in this HBL.", "The SOC splitting of the Janus bands, shown in red, is more pronounced than in the Mo case due to the presence of the heavier W atoms.", "Here the energy difference between the IL exciton (1.15 eV) and the lowest-energy IP exciton (1.83 eV) is 0.65 eV, even larger than in the Mo case with the same 
stacking and Janus orientation, due to the more substantial SOC.", "This again rules out efficient IL exciton generation from the IP states via first-order phonon-assisted conversion processes, with incoherent exciton scattering (i.e., relaxation dynamics[30], [59]) likely being the most important mechanism.", "MoS$_2$ @WSSe (AB stacking, S-S interface).", "The geometry of this system is represented in Fig.", "REF (d) and the quasiparticle/BSE results are shown in Fig.", "REF (d).", "This HBL has an indirect quasiparticle band gap of 1.81 eV.", "At the VBM, the TMD bands (blue) are squeezed in between SOC-split Janus bands (red).", "Compared to the previous W-based HBL, the opposite intrinsic dipole moment from the Janus layer shifts the bands of the MoS$_2$ layer so that the bands localized on the two layers are energetically very close to each other just as in the case of the Mo-based system with the same geometry.", "The lowest-lying IL state (1.78 eV) is just 40 meV below the IP exciton as shown in Fig.", "REF (d), while the real-space excitonic wavefunctions are represented in Fig.", "REF g-h. We will see in the following section that this small energy difference will drastically improve the efficiency of the exciton-phonon scattering channel between the two excitonic states and, hence, the charge carrier separation efficiency of this HBL." ], [ "Exciton-phonon coupling strengths", "We now focus on MoS$_2$ @WSSe (AB stacking), the HBL where the lowest-bound intralayer and interlayer excitons have a very small energy separation (40 meV), lower than the Debye energy (56 meV).", "This means that phonon-mediated charge separation, i.e., excitonic intralayer-interlayer (IP-IL) scattering might be very efficient, due to two concurring mechanisms.", "First, the exciton relaxation dynamics suggests that after the higher-energy intralayer exciton is photoexcited, incoherent scatterings mediated by low-momentum acoustic phonons will quickly transfer the carriers to the lower-energy interlayer state (the description of out-of-equilibrium carrier dynamics is beyond the scope of this paper).", "Second, direct intralayer-interlayer scatterings mediated by optical phonons at vanishing momenta will also be permitted and may play an important role.", "We quantitatively analyze the latter mechanism by computing the exciton-phonon coupling matrix elements $\\mathcal {G}$ at zero exciton and phonon momenta for this system[60], [61], [54], [62]: $\\mathcal {G}_{\\alpha \\beta }^{\\mu }=\\sum _{vck}\\left[\\sum _{v^\\prime } \\left(A^{cv^\\prime k}_\\beta \\right)^* g_{vv^\\prime k}^\\mu A^{cvk}_\\alpha - \\sum _{c^\\prime } \\left(A^{c^\\prime v k}_\\beta \\right)^* g_{c^\\prime ck}^\\mu A^{cvk}_\\alpha \\right]$ Here we assume that excitons can be approximately described as well-defined quasiparticle excitations with bosonic character.", "[63] In Eq.", "(REF ), $\\alpha $ and $\\beta $ are the indices of the exciton states involved in a scattering mediated by a phonon mode $\\mu $ .", "The exciton eigenvectors $A^{cvk}$ are expressed in terms of single-particle transitions at wave vector $k$ from a valence band $v$ to a conduction band state $c$ .", "They represent the excitonic wave function and result from the solution of the BSE.", "The $g_{c^{\\prime }ck}^\\mu $ ($g_{vv^{\\prime }k}^\\mu $ ) are the electron-phonon coupling matrix elements, obtained from a density functional perturbation theory (DFPT) calculation and representing the scattering amplitude from a conduction band state $c$ (valence band 
state $v^{\\prime }$ ) at wave-vector $k$ into another state $c^{\\prime }$ ($v$ ) at the same wave vector, via absorption/emission of a phonon mode $\\mu $ .", "Thus, the values of $|\\mathcal {G_{\\alpha \\beta }^\\mu }|$ represent the coupling strengths of excitonic transitions $\\alpha \\rightarrow \\beta $ via absorption/emission of phonon mode $\\mu $ .", "The calculated values for the IL-IP scattering are displayed in Fig.", "REF a, showing that out of the 10 distinct optical phonon modes present in these material, all those with atoms oscillating in the layer plane may couple the two excitons and represent possible scattering channels.", "These are doubly degenerate modes with $E$ symmetry.", "The coupling is instead forbidden for the out-of-plane phonons.", "This confirms that an IP exciton localised on one layer may transfer its carriers to the lower-lying IL one with the help of ionic oscillations in that layer.", "In the case of MoS$_2$ @WSSe, the coupling strengths vary between $0.5$ and 2 meV.", "A scheme of the oscillation patterns[64] is provided in Fig.", "REF b.", "The other factor affecting scattering probabilities is energy conservation.", "We define the resonance offset energy (ROE) as $\\Delta E_{\\alpha ,\\beta }^\\mu =|E_{\\alpha }-E_{\\mu }-E_{\\beta }|$ ($E_{\\mu }$ being the phonon energy).", "The closer $\\Delta E_{\\alpha ,\\beta }^\\mu $ is to zero, the more likely the transition is to happen.", "Here we consider only the phonon emission case, since it is the dominant contribution with respect to phonon absorption.", "The ROE values are shown with a color scale in Fig.", "REF a.", "For MoS$_2$ @WSSe, the energy of the fourth $E$ phonon mode is exactly resonant with the excitonic transition at $40.4$ meV.", "This phonon mode corresponds to ionic oscillation of the Janus layer only.", "In addition, strongly coupled $E$ modes three and five (mostly the TMD layer moving) are both just 5 meV from the resonance.", "Therefore, our calculations predict that the IP-IL excitonic scattering mediated by optical phonons will be a particularly efficient process for the W-based HBL.", "Figure: (a) Exciton-phonon coupling strengths, |𝒢||\\mathcal {G}| (see Eq.", "()) for the intralayer (IP) to interlayer (IL) exciton scattering per phonon mode in MoS 2 _2@WSSe (AB stacking, see Fig.", "[d]).", "The colors report the resonance energy offset values (see text).", "(b) Oscillation patterns of the zone-center optical phonon modes for MoS 2 _2@MoSSe (AB stacking) and MoS 2 _2@WSSe (AB stacking).", "The arrow lengths are not scaled by the ionic masses, but they are set to zero for oscillations one order of magnitude weaker than the rest.", "The phonon energies associated with each mode are reported, in meV, for both systems." 
], [ "Conclusions", "We have shown that the intrinsic net out-of-plane electric dipole moment of a Janus-type TMD layer has a strong influence on the optoelectronic properties of heterobilayer systems in which it is used.", "In particular, our first-principles analysis on MoS$_2$ @MoSSe, MoS$_2$ @MoSeS, MoS$_2$ @WSSe, and MoS$_2$ @WSeS demonstrates that the polarization direction of the Janus layer can be used to tune the dynamics of excitons – most notably by altering the energy separation between interlayer and in-plane excitonic states – without the use of external fields.", "Surprisingly, for MoS$_2$ @WSSe the calculated energy difference is exactly resonant with in-plane optical phonon modes.", "Moreover, the calculated zero-momentum exciton-phonon couplings point to efficient in-plane to interlayer excitonic scattering mediated by optical phonons, again with MoS$_2$ @WSSe being the prime candidate for this mechanism.", "It is important to note, however, that this system also has an indirect band gap: hence, new low-lying dark excitonic states at finite momentum may be important, introducing an additional possible pathway for exciton dynamics which could be detrimental to the intra- to interlayer conversion rate.", "This opens a relevant future avenue of investigation theoretically, experimentally, and from a materials design perspective.", "Theoretically, the next step is computing the exciton-phonon couplings at finite momenta and simulating the excitonic relaxation dynamics in order to compare the transition rates of the various competing mechanisms.", "Experimentally, these quantities can be measured by photoluminescence and time-resolved, pump-and-probe studies.", "In materials design, it would be important to target excited-state properties both microscopically and at the excitonic level, rather than macroscopic optical properties at the single-particle level, in order to obtain candidate systems more effectively.", "In conclusion, our results clearly support the use of Janus materials in layer engineering to boost the generation rate of long-lived interlayer excitons in heterobilayer TMD crystals.", "These excitons are obtained by phonon-assisted conversion of optically excited intralayer states." 
], [ "Methods", "The single-particle wave functions and corresponding energies (DFT step) are obtained from density functional theory as implemented in the Quantum ESPRESSO code (QE) [47] using Perdew-Burke-Ernzerhof (PBE) [65] norm-conserving, fully relativistic pseudopotentials in the generalized gradient approximation (GGA) [66].", "These were generated by the PseudoDojo project [67].", "The plane-wave energy cutoff, vacuum separation between periodic repetitions of the simulation supercell, and $k$ -grid sampling are, 120 Ry, 55 a.u.", "and 42$\\times $ 42$\\times $ 1 ($\\Gamma $ centered), respectively.", "We adopted Grimme’s dispersion correction (labeled as Grimme-D2 in QE)[68], [69] in order to take van der Waals (vdW) interactions into account.", "The Many-Body Perturbation Theory calculations[58], performed on top of the DFT results, were conducted with the YAMBO code.", "[49], [50] The G$_0$ W$_0$[51], [52] corrections to the single-particle eigenvalues were computed with the plasmon-pole approximation for the dynamical electronic screening.", "The direct and indirect band gaps were converged with the 42 $\\times $ 42 $\\times $ 1 $k$ -grid mesh (yielding 169 $k$ -points in the irreducible Brillouin zone), summing over 600 and 900 states for the screening and the Green's functions, respectively.", "The corrections were computed for the top 4 valence bands and the bottom 4 conduction bands.", "The BSE[58] for excitons was then solved in the Tamm-Dancoff approximation with RPA static screening, which was summed over 300 bands.", "The direct exciton energies and their wave functions were obtained for the first 12000 excitonic states by using the iterative scheme enabled by the SLEPC library.", "[70] The Coulomb cutoff (CC) technique was used along the out-of-plane direction to eliminate the long-ranged interactions with the repeated periodic images of the systems in both G$_0$ W$_0$ and BSE steps.", "[53] The same computational settings used at the DFT level, albeit with stricter convergence thresholds for the electronic wave functions, were adopted for the $\\Gamma $ -point calculation of phonon frequencies, eigenvector displacements and electron-phonon coupling matrix elements via Density Functional Perturbation Theory as implemented in the Quantum Espresso Code.", "[48] A dedicated Coulomb cutoff technique[71] was employed also in the phonon case.", "Supplementary Information file includes the schematic representation of considered Janus HBLs, calculated band structures at both PBE and G$_{0}$ W$_{0}$ levels, and calculated $\\varepsilon _{2}$ with quasi-particle (via Bethe Salpeter Equation) and independent particle approximations.", "C. S. acknowledges funding by the the Air Force Office of Scientific Research (AFOSR, USA) under award number FA9550-19-1-7048.", "M. M acknowledges the Research Foundation-Flanders (FWO-Vlaanderen).", "F.P.", "acknowledges the European Union project: MaX Materials design at the eXascale H2020-INFRAEDI-2018-1, grant agreement n. 824143.", "L.W.", "acknowledges funding by the Fond National de Recherche, Luxembourg via project INTER/19/ANR/13376969/ACCEPT." ] ]
2212.05615
[ [ "Reconciling a decelerating Universe with cosmological observations" ], [ "Abstract Can modern cosmological observations be reconciled with a general-relativistic Universe without an anti-gravitating energy source?", "Usually, the answer to this question by cosmologists is in the negative, and it is commonly believed that the observed excess dimming of supernovae relative to that in the Milne model is evidence for dark energy.", "In this paper, we develop a theorem that clarifies the conditions for such an excess dimming, based on which we argue that the answer to the above question may counter-intuitively be `yes'." ], [ "Introduction", "Dark energy first became an established part of the cosmological paradigm in 1998, when data from supernovae of type Ia showed a greater dimming in their luminosity with redshift than what could be explained with a Milne universe modelThe Milne model (empty FLRW model without a cosmological constant) defines an upper limit for the distance-redshift curve of the expanding general-relativistic Friedmann-Lemaître-Robertson-Walker (FLRW) models that obey the strong energy condition.", "[36], [31].", "When interpreted within the general-relativistic FLRW models, the data thus indicated the existence of dark energy, an anti-gravitating energy source violating the universally attractive nature of gravity known from ordinary matter.", "Shortly after this realisation, it was proposed that the dimming of light from supernovae could alternatively be explained in universe models with ordinary matter forming a large-scale cosmic inhomogeneity as modelled by the Lemaître-Tolman-Bondi (LTB) models [29], [8].", "These models exhibit universal deceleration of distances between geodesic test particles, but an observer who is placed properly relative to the inhomogeneity can infer the same type of excess dimming of light from supernovae as is observed in an FLRW model with an accelerated scale factor; see [9] for a review.", "The LTB metrics that can account for the observed supernova luminosities are challenged by complementary data [7], [35] and are furthermore breaking with the Copernican principle.", "It is of interest if models that satisfy the strong energy condition and the Copernican principle could produce an excess dimming of light from supernovae.", "The Dyer-Roeder approximation [13], [14] is a commonly applied conjecture for analysing light propagation in models that obey a notion of the Copernican principle for cosmological observers and where a uniquely defined FLRW reference model exists.", "The Dyer-Roeder approximation builds on the assumption that the average redshift of the light is given by the background FLRW model, and consequently, the dimming of light within globally-expanding universe-models that conform to the approximation and obey the strong energy condition cannot exceed that of the Milne-universe; see the empty beam approximation in equation 7 of [13].", "The validity of the Dyer-Roeder approximation for describing light-propagation in space-times with non-linear stuctures is subject to debate [16], [20], [33], [11], [19], [25], [26].", "In this paper, we formulate a theorem for the constraints on the observed angular diameter distance (luminosity distance) without employing conjectures for light propagation or any a priori constraints on the underlying geometry; we only assume the geometrical optics description for light and general relativity as the gravitational theory.", "When imposing the null energy condition, we find that the 
angular diameter distance is bounded from above by the distance–redshift relation of the Milne universe model when certain geometrical conditions are met.", "We analyse the circumstances under which the conditions of the theorem may be violated without the introduction of exotic matter that breaks with the strong energy condition.", "We discuss how such a violation may actually take place and give rise to an observed excess dimming of light from cosmic sources in our Universe.", "Consider a space-time manifold with a metric tensor $g_{\mu \nu }$ of signature $(- + + +)$ and a Levi-Civita connection $\nabla _\mu $ .", "Let a congruence of light pass from a source to an observer in this space-time.", "We assume that the geometrical optics approximation holds, such that the photons of the congruence can be described as test particles with 4-momentum $k^\mu $ that follow geodesic null curves.", "Let $\hat{u}^\mu _\mathcal {O}$ and $\hat{u}^\mu _\mathcal {E}$ be the 4-velocities of the observer and source of the light as evaluated at the events of observation $\mathcal {O}$ and emission $\mathcal {E}$ respectively.", "The redshift of the photons passing from $\mathcal {E}$ to $\mathcal {O}$ is given by $1 + \hat{z}_\mathcal {E}\equiv \frac{\hat{E}_\mathcal {E}}{\hat{E}_\mathcal {O}} \, , \quad \hat{E} \equiv - \hat{u}^\mu k_\mu \, .$ It is useful to consider a `cosmic congruence' reference field with 4-velocity $u^\mu $ that is defined in the space-time neighbourhood of the photon congruence.", "We decompose the observer and emitter 4-velocities at the end-points of the photon congruence relative to this reference $\hat{u}^\mu = \gamma (u^\mu + v^\mu )$ , where $v^\mu u_\mu = 0$ and constrain the (otherwise general) choice of cosmic 4-velocity field by requiring $\gamma - 1 \ll 1 \, , \qquad \gamma \equiv - \hat{u}^\mu u_\mu = \frac{1}{\sqrt{1 - v^\mu v_\mu }}$ at $\mathcal {O}$ and $\mathcal {E}$ , and this lets us use the approximation $\hat{u}^\mu = u^\mu + v^\mu \quad \text{(approximation at $\mathcal {O}$ and $\mathcal {E}$)} \, .$ We are only making smallness assumptions on $\gamma - 1$ , not its derivatives, and $v^\mu $ may vary arbitrarily along the emitter and observer worldlines as long as it is small at the points $\mathcal {E}$ and $\mathcal {O}$ .", "We shall typically consider more sources, and require that (REF ) is satisfied for each of the corresponding points of emission; this puts more constraints on the choice of cosmic 4-velocity.", "However, when one valid choice of 4-velocity field exists, there will be an infinity of choices for $u^\mu $ that obey the condition (REF ).", "It is generally possible to write $k^\mu = E (u^\mu - e^\mu ) \, , \qquad E \equiv - u^\mu k_\mu \, ,$ where $e^\mu $ is a spatial unit vector that is orthogonal to the cosmic 4-velocity field, $u^\mu e_\mu = 0$ .", "Using the two decompositions (REF ) and (REF ) we can write $1 + \hat{z}_\mathcal {E}= (1+z_\mathcal {E}) \left(1 + e^\mu v_\mu \vert _{\mathcal {E}} - e^\mu v_\mu \vert _{\mathcal {O}} \right) \, , \; \; 1+z \equiv \frac{E}{E_\mathcal {O}} .$ Let $d_A$ be the `cosmic angular diameter distance' to an artificial source comoving with $u^\mu $ at $\mathcal {E}$ and measured by an artificial observer comoving with $u^\mu $ at $\mathcal {O}$ .", "From the special-relativistic relation between the area measure in frames of relative velocity, we have 
that the angular diameter distance, $\\hat{d}_A$ , to the physical emitter comoving with $\\hat{u}^\\mu _\\mathcal {E}$ and observed by the physical observer comoving with $\\hat{u}^\\mu _\\mathcal {O}$ is (see equation 2.26 in [27]) $\\hat{d}_A = \\frac{ \\hat{E}_\\mathcal {O}}{ E_\\mathcal {O}} d_A = (1 + e^\\mu v_\\mu \\vert _{\\mathcal {O}} ) \\, d_A \\, .$ We assume that the photon number in the light bundle is preserved on the path from the emitter to the observer, such that Etherington's reciprocity theorem [17] holds for deriving luminosity distance: $\\hat{d}_L = (1+\\hat{z})^2 \\hat{d}_A$ .", "Thus, it suffices to analyse angular diameter distance and redshift, from which the value of luminosity distance follows." ], [ "Evolution of cosmic distances and redshift", "We consider the evolution equations for the cosmic angular diameter distance and redshift functions, while knowing that the physical distances and redshifts can easily be obtained by the relations (REF ) and (REF ).", "The evolution of $z$ along the null rays is given by $\\hspace*{-5.69046pt} \\frac{ {\\rm d} z}{{\\rm d} \\lambda } = - E_\\mathcal {O}(1+z)^2 \\mathfrak {H} \\, ,$ where $\\lambda $ satisfies $k^\\mu \\nabla _\\mu \\lambda = 1$ and is an affine parameter along the individual null rays, where $\\frac{ {\\rm d} }{{\\rm d} \\lambda } \\!", "\\equiv \\!", "k^\\nu \\nabla _\\nu $ is the directional derivative along the null ray, and where we have introduced the `effective Hubble parameter' $\\hspace*{-5.69046pt} \\mathfrak {H} \\equiv \\frac{ {\\rm d} E^{-1}}{{\\rm d} \\lambda } = \\frac{1}{3}\\theta - e^\\mu a_\\mu + e^\\mu e^\\nu \\sigma _{\\mu \\nu } \\, ,$ which reduces to the FLRW Hubble parameter `$\\dot{a}/a$ ' in the FLRW geometry.", "We have made use of the kinematic decomposition $&& \\hspace{-8.5359pt} \\nabla _{\\nu }u_\\mu = \\frac{1}{3}\\theta h_{\\mu \\nu }+\\sigma _{\\mu \\nu } + \\omega _{\\mu \\nu } - u_\\nu a_\\mu \\ , \\nonumber \\\\&& \\hspace{-8.5359pt} \\theta \\equiv \\nabla _{\\mu }u^{\\mu } , \\; \\; \\sigma _{\\mu \\nu } \\equiv h_{ \\langle \\nu }^{\\, \\beta } h_{ \\mu \\rangle }^{\\, \\alpha } \\nabla _{ \\beta }u_{\\alpha } , \\; \\; \\omega _{\\mu \\nu } \\equiv h_{ [ \\nu }^{\\, \\beta } h_{ \\mu ] }^{\\, \\alpha } \\nabla _{ \\beta }u_{\\alpha } ,$ where $h_{\\mu \\nu } = g_{\\mu \\nu } + u_{\\mu } u_{\\nu }$ is the spatial projection tensor orthogonal to $u^\\mu $ , and where the triangular bracket $\\langle \\rangle $ around indices selects the tracefree symmetric part of the spatial tensor and $[ ]$ selects the anti-symmetric part.", "The variables $\\theta $ , $\\sigma _{\\mu \\nu }$ , and $\\omega _{\\mu \\nu }$ describe respectively the volume expansion, shear, and vorticity of the cosmic congruence, and $a^\\mu \\equiv \\dot{u}^\\mu $ is the 4-acceleration of the individual observers in the congruence, where the overdot $\\dot{} \\!", "\\equiv \\!", "u^\\nu \\nabla _\\nu $ represents the directional derivative along the cosmic flow-lines.", "The second derivative of $z$ along the photon null ray is obtained by differentiating (REF ): $\\frac{ {\\rm d^2} z}{{\\rm d} \\lambda ^2} = E_\\mathcal {O}^2 \\left( 3 + \\mathfrak {Q} \\right) \\mathfrak {H} ^2 (1+z)^3 \\, ,$ where the `effective deceleration parameter' $\\mathfrak {Q} &\\equiv & - 1 - \\frac{1}{E} \\frac{ \\frac{ {\\rm d} \\mathfrak {H} }{{\\rm d} \\lambda } }{ \\mathfrak {H} ^2} \\, ,$ reduces to the usual FLRW deceleration parameter `$- a \\ddot{a}/\\dot{a}^2 $ ' in the limit of an FLRW geometry.", "In 
general, $\\mathfrak {Q}$ can be written on the following form [41], [23] $\\hspace*{-14.22636pt} \\mathfrak {Q} &=& - 1 - \\frac{ \\overset{0}{\\mathfrak {q}} + {e} \\cdot {{\\overset{1}{\\mathfrak {q}}}} + {e} {e} \\cdot {{\\overset{2}{\\mathfrak {q}}}} + {e} {e} {e} \\cdot {{\\overset{3}{\\mathfrak {q}}}} + {e} {e} {e} {e} \\cdot {{\\overset{4}{\\mathfrak {q}}}} }{ \\mathfrak {H} ^2} \\, ,$ where ${e} \\cdot {{\\overset{1}{\\mathfrak {q}}}} \\equiv e^\\mu \\overset{1}{\\mathfrak {q}}_\\mu $ , ${e} {e} \\cdot {{\\overset{2}{\\mathfrak {q}}}} \\equiv e^\\mu e^\\nu \\overset{2}{\\mathfrak {q}}_{\\mu \\nu }$ , etc., with multipole coefficients $&& \\overset{0}{\\mathfrak {q}} \\equiv \\frac{1}{3} \\dot{\\theta } + \\frac{1}{3} D_{ \\mu } a^{\\mu } - \\frac{2}{3}a^{\\mu } a_{\\mu } - \\frac{2}{5} \\sigma _{\\mu \\nu } \\sigma ^{\\mu \\nu } \\, , \\nonumber \\\\&& \\overset{1}{\\mathfrak {q}}_\\mu \\equiv - \\dot{a}_\\mu - \\frac{1}{3} D_{\\mu } \\theta + a^\\nu \\omega _{\\mu \\nu } + \\frac{9}{5} a^\\nu \\sigma _{\\mu \\nu } - \\frac{2}{5} D_{ \\nu } \\sigma ^{\\nu }_{\\; \\mu } \\, , \\nonumber \\\\&& \\overset{2}{\\mathfrak {q}}_{\\mu \\nu } \\equiv \\dot{\\sigma }_{\\mu \\nu } + D_{ \\langle \\mu } a_{\\nu \\rangle } + a_{\\langle \\mu }a_{\\nu \\rangle } - 2 \\sigma _{\\alpha ( \\mu } \\omega ^\\alpha _{\\; \\nu )} - \\frac{6}{7} \\sigma _{\\alpha \\langle \\mu } \\sigma ^\\alpha _{\\; \\nu \\rangle } \\, , \\nonumber \\\\&& \\overset{3}{\\mathfrak {q}}_{\\mu \\nu \\rho } \\equiv - D_{ \\langle \\mu } \\sigma _{\\nu \\rho \\rangle } - 3 a_{ \\langle \\mu } \\sigma _{\\nu \\rho \\rangle } \\, , \\quad \\overset{4}{\\mathfrak {q}}_{\\mu \\nu \\rho \\kappa } \\equiv 2 \\sigma _{\\langle \\mu \\nu } \\sigma _{\\rho \\kappa \\rangle } \\, ,$ where $D_\\mu $ is the covariant spatial derivative as projected onto the 3-dimensional space orthogonal to $u^\\mu $ .", "Apart from the volume acceleration term, $\\propto \\dot{\\theta }$ , there are a number of other terms arising from anisotropic and inhomogeneous kinematics of the cosmic congruence, which are generally not constrained in sign or amplitude by general-relativistic energy conditions.", "The interpretation of $\\mathfrak {Q}$ as a direct measure of the deceleration of distances between test particles is thus generally not valid except for in the strict FLRW case.", "We note that while the dimensionless effective deceleration parameter $\\mathfrak {Q}$ may become singular in regions where $ \\mathfrak {H} = 0$ , the dimensionful effective deceleration parameter $ \\mathfrak {H} ^2 \\mathfrak {Q}$ remains finite when the first and second derivatives of $z$ are finite.", "The evolution of $d_A$ along a null ray of the congruence is given by the focusing equation, cf.", "equation 44 in [30], $\\frac{{\\rm d}^2 d_A }{ {\\rm d} \\lambda ^2}= -\\left(\\frac{1}{2} \\hat{\\sigma }^{\\mu \\nu } \\hat{\\sigma }_{\\mu \\nu } + \\frac{1}{2}k^{\\mu }k^\\nu R_{\\mu \\nu } \\right)d_{A} \\, ,$ where $\\hat{\\sigma }_{\\mu \\nu }$ is the shear tensor of the photon congruence, and $R_{\\mu \\nu }$ is the Ricci curvature of the space-time.", "The evolution of $d_A$ may be solved for with knowledge of shear and Ricci curvature by use of the initial conditions at the vertex of the observer's lightcone $d_A \\vert _\\mathcal {O}= 0 \\, , \\qquad \\frac{{\\rm d} d_A }{ {\\rm d} \\lambda } \\vert _\\mathcal {O}= - E_\\mathcal {O}\\, ,$ which can formally be obtained by expanding the Jacobi map around the observer [38].", "We further have from (REF ) that 
$\\frac{{\\rm d} z }{ {\\rm d} d_A } \\Big \\vert _\\mathcal {O}= \\mathfrak {H} _\\mathcal {O}\\, ,$ which gives the observational interpretation of $ \\mathfrak {H} _\\mathcal {O}$ as the slope of the redshift–distance function at the observer." ], [ "The null energy condition and observational bounds on the dimming of light", "Let us consider general-relativistic space-time scenarios where the null energy condition is satisfied, meaning $k^\\mu k^\\nu R_{\\mu \\nu } \\ge 0$ .", "From this it immediately follows from (REF ) that $\\frac{{\\rm d}^2 d_A }{ {\\rm d} \\lambda ^2} \\le 0 \\, ,$ so $d_A(\\lambda )$ is a concave function.", "Together with the initial condition (REF ) this means that $\\frac{{\\rm d} d_A }{ {\\rm d} \\lambda } \\ge - E_\\mathcal {O}\\,$ at any point along the null ray, where we recall that $\\lambda $ is increasing towards the observer.", "We shall prove the following theorem on the maximum distance to objects as a function of their redshift.", "Theorem 1 (Milne bound) Consider a general-relativistic space-time obeying the null energy condition.", "For a null geodesic congruence with $ \\mathfrak {H} _\\mathcal {O}> 0$ , the following applies: The angular diameter distance, $d_A$ , is bounded from above by the Milne universe model in terms of its redshift for a section of the null geodesic path $[\\lambda _1,\\lambda _2]$ , if $\\hspace{-5.69046pt} \\frac{\\int ^{\\lambda _\\mathcal {O}}_\\lambda {\\rm d} \\lambda ^{\\prime } \\bar{\\mathfrak {Q}}(\\lambda ^{\\prime }) }{\\lambda _\\mathcal {O}- \\lambda } \\ge 0 \\, , \\; \\quad \\bar{\\mathfrak {Q}}(\\lambda ) \\equiv \\frac{E_\\mathcal {O}}{ \\mathfrak {H} _\\mathcal {O}} \\int ^{\\lambda _\\mathcal {O}}_\\lambda \\!\\!", "{\\rm d} \\lambda ^{\\prime } \\mathfrak {H} ^2 \\, \\mathfrak {Q} \\, ,$ is satisfied for all $\\lambda \\in [\\lambda _1,\\lambda _2]$ , corresponding to $d_A \\in [d_A(\\lambda _2), d_A(\\lambda _1)]$ .", "From (REF ) we have that the angular diameter distance is bounded in terms of the affine distance along the null geodesic $d_A \\le E_\\mathcal {O}(\\lambda _\\mathcal {O}- \\lambda ) \\, .$ We use (REF ) to make the rewriting $\\hspace{-7.11317pt}\\frac{ 1 - \\frac{1}{(1+z)^2}}{2} = - \\!\\!", "\\int ^{\\lambda _\\mathcal {O}}_\\lambda \\!\\!\\!\\!", "{\\rm d} \\lambda ^{\\prime } \\frac{\\frac{ {\\rm d} z}{{\\rm d} \\lambda ^{\\prime }} }{(1+z)^3} = E_\\mathcal {O}\\!\\!", "\\int ^{\\lambda _\\mathcal {O}}_\\lambda \\!\\!\\!\\!", "{\\rm d} \\lambda ^{\\prime } \\frac{ \\mathfrak {H} }{1+z} \\, .$ Together with the identity $\\frac{ \\mathfrak {H} }{1+z} - \\mathfrak {H} _\\mathcal {O}= \\mathfrak {H} _\\mathcal {O}\\bar{\\mathfrak {Q}}$ (which follows from multiplying (REF ) with $ \\mathfrak {H} ^2$ and integrating both sides) this yields $\\hspace{-3.98337pt} \\frac{ 1 - \\frac{1}{(1+z)^2}}{2} = \\mathfrak {H} _\\mathcal {O}E_\\mathcal {O}(\\lambda _\\mathcal {O}- \\lambda ) \\!", "\\left(\\!", "1 + \\frac{\\int ^{\\lambda _\\mathcal {O}}_\\lambda {\\rm d} \\lambda ^{\\prime } \\bar{\\mathfrak {Q}}(\\lambda ^{\\prime }) }{\\lambda _\\mathcal {O}- \\lambda } \\!", "\\right) .$ Combining (REF ) with (REF ), and assuming that (REF ) holds for a distance interval $d_A \\in [d_A(\\lambda _2), d_A(\\lambda _1)]$ , this gives the following bound for distances in that interval: $ \\mathfrak {H} _\\mathcal {O}d_A \\le \\frac{ 1}{2} \\!", "\\left( \\!", "1 - \\frac{1}{(1+z)^2} \\!", "\\right)$ where the right hand side is the dimensionless angular diameter distance in the Milne model, cf., 
e.g., equation 6 in [10].", "Remarks to Theorem REF .", "A simple way of satisfying (REF ) is to have $\mathfrak {Q} \ge 0$ everywhere along the null ray, which for instance holds for a comoving FLRW frame when the strong energy condition is satisfied.", "However, $\mathfrak {Q}$ will generally vary in its sign locally on account of the anisotropic multipole coefficients in (REF ), which tend to dominate at small scales when structures become non-linear.", "The weaker integral condition (REF ) is thus very useful, as it may be satisfied (for appropriate distance intervals $[d_A(\lambda _2), d_A(\lambda _1)]$ ) in broad classes of space-times with structure.", "We note that the relation (REF ) is exact, and thus, when $d_A = E_\mathcal {O}(\lambda _\mathcal {O}- \lambda )$ (corresponding to an unlensed light beam propagating in empty space) holds, violations of (REF ) will necessarily cause violations of the Milne bound in (REF ).", "Generally, we expect systematic violations of (REF ) to produce systematic excess dimming of light relative to the Milne model.", "The theorem applies to space-time formulations that take into account structures on small scales, where light may encounter gravitational lenses and virialized structures.", "The requirement $ \mathfrak {H} _\mathcal {O}> 0$ does imply some level of coarse-graining around the observer residing inside a solar system in a galaxy, and in practice the slope (REF ) must be computed for astrophysical objects in a cosmic neighbourhood of the observer that is large enough for its volume to be expanding.", "The bound (REF ) will generally vary with the direction of observation, cf.", "the definition in (REF ).", "Alternatively, the minimum value of $ \mathfrak {H} _\mathcal {O}$ can be used in (REF ) to obtain a weaker but isotropic bound."
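As a quick numerical illustration of what the Milne bound means in practice, the snippet below compares the dimensionless relation $\mathfrak {H} _\mathcal {O} d_A = \frac{1}{2}(1 - 1/(1+z)^2)$ with the angular diameter distance of a flat FLRW model containing matter and a cosmological constant. The value $\Omega_m = 0.3$ is an illustrative assumption, not a parameter used in this paper; the accelerated model exceeds the Milne curve at small and intermediate redshift, which is exactly the kind of excess the theorem constrains.

```python
import numpy as np

def milne_dA(z):
    """Dimensionless Milne bound: H_O * d_A as a function of redshift."""
    return 0.5 * (1.0 - 1.0 / (1.0 + z) ** 2)

def lcdm_dA(z, omega_m=0.3):
    """H_O * d_A / c for a flat FLRW model with matter and a cosmological constant."""
    zz = np.linspace(0.0, z, 2001)
    E = np.sqrt(omega_m * (1.0 + zz) ** 3 + (1.0 - omega_m))
    integrand = 1.0 / E
    # simple trapezoidal integration of the comoving distance integral
    comoving = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zz))
    return comoving / (1.0 + z)

for z in (0.2, 0.5, 1.0, 2.0, 3.0):
    print(f"z = {z:3.1f}: Milne bound = {milne_dA(z):.3f}, flat LCDM = {lcdm_dA(z):.3f}")
```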
], [ "Excess dimming of supernovae by ordinary matter?", "Modern cosmological observations violate the general-relativistic Milne bound (REF ) for small and intermediate values of redshift.", "Assuming that general relativity is the correct gravitational theory, that the null energy condition is satisfied, and that the geometrical optics approximation holds for describing light, the above theorem thus implies that the condition (REF ) must be violated at these scales.", "A trivial way to systematically violate (REF ) is to introduce dark energy or another energy-momentum source that violates the strong energy condition, but here we shall focus on situations where the strong energy condition holds.", "Theorem REF suggests that a persistent trend where the dimensionful effective deceleration parameter $ \\mathfrak {H} ^2 \\mathfrak {Q}$ is systematically more negative than positive along the photon null rays must be present in order to produce a dark-energy-like signature in distance–redshift data without violation of the strong energy condition.", "Big systematic departures of $ \\mathfrak {H} ^2 \\mathfrak {Q}$ from the length scale deceleration $(\\theta /3)^2(-1 - 3 \\dot{\\theta } / \\theta ^2)$ can be obtained if $e^\\mu $ aligns systematically with the multipole coefficients in (REF ).", "This happens for instance in the non-Copernican LTB void modelsConcretely, the terms $\\propto e^\\mu D_\\mu \\theta $ (see dipole coefficient in (REF )) give systematic negative contributions in $\\mathfrak {Q}$ when the photon is propagating towards the center of the void (towards less density and faster volume expansion).", "[29], [8], which is the underlying reason why these models can produce a breaking of the bound (REF ) for a central observer.", "In general, the spatial propagation direction $e^\\mu $ of the photons is determined by the equation $\\hspace{-17.07182pt} \\frac{h^{\\mu }_{\\, \\nu } k^\\alpha \\nabla _\\alpha e^\\nu }{E} \\!", "= \\!", "e^\\mu e^\\nu e^\\rho \\sigma _{\\nu \\rho } \\!", "- \\!", "e^\\nu \\sigma ^{ \\mu }_{\\; \\nu } \\!", "- \\!", "e^\\nu \\omega ^{\\mu }_{\\; \\nu } \\!-\\!", "e^\\mu e^\\nu a_\\nu \\!", "+ \\!", "a^\\mu ,$ which can be obtained from the geodesic equation $k^\\nu \\nabla _\\nu k^\\mu = 0$ and the decomposition (REF ).", "The differential equation (REF ) makes explicit how the photon direction of propagation is responding to the cosmic kinematics.", "Concretely, $e^\\mu $ is driven towards alignment with $a^\\mu $ and with eigendirections of $\\sigma ^{\\mu }_{\\; \\nu }$ , while $\\omega ^{\\mu }_{\\; \\nu }$ contributes with a deflection effect perpendicular to the initial direction of propagation of the photon bundle.", "In addition, shear causes squeezing of structures along the shear eigendirections, and photons that propagate along axes of more expansion (positive shear) will tend to spend more time in the structure than photons that propagate along axes of less expansion (negative shear) in realistic models of elongated structures.", "These systematic trends caused by anisotropic expansion degrees of freedom are generically expected, and these trends hold the potential to produce non-cancelling effects that may violate (REF ) and systematically impact the distance–redshift relation." 
], [ "Discussion of model scenarios", "Systematic alignment of $e^\\mu $ with shear eigendirections has been observed in Swiss-cheese models [24], [34] and a model constructed from tessellating space by Bianchi I solutions [26].", "For the majority of the investigated models, the systematic contributions from the term $e^\\mu e^\\nu \\sigma _{\\mu \\nu }$ tend to be statistically counteracted by systematic contributions from perturbations in the expansion rate $\\theta $ in (REF ), such that the overall evolution of (average) redshift is well modelled by the FLRW background expansion rate.", "Generally, the redshift measured by observers comoving with the matter in Swiss-cheese models with LTB and Szekeres structures tends to be well modelled by the FLRW background model [19], [40], [26], although counter examples exist [28].", "The cancelation between shear and inhomogeneous expansion can be seen analytically for linearly perturbed FLRW models [34], but is less transparent for the Swiss-cheese models.", "The strikingly accurate predictions of redshift by the FLRW model in cases where this would not a priori have been expected, may be understood through the freedom in choosing the cosmic 4-velocity $u^\\mu $ of reference, since the only requirement for this reference is that (REF ) is satisfied for the emitters and observer.", "If there is a cosmic reference frame that is kinematically close to a reference FLRW model – such that $\\theta /3$ is sufficiently close-to-homogeneous and with sufficiently small norms of $\\sigma _{\\mu \\nu }$ and $a^\\mu $ – it follows from (REF ), (REF ), and (REF ) that the observed redshift is close to that of an FLRW universe (up to small boost effects produced by the transformations (REF ) and (REF )).", "The shear tensor is composed of 5 independent degrees of freedom, which generally cannot all be set to zero by a convenient choice of frame.", "Nevertheless, a hypersurface forming frame where shear is almost vanishing and expansion of space is almost homogeneous (inheriting the properties of the Poisson gauge from linearised FLRW perturbation theory) is often present – even in cases where this is not a priori expected – for instance, in certain LTB solutionsThis is not in contradiction with the fact that the LTB models can account for supernova dimming for a central observer in a large underdensity [18].", "[42], post-Newtonian perturbation theory [12], and non-linear numerical simulation studies [22].", "This may be the underlying reason as to why various (analytical and simulated) models that locally have large departures from the FLRW reference model in terms of density contrasts have mean observables that are well described by the FLRW (or the Dyer-Roeder) distance–redshift relation [2], [19], [21], [37], [15], [1].", "We remark that a space-time model that conforms to the Dyer-Roeder approximation with an expanding FLRW background reference frame (i.e., $ \\mathfrak {H} > 0$ for the background) subject to the strong energy condition (i.e., $\\mathfrak {Q} \\ge 0$ for the background) obeys the Milne bound (REF ) for the average observed $d_A$ and $z$ of the model.", "It is therefore imperative to examine scenarios in which the Dyer-Roeder approximation is not accurate, but where a notion of statistical homogeneity and isotropy in the matter distribution is nevertheless present, if the observed dimming of supernovae is to be explained without dark energy.", "Such scenarios have been examined [28], [39], [26] and they would be of great interest to 
explore further." ], [ "Conclusion", "In this paper we have examined light propagation in the geometrical optics limit of general relativity, and examined from first principles the conditions under which cosmological observations can be made compatible with a Universe with ordinary matter and radiation only.", "The effective deceleration parameter $\\mathfrak {Q}$ , which determines the observed dimming of supernovae, was first discussed in detail in [11] where the implications for the interpretation of supernova data were also addressed.", "In this paper we have formulated Theorem REF , which, for positive redshifts, bounds the angular diameter distance as a function of redshift by the Milne-universe relation when the effective deceleration parameter $\\mathfrak {Q}$ satisfies the integral condition (REF ).", "This condition must in turn be systematically violated in order for the dimming of supernova to be compatible with observations.", "Systematic effects from inhomogeneities have previously been proposed to hold the potential to mimic dark energy through backreaction effects on global volume dynamics [5], such as the backreaction functionals proposed by Buchert in [3], [4], [6].", "An upper bound on the expansion rate of the cosmic fluid frame in terms of the Milne expansion rate was formulated by Räsänen for general-relativistic irrotational-dust space-times [32].", "This bound however, does not imply that the distance–redshift relation in such space-times is bounded from above by the Milne model – a counter example can easily be constructed with an LTB model, as discussed in the introduction.", "In this paper, we have analysed observables directly; concretely our theorem applies to any measurement probing angular diameter distance (or luminosity distance) and redshift.", "In order to arrive at a systematic excess dimming effect relative to that of a uniform universe model, photons must systematically `pick up' local irregularities of the space-time as they propagate through it.", "As discussed in the above, we do generally expect photons to systematically align with eigendirections of the kinematic variables of the cosmic reference frame, and such allignments hold the potential to cause systematic contributions in the distance–redshift relation that mimic dark energy in our Universe.", "However, such alignments do not guarantee a systematic excess dimming of supernovae, and non-trivial cancellations of different correction terms to the FLRW redshift function take place for some widely studied universe scenarios, explaining why the results of many existing simulations agree well with the FLRW prediction of redshift.", "Exceptions of models exist, where Copernican observers measure redshifts with systematic departures from those predicted by the background FLRW model.", "It remains a possibility that we live in a Universe that exhibits such properties.", "In the motivation of this paper, we have focused on the dimming of light from supernovae, but the derivations apply to any observational probe of redshift and angular diameter (or luminosity) distance.", "This work is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement ERC advanced grant 740021–ARTHUS, PI: Thomas Buchert).", "I would like to thank Thomas Buchert, Syksy Räsänen and David L. Wiltshire for useful comments." ] ]
2212.05568
[ [ "Authorship Identification of Source Code Segments Written by Multiple\n Authors Using Stacking Ensemble Method" ], [ "Abstract Source code segment authorship identification is the task of identifying the author of a source code segment through supervised learning.", "It has vast importance in plagiarism detection, digital forensics, and several other law enforcement issues.", "However, when a source code segment is written by multiple authors, typical author identification methods no longer work.", "Here, an author identification technique, capable of predicting the authorship of source code segments, even in the case of multiple authors, has been proposed which uses a stacking ensemble classifier.", "This proposed technique is built upon several deep neural networks, random forests and support vector machine classifiers.", "It has been shown that for identifying the author group, a single classification technique is no longer sufficient and using a deep neural network-based stacking ensemble method can enhance the accuracy significantly.", "The performance of the proposed technique has been compared with some existing methods which only deal with the source code segments written precisely by a single author.", "Despite the harder task of authorship identification for source code segments written by multiple authors, our proposed technique has achieved promising results evidenced by the identification accuracy, compared to the related works which only deal with code segments written by a single author." ], [ "Introduction", "Source code segment author identification is a major research topic in the field of software forensics.", "It has many uses such as plagiarism detection, law enforcement, copyright infringement etc.", "[8], [12].", "Frantzeskou[7] mentioned that source code author identification is useful against cyber attacks in the form of viruses, trojan horses, logic bombs, fraud, credit card cloning, and authorship disputes or proof of authorship in court.", "There are certain patterns that developers sub-consciously reflect in their codes based on their particular coding style while still following the guidelines, standards, rules, and grammars of a language or framework[12].", "These pieces of information can be used to identify the author of the source code segment.", "In recent years, open source software development has entered a new era.", "A lot of big companies like Google, Microsoft, and many others are maintaining their projects open source.", "Alongside, small and mid-level projects are being written by a group of authors.", "In these cases, trivial author identification schemes no longer work.", "When someone contributes to an open source project, the writing style of the original author of the source code segment is no longer unique and it makes the author identification task harder.", "Even worse case is when a project is equally contributed by a number of authors.", "The writing style is then the aggregation of all the authors.", "We aimed to solve this problem and proposed an approach to identify the author of a source code even when it is contributed by more than one author.", "In this paper, we have proposed an author identification technique using a stacking ensemble method composed of several deep neural networks(DNN), random forests and support vector machines(SVM).", "Authorship identification is the task of having some samples of code for several programmers and determining the likelihood of a new piece of code having been written by each 
programmer[9].", "As the name suggests, authorship identification of source code segments written by multiple authors is identifying the author-label when the number of authors of source code segment is more than one.", "This can be of two types.", "One is the source code segment can be written by mostly one author and have small contributions from several other authors.", "Another is a source code segment can be directly written by a group of authors and have a roughly equal contribution from each of them.", "Both of these happen in open-source software and projects which are very popular nowadays." ], [ "Motivation", "Authorship identification of source code segment has a vast application area including plagiarism detection, authorship dispute, software forensics, malicious code tracking, criminal prosecution, software intellectual property infringement, corporate litigation, and software maintenance[20], [19], [8], [13], [21], [16].", "In the case of authorship dispute, authorship identification can be a solution.", "Given the source code segment and the candidate owners, the likelihood of each candidate of being the author of the source code can be determined[12].", "Again, Kothari[12] identified that author identification is useful for detecting the author of the malicious code.", "Software companies can also use authorship identification system to keep track of programs and modules for better maintenance[21].", "Though source code segments are much more restrictive and formal than spoken or written language, they inhibit a large degree of flexibility[7].", "According to Shevertalov[18], using differences in the way programmers express their idea, their programming style can be captured.", "This programming style, in turn, can be used for author identification.", "Although, a large number of works already done regarding source code segment author identification, according to Frantzeskou[8], the future of source code segment author identification is in collaborative projects to which we aimed at.", "The remaining sections are organized as follows.", "Section  contains a briefing on background topics regarding this work.", "Section  contains a summary of the related works.", "In section , we discuss our author identification technique for multiple authors.", "In section , the experimental results of our proposed technique are analyzed and compared with that of some related works.", "Finally, in section , the conclusion is stated with possible future direction of this work." ], [ "Ensemble Method", "By combining several methods, ensembling method helps to improve the results of machine learning.", "An ensemble is often more accurate than any of the single classifiers in the ensemble.", "According to Maclin[14], an ensemble consists of a set of individually trained classifiers whose predictions are combined, while classifying instances, by the ensemble method.", "These meta-algorithm combines several machine learning techniques into one predictive model.", "In our work, we used stacking ensemble in order to improve our prediction performance." ], [ "Random Forests", "Random forest is an ensemble learning method where each classifier in the ensemble is a decision tree classifier.", "This collection of classifiers is called a forest.", "During classification, each of the decision trees gives its vote and the result is based on the majority of the votes." 
], [ "Related Works", "Numerous works are available on source code segment author identification using a variety of features and classifiers.", "However, very few of them use machine learning techniques to identify the author of source code segments.", "According to Ďuračík[6], there are several approaches to identify the author of source code segment.", "The first one is text-based and uses plain text as an input.", "The second level is token or metric based." ], [ "Text Based Approaches", "The first approach, which treats source code segment as plain text, is a form of natural language processing.", "This approach cannot make use of the programmatic structure of source code segment.", "Frantzeskou et al.", "[8] proposed a technique called Source Code Author Profiles(SCAP) for author identification.", "They generated byte level n-gram author profile and compared with previously calculated author profiles.", "Burrows[4] mentioned, the SCAP method truncates the author profiles that are greater than the maximum profile length causing a bias towards the truncated profiles.", "Burrows et al.", "[3] proposed an approach using information retrieval.", "They generated n-gram tokens from the source code segments and indexed them in a search engine to query the author of source code and return a ranking list of authors which matched the n-gram token of the source code segment with 67% accuracy." ], [ "Metric Based Approaches", "Frantzeskou[8] pointed out that metric-based author identification is divided into two steps.", "The first step is extracting the code metrics that represent the author's style and the second part is using those metrics to generate a model that is capable of labeling a source code segment by corresponding author name.", "However, a large amount of time is required to gather all possible metrics and examine to choose only the metrics responsible for differing the authors' style.", "Lange and Spiros[13] assumed that the code metrics histogram should vary from author to author as of their coding style.", "From a number of source code metrics, an optimum set was selected using genetic algorithms(GA) and used as input for the nearest neighbor(NN) classifier.", "This method achieved 55% accuracy.", "According to Yang[20], some of the features of this paper are unbounded.", "For example, the indentation category.", "Shevertalov el al.", "[18] proposed a technique based on GA.", "The metrics are extracted from the source code segment to make a histogram which is sampled using GA.", "The author profile is produced using categorized histogram samples.", "For files, they achieved 54% accuracy and for projects, they achieved 75% accuracy.", "Yang[20] mentioned that the details of the final feature set are not mentioned in this paper.", "So, the feature set is non-reproducible.", "Bandara and Wijayarathna[1] used the deep Neural Network for source code segment author identification.", "The converted source code metrics they used to feed a neural network are identical to that of Lange et al.[13].", "Their deep neural network consisted of three restricted Boltzmann machine (RBM) layers and one output layer.", "They achieved 93% accuracy.", "Zhang et al.", "[21] used SVM to identify the author of source code segment.", "They categorized their feature into four groups namely – programming layout feature, programming style feature, programming structure feature and programming logic feature.", "They used sequential minimal optimization(SMO) as the classifier for SVM and achieved 98% and 80% 
accuracy for two different datasets." ], [ "Author Identification of Source Codes Written by Multiple Authors", "Our developed author identification approach consists of four phases.", "Firstly, source code metrics are extracted from the source code segments in the training set.", "These extracted metrics are then converted to feature vectors.", "Secondly, these feature vectors are fed to five individual base classifiers along with the corresponding class-labels to train the author signatures into the base classifiers.", "In the case of open source contribution, a class-label means the owner of the source code segment, and in the case of a group of authors, the whole group is considered as the class-label.", "By author signature, the coding style of a particular class-label is meant.", "Caruana[5] showed that, in general, for classification problems, random forest, DNN, decision tree, and SVM are the top four algorithms.", "Hence, our chosen classifiers are DNN, random forest with CART decision trees[2], random forest with C4.5 decision trees[10], $C$ -SVM and $\nu $ -SVM.", "Thirdly, each of the classifiers generates posterior class-probabilities according to its predictions.", "These outputs are called meta-features.", "Meta-features are used as the input for a meta-classifier.", "Then the meta-classifier is trained based on the meta-features and the corresponding outputs.", "This approach is known as stacking ensemble.", "Another deep neural network is used as the meta-classifier.", "Figure REF shows a block diagram of the architecture of the stacking ensemble method we have designed.", "Figure: Block diagram of the architecture of the stacking ensemble method", "Finally, to identify the author of a new source code segment, that is, one from the test set, the same metrics are extracted from the test source code segments and are converted to feature vectors.", "These feature vectors are fed to the meta-classifier via the base classifiers.", "Using the experience from the training, the meta-classifier along with the base classifiers predicts the class labels of the test source code segments.", "Figure REF shows the block diagram of the proposed approach for author identification of source codes written by multiple authors.", "Figure: Block diagram of the proposed author identification approach", "In the following sub-sections, the building blocks of the author identification approach are described."
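A compact sketch of this stacking pipeline is given below. It uses scikit-learn stand-ins for brevity (an MLPClassifier in place of the Keras DNNs, and an entropy-criterion forest in place of C4.5, which scikit-learn does not provide), synthetic data in place of the metric-count feature vectors, and out-of-fold probabilities for the meta-features; it illustrates the structure of the method rather than reproducing the exact models used in this work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC, NuSVC

# Stand-in dataset: replace with the metric-count vectors X and author labels y.
X, y = make_classification(n_samples=1200, n_features=60, n_informative=25,
                           n_classes=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

base = [
    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
    RandomForestClassifier(n_estimators=100, criterion="gini", random_state=0),
    RandomForestClassifier(n_estimators=100, criterion="entropy", random_state=0),
    SVC(C=1.0, probability=True, random_state=0),
    NuSVC(nu=0.3, probability=True, random_state=0),
]

def meta_features(models, X_fit, y_fit, X_out=None):
    """Posterior class-probabilities of each base model, concatenated."""
    cols = []
    for m in models:
        if X_out is None:      # out-of-fold probabilities on the training set
            cols.append(cross_val_predict(m, X_fit, y_fit, cv=3,
                                          method="predict_proba"))
        else:                  # refit on the full training set, apply to the test set
            cols.append(m.fit(X_fit, y_fit).predict_proba(X_out))
    return np.hstack(cols)     # shape: n_samples x (5 * n_classes)

Z_tr = meta_features(base, X_tr, y_tr)
Z_te = meta_features(base, X_tr, y_tr, X_out=X_te)

# Meta-classifier (a small MLP here, standing in for the Keras meta DNN).
meta = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
meta.fit(Z_tr, y_tr)
print("stacked test accuracy:", meta.score(Z_te, y_te))
```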
], [ "Dataset", "Some careful considerations are needed while choosing the dataset.", "Data must be collected from a diverse population of programmers and should provide enough information about the authors so that a clear distinction can be computed from author to author and valid comparison of their programming style can be made.", "In addition, the dataset must be close to real-world data as well as open for academic study[13].", "In our study, we have generated our dataset based on open source codes from github.com.", "All the source codes have a permissive license like MIT or BSD.", "The dataset contains 6063 python source code segments from 8 authors/ author groups which are considered as individual classes.", "Each source code segment contains roughly 226 lines on average.", "Source code segments of each author are roughly split into 2:1 ratio to make the training and testing set.", "Each class label consists of authors and contributors.", "By author, we mean the true owner of the projects.", "This could be a single author or a group of authors.", "By contributors, we mean a group of people who are not the owner of the project but willingly contribute to the project by writing or editing a segment of it.", "The number of authors and the number of contributors per class-label is listed in table REF .", "Table: Number of authors and contributors for each class" ], [ "Metric Extraction", "Previously, Shevertalov, Lange, Bandara, and Zhang[18], [13], [1], [21] used source code metrics for author identification.", "From a set of probable code metrics, Lange selected the optimal set of code metrics using the genetic algorithm.", "Bandara used almost the same set of source code metrics.", "We have used the same set of metrics for our author identification approach only except the access modifier metric.", "The access modifier feature is present only in a limited number of programming languages and makes the whole system language dependent.", "Table REF shows the set of metrics to be used and corresponding descriptions.", "Table: Set of code metrics and descriptionsAfter extracting the metrics, we have counted the number of occurrences for each possible values for each of the metrics.", "For example, for underscore metrics, we have counted the number of words with no underscore, one underscore, two underscores etc.", "These counts have been fed to the base classifiers." ], [ "Base Classifiers", "There are total five base classifiers in our author identification system.", "They are – DNN, random forest based on CART, random forest based on C4.5, $C$ -SVM and $\\nu $ -SVM.", "Each of the base classifiers is described below:" ], [ "Deep Neural Network", "The DNN model used as the base classifier consists of 14 layers.", "Data are fed to the DNN as batches of 32 entries.", "They are one input layer, followed by eight fully connected layers, a dropout layer, a fully connected layer, a dropout layer, a fully connected layer and finally the output layer.", "In the fully connected layers, ReLU activation function and in the output layer softmax activation function are used.", "Categorical cross-entropy is chosen as the loss function.", "Adam[11] optimizer is used to optimize the network." 
], [ "Random Forest", "The second base classifier is a random forest with one hundred decision trees.", "Classification and Regression Tree(CART)[2] algorithm is used to build the trees which selects the split node based on Gini impurity.", "The third base classifier is another random forest with one hundred decision trees.", "Decision trees in the third base classifier are built with the C4.5[10] algorithm.", "This algorithm chooses the split node based on the entropy ratio." ], [ "Support Vector Machine", "The fourth base classifier is a $C$ -support vector classifier.", "It is a support vector machine where $C$ is a penalty parameter for the error term.", "The fifth base classifier is a $\\nu $ -support vector classifier.", "It is a support vector machine where $\\nu $ is the upper bound of training error and the lower bound of the number of support vectors." ], [ "Meta Classifier", "We have used another deep neural network as the meta-classifier.", "The outputs of the base classifiers (meta-features) are fed to the meta-classifier to learn the mapping from the meta-features to the actual output.", "The neural network consists of 19 layers.", "They are one input layer, followed by eight fully connected layers, a dropout layer, two fully connected layers, a dropout layer, a fully connected layer, a dropout layer, a fully connected layer, a dropout layer, a fully connected layer, a dropout layer, and finally the output layer.", "The output from this output layer is the final output of our author identification system for source code segment written by multiple authors.", "The activation functions of the network are ReLU for fully connected layers and softmax for the output layer.", "The loss function used in the meta-classifier is categorical cross-entropy.", "Stochastic Gradient Descent(SGD) is used as the optimizer of the meta-classifier.", "Figure: Steps for training the stacking ensemble systemTable: Parameter values of the classifiersTable: Accuracy of the base classifiersTable: Comparison among the methods for source code segment author identification" ], [ "Training", "We have implemented our author identification system for source code segment written by multiple authors in multi-class classification category.", "Here, a unique list of authors(or groups of authors) of the source code segments in the training set is treated as classes.", "The author identification system produces its confidence for each class of being the actual class of given source code.", "The actual author is expected to have the highest confidence.", "Roughly, 67% source code segments from each class formed the training dataset and rest are used for testing.", "The training set contains 4034 files and the test set contains 2039 source code segments.", "The training stage of our system is divided into three phases – feature extraction from the source code segments, training the base classifiers and training the meta-classifier.", "Figure REF shows the steps followed in our author identification system for source code segment written by multiple authors.", "First of all, the source code metrics mentioned in table REF are extracted from source code segments.", "Then the extracted metrics are converted to feature vectors as mentioned in section REF .", "These feature vectors are fed to each of the base classifiers as input.", "The base classifiers run according to their own learning algorithm to learn to identify the writing style of each class.", "During this training phase, several configurations of each of the 
base classifiers, specially DNN, are used to find out which configuration works best for the training set.", "After completing the training of each of the base models, the posterior probability for each input in the training set is generated.", "This produced a $5 \\times |classes|$ sized feature vector for each of the input feature vectors where $|classes|$ is the number of classes.", "These feature vectors are known as meta-features.", "Meta features are fed to the meta-classifier along with the class labels through which the meta-classifier learned to predict the actual class from the meta-features." ], [ "Experimental Setup", "While implementing our author identification system for source code segment written by multiple contributors, we have used keras as the framework for deep neural networks and Scikit Learn[17] as the library for general purpose machine learning.", "For data pre-processing and visualization, we have used numpy and pandas[15] library.", "We have developed a feature extractor that extracts the features mentioned in table REF from the source codes.", "For $C$ -SVM, the parameter $C$ is a penalty for the error term.", "For $\\nu $ -SVM, the parameter $\\nu $ is an upper bound to the training error and lower bound to the number of support vectors.", "During the experiment, we found that for both the random forests, a hundred trees were sufficient to converge to the highest accuracy.", "After numerous iterations, we reached to a decision that the set of values stated in table REF classifies the source code segments most accurately.", "Accuracy and f1-score were used to evaluate the accuracy of our method.", "Accuracy is the ratio between the number of correctly identified samples and the number of total samples.", "F1-score is the harmonic mean of precision and recall.", "Micro averaging was used to compute the f1-score." ], [ "Results of The Base Classifiers", "Table REF contains the accuracies for the five base models of our stacking ensemble method." ], [ "Results of The Meta Classifier", "After training the meta-classifier by the meta-features, we have achieved 87% accuracy with f1-score $0.86$ .", "Identifying the authorship of source codes is more difficult when the number of authors is more than one as the writing style of the source code is then inconsistent from segment to segment.", "Table REF shows a comparison between the type of features, language independence, the capability of handling multiple authorship, number of classes and the total number of source code segments used in training and testing.", "From that table, we can see that even after dealing with source code segments written by multiple authors, our method has achieved an accuracy that is pretty close to that of the methods that deal with single authors.", "Our chosen set of metrics is compact and is still able to achieve a satisfactory accuracy.", "Alongside a number of works suffer from choosing a set of metrics that are not language independent.", "So, the main contribution of this work is the identification of multiple authors using language independent set of metrics." 
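As referenced above, the end-to-end stacking pipeline (five base classifiers whose posterior probabilities are concatenated into $5 \\times |classes|$ meta-features for a meta-classifier) can be summarized by the following minimal sketch. It is an illustration only, not the reported configuration: the hyperparameter values are placeholders, scikit-learn's MLPClassifier stands in for the Keras deep neural networks, and the criterion="entropy" random forest only approximates C4.5-style splitting (scikit-learn uses information gain rather than the gain ratio).

```python
# Minimal sketch of the stacking scheme described above (placeholder hyperparameters;
# MLPClassifier is a stand-in for the Keras DNNs used in the paper).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC, NuSVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split


def build_base_classifiers():
    return [
        MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500),      # base DNN (stand-in)
        RandomForestClassifier(n_estimators=100, criterion="gini"),     # CART-style forest
        RandomForestClassifier(n_estimators=100, criterion="entropy"),  # C4.5-like forest
        SVC(C=1.0, probability=True),                                   # C-SVM
        NuSVC(nu=0.5, probability=True),                                # nu-SVM
    ]


def fit_stacking(X, y):
    """Train the base classifiers, build 5*|classes| meta-features, train the meta-classifier."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.67, stratify=y, random_state=0)
    bases = build_base_classifiers()
    for clf in bases:
        clf.fit(X_tr, y_tr)
    # Meta-features: concatenated posterior probabilities of the five base models.
    meta_tr = np.hstack([clf.predict_proba(X_tr) for clf in bases])
    meta_te = np.hstack([clf.predict_proba(X_te) for clf in bases])
    meta = MLPClassifier(hidden_layer_sizes=(256, 128, 64), max_iter=500)  # meta DNN (stand-in)
    meta.fit(meta_tr, y_tr)
    return bases, meta, meta.score(meta_te, y_te)
```

As in the description above, the meta-features here are built from the base models' predictions on the training set itself; a cross-validated variant (for instance scikit-learn's StackingClassifier) would reduce the risk of the meta-classifier overfitting to overconfident base predictions.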
], [ "Conclusion", "Here, we have proposed a new approach for identifying the author of source code segment where the number of authors of the source code segment is more than one.", "The main challenge of this work is to select the base estimators from a large number of possible combinations.", "Again, as several classifiers need to be trained, each classifier needs to be fine-tuned individually to produce a good final result.", "On the other hand, the problem of identifying the authorship of source code segments is harder when the number of authors is more than one.", "We have developed a stacking ensemble classifier that consists of five base classifiers and a meta-classifier which uses a relatively small set of code metrics that are relatively easy to compute and language independent as well.", "In spite of the fact that our stacking ensemble method achieved a satisfactory accuracy, this still can be improved.", "Even though our code metrics are language independent, we only tested with python source code segments.", "Future works may test on other languages and check how the set of metrics works for other languages.", "Other sets of metrics can also be examined to see how they contribute to the writing style of source code segments." ] ]
2212.05610
[ [ "Black hole solutions in the quadratic Weyl conformal geometric theory of\n gravity" ], [ "Abstract We consider numerical black hole solutions in the Weyl conformal geometry, and its associated conformally invariant Weyl quadratic gravity.", "In this model Einstein gravity (with a positive cosmological constant) is recovered in the spontaneously broken phase of Weyl gravity, after the Weyl gauge field ($\\omega _{\\mu}$) becomes massive through a Stueckelberg mechanism, and it decouples.", "As a first step in our investigations we write down the conformally invariant gravitational action, containing a scalar degree of freedom, and the Weyl vector.", "The field equations are derived from the variational principle in the absence of matter.", "By adopting a static spherically symmetric geometry, the vacuum field equations for the gravitational, scalar, and Weyl fields are obtained.", "After reformulating the field equations in a dimensionless form, and by introducing a suitable independent radial coordinate, we obtain their solutions numerically.", "We detect the formation of a black hole from the presence of a Killing horizon for the timelike Killing vector in the metric tensor components, indicating the existence of the singularity in the metric.", "Several models, corresponding to different functional forms of the Weyl vector, are considered.", "An exact black hole model, corresponding to a Weyl vector having only a radial spacelike component, is also obtained.", "The thermodynamic properties of the Weyl geometric type black holes (horizon temperature, specific heat, entropy and evaporation time due to Hawking luminosity) are also analyzed in detail." ], [ "Introduction", "One of the fundamental results in theoretical relativistic astrophysics indicates that compact objects having mass functions greater than 3-4 solar masses must be black holes [1].", "Black hole solutions are among the first that have been considered in the framework of general relativity, with the first vacuum solution of the Einstein gravitational field equations obtained more than one hundred years ago [2].", "However, the first firm observational evidence for the existence of black holes was found relatively recently, beginning with the astronomical discoveries of the 1970's.", "One of the first black hole candidates was observed in a binary system, consisting of the supergiant star HD 226868, associated with the High Mass X-ray Binary object Cyg X-1.", "The determination of the mass function $f\\left(M_x\\right)$ , giving the mass $M_x$ of the compact object Cyg X-1 in terms of the companion star $M_c$ , and of the inclination angle $i$ , already showed that $M_x>4M_{\\odot }$ [3], indicating that Cyg X-1 may be a stellar mass black hole.", "In the Milky Way alone the total number of stellar mass black holes (in binaries or isolated) is estimated to be of the order of 100 million [4].", "On the other hand, compact objects consisting of exotic matter (axion stars, Bose-Einstein Condensate stars, quark stars etc.)", "have many properties very similar to those of the stellar type black holes [5], [6].", "Hence, finding effective methods for distinguishing between different types of compact object is a fundamental problem in present day astronomy and astrophysics.", "An important, and intriguing class of black holes, is represented by the supermassive black holes, which are located at the center of every massive galaxy [7].", "A supermassive black hole accretes during its lifetime a significant amount of matter, 
which leads to the formation of an accretion disk around the compact object.", "For a review of the supermassive black hole properties see [8].", "If the supermassive black hole is in the accreting state, it is called an active galactic nucleus.", "The most important observational evidence for the existence of supermassive black holes is provided by the Very-Long Baseline Interferometry (VLBI) imaging of molecular ${\\rm H_2O}$ masers.", "The VLBI imaging, using Doppler shift measurements that assume a Keplerian type motion of the masering source, led to the very precise estimation of the mass of the black hole at the center of the active galaxy NGC4258, which was obtained as $3.6 \\times 10^7M_{\\odot }$ [9].", "The closest to Earth supermassive black hole is the variable infrared, radio and X-ray source Sgr A*, located at the center of the Milky Way [10], [11], [12].", "The investigation of the orbital motion of the stars around Sgr A* lead to further confirmations of the predictions of the general theory of relativity.", "An important recent advance in black hole physics was the observation by the Event Horizon Telescope collaboration of the first image of the supermassive black hole M87* [13], [14], [15].", "These observations are consistent with the Kerr-like nature of the Sgr A* black hole, but still deviations from the predictions of general relativity cannot be excluded a priori.", "Hence, finding black hole solutions is essential for the theoretical understanding, and the observational testing of gravitational theories.", "The first vacuum solution of the general relativistic field equations [2], found by Karl Schwarzschild in 1916, was obtained by considering a static, spherically symmetric central compact object.", "There is a large number of exact vacuum and black hole solutions in general relativity, and in its extensions.", "For a review of the exact solutions of the Einstein gravitational field equations see [16].", "One effective method to test black hole properties, as well as the nature of the gravitational force, is by using the electromagnetic emissivity properties of thin accretion disks that form around compact astrophysical type objects [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36].", "For a review of the observational possibilities of testing black hole geometries by using electromagnetic radiation see [37].", "Black hole solutions have been also investigated in modified theories of gravity.", "Some investigations in this field can be found in the works [38], [39], [40], [41], [43], [42], [44], [45], [46], [47], [48], [49], [50], [51], [52], [53], [54], [55], [56], [57], [58], [59], [60], [61], [62], [63], [64], [65], [66], [67], [68], [69], [70], [71], [72], [73], [74], [75].", "Static spherically symmetric solutions of the gravitational field equations obtained by including four-derivative terms of the form $\\int {R_{\\mu \\nu }R^{\\mu \\nu }\\sqrt{-g}d^4x}$ and $\\int {R^2\\sqrt{-g}d^4x}$ , respectively, was considered in [38].", "The static, linearized solutions of the field equations are combinations of the Newtonian and Yukawa potentials, while the acceptable static metric solutions in the full nonlinear theory are regular at the origin.", "Static black hole solutions in Einstein's gravity with additional quadratic curvature terms were obtained in [43], with the use of a Lichnerowicz type theorem that simplifies the analysis through the condition of a vanishing Ricci scalar curvature.", 
"The existence of black hole solutions was proven by using numerical methods, and their thermodynamic properties were also discussed.", "A family of exact black hole solutions on a static spherically symmetric background in second-order generalized Proca theories with derivative vector-field interactions coupled to gravity was obtained in [50].", "The intrinsic vector-field derivative interactions give rise to a secondary hair, induced by nontrivial field profiles.", "The deviation from General Relativity is most significant around the horizon.", "The properties of black holes on a static and spherically symmetric background in U (1) gauge-invariant scalar-vector-tensor theories with second-order equations of motion were studied in [62].", "It was shown that in shift-symmetric theories invariant under the shift of scalar $\\phi \\rightarrow \\phi +c$ , new hairy black hole solutions do exist.", "Vacuum static spherically symmetric solutions in the hybrid metric-Palatini gravity theory, which is a combination of the metric and Palatini $f (R )$ formalisms unifying local constraints at the Solar System level and the late-time cosmic acceleration, were considered in [67], by adopting the scalar-tensor representation of the theory, in which the scalar-tensor definition of the potential can be represented as the solution of a Clairaut differential equation.", "To study the behavior of the metric functions and of the scalar field a numerical approach was adopted.", "The static and spherically symmetric solutions in a gravity theory that extends the standard Hilbert-Einstein action with a Lagrangian constructed from a three-form field $A_{\\alpha \\beta \\gamma }$ , which is related to the field strength and a potential term, were investigated in [68].", "The field equations were obtained explicitly for a static and spherically symmetric geometry in vacuum.", "Several models, corresponding to different functional forms of the three-field potential, including the Higgs and exponential types, were considered.", "The effects of the quantum fluctuations on the spherically symmetric static black hole solutions were investigated in [69], by considering a set of modified field equations, and by assuming that the quantum correction tensor is given by the coupling of a scalar field to the metric tensor.", "The formation of a black hole was detected from the presence of a singularity in the metric tensor components.", "Several models, corresponding to different functional forms of the scalar field potential, were considered.", "The thermodynamic properties of the black hole solutions were also investigated.", "Several classes of exact and perturbative spherically symmetric solutions in $f(T,B)$ gravity theories were considered in [72].", "General methods and strategies that allow to find spherically symmetric static vacuum solutions in modified teleparallel theories of gravity, like, for example, generalized Bianchi identities, were also presented.", "Charged black hole and wormhole solutions in the Einstein-Maxwell system in the presence of a massless, real scalar field were classified, and studied in [73].", "Using a conformal transformation, the static, spherically symmetric possible structures in the minimally coupled system were analyzed.", "Besides wormholes and naked singularities, only a restricted class of black holes does exist, exhibiting a horizon with an infinite surface, and a timelike central singularity.", "The birth of general relativity did have a deep effect not only on theoretical and 
observational physics, but also on mathematics.", "Soon after Einstein proposed his gravitational field equations, Weyl [76], [77] proposed a geometric extension of the Riemannian geometry (on which Einstein's theory is based), in which the covariant divergence of the metric tensor does not vanish identically, but satisfies a relation of the form $\\nabla _{\\lambda }g_{\\alpha \\beta }=Q_{\\lambda \\alpha \\beta }$ , where $Q_{\\lambda \\alpha \\beta }$ is the nonmetricity tensor.", "In the initial formulation of its geometry, Weyl assumed that $Q_{\\lambda \\alpha \\beta }=\\omega _{\\lambda }g_{\\alpha \\beta }$ , where $\\omega _{\\lambda }$ was assumed to be the electromagnetic potential.", "By using this geometric interpretation, Weyl tried to construct a unified theory of gravitation and of the electromagnetic field, which was severely criticized by Einstein.", "Due to this criticism, later this unified theory was completely abandoned.", "However, Weyl's geometry has many attractive features, and it did open some new perspectives in physics, mainly due to the concept of conformal invariance, on which Weyl geometry is based.", "For a review of Weyl geometry and its applications see [78].", "The present day physical applications of Weyl geometry are obtained by adopting the essential assumption that all physical laws must satisfy the principle of conformal invariance.", "A local conformal transformation is defined as $d\\tilde{s}^2=\\Sigma ^n(x)g_{\\alpha \\beta }dx^{\\alpha }dx^{\\beta }=\\tilde{g}_{\\alpha \\beta }dx^{\\alpha }dx^{\\beta }$ , where $\\Sigma (x)$ is the conformal factor, and $n$ is the Weyl charge.", "A generalization of Weyl's gravity was proposed by Dirac [79], [80], and this approach was further investigated in [81], [82].", "In Weyl-Dirac type theories ordinary baryonic matter is generated at the very beginning of the cosmological evolution as a direct consequence of the presence of the Dirac gauge function.", "On the other hand, in the late Universe, Dirac’s gauge function creates the dark energy that triggers the transition to a de Sitter type accelerating phase.", "In [83] it was suggested that the conformal invariance of the string theory has its origin in the worldvolume diffeomorphism invariance of the membrane theory, a result obtained by postulating an ansatz for the dimensional reduction that leaves the conformal factor as an “integration constant”.", "For the role of conformal symmetry in field theory and quantum gravity see [84] and [85], respectively.", "Conformally invariant gravitational theories can be constructed by using actions constructed with the help of the Weyl tensor $C_{\\alpha \\beta \\gamma \\delta }$ , and which can be obtained from the action $S=-(1/4)\\int {C_{\\alpha \\beta \\gamma \\delta }C^{\\alpha \\beta \\gamma \\delta }\\sqrt{-g}d^4x}$ [86], [87], [88], [89], [90], [91].", "Gravitational theories constructed with the help of such actions are called conformally invariant, or Weyl gravity type theories.", "Another interesting application of Weyl geometry is related to the $f(Q)$ type gravity theories, in which it is assumed that the basic quantity describing the gravitational field is the nonmetricity [92], [93], [94], [95], [96], [97].", "$f(Q)$ or $f(Q,T)$ type gravity theories may represent an attractive alternative to standard general relativity, and to the standard $\\Lambda $ CDM cosmological model.", "The idea of conformal invariance also plays a central role in the Conformal Cyclic Cosmology (CCC) theoretical model [98], 
in which the Universe exists as o collection of eons, which are geometric structures corresponding to time oriented spacetimes.", "The conformal compactification of eons have spacelike null infinities.", "Different aspects of the CCC model were investigated in [99], [100], [101].", "In [102] it was proposed that the conformal symmetry is an exact symmetry of nature that is spontaneously broken, and that it could be as important as the Lorentz invariance of physical laws.", "The breaking of the conformal invariance could provide a mechanism that would allow the physical understanding of the small-scale structure of the gravitational interaction, and also of the Planck scale physics.", "A gravitational theory, assuming that local conformal symmetry is an exact, but spontaneously broken symmetry, was introduced in [103].", "A conformally invariant Weyl type gravity, developed by fully using the geometrical structures of Weyl geometry, was investigated, in both metric and Palatini formulations, in [104], [105], [106], [107], [108], [109], [110], [111], [112], [113].", "The physical implications of the theory for the understanding of the elementary particle physics, as well as its cosmological aspects related to the very early evolution of the Universe were studied in detail.", "The Weyl geometry inspired quadratic action has a spontaneous symmetry breaking through the Stueckelberg mechanism, which leads to the important result that the Weyl gauge field becomes massive.", "Hence, in this approach, one reobtains the standard Hilbert-Einstein action of general relativity in the presence of a positive cosmological constant.", "Moreover, a Proca type term for the massive Weyl gauge field also appears in the action [104].", "A Weyl-geometric invariant Lagrangian without ghosts, given by $L&=& \\sqrt{-g}\\,\\Big \\lbrace - \\frac{\\xi _j}{2} \\, \\Big [\\frac{1}{6}\\, \\, \\phi ^2_j\\,R+ g^{\\mu \\nu } \\,\\partial _\\mu \\phi _j\\, \\partial _\\nu \\phi _j\\Big ]\\nonumber \\\\&&+(1+\\xi _j)\\, \\frac{1}{2} g^{\\mu \\nu } \\tilde{D}_\\mu \\phi _j\\,\\tilde{D}_\\nu \\phi _j-V(\\phi _j)\\Big \\rbrace .$ was investigated in [105], where a potential term $V(\\phi _j)$ for the scalars $\\phi _j$ was also considered, with $V$ given by a homogeneous function of the form $V(\\phi _j)=\\phi _k^4\\, V(\\phi _j/\\phi _k), k={\\rm fixed}$ .", "It turns out that in the Weyl geometric approach a successful description of the early Universe inflation can be achieved if one of the scalar fields is considered as the inflaton field [104], [105], [106].", "Inflation in the Weyl-geometry inspired gravity in the presence of a scalar field gives similar results as in the Starobinsky model [116], which is recovered in the limit of the vanishing of the non-minimal coupling term [106].", "Hence, Weyl geometry could play a fundamental role in the early Universe, once one assumes that the effective theory at short distances is conformally invariant.", "Moreover, in [107] it was shown that after a particular Weyl gauge transformation (or gauge fixing), Weyl conformal geometry acquires a geometric Stueckelberg mechanism, which is naturally built in the theory.", "This mechanism is broken spontaneously, and leads to the appearance of the Riemannian geometry.", "On the other hand, the Stueckelberg mechanism conserves the total number of the degrees of freedom, only rearranging them.", "In the Palatini formalism, quadratic Weyl gravity with Lagrangian $R^2+R_{\\mu \\nu }^2$ , was studied in [108].", "In this approach, the 
connection and the metric are considered as independent variables, with the action having a gauged scale symmetry.", "For non-minimally coupled Higgs-like fields, the theory can describe well inflation.", "In [109] a comparative study of inflation in the original Weyl quadratic gravity, and in a theory obtained by replacing the Weyl connection by its Palatini counterpart, was considered.", "After the Weyl vector becomes massive via the Stueckelberg mechanism, the Planck scale, the metricity condition, and the Einstein-Proca action of the Weyl field arise in the broken phase.", "For large Higgs fields, inflation is also possible.", "In [111] the cosmological dynamics of the Weyl geometry inspired quadratic gravitational model was investigated in detail.", "The coupling of matter to geometry in Weyl geometry inspired conformal quadratic gravity, by assuming a coupling term of the form $L_m\\tilde{R}^ 2$ , where $L_m$ is the ordinary matter Lagrangian, and $\\tilde{R}$ is the Weyl scalar, was investigated in [113].", "The coupling explicitly satisfies the conformal invariance of the theory.", "By expressing $\\tilde{R}^2$ with the help of an auxiliary scalar field and of the Weyl scalar, the gravitational action can be linearized, leading in the Riemannian space to a conformally invariant $f\\left(R,L_m\\right)$ type theory, with the matter Lagrangian nonminimally coupled to the Ricci scalar.", "The cosmological implications of the theory were also considered for the case of a flat, homogeneous and isotropic geometry.", "The model can give a good description of the observational data for the Hubble function up to a redshift of the order of $z \\approx 3$ .", "It is the goal of the present work to investigate black hole solutions in the Weyl type geometric gravity theories, by following the approach introduced and developed in [104], [105], [106], [107], [108], [109], [110], [111], [112], [113].", "We assume, as a starting point of our investigations, that the background geometry of the spacetime is of Weyl type.", "Moreover, we also require the conformal invariance of any gravitational theory.", "To implement these requirements, we adopt the simplest possible model of a conformally invariant Weyl geometric gravity model, by assuming that the Lagrangian density is constructed from the square of the Weyl scalar, and of the electromagnetic type tensor $F_{\\mu \\nu }$ only.", "This Lagrangian density can be linearized with the help of an auxiliary scalar field, and finally it can be written as an Einstein-Proca type Lagrangian in the ordinary Riemannian space, in the presence of a nonminimal coupling between the scalar field and the Ricci scalar, and the Weyl vector, respectively.", "As the next step in our study, after obtaining, from the considered variational principle, the gravitational field equations, we consider a spherically symmetric static metric, and write down the corresponding vacuum field equations.", "Due to their mathematical complexity, and of their strong nonlinear nature, the differential equations describing the vacuum solutions of the theory can be generally investigated only numerically.", "We consider three classes of models, corresponding to the three possible choices of the Weyl vector field, assumed to have only one timelike component, only one radial spacelike component, and both components.", "The field equations are rewritten in a dimensionless form, and, after introducing a new independent radial coordinate, defined as the inverse of $r$ , we investigate their 
solutions numerically.", "Thus, three distinct classes of black hole models are obtained.", "We detect the presence of an event horizon from the appearance of the singularities in the metric tensor components.", "An exact solution of the gravitational field equations, obtained in the presence of the radial component of the Weyl vector, is also obtained.", "The thermodynamic properties of the Weyl geometric type black holes, including their horizon temperature, specific heat, entropy, as well as the evaporation time due to Hawking luminosity, are also considered in detail by using numerical methods for all considered cases.", "The present paper is organized as follows.", "We review the foundations of the Weyl geometry, we introduce the gravitational action, and we obtain the general field equations of the Weyl geometric gravity in Section .", "The spherically symmetric static field equations in the Weyl geometric model of gravity are obtained in Section  for three distinct choices of the functional form of the Weyl vector $\\omega ^{\\mu }$ .", "Numerical black hole solutions in Weyl geometric gravity, corresponding to the different choices of the Weyl vector, are considered in Section .", "An exact solution of the static spherically symmetric field equations in the presence of a Weyl vector having only a radial spacelike component is also obtained.", "The thermodynamic properties of the obtained black hole solutions are considered in Section .", "We discuss and conclude our results in Section .", "In the present paper we use a system of units with $c = 1$ ." ], [ "Gravitational field equations and equations of motion in Weyl conformal gravity", "We begin the present Section with a brief discussion of the basic elements of Weyl geometry.", "Then we will proceed to introduce the simplest Weyl-geometry inspired conformally invariant gravitational action.", "After linearizing the quadratic Weyl action, we obtain the gravitational field equations describing the dynamical evolution of the metric, of the auxiliary scalar field, and of the Weyl vector, respectively, in their general form." ], [ "A quick introduction to Weyl geometry", "A basic property of a manifold is the existence of an intrinsic metric that gives the distance between two infinitesimally closed points.", "In Riemannian geometry, the distance is preserved under the parallel transport with respect to the metric-compatible connection, called the Levi-Civita connection, or the Christoffel symbols, if we refer to its components.", "Shortly after Einstein and Hilbert obtained the correct form of the general relativistic field equations, with Hilbert introducing the variational principle for their derivations, H. 
Weyl [76], [77] proposed the existence of a conformal transformation in every point of the spacetime manifold.", "In the case of the metric, such a conformal transformation takes the form $\\tilde{g}_{\\mu \\nu } =\\Sigma ^{n}(x)g_{\\mu \\nu }$ , where $n$ is the Weyl charge.", "In the following we will consider only the case $n=1$ .", "In Weyl geometry, under parallel transport, the length $\\ell $ of a vector varies when it is parallelly transported by an infinitesimal displacement between the points $x^\\mu $ and $x^\\mu + \\delta x^\\mu $ .", "The change in the length of a vector is given by $ \\delta \\ell = \\ell \\omega _\\mu \\delta x^\\mu ,$ where $\\omega _\\mu $ is the Weyl vector field.", "In the general case one can introduce the nonmetricity $Q_{\\lambda \\mu \\nu }$ of the Weyl geometry, via the covariant derivative of the metric tensor, according to the definition, $& \\tilde{\\nabla }_\\lambda g_{\\mu \\nu } = - \\alpha \\omega _\\lambda g_{\\mu \\nu }\\equiv Q_{\\lambda \\mu \\nu },$ where $\\alpha $ is the Weyl gauge coupling constant.", "For the connection of the Weyl geometry, from the nonmetricity condition (REF ) we obtain the following expression, $& \\tilde{\\Gamma }^\\lambda _{\\mu \\nu } = \\Gamma ^\\lambda _{\\mu \\nu } + \\frac{1}{2}\\alpha \\Big [ \\delta ^\\lambda _\\mu \\omega _\\nu + \\delta ^\\lambda _\\nu \\omega _\\mu -g_{\\mu \\nu } \\omega ^\\lambda \\Big ],$ where $ \\Gamma ^\\lambda _{\\mu \\nu }$ is the standard Levi-Civita connection associated to the metric $g$ .", "In the following, all geometrical and physical quantities defined in the Weyl geometry will be denoted by a tilde.", "By using the Weyl connection one can easily construct the curvature tensor $\\tilde{R}_{\\mu \\nu \\sigma }^{\\lambda }$ , defined as, $\\tilde{R}_{\\mu \\nu \\sigma }^{\\lambda }=\\partial _{\\nu }\\tilde{\\Gamma }_{\\mu \\sigma }^{\\lambda }-\\partial _{\\sigma }\\tilde{\\Gamma }_{\\mu \\nu }^{\\lambda }+\\tilde{\\Gamma }_{\\rho \\nu }^{\\lambda }\\tilde{\\Gamma }_{\\mu \\sigma }^{\\rho }-\\tilde{\\Gamma }_{\\rho \\sigma }^{\\lambda }\\tilde{\\Gamma }_{\\mu \\nu }^{\\rho },$ its first contraction, $\\tilde{R}_{\\mu \\nu }=\\tilde{R}_{\\mu \\lambda \\nu }^{\\lambda },\\tilde{R}=g^{\\mu \\sigma }\\tilde{R}_{\\mu \\sigma },$ and its second contraction (the Weyl scalar), respectively, given by $\\tilde{R} = g^{\\mu \\nu } \\Big ( \\partial _\\rho \\tilde{\\Gamma }^\\rho _{\\mu \\nu } -\\partial _\\nu \\tilde{\\Gamma }^\\rho _{\\mu \\rho } + \\tilde{\\Gamma }^\\rho _{\\mu \\nu }\\tilde{\\Gamma }^\\sigma _{\\rho \\sigma } - \\tilde{\\Gamma }^\\sigma _{\\mu \\rho }\\tilde{\\Gamma }^\\rho _{\\nu \\sigma } \\Big ),$ or, equivalently, by $\\tilde{R}=R-3n\\alpha \\nabla _{\\mu }\\omega ^{\\mu }-\\frac{3}{2}\\left( n\\alpha \\right) ^{2}\\omega _{\\mu }\\omega ^{\\mu },$ where $R$ is the Ricci scalar defined in the Riemannian geometry.", "Another important geometrical and physical quantity, the Weyl tensor, is defined according to $\\tilde{C}_{\\mu \\nu \\rho \\sigma }^{2}=C_{\\mu \\nu \\rho \\sigma }^{2}+\\frac{3}{2}\\left( \\alpha n\\right) ^{2}\\tilde{F}_{\\mu \\nu }^{2},$ where $C_{\\mu \\nu \\rho \\sigma }$ is the Weyl tensor as introduced in the standard Riemannian geometry [120].", "$C_{\\mu \\nu \\rho \\sigma }^{2}$ can be obtained in terms of the Riemann and Ricci tensors, and of the Ricci scalar, as $C_{\\mu \\nu \\rho \\sigma }^{2}=R_{\\mu \\nu \\rho \\sigma }R^{\\mu \\nu \\rho \\sigma }-2R_{\\mu \\nu }R^{\\mu \\nu }+\\frac{1}{3}R^{2}.$ If we apply a conformal 
transformation with a conformal factor $\\Sigma $ in one point, the variances of the metric tensor, of the Weyl field, and of a scalar field $\\phi $ are given by, $ \\hat{g}_{\\mu \\nu } = \\Sigma g_{\\mu \\nu }, \\hat{\\omega }_\\mu = \\omega _\\mu -\\frac{1}{\\alpha } \\partial _\\mu \\ln \\Sigma , \\hat{\\phi }= \\Sigma ^{-\\frac{1}{2}} \\phi .$ With the help of the Weyl vector we can construct the Weyl field strength $F_{\\mu \\nu }$ of $\\omega _\\mu $ , defined as, $\\tilde{F}_{\\mu \\nu } = \\tilde{\\nabla }_{[\\mu } \\omega _{\\nu ]} = \\nabla _{[\\mu }\\omega _{\\nu ]} = \\partial _{[\\mu } \\omega _{\\nu ]}=\\partial _{\\mu }\\omega _\\nu -\\partial _\\nu \\omega _\\mu .$" ], [ "Action and field equations", "In the following we consider the simplest conformally invariant Lagrangian density, that can be constructed in Weyl geometry, and which is given by [106], [107], [108], [109], [110] $L_0=\\Big [\\, \\frac{1}{4!", "}\\,\\frac{1}{\\xi ^2}\\,\\tilde{R}^2 - \\frac{1}{4}\\, F_{\\mu \\nu }^{\\,2} \\Big ]\\sqrt{-g},$ with perturbative coupling $\\xi < 1$ .", "In order to extract the scalar degree of freedom of the above Lagrangian, and to linearize it, in $L_0$ we replace $\\tilde{R}^2$ by $\\tilde{R}^2\\rightarrow 2 \\phi _0^2\\,\\tilde{R}-\\phi _0^4$ where $\\phi _0$ is an auxiliary scalar field.", "The new Lagrangian density is equivalent with the initial one, since by using the solution $\\phi _0^2=\\tilde{R}$ of the equation of motion of $\\phi _0$ in the new $L_0$ , we recover Eq.", "(REF ).", "Hence, we obtain a mathematically equivalent Weyl geometric Lagrangian, $L_0=\\sqrt{-g} \\Big [\\frac{1}{12}\\frac{1}{\\xi ^2}\\,\\phi _0^2\\,\\tilde{R}-\\frac{1}{4} \\,F_{\\mu \\nu }^2-\\frac{\\phi _0^4}{4!\\,\\xi ^2}\\Big ].$ This is the simplest possible gravitational Lagrangian density with Weyl gauge symmetry, and conformal invariance, fully implemented.", "$L_0$ has a spontaneous breaking to an Einstein-Proca Lagrangian of the Weyl gauge field.", "On the other hand, we would like to point out that the simplest gravitational action with conformal symmetry, realized locally, is the conformal gravity model, based on the $C_{\\alpha \\beta \\gamma \\delta }^2$ action [86], [87], [88], [89], [90], [91], with the Weyl tensor $C_{\\alpha \\beta \\gamma \\delta }^2$ defined in the standard Riemannian geometry, and given by Eq.", "(REF ).", "Note that the tensor defined by Eq.", "(REF ) is different from the Weyl geometric tensor, as introduced in Eq.", "(REF ).", "In the simple model of conformal Weyl gravity no Weyl gauge field $\\omega _\\mu $ , nor the scalar $\\phi $ are present.", "However, in four dimensions the theory is still invariant under local Weyl gauge transformations.", "Generally, a physical system is called conformally invariant if it satisfies the condition that the variation of the action $S\\left[g_{\\mu \\nu },\\phi \\right]$ with respect to the group of the conformal transformations is zero, $\\delta _cS\\left[g_{\\mu \\nu },\\phi \\right]=\\int {d^n x\\left(\\delta S/\\delta \\phi \\right)\\delta _c\\phi }=0$ [114].", "Another type of transformation is given by the Weyl rescaling, implying the simultaneous pointwise transformations of the metric and of the fields, $\\tilde{g}_{\\mu \\nu }(x)=e^{2\\sigma (x)}g_{\\mu \\nu }$ , and $\\tilde{\\phi }=e^{-\\Delta \\sigma (x)}\\phi (x)$ , under which the action transforms as $\\delta _\\sigma S\\left[g_{\\mu \\nu },\\phi \\right]=\\int {d^n x\\sigma \\left[2\\left(\\delta S/\\delta g_{\\mu \\nu }\\right)g_{\\mu \\nu 
}-\\Delta _n\\left(\\delta S/\\delta \\phi \\right)\\phi \\right]}$ [114].", "For an in depth discussion of the conformally and Weyl invariance see [114].", "By replacing in Eq.", "(REF ) $\\tilde{R}$ as given by Eq.", "(REF ), and by performing a gauge transformation that allows the redefinition of the variables, we obtain an action, defined in the Riemannian space, invariant under conformal transformation, given by [106], [107], [108], $\\mathcal {S }&=& \\int d^4x \\sqrt{-g} \\Bigg [ \\frac{1}{12} \\frac{\\phi ^2}{\\xi ^2}\\Big ( R - 3\\alpha \\nabla _\\mu \\omega ^\\mu - \\frac{3}{2} \\alpha ^2 \\omega _\\mu \\omega ^\\mu \\Big ) \\nonumber \\\\&&- \\frac{1}{4!", "}\\frac{\\phi ^4}{\\xi ^2} - \\frac{1}{4} F_{\\mu \\nu }F^{\\mu \\nu } \\Bigg ],$ The variation of the action Eq.", "(REF ) with respect to the metric tensor gives the field equation, $&&\\frac{\\phi ^{2}}{\\xi ^{2}}\\Big (R_{\\mu \\nu }-\\frac{1}{2}Rg_{\\mu \\nu }\\Big )-\\frac{3\\alpha }{2\\xi ^{2}}\\Big (\\omega ^{\\rho }\\nabla _{\\rho }\\phi ^{2}g_{\\mu \\nu }-\\omega _{\\nu }\\nabla _{\\mu }\\phi ^{2}-\\omega _{\\mu }\\nabla _{\\nu }\\phi ^{2}\\Big )+\\frac{3\\alpha ^{2}}{4\\xi ^{2}}\\phi ^{2}\\Big (\\omega _{\\rho }\\omega ^{\\rho }g_{\\mu \\nu }-2\\omega _{\\mu }\\omega _{\\nu }\\Big )\\\\&&-6F_{\\rho \\mu }F_{\\sigma \\nu }g^{\\rho \\sigma }+\\frac{3}{2}F_{\\rho \\sigma }^{2}g_{\\mu \\nu }+\\frac{1}{4\\xi ^{2}}\\phi ^{4}g_{\\mu \\nu }+\\frac{1}{\\xi ^{2}}\\Big (g_{\\mu \\nu }\\Box -\\nabla _{\\mu }\\nabla _{\\nu }\\Big )\\phi ^{2}=0.$ The trace of Eq.", "(REF ) gives, $\\Phi R+3\\alpha \\omega ^{\\rho }\\nabla _{\\rho }\\Phi -\\Phi ^{2}-\\frac{3}{2}\\alpha ^{2}\\Phi \\omega _{\\rho }\\omega ^{\\rho }-3\\Box \\Phi =0,$ where we have denoted $\\Phi \\equiv \\phi ^{2}$ .", "Variation of the action given by Eq.", "(REF ) with respect to the scalar field $\\phi $ gives the equation of motion of $\\phi $ , $R-3\\alpha \\nabla _{\\rho }\\omega ^{\\rho }-\\frac{3}{2}\\alpha ^{2}\\omega _{\\rho }\\omega ^{\\rho }-\\Phi =0.$ From Eqs.", "(REF ) and (REF ) it immediately follows that $\\Box \\Phi -\\alpha \\nabla _{\\rho }(\\Phi \\omega ^{\\rho })=0.$ The variation of Eq.", "(REF ) with respect to $\\omega _{\\mu }$ gives the equation of motion of the Weyl vector as, $4\\xi ^{2}\\nabla _{\\nu }F^{\\mu \\nu }+\\alpha ^{2}\\Phi \\omega ^{\\mu }-\\alpha \\nabla ^{\\mu }\\Phi =0.$ Applying to both sides of the above equation the operator $\\nabla _{\\mu }$ we obtain equation (REF ), a result that can be seen as a consistency check of the theory." 
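The statement that Eqs. (REF ) and (REF ) "immediately" imply $\\Box \\Phi -\\alpha \\nabla _{\\rho }(\\Phi \\omega ^{\\rho })=0$ amounts to multiplying the scalar-field equation by $\\Phi $ and subtracting it from the trace equation. The short SymPy sketch below records this bookkeeping; the covariant scalars are treated as formal symbols and the Leibniz rule for $\\nabla _{\\rho }(\\Phi \\omega ^{\\rho })$ is imposed by hand, so it is a purely algebraic cross-check rather than a tensorial computation.

```python
# Algebraic cross-check (a sketch): multiplying the scalar-field equation by Phi and
# subtracting it from the trace equation yields Box(Phi) - alpha*div(Phi*omega) = 0.
import sympy as sp

R, Phi, alpha = sp.symbols('R Phi alpha')
boxPhi = sp.Symbol('boxPhi')              # stands for Box Phi
divOmega = sp.Symbol('divOmega')          # stands for nabla_rho omega^rho
omegaGradPhi = sp.Symbol('omegaGradPhi')  # stands for omega^rho nabla_rho Phi
omega2 = sp.Symbol('omega2')              # stands for omega_rho omega^rho

trace_eq = Phi*R + 3*alpha*omegaGradPhi - Phi**2 \
    - sp.Rational(3, 2)*alpha**2*Phi*omega2 - 3*boxPhi
scalar_eq = R - 3*alpha*divOmega - sp.Rational(3, 2)*alpha**2*omega2 - Phi

# Leibniz rule imposed by hand: nabla_rho(Phi omega^rho) = Phi*divOmega + omegaGradPhi
div_Phi_omega = Phi*divOmega + omegaGradPhi

print(sp.expand(trace_eq - Phi*scalar_eq + 3*(boxPhi - alpha*div_Phi_omega)))  # -> 0
```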
], [ "The spherically symmetric vacuum field equations", "In the following we adopt a static and spherically symmetric geometry, in which all quantities depend only on the radial coordinate $r$ , in a system of coordinates defined according to $\\left(t,r,\\theta , \\varphi \\right)$ .", "For mathematical convenience we will use geometric units in all equations.", "In the following we denote by a prime the derivative with respect to the radial coordinate $r$ .", "Hence, in the adopted geometry the spacetime interval becomes, $ds^{2}=e^{\\nu (r)}dt^{2}-e^{\\lambda (r)}dr^{2}-r^{2}d\\Omega ^{2},$ where we have denoted $d\\Omega ^{2}=d\\theta ^{2}+\\sin ^{2}\\theta d\\varphi ^{2}$ .", "Thus, the metric tensor components are given by, $g_{\\mu \\nu }=\\mathrm {diag}(e^{\\nu (r)},-e^{\\lambda (r)},-r^{2},-r^{2}\\sin ^{2}\\theta ).$ If we consider a spherically symmetric configuration, the third and the fourth components of the Weyl vector vanishes identically.", "Hence, the Weyl vector can be represented in a general form as $\\omega _{\\mu }=\\left( \\omega _{0},\\omega _{1},0,0\\right) $ .", "In the following we will write down the field equations resulting from the three possibilities allowed by the various choices of the Weyl vector, and we also present some of their consequences." ], [ "The case $\\omega _{\\mu }=\\left( 0,\\omega _{1},0,0\\right) $", "In the first case we are considering, the temporal component, $\\omega _{0}$ , of the Weyl vector can be taken as zero, and thus $\\omega _{\\mu }$ is given by $\\omega _{\\mu }=(0,\\omega _{1},0,0)$ .", "For this choice of $\\omega _{\\mu }$ we have $F_{\\mu \\nu }\\equiv 0$ .", "Hence, Eq.", "(REF ) immediately gives $\\Phi ^{\\prime }=\\alpha \\Phi \\omega _1.$ By taking into account that $\\Box \\Phi =\\frac{1}{\\sqrt{-g}}\\frac{\\partial }{\\partial x^{\\alpha }}\\left( \\sqrt{-g}g^{\\alpha \\beta }\\frac{\\partial \\Phi }{\\partial x^{\\beta }}\\right) ,$ and $\\nabla _{\\alpha }\\omega ^{\\alpha }=\\frac{1}{\\sqrt{-g}}\\frac{\\partial }{\\partial x^{\\alpha }}\\left( \\sqrt{-g}\\omega ^{\\alpha }\\right),$ respectively, Eq.", "(REF ) becomes $&&\\frac{1}{\\sqrt{-g}}\\frac{d}{dr}\\left( \\sqrt{-g}g^{11}\\frac{d\\Phi }{dr}\\right)-\\alpha \\omega ^{1}\\frac{d\\Phi }{dr}-\\frac{\\alpha \\Phi }{\\sqrt{-g}}\\frac{d}{dr}\\left( \\sqrt{-g}\\omega ^{1}\\right) \\nonumber \\\\&&=0.$ In order to make Eqs.", "(REF ) and (REF ) consistent, we need to impose the gauge condition on the Weyl vector, which generally can be formulated as $\\nabla _{\\alpha }\\omega ^{\\alpha }=\\frac{1}{\\sqrt{-g}}\\frac{\\partial }{\\partial x^{\\alpha }}\\left( \\sqrt{-g}\\omega ^{\\alpha }\\right) =0,$ giving for the present choice of $\\omega _{\\mu }$ , the relation $\\frac{1}{\\sqrt{-g}}\\frac{d}{dr}\\left( \\sqrt{-g}\\omega ^{1}\\right) =0,$ or, $\\omega ^{1}=\\frac{C_{1}}{\\sqrt{-g_{rad}}}=C_{1}\\frac{e^{-{(\\nu +\\lambda )}/2}}{r^{2}},$ and $\\omega _{1}=g_{11}\\omega ^{1}=-C_{1}\\frac{e^{-\\left( \\nu -\\lambda \\right) /2}}{r^{2}},$ respectively, where $C_{1}$ is an arbitrary integration constant, and we have denoted $\\sqrt{-g_{rad}}=e^{\\left( \\nu +\\lambda \\right) /2}r^{2}$ .", "By taking into account the gauge condition, and the explicit form of $\\omega ^{1}$ , Eq.", "(REF ) becomes $\\frac{1}{\\sqrt{-g}}\\frac{d}{dr}\\left( \\sqrt{-g}g^{11}\\frac{d\\Phi }{dr}\\right)-\\alpha \\omega ^{1}\\frac{d\\Phi }{dr}=0,$ or, equivalently, $\\frac{1}{\\sqrt{-g_{rad}}}\\frac{d}{dr}\\left( \\sqrt{-g_{rad}}g^{11}\\frac{d\\Phi }{dr}\\right) -\\alpha 
\\frac{C_{1}}{\\sqrt{-g_{rad}}}\\frac{d\\Phi }{dr}=0,$ giving $\\sqrt{-g_{rad}}g^{11}\\frac{d\\Phi }{dr}-\\alpha C_{1}\\Phi =C,$ where $C$ is an arbitrary integration constant, which for consistency with Eq.", "(REF ) must be taken as zero.", "Hence we reobtain again Eq.", "(REF ), $\\Phi ^{\\prime }=\\alpha \\omega _{1}\\Phi $ respectively.", "By taking into account the explicit expression of $\\omega ^{1}$ as given by Eq.", "(REF ), we obtain for $\\Phi $ the differential equation $\\frac{d\\Phi }{dr}=-\\alpha C_{1}\\frac{e^{-\\left( \\nu -\\lambda \\right) /2}}{r^{2}}\\Phi .$ Alternatively, from Eqs.", "(REF ) and (REF ) we obtain a non-trivial dynamical equation for $\\Phi (r)$ , $\\frac{d}{dr}\\left(\\sqrt{-g}g^{11}\\frac{d\\Phi }{dr} \\right) - \\sqrt{-g} g^{11}\\frac{1}{\\Phi } \\left(\\frac{d\\Phi }{dr}\\right)^2 = 0.$ Next, the gravitational field equations give the following relations $&& -1+e^{\\lambda }-\\frac{1}{4}e^{\\lambda }r^{2}\\Phi -\\frac{2r\\Phi ^{\\prime }}{\\Phi }+\\frac{3r^{2}}{4}\\frac{\\Phi ^{\\prime 2}}{\\Phi ^{2}}+r\\lambda ^{\\prime }\\nonumber \\\\&&+\\frac{r^{2}\\lambda ^{\\prime }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }-\\frac{r^{2}\\Phi ^{\\prime \\prime }}{\\Phi }=0,$ $&& 1-e^{\\lambda }+\\frac{1}{4}e^{\\lambda }r^{2}\\Phi +\\frac{2r\\Phi ^{\\prime }}{\\Phi }+\\frac{3r^{2}}{4}\\frac{\\Phi ^{\\prime 2}}{\\Phi ^{2}}+r\\nu ^{\\prime }\\left(1+\\frac{r}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }\\right)\\nonumber \\\\&&=0,$ and $\\hspace{-14.22636pt}&&2(\\nu ^{\\prime }-\\lambda ^{\\prime })+(4-2r\\lambda ^{\\prime }+2r\\nu ^{\\prime })\\frac{\\Phi ^{\\prime }}{\\Phi }\\nonumber \\\\\\hspace{-14.22636pt}&&+r\\left(e^{\\lambda }\\Phi +4\\frac{\\Phi ^{\\prime \\prime }}{\\Phi }-3\\frac{\\Phi ^{\\prime 2}}{\\Phi ^{2}}-\\lambda ^{\\prime }\\nu ^{\\prime }+\\nu ^{\\prime 2}+2\\nu ^{\\prime \\prime }\\right)=0,$ respectively.", "It can easily be proved that Eq.", "(REF ) is a linear combination of the other two field equations.", "By adding Eqs.", "(REF ) and (REF ) we obtain, $\\frac{\\Phi ^{\\prime \\prime }}{\\Phi }-\\frac{3}{2}\\frac{\\Phi ^{\\prime 2}}{\\Phi ^2}-\\frac{\\nu ^{\\prime }+\\lambda ^{\\prime }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }-\\frac{\\nu ^{\\prime }+\\lambda ^{\\prime }}{r}=0.$" ], [ "The case $\\omega _{\\mu }=\\left(\\omega _0,0,0,0\\right) $", "We consider now the case in which it is possible to neglect all the effects of the spatial components $\\omega _1$ in the Weyl vector.", "Thus, only the $\\omega _0$ component of the Weyl vector is nonzero.", "Thus, Eq.", "(REF ) immediately gives, $&-\\frac{\\alpha ^2}{4\\xi ^2} e^\\lambda \\Phi \\omega _0 + \\frac{-4+r \\lambda ^\\prime + r \\nu ^\\prime }{2r}\\omega _0^\\prime - \\omega ^{\\prime \\prime }_0 = 0,$ and $\\Phi ^\\prime = 0,$ respectively.", "Hence, the gauge condition imposed on the Weyl vector field in the former Section is now fulfilled trivially.", "In this case the gravitational field equations give the following relations, $&& -1 + e^\\lambda + \\frac{1}{4} e^\\lambda r^2 \\Phi - \\frac{3}{4} \\alpha ^2 e^{\\lambda -\\nu } r^2 \\omega _0^2 +r \\lambda ^\\prime + \\frac{3\\xi ^2 e^{-\\nu } r^2 \\omega _0^{\\prime 2}}{\\Phi }\\nonumber \\\\&& = 0,\\\\&& 1 - e^\\lambda - \\frac{1}{4} e^\\lambda r^2 \\Phi - \\frac{3}{4} \\alpha ^2 e^{\\lambda -\\nu } r^2 \\omega _0^2 +r \\nu ^\\prime - \\frac{3\\xi ^2 e^{-\\nu } r^2 \\omega _0^{\\prime 2}}{\\Phi }\\nonumber \\\\&&=0.$ It is also easy to prove that Eq.", "(REF ) gives a trivial constraint for $\\Phi $ , if Eq.", "(REF ) is used to express 
$\\Phi $ ." ], [ "The case $\\omega _{\\mu }=\\left(\\omega _0,\\omega _1,0,0\\right) $", "In this case, Eq.", "(REF ) gives $\\Phi ^{\\prime }=\\alpha \\Phi \\omega _1,$ and $&- \\frac{\\alpha ^2}{4\\xi ^2} e^\\lambda \\Phi \\omega _0 + \\frac{-4+r \\lambda ^\\prime + r \\nu ^\\prime }{2r}\\omega _0^\\prime - \\omega ^{\\prime \\prime }_0 = 0,$ respectively.", "By substituting $\\omega _1$ from Eq.", "(REF ), one can simplify the field equations Eq.", "(REF ) as $& -1+e^{\\lambda }+\\frac{1}{4}e^{\\lambda }r^{2}\\Phi -\\frac{2r\\Phi ^{\\prime }}{\\Phi }+\\frac{3r^{2}}{4}\\frac{\\Phi ^{\\prime 2}}{\\Phi ^{2}}+r\\lambda ^{\\prime }\\nonumber \\\\&+\\frac{r^{2}\\lambda ^{\\prime }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }-\\frac{r^{2}\\Phi ^{\\prime \\prime }}{\\Phi }-\\frac{3}{4}\\alpha ^2 e^{\\lambda -\\nu }r^2\\omega _0^2+\\frac{3\\xi ^2}{\\Phi }e^{-\\nu }r^2\\omega ^{\\prime 2}_0=0,$ and $ & 1-e^{\\lambda }-\\frac{1}{4}e^{\\lambda }r^{2}\\Phi +\\frac{2r\\Phi ^{\\prime }}{\\Phi }+\\frac{3r^{2}}{4}\\frac{\\Phi ^{\\prime 2}}{\\Phi ^{2}}+r\\nu ^{\\prime }\\nonumber \\\\&+\\frac{r^{2}\\nu ^{\\prime }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }-\\frac{3}{4}\\alpha ^2 e^{\\lambda -\\nu }r^2\\omega _0^2-\\frac{3\\xi ^2}{\\Phi }e^{-\\nu }r^2\\omega ^{\\prime 2}_0=0,$ respectively.", "By adding Eqs.", "(REF ) and (REF ) one obtains $\\frac{\\Phi ^{\\prime \\prime }}{\\Phi }-\\frac{3}{2}\\frac{\\Phi ^{\\prime 2}}{\\Phi ^2}-\\frac{\\nu ^{\\prime }+\\lambda ^{\\prime }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }-\\frac{\\nu ^{\\prime }+\\lambda ^{\\prime }}{r}+\\frac{3}{2}\\alpha ^2 e^{\\lambda -\\nu }\\omega _0^2=0.$" ], [ "Asymptotic and near horizon behavior", "We consider now the asymptotic and near horizon behavior of the field equations of the geometric Weyl gravity.", "In particular, the asymptotic values of the metric tensor components, and of the scalar and Weyl vector fields allow us to fix the conditions at infinity for the numerical integration and analysis of the field equations.", "We will consider independently the three cases corresponding to the three different choices of the Weyl vector considered in the previous Subsection.", "We assume that at infinity the metric can either be asymptotically flat, satisfying the conditions $\\lim _{r\\rightarrow \\infty }\\nu (r)=0, \\lim _{r\\rightarrow \\infty }\\lambda (r)=0,$ and $\\lim _{r\\rightarrow \\infty }\\nu ^{\\prime }(r)=0, \\lim _{r\\rightarrow \\infty }\\lambda ^{\\prime }(r)=0,$ respectively, or of the de Sitter type, so that $\\lim _{r\\rightarrow \\infty }e^{\\nu (r)}=\\lim _{r\\rightarrow \\infty }e^{-\\lambda }=1-\\frac{r^2}{\\mu ^2},$ and $\\lim _{r\\rightarrow \\infty }\\nu ^{\\prime }(r)=-\\lim _{r\\rightarrow \\infty }\\lambda ^{\\prime }(r)=-\\frac{2r/\\mu ^2}{1-r^2/\\mu ^2},$ respectively, where $\\mu $ is a constant.", "In both cases at infinity the metric tensor components satisfy the conditions $\\lim _{r\\rightarrow \\infty }\\left( \\nu (r)+\\lambda (r)\\right) =0$ , and $\\lim _{r\\rightarrow \\infty }\\left( \\nu ^{\\prime }(r)+\\lambda ^{\\prime }(r)\\right) =0$ , respectively.", "For a correct and complete definition of the asymptotic limits of the components of the metric tensor one should also consider some bounds of these functions, like, for example, that for large $r$ they fall faster than $1/r$ .", "In the general case the static spherically symmetric field equations of the Weyl geometric gravity are too complicated to be solved analytically.", "Therefore, to obtain their solutions, we must resort to numerical methods.", "In 
our investigations we assume that the field equations have a black hole solution, whose presence is indicated by the presence of a horizon at a radius $r = r_0 > 0$ , where the metric functions $e^{\\nu }$ and $e^{\\lambda }$ become singular.", "Then, near the horizon, the metric functions can be approximated by their Taylor expansions [38], [43], $e^{\\nu }=A_1\\left(r-r_0\\right)+A_2\\left(r-r_0\\right)^2+A_3\\left(r-r_0\\right)^3+....,$ and $e^{-\\lambda }=B_1\\left(r-r_0\\right)+B_2\\left(r-r_0\\right)^2+B_3\\left(r-r_0\\right)^3+...,$ respectively, where $A_i$ and $B_i$ , $i=1,2,3,...$ are constants that can be determined recursively after substitution into the field equations." ], [ "Asymptotic limits.", "In the presence of a Weyl vector having only a radial component, in the limit of large $r$ , for both the asymptotically flat and de Sitter cases Eq.", "(REF ) takes the form $\\Phi \\Phi ^{\\prime \\prime }=\\frac{3}{2} \\Phi ^{\\prime 2},$ with the solution $\\Phi (r)= \\frac{K_1}{\\left(r+K_2\\right)^2},$ where $K_1$ and $K_2$ are arbitrary integration constants.", "Hence, for the scalar and the vector fields we obtain the general asymptotic conditions $\\lim _{r\\rightarrow \\infty }\\Phi (r)&=&0, \\lim _{r\\rightarrow \\infty }\\Phi ^{\\prime } (r)=0, \\nonumber \\\\\\lim _{r\\rightarrow \\infty }\\omega _1 (r)&=&-\\frac{2}{\\alpha }\\lim _{r\\rightarrow \\infty }\\frac{1}{r+K_2}=0.$ In the asymptotically flat case Eq.", "(REF ) gives $-16K_2r+\\left(K_1-4\\right)r^2\\approx 0$ , which is identically satisfied for $K_2=0$ and $K_1=4$ , respectively.", "For the de Sitter type asymptotic behavior we find $-16K_2r+r^2\\left[-12K_2^2+\\left(K_1-4\\right)\\alpha ^2\\right]/\\alpha ^2\\approx 0$ , a condition that is again satisfied for $K_2=0$ , and $K_1=4$ , respectively.", "However, in the asymptotic limit, Eq.", "(REF ) also has the solution $\\Phi ={\\rm constant}$ , implying that at infinity the scalar field can take arbitrary values.", "By taking into account the relation between the scalar field and the radial component of the Weyl vector, as given by Eq.", "(REF ), it follows that $\\lim _{r\\rightarrow \\infty }\\omega _1(r)=0$ , that is, the Weyl vector vanishes at infinity, independently of the asymptotic values of $\\Phi $ ." 
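A quick symbolic cross-check of the asymptotic solution quoted above: the SymPy sketch below verifies that $\\Phi (r)=K_1/\\left(r+K_2\\right)^2$ solves $\\Phi \\Phi ^{\\prime \\prime }=\\frac{3}{2}\\Phi ^{\\prime 2}$ , and recovers the radial Weyl component from $\\Phi ^{\\prime }=\\alpha \\Phi \\omega _1$ . The snippet is illustrative and uses only quantities already introduced in the text.

```python
# Illustrative check of the large-r solution Phi = K1/(r+K2)^2 and of the
# induced radial Weyl component omega_1 = Phi'/(alpha*Phi).
import sympy as sp

r, K1, K2, alpha = sp.symbols('r K_1 K_2 alpha', positive=True)
Phi = K1/(r + K2)**2

# Residual of Phi*Phi'' - (3/2)*Phi'^2 vanishes identically.
print(sp.simplify(Phi*sp.diff(Phi, r, 2) - sp.Rational(3, 2)*sp.diff(Phi, r)**2))  # -> 0

# Radial Weyl component: omega_1 = Phi'/(alpha*Phi) = -2/(alpha*(r + K_2)),
# which tends to zero as r -> infinity, as stated in the text.
print(sp.simplify(sp.diff(Phi, r)/(alpha*Phi)))
```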
], [ "Near horizon behavior.", "In order to consider the near horizon behavior of the model we also assume that near $r_0$ the metric tensor components and the scalar field admit Taylor expansions of the form [38], [43] $f_i(r)=K_{i1}\\left(r-r_0\\right)+K_{i2}\\left(r-r_0\\right)^2+K_{i3}\\left(r-r_0\\right)^3+...,$ where $f_i=\\left\\lbrace e^{\\nu }, e^{\\lambda }, \\Phi \\right\\rbrace $ , respectively, while $K_{ij}$ , $i,j=1,2,3,...$ are constants.", "As a simple example of the near horizon behavior of Weyl geometric black holes in the presence of the radial component of the Weyl vector, we assume that the metric tensor components can be represented near the singularity as $e^{\\nu (r)}=\\left(r-r_0\\right)^2\\left[K_{11}+K_{12}\\left(r-r_0\\right)\\right]^2,$ and $e^{\\lambda (r)}=\\left(r-r_0\\right)^2\\left[K_{21}+K_{22}\\left(r-r_0\\right)\\right]^2,$ respectively, where $K_{ij}$ , $i,j=1,2$ are constants.", "For the near horizon behavior of the scalar field we adopt the functional form $\\Phi (r)=\\Phi _0e^{-A/r}r^B\\left[C+D\\left(r-r_0\\right)\\right]^{-B},$ where $\\Phi _0,A,B,C,D$ are constants.", "By substituting the above expressions of metric tensor components and of the scalar field into Eq.", "(REF ), it follows that the equation is identically satisfied if the unknown coefficients satisfy the conditions $C=K_{11},D=K_{12},$ $A\\left(C-r_0D\\right)+\\alpha C_1\\left(K_{21}-r_0K_{22}\\right)=0,$ and $AD+B\\left(C-r_0D\\right)+C_1K_{22}\\alpha =0,$ respectively, giving the relations $K_{21}=\\frac{B r_0 \\left(K_{12} r_0-K_{11}\\right)-A K_{11}}{\\alpha C_1},$ $K_{22}=-\\frac{A K_{12}+B\\left( K_{11}- K_{12} r_0\\right)}{\\alpha C_1}.$ By substituting the above representations of the metric tensor and of the scalar field into the field equations Eq.", "(REF ) and (REF ), one obtains a system of ordinary nonlinear algebraic equations for the determination of the parameters of the solution.", "The resulting equations can be generally solved only approximately, due to their extremely complicated character.", "The series solutions can be extended to any order of approximation.", "Hence, at least in principle, one can obtain a complete power series solution of the Weyl geometric field equations near the black hole horizon.", "An alternative approximate solution can be obtained by neglecting the nonlinear term $\\Phi ^{\\prime 2}/\\Phi ^2$ in the scalar field equation (REF ), and in the gravitational field equations (REF ) and (REF ).", "In this case we obtain a system of algebraic nonlinear equations that can be solved to recursively obtain the values of the coefficients in the Taylor series expansions of the physical and geometrical quantities.", "Thus, a series solution of the static spherically symmetric field equations of Weyl geometric gravity in the presence of the radial component of the Weyl vector can be constructed in both its linearized and nonlinear versions." 
], [ "Asymptotic limits.", "In the case in which the Weyl vector has only a nonzero temporal component $\\omega _0$ , the scalar field is a constant, $\\Phi =\\Phi _0={\\rm constant}$ .", "In the asymptotic limit of the flat spacetimes, with $\\lambda =0$ , and $\\nu ^{\\prime }+\\lambda ^{\\prime }=0$ , Eq.", "(REF ) becomes $\\omega _0^{\\prime \\prime }+\\frac{2}{r}\\omega _0^{\\prime }+\\frac{\\alpha ^2\\Phi _0}{4\\xi ^2}\\omega _0=0,$ with the general solution given by $\\omega _0=\\frac{c_1\\cos ( \\sqrt{A} r)}{r}+\\frac{c_2 \\sin (\\sqrt{A} r)}{r},$ where we have denoted $A=\\alpha ^2\\Phi _0/4\\xi ^2$ , and $c_1$ and $c_2$ are constants of integration.", "Thus, we obtain $\\lim _{r\\rightarrow \\infty }\\omega _0=0, \\lim _{r\\rightarrow \\infty }\\omega _0^{\\prime }=0.$ By assuming that asymptotically the metric is de Sitter, with $e^{\\lambda }=\\left(1-r^2/\\mu ^2\\right)^{-1}$ , Eq.", "(REF ) takes the form $\\omega _0^{\\prime \\prime }+\\frac{2}{r}\\omega _0^{\\prime }+\\frac{A}{1-r^2/\\mu ^2}\\;\\omega _0=0,$ with the general solution given by $\\omega _{0}(r)& =&c_{2}\\,_{2}F_{1}\\left( \\delta _{-},\\delta _{+};\\frac{3}{2};\\frac{r^{2}}{\\mu ^{2}}\\right) \\nonumber \\\\&&-\\frac{ic_{1}\\mu \\,_{2}F_{1}\\left( -\\delta _{+},-\\delta _{-};\\frac{1}{2};\\frac{r^{2}}{\\mu ^{2}}\\right) }{r},$ where $c_{1}$ and $c_{2}$ are arbitrary integration constants, $_{2}F_{1}\\left( a,b;c;z\\right) $ is the hypergeometric function, and we have denoted $\\delta _{-}=\\frac{1}{4}\\left( 1-\\sqrt{1+4A\\mu ^{2}}\\right) ,\\delta _{+}=\\frac{1}{4}\\left( 1+\\sqrt{1+4A\\mu ^{2}}\\right) .$ Since $\\omega _{0}$ is a real quantity, we need to take $c_{1}=0$ in the solution, thus obtaining $\\omega _{0}(r)=c_{2}\\,_{2}F_{1}\\left( \\delta _{-},\\delta _{+};\\frac{3}{2};\\frac{r^{2}}{\\mu ^{2}}\\right) .$ Here, the hypergeometric function $_{2}F_{1}(a,b;c;z)$ , representing the regular solution of the hypergeometric differential equation, is defined for $|z|<1$ by a power series of the form $_{2}F_{1}(a,b;c;z)=\\sum _{k=0}^{\\infty }{\\left[(a)_k(b)_k/(c)_k\\right]z^k/k!", "}$ , where $(q)_k$ is the (rising) Pochhammer symbol [115].", "For $c$ not a negative integer, the function $_{2}F_{1}(a,b;c;z)$ converges for all of $|z|<1$ , and, if ${\\rm Re}(c-a-b)>0$ also on the unit circle $|z|=1$ .", "Different solutions of the hypergeometric differential equations can also be derived for other values of $z$ , not restricted to the range $|z|<1$ , and these solutions are valid in different regions of the complex plane.", "Thus, by taking into account that the radius of convergence of the hypergeometric function in the expression (REF ) of $\\omega _0(r)$ is $r^{2}<\\mu ^{2}$ , we find $\\lim _{r\\rightarrow \\mu }\\omega _{0}(r)=-\\frac{2\\sqrt{\\pi }c_{2}}{A\\mu ^{2}\\Gamma \\left( \\delta _{-}\\right) \\Gamma \\left( \\delta _{+}\\right) },$ where $\\Gamma (z)=\\int _{0}^{\\infty }{t^{z-1}e^{-t}dt}$ is the Euler gamma function.", "The value $r=\\mu $ of the radial coordinate defines a cosmological horizon for the present model.", "For the limit at infinity of the derivative of $\\omega _0$ we obtain $\\lim _{r\\rightarrow \\mu }\\omega _{0}^{\\prime }(r)=-\\frac{1}{3}Ac_{2}\\mu \\,_{2}F_{1}\\left( 1+\\delta _{-},1+\\delta _{+};\\frac{5}{2};1\\right) .$ In the case of an asymptotically flat geometry, with the use of the conditions (REF ), the field equations (REF ) as evaluated at infinity give $\\Phi =0$ .", "Hence, it follows that if one assumes the presence of a non-zero scalar field, the 
asymptotic limit of the metric cannot be flat.", "Finally, we consider the behavior at infinity of the static spherically symmetric Weyl geometric models in the presence of both temporal and radial components of the Weyl field.", "In the case of the asymptotically flat geometry, the coupled system of equations describing the behavior of the scalar field and of the temporal component of the Weyl vector is given by $\omega _0^{\prime \prime }+\frac{2}{r}\omega _0^{\prime }+\frac{\alpha ^2}{4\xi ^2}\Phi \omega _0=0,$ and $\frac{\Phi ^{\prime \prime }}{\Phi }-\frac{3}{2}\frac{\Phi ^{\prime 2}}{\Phi ^2}+\frac{3}{2}\alpha ^2\omega _0^2=0,$ respectively.", "By neglecting the nonlinear term $\Phi \omega _0$ in Eq.", "(REF ), we find $\omega _0^{\prime }(r)=\frac{c_4}{r^2},\omega _0(r)=c_3-\frac{c_4}{r}.$ Hence, $\lim _{r\rightarrow \infty }\omega _0^{\prime }=0, \lim _{r\rightarrow \infty }\omega _0=c_3.$ Substitution into Eq.", "(REF ) gives for $\Phi (r)$ the equation $\frac{\Phi ^{\prime \prime }}{\Phi }-\frac{3}{2}\frac{\Phi ^{\prime 2}}{\Phi ^2}+\frac{3}{2}\alpha ^2\left(c_3-\frac{c_4}{r}\right)^2=0.$ By neglecting the nonlinear term $\Phi ^{\prime 2}/\Phi ^2$ , and in the limit of large $r$ , Eq.", "(REF ) becomes $\Phi ^{\prime \prime }+\frac{3}{2}\alpha ^2c_3^2\Phi =0,$ with the general solution given by $\Phi (r)=C_5\cos \left(\sqrt{\frac{3}{2}}\alpha c_3r+C_6\right),$ where $C_5$ and $C_6$ are arbitrary integration constants.", "Hence, at least in the approximation considered, the scalar field has an oscillatory behavior at infinity.", "For the Weyl vector component we obtain $\omega _1(r)\approx -\frac{3}{2}\alpha c_3\tan \left(\sqrt{\frac{3}{2}}\alpha c_3r+C_6\right).$ Thus, the radial component of the Weyl vector also has an oscillatory behavior at infinity.", "In the case of an asymptotic de Sitter geometry, the scalar field equation (REF ) can be approximated as $\Phi ^{\prime \prime }+\frac{3}{2}\frac{\alpha ^2c_3^2}{\left(1-r^2/\mu ^2\right)^2}\Phi =0,$ and it has the general solution $&&\Phi (r)= c_5 (r-\mu )^{\frac{1}{4} \left(\sqrt{4-6 \alpha ^2 c_3^2\mu ^2}+2\right)} (r+\mu )^{\frac{1}{4} \left(2-\sqrt{4-6 \alpha ^2 c_3^2 \mu ^2}\right)}\nonumber \\&& -\frac{c_6 (r-\mu )^{\frac{1}{4} \left(2-\sqrt{4-6 \alpha ^2 c_3^2\mu ^2}\right)} (r+\mu )^{\frac{1}{4} \left(\sqrt{4-6 \alpha ^2 c_3^2 \mu ^2}+2\right)}}{\mu \sqrt{4-6 \alpha ^2 c_3^2 \mu ^2}},$ where $c_5$ and $c_6$ are integration constants.", "In the limit $r\rightarrow \mu $ , the scalar field tends to zero.", "Similarly, in the limit of large $r$ , the derivative of the scalar field tends to zero.", "However, we would like to point out that, although the above results are approximate, they indicate that both arbitrary nonzero values, as well as very small (zero) numerical values, are possible for the scalar and Weyl vector fields at infinity.
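The closed-form limit of $\omega_0$ at the cosmological horizon, quoted above for the asymptotically de Sitter case with a temporal Weyl vector, can be checked numerically. The sketch below evaluates the hypergeometric representation of $\omega_0(r)$ just below $r=\mu$ and compares it with the Gamma-function expression; the values of $A$, $\mu$ and $c_2$ are arbitrary test numbers chosen for this check only, not values taken from the model.

```python
# Numerical check that omega_0(r) -> -2*sqrt(pi)*c2/(A*mu^2*Gamma(delta_-)*Gamma(delta_+))
# as r -> mu, using the hypergeometric representation of omega_0.
import numpy as np
from scipy.special import hyp2f1, gamma

A, mu, c2 = 1.0, 1.0, 1.0                    # arbitrary test values (assumption)
delta_m = 0.25*(1.0 - np.sqrt(1.0 + 4.0*A*mu**2))
delta_p = 0.25*(1.0 + np.sqrt(1.0 + 4.0*A*mu**2))

r = mu*(1.0 - 1.0e-9)                        # approach the cosmological horizon r -> mu
omega0 = c2*hyp2f1(delta_m, delta_p, 1.5, r**2/mu**2)
closed_form = -2.0*np.sqrt(np.pi)*c2/(A*mu**2*gamma(delta_m)*gamma(delta_p))

print(omega0, closed_form)
assert np.isclose(omega0, closed_form, rtol=1e-5)
```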
], [ "Black hole solutions in Weyl geometric conformal gravity", "In order to simplify the mathematical formalism we introduce now the following representation for $e^{-\\lambda }$ , $& e^{-\\lambda } = 1-\\frac{2GM(r)}{c^{2}r}=1-\\frac{2nGM_{\\odot }m(r)}{c^{2}r},$ where $m(r) = \\frac{M(r)}{n M_{\\odot }},$ and $M_{\\odot }$ denotes the mass of the Sun.", "$M(r)$ and $m(r)$ are effective masses incorporating both the effects of the scalar field and of the Weyl vector, while $n$ is a positive integer.", "We define now a group of dimensionless quantities $(\\eta ,m,\\nu ,\\psi ,\\zeta ,\\Omega _0,\\Theta _0)$ , given by $& r = \\frac{2r_g}{\\eta },\\;\\; \\psi = r_g^2\\Phi ,\\;\\; \\zeta = r_g^3 \\frac{d\\Phi }{dr},\\;\\; \\Omega _0 = \\alpha r_g \\omega _0, \\nonumber \\\\& \\hspace{-5.69046pt} \\Theta _0 = \\xi r_g^2 \\frac{d\\omega _0}{dr},\\;\\; e^{-\\lambda } = 1- m(\\eta )\\eta ,$ where we have denoted $r_g=nGM_{\\odot }/c^{2}$ .", "Hence, for the derivative with respect to $\\eta $ we obtain $\\frac{d}{dr} = -\\frac{\\eta ^2}{2r_g} \\frac{d}{d\\eta },$ and $& \\frac{d\\lambda }{dr} = -\\frac{\\eta ^2}{2r_g} \\frac{d\\lambda }{d\\eta } = - \\frac{\\eta ^2}{2r_g}\\frac{\\frac{dm(\\eta )}{d\\eta }\\eta + m(\\eta )}{1-m(\\eta )\\eta },$ respectively.", "We also define a new constant $\\gamma $ as $\\gamma = \\alpha /\\xi $ ." ], [ "Black hole solutions with radial component of the Weyl vector field", "We will consider first black hole solutions in which the Weyl vector has only a radial component $\\omega _1$ .", "In this case it is possible to find an exact solution of the field equations.", "Numerical black hole solutions are also obtained for a specific set of initial conditions of the Weyl vector and of the scalar field at infinity.", "In the following we will first look for exact black hole solutions satisfying the condition $\\nu (r)+\\lambda (r) =0, \\forall r>0.$ Then Eq.", "(REF ) immediately gives for $\\Phi $ the equation $\\Phi ^{\\prime \\prime }=\\frac{3}{2}\\frac{\\Phi ^{\\prime 2}}{\\Phi },$ with the general solution given by $\\Phi (r)=\\frac{C_1}{\\left(r+C_2\\right)^2},$ where $C_1$ and $C_2$ are arbitrary constants of integration.", "Eq.", "(REF ) can be reformulated as $&&-1+\\frac{d}{dr}\\left(re^{-\\lambda }\\right)-\\frac{1}{4}r^2\\Phi +2re^{-\\lambda }\\frac{\\Phi ^{\\prime }}{\\Phi }-\\frac{3}{4}r^2e^{-\\lambda }\\frac{\\Phi ^{\\prime 2}}{\\Phi ^2}\\nonumber \\\\&&-\\frac{r^2\\lambda ^{\\prime }e^{-\\lambda }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }+r^2e^{-\\lambda }\\frac{\\Phi ^{\\prime \\prime }}{\\Phi }=0.$ By taking into account the identity $-\\frac{r^2\\lambda ^{\\prime }e^{-\\lambda }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }=-\\frac{re^{-\\lambda }}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }+\\frac{r}{2}\\frac{d}{dr}\\left(re^{-\\lambda }\\right)\\frac{\\Phi ^{\\prime }}{\\Phi },$ Eq.", "(REF ) becomes $&&\\left(1+\\frac{r}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }\\right)\\frac{d}{dr}\\left(re^{-\\lambda }\\right)+ \\left(\\frac{3}{2}\\frac{\\Phi ^{\\prime }}{\\Phi }-\\frac{3}{4}r\\frac{\\Phi ^{\\prime 2}}{\\Phi ^2}+r\\frac{\\Phi ^{\\prime \\prime }}{\\Phi }\\right)\\left(re^{-\\lambda }\\right)\\nonumber \\\\&&-\\frac{1}{4}r^2\\Phi -1=0.$ With the use of the expression of $\\Phi (r)$ as given by Eq.", "(REF ), we obtain $&&\\left(1-\\frac{r}{C_2+r}\\right)u^{\\prime }(r)+3\\left(\\frac{ r}{(C_2+r)^2}-\\frac{1}{C_2+r}\\right) u(r)\\nonumber \\\\&& -\\frac{C_1 r^2}{4 (C_2+r)^2}-1=0,$ where we have denoted $u=re^{-\\lambda }$ .", "Eq.", "(REF ) has the exact general solution 
$re^{-\\lambda }&=&\\frac{r^2 \\left(12 C_3 C_2^2-C_1-4\\right)}{4 C_2}+r \\left(3 C_3C_2^2-\\frac{C_1}{4}-2\\right)\\nonumber \\\\&& +C_3 C_2^3-\\frac{1}{12}(C_1+12) C_2+C_3 r^3,$ where $C_3$ is an integration constant.", "Depending on the values of the integration constants $C_1$ , $C_2$ and $C_3$ , the metric (58) can take two distinct forms.", "If we assume the condition $3 C_3C_2^2-\\frac{C_1}{4}-2=1,$ or, equivalently, $C_3C_2^3-C_1C_2/12=C_2$ , the metric (REF ) becomes $e^{-\\lambda }=e^{\\nu }=1+\\frac{2}{C_2}r+C_3r^2,$ and it represents a generalization of the static cosmological de Sitter solution.", "The solution is not asymptotically flat.", "If one imposes the condition $3 C_3C_2^2-\\frac{C_1}{4}-2=2\\beta $ where $\\beta $ is a constant, the metric tensor components (REF ) become $e^{-\\lambda }=e^{\\nu }=2\\beta +\\frac{1+2\\beta }{C_2}r-\\frac{C_2\\left(1-2\\beta \\right)}{3}\\frac{1}{r}+C_3r^2.$ By assuming, in analogy with the Schwarzschild metric, that $C_2\\left(1-2\\beta \\right)/3=r_g$ , the metric tensor components (REF ) take the form $e^{-\\lambda }=e^\\nu =2\\beta +\\frac{1-4\\beta ^2}{3}\\frac{r}{r_g}-\\frac{r_g}{r}+C_3r^2.$ If we denote $1-2\\beta =\\delta $ , the metric (REF ) can be written as $e^{-\\lambda }=e^\\nu =1-\\delta +\\frac{\\delta (2-\\delta )}{3}\\frac{r}{r_g}-\\frac{r_g}{r}+C_3r^2.$ In Fig.", "(REF ) we have plotted the behavior of the metric tensor $e^\\nu $ as a function of $r/r_g$ for different values of the constants $c_3\\equiv C_3r_g^2$ and $\\delta $ , respectively.", "In each of these cases a singularity in the metric tensor component does appear, indicating the formation of an event horizon, and the presence of a black hole, respectively.", "Figure: Variation of the metric tensor component e ν e^{\\nu } for the exact solution of the Weyl geometric gravity in the presence of the radial component of the Weyl vector only, for c 3 =0.01c_3=0.01, and different values of δ\\delta (left panel), and for δ=1\\delta =1, and different values of c 3 c_3 (right panel).It is interesting to note that very similar solutions of the vacuum field equations have been obtained in the framework of other modified gravity theories.", "In particular, in conformal Weyl gravity [86], [87], [88], [89], [90], [91], for a static spherically symmetric metric having the standard form $ds^2=-B(r)dt^2+B^{-1}(r)dr^2+r^2d\\Omega $ , the theory admits vacuum solutions of the form [86] $B(r)=1-3\\beta \\gamma -\\frac{\\beta \\left(2-3\\beta \\gamma \\right)}{r}+\\gamma r+kr^2,$ where $\\beta $ , $\\gamma $ and $k$ are arbitrary integration constants [86].", "With the use of the metric (REF ) in [86] it was suggested that Weyl conformal gravity can explain the flat rotation curves of galaxies without invoking the presence of dark matter.", "Another interesting modified gravity theory, the dRGT massive gravity theory [117], [118], [119], with action $S = \\int d^4 x \\sqrt{-g} \\frac{M_{Pl}^2}{2} \\left[R + m_g^2 \\mathcal {U}(g,f) \\right] + S_m \\,,$ where $M_{Pl}$ is the reduced Planck mass, $R$ is the Ricci scalar, $m_g$ is the graviton mass, and $\\mathcal {U}$ is the self-interacting potential of the gravitons, respectively, also admits a static spherically symmetric solution of the form (see [119], and references therein), $B(r) = 1 - \\frac{2 G m(r)}{r} - \\frac{\\Lambda r^2}{3} + \\gamma r + \\zeta ,$ where $m(r)$ is the mass within radius $r$ , and $\\Lambda \\equiv - 3 m^2_g (1 + \\alpha + \\beta )$ , $\\gamma \\equiv - m^2_g C (1 + 2 \\alpha + 3 \\beta )$ , and 
$\\zeta \\equiv m_g^2 C^2 (\\alpha + 3 \\beta )$ .", "Here $C$ , $\\gamma $ , and $\\zeta $ are constants depending on the graviton mass $m_g$ , $\\Lambda $ corresponds to the cosmological constant, while $\\alpha =3\\alpha _3+1$ , and $\\beta =4\\alpha _4-(1-\\alpha )/3$ are coefficients related to the decomposition of the self-interacting potential of the graviton as $\\mathcal {U}=\\mathcal {U}_2+\\alpha _3\\mathcal {U}_3+\\alpha _4\\mathcal {U}_4$ , with $\\mathcal {U}_2 \\equiv [\\mathcal {K}]^2 - [\\mathcal {K}^2]$ , $\\mathcal {U}_3 \\equiv [\\mathcal {K}]^3 - 3 [\\mathcal {K}][\\mathcal {K}^2] + 2 [\\mathcal {K}^3]$ , and $\\mathcal {U}_4 \\equiv [\\mathcal {K}]^4 - 6 [\\mathcal {K}]^2 [\\mathcal {K}^2] + 3[\\mathcal {K}^2]^2 + 8 [\\mathcal {K}][\\mathcal {K}^3] - 6 [\\mathcal {K}^4]$ , respectively, where $\\mathcal {K}^{\\mu }_{\\nu } \\equiv \\delta ^{\\mu }_{\\nu } - \\sqrt{g^{\\mu \\lambda } \\partial _{\\lambda } \\varphi ^a \\partial _{\\nu } \\varphi ^b f_{ab}}$ , and we have denoted $[\\mathcal {K}]=\\mathcal {K}_{\\mu }^{\\mu }$ .", "Here $g_{\\mu \\nu }$ is the physical metric, $f_{\\mu \\nu }$ is a reference (fiducial) metric, and $\\varphi ^a$ are the St$\\ddot{\\rm u}$ ckelberg fields [119].", "For $m_g = 0$ , we recover the standard Schwarzschild solution of general relativity.", "The existence of mathematically identical vacuum solutions in several distinct gravity theories raises the question of the possible universality of metrics of the form (REF ), representing the most general extension of the Schwarzschild metric." ], [ "Numerical black hole solutions", "We will proceed now to obtaining numerical black hole solutions in Weyl geometric gravity in the presence of a Weyl vector field having only a radial component.", "In a static and spherically symmetric configuration, Eqs.", "(REF )-(REF ) become, $\\frac{dm}{d\\eta } &=& -\\frac{7(1-\\eta m)\\zeta ^3+\\eta (5-4\\eta m)\\zeta ^2\\psi +\\zeta (3\\eta ^3 m-4\\eta ^2-3\\psi )\\psi ^2-\\eta \\psi ^4}{\\eta ^3\\psi (\\zeta +\\eta \\psi )^2},\\\\\\frac{d\\nu }{d\\eta } &=& \\frac{\\zeta (1-\\eta m)(3\\zeta +4\\eta \\psi )-(\\eta ^3 m+\\psi )\\psi ^2}{\\eta ^2\\psi (1-\\eta m)(\\zeta +\\eta \\psi )},\\\\\\frac{d\\psi }{d\\eta } &=& -\\frac{2\\zeta }{\\eta ^2},\\\\\\frac{d\\zeta }{d\\eta } &=& -\\frac{\\zeta ^2(1-\\eta m)(5\\zeta +2\\eta \\psi )+\\zeta (3\\eta ^3 m-4\\eta ^2-\\psi )\\psi ^2}{\\eta ^2\\psi (1-\\eta m)(\\zeta +\\eta \\psi )},$ Note that the above system of ordinary, strongly nonlinear system of differential equations is independent of the coupling constants $\\alpha $ and $\\xi $ .", "We integrate the system from infinity, corresponding to $\\eta =\\lim _{r\\rightarrow \\infty }1/r=0$ , up to the event horizon of the black hole.", "We find a numerical black hole solution by detecting a singularity in metric tensor components $e^{\\nu }$ and $e^{\\lambda }$ .", "The singularities in the $e^{\\nu }$ and $e^{\\lambda }$ metric tensor components are indicated by the fulfillment of the conditions $\\left.e^{\\nu (\\eta )}\\right|_{\\eta =\\eta _{\\textit {hor}}}=0,\\;\\left.e^{-\\lambda (\\eta )}\\right|_{\\eta =\\eta _{\\textit {hor}}}=1-\\left.\\left[m(\\eta )\\eta \\right]\\right|_{\\eta =\\eta _{\\textit {hor}}}=0,$ where $\\eta _{\\textit {hor}}$ is the horizon of the black hole.", "For a Schwarzschild black hole the position of the event horizon corresponds to $\\eta _{\\textit {hor}}=1$ ." 
], [ "The initial conditions.", "In order to numerically integrate the gravitational field equations in the variable $\\eta $ , we need to fix the initial conditions at $\\eta =0$ , corresponding to the asymptotic values of the scalar and vector fields, and of the metric.", "As we have seen in the discussion of the asymptotic limits of this model, at infinity the values of the scalar field and of its derivative can be taken either as zero, so that $\\psi (0)=0$ , and $\\zeta (0)=0$ , or as having some arbitrary numerical values.", "As for the metric tensor components, we assume that the metric can be either asymptotically flat, corresponding to $\\nu (0)=\\lambda (0)=0$ , or it can have a de Sitter form, $\\left.e^{-\\lambda }\\right|_{\\eta =\\eta _{0}}=\\left.\\left[1-\\left(4r_g^2/\\mu ^2\\right)\\left(1/\\eta ^2\\right)\\right]\\right|_{\\eta =\\eta _{0}}$ , giving $\\left.m(\\eta )\\right|_{\\eta =\\eta _{0}}=\\left(4r_g^2/\\mu ^2\\right)\\left(1/\\eta ^3\\right)_{\\eta =\\eta _0}=m_0$ .", "Hence, in the de Sitter case we fix the initial values of $(m,\\nu )$ at the physical infinity $\\eta =\\eta _0$ as $m\\left(\\eta _0\\right)=m_0$ , and $\\nu =\\left.\\ln \\left(1-m(\\eta )\\eta \\right)\\right|_{\\eta =\\eta _0}=\\ln \\left(1-m_0\\eta _0\\right)$ .", "To obtain the black hole solutions we vary the initial values of the dimensionless scalar field $\\psi (0)$ , and of its derivative $\\zeta (0)$ , and, through the numerical integration of the field equations, we obtain the variations of the metric tensor components, of the scalar field, and of the Weyl vector field as functions of the dimensionless (inverse) radial coordinate $\\eta $ ." ], [ "Numerical results.", "In the following we consider numerical black hole solutions obtained by numerically integrating the gravitational field equations by assuming the presence of the spatial component of the Weyl vector only.", "We consider two types of models, obtained by fixing the initial value of $\\zeta $ at infinity, and by varying the values of the scalar field, and by fixing the scalar field at infinity, and varying its derivative $\\zeta (0)$ ." 
], [ "Models with fixed $\\zeta (0)$ and varying {{formula:5f70c35f-581f-4bf5-8e95-fb4b6a157ffa}} .", "In Fig.", "REF we present the results of the numerical integration of the field equations in the presence of a radial component of the Weyl vector only, obtained by fixing the initial value of the derivative of the scalar field $\\zeta $ at infinity as $\\zeta (0)=1\\times 10^{-35}$ , and by varying the initial values of the scalar field $\\psi (0)$ .", "We restrict our analysis to the case of the flat asymptotic conditions.", "Figure: Variation of the metric tensor components e ν e^\\nu (upper left panel) and e -λ e^{-\\lambda } (upper right panel), of ψ\\psi (lower left panel), and of ζ\\zeta (lower right panel) as a function of η\\eta , for a Weyl geometric black hole in the presence of the radial component of the Weyl vector only, for ζ(0)=1×10 -35 \\zeta (0)=1\\times 10^{-35}, and for different values of ψ(0)\\psi (0), presented in the legends of the Figures.As one can see from the upper panels Fig.", "REF , the metric tensor components are decreasing functions of $\\eta $ , and they become singular for a finite value of the radial coordinate, indicating the formation of a black hole.", "The position of the event horizon is very sensitive to the small variations of the initial conditions.", "The scalar field is a decreasing function of $\\eta $ , taking only positive values, while $\\zeta $ increases with increasing $\\eta $ .", "Selected values of $\\eta _{\\textit {hor}}$ corresponding to different values of $\\psi (0)$ , and for a fixed $\\zeta (0)$ , are presented in Table REF .", "Table: Variation of the position of the event horizon η ℎ𝑜𝑟 \\eta _{\\textit {hor}} of the Weyl geometric black hole with radial component of the Weyl vector for ζ(0)=1×10 -35 \\zeta (0)=1\\times 10^{-35}, and different initial values of ψ(0)\\psi (0).As one can see from Table REF , the position of the event horizon of the black hole is strongly dependent on the values of the scalar field at infinity, and large variations of its location are possible, ranging from half to twice of the Schwarzschild values.", "Thus, for example, for $\\psi (0)=10^{-15}$ , the physical radius of the event horizon, $r_{hor}$ , is located at $r_{hor}\\approx 0.54$ , while $r_{hor}\\approx 2.10$ for $\\psi (0)=6\\times 10^{-15}$ ." 
], [ "Models with fixed $\\psi (0)$ and varying {{formula:0dd01333-c79d-43cd-9708-fbaa5e570e99}} .", "In Fig.", "REF we present the results of the numerical integration of the field equations in the presence of a radial component of the Weyl vector only, obtained by fixing the initial value of the scalar field $\\psi (0)$ at infinity as $\\psi (0)=1\\times 10^{-15}$ , and by varying the initial value of its derivative $\\zeta (0)$ .", "Figure: Variation of the metric tensor components e ν e^\\nu (upper left panel) and e -λ e^{-\\lambda } (upper right panel), of ψ\\psi (lower left panel), and ζ\\zeta (lower right panel) as a function of η\\eta , for a Weyl geometric black hole in the presence of the radial component of the Weyl vector only, for ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, and for different values of ζ(0)\\zeta (0), presented in the legends of the Figures.As one can see from the upper panels of Fig REF , a singularity does appear in the metric tensor components, corresponding to the formation of a black hole.", "The position of the event horizon is strongly dependent on the derivatives of the scalar field at the (physical) infinity.", "The scalar field is a decreasing function of $\\eta $ , while its derivative monotonically increases towards the event horizon.", "The variation of $\\eta _{\\textit {hor}}$ with respect to the changes in the values of the derivative of the scalar field at infinity $\\zeta (0)$ , for a fixed $\\psi $ , are presented in Table REF .", "Table: Variation of the position of the event horizon η ℎ𝑜𝑟 \\eta _{\\textit {hor}} of the Weyl geometric black hole with radial component of the Weyl vector for ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, and different initial values ζ(0)\\zeta (0).As one can see from the Table REF , the position of the event horizon of the black hole decreases as a function of $\\eta $ (increases as a function of $r$ ) with increasing initial values of the derivative of the scalar field $\\zeta (0)$ .", "Thus, for $\\zeta (0)=0.8\\times 10^{-35}$ , the physical radius of the event horizon has the value $r_{hor}\\approx 0.45$ , while $r_{hor}\\approx 0.63$ for $\\zeta (0)=1.5\\times 10^{-35}$ .", "Hence, higher values of $\\zeta (0)$ generate black holes having higher radii.", "However, we would like to pint out that in these considered examples the event horizons of the black holes are located at much smaller radii than the event horizons of their standard general relativistic counterparts.", "We consider now a second class of Weyl type geometric black holes, in which the Weyl vector has only a temporal component, with $\\omega _{\\mu }$ given by $\\omega _{\\mu }=\\left(\\omega _0, 0,0,0\\right)$ .", "In a static and spherically symmetric gravitational field configuration, and in the presence of a Weyl vector with a temporal component only, the gravitational field equations Eqs.", "(REF )-() become, $\\frac{dm}{d\\eta } &=&\\frac{\\psi }{\\eta ^4} +\\frac{12(1-\\eta m)\\Theta _0^2-3\\psi \\Omega _0^2}{e^\\nu \\eta ^4 \\psi },\\\\\\frac{d\\nu }{d\\eta } &=&- \\frac{e^\\nu \\psi (\\psi +\\eta ^3m)+12(1-\\eta m)\\Theta _0^2-3\\psi \\Omega _0^2}{e^\\nu \\eta ^3 \\psi (1-\\eta m)},\\\\\\frac{d\\psi }{d\\eta } &=& 0,\\\\\\frac{d\\Omega _0}{d\\eta } &=& - \\frac{2\\gamma \\Theta _0}{\\eta ^2},\\\\\\frac{d\\Theta _0}{d\\eta } &=& \\frac{-\\gamma \\eta \\psi \\Omega _0+\\Theta _0(4\\eta ^2(1-\\eta m)-6e^{-\\nu }\\Omega _0^2)}{2\\eta ^3(1-\\eta m)},$" ], [ "The initial conditions", "In this case the model depends on six parameters, the 
initial conditions at infinity of $m$ and $\\nu $ , describing the metric tensor components, as well as of the scalar field $\\psi $ , of the Weyl vector $\\Omega _0$ , and of its derivative $\\Theta _0$ .", "Moreover, a coupling constant $\\gamma $ is also present in the model.", "In the following, we investigate numerically only the asymptotically flat case, thus assuming $\\nu (0)=\\lambda (0)=0$ .", "As one can see from Eqs.", "(REF ), asymptotically the Weyl vector temporal component, and its derivative, tend to zero, and hence we have $\\Omega _0(0)=0$ , and $\\Theta _0(0)=0$ , respectively.", "For the initial value of $m$ we adopt the condition $m(0)=0$ .", "The scalar field $\\psi $ is a constant, and its value can be fixed arbitrarily.", "From the numerical point of view we investigate the effects of the variation of the coupling constant, and of the numerical value of the scalar field, on the position of the event horizon." ], [ "Numerical results", "We present now the numerical results of the numerical integration of the field equations in the presence of a temporal component of the Weyl vector only." ], [ "Varying the value of the coupling constant $\\gamma $ .", "As a first example of black hole solutions with temporal Weyl vector we consider models in which only the coupling parameter $\\gamma $ varies, with all other parameters fixed.", "The results of the numerical integration of the field equations (REF )-() are represented in Fig.", "REF .", "Figure: Variations of the metric tensor components e ν e^\\nu (upper left panel), of the effective mass ηm\\eta m (upper right panel), of Ω 0 \\Omega _0 (lower left panel), and of Θ 0 \\Theta _0 (lower right panel) as a function of η\\eta in the presence of the temporal component of the Weyl vector only for ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, Ω 0 (0)=1×10 -11 \\Omega _0(0)=1\\times 10^{-11}, Θ 0 (0)=1×10 -18 \\Theta _0(0)=1\\times 10^{-18}, and for different values of γ\\gamma , presented in the legends of the Figures.As one can see from Fig.", "REF , the metric tensor components do present a singular behavior, indicating the formation of the event horizon.", "There is a strong dependence on the numerical values of $\\gamma $ of $\\eta _{\\textit {hor}}$ .", "The behavior and the numerical values of the effective mass are also strongly dependent on the coupling constant $\\gamma $ , indicating an increase in the effective mass when approaching the event horizon of the black holes.", "The variation of the Weyl vector temporal component $\\Omega _0$ is represented in the lower right panel of Fig.", "REF .", "As a function of $\\eta $ , the Weyl vector field is a linearly decreasing function, while its derivative is an increasing function of the inverse of the radial variable.", "The effects of the variation of the coupling constant $\\gamma $ on the behavior of the fields are significant.", "The positions of the event horizon of the black holes are presented in Table REF for different values of $\\gamma $ , with all other initial conditions fixed as in Fig.", "REF .", "Table: The position of the event horizon of the Weyl geometric type black holes in the presence of the temporal component of the Weyl vector only for different values of γ\\gamma , and with the other parameters fixed like in the plots in Fig.", ".There is a significant dependence of the position of the event horizon of the black hole on the numerical value of the coupling $\\gamma $ , with the position of the physical radius $r_{hor}$ of the event horizon of the 
black hole decreasing with increasing $\gamma $ .", "Thus, for $\gamma =100$ , $r_{hor}\approx 1.27$ , while for $\gamma =500$ , $r_{hor}\approx 0.95$ .", "Hence, Weyl geometric black holes with radii larger than their Schwarzschild counterparts can be obtained for small values of the coupling constant $\gamma $ ." ], [ "Varying the value of the scalar field.", "We consider now black hole models in Weyl geometric type gravity in the presence of the temporal component of the Weyl vector obtained by varying the numerical values of the constant scalar field $\psi $ , while keeping all the other numerical values at infinity of the physical and geometrical quantities fixed.", "The variations with respect to $\eta $ of the metric tensor components and of the effective mass are represented for this case in Fig.", "REF .", "As one can see from Fig.", "REF , for this choice of parameters the position of the event horizon is strongly dependent on the values of the scalar field, which, as a constant, acts as a cosmological background.", "The Weyl vector $\Omega _0$ is linearly decreasing with respect to $\eta $ (linearly increasing as a function of $r$ ), while its derivative is a monotonically increasing function of $\eta $ (monotonically decreasing with respect to $r$ ).", "Figure: Variations of the metric tensor components e ν e^\nu (upper left panel), and of the effective mass ηm\eta m (upper right panel), of Ω 0 \Omega _0 (lower left panel), and of Θ 0 \Theta _0 (lower right panel) as a function of η\eta of a Weyl black hole in the presence of the temporal component of the Weyl vector only, for γ=100\gamma =100, Ω 0 (0)=1×10 -11 \Omega _0 (0)=1\times 10^{-11}, Θ 0 (0)=1×10 -20 \Theta _0(0)=1\times 10^{-20}, and for different values of ψ(0)\psi (0), presented in the legends of the Figures.The positions of the event horizon are presented, for different initial values of the scalar field $\psi (0)$ , in Table REF .", "Table: Variation of the position of the event horizon of the Weyl geometric black holes in the presence of a temporal component of the Weyl vector for different initial value of the scalar field ψ 0 (0)\psi _0(0) .", "The numerical values of the other quantities are fixed like in the plots in Fig.", ".There is a significant dependence of the position of the event horizon on the values of the scalar field, with $\eta _{\textit {hor}}$ decreasing with the increase of the numerical values of $\psi (0) $ .", "The physical radius $r_{hor}$ increases with the increase of $\psi (0)$ , from $r_{hor}\approx 0.33$ ($\psi (0)=10^{-15}$ ), to $r_{hor}\approx 1.33$ , for $\psi (0)=4\times 10^{-15}$ .
], [ "Varying the initial value $\\Omega _0(0)$ of the temporal component of the Weyl vector.", "We consider now Weyl geometric type black holes obtained by varying only the initial value of $\\Omega _0$ , while fixing the values at infinity of the other physical and geometrical quantities.", "The variations of the metric tensor components and of the effective mass are presented in Fig.", "REF .", "As one can see from the Figure, the presence of singularities in the metric tensor components indicate the formation of an event horizon, and therefore, the presence of a black hole.", "The position of the singularities is strongly dependent on the values at infinity of the Weyl vector.", "This dependence is also apparent in the behavior of the effective mass $m$ , which increases with increasing $\\eta $ .", "The variations of the temporal component of the Weyl vector is represented in the lower right panel in Fig.", "REF .", "The Weyl field decreases linearly towards the event horizon, and its values significantly depend on the asymptotic value of the field.", "The derivative of the Weyl vector is an increasing function of $\\eta $ (a decreasing function of $r$ ).", "The explicit values of the positions of the event horizon are presented in Table REF .", "As one can see from Table REF , there is a very strong dependence of $\\eta _{hor}$ on $\\Omega _0(0)$ .", "The event horizon location $\\eta _{\\textit {hor}}$ decreases with increasing $\\Omega _0(0)$ , thus indicating an important effect of the Weyl vector on the global properties of the black holes.", "On the other hand, the physical radius of the event horizon $r_{hor}$ increases from $r_{hor}\\approx 0.33$ ($\\Omega _0(0)=10^{-11}$ ), to $r_{hor}\\approx 1.27$ , for $\\Omega _0(0)=6\\times 10^{-10}$ , respectively.", "Figure: Variations of the metric tensor component e ν e^\\nu (upper left panel), of the effective mass ηm\\eta m (upper right panel), of Ω 0 \\Omega _0 (lower left panel), and of Θ 0 \\Theta _0 (lower right panel) as a function of η\\eta for a Weyl black hole in the presence of a temporal component of the Weyl vector only, for γ=100\\gamma =100, ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, Θ 0 (0)=1×10 -20 \\Theta _0(0)=1\\times 10^{-20}, and for different values of Ω 0 (0)\\Omega _0(0).Table: Variation of the position of the event horizon for a Weyl geometric black hole in the presence of the temporal component of the Weyl vector Ω 0 \\Omega _0 with respect to Ω 0 (0)\\Omega _0(0).", "The numerical values of the other quantities are fixed like in the plots in Fig.", ".We consider now the general case in which both components of the Weyl vector are nonzero, and therefore $\\omega _{\\mu }=\\left(\\omega _0,\\omega _1,0,0\\right)$ .", "Then, the field equations (REF ) and (REF ) can be reformulated in a dimensionless form as a first order dynamical system given by $\\frac{dm}{d\\eta } &=& \\frac{1}{\\eta ^4 \\psi ^2 (\\zeta + \\eta \\psi )} \\Big (12e^{-\\nu }\\Theta _0^2\\psi (1-\\eta m)(2\\zeta +\\eta \\psi )-\\zeta ^2(1-\\eta m)(4\\zeta +5\\eta \\psi )+\\zeta \\psi ^2(\\eta ^3m+2\\psi )+\\eta \\psi ^4 \\Big ),\\\\\\frac{d\\nu }{d\\eta } &=& \\frac{1}{\\eta ^2 \\psi (1-\\eta m) (\\zeta + \\eta \\psi )} \\Big ( \\zeta (1-\\eta m)(3\\zeta +4\\eta \\psi )-(\\eta ^3m+\\psi )\\psi ^2-3e^{-\\nu }\\psi \\big (4(1-\\eta m)\\Theta _0^2+\\psi \\Omega _0^2\\big )\\Big ),\\\\\\frac{d\\psi }{d\\eta } &=& - \\frac{ 2\\zeta }{\\eta ^2},\\\\\\frac{d\\zeta }{d\\eta } &=& -\\frac{\\zeta }{\\eta ^3\\psi ^2} \\bigg [ 2\\zeta (\\zeta +2\\eta 
\\psi )-12e^{-\\nu }\\psi \\Theta _0^2-\\frac{\\psi ^2}{1-\\eta m}\\big (\\eta ^2(2-\\eta m)-1\\big ) \\bigg ],\\\\\\frac{d\\Omega _0}{d\\eta } &=& - \\frac{2\\gamma \\Theta _0}{\\eta ^2},\\\\\\frac{d\\Theta _0}{d\\eta } &=& \\frac{1}{2\\eta ^3\\psi ^2(1-\\eta m)(\\zeta + \\eta \\psi )} \\Big ( 2\\Theta _0(1-\\eta m)(12e^{-\\nu }\\zeta \\Theta _0^2\\psi -2\\zeta ^3-\\eta \\zeta ^2\\psi +2\\eta ^3\\psi ^3)-6e^{-\\nu }\\eta \\Theta _0\\psi ^3\\Omega _0^2\\nonumber \\\\&&+2\\eta ^2\\Theta _0\\zeta \\psi ^2(4-3\\eta m)+\\gamma \\psi ^3\\Omega _0(\\zeta +\\eta \\psi )+2\\Theta _0\\zeta \\psi ^3 \\Big ),$" ], [ "The initial conditions", "For this model the general solution of the system depends on six parameters $\\left(\\gamma , m(0),\\psi (0),\\zeta (0),\\Omega _{0}(0),\\Theta _{0}(0)\\right)$ , representing the numerical values of the coupling constant $\\gamma $ , and of the initial conditions at infinity.", "As we have already discussed, the scalar field and the $\\omega _1$ component have an oscillatory behavior for large $r$ , and hence at infinity they do not converge to a single value.", "On the other hand, the $\\omega _0$ component of the field becomes an arbitrary integration constant at infinity.", "However, we will select only the set of initial conditions that is consistent with the previously analyzed particular cases, and which lead to astrophysical black holes similar to the general relativistic Schwarzschild ones, with similar values for the positions of the event horizon.", "These conditions imply again very small initial values of the scalar and Weyl vector fields, and of their derivatives." ], [ "Numerical results", "We consider now the results of the numerical integration of the static, spherically symmetric field equations of Weyl geometric gravity, obtained by varying the numerical values of the coupling constant, and of the initial conditions at infinity." 
], [ "Varying the coupling constant $\\gamma $ .", "We consider first classes of numerical black hole solutions obtained by varying the coupling constant $\\gamma $ only, while keeping the initial conditions at infinity of the physical and geometrical quantities fixed.", "The variations of the metric tensor components and of the effective mass are represented for different values of $\\gamma $ , in Fig.", "REF .", "The formation of an event horizon is indicated by the presence of the singularities in the metric tensor components.", "For the adopted values of the coupling constant there is a strong dependence of the position of $\\eta _{hor}$ on $\\gamma $ .", "On the other hand, the effective mass $m$ of the Weyl black hole is a monotonically increasing function of $\\eta $ , also showing a stronger dependence on the Weyl couplings.", "Figure: Variation of the metric tensor component e ν e^\\nu (upper left panel), of ηm\\eta m (upper right panel), of Θ 0 \\Theta _0 (middle left panel), of Ω 0 \\Omega _0 (middle right panel), of ψ\\psi (lower left panel) and of ζ\\zeta (lower right panel) as a function of η\\eta for Weyl geometric black holes in the presence of both temporal and radial components of the Weyl vector for ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, ζ(0)=1×10 -28 \\zeta (0)= 1\\times 10^{-28}, Ω 0 (0)=1×10 -11 \\Omega _0(0)=1\\times 10^{-11}, Θ 0 (0)=1×10 -18 \\Theta _0(0)=1\\times 10^{-18}, and for different values of the coupling constant γ\\gamma , presented in the legends of the Figures.Table: Location of the event horizon of the Weyl geometric black holes in the presence of both temporal and radial components of the Weyl vector, for ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, ζ(0)=1×10 -28 \\zeta (0)= 1\\times 10^{-28}, Ω 0 (0)=1×10 -11 \\Omega _0(0)=1\\times 10^{-11}, and Θ 0 (0)=1×10 -18 \\Theta _0(0)=1\\times 10^{-18}, respectively, and for different values of the coupling constant γ\\gamma .The positions of the event horizon of the Weyl black holes for different values of $\\gamma $ and fixed initial conditions are presented in Table REF .", "The modification of the coupling constant on a large range of values has only a relatively weak effect on the position of the event horizon, as compared to the Schwarzschild case.", "However, the increase of $\\gamma $ leads to a decrease of the value of the physical radius of the event horizon, from $r_{hor}\\approx 1.28$ ($\\gamma =100$ ), to $r_{hor}\\approx 0.95$ , corresponding to $\\gamma =500$ ." 
], [ "Varying the initial values of the scalar field $\\psi (0)$ and of the temporal component of the Weyl vector {{formula:c3ff1911-e0c9-4f25-a184-f2bb9d0cbd5b}} .", "We consider now numerical black hole solutions obtained by varying the initial values of the scalar field, and of the temporal component of the Weyl vector, with all the other quantities fixed.", "The variation of the position of the event horizon is presented in Fig.", "REF .", "Figure: Variation of the position of the event horizon of the Weyl geometric black hole in the presence of both temporal and radial components of the Weyl vector as a function of the initial values of the scalar field ψ 0 (0)\\psi _0(0) and of the temporal component of the Weyl vector Ω 0 (0)\\Omega _0(0).", "To obtain the Figure we have assumed γ=10 2 \\gamma =10^2, Θ 0 (0)=10 -20 \\Theta _{0}(0)=10^{-20}, and ζ(0)=10 -28 \\zeta (0)=10^{-28}, respectively.The positions of the event horizon of the Weyl black holes for different values of $\\psi (0)$ , and fixed initial conditions and values of the coupling constant $\\gamma $ are presented in Table REF .", "As one can see from the Table, very small variations of the initial values of the scalar field can induce very significant changes in the position of the event horizon.", "Interestingly enough, increasing the value of the scalar field at infinity leads to a decrease of the inverse of the radius of the horizon of the black hole, leading to smaller values of $\\eta _{hor}$ , as compared to the Schwarzschild case.", "The physical radius of the event horizon increases with increasing $\\psi (0)$ , from $r_{hor}\\approx 0.33$ , corresponding to $\\psi (0)=10^{_15}$ , to $r_{hor}\\approx 1.33$ for $\\psi (0)=4\\times 10^{-15}$ , respectively.", "Table: Location of the event horizon of the Weyl geometric black holes for γ=100\\gamma =100, ζ(0)=1×10 -28 \\zeta (0)= 1\\times 10^{-28}, Ω 0 (0)=1×10 -11 \\Omega _0(0)=1\\times 10^{-11}, and Θ 0 (0)=1×10 -20 \\Theta _0(0)=1\\times 10^{-20}, respectively, and for different values of ψ(0)\\psi (0).The positions of the event horizon of the Weyl black holes for different values of $\\Omega _0(0)$ , and fixed initial conditions are presented, for $\\gamma =100$ , in Table REF .", "Increasing the values of $\\Omega _0(0)$ leads again to a decrease of the inverse of the radius of the Weyl geometric black hole, and to an increase of the physical radius of the event horizon $r_{hor}=1/\\eta _{hor}$ .", "While for $\\Omega _0(0)=10^{-11}$ , the event horizon of the black hole is located at $\\eta _{hor}\\approx 3$ , $r_{hor}=1/\\eta _{hor}\\approx 0.33$ , for $\\Omega _0(0)=6\\times 10^{-10}$ , the position of the event horizon of the black hole has a value of $\\eta _{hor}\\approx 0.8$ , leading to a physical radius of the event horizon of the Weyl black hole larger than the value of the horizon of the Schwarzschild black hole, $r_{hor}=1/\\eta _{hor}\\approx 1.25$ .", "Table: Location of the event horizon of the Weyl geometric black holes in the presence of both temporal and radial components of the Weyl vector for γ=100\\gamma =100, ζ(0)=1×10 -28 \\zeta (0)= 1\\times 10^{-28}, ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, and Θ 0 (0)=1×10 -20 \\Theta _0(0)=1\\times 10^{-20}, respectively, and for different values of Ω 0 (0)\\Omega _0(0)." 
], [ "Varying the initial values of the derivatives of the scalar field $\\zeta (0)$ , and of the temporal component of the Weyl vector, {{formula:3c11c2fb-855c-4a5e-9f40-500da9ffe52e}} .", "We consider now models obtained by varying the initial values of the derivatives of the scalar field, $\\zeta (0)$ , and of the derivative of the temporal component of the Weyl vector $\\Theta _0(0)$ only, while keeping the initial conditions at infinity of the other physical and geometrical quantities fixed.", "The variations of the positions of the position of the event horizon are represented in Fig.", "REF .", "Our results indicate the formation of an event horizon, due to the presence of the singularities in the metric tensor components.", "Figure: Variation of the position of the event horizon of the Weyl geometric black hole in the presence of both temporal and radial components of the Weyl vector as a function of the initial values of the derivatives of the scalar field ψ 0 (0)\\psi _0(0) and of the temporal component of the Weyl vector Θ 0 (0)\\Theta _0(0).", "To obtain the Figure we have assumed γ=10 2 \\gamma =10^2, ψ(0)=10 -15 \\psi (0)=10^{-15}, and Ω 0 (0)=10 -11 \\Omega _0(0)=10^{-11}, respectively.The positions of the event horizon of the Weyl black holes for different $\\zeta (0)$ values and fixed initial conditions are presented in Table REF .", "Table: Location of the event horizon of the Weyl geometric black holes in the presence of both temporal and radial components of the Weyl vector, for γ=100\\gamma =100, Ω 0 (0)=1×10 -11 \\Omega _0(0)= 1\\times 10^{-11}, ψ(0)=1×10 -15 \\psi (0)=1\\times 10^{-15}, Θ 0 (0)=1×10 -20 \\Theta _0(0)=1\\times 10^{-20}, and for different values of ζ(0)\\zeta (0).With the increase of $\\zeta (0))$ , the position of the event horizon increases, in the physical radial distance variable $r$ , from $r_{hor}\\approx 0.34$ , to $r_{hor}\\approx 1.375$ , indicating the possibility of creation of both smaller and larger black holes as compared to their general relativistic counterparts." ], [ "Thermodynamics of the Weyl geometric black holes", "In our analysis of the vacuum field equations of the linear representation of the quadratic Weyl geometric gravity we have assumed that the metric tensor components $e^{\\nu }$ and $e^{\\lambda }$ , and consequently, the effective mass function $m$ all depend only on the radial coordinate $r$ .", "Therefore, the geometry of the spacetime is static, and, moreover, a timelike Killing vector $t^{\\mu }$ always does exist [121], [122]." 
], [ "Surface gravity of the Weyl black holes.", "For a static black hole that has a Killing horizon the surface gravity $\\tilde{\\kappa }$ is generally defined according to [121], [122] $t^{\\mu }\\nabla _{\\mu }t^{\\nu }=t^{\\nu }\\tilde{\\kappa }.$ By adopting a static, spherically symmetric black hole geometry given by $ds^2=-\\tilde{\\sigma } ^2 (r)f(r)c^2dt^2+\\frac{dr^2}{f(r)}+r^2d\\Omega ^2,$ where $\\tilde{\\sigma }$ and $f$ are functions of the radial coordinate only, and after suitable normalizing the Killing vector $t^{\\mu }$ as $t^{\\mu }=\\left(1/\\tilde{\\sigma }_{\\infty },0,0,0\\right)$ , the surface gravity of the black hole is given by [122] $\\tilde{\\kappa }=\\left(\\frac{\\tilde{\\sigma } _{hor}}{\\tilde{\\sigma } _{\\infty }}\\right)\\frac{c^4}{4GM_{hor}}\\left.\\left[1-\\frac{2GM^{\\prime }(r)}{c^2}\\right]\\right|_{hor},$ where the subscript hor specifies that the calculation of all physical quantities must be done on the outer apparent horizon of the black hole.", "If $\\tilde{\\sigma } \\equiv 1$ , and $M={\\rm constant}$ , then the expression of the surface gravity of a Schwarzschild black hole, $\\tilde{\\kappa }=c^4/4GM_{\\textit {hor}}$ [121], is reobtained." ], [ "Hawking temperature of the quadratic Weyl geometric black holes.", "The Hawking temperature $T_{BH}$ of the black hole is defined according to [121], [122] $T_{BH}=\\frac{\\hbar }{2\\pi ck_B} \\tilde{\\kappa },$ where by $k_B$ we have denoted Boltzmann's constant.", "In the system of dimensionless variables defined in Eq.", "(REF ), the temperature of the black hole is obtained as $T_{BH}=T_H\\frac{1}{m\\left(\\eta _{\\textit {hor}}\\right)}\\left.\\left(1+\\eta ^2\\frac{dm}{d\\eta }\\right)\\right|_{\\eta =\\eta _{hor}},$ where we have introduced the notation $T_H=\\frac{\\hbar c^3}{8\\pi Gk_BnM_{\\odot }},$ corresponding to the Hawking temperature of the standard general relativistic Schwarzschild black hole.", "The variation of the dimensionless horizon temperature $\\theta =\\frac{T_{BH}}{T_H},$ of the black holes in quadratic Weyl geometric gravity is represented, for selected values of the model parameters, and for the general case, with the Weyl vector possessing both temporal and radial components, in Fig.", "REF .", "Figure: Variation of the dimensionless black hole horizon temperature for a Weyl geometric gravity black hole in the presence of both temporal and radial components of the Weyl vector for θ\\theta for Θ 0 (0)=10 -18 \\Theta _{0}(0)=10^{-18}, Ω 0 (0)=10 -11 \\Omega _{0}(0)=10^{-11}, and ζ(0)=10 -28 \\zeta (0)=10^{-28}, respectively, and different values of ψ(0)\\psi (0) (left panel), and for ψ(0)=10 -15 \\psi (0)=10^{-15}, and different values of ζ(0)\\zeta (0) (right panel).", "In both panels, the values of γ\\gamma are varied continuously.As one can see from Fig.", "REF , , the temperature of the horizon of the Weyl black holes is generally higher than that of their general relativistic counterparts.", "This is also related to the wider range of event horizon positions, as compared to the Schwarzschild black hole case.", "The Hawking temperature is a monotonically increasing function of the position of the event horizon, and it has a strong dependence on the initial values of the scalar field, and of its derivatives.," ], [ "Specific heat of the Weyl black holes. 
", "Another important physical quantity characterizing the thermodynamic properties of the black holes, is their specific heat $C_{BH}$ , which can be obtained from the definition [121], [122], $C_{BH}&=&\\frac{dM}{dT_{BH}}=\\left.\\frac{dM}{dr}\\frac{dr}{dT_{BH}}\\right|_{r=r_{hor}}\\nonumber \\\\&=&\\frac{nM_{\\odot }}{T_H} \\left.\\frac{dm\\left(\\eta \\right)}{d\\eta }\\frac{d\\eta }{d\\theta }\\right|_{\\eta =\\eta _{hor}}.$ where we have denoted $C_H=nM_{\\odot }/T_H$ .", "The variations of the dimensionless specific heat $C_{eff}=C_{BH}/C_H$ of the Weyl black holes as a function of the dimensionless horizon radius in quadratic Weyl geometric gravity are represented, for some selected values of the model parameters, in Fig.", "REF .", "Figure: Variation of the dimensionless specific heat C eff =C BH /C H C_{eff}=C_{BH}/C_H of the Weyl geometric black holes in the presence of both temporal and radial components of the Weyl vector for Θ 0 (0)=10 -18 \\Theta _{0}(0)=10^{-18}, Ω 0 (0)=10 -11 \\Omega _{0}(0)=10^{-11}, and ζ(0)=10 -28 \\zeta (0)=10^{-28}, respectively, and for different values of ψ(0)\\psi (0) (left panel) and for ψ(0)=10 -15 \\psi (0)=10^{-15}, and different values of ζ(0)\\zeta (0) (right panel).", "In both panels, the values of γ\\gamma are varied continuously.Similarly to the Hawking temperature, the numerical values of $C_{eff}$ do depend on the initial conditions at infinity of the scalar field, and of its derivative.", "The specific heat of the Weyl geometric black holes is a rapidly decreasing function of $\\eta _{hor}$ ." ], [ "The entropy of the Weyl geometric black holes.", "The entropy $S_{BH}$ of the Weyl geometric black hole is obtained generally as [121], [122] $S_{BH}&=&\\int _{\\infty }^{r_{\\textit {hor}}}{\\frac{dM}{T_{BH}}}=\\int _{\\infty }^{r_{\\textit {hor}}}{\\frac{1}{T_{BH}}\\frac{dM}{dr}dr}.$ In the set of the dimensionless variables considered in the present study we have $S_{BH}\\left(\\eta _{hor}\\right)=C_H\\int _0^{\\eta _{hor}}{\\frac{1}{\\theta \\left(\\eta \\right)}\\frac{dm\\left(\\eta \\right)}{d\\eta }d\\eta }.$ In the following we denote $S_{eff}=S_{BH}\\left(\\eta _{hor}\\right)/C_H$ .", "The variation as a function of the dimensionless horizon radius $\\eta _{hor}$ of the entropy $S_{eff}$ of the black holes in the Weyl geometric gravity theory with only radial component in Weyl vector is represented, as a function of the dimensionless horizon radius, for different values of $\\psi (0)$ and $\\zeta (0)$ , in Fig.", "REF .", "Figure: Variation of the dimensionless entropy S eff =S BH /C H S_{eff}=S_{BH}/C_H of the Weyl geometric black holes in the presence of both temporal and radial components of the Weyl vector as a function of the position of the event horizon for Θ 0 (0)=10 -18 \\Theta _{0}(0)=10^{-18}, Ω 0 (0)=10 -11 \\Omega _{0}(0)=10^{-11} and ζ(0)=10 -28 \\zeta (0)=10^{-28}, respectively, and for different values of ψ(0)\\psi (0) (left panel), and for ψ(0)=10 -15 \\psi (0)=10^{-15}, and different values of ζ(0)\\zeta (0) (right panel).", "In both panels, the values of γ\\gamma are varied continuously.The black hole entropy decreases with the increase of the radius of the event horizon.", "While there is a significant dependence on the values at infinity of the scalar field, the dependence of the entropy on the derivatives of the field is weak, and no significant change in its values occurs due to the modifications of $\\zeta (0)$ ." 
], [ "The luminosity of the Weyl black holes.", "The formation and the evaporation of a spherically symmetric black hole in conformal gravity was investigated in [123], by considering the collapse of a spherically symmetric thin shell of radiation, leading to the formation of a singularity-free non-rotating black hole.", "The black hole has the same Hawking temperature as a Schwarzschild black hole with the same mass, and it completely evaporates either in a finite or in an infinite time, depending on the ensemble.", "The evaporation process of a spherical neutral AdS black hole in four-dimensional conformal gravity, where the equations of states are branched, was investigated in [124].", "The luminosity of the Weyl geometric black holes, due to the Hawking evaporation processes, can be obtained according to the relation [121], [122], $L_{BH}=-\\frac{dM}{dt}=-\\sigma A_{BH}T_{BH}^4,$ where $\\sigma $ is a parameter that depends on the considered model, while $A_{BH}=4\\pi r_{hor}^2$ is the area of the event horizon.", "For the black hole evaporation time $\\tau $ we thus have $\\hspace{-22.76228pt}\\tau &=&\\int _{t_{\\it {in}}}^{t_{\\it {fin}}}{dt}=-\\frac{1}{4\\pi \\sigma }\\int _{t_{\\it {in}}}^{t_{\\it {fin}}}{\\frac{dM}{r_{\\textit {hor}}^2T_{BH}^4}},$ where $t_{{\\it in}}$ and $t_{{\\it fin}}$ represent the initial and the final times considered for the evaporation process.", "Equivalently, we can obtain the evaporation times of the Weyl geometric black hole in the form $\\tau _{BH}\\left(\\eta _{hor}\\right)= \\tau _H\\int _0^{\\eta _{hor}}{\\frac{1}{\\eta ^2\\theta ^4\\left(\\eta \\right)}\\frac{dm\\left(\\eta \\right)}{d\\eta }d\\eta },$ where we have denoted $\\tau _H=\\frac{c^4}{8\\pi G^2\\sigma nM_{\\odot }T_{BH}^4}.$ The variations of the dimensionless Hawking evaporation time $\\tau _{eff}=\\tau _{BH}/\\tau _H$ of black holes in the Weyl geometric gravity as a function of the position of the event horizon are represented in Fig.", "REF .", "In this Figure we have varied the values of $\\gamma $ continuously, and kept the values of the other parameters fixed.", "Figure: Variation of the evaporation time τ eff =τ BH /τ H \\tau _{eff}=\\tau _{BH}/\\tau _H of the Weyl geometric black holes in the presence of both temporal and radial components of the Weyl vector as a function of the position of the event horizon for Θ 0 (0)=10 -18 \\Theta _{0}(0)=10^{-18}, Ω 0 (0)=10 -11 \\Omega _{0}(0)=10^{-11} and ζ(0)=10 -28 \\zeta (0)=10^{-28}, and for different values of ψ(0)\\psi (0) (left panel), and for ψ(0)=10 -15 \\psi (0)=10^{-15}, and different values of ζ(0)\\zeta (0) (right panel).", "In both panels, the values of γ\\gamma are varied continuously.The Hawking evaporation times of the Weyl geometric black holes show a strong dependence on the initial conditions at infinity of both the scalar field, and of its derivative.", "They are rapidly decreasing functions of the radius of the event horizon, and they become negligibly small for enough massive black holes.", "The idea that conformal symmetry is an exact, but spontaneously broken symmetry of nature [102], [103], has many attractive features.", "If Einstein's field equations are reformulated by strictly imposing conformal symmetry, then the conformal component of the metric field can be treated as a dilaton field with only renormalizable interactions.", "Hence, this condition imposes some strong constraints on the gravitational theories, and they are equivalent to demanding regularity of the action as the dilaton field variable 
tends to zero.", "Moreover, such a procedure can turn a black hole into a regular, topologically trivial soliton with no singularities, horizons or firewalls [103].", "In the present work, we have considered the possible existence of black hole type structures in the framework of the simplest conformally invariant gravitational theory, constructed ab initio in a Weyl geometry.", "The simplest such gravitational action contains the quadratic Weyl scalar and the electromagnetic type scalar, defined in a way similar to the Faraday tensor in standard electromagnetic theory, with the Weyl vector playing the role of the electromagnetic four-potential.", "This theory intrinsically contains a scalar degree of freedom, and, once reformulated in the standard Riemannian geometry, it can be linearized in the Ricci scalar.", "Hence, geometric conformally invariant Weyl gravity is equivalent in the Riemannian space to a scalar-vector-tensor theory, in which, besides the metric tensor, the gravitational properties are determined by a scalar and a tensor field, with both having purely geometric origins.", "To investigate the physical properties of the scalar-vector-tensor Weyl geometric theory, in the present work we have considered one of the simplest possible physical and geometrical situations, namely, the case of the vacuum static and spherically symmetric systems.", "Even by adopting this simple theoretical model, the gravitational field equations are extremely complicated.", "Hence, in order to obtain solutions of the field equations one must extensively use numerical methods.", "In order to do so in a computationally efficient way, one must first rewrite the static spherically symmetric Weyl geometric gravitational field equations in vacuum in a dimensionless form suitable for numerical integration.", "This can be achieved by introducing as an independent variable the inverse of the radial coordinate.", "Moreover, we have reformulated the gravitational field equations as a first order dynamical system.", "In this representation the numerical integration procedure is significantly simplified.", "In analogy with the Schwarzschild black hole solution of standard general relativity we have also represented the metric tensor component $e^{\\lambda }$ in terms of an effective mass.", "Hence, in order to obtain a numerical solution of the field equations of the Weyl geometric gravitational theory we need to give the initial conditions at infinity of the metric tensor components, of the scalar and of the vector fields, and of their derivatives, respectively.", "As for the metric tensor components, we have assumed the condition of the asymptotically flat geometry, while the numerical values at infinity of the components of the scalar and vector fields, and of their derivatives, have been chosen in an arbitrary way.", "We consider that the presence of a singularity in the field equations, or, more precisely, of a singular point in the behavior of the metric tensor components, indicates the formation of an event horizon and, consequently, indicates the presence of a black hole type object.", "The total mass of the black hole can be determined from the effective mass appearing in the radial metric tensor component, and it can be interpreted as containing the standard (baryonic) mass of the black hole to which the contributions from the scalar and Weyl type vector fields are added.", "Generally, in spherical symmetry, only two components of the Weyl vector do survive, which are the temporal and the radial 
components, respectively.", "Based on this result, we have considered the solutions of the gravitational field equations of the quadratic Weyl geometric gravity in three cases, corresponding to the Weyl vector having only a radial component only, a temporal component only, and in the presence of both components.", "As a first general conclusion of our study, we have detected the formation of an event horizon in all three cases.", "Consequently, these results indicate the formation of black holes within the quadratic geometric Weyl gravity theory.", "The position of the event horizon of the black holes depends significantly on the numerical values of the scalar and vector fields, and of their derivatives at infinity (representing the initial conditions for our dynamical systems).", "These results show the existence of an interesting relation between the asymptotic values of the scalar and Weyl fields, and the black hole properties.", "For example, in the case of the presence of the temporal component of the Weyl vector only, for particular values of the vector field, and of its derivative at infinity, the physical position $r_{hor}$ of the event horizon of the black hole can be located at distances of the order of 0.33 - 1.27 of the standard Schwarzschild radius, a result that indicates the possibility of formation of a large variety of Weyl type black holes, including the presence of more compact black holes, having the same mass, but a smaller radius, than the standard Schwarzschild black hole of general relativity.", "On the other hand, black holes with the same mass, but with radii larger than the Schwarzschild radius, can also exist.", "As a general rule, the position of the event horizon of the Weyl geometric black holes is also dependent on the coupling parameter $\\gamma $ of the model.", "Thus, there is a multi-parametric dependence of the Weyl geometric black hole properties on the geometric couplings, and on the asymptotic conditions at infinity of the metric and scalar and vector fields.", "We would also like to point out that in our numerical investigations we could not detect any case of the formation of a naked singularity, with the singularity located at the center $r=0$ .", "In all the considered numerical cases the black holes are hidden beyond an event horizon.", "But, of course, the possible existence in the present conformally invariant quadratic Weyl geometric model of naked singularities, or topologically trivial soliton type structures cannot be excluded a priori.", "The numerical detection of these types of objects requires a significant extension of the range of values of the parameter space, and more detailed numerical investigations of the dynamical systems describing static spherically symmetric quadratic Weyl geometric gravity structures in vacuum.", "In the present work we have presented only the numerical results obtained by considering the asymptotically flat conditions for the metric tensor at infinity.", "Numerical solutions corresponding to the de Sitter solutions can also be easily obtained.", "However, if the de Sitter condition is assumed to be of cosmological type, the deviations from the asymptotically flat case are negligible.", "But if one assumes a significant difference from flatness at infinity, several distinct types of black holes can be obtained.", "In particular, solutions corresponding to very big \"super-massive black holes\" can also be constructed, by imposing appropriate initial conditions on the mass function $m$ .", "One such example 
of super-massive black hole solutions of the Weyl geometric field equations in the presence of the radial component of the Weyl vector are presented in Table REF , for $m(0)=0.4$ .", "In the coordinate $\\eta $ , these black holes have an extremely small value of the position of the event horizon, of the order of $10^{-5}$ , much smaller than the position of the inverse of the event horizon of the Schwarzschild black holes.", "However, the physical radius of these particular types of black holes is very large, ranging from $r_{hor}\\approx 10^4$ ($\\eta _{hor}=9.96\\times 10^{-5}$ ), to $r_{hor}\\approx 3.69\\times 10^4$ ($\\eta _{hor}=2.71\\times 10^{-5}$ ).", "Such Weyl type structures may be considered as possible alternatives to the super massive black holes of standard general relativity.", "Table: Variation of the position of the event horizon η ℎ𝑜𝑟 \\eta _{\\textit {hor}} of the Weyl geometric black hole with radial component of the Weyl vector only for ζ(0)=1×10 -30 \\zeta (0)=1\\times 10^{-30}, m(0)=0.4m(0)=0.4 and different initial values of ψ\\psi .Even that most of our investigations have been performed by using a numerical approach, an exact solution of the field equations has also been obtained by using analytical methods.", "In the case of a Weyl vector having only a radial component, if the metric functions satisfy the condition $\\nu +\\lambda =0$ , an exact solution of the Weyl geometric gravity field equations can be obtained, with the metric given by a function of the form $e^{\\nu }=e^{-\\lambda }=A-B/r-Cr+Dr^2$ , with $A$ , $B$ , $C$ , and $D$ constants that do not depend on the initial conditions at infinity.", "Thus, the metric tensor components contain, except the Schwarzschild term $B/r$ , two new terms proportional to $r$ and $r^2$ , respectively.", "We could not find similar simple analytic solutions for the cases of the presence of the temporal component of the Weyl only, or for the general case corresponding to the two components Weyl vector.", "The analytical solutions are extremely useful in the study of the physical properties of the black holes, and in particular for the study of the dynamics and motion of matter particles around them.", "They can also be applied for the investigation of the electromagnetic properties of thin accretion disks that may be present around black holes.", "The obtained exact solution could also help in discriminating quadratic Weyl geometric black holes from standard general relativistic black holes, and for obtaining observational constraints on the coupling constants, and on the Weyl vector components.", "We have also investigated in some detail the thermodynamic properties of the numerical black hole solutions of the quadratic Weyl geometric gravity.", "One of the extremely interesting physical property of the black holes is their Hawking temperature, an essential parameter that has important theoretical implications.", "As compared to the Hawking temperature of the standard general relativistic Schwarzschild black holes, the horizon temperature of the quadratic Weyl geometric gravity black holes has a strong dependence on the initial conditions at infinity of the scalar and of the Weyl vector fields.", "As one can observe from Fig.", "REF , the decrease in the physical horizon radius $r_{hor}$ (corresponding to an increase of the coordinate $\\eta $ ), leads to an increase in the black hole temperature, a property that is specific to Weyl geometric black holes.", "In the case of the specific initial conditions for the scalar 
and Weyl vector fields considered in Fig.", "REF , the increase in the temperature is several times higher, as compared to the Schwarzschild general relativistic case.", "A similar behavior does appear for the specific heat, entropy, and evaporation time, respectively, of quadratic Weyl geometric black holes, with all these physical quantities strongly dependent on the initial conditions of the scalar and of the Weyl vector field at infinity.", "The specific heat and the entropy of the Weyl black holes decrease with decreasing $r_{hor}$ (increasing $\\eta _{hor}$ ).", "The black hole evaporation times may be very different for Weyl type black holes, as compared to the Schwarzschild black holes, decreasing with increasing $r_{hor}$ .", "However, we would like to point out that our results on the thermodynamics of Weyl gravity black holes are obtained for a limited range of initial conditions at infinity for the scalar field and for the Weyl vector, and therefore they may be considered as having a qualitative interpretation only.", "But, even within this limited level of numerical investigation, they provide an indicator of the complex physical behavior of the Weyl black holes, and of the interesting and novel properties related to them.", "The no-hair theorem is an important result in black hole physics [125], [126], [127], [128].", "For short, this theorem asserts that asymptotically flat black holes cannot have external nontrivial scalar fields possessing a non-negative field potential $V (\\phi )$ .", "It would be interesting to consider the no-hair theorem for Weyl geometric black holes.", "The preliminary, and mostly numerical results obtained in the present work seem to point towards the conclusion that the no-hair theorem in its standard formulation may not be valid for quadratic Weyl black holes.", "All the black hole solutions we have considered have an asymptotically flat geometry, and scalar and vector fields do exist around them.", "However, the question if these results follow from the particular choice of the model parameters (coupling constant and initial conditions at infinity), or they are essential properties of the theory deserves further studies, and investigations.", "Quadratic Weyl gravity black holes have more variability as associated to their basic properties, leading to a more complicated external dynamics, as compared with the Schwarzschild black holes of general relativity.", "These richer properties do follow from the presence of the scalar and vector degrees of freedom, resulting in very complicated and strongly nonlinear field equations.", "The effects associated with the scalar field and Weyl vector degrees of freedom could also lead to some specific astrophysical imprints and signatures, whose observational detection could open some new perspectives on the possibilities of testing Weyl geometry, and its associated gravitational theory on cosmic scales.", "The possible observational/astrophysical implications of the existence of quadratic Weyl geometric black holes will be considered in a future investigation." ], [ "Acknowledgments", "We would like to thank the anonymous referee for comments and suggestions that helped us to significantly improve our work.", "TH is supported by a grant of the Romanian Ministry of Education and Research, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2020-2255 (PNCDI III)." ] ]
2212.05542
[ [ "Gravitational waves from the early universe" ], [ "Abstract These lecture notes are based on the course \"Gravitational waves from the early universe\" given at the 27th W.E.", "Heraeus \"Saalburg\" Summer School 2021 by Valerie Domcke.", "Ongoing and future collaborations will probe different frequency ranges of the gravitational wave spectrum, allowing for probing different stages of the early universe and Beyond Standard Model physics.", "Due to the very high energies involved, accelerators cannot probe them.", "Therefore, current knowledge about new physics is limited and relies on bounds from CMB observations and theoretical assumptions about these energy scales.", "While some models are in tension with CMB data, others are unconstrained in shorter wavelength scales.", "Nonetheless, each one of these models has a gravitational wave density spectrum that can be compared to data.", "These lecture notes review the formalism of gravitational waves in General Relativity and introduce stochastic gravitational waves, primordial sources, and detection efforts." ], [ "Abstract", "These lecture notes are based on the course \"Gravitational waves from the early universe\" given at the 27th W.E.", "Heraeus \"Saalburg\" Summer School 2021 by Valerie Domcke.", "Ongoing and future collaborations will probe different frequency ranges of the gravitational wave spectrum, allowing for probing different stages of the early universe and Beyond Standard Model physics.", "Due to the very high energies involved, accelerators cannot probe them.", "Therefore, current knowledge about new physics is limited and relies on bounds from CMB observations and theoretical assumptions about these energy scales.", "While some models are in tension with CMB data, others are unconstrained in shorter wavelength scales.", "Nonetheless, each one of these models has a gravitational wave density spectrum that can be compared to data.", "These lecture notes review the formalism of gravitational waves in General Relativity and introduce stochastic gravitational waves, primordial sources, and detection efforts." ], [ "Motivation", "Look far away into the deep abyss of space and see how the first galaxy formed a long time ago.", "Look further and see the first stars.", "We can look all the way back to a time when the universe had a temperature of approximately $T \\approx eV$ , and free electrons for the first time combined with protons to form hydrogen.", "An event known as recombinationThe term recombination mainly confuses students about the number of times the event of protons and electrons combining has occurred.", "This event has, as a matter of fact, occurred only once.. After recombination, photons could travel freely through space.", "Since then, they have been propagating in the universe, occasionally reaching our detectors.", "The moment when the first photons freely traveled through the universe is known as photon decoupling.", "These first photons are still visible today as the Cosmic Microwave Background (CMB), a background noise from all directions, easily confused with pigeon poop on the antennaWhen Robert W. Wilson and Arno A. Penzias first heard the radio signal, they initially thought it might have been caused by the poop of the pigeons surrounding the radio antenna.", "Not until they cleaned the antenna of the pigeon poop they realized what they had discovered.. 
Before photon decoupling, light could not travel freely through the hot proton-electron plasma making up the young universe.", "The photons scattered continuously off the electrons and protons in the hot plasma, making the universe opaque.", "This shrouds everything that happened before photon decoupling in darkness and makes it complicated for physicists to observe what happened at the beginning and in the first years of our universe.", "However, not all hope is lost with the discovery of gravitational waves.", "Early universe phenomena could have created gravitational waves.", "They may have been produced as early as cosmic inflation, creating a background of gravitational waves similar to the CMB, known as the gravitational wave background or stochastic gravitational wave background (SGWB).", "Unlike photons before decoupling, gravitational waves traveled through the early universe largely unperturbed.", "Thus, no fundamental obstacle prevents us from observing these early gravitational waves and discovering information about the earlier stages of our universe.", "Although there are no fundamental obstacles, there are plenty of experimental challenges.", "Similar to the CMB, the gravitational wave background is expected to be like noise from all directions.", "However, the gravitational background noise is far too weak for all our current detectors to measure, and we need to distinguish the gravitational wave background noise from other noise sources.", "Nonetheless, different collaborations expect to detect the gravitational wave background in the coming yearsSome of these collaborations are already active.", "They rely on ground-based detectors (LIGO, Virgo, KAGRA) or pulsar time arrays collaborations (NANOGrav, EPTA, PPTA, IPTA).", "Future collaborations include a space-based detector (LISA) and ground-based detectors (Cosmic Explorer and Einstein telescope)., and a lot can be learned from early universe gravitational waves.", "Take, for example, the gravitational wave frequency spectrum.", "Gravitational waves can come in different frequencies, and each collaboration probes a different spectrum range.", "Analogous to an orchestra where different instruments are combined, it is possible to combine data from different collaborations in a gravitational wave orchestra to get information about different stages of the universeFollowing the analogy with the music world, in a string quintet we can go from the double bass (low) to the cello, and then to the viola and the violins (high).", "Likewise, in the gravitational wave orchestra of the early universe, we can go from matter domination to radiation domination era, then reheating and inflation..", "Finally, different early universe sources can produce gravitational waves.", "Inflation and beyond Standard Model (BSM) physics phenomena, such as cosmic strings and first-order phase transitions, are examples.", "Remarkably, BSM depends on energy scales that are far beyond what accelerators on Earth can probe.", "Therefore, the gravitational wave background also serves as a laboratory to probe new physics!", "All this information in the gravitational wave background further completes our knowledge of our cosmic history.", "The study of gravitational waves from the early universe is part of the answer to a question as old as mankind: Where do we come from?", "The lectures are organized in the following sequence.", "In Sec.", ", we obtain gravitational waves as vacuum solutions of the linearized Einstein equations and study the effects of 
gravitational waves on test masses.", "Next, in Sec.", ", we study sourced emission of gravitational waves and their energy-momentum tensor and derive Einstein's quadrupole formula for the power emitted by a source.", "Then, in Sec.", ", we focus on the background of stochastic gravitational waves, derive the main properties, and describe detection efforts.", "In Sec.", ", we start discussing cosmological sources of gravitational waves (cosmic gravitational microwave background, first-order phase transitions, and cosmic strings) and how data can be used to constrain beyond Standard Model physics.", "Finally, in Sec.", ", we finish the discussion on cosmological sources with single-field slow-roll inflation and axion inflation." ], [ "Lecture: Linearized Einstein equations", "In this first lecture, we start by evaluating linearized general relativity, which describes the dynamics of a slightly perturbed gravitational field.", "After all, we can think about gravitational waves as small ripples in flat spacetime.", "Hence, we consider a metric tensor decomposed into the Minkowski metric and a small perturbation, $g_{\\mu \\nu }= \\eta _{\\mu \\nu }+ h_{\\mu \\nu }(x), \\quad \\text{with} \\;|h_{\\mu \\nu }| <<1,$ where higher order in $h$ can be omitted due to the smallness of $h$ .", "Furthermore, we use the $(-,+,+,+)$ sign notation for $\\eta _{\\mu \\nu }$ and the indices are raised with $\\eta _{\\mu \\nu }$ , i.e., $g^{{\\mu \\nu }} = \\eta ^{{\\mu \\nu }} - h^{{\\mu \\nu }}.$ Afterward, we will look into the number of degrees of freedom the metric perturbation contains and discuss the most used gauge for fixing the unphysical degrees of freedom.", "Lastly, we will solve the Einstein equation for test masses far from the source of gravitational waves.", "All the material in the first two lectures is based on the book “Gravitational Waves: Volume 1: Theory and Experiments\" by Michel Maggiore [1] and “Spacetime and Geometry.", "An introduction to general relativity\" by Sean Carroll [2].", "We recommend these references for an elaborate and detailed explanation of linearized general relativity and gravitational waves." 
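, "Before moving on, a minimal symbolic check of this statement may be useful (our own illustration, written with SymPy and not part of the original notes): to first order in $h$ , the matrix $\\eta ^{{\\mu \\nu }}-h^{{\\mu \\nu }}$ indeed inverts $g_{\\mu \\nu }=\\eta _{\\mu \\nu }+h_{\\mu \\nu }$ .
```python
import sympy as sp

# Minkowski metric with signature (-,+,+,+)
eta = sp.diag(-1, 1, 1, 1)

# a generic symmetric perturbation h_{mu nu}
h = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'h{min(i, j)}{max(i, j)}'))

g_lower = eta + h            # g_{mu nu} = eta_{mu nu} + h_{mu nu}
h_upper = eta * h * eta      # h^{mu nu} = eta^{mu a} eta^{nu b} h_{ab}
g_upper = eta - h_upper      # candidate inverse, g^{mu nu} = eta^{mu nu} - h^{mu nu}

# g_{mu nu} g^{nu rho} should equal delta_mu^rho up to O(h^2)
residual = (g_lower * g_upper - sp.eye(4)).applyfunc(sp.expand)

h_symbols = list(h.free_symbols)
for entry in residual:
    if entry != 0:
        # every surviving monomial must be at least quadratic in the h's
        poly = sp.Poly(entry, *h_symbols)
        assert min(sum(m) for m in poly.monoms()) >= 2

print('g^{mu nu} = eta^{mu nu} - h^{mu nu} holds up to O(h^2)')
```
"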
], [ "The linearized Einstein equations", "The familiar Einsteins equations are given by, $G_{\\mu \\nu } \\equiv R_{\\mu \\nu }- \\frac{1}{2}g_{\\mu \\nu }R = \\frac{8 \\pi G}{c^4}T_{\\mu \\nu },$ which relates the spacetime geometry, encoded in the metric $g_{{\\mu \\nu }}$ , to matter described by the energy-momentum tensor $T_{{\\mu \\nu }}$ .", "The Ricci tensor $R_{{\\mu \\nu }}$ and Ricci scalar $R$ for the linearized theory are computed following the usual scheme, starting from the Christoffel symbol.", "One can easily check that the linearized Christoffel symbol is given by $\\Gamma _{\\mu \\nu }^\\rho =& \\frac{1}{2}g^{\\rho \\sigma } [\\partial _\\mu g_{\\nu \\sigma }+\\partial _\\nu g_{\\mu \\sigma }-\\partial _\\sigma g_{\\mu \\nu }]\\nonumber \\\\=& \\frac{1}{2}\\eta ^{\\rho \\sigma } [ \\partial _\\mu h_{\\nu \\sigma } + \\partial _\\nu h_{\\mu \\sigma } -\\partial _\\sigma h_{\\mu \\nu }] + {h^2},$ which leads to the following Riemann curvature tensor, $R^{\\mu }_{\\nu \\sigma \\rho } =& \\partial _{\\sigma } \\Gamma ^{\\mu }_{\\nu \\rho } - \\partial _{\\rho } \\Gamma ^{\\mu }_{\\nu \\sigma } + \\Gamma ^{\\mu }_{\\sigma \\lambda }\\Gamma ^{\\lambda }_{\\nu \\rho } - \\Gamma ^{\\mu }_{\\rho \\lambda }\\Gamma ^{\\lambda }_{\\nu \\sigma }\\nonumber \\\\=& \\frac{1}{2}[\\eta ^{\\mu \\lambda }(\\partial _{\\sigma }\\partial _\\nu h_{\\rho \\lambda } - \\partial _{\\sigma }\\partial _\\rho h_{\\nu \\lambda } - \\partial _{\\sigma }\\partial _\\lambda h_{\\nu \\rho } - (\\sigma \\leftrightarrow \\rho )] +{h^2}.$ Note that the $\\Gamma ^2$ terms are higher-order terms in $h$ and will not contribute to the first-order Einstein equations.", "With a bit of algebra, one can then find the Ricci tensor, $R_{\\mu \\nu }= R^{\\rho }_{\\mu \\rho \\nu } = \\frac{1}{2}(\\partial _{\\rho }\\partial _{\\mu }h_{\\nu }^{\\rho }+\\partial _{\\rho }\\partial _{\\nu }h_{\\mu }^{\\rho }-\\partial _{\\mu }\\partial _{\\nu }h-\\Box h_{{\\mu \\nu }})+{h^2},$ and the Ricci scalar $R=g^{{\\mu \\nu }} R_{\\mu \\nu }= \\partial _{\\mu }\\partial _{\\nu } h^{{\\mu \\nu }}-\\Box h +{h^2},$ with $h=h^{\\mu }_{\\;\\mu }$ the trace and $\\Box =\\partial _{\\mu }\\partial ^{\\mu }$ .", "Combining all the results gives us the linearized Einstein tensor $G_{\\mu \\nu }= \\frac{-1}{2}[\\Box h_{\\mu \\nu }+ \\eta _{\\mu \\nu }\\partial ^\\rho \\partial ^\\sigma h_{\\rho \\sigma }- \\eta _{\\mu \\nu }\\Box h - \\partial ^\\rho \\partial _\\nu h_{\\mu \\rho } - \\partial _\\rho \\partial _\\mu h_{\\nu }^{\\rho }+ \\partial _\\nu \\partial _\\mu h] +{h^2}.$ This is a rather lengthy equation; hence it is usually preferred to define the trace reversed quantity $\\bar{h}_{\\mu \\nu }\\equiv h_{\\mu \\nu }- \\frac{1}{2}\\eta _{\\mu \\nu }h$ .", "This simplifies the equation a little to $G_{\\mu \\nu }= \\frac{-1}{2}[\\Box \\bar{h}_{\\mu \\nu }+ \\eta _{\\mu \\nu }\\partial ^\\rho \\partial ^\\sigma \\bar{h}_{\\rho \\sigma } - \\partial ^\\rho \\partial _\\nu \\bar{h}_{\\mu \\rho }- \\partial ^\\rho \\partial _\\mu \\bar{h}_{\\nu \\rho }] +{h^2}.$" ], [ "Scalar, vector, tensor (SVT) decomposition", "General relativity is invariant under all coordinate transformations.", "The linearized theory, however, is only invariant under infinitesimal coordinate transformations and finite, global Pointcaré transformations.", "In order to study particular physical quantities, it is useful to fix a gauge to eliminate the redundancies due to symmetry under these coordinate transformations.", "It is convenient to choose a fixed inertial 
coordinate system on the Minkowski background because the Minkowski background has a lot of rotational symmetries [2].", "Hence, this allows us to decompose the perturbation based on their transformation under spatial rotations on a hypersurface.", "Under these spatial rotations, the metric perturbation can be decomposed into scalars, vectors, and tensors, which transform independently from each other.", "This allows us to write the Einstein equations for the linearized theory as a set of uncoupled ordinary differential equations.", "On a side note, in the field of cosmology, the SVT decomposition is not uncommonly described in Fourier spaceNote that the wavevector defines a direction of propagation which breaks the spatial $SO(3)$ symmetry of the perturbation tensor in a spatial $SO(2)$ symmetry..", "The different Fourier modes in our linear theory are independent of each other.", "This is due to the translation invariance of the linear equation of motion of the perturbation [3].", "Hence, they can be studied independently.", "Consider the rotations of the coordinate system around a single wavevector $\\vec{k}$ by an angle $\\psi $ .", "A perturbation with a helicity $m$ has its amplitude multiplied by $e^{im\\psi }$ .", "The perturbations are then classified according to their helicity $m$ .", "The scalar, vector, and tensor are defined by $m=0,\\pm 1,$ and $\\pm 2$ , respectively.", "Let us have a closer look at the SVT decomposition.", "The metric perturbation is a $(0,2)$ tensor with a spatial $SO(3)$ symmetry.", "Under these rotations, the $h_{00}$ component is a scalar, $h_{0i}$ is a three-vector, and $h_{ij}$ is a spatial rank 2 symmetric tensor [2].", "This tensor can further be decomposed into a trace and a trace-free part.", "In group theory language, these are the irreducible representations of the spatial rotation group.", "For the metric $g_{{\\mu \\nu }}$ , decomposed as $g_{\\mu \\nu }=\\left( \\begin{array}{c|c}g_{00} & g_{0i} \\\\g_{i0} & g_{ij} \\\\\\end{array}\\right),$ the irreducible parts with respect to spatial rotations are then given by $g_{00} = & -(1+2 \\Phi ), \\nonumber \\\\g_{i0} = & g_{0i} = 2a(\\partial _i B-S_i),\\\\g_{ij} = & a^2[(1-2\\Psi )\\delta _{ij} + 2 \\partial _{ij} F + (\\partial _i T_j + \\partial _j T_i) + t_{ij}].\\nonumber $ Here, $g_{00}=-1$ , and $g_{ij}=a^2 \\delta _{ij}$ are components of the background metric.", "The remaining terms are part of the perturbation $h_{{\\mu \\nu }}$ consisting of 4 scalars ($\\Phi , B, \\Psi ,F)$ , 2 vectors ($S_i, T_i$ ), and 1 tensor $(t_{ij})$ , with $\\partial _i T^i = 0, \\quad \\partial _i S^i = 0, \\quad t^i_{\\,i}=0, \\quad \\text{and} \\quad \\partial _i t^i_{j}=0.$ We find 10 independent functions in the decomposed metric.", "Namely, 4 of the scalars, 4 vector components, and 2 tensor components of the $3 \\times 3$ symmetric tensor $t_{ij}$ ." 
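, "This counting can be checked explicitly on a single Fourier mode (a small numerical sketch of our own, assuming $\\hat{k}=\\hat{z}$ so that the pieces below correspond to the helicity-0, helicity-1 and helicity-2 parts of the spatial perturbation; the helper names are ours): a random symmetric $3\\times 3$ tensor splits into two scalar, two vector and two tensor components that add back up to the original.
```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 3))
S = 0.5 * (S + S.T)                   # a generic symmetric spatial perturbation (one Fourier mode)

n = np.array([0.0, 0.0, 1.0])         # unit wavevector k-hat
P = np.eye(3) - np.outer(n, n)        # transverse projector P_ij = delta_ij - n_i n_j

# scalar (helicity-0) pieces: longitudinal-longitudinal part and transverse trace, 1 + 1 d.o.f.
A = n @ S @ n
B = np.trace(P @ S @ P)
S_scalar = A * np.outer(n, n) + 0.5 * B * P

# vector (helicity-1) piece: one transverse vector (V.n = 0), 2 d.o.f.
V = P @ S @ n
S_vector = np.outer(n, V) + np.outer(V, n)

# tensor (helicity-2) piece: transverse-traceless part, 2 d.o.f.
Lam = np.einsum('ik,jl->ijkl', P, P) - 0.5 * np.einsum('ij,kl->ijkl', P, P)
S_tensor = np.einsum('ijkl,kl->ij', Lam, S)

assert np.allclose(S_scalar + S_vector + S_tensor, S)     # the decomposition is complete
assert np.allclose(n @ S_tensor, 0) and np.isclose(np.trace(S_tensor), 0)
print('2 scalar + 2 vector + 2 tensor components reproduce the 6 d.o.f. of S_ij')
```
"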
], [ "Gauge invariance and gauge fixing", "Now, before we can start solving the linearized Einstein equations, we need to address the ambiguous definition of the perturbation $h_{{\\mu \\nu }}$ .", "This perturbation may have different forms depending on the choice of coordinate system.", "Hence, the metric is defined up to a gauge transformation.", "Indeed, if we consider an infinitesimal gauge transformation $x^{\\mu } = x^{\\mu } + \\xi ^{\\mu }$ , the perturbation transforms as $h_{{\\mu \\nu }}(x) \\rightarrow h^{\\prime }_{{\\mu \\nu }}(x^{\\prime }) - \\partial _{\\mu } \\xi _{\\nu } - \\partial _{\\mu } \\xi _{\\nu },$ where $\\xi $ is small such that the conditions for a linearized theory, $|h_{{\\mu \\nu }}| << 1$ , is preserved.", "In general, the vector $\\xi $ can be written in terms of two scalars and one vector, $\\xi ^{\\mu } = (\\xi ^0, \\partial _i f+f_i)$ , with $\\partial _i f^i=0$ [1].", "Consequently, two scalars and one vector can be gauged away by fixing $\\xi $ .", "This reduces the scalar with 2 degrees of freedom, and the two degrees of freedom that were left from the vector, are gauged away.", "This can easily be seen by checking that the scalar, vector, and tensor parts transform as $\\Phi &\\rightarrow \\Phi + \\partial _0 \\xi _0 \\nonumber \\\\B &\\rightarrow B-\\xi _0-\\partial _0 f \\nonumber \\\\\\Psi &\\rightarrow \\Psi + \\frac{1}{3} \\nabla ^2 f \\nonumber \\\\F &\\rightarrow F-2f\\\\S_i &\\rightarrow S_i-\\partial _0 f_i \\nonumber \\\\T_i &\\rightarrow T_i - f_i \\nonumber \\\\t_{ij} &\\rightarrow t_{ij} \\nonumber ,$ which also shows a gauge invariant tensor.", "We are left with 6 degrees of freedom.", "These 6 degrees of freedom do not all describe gravitational waves.", "Further gauge fixing of the scalars and vectors is needed.", "A convenient choice of gauge, which is commonly used to fix $\\xi $ , is the Lorentz gauge, $\\partial _{\\mu } \\bar{h}^{{\\mu \\nu }} = 0$ [1].", "Note that we are using the trace reversed metric here.", "The Lorentz gauge is always applicable, as we will show.", "Assume an arbitrary perturbation for which $\\partial ^{\\mu }\\bar{h}_{{\\mu \\nu }} \\ne 0$ .", "Then under an infinitesimal coordinate transformation, this transforms as $\\partial ^{^{\\prime }\\mu }\\bar{h}^{\\prime }_{{\\mu \\nu }}(x^{\\prime })=\\partial ^{\\mu }\\bar{h}_{{\\mu \\nu }}(x)-\\Box \\xi _{\\nu }.$ By simply choosing $\\Box \\xi _{\\nu }=\\partial ^{\\mu }\\bar{h}_{{\\mu \\nu }}$ the term on the left side will become zero, i.e., $\\partial ^{^{\\prime }\\mu }\\bar{h}^{\\prime }_{{\\mu \\nu }}(x^{\\prime })=0$ .", "A solution can always be found since the d'Alembertian operator is invertible.", "Hence, by choosing the appropriate $\\xi $ , the metric perturbation can always be written in the Lorentz gauge.", "Now that $\\partial ^{^{\\prime }\\mu }\\bar{h}^{\\prime }_{{\\mu \\nu }}(x^{\\prime })=0$ , we can of course continue making gauge transformations, $\\partial ^{^{\\prime \\prime }\\mu }\\bar{h}^{\\prime \\prime }_{{\\mu \\nu }}(x^{\\prime \\prime })&=\\partial ^{^{\\prime }\\mu }\\bar{h}^{\\prime }_{{\\mu \\nu }}(x^{\\prime })-\\Box \\xi _{\\nu }\\\\&=\\partial ^{\\mu }\\bar{h}_{{\\mu \\nu }}(x)-2 \\Box \\xi _{\\nu }\\\\&= -\\partial ^{\\mu }\\bar{h}_{{\\mu \\nu }}(x).$ As one can see, to remain in the Lorentz gauge after this gauge transformation, the vector $\\xi $ needs to satisfy $\\Box \\xi =$ 0.", "Hence, the Lorentz gauge removes the 4 degrees of freedom discussed above; however, it leaves residual freedom for gauge 
transformations with $\\Box \\xi =0$ .", "This is further fixed by the commonly used transverse traceless (TT) gauge, which is only valid in vacuum: $h_{\\,i}^{i\\,TT}=0,\\quad h_{0\\mu }^{TT}=0.$ Note that in the TT gauge $\\bar{h}^{TT}_{{\\mu \\nu }}=h^{TT}_{{\\mu \\nu }}$ , and only the tensor $t_{ij}$ with 2 degrees of freedom is left.", "These are the 2 degrees of freedom that describe the two polarizations of a gravitational wave.", "To summarize, the transverse traceless gauge and the Lorentz gauge combined to give the following constraints: $h^{TT}_{0\\mu }=0, \\quad h^{i \\, TT}_{\\,i}=0, \\quad \\partial ^j h_{ij}=0.$ The linearized Einstein equations in terms of the trace-reversed metric now reduce to a simple expression: $\\Box h^{TT}_{{\\mu \\nu }} = \\frac{-16 \\pi G}{c^4} \\Lambda _{{\\mu \\nu },\\rho \\sigma } T_{\\rho \\sigma },$ where the lambda tensor $\\Lambda _{{\\mu \\nu },\\rho \\sigma }$ is the TT projector which shall be discussed further in the second lecture." ], [ "Vacuum solutions", "Let us think about a gravitational wave detector far away from any gravitational wave source.", "Hence, in a vacuum where $T_{{\\mu \\nu }}=0$ , such that $\\Box h_{{\\mu \\nu }}^{TT}=0.$ This has a plane wave solution, $h_{{\\mu \\nu }}^{TT}(x) = A_{{\\mu \\nu }}(k)\\sin {k^{\\alpha }x_{\\alpha }}.$ Due to the restrictions imposed by the TT gauge, we can conclude that the amplitude is traceless and purely spatial.", "What is left to ensure that we are in the tranverse traceless gauge, is to check if the perturbation is transverse.", "In other words, $\\partial ^{\\mu } h_{\\mu \\nu }^{TT}= k^{\\mu } A_{{\\mu \\nu }}(k) \\sin {k^{\\alpha }x_{\\alpha }}=0.$ This relation is true if the wavevector is orthogonal to the amplitude, $k^{\\mu }A_{{\\mu \\nu }}=0$ .", "For example [1], if the wave is propagating in the $\\hat{z}$ - direction, then $A_{z\\nu }=0$ .", "Considering that $A_{0\\nu } = A_i^{\\;i} = 0$ and $A_{{\\mu \\nu }}$ is also symmetric, we can generally write $A_{xx} = -A_{yy} &\\equiv h_+,\\nonumber \\\\A_{xy} = A_{yx} &\\equiv h_{\\times },$ and the rest is zero.", "Thus $A_{{\\mu \\nu }}(k)=\\begin{pmatrix}0 & 0 & 0 & 0\\\\0 & h_+ & h_{\\times } & 0\\\\0 & h_{\\times } & -h_+ & 0\\\\0 & 0 & 0 & 0\\\\\\end{pmatrix}.$ To check if this really is a solution let's plug it into the equation of motion $\\Box \\bar{h}_{{\\mu \\nu }}^{TT} = k^{\\alpha }k_{\\alpha } A_{{\\mu \\nu }}(k) \\sin {k^{\\alpha }x_{\\alpha }} = 0.$ Note that not all components of $A_{{\\mu \\nu }}$ are zero, which means that $k^{\\alpha }k_{\\alpha } =0\\; \\rightarrow \\;E^2-P^2=0.$ This is a rough proof that gravitational waves travel at the speed of light!" 
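, "As a quick symbolic cross-check of this vacuum solution (our own sketch with SymPy, taking propagation along $\\hat{z}$ and $c=1$ ), the wave equation is satisfied precisely because $k^{\\alpha }k_{\\alpha }=0$ , and the amplitude written above is transverse and traceless.
```python
import sympy as sp

t, x, y, z, w, hp, hx = sp.symbols('t x y z omega h_plus h_cross', real=True)

eta = sp.diag(-1, 1, 1, 1)             # signature (-,+,+,+), c = 1
coords = sp.Matrix([t, x, y, z])

k_up = sp.Matrix([w, 0, 0, w])         # k^mu for propagation along z with frequency omega
k_dn = eta * k_up                      # k_mu = eta_{mu nu} k^nu
phase = (k_dn.T * coords)[0]           # k_alpha x^alpha = -omega t + omega z

A = sp.Matrix([[0, 0, 0, 0],
               [0, hp, hx, 0],
               [0, hx, -hp, 0],
               [0, 0, 0, 0]])

h = A * sp.sin(phase)                  # h_{mu nu}^{TT}

# Box h_{mu nu} = eta^{ab} d_a d_b h_{mu nu}; it vanishes because k is null
box_h = sp.zeros(4, 4)
for a in range(4):
    box_h += eta[a, a] * h.diff(coords[a], 2)
assert box_h.applyfunc(sp.simplify) == sp.zeros(4, 4)

# TT conditions: transverse (k^mu A_{mu nu} = 0) and traceless (eta^{mu nu} A_{mu nu} = 0)
assert (k_up.T * A).applyfunc(sp.simplify) == sp.zeros(1, 4)
assert sp.simplify((eta * A).trace()) == 0

# and k is indeed null
assert sp.simplify((k_dn.T * k_up)[0]) == 0
print('plane wave with null k solves Box h^TT = 0 and satisfies the TT conditions')
```
"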
], [ "Effects of gravitational waves on test masses", "To examine the effect of gravitational waves on mass, consider two test masses on a geodesic trajectory parametrized by $x^{\\mu }(\\tau )$ and $x^{\\mu }(\\tau )+\\xi ^{\\mu }(\\tau )$ .", "The classical motion of a test particle at $x^{\\mu }$ is obtained by extremizing the action.", "This gives the geodesic equation, $\\frac{d^2 x^{\\mu }}{d\\tau ^2}+\\Gamma ^{\\mu }_{\\rho \\nu }(x) \\frac{dx^{\\nu }}{d\\tau }\\frac{dx^{\\rho }}{d\\tau }=0$ while $x^{\\mu }+\\xi ^{\\mu }$ satisfies $\\frac{d^2 (x^{\\mu }+\\xi ^{\\mu })}{d\\tau ^2}+\\Gamma ^{\\mu }_{\\rho \\nu }(x+\\xi ) \\frac{d(x^{\\nu }+\\xi ^{\\nu })}{d\\tau }\\frac{d(x^{\\rho }+\\xi ^{\\rho })}{d\\tau }=0.$ It is assumed that $\\xi $ is much smaller than the length scale of the gravitational waves, such that we can expand to the first order in $\\xi $ .", "Then, taking the difference between the two geodesic equations will give the geodesic deviation equation, $\\frac{d^2 \\xi ^{\\mu }}{d\\tau ^2}+2\\Gamma ^{\\mu }_{\\rho \\nu }(x) \\frac{dx^{\\nu }}{d\\tau }\\frac{\\xi ^{\\rho }}{d\\tau } +\\xi ^{\\sigma }\\partial _{\\sigma }\\Gamma ^{\\mu }_{\\nu \\rho }(x)\\frac{dx^{\\nu }}{d\\tau }\\frac{x^{\\rho }}{d\\tau }=0,$ describing the motion of the test particles relative to each other.", "This can be rewritten in terms of the Riemann tensor, $ \\frac{d^2 \\xi ^{\\mu }}{d\\tau ^2} + R^{\\mu }_{\\nu \\sigma \\rho } \\frac{d x^{\\rho }}{d\\tau } \\frac{d x^{\\nu }}{d\\tau } \\xi ^{\\sigma }.$ We choose coordinates such that the Christoffel symbol vanishes at the spacetime position of the first test point particle.", "Assuming a non-relativistic motion of the test particles, thus $\\frac{dx^i}{d\\tau } << \\frac{dx^0}{d\\tau }$ , and noticing from equation (REF ) that $R^i_{0j0} = \\frac{-1}{2c^2} \\ddot{h}_{j}^{i\\;TT}$ , the geodesic deviation equation reduced to $\\ddot{\\xi }^i = -c^2 R^i_{0j0} \\,\\xi ^j = \\frac{1}{2}\\ddot{h}_j^{i\\;TT} \\xi ^j.$ As an example [1], let us consider the + polarisation and study the motion of test particles in the $xy$ plane.", "In this case, $h_{ab}^{TT} = h_+ \\sin {\\omega t}\\begin{pmatrix}1 & 0\\\\0 & -1\\end{pmatrix}, \\quad a,b = \\lbrace x,y\\rbrace .$ The distance between the particles can generally be written as $\\xi _a(t) = (X_0+\\delta X(t), Y_0+\\delta Y(t)),$ where $(X_0, Y_0)$ are the unperturbed coordinates and $\\delta X(t), \\delta Y(t)$ are the displacements from the gravitational waves.", "Equation (REF ) results in $\\delta \\ddot{X}&=\\frac{-h_+}{2} (X_0+\\delta X) \\omega ^2 \\sin {\\omega t},\\\\\\delta \\ddot{Y}&=\\frac{h_+}{2} (Y_0+\\delta Y) \\omega ^2 \\sin {\\omega t},$ where the linear terms $\\delta X$ and $\\delta Y$ on the right-hand side can be neglected, since $\\delta X << X_0$ and $\\delta Y << Y_0$ .", "Integrating the equations and we get $\\delta X = \\frac{h_+}{2} X_0 \\sin {\\omega t}, \\quad \\delta Y = \\frac{-h_+}{2}Y_0 \\sin {\\omega t}.$ The result for a ring of test masses is shown in Fig.", "(REF ) Figure: A gravitational wave traveling in zz-direction with a ++ polarization will curve spacetime such that a ring of test masses (gray dots) is alternating between a vertical and horizon elliptical shape, creating a ++ sign.", "A ×\\times polarized gravitational wave creates ×\\times sign as shown in the bottom figure." 
], [ "Lecture: Emission of gravitational waves", "Having solved the linearized Einstein equation for the vacuum case in the last lecture, in the second lecture we focus on the Einstein equation with a source term, i.e., $\\Box \\bar{h}_{{\\mu \\nu }} = -\\frac{16 \\pi G}{c^4}T_{{\\mu \\nu }}.$ After having solved the above equation, we will discuss gravitational waves in a curved background and derive the energy-momentum tensor for gravitational waves.", "We close this lecture by deriving Einstein's quadrupole formula." ], [ "Gravitational waves emitted by source", "Equation (REF ) is solved by using the Green function for the d'Alembertian operator $\\Box $ , $\\Box _x G(x^{\\sigma }-y^{\\sigma }) = \\delta ^{(4)}(x^{\\sigma }-y^{\\sigma }),$ where $x^{\\sigma }$ and $y^{\\sigma }$ are depicted in Fig.", "REF .", "This is exactly how it is done in the analogous electromagnetic problem.", "The general solution is $\\bar{h}_{{\\mu \\nu }}(x^{\\sigma })= - \\frac{16 \\pi G}{c^4} \\int G(x^{\\sigma }-y^{\\sigma })\\; T_{{\\mu \\nu }}(y^0, \\vec{y}) \\;d^4y,$ with $G(x^{\\sigma }-y^{\\sigma }) = - \\frac{1}{4 \\pi |\\vec{x}-\\vec{y}|}\\delta [|\\vec{x}-\\vec{y}| - (x^0-y^0)] \\theta (x^0-y^0)$ , and the theta function equals one when $x^0>y^0$ [2].", "After integrating over $y^0$ we obtain $\\bar{h}_{{\\mu \\nu }}(t,\\vec{x})= \\frac{4 G}{c^4} \\int \\frac{1}{|\\vec{x}-\\vec{y}|} T_{{\\mu \\nu }}(t-|\\vec{x}-\\vec{y}| ) d^3y,$ where $t=x^0$ and $t-|\\vec{x}-\\vec{y}| =t_r$ is referred to as the retarded time.", "In the following, we will make the assumption that the source is far away and slowly moving.", "Hence the source is centered at a distance $\\vec{x}$ , and the edge of the source is at a distance $\\vec{r}=\\vec{x}-\\vec{y}$ , as is shown in Fig.", "REF .", "Figure: An observer observes the center of a gravitational wave source at a distance x →\\vec{x}.", "The outer edge of the source is at a distance y →\\vec{y} from the center and thus observed at a distance r →=x →-y →\\vec{r}=\\vec{x}-\\vec{y}.", "The source is a large distance from the observer, hence r →≈x →\\vec{r} \\approx \\vec{x}.In terms of $r$ , the gravitational wave takes the form $\\bar{h}_{{\\mu \\nu }}(t,\\vec{x})=\\frac{4G}{r c^4} \\int d^3y\\, T_{{\\mu \\nu }}(t-\\frac{r}{c}, y).$ As we have seen before, in the vacuum case, the temporal components are set to zero, and thus we are only interested in the spatial components.", "This is given by $\\int d^3y \\,T_{ij} = \\frac{1}{2} \\partial ^2_0 \\int d^3y \\,y_i\\, y_j\\, T_{00}(y).$ To prove this relation, note that the energy/momentum conservation implies $\\partial _{\\mu } T^{{\\mu \\nu }}=0$ , and thus we can derive $\\partial _{\\mu } T^{0\\mu } &= \\partial _0 T^{00} + \\partial _k T^{0k} =0 \\nonumber \\\\\\partial _0^2T^{00} &= -\\partial _k \\partial _0 T^{0k} = \\partial _k \\partial _l T^{lk}\\\\y_i y_j \\partial _0^2T^{00} &= y_i y_j \\partial _k \\partial _l T^{lk} = 2 T^{ij} \\nonumber $ In the second line, energy/momentum conservation was used again and the last step is obtained by partial integration and $\\partial _k y_i=\\delta _{ki}$ .", "Thus $\\bar{h}_{ij}(t,\\vec{x}) = \\frac{2 G}{c^4} \\frac{1}{r} \\partial _0^2 \\int d^3 y \\; y_i y_j T_{00}(t-\\frac{r}{c}, y),$ where it is conventional to define the integral as the tensor moment of the source $I_{ij}(t-\\frac{r}{c})$ .", "The resulting formula $\\bar{h}_{ij}(t,\\vec{x}) = \\frac{2 G}{c^4} \\frac{1}{r} \\partial _0^2 I_{ij}(t-\\frac{r}{c})$ is known as the quadrupole formula.", "The transverse 
traceless gauge for gravitational waves outside the sources and propagating in the $\\hat{n}$ direction is found by projecting the solution onto the TT gauge [1].", "For this, we introduce a transverse projector $P_{ij} = \\delta _{ij} - n_i n_j$ , with $\\hat{n}=\\frac{\\vec{k}}{k}$ , which is used to construct the Lambda tensor $\\Lambda _{ij,kl} \\equiv P_{ik} P_{jl} - \\frac{1}{2} P_{ij} P_{kl}$ .", "The Lambda tensor $\\Lambda _{ij,kl}$ is transverse in all indices, i.e., $n^i \\Lambda _{ij,kl} = 0$ , and it projects out the trace, $\\Lambda _{ii,kl} = \\Lambda _{ij,kk} = 0.$", "By projecting the metric perturbation, we obtain the transverse traceless version $h_{ij}^{TT} = \\Lambda _{ij,kl}\\; h_{kl}.$", "As an example, for a wave in the $\\hat{z}$ direction, the projector will have the form $P = \\begin{pmatrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 0\\end{pmatrix},$ and the projection of an arbitrary symmetric $3\\times 3$ matrix $A_{kl}$ takes the form $\\Lambda _{ij,kl} A_{kl} =\\begin{pmatrix}\\frac{1}{2} (A_{11}-A_{22}) & A_{12} & 0\\\\A_{21} & -\\frac{1}{2}(A_{11}-A_{22}) & 0\\\\0 & 0 & 0\\end{pmatrix},$ where the diagonal combination $\\frac{1}{2}(A_{11}-A_{22})$ plays the role of $h_+$ and the off-diagonal entry $A_{12}=A_{21}$ plays the role of $h_{\\times }$ .", "To summarize, in the TT gauge, the metric perturbation is given by the quadrupole formula which takes the form $h_{ij}^{TT} (t,\\vec{x}) = \\frac{2G}{c^4} \\frac{1}{r} \\Lambda _{ij,kl}\\, \\ddot{\\bar{I}}_{kl}(t-\\frac{r}{c}),$ where the quadrupole moment is defined as $\\bar{I}_{kl} \\equiv \\int d^3 y \\,(y_k y_l - \\frac{1}{3} y^2 \\delta _{kl})\\, T_{00} = I_{kl} - \\frac{1}{3} I^m_{\\;m} \\delta _{kl}.$", "This is the trace-free version of $I_{kl}$ .", "It is a bit redundant with the projector $\\Lambda $ , although often useful in practice!" ], [ "Energy momentum tensor of gravitational waves", "So far, we have considered the linearized Einstein equations as an expansion around the flat spacetime metric $\\eta _{{\\mu \\nu }}$ .", "The fluctuations around the static flat background are the gravitational waves.", "In a general dynamical curved spacetime with a metric $g_{{\\mu \\nu }}(x) = \\bar{g}_{{\\mu \\nu }}(x) + h_{{\\mu \\nu }}(x),$ the question arises whether the curvature is actually a gravitational wave or part of the background [1].", "In the latter case, the gravitational wave can locally be gauged away.", "How do we decide which part is the background and which part is a gravitational wave?", "A natural splitting arises when picking the right scale.", "Denoting the length scale of the background by $L_b$ and the wavelength of the gravitational wave by $\\lambda _{GW}$ (the typical length scale of a gravitational wave is the reduced wavelength $\\lambda _{GW} =\\frac{\\lambda }{2 \\pi }$ rather than $\\lambda $ ), we look for a scale in between.
A suitable length scale $d$ is large enough to observe $\\lambda _{GW}$ and small enough such that the background is approximately flat.", "This method of separating the metric into a smooth background and perturbations is called the short-wave expansion.", "Figure: A visualization of the short-wave expansion.", "The spacetime is separated into a background with length scale $L_b$ and a gravitational wave with wavelength $\\lambda _{GW}$ .", "This separation arises naturally when considering a length scale $d$ such that $\\lambda _{GW} << d << L_b$ .", "How does this perturbation propagate in the background spacetime and how does it affect the background metric [1]?", "To address these questions we expand the Einstein equations around a background metric.", "In this expansion there are typically two small parameters, the amplitude $h$ and $\\frac{\\lambda _{GW}}{L_b}$ (or $\\frac{f_B}{f}$ ).", "So let us expand $G_{{\\mu \\nu }}$ in powers of $h$ : $G_{{\\mu \\nu }} = G_{{\\mu \\nu }}^{(B)} +G_{{\\mu \\nu }}^{(1)} +G_{{\\mu \\nu }}^{(2)}+ \\dots $", "The term $G^{(B)}$ is related to the background and solely constructed from $\\bar{g}_{{\\mu \\nu }}$ .", "$G_{{\\mu \\nu }}^{(1)}$ is linear in $h_{{\\mu \\nu }}$ and contains only high-frequency modes, while $G_{{\\mu \\nu }}^{(2)}$ is quadratic and contains both high and low frequencies.", "For instance, consider a quadratic term $h_{{\\mu \\nu }}h_{\\rho \\sigma }$ , where $h_{{\\mu \\nu }}$ and $h_{\\rho \\sigma }$ contain a mode with wave-vector $\\vec{k}_1$ and $\\vec{k}_2$ , respectively, with $|\\vec{k}_1|, |\\vec{k}_2|>>\\frac{1}{d}$ .", "The high wave vectors can combine such that the sum becomes a low wave vector mode, $|\\vec{k}_1 +\\vec{k}_2| <<\\frac{1}{d}$ .", "In this manner, the Einstein equations can be split into equations for high frequencies and for low frequencies.", "We will focus on the small-$\\vec{k}$ part (where $k=\\frac{2 \\pi }{\\lambda }=\\frac{1}{\\lambda _{GW}}$ ) of Einstein's equation, $G_{{\\mu \\nu }}^B &= -[G_{{\\mu \\nu }}^{(2)}]^{small \\,\\vec{k}} + \\frac{8\\pi G}{c^4}[T_{{\\mu \\nu }}]^{small \\,\\vec{k}}\\nonumber \\\\&= - \\langle G_{{\\mu \\nu }}^{(2)}\\rangle _d + \\frac{8\\pi G}{c^4}\\langle T_{{\\mu \\nu }}\\rangle _d .$", "In the second line, we average over a spatial volume at a scale $d$ .", "This does not affect the modes with a wavelength of order $L_B$ , since these are more or less constant over a distance $d$ .", "On the other hand, the fast oscillating waves of order $\\lambda _{GW}$ will average to zero.", "The attentive observer will notice that the above technique is basically a renormalization group transformation.", "We take the fundamental equations of the theory and “integrate out\" the small (high energy) fluctuations, to obtain an effective theory that describes physics at the length scale $L_B$ .", "The result is the “coarse-grained\" Einstein equations.", "An explicit computation of $G_{{\\mu \\nu }}$ to 2nd order in the TT gauge will give $R_{{\\mu \\nu }}^{(2)} = \\dots = \\frac{1}{4} \\partial _{\\mu } h_{\\alpha \\beta } \\partial _{\\nu } h^{\\alpha \\beta } + 12\\, \\text{terms},$ $\\langle R_{{\\mu \\nu }}^{(2)}\\rangle _d = - \\frac{1}{4} \\langle \\partial _{\\mu } h_{\\alpha \\beta }^{TT} \\partial _{\\nu } h^{TT \\alpha \\beta }\\rangle _d , \\; \\langle R^{(2)}\\rangle _d =0, \\; \\langle R^{(1)}\\rangle _d =0.$", "The averaged 2nd order $G_{{\\mu \\nu }}$ is defined as the energy-momentum tensor of gravitational waves.", "Gravitational waves carry energy that curves the background, because of the way $t_{{\\mu \\nu }}$ enters in (REF ): $t_{{\\mu \\nu }} \\equiv -\\frac{c^4}{8 \\pi G}\\langle G_{{\\mu \\nu }}^{(2)}\\rangle _d = -\\frac{c^4}{8 \\pi G} \\langle R^{(2)}_{{\\mu \\nu }} - \\frac{1}{2}\\bar{g}_{{\\mu \\nu }} R^{(2)}\\rangle _d .$", "The explicit expression for $t_{{\\mu \\nu }}$ is found by substituting $R_{{\\mu \\nu }}^{(2)}$ into equation (REF ): $t_{{\\mu \\nu }} = \\frac{c^4}{32 \\pi G} \\langle \\partial _{\\mu } h_{\\alpha \\beta }^{TT} \\partial _{\\nu } h^{\\alpha \\beta TT}\\rangle .$", "Furthermore, the energy density of gravitational waves, defined as the 00-component of the energy-stress tensor, is found to be $\\rho _{GW}=t_{00} = \\frac{c^4}{32 \\pi G} \\langle \\dot{h}_{ij}^{TT} \\dot{h}^{ij TT}\\rangle .$" ], [ "Einstein's quadrupole formula", "Given the energy density of gravitational waves, the energy of the gravitational radiation in a volume $V$ is given by $E_{GW} = \\int _V d^3x \\, t^{00}.$", "Demanding conservation of the energy-momentum tensor, $\\partial _{\\mu } t^{{\\mu \\nu }}=0$ , implies that $\\int _V d^3x (\\partial _0 t^{00} + \\partial _i t^{i0})=0 $ and we can write $\\frac{d E_{GW}}{c \\;dt}=- \\int _V d^3x \\,\\partial _i t^{0i} = - \\int _S dA\\, n_i \\,t^{0i},$ where $n_i$ is the outer normal to the surface and $dA$ the surface element of the volume $V$ .", "Now, let $S$ be a spherical surface at a large distance $r$ from the source.", "For a spherical volume, the surface element is $dA = r^2 d\\Omega $ , and its normal is $\\hat{n}=\\hat{r}$ .", "Then $\\frac{d E_{GW}}{dt}= - \\frac{r^2}{c} \\int d \\Omega \\,t^{0r} = \\frac{r^2}{c} \\int d\\Omega \\, t^{00}.$", "Hence, we have $P_{GW}= \\frac{d E_{GW}}{dt} = \\frac{r^2 c^3}{32 \\pi G} \\int d\\Omega \\,\\langle \\dot{h}_{ij}^{TT} \\dot{h}_{ij}^{TT}\\rangle = \\frac{G}{8 \\pi c^5} \\int d\\Omega \\,\\Lambda _{ij,kl}(\\hat{n}) \\langle \\dddot{\\bar{I}}_{ij} \\dddot{\\bar{I}}_{kl}\\rangle ,$ which is known as Einstein's quadrupole formula: $P_{GW} = \\frac{G}{5 c^5} \\langle \\dddot{\\bar{I}}_{ij} \\dddot{\\bar{I}}_{ij}\\rangle ,$ describing the power emitted by a source with trace-free quadrupole moment $\\bar{I}_{ij}$ .", "This allows one, for example, to compute the gravitational waves emitted by a black hole binary, as illustrated in the short sketch below." ], [ "Lecture: The stochastic gravitational wave background", "In the last lectures, we generically described gravitational waves, identifying the two propagating degrees of freedom from general relativity.", "Now, we specialize our studies to the topic of stochastic gravitational waves, deriving the main properties and then describing current searches."
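, "As an illustration of the closing remark of the previous section (our own symbolic sketch, not worked out in the notes: a Newtonian binary on a circular orbit, described by its reduced mass $\\mu $ , orbital separation $R$ and orbital angular frequency $\\omega $ ), the quadrupole formula reproduces the textbook result $P_{GW}=\\frac{32}{5}\\frac{G}{c^5}\\mu ^2R^4\\omega ^6$ .
```python
import sympy as sp

t, G, c, mu, R, w = sp.symbols('t G c mu R omega', positive=True)

# relative orbit of a circular binary in the x-y plane (Newtonian, reduced-mass description)
pos = sp.Matrix([R * sp.cos(w * t), R * sp.sin(w * t), 0])

# trace-free quadrupole moment Ibar_kl = mu (x_k x_l - delta_kl r^2 / 3)
Ibar = mu * (pos * pos.T - sp.eye(3) * pos.dot(pos) / 3)

dddot_Ibar = Ibar.diff(t, 3)

# Einstein's quadrupole formula P = G/(5 c^5) <dddot(Ibar)_ij dddot(Ibar)_ij>;
# for a circular orbit the contraction turns out to be time independent, so no averaging is needed
contraction = sp.simplify(sum(dddot_Ibar[i, j] ** 2 for i in range(3) for j in range(3)))
P = sp.simplify(G / (5 * c ** 5) * contraction)

print(P)   # -> 32*G*R**4*mu**2*omega**6/(5*c**5)
assert sp.simplify(P - sp.Rational(32, 5) * G * mu**2 * R**4 * w**6 / c**5) == 0
```
"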
], [ "The gravitational wave background", "Stochastic gravitational wave backgrounds (SGWBs) are defined as the superposition of gravitational waves with different wave numbers k (both in magnitude and direction).", "They can have astrophysical or cosmological origins and typically are isotropic, unpolarized, and gaussianFrom astrophysical sources, this follows from the central limit theorem.", "For cosmological sources, this statement is model-dependent..", "It is then very similar to the cosmic microwave background (CMB) from the electromagnetic spectrum.", "SGWBs can allow us to reach stages where CMB cannot guide us since gravitational waves can travel freely through the hot plasma of the early universe, which is not transparent to photons.", "Figure: Here we show a schematic representation of the propagation and detection of SGWBs.", "The circle represents some cosmic event (gravitational wave source).", "The waves then propagate through the universe.", "Occasionally they find a detector.", "The signal from SGWBs act as additional \"noise\" in a gravitational wave detector.Possible early-universe sources emitted gravitational waves in the past.", "These signals are continuously reaching us, coming from all directions.", "The gravitational wave spectral shape is given by $\\Omega _{GW}=\\dfrac{1}{\\rho _c}\\dfrac{d \\rho _{GW}}{d \\ln k}.", "$ Signals from SGWBs are small and it is challenging to detect them because what arrives in the detector is similar to a noise source.", "Therefore, they can serve as a cosmological history book, which is tricky to decipher.", "Phenomena like inflation, primordial black holes, cosmic strings, and phase transitions are possible cosmic sources.", "These primordial sources are relevant for beyond-Standard-Model theories.", "These theories usually rely on energy scales far beyond what Earth-based experiments can achieve but were reached during the high-temperature stages of the early universeFor instance, current accelerators reach center-of-mass energies of the order $\\sim 10^{4}$ GeV, which is enough to probe the electroweak phase transition scale, $\\sim 100$ GeV, but it is far beyond grand-unification scales of order $\\sim 10^{16}$ GeV..", "So far, pulsar time array (PTA) collaborations - have claimed statistical evidence for excess noise that early-universe gravitational waves could potentially explain, see, for instance, the reviews [4], [5], [6].", "Beyond PTAs, gravitational-wave detectors will reach the frequency region required for SGWBs, with LISA [7] and the 3rd generation of ground-based detectors [8].", "Notice that SGWB signals are different than those LIGO-Virgo detected (transient signals) in the recent gravitational wave observation from binary mergers [9], [10]." 
], [ "gravitational waves in expanding FRW universe", "Setting $c=1$ , the metric of an expanding FRW universe is given by $ds^2=-dt^2+a(t)^2 dx^i dx_i =- a(\\tau )^2(d\\tau ^2-g_{ij}dx^i dx^j),$ where $\\tau $ is the conformal time and $a$ the scale factor.", "Expanding around a flat homogeneous cosmological background $ g_{ij}=\\delta _{ij}+h_{ij} $ , the linearized field equations are $\\square \\bar{h}_{ij}(\\textbf {x},\\tau )-\\dfrac{2a^{\\prime }}{a}\\bar{h}^{\\prime }_{ij}(\\textbf {x},\\tau )=-16\\pi G T_{ij},$ where derivatives with respect to conformal time are denoted by primes.", "Notice that the second term on the left-hand side vanishes for a static universe, and we recover the usual linearized Einstein gravitational wave equation (see the first lecture).", "By Fourier transforming and defining $\\tilde{h}_\\lambda \\equiv a h_\\lambda $ , where $\\lambda =+,\\times $ are the two polarization modes of the gravitational wave, we can rewrite the field equations as $\\tilde{h}_\\lambda ^{\\prime \\prime }(\\textbf {k},\\tau )+\\left(k^2-\\dfrac{a^{\\prime \\prime }}{a}\\right)\\tilde{h}_\\lambda (\\textbf {k},\\tau )=16\\pi G a T_\\lambda (\\textbf {k},\\tau ).", "$ We can approximate in the following two cases: $k^2>>(aH)^2 $ (sub-horizon) and $ k^2<<(aH)^2$ (super-horizon).", "In vacuum, for the sub-horizon case, (REF ) reduces to $\\tilde{h}_\\lambda ^{\\prime \\prime }(\\textbf {k},\\tau )+k^2\\tilde{h}_\\lambda (\\textbf {k},\\tau )\\approx 0,$ whose solution for $\\tilde{h}$ is oscillatory.", "Consequently, $h_\\lambda (\\textbf {k},\\tau )\\approx \\dfrac{A_\\lambda }{a}\\cos (k\\tau +\\varphi ), $ where $A_\\lambda =A_\\lambda (\\textbf {k})$ is a constant in time.", "Notice that wave amplitude decays as the universe expands!", "For the super-horizon case, we have instead $2 a^{\\prime }h^{\\prime }_\\lambda + a h^{\\prime \\prime }_\\lambda \\approx 0,$ whose solution is $h_\\lambda = A_\\lambda + B_\\lambda \\int _{0}^{\\tau }\\dfrac{d\\gamma }{a(\\gamma )^2} \\approx \\text{constant},$ where $A_\\lambda =A_\\lambda (\\textbf {k})$ and $B_\\lambda =B_\\lambda (\\textbf {k})$ are constant in time.", "Here we have used the fact that the integral decays in the expanding universe's history.", "Thus, gravitational waves are “frozen\" outside the Hubble horizon.", "This mechanism is the same one occurring in inflation.", "After re-entry in the horizon, tensor perturbations become sub-horizon modes again, as described by (REF ).", "Let us focus on these sub-horizon modes.", "A useful parametrization for the sub-horizon modes is [11] $h_{ij}^{TT}(\\textbf {x},\\tau )=\\sum _{\\lambda =+,\\times }\\int \\dfrac{d^3k}{(2\\pi )^3} \\; h_\\lambda (\\textbf {k})\\mathcal {T}_k(\\tau )e_{ij}^{\\lambda }(\\hat{k})e^{-i(k\\tau -\\textbf {k}\\cdot \\textbf {x})} .$ For this parametrization, we define an initial time $\\tau _{\\ast }$ as the time of formation or horizon entry, or for sub-horizon sources, as the time of gravitational wave emission, i.e.", "when the decaying behavior ($1/a$ ) starts; $h_\\lambda $ is the Fourier coefficient at time $\\tau =\\tau _{\\ast }$ ; $\\mathcal {T}{}_k(\\tau ) $ is the transfer function given here by the ratio $a(\\tau _{\\ast })/a(\\tau )$ (notice we have factored out $1/a$ ); and $ e_{ij}^{\\lambda } $ are the components of the polarization tensor that maps the Cartesian coordinates of the tensor $ h_{ij}^{TT}$ to its $+,\\times $ degrees of freedom, as defined in the previous lecture.", "We will use the equation for the energy density 
associated with gravitational waves in Sec. ", "$\\rho _{GW}(\\tau )=\\dfrac{1}{32\\pi G}\\langle \\dot{h}_{ij}^{TT}(\\textbf {x},t)\\dot{h}^{TT}{}^{ij}(\\textbf {x},t)^* \\rangle .$", "Since we are using conformal time, $ \\dot{h}_{ij}=(1/a) h_{ij}^{\\prime } $ , $ \\mathcal {T}_k^{\\prime } = - \\mathcal {T}_k (a^{\\prime }/a)$ and $ \\mathcal {H} = a^{\\prime }/a $ , for $ a = a(\\tau )$ .", "Therefore, $\\dot{h}_{ij}^{TT}(\\textbf {x},t)=-\\dfrac{1}{a}\\sum _{\\lambda }\\int \\dfrac{d^3k}{(2\\pi )^3} h_\\lambda (\\textbf {k})\\mathcal {T}_k(\\tau )e_{ij}^{\\lambda }(\\hat{k})\\left( i k + \\mathcal {H}\\right)e^{-i(k\\tau -\\textbf {k}\\cdot \\textbf {x})},$ and $\\rho _{GW}(\\tau ) = \\dfrac{1}{32\\pi G} \\dfrac{1}{a^2} \\sum _{\\lambda \\lambda ^{\\prime }}\\int \\dfrac{d^3k_1}{(2\\pi )^3} \\int \\dfrac{d^3k_2}{(2\\pi )^3} \\langle h_\\lambda (\\textbf {k}_1) h_{\\lambda ^{\\prime }}(\\textbf {k}_2)\\rangle e_{ij}^{\\lambda }(\\hat{k}_1)e_{ij}^{\\lambda ^{\\prime }}(\\hat{k}_2) \\mathcal {T}_{k_1} \\mathcal {T}_{k_2} \\times \\nonumber \\\\\\times \\left( i k_1 + \\mathcal {H}\\right)\\left( - i k_2+ \\mathcal {H}\\right)e^{-i(k_1-k_2)\\tau }e^{-i(\\textbf {k}_2-\\textbf {k}_1)\\cdot \\textbf {x}}.$", "The last term simplifies after assuming homogeneity and isotropy, since the following expression holds: $\\langle h_\\lambda (\\textbf {k}_1) h_{\\lambda ^{\\prime }}(\\textbf {k}_2)\\rangle = (2\\pi )^3 \\delta _{\\lambda \\lambda ^{\\prime }}\\delta ^3(\\textbf {k}_1-\\textbf {k}_2)P_\\lambda (\\mid \\textbf {k}_1\\mid ), $ where $P_\\lambda (k)$ is the non-normalized tensor power spectrum.", "We can write the energy density associated with SGWBs as $\\rho _{GW}(\\tau ) = \\dfrac{1}{32\\pi G} \\dfrac{1}{a^2}\\sum _{\\lambda }\\int \\dfrac{d^3k}{(2\\pi )^3}\\mathcal {T}_{k}^2 \\vert \\left( i k + \\mathcal {H}\\right) \\vert ^2 e_{ij}^{\\lambda }e^{\\lambda }_{ij}(k)P_\\lambda (k).$", "For each polarization mode, $ e_{ij}^{\\lambda }e^{\\lambda }_{ij} = 1$ , since $ e_{ij}^{\\lambda _1}e^{\\lambda _2}_{ij}=\\delta _{\\lambda _1,\\lambda _2}$ .", "Since we are working with sub-horizon modes and $\\vert \\left( i k + \\mathcal {H}\\right) \\vert ^2 = k^2 + \\mathcal {H}^2 = k^2 + (a H)^2 $ , we can approximate $aH<<k$ so that $\\rho _{GW}(\\tau ) = \\dfrac{1}{32\\pi G} \\dfrac{1}{a^2}\\sum _{\\lambda }\\int \\dfrac{d^3k}{(2\\pi )^3}\\mathcal {T}_{k}(\\tau )^2 k^2 P_\\lambda (k).$", "Hence, at a time $\\tau _0$ : $\\rho _{GW}(\\tau _0)=\\dfrac{1}{32\\pi G}\\dfrac{1}{ a^2(\\tau _0)}\\dfrac{4\\pi }{(2\\pi )^3}\\int (k^2 d k)\\times k^2\\sum _\\lambda P_\\lambda (k) \\dfrac{a^2(\\tau _{\\ast })}{a^2(\\tau _0)}.$", "The term $\\sum _\\lambda P_\\lambda (k)$ can be identified with the primordial power spectrum, and $ \\dfrac{a^2(\\tau _{\\ast })}{a^2(\\tau _0)} $ encodes the cosmological history.", "We can also relate the energy density $\\rho _{GW}$ to the gravitational wave spectral shape $\\Omega _{GW}$ through (REF ), $\\rho _{GW}=\\rho _c \\int d(\\log k) \\dfrac{1}{\\rho _c}\\dfrac{\\partial \\rho _{GW}}{\\partial \\log k} = \\rho _c \\int d(\\log k)\\Omega _{GW}(k,\\tau _0),$ so that we can obtain the gravitational wave spectrum $\\Omega _{GW}(k,\\tau _0)$ by comparing the last expression with (REF ).", "The critical energy density is given by $ \\rho _c = 3H_0^2/(8\\pi G)$ .", "Therefore, $\\Omega _{GW}(k,\\tau _0)=\\dfrac{k^5}{24\\pi ^2 H_0^2}\\sum _\\lambda P_\\lambda (k) \\dfrac{a^2(\\tau _{\\ast })}{a^4(\\tau _0)}.$", "Why is $\\Omega _{GW}(k,\\tau _0)$ relevant?", "Because it is the spectrum of gravitational wave density, 
carrying information on the source and cosmic history as measured today.", "By defining the power spectra of tensor fluctuations as $\\Delta _t^2 = \\dfrac{k^3}{2\\pi ^2} \\sum _\\lambda P_\\lambda (k) $ we can rewrite $\\Omega _{GW}^0(k)=\\dfrac{\\Delta _t^2}{12}\\dfrac{k^2}{H_0^2}\\dfrac{a^2(\\tau _*)}{a^4(\\tau _0)}=\\dfrac{\\Delta _t^2}{12}\\left(\\dfrac{k}{a_{\\ast }H_{\\ast }}\\right)^2 \\dfrac{a_{\\ast }^4 H_{\\ast }^2}{a_0^4 H_0^2},$ where $a_*=a(\\tau _*), H_*=H(\\tau _*)$ and the index 0 denotes quantities at today's time writing.", "Take, for instance, the case of single field slow-roll inflationDuring inflation, gravitational waves are created from quantum fluctuations in quasi-de-Sitter spaces.", "Here we assume that the polarization modes are the same, i.e., $ P_+=P_- $ , which is a valid assumption for single-field slow-roll inflation, that shows no preference for either polarization, and the SGWB is unpolarized., for $\\tau _*$ in radiation domination.", "The power spectrum for each polarization mode is $P_\\lambda (k) = \\left(\\dfrac{2}{M_p}\\right)^2\\left(\\dfrac{H_{\\text{inflation}}^2}{2k^3}\\right),$ where $M_p=1/(8\\pi G)$ .", "We will deduce this expression later, see Eq.", "(REF ).", "It follows that $\\Delta _t^2$ is constant, given byIn the inflationary period, $a(t) = e^{\\Lambda t}$ , for constant $\\Lambda =H_{\\text{inflation}}$ .", "Since the $k^3$ from the power spectrum cancels the corresponding term in the definition of the tensor fluctuation, $\\Delta _t^2$ is constant for any value of $k$ .", "$\\Delta _t^2=2H_{\\text{inflation}}^2/(\\pi ^2M_p^2).$ At the time of horizon entry, $a_{\\ast }H_{\\ast } = k$ .", "So, $ \\Omega _{GW}^0$ does not depend on $k$ , and the gravitational wave spectrum and the energy density decrease with $a^{4} $ , as expected for radiation.", "We can also plot the expected shape of such gravitational wave spectrum as a function of the observed frequency, see Fig.", "REF .", "If a gravitational wave is emitted with some frequency $f_*$ at time $\\tau _*$ and assuming horizon sized source, then the observed frequency at time $\\tau _0$ is red-shifted, given by (see more in Sec.", "REF ) $f_0 = f_* \\dfrac{a(\\tau _*)}{a(\\tau _0)} \\sim H_* a_*.$ Therefore $\\Omega _{GW}^0 \\sim f_0^2 a_*^2$ .", "As $H_*^2\\sim \\rho $ and during radiation domination $\\rho \\sim a^{-4}$ , then $H_*\\sim a^{-2} $ , $f_0\\sim a_*^{-1} $ , and finally $\\Omega _{GW}^0 \\sim (f_0)^0$ , i.e., the gravitational wave spectrum does not depend on the observed frequency.", "Instead, if the gravitational wave was emitted during a matter-dominated era, then $H_*\\sim a^{-3/2} $ , $f_0\\sim a_*^{-1/2} $ , and finally $\\Omega _{GW}^0 \\sim f_0^{-2}$ .", "This is the behavior expected for inflationary gravitational waves.", "Different primordial sources will depend on $f_0$ in different ways.", "See Sec. 
.", "Notice also that we did not take into account any changes in the number of relativistic $(g_*)$ and entropy $(g_{*,s})$ degrees of freedom.", "In general, by considering the evolution of the universe and taking care of the Standard Model degrees of freedom, we can assume that the gravitational wave density decays like radiation so that the gravitational wave density spectrum can be expressed as a function of $\\Omega _{rad}^0\\sim 10^{-5}$ (which accounts for radiation) and the degrees of freedom as $\\Omega _{\\rm gw}^{\\rm \\text{observed}}\\left(k\\right) = \\frac{\\Omega _{rad}^0}{24} \\left(\\frac{g_*\\left(k\\right)}{g_*^0}\\right)\\bigg (\\frac{g_{*,s}^0}{g_{*,s}\\left(k\\right)}\\bigg )^{4/3} \\hspace{-2.5pt}\\Omega _{\\rm gw}^{\\rm \\text{emitted}}(k)\\,.$ Figure: Plot in logarithm scale of the gravitational wave power spectrum from single field slow-roll inflation versus frequency (observed today), in Hz.", "Notice that time flows in the opposite direction of the frequency axis since f 0 ∼a * -1 f_0\\sim a_*^{-1}.", "The spectrum does not depend on the frequency during the radiation-domination era (RD, red, solid line, ω * =1/3\\omega _*=1/3).", "During the matter-domination era (MD, green, dashed line, ω * =0\\omega _*=0), Ω GW 0 ∼f 0 -2 \\Omega _{GW}^0 \\sim f_0^{-2}.", "The acronym “eq\" stands for matter-radiation equality, and “rh\" for reheating (the last stage of inflation, which takes into account all processes from the decay of the inflaton field in order to establish the hot thermal bath of the Big Bang).", "We assumed that reheating (MD, green, dotted line) occurred at f 0 ∼10 8 Hzf_0 \\sim 10^{8} \\text{Hz}.", "During reheating ω * =<ω>=0\\omega _*=<\\omega >=0, as MD.", "On left, we also show the power spectrum of CMB (blue, thick line, r=Δ t 2 /Δ s 2 <0.1r=\\Delta _t^2/\\Delta _s^2<0.1), related to tensor anisotropies on the last scattering surface, during the matter-domination era." ], [ "Searching for SGWBs", "In this section, we describe how we can detect SGWBs by using Michelson interferometers.", "The experiment detects light signals only when gravitational waves disturb the perfect destructive interference pattern." 
], [ "Experimental setting", "According to the setting in Fig.", "REF , we need to compute the time delay associated with light departing and going back to 1.", "There are at least two possible frames: in the TT frame, gravitational waves change the photon propagation, and the “free-falling\" mirrors do not move; in the proper detector frame, gravitational waves change the distance between the beam-splitter and mirrors.", "Let us work in the TT frame.", "If light travels with $c=1$ , it takes $L$ to travel from 1 to the mirror and $2L$ back to 1.", "In the $\\hat{l}=\\textbf {x}$ direction, the time delay for a light signal emitted at time $t$ is given byHere, it suffices to use flat spacetime.", "There is no relevant back-reaction of the gravitational waves in the spacetime where they are propagating, according to ().", ": $\\Delta T (t) = \\dfrac{1}{2} \\hat{l}^a \\hat{l}^b\\int _0^L h_{ab}(t+s,\\textbf {x}+s\\hat{l})ds, \\qquad a,b=1,2,3;$ where $h_{ab}(t,\\textbf {x})=\\int d^3k \\; e^{-2\\pi i \\textbf {k}\\cdot \\textbf {x}} \\sum _\\lambda \\hat{e}_{ab,\\lambda }(\\hat{k})h_\\lambda (t,\\textbf {k}).$ The time delay, to be measured at time $t$ , associated with the return trip is $\\Delta T_{12} (t-2L) + \\Delta T_{21} (t-L) $ .", "We also expect to find some noise $n_1(t)$ .", "Thus, we write $s_{12}(t)&=&\\Delta T_{12} (t-2L) + \\Delta T_{21} (t-L) + n_1(t) \\\\& = & L \\int \\dfrac{d^3k}{(2\\pi )^3}\\sum _\\lambda I_\\lambda ^{12}(\\textbf {k})h_\\lambda (t-L,\\textbf {k})+ n_1(t),$ where we have defined the single-arm detector response as $I_\\lambda ^{12}(\\textbf {k}) = & \\dfrac{1}{2} \\hat{l}^a \\hat{l}^b\\hat{e}_{ab,\\lambda }(\\hat{k})e^{-2\\pi i \\textbf {k}\\cdot \\textbf {x}}\\times \\nonumber \\\\&\\times \\left( e^{-\\pi i k L ( 1 + \\hat{k}\\cdot \\hat{l})}\\dfrac{ \\sin (\\pi k L ( 1 - \\hat{k}\\cdot \\hat{l}))}{\\pi k L ( 1 - \\hat{k}\\cdot \\hat{l})} + e^{\\pi i k L ( 1 - \\hat{k}\\cdot \\hat{l})}\\dfrac{ \\sin (\\pi k L ( 1 + \\hat{k}\\cdot \\hat{l}))}{\\pi k L ( 1 + \\hat{k}\\cdot \\hat{l})} \\right).$ Notice that this expression tells us the direction in the detector is more sensitive!", "It also tell us something about frequency modes: the terms in the last bracket tend to 2 for $k L << 1$ (small frequency modes) and to 0 for $k L >> 1$ (large frequency modes).", "For example, LIGO works within the low-frequency limit.", "From the response function, there is suppression for both small and large frequency modes.", "For the large ones, the signal drops as $ (\\sin x)/x $ for $ k>>1/L $ .", "For the small ones, the response function is constant for $ k<<1/L $ .", "As the noise grows at low frequencies, sensitivity is lost." 
], [ "Overlap reduction function", "Assuming isotropic SGWB, the measured time delay $s_\\alpha $ can be averaged $<s_\\alpha ^2>=L^2\\int \\dfrac{d^3k}{(2\\pi )^3}\\sum _\\lambda P_\\lambda (k) \\mid I_\\lambda ^{12} - I_\\lambda ^{13}\\mid ^2 + <n^2>,$ where $ P_\\lambda $ is the power spectrum introduced in (REF ), $ I_\\lambda ^\\alpha = I_\\lambda ^{12} - I_\\lambda ^{13} $ and $<n^2> $ is the instrumental noise from different possible sources.", "$ P_\\lambda $ depends on the gravitational wave signal and $ I_\\lambda ^\\alpha $ on the detector response.", "In such cases, the main obstacle is noise.", "Consider that noise severely affects the data for very tiny time delays (which we expect from stochastic gravitational waves).", "Now, assume that there are two interferometers $ \\alpha $ and $ \\beta $ , as shown in Fig.", "REF .", "Then, $<s_\\alpha s_\\beta > = L^2 \\int \\dfrac{d^3k}{(2\\pi )^3} \\sum _\\lambda P_\\lambda (k) I_\\lambda ^\\alpha {}^{*}(\\textbf {k},\\textbf {x}_1) I_\\lambda ^\\beta (\\textbf {k},\\textbf {x}_2), $ i.e., there is cross-correlation.", "Ideally, the two detectors are far away so that their instrumental noises are not correlated $<n_\\alpha n_\\beta > = 0 $ .", "The signal is reduced by overlap reduction functionAlthough we get rid of the noise by considering two detectors, the measured time delay now depends on the distance between them, see Fig.", "REF .", "The overlap reduction function, therefore, depends on the response of the individual detectors as well as their relative geometry.", "For pulsar time arrays, the overlap reduction function is known as the Hellings-Downs curve [12]., which is essential for the detection of stochastic signals.", "Figure: Two Michelson interferometers.", "We put the origin of the coordinate system at 1.", "To describe the location of the second detector, we need to include the vector from 1 to 1 ¯\\bar{1}, where 1 ¯\\bar{1} is the central point of the second detector.", "Therefore, <s α s β > <s_\\alpha s_\\beta > depends on the distance between the interferometers." 
], [ "Monopole response function", "Let us consider isotropic, unpolarized SGWB.", "For this case, we can define the monopole response function as $\\mathcal {M}(k)=\\sum _\\lambda \\int d\\Omega \\mid I_\\lambda ^\\alpha \\mid ^2.", "$ By using the power spectra of tensor fluctuations given by (REF ), we can express the averaged time delay as $\\dfrac{<s_\\alpha ^2>}{L^2}=\\dfrac{1}{8\\pi }\\int d(\\ln k) \\Delta _t^2 \\left( \\sum _\\lambda \\int d\\Omega \\mid I_\\lambda ^\\alpha \\mid ^2 \\right) \\equiv \\dfrac{1}{8\\pi }\\int d(\\ln k) \\Delta _t^2 \\mathcal {M}(k).", "$ The dependence on the signal is due to $\\Delta _t^2 $ .", "The dependence on the configuration of the detector is due to $ \\mathcal {M}(k) $ .", "Then, by measuring the time delay and knowing the configuration of the detector, it is possible to get the signal!", "Take, for instance, the case of LIGO, whose two detector arms are oriented perpendicularly to each other.", "We can plot how the sensitivity and the monopole response function depend on the wave number, as shown in Fig.", "REF .", "Figure: On left, sensitivity of gravitational waves versus kLkL (length of detector LL), given dependence of signal on the correlation <s α 2 ><s_\\alpha ^2> through ().", "For small wavelengths, seismic noise is an obstacle.", "For large wavelengths, the monopole response oscillates too much.", "On the right, log-log plot of the monopole detector response versus kLkL.", "The detector response drops for kL>>1kL>>1 (averaging over many oscillations, see ()), and it is constant for kL<<1kL<<1.For non-isotropic sources, we should repeat the computation and keep $\\left\\langle h h \\right\\rangle $ in the integral over $d\\Omega $ .", "One can search for anisotropies through antenna patterns, see Fig.", "REF .", "Figure: The interferometer and detector are located at the xyxy plane.", "The antenna pattern is a property of the detector and is useful for detecting anisotropy.For polarized sources, $P_\\times (k) \\ne P_+ (k)$ , we should repeat the computation and keep $\\left\\langle h h \\right\\rangle $ in $\\sum _\\lambda $ .", "One can search for polarization through the added multipoles." ], [ "Experiments", "Here we briefly comment on three different experimental configurations that can probe SGWBs: ground- and space-based interferometers and pulsar timing arrays.", "See the diagram in Fig.", "REF .", "Figure: Scheme of different experiments probing the gravitational wave spectrum.", "Thanks to Michael Lam, from the NANOGrav collaboration, for sharing the code to produce the figure.", "From left to right, the images are credited to BICEP2, Bill Saxton at NRAO, eLISA, and LIGO, respectively." ], [ "Ground-based interferometers", "LIGO, Virgo, and KAGRA [9], [13] are collaborations based on ground-based interferometers.", "These interferometers are designed as in Fig.", "REF .", "These experiments rely on very large arms.", "Even so, according to the ratio $\\dfrac{\\Delta L}{L} \\sim \\mid h_{\\mu \\nu } \\mid \\sim 10^{-21}$ , for arms with $L\\sim 10^3$ m, we have $\\Delta L \\sim 10^{-18}$ m. 
Compare with the proton size, $10^{-15}$ m, and notice that the gravitational wave interferometer must be extraordinarily sensitive!", "Take, for instance, a signal whose peak frequency is at $f\sim 100$ Hz, typical for binary mergers, for which the optimal arm length would be $L\sim \lambda /4 = c/(4f) \sim 750$ km.", "An arm of such length would be impossible to build.", "The trick is to use resonant Fabry-Pérot cavities, which fold the light path many times and reduce the required physical size to $L\sim 3-4$ km.", "With such a length, it is possible to detect signals from binary mergers, as LIGO-Virgo detected.", "From the last subsection, we have seen that a combination of detectors can improve experimental power.", "Indeed, if one uses two detectors, one can analyze coincidence and cross-correlation; see, for instance, the first claimed gravitational wave detection by LIGO [9].", "For three detectors, one can analyze the 3D localization and polarization of isotropic SGWB; see, for instance, [10].", "Notice that the smaller the frequency, the larger the arm.", "This fact limits the sources that a detector can observe." ], [ "Space-based interferometers", "LISA [7] is the future space-based interferometer to be launched in the early 2030s.", "The idea is to reproduce the concept behind ground-based interferometers in space.", "LISA will have three $2.5\times 10^{6}$ km arms that would probe frequencies in the mHz range, with three detectors able to detect two independent signal channels.", "This experiment would probe smaller frequencies that, on Earth, would require unrealistically large arms.", "For such frequencies, it would be possible to detect ultra-compact binaries in our galaxy, supermassive black hole mergers, extreme mass ratio inspirals, and other exotic possibilities.", "These exotic possibilities are associated with early universe physics and the corresponding SGWBs."
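The statement that smaller frequencies require larger arms can be made quantitative with a two-line estimate. A minimal Python sketch, taking the optimal arm length to be of order a quarter of the gravitational wavelength, $L\sim c/(4f)$, as in the 750 km figure above:

c = 3.0e8   # speed of light in m/s

def optimal_arm_km(f_hz):
    """Order-of-magnitude optimal arm length L ~ lambda/4 = c/(4 f), in km."""
    return c / (4.0 * f_hz) / 1e3

for label, f in [("LIGO band, 100 Hz", 1e2), ("LISA band, 1 mHz", 1e-3), ("PTA band, 10 nHz", 1e-8)]:
    print(f"{label:18s}: L ~ {optimal_arm_km(f):.3g} km")
# ~750 km at 100 Hz (folded into km-scale arms with Fabry-Perot cavities),
# ~7.5e7 km at 1 mHz (hence a space mission),
# ~7.5e12 km at 10 nHz (of order a light-year, hence Earth-pulsar baselines).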
], [ "Pulsar timing arrays", "Pulsar time arrays (PTAs) do not work like the experimental setting described before.", "Instead, they rely on the detection of gravitational waves by measuring the time of arrival of radio pulses from millisecond pulsars through the spatially correlated fluctuations induced by gravitational waves.", "PTAs can probe smaller frequencies (in the nHz range) than ground- and space-based interferometers.", "As gravitational waves perturb the metric along the Earth-pulsar lines, they modify radio pulses' time of arrival on Earth.", "A set of PTAs creates correlations across the baselines, while other noise sources are uncorrelated.", "Searches using PTAs then compare the measured spatial correlations with the expected values from the Helling-Downs curve, a smoking gun for the isotropic, unpolarized background of quadrupole gravitational wave radiation [14].", "Supermassive black hole binaries are the main source of gravitational waves at the nHz range.", "Supermassive black holes have masses larger than $10^5$ solar masses, are present in the center of galaxies, and are much heavier than the ones producing the transient signals detected by the LIGO-Virgo collaboration, for instance, [9].", "Since the frequency of gravitational waves scales as the inverse of the binary chirp mass, neither LIGO-Virgo nor LISA can detect such supermassive black holes.", "On top of this astrophysical background, a cosmological background is produced by early-universe and beyond Standard Model physics, which can therefore be probed by PTAs.", "NANOGrav [15], PPTA [16], EPTA [17], as well as their joint international consortium IPTA [18] are the largest PTA collaborations.", "Hints for detection on SGWB were recently claimed by the collaborations [19], [20], [21], [22].", "They showed statistical evidence for a common-spectrum low-frequency red-noise power-law process (consistent with the expected black hole binary) but without significant evidence for, or against, Helling-Downs correlations.", "A detection of SGWB can be either confirmed or refuted by coming data releases.", "For details on the past, present, and future of PTA collaborations, see, for instance, [23], [24]." ], [ "Lecture: Cosmological sources - Probing beyond Standard Model physics", "In the last section, we saw how to compute the energy density $\\rho _{GW}$ of gravitational waves, their spectrum $\\Omega _{GW}$ , the averaged time-delay $\\left\\langle s_\\alpha ^2 \\right\\rangle $ for an experimental setting based on interferometers.", "In this section, we will learn how to characterize frequencies of relic gravitational waves, associating them with cosmological sources.", "After presenting a discussion on how gravitational wave detectors can probe different stages of the universe and how we can bind the gravitational wave spectrum with complementary cosmological probes, we discuss primordial sources of gravitational waves [25], [6].", "Cosmic gravitational microwave background, phase transitions, and cosmic strings are discussed in this section.", "Inflation and axion inflation are discussed in Sec.", ".We will not discuss scalar-induced gravitational waves, which are related to primordial black holes.", "See, for example, [26], [27] for discussions on the topic." 
], [ "Characteristic frequencies of relic gravitational waves", "If a gravitational wave is emitted with some frequency $f_*$ at time $\\tau _*$ , then the observed frequency at time $\\tau _0$ is red-shifted, $f_0 = f_* \\dfrac{a(\\tau _*)}{a(\\tau _0)}.$ The emitted frequency is given by $f_* = (\\epsilon _* H_*^{-1})^{-1} $ , where $\\epsilon _* $ satisfies $\\epsilon _* \\le 1$ , setting the inverse of Hubble factor $(H_*)^{-1}$ as the cosmological horizon.", "gravitational waves from a source in the early universe cannot be correlated on time scales larger than $(H_*)^{-1}$ , otherwise, it would break causality.", "The exception is cosmic inflation.", "The correct value of $\\epsilon _* $ depends on the source.", "Assuming that a gravitational wave signal is produced during the radiation era, $H_*^2= \\dfrac{\\pi ^2 g_* T_*^4}{90 M_p^4}$ , $ a \\sim 1/T $ , $ t \\sim 1/T^2$ , and degrees of freedom $g_*\\sim 100$ , we haveFor inflation, $T_*$ marks re-entry, not inflation scale.", "$& f_0\\simeq 10^{-8} \\epsilon _*^{-1} \\left( \\dfrac{T_*}{\\text{GeV}}\\right) \\text{Hz}, \\\\& t_*\\simeq 10^{-22} \\epsilon _*^{-1} \\left( \\dfrac{1 \\text{Hz}}{f_0}\\right)^2 \\text{s}.$ Thus, it is possible to associate the observed frequency of gravitational waves in the detectors with the epochs of the universe when such gravitational waves had been producedThe correct correspondence between frequency and temperature of the universe does depend on the relativistic and entropy number of degrees of freedom $g_*$ and $g_{*,S}$ , as well as on the equation of the state of the universe.", "We will not derive a general expression here since it is model-dependent.. By operating in different frequency ranges, gravitational wave detectors can probe separated energy scales and cosmological epochs.", "As we can see in Table REF , in principle we can have access to very high energy scales.", "These scales cannot be probed by other cosmological probes, for instance, CMB, BBN, and LSS, that can probe temperatures $T_p \\le 1 $ MeVTake as a grain of salt that the stochastic gravitational wave signal is weak, because of the $1/a^2$ suppression for high-redshift sources.. Table: Typical peak frequencies and their associated temperature of emission, expected for PTAs (pulsar time arrays) and the gravitational wave interferometers LIGO and LISA, for ϵ * =1\\epsilon _*=1." 
], [ "Constraints from BBN and CMB", "Big bang nucleosynthesis (BBN) and the cosmic microwave background (CMB) are also cosmological probes.", "They help us to answer the question: “What is the maximum fraction $\\Omega _{GW}/\\Omega _{rad}$ we can observe today?\"", "We start computing $\\Delta \\rho _{rad}=\\rho _{rad}^{obs}-\\rho _{rad}^{SM}$ , the extra observed radiation due to neutrino species.", "The energy density due to gravitational waves cannot be larger than $\\Delta \\rho _{rad}$ itself.", "After electron decoupling, $\\rho _{rad}= \\rho _\\gamma + \\rho _\\nu = \\dfrac{\\pi ^2}{30}\\left(2+\\dfrac{7}{4}N_{eff}\\left(\\dfrac{4}{11}\\right)^{4/3}\\right)T^4.$ The factor of 2 corresponds to the two degrees of freedom of photon radiation, $7/4=(7/8)2$ to neutrinos and anti-neutrinos (fermions with one helicity state each), and $4/11$ to heating of the photon bath relative to the neutrino bath due to $e^+e^-$ decay after neutrino decoupling.", "$ N_{eff} $ is the neutrino effective number given by $ N^{SM}_{eff} + \\Delta N_{eff}$ .", "In the standard model, $N^{SM}_{eff}=3.046$ .", "Consequently, $\\left(\\dfrac{\\rho _{GW}}{\\rho _\\gamma }\\right)_{T=\\text{MeV}} < \\dfrac{\\rho _{rad}^{obs}-\\rho _{rad}^{SM}}{\\rho _\\gamma } \\le \\dfrac{7}{8}\\left(\\dfrac{4}{11}\\right)^{4/3}\\Delta N_{eff}.$ At BBN and CMB decoupling, $\\Delta N_{eff} \\lesssim 0.2$ [28], [29].", "Therefore, for $T<(T_{\\text{BBN}},T_{\\text{CMB}})$ , the observed ratio is bounded by $\\left(\\dfrac{\\rho _{GW}}{\\rho _\\gamma }\\right)_T \\lesssim 10 \\%$ .", "Today, $\\Omega _\\gamma = \\dfrac{\\rho _\\gamma }{\\rho _{c}} \\sim 10^{-5}$ , and $\\rho _{GW}\\lesssim \\Omega _{\\gamma }\\rho _{c} \\Delta N_{eff} $ constrains the gravitational wave spectrum.", "Since, $\\rho _{GW}=\\rho _c \\int d(\\log f)\\Omega _{GW}(f)$ , the BBN constraint implies, for a broad spectrum, that the observed gravitational wave spectrum today is bounded by $\\Omega _{GW}\\lesssim 10^{-6}.$ This constraint holds for gravitational waves inside the horizon at $T_{\\text{BBN}}$ and $T_{\\text{CMB}}$ , i.e, the gravitational waves emitted before CMB decoupling.", "It already constrains some early universe models!", "Next, we will focus on different cosmological sources for stochastic gravitational waves." ], [ "Cosmic gravitational microwave background", "In the primordial plasma, photon decoupling at $T\\sim $ eV leads to CMB.", "Likewise, gravitational waves decoupling at $T\\sim M_P$ leads to cosmic gravitational microwave background (CGMB) [30].", "Because of the very large value of the Planck mass $M_P \\sim 10^{18}$ GeV, it is very hard to detect CGMB: $\\Omega _{\\text{CGMB}}\\sim \\dfrac{T_{max}}{M_P}\\Omega _{\\text{CMB}},$ where $T_{max}$ is the highest temperature during radiation domination, at sub-Planckian temperatures, ($T_{max}\\le 10^{16} $ GeV, approaches BBN bound for $T_{max}\\sim M_P$ ), and $ \\Omega _{\\text{CMB}} $ is the CMB spectrum observed today.", "$ \\Omega _{\\text{CGMB}} $ peaks around to 100 GHz [30], far from any current technology or planned gravitational wave experiment." 
], [ "First order phase transitions", "First-order phase transitions (PTs) are processes of spontaneous symmetry breaking from a symmetric phase (false vacuum) to a broken phase (true vacuum), as shown in Fig.", "REF .", "Bubble collisions, magneto-hydrodynamics (MHD) turbulence, and sound waves are phenomena sourcing gravitational waves during a 1st order PT.", "The peak frequency, for $\\epsilon _*\\sim 10^{-3}$ , is around $f_{\\text{peak}}\\approx 10^{-3} \\text{Hz} \\left(\\dfrac{T}{100 \\text{GeV}}\\right),$ where 100 GeV corresponds to the Standard Model (SM) electroweak (EW) phase transition temperature.", "No signal from the SM EW phase transition is expected because the SM does not have a 1st order PT for the observed Higgs mass [31].", "However, several BSM models lead to first-order EW phase transition that could be probed by LISA.", "From BBN, $ \\Omega _{GW} \\lesssim 10^{-6} $ already constrains some PT models, depending on the strength of the PTs [19].", "For more on cosmological phase transitions, see, for instance, the lecture notes [32]." ], [ "Cosmic strings", "Cosmic strings are one-dimensional topological defects.", "Topological, or cosmic, defects are products of phase transitions, when the vacuum manifold $M$ is topologically non-trivial, i.e., $\\pi _n(M)\\ne \\mathit {I}$ , see [33].", "They can be strings ($n=1$ ), monopoles ($n=2$ ) or textures ($n=3$ ) [34].", "Let us focus on strings.", "Here, $\\pi _1(M)$ stands for the first homotopy group in a manifold $M$ and counts the number of equivalence classes of loops in $M$ .", "Cosmic strings arise in phase transitions if, and only if, for $G\\rightarrow H$ , $\\pi _1 (G/H) \\ne \\mathit {I} $ .", "Cosmic string models are often associated with the spontaneous symmetry breaking of a local U(1) symmetry in some BSM/grand unified theory (GUT) scenarios.", "One example is the breaking of $B-L$ , the difference between baryon and lepton numbers [35].", "The string tension $\\mu $ is the energy per unit of length.", "It is related to the amplitude of vacuum expectation value $v^2$ through $(G\\mu )\\propto v^2$ .", "Figure: Representation of cosmic strings - one-dimensional topological defects.", "In the two first plots, we show a complex scalar field potential versus scalar field configuration, where some mechanism allows for a phase transition with a non-vanishing expectation value vv.", "We obtain the two last plots by mapping the solution to real space.", "In the 2D plot, we show the location of the local extremal point (false vacuum, orange dot) and regions where the scalar field configuration assumes different values; by continuity, these regions intersect each other where the vacuum expectation value <φ><\\phi > corresponds to the false vacuum.", "In the 3D plot, we extrapolate the false vacuum region to three spatial dimensions; the reason for the name strings becomes clear.Figure: 3D representation of cosmic strings (gray) from a simulation credited to David Daverio, from the group of Professor Martin Kunz, Université de Genève, using simulation data obtained at the Swiss National Supercomputer Centre.In the evolution of cosmic string networks, (self-) intersection generates loops.", "Loops are more energetically favorable, and then there is the emission of particles and gravitational waves by wave excitations of the loops.", "The scaling regime is a fixed point of this evolution with the property $\\dfrac{\\rho _{\\text{CS}}}{\\rho _{total}} \\approx \\text{constant},$ with $\\mathcal {O}(1)$ cosmic strings 
per Hubble volumeThis property follows from the fact that in the scaling regime, the only physical scale is the Hubble radius $H^{-1}$ .", "Therefore, the energy density of cosmic strings $ \\rho = \\mu \\times [M]^2 \\propto \\mu H^2 $ , while the critical energy density $\\rho _{total}=\\rho _{crit} = 3H^2/(8\\pi G)$ , so that the ratio $\\rho _{\\text{CS}}/\\rho _{total}$ is constant.", "This property is essential for cosmic strings phenomenology and it distinguishes strings from monopoles and domain walls.", "For these defects, the ratio $\\rho _{\\text{CS}}/\\rho _{total}$ is not constant and can overclose the universe..", "Figure: Representation of gravitational waves emitted by loops from cosmic string networks.", "Since it is a continuous process, the spectrum is broader.", "The gravitational wave signal is characterized by $\\rho _{GW}(t,f) \\propto \\sum _{n=1}^{\\infty } C_n(f) P_{gw,n}.$ In this expression, $P_{gw,n}$ is the power of a single loop.", "The larger the string tension $\\mu $ , the larger $P_{gw}$ .", "Furthermore, $n$ corresponds to the n-th harmonic, and $C_n(f)$ gives the number of loops emitting gravitational waves, that are observed today at frequency $f$ at time $t$ , $C_n(f)=\\dfrac{2n}{f^2}\\int dz \\dfrac{N(l(z),t(z))}{H(z)(1+z)^6}.", "$ Above, the denominator $H(z)(1+z)^6$ tells about the cosmological history.", "$N(l(z),t(z))$ is the number of loops of length $l$ at time $t$ , where the length $l$ is given by $ l = 2n/ (f(1+z))$ .", "The computation is not straightforward and there are different methods.", "In order to solve this integral analytically, the usual assumption is loops being sourced with $lH=\\alpha =$ constant.", "Numerically, see, for instance, [36].", "Figure: Amplitude of gravitational waves generated by cosmic strings, with different string tensions μ\\mu , as a function of frequency.", "The larger GμG\\mu , the larger the amplitude Ω GW \\Omega _{GW}.", "We also plot the frequency range probed, or expected to be probed, by LIGO, LISA, and PTA collaborations.", "PTA signals already constrain cosmic string models with large GμG\\mu , whose frequency peak is at the nHz scale.There are two general properties: the larger $G\\mu $ , the stronger $\\Omega _{GW}$ ; the higher the gravitational wave characteristic frequency, the earlier the emission.", "Moreover, gravitational wave signals constrain the string tension and bound the symmetry-breaking scale.", "Large symmetry-breaking scales for topologically stable cosmic strings were excluded by PTAs: $ G\\mu \\lesssim 10^{-10} \\rightarrow v \\lesssim 10^{14} \\text{GeV}.$ These signals are associated with a largely flat spectrum at high frequencies and with a mild peak at nHz-mHz frequencies, as shown in Fig.", "REF .", "Finally, as a note on possible production mechanisms, the SM cannot produce strings, but BSM theories, such as grand unification theories (GUTs), can.", "Complementary to collider searches, gravitational waves can probe GUT physics.", "For instance, in some GUT models, strings can decay via monopole pair production, suppressing $\\Omega _{GW}$ at low $f$ .", "Metastable defects relies on a sequence of phase transitions [37] $G \\rightarrow G^{^{\\prime }} \\rightarrow SM, \\qquad \\pi _n(G/SM)=\\mathbb {I} \\qquad \\text{and} \\qquad \\pi _n(G/G^{^{\\prime }})\\ne \\mathbb {I}, \\pi _m(G^{^{\\prime }}/SM)\\ne \\mathbb {I}$ i.e.", "the manifolds $(G/G^{^{\\prime }})$ and $(G^{^{\\prime }}/SM)$ have non-trivial homotopy groups, but the homotopy group of $(G/SM)$ is trivial so 
that the defect is not topologically stable.", "For example, we have the symmetry groups $G=SO(10)$ and $G^{^{\prime }}={\rm SM}\times U(1)$ .", "$G\rightarrow G^{^{\prime }}$ generates monopoles, $G^{^{\prime }}\rightarrow SM$ generates cosmic strings, but $\pi _1(SO(10)/SM)=\mathbb {I}$ .", "Therefore, strings are not topologically stable, thus metastable.", "There are at least two decay mechanisms.", "In the first mechanism, there is an initial population of monopoles and strings; then, the string-monopole gas decays fast.", "In a second mechanism, relevant if inflation dilutes away the initial monopole population, strings can only decay via spontaneous Schwinger monopole production with a decay rate $\propto e^{-m^2/\mu }$ , where $m$ is the monopole mass; then, these metastable strings can emit gravitational waves [38].", "At low frequencies, the spectrum is suppressed, so it is not excluded by the PTA bounds, while larger spectra are allowed at higher frequencies, which opens a discovery space for LIGO [39]." ], [ "Lecture: Gravitational waves from axion inflation", "In the last section, we learned how to characterize stochastic backgrounds and described primordial sources associated with beyond-Standard-Model physics.", "In this section, we describe another potential source of gravitational wave backgrounds: inflation.", "We focus on single-field slow-roll inflation and on an axion-like inflaton particle coupled to a photon." ], [ "Cosmic inflation", "From the Einstein equations ($M_p=1,c=1$ ), $R_{\mu \nu }-\dfrac{1}{2}R g_{\mu \nu } = T_{\mu \nu },$ for a FRW spacetime, the Friedmann equations are: $& \left(\dfrac{\dot{a}}{a}\right)^2 = H^2 = \dfrac{\rho }{3} - \dfrac{k}{a^2}, \\& \dfrac{\ddot{a}}{a}=\dot{H}+H^2=-\dfrac{1}{6}(\rho +3p),$ for a perfect fluid given by $T^\mu {}_\nu = \text{diag} (\rho ,-p,-p,-p)$ , where $\rho $ is the energy density and $p$ is the pressure, and $H=H(t)$ is the Hubble parameter.", "On the one hand, we have decelerated expansion $\ddot{a}<0$ if the equation of state parameter is $\omega =p/\rho >-1/3$ .", "On the other hand, there are two problems associated with decelerated expansion: the horizon and flatness problems.", "(The horizon problem is due to the fact that the Hubble horizon grows faster than any other physical scale, so that the observed CMB spectrum implies uniformity for regions that were not causally connected in the early universe.", "The flatness problem is about the fact that the curvature of the universe is very small today, which would require even smaller curvatures in the past.", "For more details on these problems, see, for instance, the lecture notes on inflation [40].)", "Therefore, the condition $\omega <-1/3 $ must be satisfied by a cosmic inflation model, which took place in the universe before BBN.", "Consider a single-field slow-roll model ($M_P, c=1$ ): $S=\int d^4x \sqrt{-g} \left( \dfrac{R}{2}-\dfrac{1}{2}g^{\mu \nu }\partial _\mu \phi \partial _\nu \phi - V(\phi ) \right).$ The energy-momentum tensor is given by (our convention is $ g_{\mu \nu }=\bar{g}_{\mu \nu } + h_{\mu \nu }$ , where $ \bar{g}_{\mu \nu }=\text{diag}(-1,a^2,a^2,a^2)$ ) $T_{\mu \nu }^{(\phi )}=-\dfrac{2}{\sqrt{-g}}\dfrac{\delta S_\phi }{\delta g^{\mu \nu }} = \partial _\mu \phi \partial _\nu \phi - g_{\mu \nu }\left(\dfrac{1}{2}\partial _\alpha \phi \partial ^\alpha \phi +V(\phi )\right)$ By assuming that the field is homogeneous, i.e. $\phi (x,t)=\phi (t),
\\partial _i \\phi = 0$ , we have $\\rho _\\phi = \\dfrac{\\dot{\\phi }^2}{2} + V(\\phi ), \\qquad p_\\phi = \\dfrac{\\dot{\\phi }^2}{2} - V(\\phi ),$ and then $\\omega _\\phi = \\dfrac{p_\\phi }{\\rho _\\phi } = \\dfrac{ \\dot{\\phi }^2 / 2 - V(\\phi )}{ \\dot{\\phi }^2/2+ V(\\phi )} \\qquad \\Rightarrow \\qquad \\omega _\\phi \\rightarrow -1 \\qquad \\text{if}\\qquad V(\\phi ) >> \\dot{\\phi }^2 / 2.$ The last condition is known as slow-roll, see Fig.", "REF , and therefore this model is able to describe inflation, since $\\omega _\\phi <-1/3$ .", "In particular, from the Friedmann equations, its solution is de-Sitter, i.e.", "an exponential expansion $a(t) \\propto e^{Ht}$ , for a constant positive Hubble parameter $H$ .", "For the scalar field $\\phi $ , the equation of motion is $\\dfrac{1}{\\sqrt{-g}}\\partial _\\mu (\\sqrt{-g}\\partial ^\\mu \\phi ) + V_{,\\phi }=0\\qquad \\Rightarrow \\qquad \\ddot{\\phi }+3 H \\dot{\\phi } + V_{,\\phi } = 0,$ for homogeneous $\\phi $ with $H^2=\\dfrac{1}{3}(\\dot{\\phi }^2 / 2 + V(\\phi ) ) \\approx \\dfrac{V(\\phi )}{3}$ .", "Figure: Slow-roll scalar field potential V(φ)V(\\phi ) as a function of φ\\phi .", "Slow-roll condition implies that the potential does not vary much with the evolution of the field, allowing for accelerating solutions ω<-1/3\\omega <-1/3.", "In the evolution of the scalar field, time runs from right to left.", "At the beginning of the evolution, V(φ)>>φ ˙ 2 V(\\phi )>>\\dot{\\phi }^2, implies a constant H=V/3H=\\sqrt{V/3} and a de-Sitter solution for the scale factor.", "In addition, quantum fluctuations δφ\\delta \\phi source density scalar perturbations.", "Inflation ends when the slow-roll condition is not satisfied.In addition, quantum fluctuations $\\delta \\phi $ and $\\delta g^{\\mu \\nu }$ source density perturbations and gravitational waves.", "Fluctuations only propagate after horizon re-entry since they are frozen in the super-horizon, as discussed in Sec.", "REF .", "See Fig.", "REF .", "Next, we focus on the tensor perturbation.", "Figure: Diagram of the comoving scales k -1 k^{-1} and (aH) -1 (aH)^{-1} as a function of conformal time, normalized so that inflation ends at τ=0\\tau =0.", "For a mode with a given wave number kk, at earlier times during inflation, the Hubble horizon HH is constant and the scale factor a(t)a(t) grows exponentially.", "As a result, (aH) -1 (aH)^{-1} decreases, so that the mode leaves the horizon when k -1 =(aH) -1 k^{-1}= (aH)^{-1}.", "After inflation, for both matter and radiation eras, (aH)∼1/t(aH)\\sim 1/t, and (aH) -1 (aH)^{-1} increases with time so that the mode re-enters the horizon when k -1 =(aH) -1 k^{-1}= (aH)^{-1}.", "Sub-horizon scales refer to modes in the horizon k -1 <(aH) -1 k^{-1}<(aH)^{-1}.", "Super-horizon scales refer to modes out of the horizon k -1 >(aH) -1 k^{-1}> (aH)^{-1}." 
], [ "gravitational waves from inflation", "The field equations in vacuum are given in Sec.", ", $\\tilde{h}_\\lambda ^{\\prime \\prime }(\\textbf {k},\\tau )+\\left(k^2+\\dfrac{a^{\\prime \\prime }}{a}\\right)\\tilde{h}_\\lambda (\\textbf {k},\\tau )= 0,$ for $\\tilde{h}_\\lambda =a h_\\lambda $ , for the two polarization modes.", "At $\\tau \\rightarrow -\\infty $ , we have the Bunch-Davies vacuum solution $\\lim _{\\tau \\rightarrow - \\infty } \\tilde{h}_\\lambda = \\dfrac{2 e^{-ik\\tau }}{\\sqrt{2k}},$ so that $\\tilde{h}_\\lambda = \\dfrac{2 e^{-ik\\tau }}{\\sqrt{2k}}\\left( 1 - \\dfrac{i}{k\\tau }\\right)$ and $\\langle h_\\lambda (\\textbf {k}) h_{\\lambda ^{\\prime }}(\\textbf {k})\\rangle = (2\\pi )^3 \\delta _{\\lambda \\lambda ^{\\prime }}\\delta ^3(\\textbf {k}-\\textbf {k}^{\\prime })\\left( \\dfrac{1}{M_p} \\right) \\dfrac{\\mid \\tilde{h}_\\lambda \\mid ^ 2}{a^2}$ gives us $\\dfrac{\\mid \\tilde{h}_\\lambda \\mid ^ 2}{a^2} = \\dfrac{4}{a^2 (2k)}\\left(1+\\dfrac{1}{k^2 \\tau ^2}\\right) = \\dfrac{4 H^2}{2k^3}(1+k^2\\tau ^2),$ for the de-Sitter expanding universe solution, where $ a H = -1 / \\tau $ .", "The last term above can be neglected on super horizon scales, since $ (a H )^{-1} < k^{-1} $ .", "We have $\\langle h_\\lambda (\\textbf {k}) h_{\\lambda ^{\\prime }}(\\textbf {k})\\rangle = (2\\pi )^3 \\delta _{\\lambda \\lambda ^{\\prime }}\\delta ^3(\\textbf {k}-\\textbf {k}^{\\prime })\\left( \\dfrac{2}{M_p} \\right)^2 \\dfrac{H_*^2}{2k^3}, $ where $H_*=H(k\\tau = 1)$ is the Hubble parameter when the mode $k$ leaves the horizon during inflation.", "Since gravitational waves are frozen on super-horizon scales and inflation ends before horizon re-entry, the last imprint from inflation in the gravitational wave signal comes from the time when the mode leaves the horizon.", "From (REF ), we get the power spectrum $P_h(k) = \\left(\\dfrac{2}{M_P}\\right)^2 \\dfrac{H_*^2}{2k^3}.", "$ Slow-roll condition implies $H \\approx \\text{constant}$ and a scale-invariant spectrum $ \\Delta _t^2 \\propto k^3 P_h(k) = \\text{constant} $ .", "How large are these tensor perturbations?", "CMB observations point to a tensor-to-scalar ratio $r=\\Delta _t/\\Delta _s<0.1$ [29].", "With $\\Delta _s$ measured, the upper bound on $r$ is an upper bound on $\\Delta _t$ and hence on the gravitational wave signal.", "This would imply $\\Omega _{GW} \\le 10^{-15}$ , which is too small for PTAs, LISA, and LIGO to detect a signal.", "For more details on gravitational waves from inflation, see for instance the review [41]." 
], [ "Axion inflation", "Although the previous slow-roll scalar field model is the dominant paradigm in inflation cosmology, the requirement of having a flat scalar potential is a challenge in particle physics.", "We can solve this problem by introducing symmetry, which protects the flatness of the potential.", "One simple implementation is to assume that the inflaton $\\phi $ is a pseudo-Nambu-Goldstone boson, whose model is invariant under shift symmetry: $\\phi \\rightarrow \\phi + c$ , where $c$ is a constant.", "The resulting pseudo-scalar field model is an axion-like particle model: $S=\\int d^4 x\\left\\lbrace \\sqrt{-g} \\left[ \\dfrac{R}{2} - \\dfrac{1}{2}\\partial _\\mu \\phi \\partial ^\\mu \\phi - V(\\phi ) - \\dfrac{1}{4} F_{\\mu \\nu }F^{\\mu \\nu } \\right] - \\dfrac{\\phi }{4\\pi \\bar{f}} F_{\\mu \\nu }\\tilde{F}^{\\mu \\nu } \\right\\rbrace , $ where $ F_{\\alpha \\beta } = \\partial _\\alpha A_\\beta - \\partial _\\beta A_\\alpha $ , $ \\tilde{F}^{\\mu \\nu } = 1/2 \\epsilon ^{\\mu \\nu \\alpha \\beta }F_{\\alpha \\beta } $ is the dual strength tensor, and the inflaton $\\phi $ is coupled to the (dark) photon through a dimension five operator and the coupling $1/\\bar{f}$ .", "Shift symmetryThe last term of the action can be rewritten as $ \\phi F_{\\mu \\nu }\\tilde{F}^{\\mu \\nu } = 2 \\phi \\partial _{\\mu }(\\epsilon ^{\\mu \\nu \\alpha \\beta } A_\\nu \\partial _\\rho A_\\sigma )=-2\\partial _\\mu \\phi (\\epsilon ^{\\mu \\nu \\alpha \\beta } A_\\nu \\partial _\\rho A_\\sigma )$ , which makes it explicitly shift-symmetric.", "protects the flatness of $V(\\phi )$ , in the sense that symmetry breaking means departure of the flatness of the potential.", "Next, we derive the equations of motion for $A_\\mu $ , ($d\\tau = a dt $ ) $\\square \\vec{A} = - \\vec{A}^{\\,^{\\prime \\prime }} + \\nabla ^2 \\vec{A} = - \\dfrac{\\phi ^{^{\\prime }}}{\\pi \\bar{f}}\\nabla \\times \\vec{A},$ for the gauge-fixing $\\nabla \\cdot \\vec{A} = 0$ and $ A_0 = 0$ .", "The right-hand side contains the axion-like particle term and is interpreted as a source term.", "With the Fourier decomposition $\\vec{A}(\\tau ,\\vec{x})= \\sum _{\\lambda =\\pm }\\int \\dfrac{d^3 k}{(2\\pi )^{3/2}}\\left( A_\\lambda (\\tau ,\\vec{k})\\vec{\\varepsilon _\\lambda }(\\vec{k})\\hat{a}_\\lambda (\\vec{k})e^{i\\vec{k}\\cdot \\vec{x}} + \\text{h.c.} \\right),$ polarization vectors $\\vec{\\varepsilon _\\lambda }(\\vec{k})\\cdot \\vec{\\varepsilon _\\lambda ^{^{\\prime }}}(\\vec{k})=\\delta _{\\lambda \\lambda ^{^{\\prime }}}, \\qquad \\vec{\\varepsilon _\\lambda }(\\vec{k})\\cdot \\vec{k}=0, \\qquad i\\vec{k}\\times \\varepsilon _\\lambda (\\vec{k})=\\lambda k \\varepsilon _\\lambda (\\vec{k}),$ and creation and annihilation operators that satisfy $[\\hat{a}_\\lambda (\\vec{k}),\\hat{a}^{\\dagger }_{\\lambda ^{^{\\prime }}}(\\vec{k})] = \\delta _{\\lambda \\lambda ^{^{\\prime }}}\\delta ^{(3)}(\\vec{k}-\\vec{k}^{^{\\prime }}) $ , the equations of motion are $\\left[ \\dfrac{\\partial ^2}{\\partial \\tau ^2} + k^2\\left( 1 \\mp \\dfrac{2\\xi }{k\\tau }\\right) \\right] A_{\\pm } (\\tau ,k)=0, \\qquad \\xi = \\dfrac{\\dot{\\phi }}{2\\pi \\bar{f} H}.", "$ Assuming Bunch-Davies vacuum at $\\tau \\rightarrow -\\infty $ , the solutions are $A_\\lambda ^k (\\tau ) = \\dfrac{e^{\\lambda \\pi \\xi /2}}{\\sqrt{2k}} W_{-i\\lambda \\xi ,1/2}(2ik\\tau ), $ where we have used a Whittaker W function.", "There is tachyonic instabilityIn other contexts, tachyons are associated with space-like propagations (propagation speed 
larger than the speed of light).", "We highlight that in these notes, this is not what we mean by tachyonic instability.", "Instead, we mean a real exponential solution for the differential equation (REF ), which leads to the amplification of the mode $A_+$ .", "for $A_+(\\dot{\\phi }>0)$ if $-k\\tau =k(aH)^{-1} < 2 \\xi $ .", "In this case, there is exponential gauge field production for $A_+(\\dot{\\phi }>0)$ when $ (aH)^{-1}\\sim k^{-1} $ , i.e., around horizon exit (assuming $\\xi \\sim \\mathcal {O}(1)$ ).", "Therefore, this production can overcome the exponential de-Sitter expansion, which dilutes matter/radiation.", "This way, this mechanism can provide a visible signal.", "The largest contribution is at the end of inflation since $\\xi \\sim \\dot{\\phi }$ grows, as $\\phi $ rolls down its potential, see Fig.", "REF .", "Figure: Exponential gauge field production for the A + (ξ>0)A_+(\\xi >0) solution at around horizon exit.", "The larger ξ\\xi , the larger the gauge field production.Next, we describe the gravitational wave signal from axion inflation.", "The field equations from (REF ) for the rank-2 tensor areSee details on [42].", "$h^{\\prime \\prime }_{ij}(\\textbf {x},\\tau )+\\dfrac{2a^{\\prime }}{a}h^{\\prime }_{ij}(\\textbf {x},\\tau ) - \\nabla ^2 h_{ij}(\\textbf {x},\\tau )=2\\Pi _{ij}{}^{ab} T_{ab},$ whose solution is $h_{ij}(\\vec{k},\\tau ) = 2 \\int d\\tau ^\\prime G_k(\\tau ,\\tau ^{^{\\prime }}) \\Pi _{ij}{}^{ab} T_{ab},$ where the Green functions satisfy $ [\\partial _\\tau ^2 + 2 a^{\\prime }/a \\partial _\\tau + k^2]G_k = 0 $ , $\\Pi _{ij}{}^{ab}$ is the traceless and transverse projection operator, and $T_{ab}$ is the energy-momentum tensor from the axion-like particle model.", "By using the solution $A_+$ from (REF ) in $T_{ab}$ , we can compute the contribution for the gravitational wave spectrum.", "Since $\\xi \\propto \\dot{\\phi }$ , and $\\phi $ is a pseudo-scalar field, we have parity violation.", "The chiral contribution is a consequence of the enhancement of only the $A_+$ mode above.", "Indeed, we have two helicities ($h_+$ and $h_\\times $ ) whose contributions to the gravitational wave spectrum are different.", "The larger contribution to the spectrum is $\\Omega _{GW}= \\left(\\dfrac{H}{\\pi M_p}\\right)^2\\left(1+10^{-7}\\dfrac{H^2}{M_p^2}\\dfrac{e^{4\\pi \\xi }}{\\xi ^6}\\right).$ The first term corresponds to the vacuum contribution (solution of the homogeneous equation (REF ), without source, computed in the previous sections) and the second term is sourced by $A_\\mu $ .", "This last contribution is chiral, and its signature can be probed by LISA, the third-generation Einstein telescope, and the Cosmic Explorer detectors by searching for anisotropies.", "For the LIGO-Virgo-KAGRA (LVK) collaboration, see [43].", "The gravitational wave spectrum can be seen in Fig.", "REF .", "There is a large enhancement for large values of $\\xi $ due to the exponential production, and back-reaction effects from the gauge bosons onto the axion field become important [44].", "The perturbations sourced by the gauge field can be probed by CMB observations, searches for primordial black holes, and gravitational wave experiments (PTAs, LISA, and LVK).", "In particular, in this model, the spectrum will be peaked towards higher frequencies, corresponding to modes exiting the horizon just before the end of inflation.", "More details on axion inflation can be found, for instance, in [45], [25].", "Figure: gravitational wave spectrum for the axion-inflaton model.", 
"The vacuum contribution accounts for inflationary gravitational waves without a source.", "Larger wavelengths exit the horizon earlier frozen and contribute less to the spectrum.", "But there is also a large enhancement of gauge boson production, due to ξ\\xi .", "Then, the precise balance requires taking into account the back-reaction of the gauge boson onto the axion field." ], [ "Conclusions", "In these lecture notes on gravitational waves from the early universe, we derived the main generic property of gravitational waves, introduced the stochastic gravitational waves, discussed ongoing and future detection efforts, and summarized some primordial sources of gravitational waves.", "We started with the basics in the first lecture, with the derivation of the Einstein equations for a linear perturbed metric $g_{{\\mu \\nu }}=\\eta _{{\\mu \\nu }}+h_{{\\mu \\nu }}$ .", "We evaluated the degrees of freedom of the linearized metric tensor and fixed all the nonphysical degrees of freedom.", "The solution of the Einstein equation for a gravitational wave in a vacuum is a sine wave.", "Hence, any test mass, while a gravitational wave is passing by, will follow a sinusoidal geodesic.", "This effect on test masses is the gist of gravitational wave measurement.", "In the second lecture, we derived the gravitational wave emitted by a source given by an energy-momentum tensor $T_{{\\mu \\nu }}$ and showed that gravitational waves carry energy that curves the background.", "An important aspect of this section is the distinction between gravitational radiation and the curved background.", "The power of gravitational radiation was given by Einstein's quadrupole formula in the last part of this lecture.", "In the third lecture, we introduced stochastic gravitational wave backgrounds (SGWBs).", "These backgrounds are defined as the superposition of gravitational waves with different wave numbers k (both in magnitude and direction), coming from all directions in the sky.", "We derived their main properties.", "Since these waves behave like noise, it is a challenge to detect them.", "Although we have never detected SGWBs, boosted by the detection of gravitational waves (transient signal) by the LIGO-Virgo collaboration, a new generation of experiments (ground-based, space-based, and pulsar time arrays) expect to detect stochastic signals in the near future.", "We briefly discussed experimental efforts in this direction.", "SGWBs can have astrophysical and/or cosmological origins.", "The astrophysical sources of SGWBs are supermassive black-hole binaries.", "A detection of astrophysical SGWBs would confirm again a prediction of General Relativity, now for a different range of masses, never probed before.", "In a binary merger, the larger the black hole masses, the lower the frequencies of the gravitational waves emitted.", "Pulsar time arrays collaborations expect to detect such signals.", "On top of this astrophysical background, there are also cosmological sources from the early universe.", "These primordial sources produced gravitational waves way before the emission of Cosmic Microwave Background (CMB) and these waves traveled freely through the hot plasma of the early universe.", "Since beyond Standard Model (BSM) physics relies on mechanisms in energy scales beyond the ones current accelerators can probe, the phenomenology of SGBWs from the early universe is an important step towards probing BSM physics.", "In this context, in the fourth lecture, we first discussed how we can use data from 
gravitational waves as complementary probes to CMB and BBN bounds to constrain beyond Standard Model physics and the earlier stages of the universe.", "Then, we discussed how first-order phase transitions and cosmic strings, and, in the fifth lecture, how inflation and axion inflation contribute to the gravitational wave spectrum.", "Data from gravitational waves, therefore, open a new window to the phenomenology of new physics and can give important insights into it.", "In the next decade, we expect the development of detection techniques and prospects for the detection of cosmological sources from different collaborations, such as LVK, LISA, PTAs, SKA, the Einstein telescope, and the Cosmic Explorer.", "Better stay tuned!", "We highlight that these notes at no point intend to substitute for the extensive literature on gravitational waves and the cosmological sources presented here.", "Instead, we believe our notes can be seen as an introduction or even as an executive summary of the field, where we highlighted some of the main cosmological sources and the main results available in the literature for each one of the sources." ], [ "Acknowledgements", "We thank all the organizers and students for the friendly and productive atmosphere in the 27th W.E.", "Heraeus Summer School \"Saalburg\" for Graduate Students on \"Foundations and New Methods in Theoretical Physics\".", "We thank Dr. Valerie Domcke for allowing us to use her lecture notes as the starting point of this work.", "R.R.L.d.S.", "thanks Tobias Schröder for feedback on the manuscript and Professor Kai Schmitz and his group at ITP, WWU Münster, for discussions and hospitality during the last stages of preparation of these notes." ], [ "Author contributions", "These lecture notes are based on lectures given by Valerie Domcke.", "The notes were written up by Rafael R. Lino dos Santos and Linda M. van Manen." ], [ "Funding information", "R.R.L.d.S.", "is supported by a research grant (29405) from VILLUM FONDEN.", "L.M.v.M. is supported by the Volkswagen Foundation." ] ]
2212.05594
[ [ "How and How Much is ${g_A}$ {\\it Fundamentally} Quenched in Nuclei?" ], [ "Abstract The superallowed Gamow-Teller transition in the doubly-magic-shell nucleus $^{100}$Sn and the high resolution spectral shape analysis in the fourth-forbidden nonunique transition in $^{115}$In indicate as much as $\\sim 40\\%$ {\\it fundamental} quenching in the axial-current coupling constant $g_A$ in nuclei.", "This can be attributed to an effect of the trace anomaly in QCD \"emerging\" in nuclear medium.", "If confirmed, this would signal a major revamping to do in nuclear interactions consistent with chiral-scale symmetry in nuclear medium and a big impact on $0\\nu$ and $\\nu\\nu$ double $\\beta$ decays for BSM.", "I present an argument that such a big anomaly-induced quenching is incompatible with how hidden scale symmetry manifests in nuclear medium, A possible means to resolve this issue is discussed in terms of hidden scale symmetry permeating in baryonic matter from normal nuclear matter to massive compact-star matter." ], [ "How and How Much is ${g_A}$ Fundamentally Quenched in Nuclei?", "Mannque Rho mannque.rho@ipht.fr Université Paris-Saclay, CNRS, CEA, Institut de Physique Théorique, 91191, Gif-sur-Yvette, France The superallowed Gamow-Teller transition in the doubly-magic-shell nucleus $^{100}$ Sn and the high resolution spectral shape analysis in the fourth-forbidden nonunique transition in $^{115}$ In indicate as much as $\\sim 40\\%$ fundamental quenching in the axial-current coupling constant $g_A$ in nuclei.", "This can be attributed to an effect of the trace anomaly in QCD “emerging\" in nuclear medium.", "If confirmed, this would signal a major revamping to do in nuclear interactions consistent with chiral-scale symmetry in nuclear medium and a big impact on $0\\nu $ and $\\nu \\nu $ double $\\beta $ decays for BSM.", "I present an argument that such a big anomaly-induced quenching is incompatible with how hidden scale symmetry manifests in nuclear medium, A possible means to resolve this issue is discussed in terms of hidden scale symmetry permeating in baryonic matter from normal nuclear matter to massive compact-star matter.", "Introduction.— In a recent Letter article [1], it was agued that the long-standing “puzzle\" of the quenching of the axial coupling constant $g_A$ from the free-space value of $g_A=1.276$ to $g_A^\\ast \\simeq 1$ observed in Gamow-Teller transitions in light nuclei [2] is resolved almost entirely by nuclear correlations.", "The quenching does not involve, if any, a significant “fundamental quenching\" of the axial vector constant $g_A$ , but it is the single-particle $\\sigma \\tau $ operator that is quenched, that is, purely nuclear correlation effect.", "There is absolutely nothing new in this result.", "This conclusion was arrived at by many authors as listed e.g.", "in the reviews [2].", "To quote one example out of many, in 1977 Les Houches Lectures, Wilkinson concluded, based on his analysis, that with the shell-model wave-functions for light nuclei $A\\le 21$ that include “full mixing,\" the “effective\" $g_{Ae}$ comes out to be [3] $g_{Ae}/g_A=1+(2\\pm 6)\\%.$ In other words, there was nothing “fundamental\" in $g_A^\\ast $ going down to 1 from $g_A=1.276$ listed in the Particle Physics Booklet.", "What was intriguingly mysterious was that $g_A^\\ast $ was tantalizingly close to 1, resembling something fundamental like $g_V=1$ in CVC.", "Indeed the modern high-power many-body calculations such as Quantum Monte Carlo calculations explain the 
$\beta $ decay rates for light nuclei $A < 10$ with the free-space $g_A$ , modulo some small corrections coming from two-body exchange currents [4].", "The situation in heavier nuclei was unclear [2].", "In this note, I will describe what the solution to the puzzle could be, what it implies about how hidden scale symmetry manifests in nuclear dynamics, and what fundamental information it could encode vis-à-vis the quenched $g_A$ .", "Resolving the puzzle.— What's new in the solution to the puzzle is that it involves a novel mechanism, so far totally unexplored, based on a symmetry hidden in QCD that “emerges\" in strong nuclear correlations.", "It is argued that $g_A^\ast =1$ applies not only to light nuclei as observed [2] but also to heavy nuclei as well as to dense matter: It is found to permeate from nuclear-matter density $n_0\simeq 0.16$ fm$^{-3}$ to high densities $n_{star}\sim 7 n_0$ relevant to compact-star physics, and beyond to what is referred to as the “dilaton-limit fixed point (DLFP)\" at $n_{\rm dlfp} \gtrsim 25 n_0$ .", "The argument used in arriving at the result in [1] relied on nuclear effective field theory (EFT), coined G$n$ EFT, formulated to primarily address compact-star physics.", "It is formulated, however, to be valid for nuclear matter at $\sim n_0$ for consistency with an accuracy comparable to that of the standard nuclear chiral EFT carried out to N$^p$ LO, $p\sim 3$ .", "Furthermore it is rendered applicable to compact-star physics [5] by implementing the putative “hadron-quark continuity (HQC)\" conjectured to hold in QCD by a topology change at $\sim 3 n_0$ .", "The effective Lagrangian does not involve phase transitions, hence no explicit quarks and gluons.", "To date, the results on compact-star properties fare surprisingly well, so far with no serious tension with the available observations [6].", "It is this G$n$ EFT that was applied to the $g^\ast _A$ problem in [1].", "What's exploited there as a theoretical tool is the “Fermi-liquid fixed-point (FLFP)\" approximation in Landau Fermi-liquid theory [7], which corresponds to doing a mean-field approximation in G$n$ EFT.", "(The renormalization-group approach formulated for strongly-correlated electron systems on the Fermi sea [7] was incorporated into the chiral effective Lagrangian for nuclear interactions [8].", "It is identified as Landau-Migdal theory [9] since the pion plays a key role in nuclear physics.)", "It turns out to be most closely mappable to a shell-model calculation of superallowed Gamow-Teller transitions in heavy doubly-magic-shell nuclei.", "The heaviest such doubly-magic-shell nucleus known is $^{100}$ Sn.", "As in [1], I will focus on the Gamow-Teller transition from the ground state of $^{100}$ Sn to the first excited $1^+$ state in $^{100}$ In.", "The key reasoning made in [1] was that the FLFP calculation, which is valid in the large $\bar{N}$ limit (where $\bar{N}=k_F/(\Lambda _{\rm FS} -k_F)$ , with $\Lambda _{\rm FS}$ standing for the cutoff on top of the Fermi surface, so that the FLFP corresponds to $1/\bar{N}\rightarrow 0$ ), can be identified with the “Extreme Single Particle Model (ESPM)\" calculation of the superallowed Gamow-Teller beta decay of $^{100}$ Sn.", "As far as I am aware, this equivalence can be justified neither in light nuclei nor in non-doubly-magic-shell heavy nuclei.", "What's most noteworthy about the recent RIKEN measurement on this transition [10], claimed to be improved from all previous measurements, is that it deviates from what is
expected if it agrees with the FLFP in Fermi-liquid theory $g_A^\ast =g_A^{\rm L} \approx 1.$ To match with the FLFP theory with $1/\bar{N}\rightarrow 0$ , the Gamow-Teller strength ${\cal B}_{GT}$ should be ${\cal B}^{\rm Landau}_{\rm GT}\approx 11.$ The RIKEN experiment came out to be much lower, $ {\cal B}_{\rm GT}^{\rm RIKEN}=4.4^{+0.9}_{-0.7}$ .", "Taking the central value, one sees that there is roughly 40% quenching of $g_A$ compared with the FLFP value (REF ).", "I will interpret this quenching as a possible “fundamental renormalization\" of $g_A$ due to the trace anomaly of QCD.", "This will be referred to as anomaly-induced quenching (acronymed $AIQ$ ) of $g_A$ expressed by $q_{\rm ssb} < 1$ (with $q_{\rm ssb}$ defined below in (REF )).", "So far there has been no information on $\beta ^\prime $ for $N_f \lesssim 8$ in QCD, so this would be the first signal for it in hadronic physics coming from the nuclear physics side.", "In contrast to this RIKEN result, the data from the older GSI measurement [11], in which the daughter state is identified with a single state, leading instead to ${\cal B}_{\rm GT}^{\rm GSI}\approx 10$ , showed no indication of deviating from (REF ).", "This discrepancy, if real, could present a serious issue with the $AIQ$ .", "The RIKEN result in the $^{100}$ Sn decay, if further confirmed, will raise a serious problem in nuclear physics and, more importantly, in both $2\nu $ and $0\nu $ $\beta \beta $ decay processes.", "Furthermore, the most recent developments in experiments and theoretical analyses of the spectrum shape of multiply forbidden nonunique $\beta $ decays – which involve axial operators different from the superallowed GT – in heavy nuclei indicate an equally important deviation of $q_{\rm ssb}$ from 1.", "In comparing the theoretical spectral shape to the experimental spectrum in what is referred to as the “spectral-shape method (SSM),\" a quenching of $g_A$ was noted to be $q\sim (0.6 - 0.7)$ [12], [13], [14].", "This amount of quenching can be translated, as explained below, to $q_{\rm ssb}\approx (0.6 - 0.7)$ , comparable to that observed in RIKEN's $^{100}$ Sn GT experiment.", "In what follows, I will argue that this quenching factor $q_{\rm ssb}\approx 0.6-0.7$ is at odds, in some cases seriously, with what has been established in other nuclear weak processes.", "Fundamental renormalization by scale symmetry breaking.— What governs the $AIQ$ is how the scale symmetry breaking that gives the dilaton its mass away from the infrared (IR) fixed point emerges in nuclear dynamics.", "Instead of the dilaton scalar $\sigma $ that transforms nonlinearly as the pion $\pi $ does in the chiral field $U=e^{i\pi /f_\pi }$ , it is convenient to use the conformal dilaton compensator field $\chi =f_\chi e^{\sigma /f_\chi }$ , which transforms linearly under scale transformations.", "As described in [5], the argument relies on the “genuine dilaton (GD)\" approach to scale symmetry proposed by Crewther et al [15], [16].", "(The “genuine dilaton (GD)\" is characterized by the existence of an infrared (IR) fixed point at which both scale symmetry and chiral symmetry are spontaneously broken (“hidden\"), with the pions and the dilaton excited as Nambu-Goldstone bosons, but the matter fields, baryons and vector mesons, remain massive.)"
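The numbers quoted above can be connected by a one-line exercise: since the Gamow-Teller strength scales as the square of the effective axial coupling (up to the convention for factoring out $g_A$), each measured ${\cal B}_{\rm GT}$ translates into a quenching factor $q=\sqrt{{\cal B}_{\rm GT}/{\cal B}_{\rm GT}^{\rm ESPM}}$. A minimal Python sketch, assuming the extreme-single-particle value ${\cal B}_{\rm GT}^{\rm ESPM}=160/9\approx 17.8$ for $^{100}$Sn (a number not quoted in the text above):

import math

B_espm = 160.0 / 9.0     # extreme-single-particle GT strength for 100Sn (assumed value, ~17.8)
B_landau = 11.0          # Fermi-liquid fixed-point prediction quoted above
B_riken = 4.4            # RIKEN central value quoted above
B_gsi = 10.0             # GSI value quoted above

q = lambda B: math.sqrt(B / B_espm)
q_landau, q_riken, q_gsi = q(B_landau), q(B_riken), q(B_gsi)
print(f"q_Landau ~ {q_landau:.2f}, q_RIKEN ~ {q_riken:.2f}, q_GSI ~ {q_gsi:.2f}")
# ~0.79, ~0.50 and ~0.75: GSI is compatible with nuclear correlations alone (q ~ 0.8),
# while the RIKEN value requires an extra factor
print(f"q_ssb = q_RIKEN / q_Landau ~ {q_riken / q_landau:.2f}")   # ~0.63, i.e. the ~40% 'fundamental' quenching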
At the classical level, the axial current coupling to the nucleon is scale-invariant.", "However there is an anomalous dimension contribution from the trace anomaly of QCD that enters nonperturbatively at the leading-order (LO) chiral-scale (CS) perturbation expansion [16].", "The axial current with the anomaly taken into account is of the form $J^{a\\mu }_{ 5}=g_A q_{\\rm ssb} \\bar{\\psi }\\gamma ^\\mu \\gamma _5 \\frac{\\tau ^a}{2}\\psi $ where $q_{\\rm ssb}= c_A+(1-c_A)(\\frac{\\chi }{f_\\chi })^{\\beta ^\\prime },$ $c_A$ is a constant that cannot be calculated perturbatively and $\\beta ^\\prime $ is the derivative of the $\\beta (\\alpha _s)$ at the IR fixed point $\\beta ^\\prime |_{\\alpha _s=\\alpha _{\\rm IR}} >0.$ In the matter-free space, the vacuum expectation value (VeV) is $\\langle \\chi \\rangle =f_\\chi $ , so if one ignores the fluctuating dilaton field that one is justified to do for the axial current where the dilaton does not figure at the leading order, one can set $q_{\\rm ssb}=1$ and the current is scale symmetric: There is no $\\beta ^\\prime $ effect.", "However in nuclear matter, $\\langle \\chi \\rangle ^\\ast =f^\\ast _\\chi \\ne f_\\chi $ brings in scale symmetry breaking, both explicit and spontaneous, hence bringing in the $\\beta ^\\prime $ dependence.", "Applied to nuclear matter, one then has the density-dependent factor $AIQ$ $q^\\ast _{\\rm ssb} = c_A +(1-c_A)(\\Phi ^\\ast )^{\\beta ^\\prime }$ where $\\Phi ^\\ast = f_\\chi ^\\ast /f_\\chi \\simeq f_\\pi ^\\ast /f_\\pi $ with $\\ast $ standing for density dependence.", "It is reasonable to expect that $c_A$ would not strongly depend on density, so unless $c_A$ differs substantially from 1, one would expect that $\\delta =1-q^\\ast _{\\rm ssb}$ would be $\\ll 1$ .", "I will now explain how this is not what seems to be the case.", "In fact it seems the new (RIKEN) result of $^{100}$ Sn as well as the spectral shape results indicate that the $AIQ$ is $q^{^{100}\\rm Sn}_{\\rm ssb}\\sim 0.6.$ This would imply that in nuclear medium, the “fundamental\" axial coupling constant is not $g_A =1.276$ as measured in the neutron $\\beta $ decay but less than 1.", "Hidden scale symmetry, nuclear correlations and fundamental quenching.— As mentioned, what's involved in the SSM in heavy nuclei is drastically different from what's involved in the superallowed Gamow-Teller (GT) transition in the doubly magic nucleus $^{100}$ Sn.", "The difference is important.", "To clarify what this means, it is worth recalling the strategy of mapping the FLFP approximation in Fermi-liquid theory to the ESPM calculation in shell model of the superallowed GT transition.", "The RIKEN quenching factor – which comes out to be $q_{\\rm RIKEN}\\approx 0.5$ – consists of two factors, the full nuclear correlation factor $q_{\\rm SNC}^{\\rm Landau}\\simeq 0.8$ derived in [1] and the $AIQ$ factor $q_{\\rm ssb}\\approx 0.6$ whereas the result of Hinke et al.", "[11] is $q_{\\rm GSI}\\approx 0.8$ implying $q_{\\rm ssb}\\approx 1$ .", "The difference between the two may lie in how to experimentally pinpoint the daughter state in the ESPM.", "The argument based on Fermi-liquid theory in equating it to the EPSM is that the daughter state is entirely or at least dominantly populated by one shell-model configuration.", "Now the problem is how to reliably identify the final-state configuration in the measurement.", "While this seemed feasible in [11] which claimed that the daughter state can be identified by the single $1^+$ state in $^{100}$ In 
predicted to be populated by more than 95% in an excitation energy of $\\sim 3$ MeV – which accounted for ${\\cal B}_{\\rm GT}^{\\rm GSI}\\approx 10$ , it is not clear whether this condition was met in [10].", "As for the spectrum-shape analyses [12], [13], [14], the axial-current operator is not directly governed by the hidden scale symmetry that plays the key role in the superallowed $^{100}$ Sn transitions.", "In fact in accessing the $2\\nu $ and $0\\nu $ double beta decays the SSM is aimed to address, the transition operator that enters in the spectral shape is, in contrast to the $\\sim 0$ momentum transfer in the allowed GT transitions, highly forbidden, involving up to $\\sim 100$ MeV momentum transfers.", "Note also that multifold-forbidden transitions in nuclei differ drastically from the unique first forbidden transition to which I will return below.", "The systematic power counting on which the current successful standard nuclear chiral field theory is based works well in nuclei because the nuclear interactions largely involve energy-momentum scale of soft pions [17].", "Applied to the nuclear electroweak current [18], this implies that the time component of the axial current $J^a_{5\\mu }$ could receive an enhanced pion-exchange contribution [19], but the space component which controls the GT transitions has a highly power-suppressed multi-body currents [19], [20].", "Unless the single-particle $\\sigma \\tau $ operator happens to be suppressed by kinematics or symmetry or others accidentally, the correction from the exchange currents should be strongly suppressed relative to the leading order (LO) “impulse\" term by N$^{z}$ LO for $z\\ge 3$ .", "By the tenet of nuclear EFT, a suppression of this order in the power counting should be taken with extreme caution.", "Either a different counting scheme is adopted or such suppressed terms should be dropped.", "If it turns out that a partial or incomplete sum of such corrections for $z=3$ is found to be “important,\" then one cannot ignore terms with $z\\ge 4$ .", ".", "Now the axial-current operator figuring in the SSM analyses [12], [13], [14] is of the “impulse approximation\" without multi-body current operators, with, however, configuration mixing (higher nuclear correlations) taken into account in various approximate ways.", "Given the multifold forbidden terms involved, it may very well be that the standard power counting in chiral EFT anchored on soft-ness kinematics makes no sense.", "So the corrections of the type applied to allowed and first-forbidden transitions could not even be formulated.", "Furthermore there is nothing like the “Landau quenching factor\" $q^{\\rm Landau}_{\\rm snc}$ that subsumes the strongly correlated nuclear effect controlled by hidden scale symmetry [1].", "It seems that what's done in [12], [13], [14] is likely the best one can do at present.", "It seems fair to assume that the theoretical treatment of SSM made in [12], [13], [14] captures most of the nuclear physics involved.", "Therefore the effective $g_A$ obtained, $g_A^\\ast $ , contains the $AIQ$ .", "This means that modulo the caveat that there is a tension with the decay rate as mentioned in the article, the most recent high resolution spectral measurement [14] (of the four-fold forbidden decay of $^{115}$ In) could be yielding the quenching factor $q_{\\rm ssb}^{{^{115}{\\rm In}}} \\approx 0.65-0.75$ leading to the quenched $g^\\ast _A\\approx 0.83 - 0.96$ .", "This is consistent with the $AIQ$ in the superallowed $^{100}$ Sn GT 
transition (REF ).", "Is this $AIQ$ reasonable?.— The $\\sim (30- 40)\\%$ $AIQ$ (REF ) and (REF ) seems much too big.", "Can this be compatible with what's going on in nuclear physics?", "As it stands, it immediately raises questions in various nuclear processes.", "First of all, given that as defined (REF ), the $AIQ$ is a fundamental quantity at the level of a nuclear effective field theory, it should not depend appreciably on density except perhaps near phase changes.", "As noted above, there is no indication for it in the Monte Carlo calculation for $A < 10$ and in the shell model for light nuclei $A < 21$ .", "And there is no indication in nuclear processes where the constant $g_A$ could figure importantly.", "For instance through low-energy theorems such as the Goldberger-Treiman relation which should hold in nuclear medium, it figures, albeit indirectly, without involving the EW currents and should affect nuclear processes where excitations of pionic quantum numbers – such as the nuclear tensor forces – are involved.", "Furthermore it enters in the calculation – in nuclear EFT – of the EoS of finite and infinite nuclei.", "So far no such effects have been observed.", "One of the examples directly linked to the axial channel, in particular for the forbidden $\\beta $ decays, is the first forbidden $0^\\pm \\leftrightarrow 0^\\mp \\ \\Delta T=1$ transition.", "Unlike the many-fold forbidden transitions in the SSM, the operator for the first-forbidden transition is fairly well-defined and the transition is well measured in heavy nuclei.", "The first forbidden transition involves the time component of the axial current, $J^{a0}_{5}$ , which for small momentum carried by the current gives a single-particle operator $\\propto g^\\ast _A {\\sigma }\\cdot {{p}}/m_N$ where ${p}$ is the nucleon momentum.", "It receives a strongly enhanced two-body exchange current operator, accurately calculated since it is dominated by a soft-pion exchange controlled by chiral symmetry.", "The result obtained in experiments is usually expressed in terms of $\\epsilon _{\\rm MEC}$ defined as the ratio of the matrix element of the axial-charge operator obtained from the data over $M_1$ , the theoretical matrix element of the one-body axial-charge operator evaluated with the unquenched $g_A$ .", "The theoretical expression for $\\epsilon _{\\rm MEC}$ with the soft-pion dominated two-body exchange-current term included but without $AIQ$ was first worked out in [21].", "It is simple to incorporate the $AIQ$ factor in the result obtained in [21].", "It takes the form (in G$n$ EFT) $\\epsilon ^{q_{\\rm ssb}}_{\\rm MEC}=\\frac{q_{\\rm ssb}}{\\Phi ^\\ast } \\big (1+\\frac{R}{\\Phi ^\\ast } \\big )$ where $R=M_2/q_{\\rm ssb}M_1$ with $M_i$ with $i=1, 2$ standing, respectively, for the matrix element of the axial charge 1-body and 2-body operators without the $AIQ$ factor.", "$\\epsilon ^{q_{\\rm ssb}=1}_{\\rm MEC}$ (REF ) was computed a long time ago for the Pb nucleus [21].", "Taking $\\Phi ^\\ast (n_0)\\simeq 0.8$ extrapolated from measurements in deeply bound pionic atoms [22], the prediction was $\\epsilon _{\\rm MEC}^{q_{\\rm ssb}=1}\\approx 2.0.$ With the quenching $q_{\\rm ssb}\\approx 0.6$ , it gives the ratio $\\epsilon _{\\rm MEC}^{q_{\\rm ssb}=0.6}\\approx 1.5.$ The experiment by Warburton in the Pb nuclei [23] gave $\\epsilon _{\\rm MEC}^{exp}=2.01\\pm 0.05.$ The theoretical result (REF ) has an uncertainty of about 10% which covers the range of values in $\\Phi $ extrapolated to nuclear matter density 
from the measured quantity in finite nuclei.", "The pion-exchange operator is dictated by chiral symmetry, so it can be taken to be very accurate.", "The wavefunctions used may, however, be subject to improvement.", "Nonetheless, the close agreement clearly favors the unquenched $g_A$, (REF ).", "One may wonder how the $AIQ$ depends on density, that is, whether the $AIQ$, negligible in light nuclei (low density), increases at higher density where the significant $AIQ$ seems to manifest itself.", "In fact there has been some discussion of the possible effect of $\beta ^\prime $ in the range $1\lesssim \beta ^\prime \lesssim 3$ on baryonic matter at high density involving chiral restoration [24].", "For QCD with flavor $N_f=3$, $\beta ^\prime $ is unknown, therefore the $AIQ$ factor cannot even be roughly estimated.", "To have an idea of what $\beta ^\prime $ can do, take $\beta ^\prime < 2$ as considered for dense matter, which may not be applicable to the present problem.", "One finds that for this value of $\beta ^\prime $, the $AIQ$ both for the $^{100}$Sn decay and for $^{115}$In could be in clear tension with Eq. (REF ).", "What happens at high density is an open issue.", "Effect of many-body currents in quenching.— In a recent Nature article [25], an argument was given that, by means of changing the “resolution scale\" (or the cut-off scale of the EFT), most of the quenching can be shifted from one-body to two-body exchange-current (2BC) effects, leading to $q=0.75$ – comparable to the Landau-Migdal Fermi-liquid fixed point value $q^L_{snc}=0.79$ [1] – in the $^{100}$Sn GT matrix element.", "Consequently no fundamental quenching was needed there.", "There is a problem here, however, concerning the 2BC.", "Unlike the axial-charge current for the first-forbidden transition mentioned above, the leading 2BC in GT transitions is highly suppressed in the chiral counting, appearing first at N$^{\ge 3}$LO relative to the one-body $\sigma \tau $ operator [20].", "This follows trivially from soft-pion theorems applied to the one-pion exchange graph.", "This is the other side of the coin – no “chiral filtering\" protection – to the strongly enhanced soft-pion term in the $0^\pm \leftrightarrow 0^\mp $ transition, in what was coined the “chiral filtering mechanism (CFM).\"", "The tenet of nuclear EFT states that strongly suppressed N$^n$LO terms cannot be trusted – as is the case in the Gamow-Teller transitions – unless the lower-order, say N$^{n-1}$LO (computed), term is accidentally small.", "If, for instance, the N$^3$LO term is non-negligible, it makes sense only if N$^{\ge 4}$LO terms are duly included (an example of this is given in [4] for the $^8$Li, $^8$B and $^8$He beta decays; it is unfortunately not possible to check this matter at present since there are too many unknown constants to fix at higher order), particularly in connection with the leading superallowed transition.", "In [25] the resolution scale is “tweaked\" such that the 2BC, appearing at high chiral order, even starts to “dominate.\"", "An idea belonging to the same class of reasoning was put forward a long time ago, when the entire quenching due to nuclear correlations was moved to $\Delta $-hole states by changing the resolution scale such that the $\Delta $-hole states in the relevant configuration space give the dominant contribution.", "Indeed, the sum of $\Delta $-hole bubbles gave the $\Delta $-hole quenching $q_{\Delta }\approx 0.76$ [26], just about what is obtained in [25] with the two-body currents.", "The caveat there was that the
Landau-Migdal $g_0^\\prime $ interaction parameter with universality assumption was unfounded, so the idea was dropped.", "Conclusion.— The effective $g_A^\\ast $ for a quasiparticle undergoing superallowed Gamow-Teller transition on top of the Fermi surface calculated in mean field with a simple nuclear chiral Lagrangian gave $g_A^\\ast =g_A^L\\simeq 1$  [8].", "This was given a justification by the Fermi-liquid fixed point (FLFP) approach in a more sophisticated EFT, G$n$ EFT, with heavy degrees of freedom encoded in hidden local symmetry and hidden scale symmetry [1].", "What enters in $g_A^L$ is the product $\\Phi ^\\ast \\tilde{F}_1^\\pi $ – where $F_1^\\pi $ is the pionic contribution to the Landau-Migdal interaction, both being Landau-Migdal fixed-point quantities.", "The product is remarkably insensitive to nuclear density in the vicinity of equilibrium nuclear matter, so $g_A^L\\simeq 1$ is suggested to apply to (heavy) finite nuclei as well as infinite nuclear matter.", "It is intriguing that $g_A^\\ast \\rightarrow 1$ even at the dilaton-limit fixed-point (DLFP) as discussed in [5] for compact stars.", "In [1], it was suggested that $g_A^L$ could be fairly reliably equated – although a rigorous proof is lacking at present – to the $g_A^\\ast $ in the superallowed GT transition in $^{100}$ Sn computed in the “Extreme Single Particle (shell) Model.\"", "It is this reasoning that led to the prediction of the Gamow-Teller strength for the $^{100}$ Sn transition ${\\cal B}_{GT}$ to be equal to (REF ).", "While the GSI result was in agreement with this prediction with the daughter state identified with a single ESPM configuration, the RIKEN result was clearly not.", "Supported by the SSM analysis of the spectral shape in strongly forbidden transitions, the RIKEN result indicated a $(30-40)\\%$ anomaly-induced quenching of $g_A$ .", "While such a quenching raises already a serious issue in nuclear dynamics (as in the unique first-forbidden transition), it will surely be a lot more important, involving the fourth power of $g_A$ , for the $0\\nu $ beta decays relevant to going BSM.", "How does one go about resolving this issue?", "It seems that given the possible theoretical simplicity of the superallowed transition in the doubly-magic-shell nuclei compared with the theoretically less well-controlled many-fold forbidden nonunique axial operators, revisiting the $^{100}$ Sn decay – and other doubly-magic shell nuclei if available – is clearly in order.", "The close mapping between the Fermi-liquid fixed-point approach anchored on renormalization group flow and the ESPM structure in doubly-magic shell heavy nuclei offers the possibility to identify experimentally, and to sharpen theoretically, what the ESPM final state in $^{100}$ In to which the transition takes place is.", "This poses a challenge to nuclear physics, both experimental and theoretical." ] ]
2212.05558
[ [ "Hadronic contributions to the muon $g-2$ in holographic QCD" ], [ "Abstract We discuss the recent progress made in using bottom-up holographic QCD models in calculating hadronic contributions to the anomalous magnetic moment of the muon, in particular the hadronic light-by-light scattering contribution, where holographic QCD naturally satisfies the Melnikov-Vainshtein constraint by an infinite series of axial vector meson contributions." ], [ "Introduction", "The anomaly of the magnetic moment of the muon $a_=(g-2)_/2$ [1] is currently known with an experimental uncertainty of only $41\\times 10^{-11}$ when the E821/BNL measurement from 2006 [2] is combined with the concordant result obtained in 2021 by the Muon $g-2$ Collaboration at Fermilab [3], corroborating a long-standing discrepancy with the Standard Model prediction, which according to the 2020 White Paper (WP) of the Muon $g-2$ Theory Initiative [4] has a similar estimated error of $43\\times 10^{-11}$ but a deviation of 4.2 $\\sigma $ in its value.", "The theoretical error is completely dominated by hadronic contributions, which arise from hadronic vacuum polarization (HVP) and hadronic light-by-light (HLBL) scattering, $a_^{\\text{HVP,WP}}=(6845 \\pm 40)\\times 10^{-11} \\quad \\text{(0.6\\% error)},\\nonumber \\\\a_^{\\text{HLBL,WP}}=(92 \\pm 19)\\times 10^{-11} \\quad \\text{(20\\% error}),$ in (effective) one-loop and two-loop contributions in the muon-photon vertex diagram.", "The WP result for the HVP contribution is obtained by a data-driven, so-called dispersive approach, which however has been challenged by a recent lattice QCD result with comparable errors but a central result that is about 3% larger [5], almost eliminating the discrepancy with the experimental result for $a_$ , but instead producing a $\\sim 3\\sigma $ deviation from results based on the $R$ -ratio in $e^+e^-\\rightarrow \\text{hadrons}$ .", "The WP estimate of the HLBL contribution is only partially determined by data and involves input from hadronic models with larger uncertainties, albeit direct complete lattice evaluations are now gradually bringing down their errors [6], [7].", "Holographic QCD (hQCD) is a conjectural approximation to strongly coupled QCD (in its large-$N_c$ limit) based on the more well-established AdS/CFT correspondence [8], [9] and experience with top-down string-theoretical constructions of gauge/gravity dual theories [10], [11], [12].", "Using a minimal set of parameters, holographic QCD has proved to be capable of good qualitative and often quantitative predictions in hadron physics [13], [14], [15] with typical errors of (sometimes less than) 10 to 30%.", "This is clearly too crude to be of help with the currently most pressing issue of the theory result for $a_^\\mathrm {HVP}$ , where the discrepancy between lattice and data-driven approaches is a few percent and where eventually sub-percent accuracy is needed.", "However, in the case of HLBL contributions, the error is dominated by two contributions, the estimate of the effect of short-distance constraints (SDC) and the contribution of axial-vector mesons, where the WP values are $a_^{\\text{HLBL,SDC}}=(15 \\pm 10)\\times 10^{-11} \\quad \\text{(67\\% error)},\\nonumber \\\\a_^{\\text{HLBL,axials}}=(6 \\pm 6)\\times 10^{-11} \\qquad \\text{(100\\% error}).$ It is here that hQCD can provide interesting results, whereas the rather precisely known HVP contribution can be used to assess potential deficiencies of the various hQCD models." 
], [ "Top-down and bottom-up hQCD models", "Exact holographic duals are known only in certain limits for highly symmetric theories such as the superconformal $\\mathcal {N}=4$ Yang-Mills theory in the limit of large-$N_c$ and infinite 't Hooft coupling $g^2N_c$ .", "But already in 1998, Witten [10] succeeded in constructing a gauge/gravity dual to the low-energy limit of large-$N_c$ Yang-Mills theory, “top-down” from type-IIA superstring theory, where supersymmetry and conformal symmetry are broken by compactification on one extra dimension beyond the holographic dimension dual to inverse energy.", "In 2004, Sakai and Sugimoto found a D-brane construction within Witten's model that adds chiral quarks in the fundamental representation [11], [12], thus providing the so far closest dual theory to low-energy QCD in the large-$N_c$ and chiral limit.", "With having only a mass scale and one dimensionless number (the 't Hooft coupling at the scale of the Kaluza-Klein compactification) the (Witten-)Sakai-Sugimoto model turns out to be remarkably successful both qualitatively and also quantitatively.", "Chiral symmetry breaking is a direct consequence of its geometry, while flavor anomalies are naturally realized.", "However, above the Kaluza-Klein scale $M_\\mathrm {KK}\\approx 1$ GeV this model exhibits a different ultra-violet (UV) behavior than QCD, since its actual dual at high energy scales is a 5-dimensional superconformal theory instead of 4-dimensional QCD with asymptotic freedom.", "Skipping a string-theoretic top-down derivation, phenomenologically interesting “bottom-up” models of hadron physics were obtained in [13], [14], [15] that combine certain features also present in the Sakai-Sugimoto model with a simpler geometry that is asymptotically AdS$_5$ (and therefore conformal in the UV), breaking conformal symmetry in the infrared (IR) by a hard-wall (HW) cutoff with appropriate boundary conditions or by a soft wall (SW) provided by a nontrivial dilaton [16], [17], [18], [19], [20], [21], [22], [23], [24], which has similarities with light-front holographic QCD [25].", "A common feature of the various bottom-up and also the top-down Sakai-Sugimoto model is that vector and axial vector mesons as well as pions are described by 5-dimensional Yang-Mills fields $\\mathcal {B}^{L,R}_{M}=\\mathcal {B}^{V}_{M}\\mp \\mathcal {B}^{A}_{M}$ for the global $U(N_f)_L\\times U(N_f)_R$ chiral symmetry of boundary theory with 5-dimensional action $S_{\\rm YM}\\propto \\frac{1}{g_5^2}\\;\\text{tr}\\int d^4x \\int _0^{z_0} dz\\,e^{-\\Phi (z)}\\sqrt{-g}\\, g^{PR}g^{QS}\\nonumber \\\\\\times \\left(\\mathcal {F}^{(L)}_{PQ}\\mathcal {F}^{(L)}_{RS}+\\mathcal {F}^{(R)}_{PQ}\\mathcal {F}^{(R)}_{RS}\\right),$ where $P,Q,R,S=0,\\dots ,3,z$ and $\\mathcal {F}_{MN}=\\partial _M \\mathcal {B}_N-\\partial _N \\mathcal {B}_M-i[\\mathcal {B}_M,\\mathcal {B}_N]$ with conformal boundary at $z=0$ , and either a sharp cut-off of AdS$_5$ at $z_0$ (the location of the hard wall) or $z_0=\\infty $ (SW) with nontrivial dilaton.", "Chiral symmetry is broken either by introducing an extra bifundamental scalar field $X$ dual to quark bilinears $\\bar{q}_L q_R$ as in the original HW model of Erlich et al.", "[13], [14] (HW1), or through different boundary conditions for vector and axial vector fields at $z_0$ as in the Hirn-Sanz model [15] as well as in the top-down Sakai-Sugimoto model [11], [12].", "Vector meson dominance (VMD), in a form necessarily involving an infinite tower of vector mesons, is naturally built in.", 
"Photons are described by boundary values of the appropriate combination of the vector gauge fields as those are sourced by quark currents, and they couple through bulk-to-boundary propagators to mesonic degrees of freedom described by normalizable modes.", "Of particular importance for the following is that flavor anomalies follow uniquely from 5-dimensional Chern-Simons terms $S_{\\rm CS}^L-S_{\\rm CS}^R$ with $S_{\\rm CS}=\\frac{N_c}{24\\pi ^2}\\int \\text{tr}\\left(\\mathcal {B}\\mathcal {F}^2-\\frac{i}{2} \\mathcal {B}^3\\mathcal {F}-\\frac{1}{10}\\mathcal {B}^5\\right)$ in differential-form notation." ], [ "Holographic pion TFF", "The leading HLBL contribution to $a_$ comes from the $\\pi ^0$ exchange diagram shown in fig.", "REF due to the anomalous coupling of the pion to two photons.", "This involves both a singly-virtual transition form factor (TTF) in the upper vertex and a doubly-virtual one in the interior of the diagram.", "The holographic prediction is given by $F_{\\pi ^0\\gamma ^*\\gamma ^*}(Q_1^2,Q_2^2)=-\\frac{N_c}{12\\pi ^2 f_\\pi }\\int _{0}^{z_{0}}dz\\,\\mathcal {J}(Q_1,z)\\mathcal {J}(Q_2,z)\\Psi (z),$ where $\\mathcal {J}(Q,z)$ is the bulk-to-boundary propagator of a photon with virtuality $Q^2=-q^2$ and a holographic pion profile function $\\Psi (s)$ .", "This has been studied by Grigoryan and Radyushkin in [26], [27], [28], who noticed that hQCD models with asymptotic AdS$_5$ geometry reproduce the asymptotic momentum dependence obtained by Brodsky and Lepage [29], [30], [31], $&&F_{\\pi ^0\\gamma ^*\\gamma ^*}(Q_1^2,Q_2^2)\\nonumber \\\\&&\\rightarrow \\frac{2 f_\\pi }{Q^2}\\left[ \\frac{1}{w^2}-\\frac{1-w^2}{2w^3}\\ln \\frac{1+w}{1-w} \\right]$ with $Q^2=\\frac{1}{2}(Q_1^2+Q_2^2)\\rightarrow \\infty $ , $w=(Q_1^2-Q_2^2)/(Q_1^2+Q_2^2)$ , which is not achieved by conventional VMD models.", "Figure: Hadronic light-by-light scattering contribution to aa_ from single meson exchangeWith certain simplifications the holographic results for the pion TFF have been employed in ref.", "[32], [33] to evaluate the $a_$ contribution given by the diagram in fig.", "REF .", "In ref.", "[34] we have carried out a complete evaluation and found good agreement with the data-driven (dispersive) result, which is bracketed by the HW1 and HW2 results when these models are fitted to $f_\\pi $ and $m_\\rho $ .", "With such a fit, the HW2 model, which has one parameter less than the chiral HW1 model, actually undershoots the asymptotic limit of (REF ) by 38% (fitting the asymptotic limit would instead lead to an overweight $\\rho $ meson with mass of 987 MeV).", "Its prediction of $a_^{\\pi ^0}=56.9\\times 10^{-11}$ is correspondingly on the low side.", "The HW1 model, which satisfies (REF ) exactly, predicts $a_^{\\pi ^0}=65.2\\times 10^{-11}$ , which is consistent with, but somewhat larger than, the WP value $62.6^{+3.0}_{-2.5}\\times 10^{-11}$ .", "However, at not too large, phenomenologically relevant energy scales, the asymptotic behavior of the TFF is modified by gluonic corrections of the order of 10% [35], [36], while hQCD models with a simple AdS$_5$ background approach the asymptotic limit somewhat too quickly.", "Similar corrections but with opposite sign appear in the vector correlator appearing in HVP.", "Indeed, the HW1 model with complete UV fit underestimates HVP.", "Reducing $g_5^2$ by 10% (which is achieved by fitting instead the decay constant of the $\\rho $ meson) brings the HVP in line with the dispersive result (to within 5%) [37] and also makes the HLBL result 
$a_^{\\pi ^0}$ completely coincident with the central WP value [38].", "In fig.", "REF the HW1 result (with finite quark masses) for the singly virtual pion TFF is compared with experimental data, where the result with reduced $g_5^2$ is seen to agree somewhat better at the relevant energy scale $Q^2\\lesssim 10\\,\\mathrm {GeV}^2$ ; in fig.", "REF the doubly virtual result for $Q_1^2=Q_2^2$ is compared with the dispersive result of ref.", "[39] and the lattice result of Ref. [40].", "Here the HW1 result with reduced coupling coincides with the central result of the dispersive approach within line thickness and it also happens to be in the center of the lattice result.", "Figure: Holographic results for the single virtual TFF Q 2 F(Q 2 ,0)Q^2 F(Q^2,0) for π 0 \\pi ^0, plotted on top of experimental data as compiled in Fig.", "53 of Ref.", "for g 5 =2πg_5=2\\pi (OPE fit, blue) and the reduced value (red)corresponding to a fit of F ρ F_\\rho .", "(Taken from )Figure: Holographic results for the doubly virtual F π 0 γ * γ * F_{\\pi ^0\\gamma ^*\\gamma ^*} compared to the dispersive result of Ref.", "(green band) and the lattice result of Ref.", "(yellow band); the OPE limit is indicated by the dashed horizontal line.", "The blue line corresponds to g 5 =2πg_5=2\\pi (OPE fit) and the red one to the reduced value g 5 g_5 (F ρ F_\\rho -fit).", "(Taken from )" ], [ "Axial vector meson TFF", "In hQCD, the coupling of axial vector mesons to photons is determined by the same action (REF ) that gives rise to the anomalous TFFs of the pseudoscalars.", "There is an infinite tower of axial vector mesons and their decay amplitude has the form [42], [43] $&&\\mathcal {M}_{\\mathcal {A}\\gamma ^*\\gamma ^*}=i\\frac{N_c}{4\\pi ^2}\\mathrm {tr}(\\mathcal {Q}^2 t^a)\\,\\epsilon _{(1)}^\\mu \\epsilon _{(2)}^\\nu \\epsilon _\\mathcal {A}^{*\\rho } \\epsilon _{\\mu \\nu \\rho \\sigma }\\nonumber \\\\&&\\;\\times \\left[q_{(2)}^\\sigma Q_1^2 A_n^a(Q_1^2,Q_2^2)-q_{(1)}^\\sigma Q_2^2 A_n^a(Q_2^2,Q_1^2)\\right],$ involving an asymmetric structure function $A_n(Q_1^2,Q_2^2) = \\frac{2g_5}{Q_1^2} \\int _0^{z_0} dz \\left[ \\frac{d}{dz} \\mathcal {J}(Q_1,z) \\right]\\mathcal {J}(Q_2,z) \\,\\psi ^A_n(z) $ where $\\psi ^A_n(z)$ is the holographic wave function of the $n$ -th axial vector meson.", "The most general decay amplitude of axial vector mesons would permit a second asymmetric structure function [44], [45], [46], but this does not appear in the simplest versions of hQCD considered here.", "The holographic result satisfies the Landau-Yang theorem [47], [48], which forbids the decay of an axial vector meson in two real photons, due to the fact that $\\mathcal {J}^{\\prime }(Q,z)=0$ for $Q^2=0$ .", "The asymptotic form of $A_n(Q_1^2,Q_2^2)$ for large virtualities is given by [42] An(Q12,Q22) 122 FAnNc Q4 1w4[ w(3-2w)+12 (w+3)(1-w)1-w1+w ], $w=(Q_1^2-Q_2^2)/(Q_1^2+Q_2^2)$ , which agrees with the independent pQCD result obtained recently in [49].", "Figure: Single-virtual axial vector TFF from holographic models(SS: blue, HW1: orange, HW2: red) compared withdipole fit of L3 data for f 1 (1285)f_1(1285) (grey band).", "The parametersof all models are fixed by matching f π f_\\pi and m ρ m_\\rho .The results for HW1 and HW2 almost coincide, with HW2 at most a linethickness above HW1.", "(Taken from )Figure: Double-virtual axial vector TFF for Q 1 2 =Q 2 2 =Q 2 Q_1^2=Q_2^2=Q^2 from holographic models(SS: blue, HW1: orange, HW2: red).", "The black dashed lines denote the extrapolationof L3 data with a dipole model for 
each virtuality as used in the calculation of a f 1 a_^{f_1} in Ref. .", "(Taken from )At moderate virtualities, the shape of the singly virtual axial TFFs turns out to be consistent with the dipole fit of data obtained by the L3 experiment for $f_1(1285)$ [51], as shown in fig.", "REF .", "The overall magnitude, which for HW1 and HW2 models is found as $21.04$ and $16.63$ GeV$^{-2}$ is in roughly the right ballpark compared to experimental data, albeit somewhat too high for HW1 which overestimates the equivalent photon widths of $f_1(1285)$ and $f_1^{\\prime }=f_1(1420)$ [51], [52], but consistent with the phenomenological value $A(0,0)_{a_1(1230)}=19.3(5.0) \\,\\mathrm {GeV}^{-2}$ obtained in [45], [53].", "The results from the L3 experiment have been used by Pauk and Vanderhaeghen [50] to estimate the contributions of $f_1$ and $f_1^{\\prime }$ to $a_$ , assuming a symmetric factorized dipole moment ${A^{\\mathrm {PV}}(Q_1^2,Q_2^2)}/{A(0,0)}=(1+Q_1^2/\\Lambda _D^2)^{-2}(1+Q_2^2/\\Lambda _D^2)^{-2}$ .", "This drops too quickly in the doubly virtual case to match the short-distance behavior required by pQCD.", "In fig.", "REF the hQCD results are shown and compared with this simple ansatz, exhibiting a considerable difference." ], [ "Melnikov-Vainshtein SDC", "In [42], [43] it has been shown moreover that in hQCD the infinite tower of axial vector mesons is responsible for the correct implementation of the Melnikov-Vainshtein SDC [54] for the HLBL scattering amplitude, which is a consequence of the nonrenormalization theorem for the chiral anomaly.", "In terms of the tensor basis used in [55] it reads $\\lim _{Q_3\\rightarrow \\infty }\\lim _{Q \\rightarrow \\infty } Q_3^2 Q^2 \\bar{\\Pi }_1(Q,Q,Q_3)= -\\frac{2}{3 \\pi ^2}.$ Clearly, each single meson exchange contribution gives a vanishing result in this limit, because the propagator in fig.", "REF goes like $1/Q_3^2$ and the two form factors go like $1/Q^2$ and $1/Q_3^2$ .", "In [54] Melnikov and Vainshtein proposed to estimate the impact of the MV-SDC by replacing the external TFF by its constant on-shell value.", "With current input data, this would lead to an increase of almost 40% of the pseudoscalar contribution $a_^{\\pi ^0,\\eta ,\\eta ^{\\prime }}$ by $38\\times 10^{-11}$ .", "However, the MV-SDC can also be satisfied by having an infinite tower of single-meson contributions.", "In Ref.", "[56], [57] Colangelo et al.", "showed that a Regge model for a tower of excited pseudoscalars can be constructed that saturates the MV-SDC with a contribution $\\Delta a_^\\mathrm {PS}=13(6)\\times 10^{-11}$ when using the available experimental constraints for the photon decay amplitudes of excited pseudoscalars.", "However, as also noted in [57], excited pseudoscalar states decouple in the chiral large-$N_c$ limit, which is not the case for the infinite tower of axial vector mesons.", "Indeed, in holographic QCD it is the latter which are responsible for the implementation of the MV-SDC [42], [43].", "This is illustrated in fig.", "REF for the simple HW2 model, which permits a closed form solution for the full series, for $\\Pi _1(Q,Q,Q_3)$ with large $Q=50$ GeV and increasing $Q_3\\ll Q$ : while each axial vector meson mode gives an asymptotically vanishing contribution, their infinite series satisfies the MV-SDC constraint.", "Figure: Axial-vector contribution toQ 3 2 Q 2 Π ¯ 1 (Q,Q,Q 3 )Q_3^2 Q^2 \\bar{\\Pi }_1(Q,Q,Q_3) as a function of Q 3 Q_3 at Q=50Q=50 GeVin the HW2 model (with g 5 =2πg_5=2\\pi so that the SDC is satisfied to 
100%).The black line corresponds to the infinite sum over the tower of axial vector mesons, and the colored linesgive the contributions of the 1st to 5th lightest axial vector mesons.", "(Taken from )In [38] two of us have shown that also in the HW1 model with massive quarks the infinite tower of axial vector mesons is responsible for the realization of the MV-SDC.", "The infinite tower of excited pseudoscalars, which does not decouple from the anomaly relation away from the chiral limit, contributes only subleading terms $\\propto \\ln (Q_3^2)/Q_3^4 Q^2$ to (REF ).", "The hQCD models generally satisfy the MV-SDC to the same level as they satisfy the SDCs on the TFFs (see the remarks in sect.", "REF .)", "In the limit where all $Q_i$ become large simultaneously, another SDC can be derived [54], [58] for $\\bar{\\Pi }$ , $\\lim _{Q\\rightarrow \\infty }Q^4\\bar{\\Pi }_1(Q,Q,Q)=-4/9\\pi ^2$ .", "In hQCD models with correct short-distance limit of the TFFs, the coefficient on the right-hand side comes out 19% smaller [43], [38]." ], [ "$a_$ contributions", "Fig.", "REF shows the individual contributions of axial vector meson excitations to the integrand in $a_^\\mathrm {AV}=\\int _0^\\infty dQ_1 \\int _0^\\infty dQ_2 \\int _{-1}^1 d\\tau \\,\\rho _a(Q_1,Q_2,\\tau )$ for $Q_1=Q_2$ and $\\tau =0$ .", "The ground-state axial vector meson mode clearly dominates, but the infinite sum of excited modes does contribute non-negligibly.", "For chiral models, this is shown quantitatively in table REF .", "In the Sakai-Sugimoto model, where the SDCs are not satisfied at all, excited modes contribute only a few percent.", "In the HW2 model, which satisfies the SDCs qualitatively, but quantitatively only at the level of 62% when $f_\\pi $ and $m_\\rho $ are fitted to physical values, the excited modes add +25% to the contribution of the ground-state mode ($j=1$ ).", "This percentage increases to about 30% in the HW1 model, where the SDCs are matched exactly.", "Also shown in table REF are two modified versions of the HW1 model with $g_5^2$ reduced by 10% and 15%, which can be viewed as taking into account next-to-leading order corrections at large but still physically relevant energy scales.", "Also in these models, the excited modes add around 30%, if not more, to $a_^{\\mathrm {AV}}({j=1})$ .", "Table: Axial vector contributions in chiral HW models with different levels of saturation of SDCs.", "In the chiral, flavor U(3)-symmetric case we have simply a AV =a a 1 +f 1 +f 1 ' =4a a 1 a_^{\\mathrm {AV}}=a_^{a_1+f_1+f_1^{\\prime }}=4 a_^{a_1}.", "Also given are the masses m a 1 m_{a_1} and the photon coupling amplitudes A 1 (0,0)A_1(0,0) of the ground-state axial vector mesons (j=1j=1).While being completely absent in the chiral HW2 model, excited pseudoscalars, which decouple from the axial current in the chiral limit, always contribute to $a_$ in the HW1 model and its variants.", "This has been studied in [38] in the flavor-symmetric case with and without modified scaling dimension of the bifundamental scalar and also for different boundary conditions on the hard wall.", "The various models show some differences in the composition of the contributions to $a_$ , but hardly deviate from the sum total in the chiral model (where the physical pion mass is inserted manually in the pion propagator).", "Figure: Bar chart of the individual contributions to aa_ from the isotriplet sectorin the Hirn-Sanz model fitted to IR (HW2) and UV data (HW2 UV-fit _\\text{UV-fit}), the chiral HW1 model and various extensions to 
the massive case, with excited modes given by increasingly darker colors, blue for the π 0 \\pi ^0's, red for the a 1 a_1's.", "(Taken from )In the flavor-U(3)-symmetric case the total axial vector sector is four times the contribution of the $a_1$ meson.", "With a 10% reduction of $g_5^2$ and correspondingly the level of saturation of the SDCs, the massive HW models together yield $a_^\\mathrm {AV}=4a_^\\mathrm {a_1}={39.5(1.6)}\\times 10^{-11}$ , where approximately 58% are due to longitudinal contributions entering $\\bar{\\Pi }_1$ .", "The contribution of excited pseudoscalars varies somewhat more, as can be seen from fig.", "REF .", "In models where their contribution is larger, the ground-state yields less, but we have $a_^\\mathrm {PS^*}\\gtrsim 3.3\\times 10^{-11}$ (again with U(3) symmetry and with 10% reduction of $g_5^2$ ).", "Even without the latter contribution, the holographic result is somewhat higher than the WP estimate for the combined effect of axial vector mesons and SDCs, $a_^\\mathrm {axials+SDC(WP)}=21(16)\\times 10^{-11}$ ." ], [ "Katz-Schwartz model: HW1 with solved U(1)$_A$ problem", "In the above estimates obtained from flavor-U(3)-symmetric HW models, the effects of the large strange quark mass and the anomalous breaking of U(1)$_A$ symmetry has been ignored (in [42], [43] the contribution of $\\eta $ and $\\eta ^{\\prime }$ mesons, where U(3) symmetry clearly is no good approximation, has been estimated by manually setting their masses and decay constants to physical values).", "In [41] we have recently considered the minimal extensionIn [59] an extension of the HW1 model was considered which in contrast to the minimal models leads to nonvanishing HLBL contributions of scalar mesons.", "While results were obtained that are consistent with previous estimates of these contributions, the scalar TFFs obtained therein do not agree with pQCD SDC constraints [49].", "of the HW1 proposed by Katz and Schwartz [60], [61] to implement the U(1)$_A$ anomaly through an additional complex scalar $Y$ whose magnitude and phase are dual to the gluon operators $\\alpha _s G_{\\mu \\nu }^2$ and $\\alpha _s G\\tilde{G}$ .", "There a coupling term $\\kappa Y^{N_f} \\det (X) \\subset \\mathcal {L}$ in the 5-dimensional theory accounts for U(1)$_A$ anomalous Ward identities.", "It turns out that all results are weakly dependent on the new coupling $\\kappa $ when $\\kappa \\gg 1$ .", "The only new free parameter is the value of the gluon condensate $\\Xi $ which can be prescribed in the choice of the background field $\\langle Y\\rangle =C+\\Xi z^4$ , while $C$ is fixed by OPE and the U(1)$_A$ anomaly to $C\\propto \\alpha _s$ .", "The latter is modeled by $\\alpha _s\\rightarrow 1/\\beta _0 \\ln (\\Lambda _{QCD}z)$ , $\\Lambda _{QCD}\\rightarrow z_0^{-1}$ .", "This model implements a Witten-Veneziano mechanism for the mass of isosinglet pseudoscalars, which are augmented by a pseudoscalar glueball mode mixing with the $\\eta $ and $\\eta ^{\\prime }$ mesons.", "In [60], Katz and Schwartz have found that their model can account for the masses of $\\eta $ and $\\eta ^{\\prime }$ mesons with deviations of about $10\\%$ .", "By turning on the gluon condensate parameter $\\Xi $ , we have found [41] that this can be improved to the percent level.", "In the case of axial vector mesons, this model is somewhat less successful: the masses of $f_1$ and $f_1^{\\prime }$ are raised too much compared to their experimental values, and also the experimental values of their mixing is not 
reproduced.", "Nevertheless, this provides a first glimpse of what a more complete hQCD model can give.", "The results that we have obtained with the Katz-Schwartz model augmented by a gluon condensate are given in table REF .", "In the isotriplet (pion and $a_1$ ) sector, the results are essentially unchanged, with the pion contribution matching perfectly the dispersive result, in particular when $g_5$ is fitted to match $F_\\rho $ , corresponding to a 90% level of saturation of the OPE constraints.", "The contributions from $\\eta $ and $\\eta ^{\\prime }$ mesons are then also completely in line with the WP estimates.", "Excited pseudoscalars (including a pseudoscalar glueball mode) contribute another $1.6\\times 10^{-11}$ , so that the total contribution of pseudoscalars is at the upper bound of the WP estimate.", "The combined contribution of ground-state axial vector mesons equals around 3.5 times the $a_1$ contribution, which is a moderate reduction of the factor 4 in the flavor-U(3) symmetric case.", "Somewhat surprisingly, the contribution of excited $f_1$ and $f_1^{\\prime }$ axial vector mesons is more strongly reduced, so that the “best guess” for the axials + LSDC contributions in this model turns out to be $a_^\\mathrm {AV+LSDC}=30.5\\times 10^{-11}$ , and $a_^\\mathrm {AV+PS^*+LSDC}=32\\times 10^{-11}$ when excited pseudoscalars are included.", "Both results are somewhat above but now safely within the estimated error of the WP estimate $a_^\\mathrm {axials+SDC(WP)}=21(16)\\times 10^{-11}$ .", "We should like to point out, however, that there are holographic QCD models that are closer to a string-theoretic top-down construction such as the typically more involved models of Ref.", "[62], [63], [64].", "It would be interesting to see if they can achieve a better fit of masses and mixing angles of $f_1$ , $f_1^{\\prime }$ mesons, and what they would predict for their contribution to $a_$ .", "Table: Summary of the results for the different contributions to aa_ in the massive HW model due to Katz and Schwartz (but augmented by a nonzero gluon condensate (v1)) in comparison with the White Paper values.", "The mass of π 0 \\pi ^0 is fixed as input, the masses of η\\eta and η ' \\eta ^{\\prime } are matched within 0.8 and 2.4%, while the axial vector mesons a 1 a_1 and f 1 f_1 are between 4% and 15% too heavy, f 1 ' f_1^{\\prime } by 28%." ], [ "Acknowledgments", "We would like to thank Luigi Cappiello, Gilberto Colangelo, Giancarlo D'Ambrosio, Martin Hoferichter, Elias Kiritsis, and Pablo Sanchez-Puertas for useful discussions.", "J. L. and J. M. have been supported by the Austrian Science Fund FWF, project no.", "P33655, and by the FWF doctoral program Particles & Interactions, project no.", "W1252-N27." ] ]
2212.05547
[ [ "A Compact TPC for the sPHENIX Experiment" ], [ "Abstract The sPHENIX detector to be installed at RHIC in 2022 is designed to precisely measure jets, jet correlations, and dilepton pairs in heavy-ion collisions.", "With these measurements in mind, sPHENIX will employ a compact TPC covering 20cm < r < 78 cm and |{\\eta}| < 1.1 as the central tracker.", "Utilizing an optimized Ar-CF4 gas mixture, zigzag readout pads, a 1.4 T solenoid, and a modified SAMPA chip for streaming readout, the TPC will provide a position resolution sufficient for measuring target observables in a high event rate environment.", "The design of the TPC will be discussed, as well as test beam data and potential applicability to the EIC" ], [ "Introduction", "sPHENIX [1] is a state-of-the-art detector currently under construction at the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven National Laboratory.", "sPHENIX is designed to make measurements of jet substructure, fragmentation functions, heavy flavor, and individual $\\Upsilon $ (nS) states in proton and heavy ion collisions to advance understanding of the quark-gluon plasma [2].", "Employing a MAPS-based silicon vertex detector (MVTX), an intermediate silicon tracker (INTT), and a compact TPC [3] as the tracking system, along with precision electromagnetic and hadronic calorimetry [4], sPHENIX will round out the RHIC physics program with high statistics jet and heavy flavor measurements [5] [6].", "Key to the success of sPHENIX is a combined tracking momentum resolution good enough to resolve the $\\Upsilon $ states from each other in the di-electron decay channel in both $p+p$ and $Au+Au$ collisions at $\\sqrt{s_{NN}}$ $=$ 200 $GeV/c$ .", "Along with jet substructure measurements, this sets the requirement for the sPHENIX tracking system to have a momentum resolution of $\\frac{\\Delta P}{P}$ $\\approx $ $.02 \\cdot P$ for charged particles with momenta around 5 $GeV/c$ .", "In addition to the requirements on detector resolution, the sPHENIX experiment is required to take large amounts of data at a $Au+Au$ event rate of 15 kHz or greater to maximize the statistics for rare probes.", "This rate requirement demands that the TPC operate in an ungated mode with a fast drift velocity gas to reduce occupancy.", "The sPHENIX TPC covers 20 cm $< r <$ 78 cm but is only instrumented for physics starting at 30 cm.", "The maximum drift length for an electron is 1.05 m in z.", "The readout on each endcap is segmented 36 times.", "The field cage and gas volume of the TPC are merged into one structure.", "The field cage is constructed of five long, flexible PCBs with very fine stripes designed to step the voltage down smoothly from the central cathode to the top of the gas electron multipliers (GEMs) and produce a highly uniform electric field in the z-direction.", "The central cathode itself is constructed of rigid ENIG-coated FR4 segments supported by a central honeycomb structure.", "The cathode will be held at 45 kV and will have aluminum stripes evaporated on it for laser calibration.", "Figure: Sketch of the sPHENIX TPC." 
], [ "Size", "The primary drivers of the size of the sPHENIX TPC are the requirements that sufficient room be allotted for calorimetry inside the magnet bore, and that the TPC sit in the uniform region of the magnetic field.", "Although the radial component of the magnetic field in the TPC volume is small compared to the overall magnitude of the field, this small component causes centimeter-scale distortions to electron positions that need to be corrected for.", "An ionizing laser system that provides straight lines of ionization throughout the TPC will allow for straightforward calibration of this effect.", "Additionally, the electromagnetic calorimeter requires radial extension to fully contain high $p_{T}$ EM showers.", "The measurement and identification of electrons and photons at high $p_{T}$ enables the study of $\\Upsilon $ decays and $\\gamma $ + jet events, both of which are cornerstones of the sPHENIX physics program.", "sPHENIX is also the first midrapidity detector at RHIC to utilize a full 2$\\pi $ hadronic calorimeter in the barrel.", "One portion of this hadronic calorimeter, the so-called inner HCal, also sits inside the magnet.", "This section improves the hadronic jet resolution of sPHENIX and better enables the use of particle flow techniques due to the reduced ambiguity in matching tracks to hadronic calorimeter clusters." ], [ "Momentum Resolution", "The measurement of momentum in a TPC boils down to the ability to pinpoint the initial location of primary ionization electrons in the gas.", "There are many factors which affect a TPC's ability to do this, but some of the most notable are the diffusion in the gas, the intrinsic spatial resolution of the readout, the magnetic field applied, and the distortion of tracks due to space charge.", "The spatial resolution of a TPC can thus be parameterized as in Eq.", "1.", "$\\sigma _{x}^{2} = \\frac{D_{T}^{2}L}{N_{eff}} + \\sigma _{pad}^{2} + \\sigma _{Space~Charge}^{2}$" ], [ "Diffusion", "To reduce the first term of Eq.", "1, the operating gas must have low diffusion.", "Since the sPHENIX physics program does not require particle identification by dE/dx in the TPC, the gas can be chosen to optimize position resolution.", "sPHENIX utilizes the 1.4 T solenoid that was designed for and operated successfully in the BaBar Experiment at SLAC.", "In comparison to STAR and ALICE, the magnetic field of this solenoid is quite strong.", "This is necessary for sPHENIX to precisely study high $p_{T}$ probes of the QGP, both due to the increased bending of tracks and the favorable effect this field has on the TPC spatial resolution.", "A strong, uniform magnetic field is especially suited for a TPC because the magnetic field decreases the diffusion of the electrons in the gas.", "The mechanism behind this is quite simple.", "As the electrons diffuse, they pick up velocity components in the directions transverse to the magnetic field.", "This causes them to spiral and remain pinned to the direction of the parallel $\\vec{E}$ and $\\vec{B}$ fields, effectively negating the transverse kicks given to the electrons as they traverse the gas.", "The diffusion will necessarily worsen the resolution for longer electron drift lengths.", "Thus the spatial resolution for electrons ionized near the readout will be better than those ionized near the central cathode.", "One of the advantages of the small size of the sPHENIX TPC is that the maximum drift length is only 1.05 m, to be compared with 2.45 m in ALICE [7] and 2 m in STAR [8].", "The sPHENIX 
TPC employs a gas mixture of 50$\\%$ neon with 50$\\%$ CF$_{4}$ At the time of writing of these proceedings, the gas mixture was Ne:CF4 50:50.", "Neon has recently become difficult to acquire, and for that reason the gas mixture was changed to Argon:CF4 in a 60:40 mixture.", "The properties are almost identical to those in Fig.", "3, but the ion mobility is worsened.", "to provide low diffusion and high drift velocity, as can be seen in Fig.", "3.", "The longitudinal diffusion and drift velocity are independent of the magnetic field to first order.", "Figure: Ne:CF 4 _{4} 50:50 gas properties.", "The vertical axis is a measure of diffusion or drift velocity.", "To minimize transverse diffusion, 400 V/cm is chosen as the operating point.The drift velocity of the sPHENIX configuration is $\\sim $ 80 $\\mathrm {\\mu m/ns}$ compared to $\\sim $ 55 $\\mathrm {\\mu m/ns}$ for the P10 gas of the STAR TPC and $\\sim $ 25 $\\mathrm {\\mu m/ns}$ for the Ne:CO$_{2}$ :N$_{2}$ 90:10:5 gas mixture of the ALICE TPC upgrade.", "The high drift velocity necessitates fast electronics to have acceptable position resolution in z and low occupancy.", "Therefore, sPHENIX is using the 80 ns peaking time option of the SAMPA ASIC designed for the ALICE upgrade [9].", "The sPHENIX TPC will additionally be operated in streaming readout mode." ], [ "Pad Resolution", "The second term in Eq.", "1 comes from the intrinsic pad resolution.", "It has been shown elsewhere [10] that pads designed to optimally share charge can have better intrinsic position resolution than similarly sized rectangular pads.", "This allows for macroscopically large pads to have excellent resolution with relatively small channel counts.", "For this reason, the sPHENIX TPC has zig-zag patterned readout pads, shown in Fig.", "5, which are designed to have charge sharing between at least two pads for all incoming charge clouds from the GEMs.", "The position resolution of the sPHENIX TPC prototype was studied extensively with the 120 GeV/c proton beam at the Fermilab Test Beam Facility.", "The prototype is a one-sided, 40 cm drift length TPC with one full readout module installed.", "Due to the bunched structure of the beam and the spacing between spills, space charge contribution to the resolution was negligible.", "The pad and diffusion resolution terms could be extrapolated to the case where a magnetic field was present and the TPC was full length.", "The end result of this study showed a position resolution of 90 $\\mu $ m for electrons deposited just above the readout and 130 $\\mu $ m for electrons drifting the full 105 cm TPC drift length.", "Figure: sPHENIX Quadruple GEM layout." 
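The figures quoted in this and the preceding subsection can be folded into two simple rules of thumb — the maximum drift time and the drift-length dependence of the point resolution. The Python lines below do only that arithmetic, using the $\sim $80 $\mu $m/ns drift velocity, the 1.05 m maximum drift, and the 90 $\mu $m / 130 $\mu $m test-beam resolutions at zero and full drift, interpolated with the diffusion term of Eq. 1 (i.e. $\sigma ^2$ linear in drift length).

# Back-of-the-envelope numbers from the values quoted in the text.
from math import sqrt

v_drift  = 80e-6          # m/ns, drift velocity of the fast CF4-based mixture
L_max    = 1.05           # m, maximum drift length
t_max_us = L_max / v_drift / 1e3
print(f"maximum drift time ~ {t_max_us:.1f} us")

# Interpolate sigma_x(z) between the quoted test-beam endpoints using
# sigma^2 = sigma_pad^2 + (D_T^2 / N_eff) * z, i.e. linear in drift length z.
sigma_0, sigma_full = 90e-6, 130e-6               # m, at z = 0 and z = L_max
diff_term = (sigma_full**2 - sigma_0**2) / L_max  # effective D_T^2 / N_eff

for z in (0.0, 0.25, 0.50, 0.75, 1.05):
    sigma = sqrt(sigma_0**2 + diff_term * z)
    print(f"z = {z:4.2f} m  ->  sigma_x ~ {1e6 * sigma:.0f} um")

The $\sim $13 $\mu $s drift window obtained here is also the relevant time scale for pileup in the ungated, streaming readout, which comes up again in the EIC discussion below.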
], [ "Space Charge", "The ability of the TPC to precisely measure momentum also depends on minimization of track distortions due to electric and magnetic field non-uniformities.", "All TPCs have time-independent track distortions due to realistic field effects such as field cage granularity, radial $\\vec{B}$ field components, etc.", "These distortions are typically large and remain throughout the life of an experiment, however they can typically be easily calibrated away.", "A more difficult to calibrate effect is the time-dependent electric field produced by the buildup of space charge.", "This effect is dominated by positive ions from the gain stage that drift slowly throughout the volume of the TPC until they reach the TPC field cage or cathode and neutralize.", "The space charge density and track distortion near the inner field cage is very large, which informed the decision to begin physics instrumentation 10 cm beyond the inner radius of the TPC.", "In lieu of normal readout, this region has large pads designed to monitor the amount of incoming charge, providing additional information about the amount of ions and their locations in the TPC volume at a given time.", "The magnitude of the space charge distortions can be drastically reduced by decreasing the fraction of ions that escape from the GEMs without neutralizing.", "To reduce this ion back-flow (IBF) fraction below 1$\\%$ , the sPHENIX TPC uses four GEMs in an orientation similar to the ALICE upgrade [11], as shown in Fig.", "6.", "The operating voltages of the GEMs were determined via specialized bench experiments that could measure the amount of IBF and were tested thoroughly in the sPHENIX prototype TPC in test beam.", "In addition to the ionizing laser mentioned earlier, a diffuse laser calibration system shining on aluminum stripes evaporated onto the TPC central cathode will allow for real-time measurement of space charge-induced distortions as the experiment runs with beam.", "Aluminum was chosen because its work function is lower than that of the gold cathode, allowing photoelectrons to be liberated from the stripes but not the central cathode itself.", "These liberated electrons will traverse the entire drift of the TPC, and accumulate information about the distribution of space charge.", "By comparing the known pattern of the aluminum stripes to the detected pattern at the readout plane, one can measure the integrated distortions that electrons receive as they traverse the TPC.", "Care has been taken to optimize the amount of diffusion and the power output of the laser such that a uniform illumination can be achieved.", "Many of the components of sPHENIX, including the aforementioned calorimeters, can be repurposed for studying $e+p$ and $e+A$ collisions at the future Electron-Ion Collider (EIC) to be housed at the current RHIC facility.", "The event rate for $e+p$ collisions at the EIC will be similar to $Au+Au$ collisions in sPHENIX ($\\sim $ 15 kHz), but the track multiplicity will be much lower.", "The average track multiplicity at the EIC will be around 15 tracks per event, compared to 400 tracks per event in $Au+Au$ events at RHIC.", "This means that the occupancy of the TPC will be much lower, and perhaps an optimization could be done to allow the TPC to identify hadron species via dE/dx while also improving the position resolution.", "Space charge will be less of an issue, so the region from 20-30 cm in r can be instrumented for physics.", "Figure: Example of a possible EIC detector configuration.Another 
option is to modify the TPC by removing one of the endcaps and moving the cathode to the end in the electron-going direction.", "The impetus for this modification is that the readout of the TPC requires mechanical support and cooling, which contribute to the material budget.", "Since the measurement of the direction, momentum, and species of the scattered electron is of utmost importance at the EIC, it is reasonable to attempt to reduce the material in the electron-going direction.", "While this will increase the average drift length and worsen the TPC spatial resolution at large drift lengths due to diffusion, the reduced material will generally improve the reconstruction and identification of the scattered electron." ], [ "Summary", "The compact sPHENIX TPC will be installed into sPHENIX in July 2022.", "The various factors affecting the TPC momentum resolution and rate capability have been discussed, along with the solutions to be employed.", "All studies performed, including test beams and bench experiments, indicate that the TPC will be able to achieve the resolution required by the sPHENIX physics program." ] ]
2212.05541
[ [ "Robust Inference in High Dimensional Linear Model with Cluster\n Dependence" ], [ "Abstract Cluster standard error (Liang and Zeger, 1986) is widely used by empirical researchers to account for cluster dependence in linear model.", "It is well known that this standard error is biased.", "We show that the bias does not vanish under high dimensional asymptotics by revisiting Chesher and Jewitt (1987)'s approach.", "An alternative leave-cluster-out crossfit (LCOC) estimator that is unbiased, consistent and robust to cluster dependence is provided under high dimensional setting introduced by Cattaneo, Jansson and Newey (2018).", "Since LCOC estimator nests the leave-one-out crossfit estimator of Kline, Saggio and Solvsten (2019), the two papers are unified.", "Monte Carlo comparisons are provided to give insights on its finite sample properties.", "The LCOC estimator is then applied to Angrist and Lavy's (2009) study of the effects of high school achievement award and Donohue III and Levitt's (2001) study of the impact of abortion on crime." ], [ "Introduction", "In linear regression models, it's common to assume the observations can be sorted into clusters where observations are independent across clusters but correlated within the same cluster.", "One method to conduct accurate statistical inference under this setting is to estimate the regression model without controlling for within-cluster error correlation, and then proceed to compute the so called cluster-robust standard errors (White, 1984; Liang and Zeger, 1986; Arellano, 1987, Hasen, 2011).These cluster-robust standard errors do not require a model for within-cluster error structure for consistency, but do require additional assumptions such as the number of cluster tends to infinity or the size of the cluster tends to infinity.", "The cluster-robust standard errors had became popular among applied researchers after Rogers (1993) incorporated the method in Stata.", "For a comprehensive methodology review, see Cameron and Miller (2015) or Imbens and Kolesar (2016).", "As demonstrated by the Monte Carlo results in Bell and McCaffrey (2002), BM thereafter, the cluster-robust variance is biased in finite sample in general.", "They devised a bias-correction method to reduce the finite sample bias.", "Since then, a large body of research has emerged to investigate and address this small sample problem.", "Young (2019) did an excellent meta analysis that involves 53 experimental paper from the journals of the American Economic Association and concluded that use of conventional clustered/robust standarderror in these paper yields more statistically significant results.", "We shall see in later section that a sufficent condition for this finite sample bias to vanish as sample size, $n$ , grows is that the maximum leverage of the sample, $\\max _i h_{ii}$ , tends to zero as sample size grows.", "This condition will be satisfied if the ratio of the number of parameters and samplel size, $\\frac{p}{n}$ , tends to zero as $n$ tends to infinity (see Huber, 1981).", "As an example, this sufficient condition is met in White's proof (1984) given his primitive conditions on the design matrix.", "The requirement that the expected value of leverage is zero asymptotically or equivalently $\\lim _{n\\rightarrow \\infty }\\frac{p}{n}=0$ is unattractive in many modern day applications as many datasets involve a large set of control variables where the number of control variables could grow at the same rate as the sample size.", "At the time this article 
was written, little research had been done to relax this requirement for data with cluster dependence.", "One such paper is Verdier (2018), which considers cluster-robust inference in fixed effect models under high dimensional asymptotics using subsetting.", "His method accommodates instrumental variables estimation at the cost of efficiency and restricted invertibility.", "D'Adamo (2019) also provides a consistent estimator under one-way clustering, but that estimator likewise suffers from invertibility issues.", "Most other research focuses exclusively on data with conditional heteroskedasticity.", "In particular, Cattaneo, Jansson and Newey (2018), CJN hereafter, give inference methods that allow for many covariates and heteroskedasticity.", "They derive consistency of the OLS estimate under high dimensional asymptotics and provide a consistent variance estimator under the condition that the maximum leverage is less than or equal to 1/2 as the sample size tends to infinity.", "Kline, Saggio and Solvsten (2020), KSS hereafter, propose an unbiased leave-one-out estimator that is consistent and only requires the maximum leverage to be bounded away from 1.", "Jochmans (2021) develops an approximate version of the leave-one-out estimator under the asymptotic setting of CJN (2018).", "In this paper, our main goal is to extend CJN (2018)'s framework to accommodate cluster dependence and to follow up on KSS (2020)'s remark by developing a leave-cluster-out extension of their estimator.", "The asymptotic properties of our estimator are established under CJN (2018)'s asymptotic framework, and the two papers are unified in the process.", "We find that, with some additional effort, most results in CJN (2018) generalize to data with cluster dependence under some additional mild assumptions on the unobserved errors, while no additional assumptions on the design matrix are required.", "The rest of the paper is structured as follows.", "Section 2 motivates the leave-cluster-out estimator by providing bounds on the bias of the traditional cluster-robust standard error and showing that the bound collapses if the maximum leverage tends to zero as the sample size grows.", "Section 3 introduces the setup and notation and establishes the unbiasedness and consistency of the leave-cluster-out crossfit (LCOC) estimator.", "Section 4 provides Monte Carlo experiment results to assess the finite sample performance of our estimator.", "Section 5 applies the LCOC estimator to Angrist and Lavy's (2009) study of the effects of a high school achievement award and Donohue and Levitt's (2001) study of the causal impact of legalized abortion on crime reduction.", "The paper ends with a short conclusion.", "The appendix contains the proofs of all theoretical results." ], [ "Bias of Cluster-Robust Standard Error", "We start by bounding the bias of the popular cluster-robust standard errors in order to motivate our leave-cluster-out estimator.", "The approach here follows Chesher and Jewitt (1987), CJ hereafter.", "In particular, we analyze the eigenstructure of the data and provide bounds in terms of eigenvalues and elements of the hat matrix.", "As we will see, the bias is bounded by the maximum leverage of the sample, and therefore the estimator is asymptotically unbiased if the maximum leverage vanishes asymptotically." 
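To make the role of leverage concrete, here is a minimal numpy sketch (ours, not the paper's code) that computes the leverages of a purely hypothetical design matrix and compares the maximum leverage with the ratio p/n; the average leverage always equals p/n, while the maximum leverage, which drives the bias bounds below, can be much larger when the design is imbalanced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 180                      # hypothetical sample size and parameter count
X = rng.standard_normal((n, p))
X[:, :30] = (X[:, :30] > 1.5)         # a few sparse dummies make the design imbalanced

# Leverages are the diagonal of the hat matrix H = X (X'X)^{-1} X'.
Q, _ = np.linalg.qr(X)                # h_ii equals the squared row norm of Q
h = np.sum(Q**2, axis=1)

print(f"p/n                = {p / n:.3f}")
print(f"average leverage   = {h.mean():.3f}")   # equals p/n
print(f"maximum leverage   = {h.max():.3f}")    # the quantity the bias bounds depend on
```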
], [ "OLS Estimator", "Let $ \\lbrace 1,\\ldots ,n\\rbrace $ be set of indexes for the sample where $n$ is equal to the sample size.", "Let $G$ be a partition of $ \\lbrace 1,\\ldots , n\\rbrace $ where $|G|$ is the number of clusters.", "We reserve $g$ to denote the element of $G$ only and each $g$ is an ordered subset of $\\lbrace 1,\\ldots ,n\\rbrace $ .", "Let $g(i)$ returns the index value of its $i$ th element, $g[i]$ be the cluster that individual $i$ belongs and $|g|$ be the size of the cluster $g$ .", "Notations regarding the data structure are given below $\\underbrace{\\mathbf {y}}_{n\\times 1}=\\begin{pmatrix}\\mathbf {y}_{g_1}\\\\\\mathbf {y}_{g_2}\\\\\\vdots \\\\\\mathbf {y}_{g_{|G|}}\\end{pmatrix}=\\begin{pmatrix}y_1\\\\y_2\\\\\\vdots \\\\y_n\\end{pmatrix}, \\ \\ \\ \\underbrace{\\tilde{X}}_{n\\times p}=\\begin{pmatrix}\\tilde{X}_{g_1}\\\\\\tilde{X}_{g_2}\\\\\\cdots \\\\\\tilde{X}_{g_{|G|}}\\end{pmatrix}=\\begin{pmatrix}\\tilde{X}_1\\\\\\tilde{X}_2\\\\\\cdots \\\\\\tilde{X}_n\\end{pmatrix}$ $u=\\begin{pmatrix}u_{g_1}\\\\\\vdots \\\\u_{g_{|G|}}\\end{pmatrix}=\\begin{pmatrix}u_1\\\\\\vdots \\\\u_{n}\\end{pmatrix}, \\ \\ \\text{E}(uu^{\\prime }|\\tilde{X} )=\\begin{pmatrix}\\Omega _{g_1} & 0 & \\cdots &0\\\\0& \\Omega _{g_2} & \\cdots &0\\\\\\vdots & \\vdots & \\ddots & \\vdots \\\\0 & 0 & \\cdots &\\Omega _{g_{|G|}}\\end{pmatrix}=\\Omega $ Consider the linear model $\\mathbf {y}=\\tilde{X}\\beta +u,\\ \\ \\text{E}(u|\\tilde{X})=0.$ The OLS estimator is given by $\\hat{\\beta }_{OLS}&=(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\mathbf {y}=\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\sum _{g\\in G}\\tilde{X}_g^{\\prime }y_g$ We are interested in the variance matrix conditional on $\\tilde{X}$ $\\text{Var}(\\hat{\\beta }_{OLS}|\\tilde{X})&=(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\Omega \\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\\\&=\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\Omega _g \\tilde{X}_g\\Big )\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\\\&=\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\Big (\\sum _{g\\in G}\\sum _{i=1}^{|g|}\\sum ^{|g|}_{j=1}\\tilde{x}_{ig}\\tilde{x}_{jg}^{\\prime }(\\Omega _g)_{ij} \\Big )\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}$ where $\\tilde{x}_{ig}$ and $\\tilde{x}_{jg}$ denote the $i$ th and $j$ th row of $\\tilde{X}_g$ respectively.", "The cluster robust covariace matrix replaces $\\Omega _g$ with the plug-in estimator $\\hat{u}_g\\hat{u}_g^{\\prime }\\text{ , }\\hat{u}_g=y_g-X_g\\hat{\\beta }_{OLS}.$ Formally, the cluster-robust estimate is consistent if $\\frac{1}{|G|}\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\hat{u}_g\\hat{u}_g^{\\prime }\\tilde{X}_g\\Big )-\\frac{1}{|G|}\\Big (\\sum _{g\\in G}E[\\tilde{X}_g^{\\prime }\\Omega _g \\tilde{X}_g]\\Big )\\xrightarrow{}0\\text{ as }|G|\\rightarrow \\infty .$ The consistency of the above estimator is established by White (1984), Liang and Zeger (1986) and Hansen (2007) with varying degree of restrictiveness.", "For the purpose of this section, reader might assume we follow the asymptotic setting of White (1984) unless otherwise stated.", "For the proof of White (1984) to go through, we need the estimator $\\hat{u}_g\\hat{u}_g^{\\prime }$ to be an asymptotic unbiased estimator of $\\Omega _g$ .", "While this is true under White's regularity conditions, this is not 
necessary true when considering high dimensional asymptotics as it violates one of the regularity condition of White (1984) that the probability limit of $\\frac{1}{n}\\tilde{X}^{\\prime }\\tilde{X}$ to be finite and positive definite." ], [ "Finite Sample Bias of Cluster-Robust Variance", "It would be informative to directly analyze the finite bias of the cluster-robust variance estimator.", "We will generalize CJ (1987)'s analysis to a covariance matrix with non-diagonal elements.", "The cluster-specific bias is given by $B_g&=\\text{E}(\\hat{u}_g\\hat{u}_g^{\\prime }|\\tilde{X})-\\text{E}(u_gu_g^{\\prime }|\\tilde{X})\\\\&=\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}(\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\Omega _g\\tilde{X}_g)(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }-\\Omega _g\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }-\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }\\Omega _g\\\\&=H_{g}\\Omega H_{g}^{\\prime }-(H_{g,g}\\Omega _g+\\Omega _gH_{g,g})$ where we define $H_{g}=\\tilde{X}_g(\\tilde{X}\\tilde{X})^{-1}\\tilde{X}^{\\prime }$ and $H_{g,g}=\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g$ (see appendix for detail).", "It is interesting to point out here that if $\\Omega =\\sigma ^2 I$ , then $B_g=-H_{g,g}\\sigma ^2$ which is negative definite.", "This is in line with the view that the bias is downward in general.", "It would be interesting to examine under what conditions would $B_g$ gurantee to be positive or negative definite and we leave this for future researches.", "We now consider the proprtionate bias term introducded by CJ (1987) $pb(\\widehat{\\text{Var}}_{\\text{cluster}})&=\\frac{w^{\\prime }\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}(\\sum _{g\\in G}\\tilde{X}_g^{\\prime }B_g\\tilde{X}_g)\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}{w^{\\prime }\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\Omega _g \\tilde{X}_g\\Big )\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}$ where $w$ is any non-zero vector that has the same dimension as $\\beta $ .", "If we define $z_g= \\tilde{X}_g\\Big (\\sum _{g\\in G} \\tilde{X}_g^{\\prime } \\tilde{X}_g\\Big )^{-1}w$ , then the above can be explicitly written as $pb(\\widehat{\\text{Var}}_{\\text{cluster}})=\\frac{z_{g_1}^{\\prime }B_{g_1}z_{g_1}+z_{g_2}^{\\prime }B_{g_2}z_{g_2}+\\ldots +z_{g_{|G|}}^{\\prime }B_{g_{|G|}}z_{g_{|G|}}}{z_{g_1}^{\\prime }\\Omega _{g_1}z_{g_1}+z_{g_2}^{\\prime }\\Omega _{g_2}z_{g_2}+\\ldots +z_{g_{|G|}}^{\\prime }\\Omega _{g_{|G|}}z_{g_{|G|}}}$ which is a ratio of sum of quadratic forms.", "The above term is further bounded by two Rayleigh-quotient-like quantities $\\max _{g\\in G}\\frac{z_g^{\\prime }B_g z_g}{z_g^{\\prime }\\Omega _g z_g}$ and $\\min _{g\\in G}\\frac{z_g^{\\prime }B_gz_g}{z_g^{\\prime }\\Omega _g z_g}$ .", "Applying result on ratio of quadratic forms from Rao p74 (1972) to these two quantities gives us the theorem below (see appendix for full proof).", "Theorem B.1 The proptionate bias is bounded by $\\lambda _{\\min }(B_{g_{\\min }}\\Omega _{g_{\\min }}^{-1})\\le pb(V_{\\text{cluster}})\\le \\lambda _{\\max }(B_{g_{\\max }}\\Omega _{g_{\\max }}^{-1}),$ where $g_{\\max }$ and $g_{\\min }$ denote the cluster with greatest cluster specific bias and the cluster with smallest cluster specific bias respectively.", "Our final result generalizes CJ 
(1987)'s insight to errors with cluster dependence.", "Theorem B.2 If $B_g$ is positive definite (or negative definite) for all $g$ , $|g|=O(1)$ , $\\Omega _g=O(1)$ and $\\lim _{n\\rightarrow \\infty } \\max _i h_{ii}= 0$ , then $\\lim _{n\\rightarrow 0}pb(\\widehat{\\text{Var}}_{\\text{cluster}})= 0$ In words, if the design is approximately balanced in that sense that the maximum leverage vanishes asymptotically and the size of the cluster is bounded, then the usual cluster-robust standard error (LZ, 1987) will be asymptotically unbiased.", "Theorem 2.2. motivates the use of an unbiased estaimtor in finite sample to avoid relying on the asympotic assumptions to kill the bias in large sample." ], [ "General Framework", "Suppose $\\lbrace (y_i,x_i^{\\prime },w_i^{\\prime }):1\\le i \\le n\\rbrace $ is generated by $y_i&=\\beta ^{\\prime } x_i+\\gamma ^{\\prime } w_i+u_i\\\\x_i&=\\alpha ^{\\prime }w_i+v_i=\\text{E}(x_i|\\mathcal {W}_n)+V_i$ for $i=1,\\ldots ,n$ where $\\alpha ^{\\prime } = (\\sum ^n_{j=1}\\text{E}[x_jw_j^{\\prime }])(\\sum ^n_{j=1}\\text{E}[w_jw_j^{\\prime }])^{-1}$ and $v_i$ is the deviation of $x_i$ from the population linear projection.", "We will refer equation (1) as the primary model and equation (2) as the auxiliary model.", "Our goal is to conduct valid inference on $\\beta $ .", "Note that the auxiliary model could be mis-specified in the sense that $ \\alpha _n^{\\prime }w_i\\ne \\text{E}(x_i|\\mathcal {W}_n)$ .", "We assume $\\text{E}(u_i|\\mathcal {X}_n,\\mathcal {W}_n)=0$ to take advantage of the unbiasedness of our variance estimator.", "This contrasts to CJN (2018)'s framework where they also allow for an asymptotic negligible amount of mis-specification errors in the primary model.", "We will recyle the OLS notations used in section 2.1.", "Note that $\\underbrace{\\tilde{X}}_{n\\times p}=\\begin{pmatrix}\\underbrace{W}_{n\\times k},\\underbrace{X}_{n\\times r}\\end{pmatrix}=\\begin{pmatrix}w_{1}, x_{1}\\\\w_{2}, x_{2}\\\\\\vdots \\\\w_{n}, x_{n}\\\\\\end{pmatrix}.$ Define $H_{\\tilde{X}}=\\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }$ and $H_{W}=W(W^{\\prime }W)^{-1}W^{\\prime }$ as the hat matrices generated by $\\tilde{X}$ and $W$ respectively.", "We have $M=I-H_{W}$ and $\\tilde{M}=I-H_{\\tilde{X}}$ .", "Then $M\\mathbf {y}$ and $\\mathbf {\\hat{v}}=MX$ are the residuals after regressing $Y$ and $X$ on $W$ respectively.", "Define $H_{\\mathbf {\\hat{v}}}=\\mathbf {\\hat{v}}(\\mathbf {\\hat{v}}^{\\prime }\\mathbf {\\hat{v}})^{-1}\\mathbf {\\hat{v}}^{\\prime }$ as the hat matrix generated by $\\mathbf {\\hat{v}}$ .", "Under our asymptotic framework, it would be convient to re-state OLS estimator in the following format $\\hat{\\beta }_{\\text{OLS}}=(\\mathbf {\\hat{v}}^{\\prime }\\mathbf {\\hat{v}})^{-1}\\mathbf {\\hat{v}}^{\\prime }\\mathbf {y},$ which is just an applciation of the Frisch–Waugh–Lovell theorem.", "Assumption 1 $\\max _{g\\in G} |g|=O(1)$ , where $|g|$ is the cardinality of $g$ and where $G=\\lbrace g_1,\\ldots ,g_{|G|}\\rbrace $ is a partition of $\\lbrace 1,\\ldots ,n\\rbrace $ such that $\\lbrace (u_{i},V_{i}{^{\\prime }}\\rbrace : i \\in g\\rbrace $ are independent across $g$ conditional on $(\\mathcal {X}_n,\\mathcal {W_n})$ .", "The assumption defines the sampling environment.", "It is a modified version of CJN (2018, assumption 1) that allows for within-cluster dependence that is common in panel data analysis.", "The asymptotics employed for the cluster structure is the same as the ones used in White (1984) 
where each cluster's size is bounded and $G$ is proportional to $n$ .", "Assumption 2 $P[\\lambda _{\\min }(\\sum ^n_{i=1}w_{i}w_{i}^{\\prime })>0)>0]\\rightarrow 1$ , $\\overline{\\lim }_{n\\rightarrow \\infty }\\frac{k}{n}<1$ and $\\max _{1\\le i\\le n}\\Big \\lbrace &\\text{E}[u_i{^4}|\\mathcal {X}_n,\\mathcal {W}_n]+\\text{E}[\\Vert V_i\\Vert ^4|\\mathcal {W}_n]+\\frac{1}{\\lambda _{\\min }(\\Omega )}+\\frac{1}{\\lambda _{\\min }(\\text{E}[\\frac{1}{n}\\sum ^n_{i=1}\\tilde{V}_i\\tilde{V}_i^{\\prime }|\\mathcal {W}_n])}\\Big \\rbrace =O_p(1)$ where $\\tilde{\\Sigma }=\\frac{1}{n}\\sum _{g\\in G}\\sum _{i,j\\in g}\\tilde{V}_{i}\\tilde{V}_{j}{^{\\prime }}E[u_{i}u_{j}|\\mathcal {X}_n,\\mathcal {W}_n]$ and $\\tilde{V}=\\sum ^n_{j=1}M_{ij}V_j$ .", "First condition prevents the elements of the design matrix of the nuisance covariates from being too close to singularity.", "This is a generalization of the uniformly nonsingularity assumption (White, 1984, p22) where it allows the rank of $\\sum ^n_{i=1}w_{i}w_{i}^{\\prime }$ to grow as $n$ grows.", "This assumption is not restrictive as any linear dependent nuisance covariates can be dropped without impacting the OLS estimate.", "Second condition allows the number of parameters to be estimated to grow in line with sample size as long as we have slightly more than one observations per parameter.", "Third condition are moment conditions that restrict distributions of $u_{i}$ and $V_{i}$ from the main model and auxiliary model respectively.", "Note that the assumption differs from CJN (2018)'s assumption 2 in that we restrict the eigenstructure of $\\Omega $ to control for the within-cluster correlations.", "Assumption 3 $E[u_{i}|\\mathcal {X}_n,\\mathcal {W}_n]=0\\ \\forall i$ , $\\chi =\\frac{1}{n}\\sum ^n_{i=1}\\text{E}(\\Vert \\underbrace{\\text{E}[v_i|\\mathcal {W}]}_{Q_i}\\Vert ^2)=O(1)$ and $\\frac{\\max _i\\Vert \\hat{v}_{i}\\Vert }{\\sqrt{n}}=o_p(1)$ .", "First condition is the usual exogeneity condition.", "This is necessary for the unbiasedness of our leave-cluster out estimator.", "One might relax this assumption to allow an asymptotic negligible amount of mis-specification bias in the primary model (see CJN 2018).", "Our estimator would then lose its unbiasedness but remain consistent.", "Second condition restricts the amount of inaccuracy permitted in the linear prediction of the conditional mean in the auxiliary model.", "The third condition is a necessary condition for the maximum leverage of design matrix of the second stage regression to vanish asymptotically.", "Assumption 4 $\\text{Pr}(\\min _g\\text{det}(\\tilde{M}_{g,g})>0)\\rightarrow 1$ $\\frac{1}{\\min _g\\lambda _{\\min }(\\tilde{M}_{g,g})}=O_p(1)$ $\\frac{\\sum ^n_{i=1}||\\tilde{Q}_{i}||^4}{n}=O_p(1)$ , where $\\tilde{Q}_i=\\sum ^n_{j=1}M_{ij}Q_i$ $\\frac{\\max _i||\\mu _{i}||}{\\sqrt{n}}=o_p(1)$ Assumption 4 is a set of conditions needed for variance estimation.", "First and second conditions are there to control perfect and near perfect collinearity.", "The first one allows the estimator to exist in large sample with high probability while the second one prevents the variance of the estimator to blow up in large sample due to near perfect collinearity.", "Third assumption is needed to bound the foruth moment of the error in the auxiliary model.", "This in effect allows us to bound $\\frac{1}{n}\\sum ^n_{i=1}\\hat{v}_i^4$ which also contributes to the variance of the estimator.", "The last assumption restricts the amount of noise in the level variable $y_i$ that could 
come from the conditional mean $\\mu _i$ .", "This is needed because the crossfit estimator uses $y_i$ as a proxy for the unobserved error $u_i$ ." ], [ "Theoretical Results", "Our first result extends CJN (2018)'s asymptotic normality result for the high dimensional linear model to data with cluster dependence.", "Theorem C.1 Suppose Assumptions 1-3 hold and $\\lambda _{\\min }(\\sum ^n_{i=1}w_{i}w_{i}^{\\prime })>0,\\lambda _{\\min }(\\hat{\\Gamma })>0$ .", "Then, $\\Omega ^{-1/2}\\sqrt{n}(\\hat{\\beta }-\\beta )=\\hat{\\Gamma }^{-1}S\\xrightarrow{}\\mathcal {N}(0,I),\\ \\ \\ \\Omega =\\hat{\\Gamma }^{-1}\\Sigma \\hat{\\Gamma }^{-1},$ where $\\hat{\\beta }=1\\lbrace \\lambda _{\\min }(\\hat{\\Gamma })>0\\rbrace \\hat{\\Gamma }^{-1}\\Big (\\frac{1}{n}\\sum _{1\\le i\\le n}\\hat{v}_{i}y_{i}\\Big ),\\ \\ \\ \\hat{\\Sigma }=\\frac{1}{n}\\sum _{g\\in G}\\sum _{i,j\\in g}\\hat{v}_{i}\\hat{v}_{j}{^{\\prime }}E[u_{i}u_{j}|\\mathcal {X}_n,\\mathcal {W}_n],$ $\\hat{\\Gamma }=\\frac{1}{n}\\sum _{1\\le i \\le n}\\hat{v}_{i}\\hat{v}_{i}^{\\prime } \\ \\text{ , }\\ S=\\frac{1}{\\sqrt{n}}\\sum _{1\\le i\\le n}\\hat{v}_{i}u_{i}\\ \\ \\text{ and }\\ \\ \\ \\hat{v}_{i}=\\sum _{1\\le j\\le n}M_{ij}x_{j}.$ We can now introduce our leave-cluster-out crossfit (LCOC) estimator.", "Definition C.2 (Leave-Cluster-Out estimator) $\\hat{\\Sigma }^\\text{LCOC}=\\frac{1}{n}\\sum _{g\\in G}\\sum _{i,j\\in g}\\hat{v}_{i}\\hat{v}_{j}{^{\\prime }}(y_{i}\\hat{u}_{-g,j}+\\hat{u}_{-g,i}y_{j})$ where $\\hat{u}_{-g,i}=\\sum ^n_{j=1}M_{ij}(y_{j}-x_{j}^{\\prime }\\hat{\\beta }_{-g})$ and $\\hat{\\beta }_{-g}$ is the OLS estimator computed by excluding the observations in cluster $g$ .", "We now state two results about the LCOC estimator.", "Theorem C.3 (Unbiasedness) Suppose $\\hat{\\Sigma }^\\text{LCOC}$ exists; then $\\text{E}[\\hat{\\Sigma }^\\text{LCOC}|\\mathcal {X}_n,\\mathcal {W}_n]=\\Sigma .$ Theorem C.4 (Consistency) Suppose Assumptions 1-4 hold; then $\\hat{\\Sigma }^\\text{LCOC}=\\Sigma +o_p(1).$" ], [ "Numerical Results", "We consider a setup that emulates our empirical examples: $y_{it}=x_{it}\\beta +w_{it}\\gamma _t+\\epsilon _{it}$ where $\\beta =0.5$ , $(x_{it},w_{it})\\sim _{iid}N(0,1)$ , $\\gamma _t\\sim U[-0.5,0.5]$ , $\\epsilon _{it}=\\Big (0.8\\epsilon _{it-1}+0.2u_{it}\\Big )|x_{it}|$ and $u_{it}\\sim N(0,1)$ .", "We generate a sample of $N$ individuals observed over $T$ periods.", "We perform a Monte Carlo simulation with 1,000 repetitions of the above setup.", "Note that the model is both serially correlated and heteroskedastic.", "This specification gives a ratio of parameters to observations of $\\frac{181}{1000}=18.1\\%$ .", "We find that the average bias of the cluster-robust estimator is -0.1909 and the average bias of the BM estimator is 0.0785.", "This supports the view that the LZ estimator is generally biased downward while the BM jackknife-type estimator is generally biased upward.", "Our leave-cluster-out estimator is unbiased, so it is little surprise that its average bias is close to zero.", "In terms of variance, the LZ estimator is the most precise (0.2438), while BM comes second (0.6038) and ours comes last (0.7552).", "This illustrates the bias-variance trade-off and is expected, because the BM estimator leaves out observations and the LCOC estimator uses the level variable, which could be noisy.", "In terms of MSE performance, LZ comes out ahead in this experiment.", "However, the differences among the three are small and all are of the same order of magnitude.", "Note that the LCOC estimator is the only consistent estimator here, so it 
will eventually outperform the other two as the sample size of the Monte Carlo experiment increases.", "In terms of rejection rates, under the null that $\\beta =0.5$ , our estimator has the best size control.", "Note that the t-statistic constructed using the LZ estimator (BM estimator) under-rejects (over-rejects).", "Figure: Errors of the three estimators in each repetition." ], [ "Angrist and Lavy (2009)", "Angrist and Lavy (2009), AL hereafter, analyze the effects of high-stakes high school achievement awards using a cash incentive experiment.", "Their identification of the treatment effect is based on the general model below $y_{ij}=\\Lambda [\\mathbf {X}_j^{\\prime }\\mathbf {\\alpha }+\\sum _qd_{qi}\\delta _q+\\mathbf {W}_i^{\\prime }\\mathbf {\\beta }+\\gamma z_j]+\\epsilon _{ij},\\ \\ \\text{E}(\\epsilon _{ij})=0$ where $i$ indexes students, $j$ indexes schools, and $\\Lambda $ is either the logistic function or the identity function (OLS) depending on the specification.", "Covariates include the school-level treatment dummy $z_j$ , a vector of school-level controls $\\mathbf {X}_j$ , a vector of individual controls $\\mathbf {W}_i$ , and lagged test score dummies $d_{qi}$ with coefficients $\\delta _q$ .", "AL (2009) assert that the LZ standard error is biased downward and use Bell and McCaffrey's (2002) jackknife estimator in an attempt to address the finite sample bias of the LZ (1986) estimator.", "Two potential concerns here are i) that there is no guarantee that the bias of LZ is downward, as seen in our bias analysis exercise, and ii) that the BM standard error is also biased in finite samples.", "Thus, applying our crossfit estimator to their models serves as an excellent robustness check that alleviates the aforementioned concerns.", "We focus on the linear specifications where $\\Lambda $ is the identity function and reproduce AL's results in Table 2, panel A.", "AL (2009) draw two causal conclusions from this table: 1) there is evidence that the Achievement Awards program increased Bagrut rates in 2001, and 2) the estimated treatment effect comes mainly from girls, as suggested by the gender-specific regressions.", "However, they do note that most of their significant results are only “marginally significant”.", "A close examination of the p-values implied by the BM standard errors suggests that they mean significance levels around or below the 10% level.", "Inference based on our leave-cluster-out estimator supports their conclusion, with some additional insights.", "The two key specifications in this table are (1) SC + Q + M and (3) SC + Q + M + P. The p-values of the estimated coefficients for these two equations are noticeably lower than their BM counterparts: 0.0165 vs 0.0208 and 0.0274 vs 0.1055, respectively.", "One potential reason for BM being more conservative is that there are many discrete covariates.", "Even though the sample average of the leverage points is low (a low parameter-to-observation ratio), the regression design is still relatively imbalanced.", "This could result in the BM estimator being biased upward, as shown in the simulation." 
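To make the comparisons above easy to reproduce, the following is a minimal numpy sketch (ours, not the authors' code) of two of the estimators used throughout: the LZ cluster-robust sandwich of Section 2 and a leave-cluster-out crossfit variance in the spirit of Definition C.2. It uses the FWL representation of the OLS estimator from Section 3, obtains the leave-cluster-out residuals by refitting without cluster g (equivalent, by Lemma A.1, to applying the inverse of the corresponding block of the annihilator matrix to the full-sample residuals), and symmetrizes the crossfit moment with a factor of 1/2; that factor, the function names, and the toy data are our choices, not the paper's.

```python
import numpy as np

def residualize(A, W):
    """Apply the annihilator M = I - W (W'W)^{-1} W' to the columns of A."""
    coef, *_ = np.linalg.lstsq(W, A, rcond=None)
    return A - W @ coef

def cluster_variances(y, X, W, groups):
    """Return (beta_hat, V_LZ, V_LCOC) for the coefficients on X, clustering on `groups`."""
    vhat = residualize(X, W)                       # v-hat = M X (FWL partialling)
    bread = np.linalg.inv(vhat.T @ vhat)
    beta = bread @ (vhat.T @ y)                    # beta-hat = (v'v)^{-1} v'y
    Xt = np.column_stack([X, W])                   # full design (X, W)
    theta, *_ = np.linalg.lstsq(Xt, y, rcond=None)
    uhat = y - Xt @ theta                          # full-sample OLS residuals
    r = X.shape[1]
    meat_lz = np.zeros((r, r))
    meat_lcoc = np.zeros((r, r))
    for g in np.unique(groups):
        idx = groups == g
        vg, yg = vhat[idx], y[idx]
        a = vg.T @ uhat[idx]                       # LZ plug-in: uses u-hat_g u-hat_g'
        meat_lz += np.outer(a, a)
        theta_g, *_ = np.linalg.lstsq(Xt[~idx], y[~idx], rcond=None)
        u_out = yg - Xt[idx] @ theta_g             # leave-cluster-out residuals for cluster g
        b, c = vg.T @ yg, vg.T @ u_out
        meat_lcoc += 0.5 * (np.outer(b, c) + np.outer(c, b))   # symmetrized crossfit moment
    return beta, bread @ meat_lz @ bread, bread @ meat_lcoc @ bread

# Toy usage with a hypothetical clustered design: 50 clusters of 10 observations.
rng = np.random.default_rng(1)
G, m, k = 50, 10, 20
groups = np.repeat(np.arange(G), m)
X = rng.standard_normal((G * m, 1))
W = rng.standard_normal((G * m, k))
u = np.repeat(rng.standard_normal(G), m) + rng.standard_normal(G * m)   # within-cluster correlation
y = 0.5 * X[:, 0] + W @ (0.1 * rng.standard_normal(k)) + u
beta, V_lz, V_lcoc = cluster_variances(y, X, W, groups)
print(beta[0], np.sqrt(V_lz[0, 0]), np.sqrt(V_lcoc[0, 0]))
```

Since `groups` is simply one label per observation, the same sketch applies directly to state-level clustering of the kind used in the Donohue and Levitt replication below.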
], [ "Donohue and Levitt (2001)", "Donohue and Levitt (2001) concludes that legalized abortion has contributed significantly to crime reduction.", "They ran the following regression $\\ln (\\text{CRIME}_{st})=\\beta _1\\text{ABORT}_{st}+X_{st}\\Theta +\\gamma _s+\\lambda _t+\\epsilon _{st}$ where the left-hand-side variable is the logged crime rate per capita, ABORT$_{st}$ is the effective abortion rate for a given state, year and crime category, $X$ is a vector of state-level controls that includes prisoners and police per capita, a range of variables capturing state economic conditions, lagged state welfare generosity, the presence of concealed handgun laws, and per capita beer consumption.", "$\\gamma _s$ and $\\lambda _t$ represent state and year fixed effects.", "The cluster will be define at the state level.", "Clustering at the state level can be seen as a way to account for serial correlation in the sample.", "However, the cluster-robust standard error is biased due to the presence of serial correlation.", "If we assume textbook asymptotic setting, then the LCOC estimator can be seen as a finite sample bias correction to the cluster-robust standard error.", "In this model, it is reasonable to allow for high dimensional asymptotics due to the inclusion of many state-level characteristics.", "The state-specific fixed effects are not identified in the leave-cluster-out sample, so we get around this by applying a within-transformation to get rid of the state-specific fixed effects.", "The baseline model is thus $\\widetilde{\\ln (\\text{CRIME}_{st})}=\\beta _1\\widetilde{\\text{ABORT}}_{st}+\\tilde{X}_{st}\\Theta +\\tilde{\\lambda _t}+\\tilde{\\epsilon }_{st}.$ We estimate the model above using OLS and compute the LZ, LCOC and BM estimator.", "The results are essentially replication of Table IV in DL (2009).", "In line with their results, the coefficients $\\hat{\\beta }$ is negative for all crime types.", "In terms of the estimated variances, we see that the LZ estimator is the smallest across the board while the BM estimator is the largest across the board.", "The LCOC estimator is then exactly in the middle across the board.", "In terms of significance level, nothing is changed when we switch from the LZ to LCOC estimator.", "However, the coefficient for muder crime is no longer significant at 1% level when we switch from LZ to BM estimator.", "This suggests that the result of DL (2009) might be less robust for more serious crimes.", "Overall, the original results remain relatively insensitive to the choice of estimator used.", "If we look at the left subfigure in figure 3, the histogram of sample leverage points for this specification is very balanced where the average value of leverage points is low (red line) and the spread is also small.", "This suggests that the finite sample biases of the two estimators would be likely small and thus leading to small differences across the three estimators.", "Next, we consider a high dimensional specification that assumes the impact of the controls is time-varying.", "The model is given by $\\widetilde{\\ln (\\text{CRIME}_{st})}=\\beta _1\\widetilde{\\text{ABORT}}_{st}+\\tilde{X}_{st}\\Theta _t +\\tilde{\\lambda _t}+\\tilde{\\epsilon }_{st}$ this gives a ratio of parameter to observations equals to $\\frac{109}{624}\\approx 17.5\\%$ which is much larger than the ratio of the baseline model ($\\frac{21}{624}\\approx 3.4\\%$ ).", "The cofficient of interest $\\hat{\\beta }$ remains negative for all three crime types in this specification but 
the magnitudes are reduced by a non-trivial amount.", "This highlights the sensitivity to the choice of controls and supports the finding of Belloni, Chernozhukov and Hansen (2014).", "The estimated variances are larger than in the baseline across the board, with BM (LZ) again being the largest (smallest).", "Note that the empirical distribution of leverage points in the Levitt sample under the high dimensional specification is quite similar to the empirical distribution of leverage points in the simulated sample.", "Thus, it seems reasonable that we observe a pattern here similar to that of the Monte Carlo experiment.", "However, $\\hat{\\beta }$ is highly significant only for property crime across the three estimators.", "Violent crime is no longer significant at the 1% level for either LCOC or BM.", "Murder is no longer significant at the 10% level for BM but remains significant at the 10% level for both LZ and LCOC.", "This casts doubt on whether there is indeed a causal relationship between the abortion rate and more serious crimes such as murder and violent crime.", "One potential explanation is that the underlying unobserved dependence and heteroskedasticity differ across the crime types, which leads to different inferential results." ], [ "Conclusion", "To motivate the use of our bias-corrected cluster-robust variance estimator, we derive explicit bounds on the finite-sample bias of the cluster-robust standard error.", "The results show that the cluster-robust standard error is asymptotically unbiased if the maximum leverage vanishes asymptotically.", "Following KSS (2020)'s remark, we construct an unbiased variance estimator that is robust to cluster dependence under the high dimensional setting introduced by CJN (2018).", "This estimator can be seen as a bias-corrected cluster-robust variance estimator (White, 1984; Liang and Zeger, 1986).", "Monte Carlo results show that the LCOC estimator is unbiased but could be less precise depending on the empirical hat matrix.", "As empirical illustrations, the leave-cluster-out estimator is applied to Angrist and Lavy's (2009) study of the effects of a high school achievement award and Donohue and Levitt's (2001) study of the causal impact of legalized abortion on crime reduction." 
], [ "Proofs of Theorem 2.1 and 2.2", "We first prove Theorem 2.1.", "Recall the standard plugin estimator of $u_gu_g^{\\prime }$ is $\\hat{u}_g\\hat{u}_g^{\\prime }&=(y_g-\\tilde{X}_g\\hat{\\beta }_{OLS})(y_g-\\tilde{X}_g\\hat{\\beta }_{OLS})^{\\prime }\\\\&=[y_g-\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }y][y_g-\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }y]^{\\prime }\\\\&=y_gy_g^{\\prime }-\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }y y_g^{\\prime } -y_gy^{\\prime }\\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }+\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }yy^{\\prime }\\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }.$ Taking expectation on each of the last three terms above, we have $\\text{E}(\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }yy_g^{\\prime }|\\tilde{X})&=\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\text{E}(yy_g^{\\prime }|\\tilde{X})\\\\&=\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\text{E}[(\\tilde{X}\\beta )(\\tilde{X}_g\\beta )^{\\prime }+uu_g^{\\prime }|\\tilde{X})\\\\&=\\tilde{X}_g\\beta \\beta ^{\\prime } \\tilde{X}_g^{\\prime }+\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\underbrace{\\begin{pmatrix}0\\\\ \\Omega _g\\\\0\\end{pmatrix}}_{n\\times |g|},$ $\\text{E}(y_gy^{\\prime }\\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }|\\tilde{X})&=\\tilde{X}_g\\beta \\beta ^{\\prime } \\tilde{X}_g^{\\prime }+\\underbrace{\\begin{pmatrix}0&\\Omega _g &0\\end{pmatrix}}_{|g|\\times n}\\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime },$ and $\\text{E}[\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }yy^{\\prime }\\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }|\\tilde{X}]&=\\tilde{X}_g\\beta \\beta ^{\\prime }\\tilde{X}_g^{\\prime }+\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\Omega \\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }.$ Next, we look at the true cluster-specific covariance matrix $\\text{E}(u_gu_g^{\\prime }|\\tilde{X})&=\\text{E}[(y_g-\\tilde{X}_g\\beta )(y_g-\\tilde{X}_g\\beta )^{\\prime }|\\tilde{X}]\\\\&=\\text{E}[y_gy_g^{\\prime }-\\tilde{X}_g\\beta y_g^{\\prime }-y_g\\beta ^{\\prime }\\tilde{X}_g^{\\prime }-\\tilde{X}_g\\beta \\beta ^{\\prime }\\tilde{X}_g^{\\prime }|\\tilde{X}]\\\\&=E(y_gy_g^{\\prime }|\\tilde{X})-\\tilde{X}_g\\beta \\beta ^{\\prime }\\tilde{X}_g^{\\prime }-\\tilde{X}_g\\beta \\beta ^{\\prime } \\tilde{X}_g^{\\prime }+\\tilde{X}_g\\beta \\beta ^{\\prime } \\tilde{X}_g^{\\prime }\\\\&=E(y_gy_g^{\\prime }|\\tilde{X})-\\tilde{X}_g\\beta \\beta ^{\\prime }\\tilde{X}_g^{\\prime }$ Thus, the cluster-specific bias term is given by $B_g&=E(\\hat{u}_g\\hat{u}_g^{\\prime }|\\tilde{X})-E(u_gu_g^{\\prime }|\\tilde{X})\\\\&=\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\Omega \\tilde{X}^{\\prime }(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }-\\underbrace{\\begin{pmatrix}0&\\Omega _g&0\\end{pmatrix}}_{|g|\\times n}\\tilde{X}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g^{\\prime }-\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }\\underbrace{\\begin{pmatrix}0\\\\ \\Omega _g\\\\0\\end{pmatrix}}_{n\\times |g|}\\\\&=H_{g}\\Omega H_{g}^{\\prime }-(H_{g,g}\\Omega _g+\\Omega _gH_{g,g})$ where we 
define $H_{g}=\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}^{\\prime }$ and $H_{g,g}=\\tilde{X}_g(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g$ .", "The proprtionate bias is defined as $pb(\\widehat{\\text{Var}}_{\\text{cluster}})&=\\frac{w^{\\prime }\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}(\\sum _{g\\in G}\\tilde{X}_g^{\\prime }B_g\\tilde{X}_g)\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}{w^{\\prime }\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\Omega _g \\tilde{X}_g\\Big )\\Big (\\sum _{g\\in G}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}$ where $w$ is any non-zero vector that has the same dimension as $\\beta $ .", "Then $\\lambda _{\\min }(B_{g_{\\min }}\\Omega _{g_{\\min }}^{-1})\\le \\min _g\\Big (\\frac{w^{\\prime }\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}(\\tilde{X}_g^{\\prime }B_g\\tilde{X}_g)\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}{w^{\\prime }\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\Big (\\tilde{X}_g^{\\prime }\\Omega _g \\tilde{X}_g\\Big )\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}\\Big )\\\\\\le \\frac{w^{\\prime }\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}(\\sum ^{n_G}_gB_g)\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}{w^{\\prime }\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\Omega _g \\tilde{X}_g\\Big )\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}\\\\\\le \\max _g\\Big (\\frac{w^{\\prime }\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}(\\tilde{X}_g^{\\prime }B_g\\tilde{X}_g)\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}{w^{\\prime }\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}\\Big (\\tilde{X}_g^{\\prime }\\Omega _g \\tilde{X}_g\\Big )\\Big (\\sum ^{n_G}_{g=1}\\tilde{X}_g^{\\prime }\\tilde{X}_g\\Big )^{-1}w}\\Big )\\le \\lambda _{\\max }(B_{g_{\\max }}\\Omega _{g_{\\max }}^{-1})$ where the outer inequalities are given by a result about the ratio of quadratic formThis is just an appllication of the Courant-Fischer-Weyl min-max principle.", "See Rao p74 (1972) for the exact proof of the result used here..", "The inner inequalities are due to fact that $\\min _i\\frac{a_i}{b_i}\\le \\frac{\\sum ^n_{i=1}a_i}{\\sum ^n_{i=1}b_i}\\le \\max _i \\frac{a_i}{b_i}$ for any $b_i>0$ and $a_i\\in \\mathbb {R}$ .", "We now proceed to prove Theorem 2.2.", "WLOG, assume $B_g$ is positive definite for all $g$ , we have $\\lambda _{\\max }(B_{g_{\\max }}\\Omega _{g_{\\max }}^{-1})\\le \\lambda _{\\max }(B_{g_{\\max }})\\lambda _{\\max }(\\Omega _{g_{\\max }}^{-1})$ To get $\\lambda _{\\max }(B_{g_{\\max }})$ , first notice that $B_{g_{\\max }}$ is the sum of two Hermitian matrices and its maximum eigenvalue satisfies the Weyl's inequality $\\lambda _{\\max }(B_{g_{\\max }})\\le \\lambda _{\\max }(H_g\\Omega H_g^{\\prime })-\\lambda _{\\min }(H_{g,g}\\Omega _g+\\Omega _gH_{g,g})$ Then, apply the Gershgorin circle theorem on each of these two terms above $\\lambda _{\\min }(H_{g,g}\\Omega _g+\\Omega _gH_{g,g})&\\ge \\min _i\\Big ([H_{g,g}\\Omega _g+\\Omega _gH_{g,g}]_{ii}-\\sum ^{|g|}_{j\\ne i}\\Big |[H_{g,g}\\Omega _g+\\Omega _gH_{g,g}]_{ij}\\Big |\\Big )\\\\&=\\min _i\\Big 
(\\sum _{k\\in g}h_{ik}\\omega _{ki}+\\sum _{k\\in g}\\omega _{ik}h_{ki}-\\sum ^{|g|}_{j\\ne i}\\Big |\\sum _{k\\in g}h_{ik}\\omega _{kj}+\\sum _{k\\in g}\\omega _{ik}h_{kj}\\Big |\\Big )$ and $\\lambda _{\\max }(H_{g}\\Omega H_{g}^{\\prime })&\\le \\lambda _{\\max }(H_{g}H_g^{\\prime })\\lambda _{\\max }(\\Omega )\\\\&=\\lambda _{\\max }(H_{g,g})\\lambda _{\\max }(\\Omega )\\\\&\\le \\max _i \\Big ( [H_{g,g}]_{ii}+\\sum ^{|g|}_{j\\ne i} \\Big |[H_{g,g}]_{ij}\\Big |\\Big )\\lambda _{\\max }(\\Omega )\\\\$ If $\\lim _{n\\rightarrow \\infty } \\max _i h_{ii}= 0$ and $|g|=O(1)$ , then $\\lim _{n\\rightarrow \\infty }\\max _i \\Big ( [H_{g,g}]_{ii}+\\sum ^{|g|}_{j\\ne i} \\Big |[H_{g,g}]_{ij}\\Big |\\Big )=0$ because the off diagonal entries of the hat matrix are bounded by the diagonal entries (leverage points).", "Bounded cluster size is needed here.", "Since any row sum of the hat matrix is one and $\\sum ^n_{j=1}\\Big |[H_g]_{ij}\\Big |\\ge \\sum ^{n}_{j=1} [H_{g}]_{ij}=1$ , so the above might not converge if $|g|$ is unbounded.", "Follow the same proof strategy to obtain the result for the lower bound." ], [ "Proof Theorem 3.1", "We will prove here that $\\hat{\\Sigma }^{-1}=O_p(1)$ and $\\Sigma ^{-1/2}S\\xrightarrow{}N(0,1)$An asymptotic normality result was also provided in D’Adamo (2019).", "He assumes directly that $\\hat{\\Sigma }^{-1}=O_p(1)$ and claims that CJN (2018)'s results will follow automatically.", "Our asymptotic normality results do not assume this and thus hold under weaker assumptions..", "Following CJN (2018), we will asumme the dimension of $\\hat{\\beta }$ is 1.", "We first show that $\\hat{\\Sigma }^{-1}=O_p(1)$ .", "Define $\\hat{\\mathbf {v}}_g=(\\hat{v}_{g(1)},\\ldots , \\hat{v}_{g(|g|)})$ .", "Then $\\hat{\\Sigma }&=\\text{Var}(S_n|\\mathcal {X}_n,\\mathcal {W}_n)\\\\&=\\frac{1}{n}\\sum _{g\\in G}^n\\hat{\\mathbf {v}}_{g}^{\\prime } \\Omega _g \\hat{\\mathbf {v}}_{g}\\\\&\\ge \\frac{1}{n}\\sum _{g\\in G} \\lambda _{\\min }(\\Omega _g)\\Vert \\hat{\\mathbf {v}}_{g}\\Vert ^2\\\\&\\ge \\frac{1}{n}\\sum _{i=1}^n\\hat{v}_i^2 \\lambda _{\\min }(\\Omega )$ First note that $\\frac{1}{\\frac{1}{n}\\sum _{i=1}^n\\hat{v}_i^2}=O_p(1)$ by CJN (2018) Lemma SA-1 and $\\frac{1}{\\lambda _{\\min }(\\Omega )}=O_p(1)$ by our assumption.", "So $\\hat{\\Sigma }^{-1}\\le \\frac{1}{\\frac{1}{n}\\sum _{i=1}^n\\hat{v}_i^2}\\frac{1}{\\lambda _{\\min }(\\Omega )}=O_p(1) .$ We now show that $\\hat{\\Sigma }^{-1/2}S\\xrightarrow{}N(0,1)$ .", "Since $\\hat{\\Sigma }^{-1}S=\\frac{1}{\\sqrt{n}}\\sum _{g\\in G}\\hat{\\Sigma }_n^{-1/2}\\sum _{i\\in g}\\hat{v}_{i}u_{i}$ the above can be satisfied, if $&\\sup _{z\\in \\mathbb {R}}\\Big |\\text{Pr}\\Big (\\hat{\\Sigma }^{-1/2}S_n\\le z| \\mathcal {X}_n,\\mathcal {W}_n\\Big )-\\Phi (z)\\Big |\\\\\\le & \\min \\Big ( 1,\\frac{1}{n^{3/2}}E\\Big [ \\sum {g\\in G}\\Big |\\hat{\\Sigma }^{-1/2}\\sum _{i\\in g}\\hat{v}_{i}u_{i}\\Big |^3|\\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\Big )\\\\=&o_p(1),$ which follows from the conditional Berry-Esseen inequality.", "So $&\\frac{1}{n^{3/2}}\\sum _{g\\in G}E\\Big [\\Big |\\hat{\\Sigma }_n^{-1/2}\\sum _{i\\in g}\\hat{v}_{i}u_{i}\\Big |^3 \\ \\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\=&\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g \\in G}E\\Big [\\Big |\\sum _{i\\in g}\\hat{v}_{i}u_{i}\\Big |^{4\\cdot \\frac{3}{4}} \\ \\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\\\le &\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g\\in G}\\Big \\lbrace E\\Big [\\Big |\\sum _{i\\in g } \\hat{v}_{i}u_{i}\\Big |^4\\ |\\ \\mathcal 
{X}_n,\\mathcal {W}_n\\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g\\in G}\\Big \\lbrace E\\Big [(\\sum ^n_{i\\in g} \\hat{v}_{i}^2)^2(\\sum _{i\\in g}u_{i}^2)^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\Big \\rbrace ^{3/4}\\\\=&\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g \\in G}(\\sum _{i\\in g} \\hat{v}_{i}^2)^{3/2}\\Big \\lbrace \\sum _{i,j\\in g}E\\Big [u_{i}^2u_{j}^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g \\in G}(\\sum ^n_{i\\in g} \\hat{v}_{i}^2)^{3/2}\\Big \\lbrace |g|^2\\cdot \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\=&\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g \\in G}(\\sum _{i\\in g} \\hat{v}_{i}^2)^{3/2}|g|^{3/2}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g \\in G}\\Big (\\sqrt{\\sum _{i\\in g} \\hat{v}_{i}^4}\\sqrt{|g|}\\Big )^{3/2}|g|^{3/2}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\=&\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g \\in G}\\Big (\\sum _{i\\in g} \\hat{v}_{i}^4\\Big )^{3/4}|g|^{9/4}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &\\frac{1}{n^{3/2}}\\Sigma _n^{-3/2}\\sum _{g \\in G}\\Big (\\max _{i\\in g}\\hat{v}_i^2\\sum _{i\\in g} \\hat{v}_{i}^2\\Big )^{3/4}|g|^{9/4}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\=&\\Sigma _n^{-3/2}\\sum _{g \\in G}\\Big (\\frac{1}{n^2}\\max _{i\\in g}\\hat{v}_i^2\\sum _{i\\in g} \\hat{v}_{i}^2\\Big )^{3/4}|g|^{9/4}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\=&\\Sigma _n^{-3/2}\\sum _{g \\in G}\\Big (\\frac{\\max _{i\\in g}|\\hat{v}_i|}{\\sqrt{n}}\\Big )^{3/2}\\Big (\\sum _{i\\in g} \\frac{1}{n}\\hat{v}_{i}^2\\Big )^{3/4}|g|^{9/4}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &\\Sigma _n^{-3/2}\\Big (\\frac{\\max _{i}|\\hat{v}_i|}{\\sqrt{n}}\\Big )^{3/2}\\sum _{g \\in G}\\Big (|g|\\cdot \\frac{1}{n}\\max _{i\\in g}\\hat{v}_{i}^2\\Big )^{3/4}|g|^{9/4}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &\\Sigma _n^{-3/2}\\Big (\\frac{\\max _{i}|\\hat{v}_i|}{\\sqrt{n}}\\Big )^{3/2}\\sum _{g \\in G}\\Big (\\frac{1}{n}\\max _{i\\in g}\\hat{v}_{i}^2\\Big )|g|^{3}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &\\Sigma _n^{-3/2}\\Big (\\frac{\\max _{i}|\\hat{v}_i|}{\\sqrt{n}}\\Big )^{3/2}\\Big (\\frac{1}{n}\\sum _{i=1}^n\\hat{v}_{i}^2\\Big )|g|^{3}\\cdot \\Big \\lbrace \\max _{i\\in g}E\\Big [u_{i}^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n \\Big ]\\Big \\rbrace ^{3/4}\\\\\\le &o_p(1)$ where $\\frac{\\max _{1\\le i \\le N} |\\hat{v}_{i,n}|}{\\sqrt{n}}=o_p(1)$ and all other multiplicands are $O_p(1)$ .", "Remarks: (A.3) Apply Jensen inequality since $f(x)=x^{3/4}$ is a concave function for positive $x$ .", "(A.4) Apply Cauchy–Schwarz on $\\Big |\\sum _{i\\in g } \\hat{v}_{i}u_{i}\\Big |^4$ to get the bound $\\Big (\\sum _{i\\in g } \\hat{v}_{i}^2\\Big )^2\\Big (\\sum _{i\\in g 
}u_{i}^2\\Big )^2$ (A.5) Move the summation sign out of the expectation operator (A.6) Each $E(u_i^2u_j^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n)$ is bounded by $\\max _{i\\in g}E(u_i^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n)$ (A.8) Apply Cauchy–Schwarz on $(\\sum _{i\\in g } \\hat{v}_{i}^2\\cdot 1)^2$ to get the bound $(\\sum _{i\\in g } \\hat{v}_{i}^4)\\cdot |g|)$ (A.10) Note that $\\sum _{i\\in g}\\hat{v}_i^4\\le \\max _{i\\in g}\\hat{v}_i^2\\sum _{i\\in g}\\hat{v}_i^2$ (A.13) - (A.15) Note that $(\\max _{i\\in g}\\hat{v}_i^2/\\sqrt{n})^{3/2}\\le (\\max _{i}\\hat{v}_i^2/\\sqrt{n})^{3/2}$ and $\\sum _{g\\in G}\\Big (\\sum _{i\\in g}\\frac{1}{n}\\hat{v}_i^2\\Big )^{3/4}\\le \\sum _{g\\in G}\\Big (\\frac{|g|}{n}\\max _{i\\in g}\\hat{v}_i^2\\Big )^{3/4}\\le \\sum _{g\\in G}\\Big (\\frac{|g|}{n}\\max _{i\\in g}\\hat{v}_i^2\\Big ) \\le |g|\\cdot \\frac{1}{n}\\sum _{i=1}^n\\hat{v}_i^2$" ], [ "Proof of Theorem 3.3", " and $E(y_g\\hat{u}_g|\\tilde{X})=E(y_gy_g^{\\prime }|\\tilde{X})-\\tilde{X}_g\\beta \\beta ^{\\prime }\\tilde{X}_g^{\\prime }=\\Omega _g$" ], [ "Proof of Theorem 3.4", "Following Cattaneo, Jansson and Newey (2018), we let $\\dim (\\beta )=1$ to ease notation without loss of generality.", "Please refer to section 2.1 for the notations used throughout this section." ], [ "Notations and Lemmas", "We will now prove a few lemmas that would be used in the main proof later.", "Recall the LCOC estimator is given by $\\widehat{\\text{Var}}_{LCOC}(\\hat{\\beta }_{OLS})=(\\mathbf {\\hat{v}}^{\\prime }\\mathbf {\\hat{v}})^{-1}\\mathbf {\\hat{v}}^{\\prime } \\hat{\\Omega }_{LCOC}\\mathbf {\\hat{v}}(\\mathbf {\\hat{v}}^{\\prime }\\mathbf {\\hat{v}})^{-1}$ In particular, $\\mathbf {\\hat{v}}^{\\prime } \\hat{\\Omega }_{LCOC}\\mathbf {\\hat{v}}&=\\sum ^n_{i=1}\\sum _{j\\in g(i)}\\hat{v}_{i}(y_{i}\\hat{u}_{j,-g(i)})\\hat{v}_{j}\\\\&=\\sum ^n_{i=1}\\sum _{j\\in g(i)}\\hat{v}_{i}\\hat{v}_{j}(\\mu _{i}+\\epsilon _{i})\\hat{u}_{j,-g(i)}$ where $\\hat{u}_{i,-g(i)}&=y_{i}-\\tilde{x}_{i}\\begin{pmatrix}\\hat{\\gamma }_{-g(i),OLS}\\\\\\hat{\\beta }_{-g(i),OLS}\\end{pmatrix}\\\\&=y_{i}-\\tilde{x}_{i}\\begin{pmatrix}\\hat{\\gamma }_{OLS}\\\\\\hat{\\beta }_{OLS}\\end{pmatrix}+\\tilde{x}_{i}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_{g(i)}(I-\\tilde{X}_{g(i)}^{\\prime }(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_{g(i)}^{\\prime })^{-1}\\hat{u}_{g(i)}\\\\&=\\hat{u}_{i,n}+\\tilde{x}_{i}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_{g(i)}(I-\\tilde{X}_{g(i)}^{\\prime }(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_{g(i)}^{\\prime })^{-1}\\hat{u}_{g(i)}$ where the second line follows from the Woodbury matrix identity.", "Lemma A.1 The first lemma relates the leave-cluster-out (LCO) residuals to the OLS residuals: $\\hat{u}_{g,-g}=\\tilde{M}_{g,g}^{-1} \\hat{u}_{g},$ provided that $\\tilde{M}_{g,g}^{-1}$ exits.", "Proof $\\hat{u}_{g,-g}&=\\hat{u}_{g}+\\tilde{X}_{g}(\\tilde{X}^{\\prime }\\tilde{X})^{-1}\\tilde{X}_g ^{\\prime }\\hat{u}_{g}&& \\text{by the Wooldbury matrix identity}\\\\&=(I-\\tilde{X}_{g}\\tilde{X}^{\\prime }\\tilde{X}\\tilde{X}_{g}^{\\prime })^{-1}\\hat{u}_{g}\\\\&=\\tilde{M}_{g,g}^{-1} \\hat{u}_{g}$ Lemma A.2 The second lemma relates the LCO residuals to the true errors: $\\hat{u}_{g,-g}=(\\tilde{M}_n)_{g,g}^{-1}\\tilde{M}_{g,-g}\\epsilon _{-g} +u_g$ where $\\tilde{M}_{g,-g}$ is the submatrix of $\\tilde{M}_g$ after omitting columns relating to the cluster $g$ .", "Proof $\\hat{u}_{g,-g}&=\\tilde{M}_{g,g}^{-1} \\hat{u}_{g} \\\\&=\\tilde{M}_{g,g}^{-1} \\tilde{M}_g\\mathbf 
{u}\\\\&=(\\tilde{M}_n)_{g,g}^{-1}\\tilde{M}_{g,-g}u_{-g} +u_g$ Lemma A.3 $\\frac{1}{n}\\sum ^n_{i=1}\\hat{v}_i^2=O_p(1)$ Proof See appendix (page 2) of Jochmans (2020).", "Lemma A.4 $\\frac{1}{n}\\sum ^n_{i=1}\\hat{v}_i^4=O_p(1)$ Proof See appendix (page 6) of Jochmans (2020).", "Lemma A.5 $\\frac{1}{n}\\sum _{g\\in G}\\Vert \\hat{v}_g\\Vert ^2=O_p(1)$ Proof $\\frac{1}{n}\\sum _{g\\in G}\\Vert \\hat{v}_g\\Vert ^2=&\\frac{1}{n}\\sum _{g\\in G}\\sum _{i\\in g}\\hat{v}_i^2\\\\=&\\frac{1}{n}\\sum ^n_{i=1}\\hat{v}_i^2=O_p(1)$ Lemma A.6 $\\frac{1}{n}\\sum _{g\\in G}\\Vert \\hat{v}_g\\Vert ^4=O_p(1)$ Proof $\\frac{1}{n}\\sum _{g\\in G}\\Vert \\hat{v}_g\\Vert ^4=&\\frac{1}{n}\\sum _{g\\in G}(\\sum _{i\\in g}\\hat{v}_i^2)^2\\\\\\le & \\sum _{g\\in G} (\\Vert g\\Vert \\max _{i\\in g}\\hat{v}_i^2)^2\\\\=&\\frac{1}{n}\\max _g |g|^2\\sum _{g\\in G}\\max _{i\\in g}\\hat{v}_i^4\\\\\\le &\\max _g|g|^2\\frac{1}{n}\\sum ^n_{i=1}\\hat{v}_i^4\\\\=&O_p(1)O_p(1)=O_p(1)$ Lemma A.7 This lemma descibes the block structure of matrix $\\tilde{M}\\Omega \\tilde{M}$ .", "$\\tilde{M}\\Omega \\tilde{M}&=\\begin{pmatrix}\\tilde{M}_{g_1,g_1} & \\tilde{M}_{g_1,g_2} & \\cdots & \\tilde{M}_{g_1,g_{|G|}}\\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\tilde{M}_{g_{|G|},g_1} & \\tilde{M}_{g_{|G|},g_2} & \\cdots & \\tilde{M}_{g_{|G|},g_{|G|}}\\end{pmatrix}\\begin{pmatrix}\\Omega _{g_1} & 0 & \\cdots &0\\\\0& \\Omega _{g_2} & \\cdots &0\\\\\\vdots & \\vdots & \\ddots & \\vdots \\\\0 & 0 & \\cdots &\\Omega _{g_{|G|}}\\end{pmatrix}\\begin{pmatrix}\\tilde{M}_{g_1,g_1} & \\tilde{M}_{g_1,g_2} & \\cdots & \\tilde{M}_{g_1,g_{|G|}}\\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\tilde{M}_{g_{|G|},g_1} & \\tilde{M}_{g_{|G|},g_2} & \\cdots & \\tilde{M}_{g_{|G|},g_{|G|}}\\end{pmatrix}\\\\&=\\begin{pmatrix}\\tilde{M}_{g_1,g_1}\\Omega _{g_1} & \\tilde{M}_{g_1,g_2}\\Omega _{g_2} & \\cdots & \\tilde{M}_{g_1,g_{|G|}}\\Omega _{g_{|G|}}\\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\tilde{M}_{g_{|G|},g_1}\\Omega _{g_1}& \\tilde{M}_{g_{|G|},g_2} \\Omega _{g_2}& \\cdots & \\tilde{M}_{g_{|G|},g_{|G|}}\\Omega _{g_{|G|}}\\end{pmatrix}\\begin{pmatrix}\\tilde{M}_{g_1,g_1} & \\tilde{M}_{g_1,g_2} & \\cdots & \\tilde{M}_{g_1,g_{|G|}}\\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\tilde{M}_{g_{|G|},g_1} & \\tilde{M}_{g_{|G|},g_2} & \\cdots & \\tilde{M}_{g_{|G|},g_{|G|}}\\end{pmatrix}\\\\&=\\begin{pmatrix}\\sum _{i=1}^{|G|}\\tilde{M}_{g_1,g_i}\\Omega _{g_i}\\tilde{M}_{g_i,g_1} & \\sum _{i=1}^{|G|}\\tilde{M}_{g_1,g_i}\\Omega _{g_i}\\tilde{M}_{g_i,g_2}& \\cdots &\\sum _{i=1}^{|G|}\\tilde{M}_{g_1,g_i}\\Omega _{g_i}M_{g_i,g_{|G|}}\\\\\\sum _{i=1}^{|G|}\\tilde{M}_{g_2,g_i}\\Omega _{g_i}\\tilde{M}_{g_i,g_1} & \\ddots & \\cdots & \\vdots \\\\\\vdots & \\ddots & \\cdots & \\vdots \\\\\\sum _{i=1}^{|G|}\\tilde{M}_{g_{|G|},g_i}\\Omega _{g_i}\\tilde{M}_{g_i,g_1} & \\sum _{i=1}^{|G|}\\tilde{M}_{g_{|G|},g_i}\\Omega _{g_i}\\tilde{M}_{g_i,g_2}& \\cdots &\\sum _{i=1}^{|G|}\\tilde{M}_{g_{|G|},g_i}\\Omega _{g_i}\\tilde{M}_{g_{|G|},g_{|G|}}\\end{pmatrix}$" ], [ "Main Proof", "We need to prove $\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }(\\mu _{g}+u_{g})\\hat{u}_{g,-g}^{\\prime }\\hat{v}_{g}-\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }u_{g}^{\\prime }\\Omega _{g}u_{g}\\hat{v}_{g}=o_p(1)$ Equivalently, we will prove $&\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\mu _{g}\\hat{u}_{g}^{\\prime 
}\\tilde{M}^{-1}_{g,g}\\hat{v}_{g}=o_p(1)\\\\&\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_{g}\\hat{u}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}-\\Omega _{g}\\Big )\\hat{v}_{g}=o_p(1)$ Proof of Statement (A.1) Note that expression on the left of (REF ) has mean zero and, by the conditional Markov Inequality, it is sufficient to show $\\text{E}\\Big [\\Big (\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\mu _{g}\\hat{u}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}\\hat{v}_{g}\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=o_p(1)$ So $&\\text{E}\\Big [\\Big (\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\mu _{g}\\hat{u}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}\\hat{v}_{g}\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\text{E}\\Big [\\underbrace{\\hat{v}_{g}^{\\prime }\\mu _{g}}_{c_1}\\underbrace{\\hat{u}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}\\hat{v}_{g}}_{c_2}\\underbrace{\\hat{v}_{h}^{\\prime }\\mu _{h}}_{c_3}\\underbrace{\\hat{u}_{h}^{\\prime }\\tilde{M}^{-1}_{h,h}\\hat{v}_{h}}_{c_4}\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]&& c_1,c_2,c_3,c_4\\text{ are scalars and commute}\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\text{E}\\Big [\\underbrace{\\mu _{g}^{\\prime }\\hat{v}_{g}}_{c_1}\\underbrace{\\hat{v}_{g}\\tilde{M}^{-1}_{g,g}\\hat{u}_{g}}_{c_2}\\underbrace{\\hat{u}_{h}^{\\prime }\\tilde{M}^{-1}_{h,h}\\hat{v}_{h}}_{c_4}\\underbrace{\\hat{v}_{h}^{\\prime }\\mu _{h}}_{c_3} \\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]&& \\text{transpose and shuffle these scalars}\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\text{E}\\Big [\\mu _{g}^{\\prime }\\hat{v}_{g}\\hat{v}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}\\tilde{M}_gu u^{\\prime } \\tilde{M}_h^{\\prime }\\tilde{M}^{-1}_{h,h}\\hat{v}_{h}\\hat{v}_{h}^{\\prime }\\mu _{h} \\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\mu _{g}^{\\prime }\\hat{v}_{g}\\hat{v}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}\\tilde{M}_g\\Omega \\tilde{M}_h^{\\prime }\\tilde{M}^{-1}_{h,h}\\hat{v}_{h}\\hat{v}_{h}^{\\prime }\\mu _{h}\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\mu _{g}^{\\prime }\\hat{v}_{g}\\hat{v}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}(\\tilde{M}\\Omega \\tilde{M})_{g,h}^{\\prime }\\tilde{M}^{-1}_{h,h}\\hat{v}_{h}\\hat{v}_{h}^{\\prime }\\mu _{h}&&\\text{see lemma A.7.", "}\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}a_g^{\\prime }(\\tilde{M}\\Omega \\tilde{M})_{g,h}a_h\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\sum _{i\\in g}\\sum _{j\\in h}a_i[(\\tilde{M}\\Omega \\tilde{M})_{g,h}]_{ij}a_i\\\\\\ =&\\frac{1}{n^2}a_n^{\\prime }(\\tilde{M}\\Omega \\tilde{M})a_n$ where $a_g = \\mu _{g}^{\\prime }\\hat{v}_{g}\\hat{v}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}$ and $a_n^{\\prime }=(a_{g_1}^{\\prime },a_{g_2}^{\\prime },\\ldots ,a_{g_{|G|}}^{\\prime })$ .", "Continue with the argument $&\\frac{1}{n^2}a_n^{\\prime }(\\tilde{M}\\Omega \\tilde{M})a_n\\\\\\le &\\frac{1}{n^2}\\Vert a_n\\Vert ^2\\lambda _{\\max }(\\Omega ) &&\\text{ note that } \\Vert \\tilde{M}\\Vert ^2\\le 1\\\\= &\\frac{1}{n^2}\\sum _{g\\in G}\\Vert a_g\\Vert ^2\\lambda _{\\max }(\\Omega ) \\\\\\le &\\frac{1}{n^2}\\sum _{g\\in G} \\Vert \\mu _g\\Vert ^2\\Vert \\hat{v}_g\\Vert ^4\\Vert \\tilde{M}_{g,g}^{-1}\\Vert ^2\\lambda _{\\max }(\\Omega ) \\\\\\le & \\max _g|g|^3\\frac{1}{\\min _g \\lambda _{\\min }(\\tilde{M}_{g,g})^2}\\frac{\\max _i \\mu _i^2}{n}\\frac{\\sum _{g\\in G}\\Vert \\hat{v}_g\\Vert ^4 }{n}\\lambda _{\\max }(\\Omega ) 
\\\\=&O_P(1)O_P(1)o_p(1)O_P(1)O_P(1)=o_p(1)$ where we use following facts $\\Vert a_n\\Vert ^2=\\sum ^n_{i=1}a_i^22=\\sum _{g\\in G}\\Vert a_g\\Vert $ , $\\Vert a_g\\Vert ^2\\le \\Vert \\mu _g\\Vert ^2\\Vert \\hat{v}_g\\Vert ^4\\Vert \\tilde{M}_{g,g}^{-1}\\Vert ^2,$ and $\\Vert \\mu _g\\Vert ^2\\le \\max _g|g|\\max _i \\mu _i^2$ .", "Proof of Statement (A.2) First note that $\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_{g}\\hat{u}_{g}^{\\prime }\\tilde{M}^{-1}_{g,g}-\\Omega _{g}\\Big )\\hat{v}_{g}=\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime }-\\Omega _{g}\\Big )\\hat{v}_{g}+\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_{g}u_{-g}^{\\prime }\\tilde{M}_{g,-g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\Big )\\hat{v}_{g}$ where $\\text{E}\\Big [u_g u_g^{\\prime }-\\Omega _{g}\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\mathbf {0}$ and $\\text{E}\\Big [u_{g}u_{-g}^{\\prime }\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\mathbf {0}$ .", "So both terms on the right have mean zero and, by the conditional Markov Inequality, it is sufficient to show $&\\text{E}\\Big [\\Big (\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime }-\\Omega _{g}\\Big )\\hat{v}_{g}\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big )^2\\Big ]=o_p(1)\\\\&\\text{E}\\Big [\\Big (\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_{g}u_{-g}^{\\prime }\\tilde{M}_{g,-g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\Big )\\hat{v}_{g}\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=o_p(1).$ We first prove statement $(\\ref {eq:A3})$ $&\\frac{1}{n^2}\\text{E}\\Big [\\Big (\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime }-\\Omega _{g}\\Big )\\hat{v}_{g}\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\text{E}\\Big [\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime }-\\Omega _{g}\\Big )\\hat{v}_{g}\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime }-\\Omega _{g}\\Big )\\hat{v}_{g}\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]&&\\text{independence across $g$}\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\text{E}\\Big [\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime } \\Big )\\hat{v}_{g}\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime }\\Big )\\hat{v}_{g}\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]-\\frac{1}{n^2}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (\\Omega _g\\Big )\\hat{v}_{g}\\hat{v}_{g}^{\\prime }\\Big (\\Omega _g\\Big )\\hat{v}_{g}\\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\text{E}\\Big [\\Big (\\hat{v}_{g}^{\\prime }\\Big (u_g u_g^{\\prime } \\Big )\\hat{v}_{g}\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]-o_p(1)\\\\\\le &\\frac{1}{n^2}\\sum _{g\\in G}\\text{E}\\Big [\\Big (\\Vert \\hat{v}_{g}\\Vert ^2\\lambda _{\\max }(u_g u_g^{\\prime } )\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]-o_p(1)\\\\= &\\frac{1}{n^2}\\sum _{g\\in G}\\Vert \\hat{v}_{g}\\Vert ^2\\text{E}\\Big [\\Big (\\lambda _{\\max }(u_g u_g^{\\prime } )\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]-o_p(1)\\\\= &\\frac{1}{n^2}\\sum _{g\\in G}\\Vert \\hat{v}_{g}\\Vert ^2\\text{E}\\Big [\\Vert u_g\\Vert ^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]-o_p(1)\\\\= &\\frac{1}{n^2}\\sum _{g\\in G}\\Vert \\hat{v}_{g}\\Vert ^2\\sum _{i\\in g}\\sum _{j\\in g}\\text{E}\\Big [u_i^2u_j^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]-o_p(1)\\\\\\le &\\frac{1}{n}\\frac{1}{n}\\sum _{g\\in G}\\Vert \\hat{v}_{g}\\Vert ^2|g|^2\\max _{i\\in g}\\text{E}\\Big [u_i^4\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]-o_p(1)\\\\=& o_p(1)O_p(1)O_p(1)-o_p(1)=o_p(1)$ 
We now prove statement $(\\ref {eq:A4})$ .", "$&\\text{E}\\Big [\\Big (\\frac{1}{n}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_{g}u_{-g}^{\\prime }\\tilde{M}_{g,-g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\Big )\\hat{v}_{g}\\Big )^2\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\nonumber \\\\=&\\frac{1}{n^2}\\text{E}\\Big [\\Big (\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\Big (u_{g}u_{-g}^{\\prime }\\tilde{M}_{g,-g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\Big )\\hat{v}_{g}\\Big )\\Big (\\sum _{h\\in G}\\hat{v}_{h}^{\\prime }\\Big (u_{h}u_{-h}^{\\prime }\\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\Big )\\hat{v}_{h}\\Big )\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\nonumber \\\\=&\\frac{1}{n^2}\\text{E}\\Big [\\sum _{g\\in G}\\sum _{h\\in G}\\hat{v}_{g}^{\\prime }u_{g}u_{-g}^{\\prime }\\tilde{M}_{g,-g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\hat{v}_{g}\\hat{v}_{h}^{\\prime }u_{h}u_{-h}^{\\prime }\\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h}\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\nonumber \\\\=&\\frac{1}{n^2}\\text{E}\\Big [\\sum _{g\\in G}\\sum _{h\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}u_{-g}u_{g}^{\\prime }\\hat{v}_{g}\\hat{v}_{h}^{\\prime }u_{h}u_{-h}^{\\prime }\\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h}\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\nonumber \\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\text{E}\\Big [u_{-g}u_{g}^{\\prime }\\hat{v}_{g}\\hat{v}_{h}^{\\prime }u_{h}u_{-h}^{\\prime }\\ |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h}\\nonumber \\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\text{E}\\Big [ u_{-g}(\\sum _{i\\in g}\\hat{v}_iu_i) (\\sum _{i\\in h}\\hat{v}_iu_i)u_{-h}^{\\prime }|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h} \\nonumber \\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)u_{-g}u_{-h}^{\\prime }(\\sum _{i\\in g}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h} $ We can simplify the expectation term in the middle further.", "For $h\\ne g$ , $&\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)u_{-g}u_{-h}^{\\prime }(\\sum _{i\\in g}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\=&\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)\\begin{pmatrix}u_{\\tilde{g}_1}\\\\\\vdots \\\\u_h\\\\\\vdots \\\\u_{\\tilde{g}_{|G|-1}}\\end{pmatrix}\\begin{pmatrix}u_{g_1}^{\\prime }&\\ldots &u_g &\\ldots &u_{g_{|G|-1}}^{\\prime }\\end{pmatrix}(\\sum _{i\\in g}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\=&\\begin{pmatrix}0\\\\\\vdots \\\\\\text{E}\\Big [u_h(\\sum _{i\\in h}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\\\\\vdots \\\\0\\end{pmatrix}\\begin{pmatrix}0&\\ldots &\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)u_g^{\\prime } |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]&\\ldots &0\\end{pmatrix}$ This is because $&\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)u_{l}u_{\\tilde{l}}^{\\prime }(\\sum _{i\\in g}\\hat{v}_iu_i)|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\mathbf {0}$ where $\\tilde{l}\\ne h,g$ or $l\\ne h,g$ .", "To see this, if $\\tilde{l}\\ne h,g$ (and $l\\ne g$ ), then $\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)u_{l}u_{\\tilde{l}}^{\\prime 
}(\\sum _{i\\in g}\\hat{v}_iu_i)|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)u_{l}u_{\\tilde{l}}^{\\prime }|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\mathbf {0}$ Similarly, if $l\\ne h,g$ (and $\\tilde{l}\\ne h$ ), then $\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)u_{l}u_{\\tilde{l}}^{\\prime }(\\sum _{i\\in g}\\hat{v}_iu_i)|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\text{E}\\Big [u_{l}u_{\\tilde{l}}^{\\prime }(\\sum _{i\\in g}\\hat{v}_iu_i)|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\mathbf {0}$ For $h=g$ , the term becomes $\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)u_{-g}u_{-g}^{\\prime }(\\sum _{i\\in g}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]=\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)^2 |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\text{E}\\Big [\\Omega _{-g}|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]$ Pluging the above back into (REF ) and continue $&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\text{E}\\Big [(\\sum _{i\\in h}\\hat{v}_iu_i)u_{-g}u_{-h}^{\\prime }(\\sum _{i\\in g}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h}\\nonumber \\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)^2 |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\text{E}\\Big [\\Omega _{-g}|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{g,-g}\\tilde{M}_{g,g}^{-1}\\hat{v}_{g} \\nonumber \\\\&\\ + \\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G,h\\ne g}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\nonumber \\\\&\\times \\begin{pmatrix}0\\\\\\vdots \\\\\\text{E}\\Big [u_h(\\sum _{i\\in h}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\nonumber \\\\\\vdots \\\\0\\end{pmatrix}\\begin{pmatrix}0&\\ldots &\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)u_g^{\\prime } |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]&\\ldots &0\\end{pmatrix}\\nonumber \\\\&\\ \\times \\tilde{M}_{h,-h}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h}\\nonumber \\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)^2 |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\text{E}\\Big [\\Omega _{-g}|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{g,-g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\hat{v}_{g} \\\\&\\ + \\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G,h\\ne g}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\text{E}\\Big [u_h(\\sum _{i\\in h}\\hat{v}_iu_i) |\\ \\mathcal {X},\\mathcal {W}\\Big ]\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)u_g^{\\prime } |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{h,g}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h} $ We first show the summand on () is $o_p(1).$ $&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G,h\\ne g}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\text{E}\\Big [u_h(\\sum _{i\\in h}\\hat{v}_iu_i) |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)u_g^{\\prime } |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{h,g}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h} \\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G,h\\ne g}\\hat{v}_{g}^{\\prime 
}\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\Omega _{h}\\hat{v}_h\\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{h,g}^{\\prime }\\tilde{M}_{h,h}^{-1}\\hat{v}_{h} \\\\=&\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G,h\\ne g}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\Omega _{h}\\hat{v}_h\\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{g,h}\\tilde{M}_{h,h}^{-1}\\hat{v}_{h} \\\\\\le &\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\Big |\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\Omega _{h}\\hat{v}_h\\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{g,h}\\tilde{M}_{h,h}^{-1}\\hat{v}_{h} \\Big |\\\\\\le &\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\Vert \\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\Vert \\Vert \\Omega _{h}\\hat{v}_h\\Vert \\Vert \\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{g,h}\\Vert \\Vert \\tilde{M}_{h,h}^{-1}\\hat{v}_{h}\\Vert \\\\\\le &\\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\Vert \\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\Vert \\Vert \\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{g,h}\\Vert \\max \\Big \\lbrace \\Vert \\tilde{M}_{h,h}^{-1}\\hat{v}_{h}\\Vert ^2, \\Vert \\Omega _{h}\\hat{v}_h\\Vert ^2\\Big \\rbrace \\\\\\le &\\underbrace{\\max \\Big \\lbrace \\max _h \\Vert \\tilde{M}_{h,h}^{-1}\\Vert ^2, \\max _h\\Vert \\Omega _{h}\\Vert ^2\\Big \\rbrace }_{O_p(1)}\\max _h\\Vert \\hat{v}_h\\Vert \\frac{1}{n^2}\\sum _{g\\in G}\\sum _{h\\in G}\\underbrace{\\Vert \\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\Vert }_{a_h}\\underbrace{\\Vert \\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{g,h}\\Vert }_{b_h}\\\\\\le &O_p(1)\\max _h\\Vert \\hat{v}_h\\Vert \\frac{1}{n^2}\\sum _{g\\in G}\\sqrt{\\Big (\\sum _{h\\in G}\\underbrace{\\Vert \\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,h}\\Vert ^2}_{a_h}\\Big )\\Big (\\sum _{h\\in G}\\underbrace{\\Vert \\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{g,h}\\Vert ^2}_{b_h}\\Big )}\\\\\\le &O_p(1)\\max _h\\Vert \\hat{v}_h\\Vert \\frac{1}{n^2}\\sum _{g\\in G}\\sqrt{\\Big (\\underbrace{\\hat{v}_g^{\\prime }\\tilde{M}_{g,g}^{-1}\\Big (\\sum _{h\\in G}\\tilde{M}_{g,h}\\tilde{M}_{g,h}^{\\prime }\\Big )\\tilde{M}_{g,g}^{-1}\\hat{v}_g}_{a_h}\\Big )\\Big (\\underbrace{\\hat{v}_g^{\\prime }\\Omega _{g}\\Big (\\sum _{h\\in G}\\tilde{M}_{g,h}\\tilde{M}_{g,h}^{\\prime }\\Big )\\Omega _{g}\\hat{v}_g}_{b_h}\\Big )}&&\\text{by CS ineq.", "}\\\\\\le &O_p(1)\\max _h\\Vert \\hat{v}_h\\Vert \\frac{1}{n^2}\\sum _{g\\in G}\\sqrt{\\Big (\\hat{v}_g^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,g}\\tilde{M}_{g,g}^{-1}\\hat{v}_g\\Big )\\Big (\\hat{v}_g^{\\prime }\\Omega _{g}\\tilde{M}_{g,g}\\Omega _{g}\\hat{v}_g}\\Big )\\\\\\le &O_p(1)\\max _h\\Vert \\hat{v}_h\\Vert \\frac{1}{n^2}\\sum _{g\\in G}\\sqrt{\\Vert \\tilde{M}_{g,g}^{-1}\\Vert \\tilde{M}_{g,g}\\Vert \\Vert \\Omega _{g}\\Vert ^2\\Vert \\hat{v}_g\\Vert ^4}\\\\\\le &O_p(1)\\frac{\\max _h\\Vert \\hat{v}_h\\Vert }{n}\\sqrt{\\max _g\\lambda _{\\max }(\\tilde{M}_{g,g}^{-1})\\max _g\\lambda _{\\max }(\\tilde{M}_{g,g})\\max _g\\lambda _{\\max }(\\Omega _{g})^2}\\sum _{g\\in G}\\frac{\\Vert \\hat{v}_g\\Vert ^2}{n}\\\\=&O_p(1)o_p(1)O_p(1)O_p(1)=o_p(1)$ It remains to show that the summand on (REF ) is $o_p(1).$ $&\\frac{1}{n^2}\\sum _{g\\in G}\\hat{v}_{g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\tilde{M}_{g,-g}\\text{E}\\Big [(\\sum _{i\\in g}\\hat{v}_iu_i)^2 |\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\text{E}\\Big [\\Omega _{-g}|\\ \\mathcal {X}_n,\\mathcal {W}_n\\Big ]\\tilde{M}_{g,-g}^{\\prime }\\tilde{M}_{g,g}^{-1}\\hat{v}_{g} 
\\=&\frac{1}{n^2}\sum _{g\in G}\hat{v}_{g}^{\prime }\tilde{M}_{g,g}^{-1}\tilde{M}_{g,-g}\text{E}\Big [\hat{v}_g^{\prime }u_gu_g^{\prime }\hat{v}_g\ |\ \mathcal {X}_n,\mathcal {W}_n\Big ]\Omega _{-g}\tilde{M}_{g,-g}^{\prime }\tilde{M}_{g,g}^{-1}\hat{v}_{g} \\=&\frac{1}{n^2}\sum _{g\in G}\hat{v}_{g}^{\prime }\tilde{M}_{g,g}^{-1}\tilde{M}_{g,-g}\hat{v}_g^{\prime }\Omega _g\hat{v}_g\Omega _{-g}\tilde{M}_{g,-g}^{\prime }\tilde{M}_{g,g}^{-1}\hat{v}_{g} \\\le &\frac{1}{n^2}\sum _{g\in G}\Vert \hat{v}_{g}^{\prime }\tilde{M}_{g,g}^{-1}\tilde{M}_{g,-g}\Vert \Vert \hat{v}_g^{\prime }\Omega _g \Vert \Vert \hat{v}_g\Omega _{-g}\Vert \Vert \tilde{M}_{g,-g}^{\prime }\tilde{M}_{g,g}^{-1}\hat{v}_{g} \Vert \\\le &\frac{1}{n}\Vert \tilde{M}_{g,g}^{-1}\Vert ^2\Vert \Omega \Vert ^2\frac{\sum _{g\in G}\Vert \hat{v}_{g} \Vert ^4}{n}\\=&o_p(1)O_p(1)O_p(1)O_p(1)=o_p(1)$ Hence, the consistency result of our estimator is proved." ], [ "Results of Empirical Illustrations", "Replication of Angrist and Lavy (2009), Table 2, panel A Table: NO_CAPTION Replication of Levitt (2002), Table IV Table: NO_CAPTION" ], [ "Monte Carlo Results", " Figure: Histogram of leverage points of the MC sample" ] ]
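As a quick sanity check on the block algebra used above, the statement of Lemma A.7, namely that the $(g,h)$ block of $\tilde{M}\Omega \tilde{M}$ equals $\sum _{i}\tilde{M}_{g,g_i}\Omega _{g_i}\tilde{M}_{g_i,h}$ when $\Omega $ is block-diagonal, can be verified numerically. The following minimal NumPy sketch uses arbitrary group sizes and generic random matrices as stand-ins for $\tilde{M}$ and $\Omega $; it is an illustration only, not part of the estimator.

import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 3, 4]                      # hypothetical group sizes |g_1|, |g_2|, |g_3|
edges = np.cumsum([0] + sizes)         # block boundaries within the stacked vector
n = edges[-1]

M = rng.standard_normal((n, n))
M = M + M.T                            # a generic symmetric matrix standing in for M-tilde

Omega = np.zeros((n, n))               # block-diagonal Omega: one PSD block per group
for a, b in zip(edges[:-1], edges[1:]):
    A = rng.standard_normal((b - a, b - a))
    Omega[a:b, a:b] = A @ A.T

full = M @ Omega @ M                   # the matrix whose blocks Lemma A.7 describes

g, h = 0, 2                            # compare one (g, h) block with the sum over groups
lhs = full[edges[g]:edges[g + 1], edges[h]:edges[h + 1]]
rhs = sum(M[edges[g]:edges[g + 1], edges[i]:edges[i + 1]]
          @ Omega[edges[i]:edges[i + 1], edges[i]:edges[i + 1]]
          @ M[edges[i]:edges[i + 1], edges[h]:edges[h + 1]]
          for i in range(len(sizes)))
print(np.allclose(lhs, rhs))           # expected: True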
2212.05554
[ [ "A systematic literature review on Robotic Process Automation security" ], [ "Abstract The technocrat epoch is overflowing with new technologies and such cutting-edge facilities accompany the risks and pitfalls.", "Robotic process automation is another innovation that empowers the computerization of high-volume, manual, repeatable, everyday practice, rule-based, and unmotivating human errands.", "The principal objective of Robotic Process Automation is to supplant monotonous human errands with a virtual labor force or a computerized specialist playing out a similar work as the human laborer used to perform.", "This permits human laborers to zero in on troublesome undertakings and critical thinking.", "Robotic Process Automation instruments are viewed as straightforward and strong for explicit business process computerization.", "Robotic Process Automation comprises intelligence to decide if a process should occur.", "It has the capability to analyze the data presented and provide a decision based on the logic parameters set in place by the developer.", "Moreover, it does not demand for system integration, like other forms of automation.", "Be that as it may since the innovation is yet arising, the Robotic Process Automation faces a few difficulties during the execution." ], [ "Introduction", "Software called robotic process automation (RPA) makes it simple to create, use, and manage software robots that mimic how people interact with computers and software [1], [2].In other words, Robotic process automation (RPA), an automation technology that enables organisations to partially or entirely automate routine processes, is controlled by business logic and structured inputs.", "Robotic process automation software robots, often known as \"bots,\" can imitate human actions to execute tasks like data entering, transaction processing, reaction triggering, and interacting with other digital systems.", "Robotic Process Automation systems come in a variety of forms, ranging from straightforward online \"chat bots\" that can respond to common questions to massive deployments of thousands of bots that can automate tasks like credit card processing and fraud detection [3].", "Robotic process automation is a procedure that uses the artificial intelligence and machine learning capabilities to handle the high-volume data task effectively.", "Distinct steps are included in Robotic Process Automation such as discovery, design, development, testing, and production or deployment.", "In the automation process, each phase has a prevailing impact.", "Finding the processes that can be automated is the goal of the discovery phase.", "To find the ideal candidate for automation, technical and business feasibility studies are conducted.", "The design phase includes creating the various process steps.", "Business analysts and solution architects, respectively, draught the process design document (PDD) and the solution design document(SDD).", "Then, bots based on process design document and Solution Design Document are being developed by developers.", "They even run unit tests to ensure that the development is proceeding properly during the development stage.", "The testing team can now use various test cases to do System Integration Testing (SIT) to test the BOT, and after it passes, either the testing team or the business team can undertake User Acceptance Testing (UAT).", "The code is deployed to the production environment once it has undergone testing and received approval from User Acceptance Testing (UAT) and 
System Integration Testing (SIT).", "The deployment phase is entered after the initial runs on the production environment.", "Robotic Process Automation uses tools in order to implement task like software application.", "It can be stated that Robotic Process Automation is an error-free and risk-free process which can get more customer satisfaction and Return of Investment.", "On the financial and organizational perspective, it provides an aid in depleting the training cost, labour cost and boosting capabilities along with saving time.", "There are distinct sectors where Robotic Process Automation can work effectively like banking, human resources and Customer Relationship Management.", "Regardless of the perks, there are two main risks associated with Robotic Process Automation such as data leakage and fraud.", "Lacking in adequate security measures, the sensitive data that Robotic Process Automation bot credentials or customer data that Robotic Process Automation handles, can be exposed to attackers.", "To mitigate the security failures in Robotic Process Automation projects, security and risk management leaders need to follow a four-step action plan which consists of ensuring accountability for bot actions, avoiding abuse and fraud, protecting log integrity and enabling secure Robotic Process Automation development [4], [5], [6] ." ], [ "Prior Research", "To our knowledge, relatively few Systematic Literature Reviews have been conducted specifically on the use of block chain to address the issue of cyber security (SLRs) [7].", "In the area of robotic process automation in cyber security, one of the most current survey study was completed by [8] Robotic Process Automation cannot yet completely substitute human labour.", "Automation is limited to straightforward, predictable tasks.", "Whenever a specific situation arises for which the rule set does not provide an appropriate answer.", "Escalation to a human supervisor is possible thanks to robots.", "This is particularly true in modern application situations where even expected activities suddenly less predictable as a result of the massive amounts of data and events generated in these circumstances that could impact how they are implemented.", "Another study conducted for in-depth research on Robotic Process Automation is displayed in the book [9] Instead of using robots to perform human jobs, robotic process automation uses software.", "Robotic Process Automation has recently gained popularity because it can automate repetitive and high-volume processes in order to reduce manual effort and errors and increase productivity.", "That is to say.", "By lowering errors, unsettling behaviour, conserving resources, and balancing variance, Robotic Process Automation promotes higher operational efficiency.", "This idea is adaptable and flexible.", "It facilitates seamless integration with already-in-place procedures and aids in lowering costs, maintaining quality, accelerating delivery times, and improving customer satisfaction.", "Robotic Process Automation, as it is used in practise, enables bots, or specifically created software programmes, to take over various complex processes and effectively carry out activities that are typically performed by humans.", "These include inventory and supply chain management, operational tasks, procure-to-pay processing, and data extraction and management.", "However, just like any other technology, Robotic Process Automation has security problems and obstacles, some of which are shown in Figure 1.", "In order 
to ensure that there is no wrongdoing that could result in errors and harm, robots, or bots, must be constantly monitored at multiple levels.", "In addition, risk rises as the number of variables increases.", "Bots give malicious actors another attack vector when they are integrated into the system.", "This stands out in terms of data privacy, or the improper use of data.", "Cyber criminals can, for example, use malicious software to gain unauthorised access to bot systems in order to steal sensitive user data and information.", "Additionally, since bots work quickly, they might be able to continue processing data in the event of a breach with a delayed response, even though they shouldn't.", "It may result in erroneous and faulty data.", "Although intelligent, bots are not infallible.", "Since they are not designed for intent identification, it may be difficult to identify a security compromise.", "The usage of Robotic Process Automation frameworks may expose enterprises to new kinds of online vulnerabilities.", "If Robotic Process Automation is not in compliance with regulations provided by regulatory organisations, their inclusion into operational and corporate activities may result in penalties.", "This cost of non-compliance results from the system being more complex.", "Figure: NO_CAPTION Figure-1" ], [ "Research Goals", "This study's objectives are to review prior research, summarise its conclusions, and concentrate on robotic process automation for Cyber security." ], [ "Contributions and Layout", "In order to advance the work of people with an interest in robotic process automation and cyber security, this SLR complements existing research and offers the following additions: Table: Research Questions Up until mid-2022, we identified 35 primary research publications and documents connected to robotic process automation.", "This SLR may be cited by other researchers to further their work.", "We narrowed down these 35 research papers that were chosen to 23 that perfectly matches this SLR.", "These articles can be consulted for advice on any inference.", "These 23 research articles and materials served as the basis for a thorough examination that led to the thoughts, deductions, and conclusions we reached on the subject of robotic process automation in cyber security.", "By adhering to these documents, we suggest developing a standard to offer assistance in any research projects including robotic process automation in cyber security.", "The format of this article is as follows: The techniques used to choose the primary studies for analysis in a methodical manner are described in [sec:ResMeth]Section 2.", "The results of all the primary research chosen are presented in [sec:fndgs]Section 3.", "[sec:discs]Section 4 addresses the conclusions in relation to the earlier-presented study questions.", "[sec:conclandfut]Section 5 wraps up the study and makes some recommendations for additional research." ], [ "Research Methodology", "To accomplish the goal of addressing the exploration questions, we led the Systematic Literature Review.", "We looked to travel through the preparation, directing, and detailing periods of the survey in emphasis to consider a careful assessment of the Systematic Literature Review." 
], [ "Selection of Primary Studies", "Essential investigations were featured by passing watchwords to the inquiry office of a specific distribution or web index.", "The catchphrases were chosen to advance the rise of examination results that would help with responding to the exploration questions.", "The platform that were used to make the search are: Google Scholar Science Direct SpringerLink Association for Computing Machinery IEEE Xplore digital Library arXiv e-Print Archive Research Gate Social Science Research Network(SSRN) [13] [14]." ], [ "Inclusion and Exclusion Criteria", "Studies to be remembered for this Systematic Literature Review should report observational discoveries and could be papers on contextual investigations, new specialized blockchain applications also, and critiques on the improvement of existing security components through blockchain coordination [15], .", "They should be peer-checked and written in English.", "Any outcomes from Google Researcher will be checked for consistency with these measures as there is an opportunity for Google Researcher to return lower-grade papers.", "Just the latest variant of a review will be remembered for this Systematic Literature Review.The key inclusion and exclusion are depicted in the [tab:table2]Table 2.", "Table: Inclusion and Exclusion criteria" ], [ "Selection Results", "There was a sum of 187 investigations recognized from the underlying watchword look through the chosen stages.", "This was diminished to 160 later eliminating copy studies.", "In the wake of really looking at the examinations under the consideration/rejection rules, the number of papers staying for perusing was 25.", "The 25 papers were perused in full with the consideration/prohibition rules being re-applied, and 20 papers remained.", "Forward and reverse compounding distinguished an extra 5 and 5 papers separately, giving the last figure for the number of papers to be remembered for this Systematic Literature Review as 22." 
], [ "Quality Assessment", "An appraisal of the nature of essential investigations was made concurring with the direction.", "This took into consideration an appraisal of the significance of the papers to the examination questions, with thought to any indications of exploration predisposition and the legitimacy of trial information.", "Five arbitrarily chosen papers were exposed to the accompanying quality evaluation cycle to really take a look at their viability[16]: Stage-1: Robotic Process Automation.", "The paper should be centered around the utilization of Robotic Process Automation or the utilization of Robotic Process Automation innovation to a particular an issue very much remarked.", "Stage-2: Context.", "Enough setting should be accommodated in the exploration goals and discoveries.", "This will take into consideration an exact understanding of the exploration.", "Stage-3 Robotic Process Automation application.", "There should be an adequate number of subtleties in the study to make a precise show of how the innovation has been applied to a particular issue.", "Stage-4 Security factors.", "The paper should give clarification to the security issue.", "Stage-5 Robotic Process Automation Performance.", "Surveying the presentation of Robotic Process Automation in the climate for which it is applied will permit for correlations of various Robotic Process Automation applications.", "Stage-6 Data Gathering.", "Insights regarding how the information was procured, estimated and announced should be given to deciding exactness.", "[17]" ], [ "Data Extraction", "The information extraction should be applied to all finding that has finished the quality evaluation assessment that can be seen in the [fig:figure2]Figure2.", "Be that as it may, to check regardless of whether the strategy of information extraction is appropriate, we applied this cycle to the underlying five discoveries.", "Once, we come by the ideal outcome, the information extraction process is applied to all articles.", "Context Data: The information about the point and extreme objective of the review.", "Qualitative Data: The records and inductions are given by creators and looked into by peers.", "Quantitative Data:The information gathered in light of any measurements or any exploratory examinations." 
], [ "Data Analysis", "To meet the goal of addressing the exploration questions, we incorporated the information held inside the subjective and quantitative information classes.", "Furthermore, we directed a meta-examination of those papers that were exposed to the last information extraction process.", "Figure: NO_CAPTION Figure-2 Figure: NO_CAPTION Figure-3" ], [ "Significant word Counts", "To sum up, the normal subjects among the chosen essential investigations, an examination of watchwords was performed across each of the 22 studies.", "[tab:table3]Table 3 shows the times a few explicit words showed up in the essential examinations in general.", "As should be visible in the table, barring the watchwords chosen by the creator, i.e., \"Robotic Process Automation\" and \"security\", the third catchphrase showing up most often in our dataset is \"Robotic Process Automation\", later \"Security\" and \"Bots\".", "This shows rising interest in the reception of Robotic Process Automation with regard to security.", "Table: Keyword frequencies in Primary StudiesThe aforementioned points can be considered as some of the major pitfalls in the field of Robotic Process Automation in cyber security.", "Therefore, prominent solutions can be applied to mitigate the concerned issues which can in turn provide more secured data and better security.", "This research paper will mainly focus on finding the considerable solutions for the proposed complications that can be a hindrance in securing the data that are really crucial for the organizations." ], [ "Findings", "Each paper's ideas and pertinent research have been examined and are provided in These studies were conducted with an emphasis towards common threats [18], [19], issues with Robotic Process Automation, and potential fixes that might be provided using cyber security or other cutting-edge emerging technologies.", "Additionally, each document was divided into a distinct category to make the analysis simpler.", "For instance, initial research on Robotic Process Automation security is included under security.", "Similar to that, a study that focuses on Robotic Process Automation application falls under the heading of Robotic Process Automation application.", "Additionally, the study focuses on employing various techniques to find a defence against various Robotic Process Automation attacks.", "Thus, it is evident that approximately (48%) of the documents are concerned with the security of robotic process automation.", "Additionally, for Robotic Process Automation solutions and Robotic Process Automation development, we received (28%) and (32%) respectively.", "The provided [tab:table4]Table4 shows the statistics related to these discoveries.", "The research comprises a study of Robotic Process Automation security, an examination of the dangers and attacks that affect Robotic Process Automation security, and any workable conclusions that might help to strengthen security.", "Studies that concentrate on the applications of Robotic Process Automation include information on the need for automation, challenges that can be solved with Robotic Process Automation, how network security will be boosted by Robotic Process Automation, and other topics.", "On the other hand, if cutting-edge and new technology, like machine learning, can be employed to solve the problem, it can be handled better." 
], [ "Discussions", "Initial keyword searches reveal that there are a considerable number of publications that are connected to Robotic Process Automation.", "Robotic Process Automation technology and truly distributed, decentralised systems have just recently been invented and are still in their infancy.", "The chosen main studies contain a significant proportion of experimental hypotheses or notions with limited quantitative information and few applications to real-world situations.", "The initial discoveries were many, focused on what Robotic Process Automation is, why Robotic Process Automation is required, and what issues Robotic Process Automation can resolve.", "This was done while searching with the security of Robotic Process Automation.", "Due to its ability to accelerate business growth by eliminating a significant amount of manual and repetitive work, Robotic Process Automation has recently attracted a lot of interest from both the commercial and academic worlds.", "But there are still a lot of difficulties with Robotic Process Automation implementation right now.", "The inability to analyse process priorities (40%),absence of risk management tools (28%), insufficient internal staff skills (24%), and lack of urgency (23%), among others, are issues at the organisational structure level, according to the Global Robotic Process Automation Survey 2019 study[20], which is represented in [fig:figure4]Figure 4.", "Information and data security (40% of the technical risk), scaling challenges (37%), and choosing an appropriate development platform (30%).", "Inappropriate application scenarios (32%), increased implementation costs (37%), and external legal and regulatory constraints (30%) are some of the financial and regulatory factors.", "A further discussion on these challenges is presented in [fig:figure5]Figure 5[21].", "Figure: NO_CAPTION Figure-4 Figure: NO_CAPTIONFigure: NO_CAPTIONFigure: NO_CAPTION Figure-5" ], [ "RQ1:What are the drawbacks of using Robotic Process Automation in cyber security? 
", "It is crucial to emphasise that the purpose of this systematic literature review is to find solutions to the problems caused by Robotic Process Automation's use in cyber security, not to focus on its benefits.", "Robotic process automation (RPA) adoption has many implications.", "However, it also faces a myriad of challenges, such as cyber threats.", "Data theft, abuse of privileged access, and denial-of-service are frequent and developing Robotic Process Automation growth restrictions that present serious threats to businesses.", "Robotic Process Automation carries security vulnerabilities that constitute careful planning for prevention.", "Because of the vulnerability of the technology to cyberattacks, organisations are at risk.", "It is important for a robotic automation system to take into account both the business process and the security concerns because firms are gradually embracing Robotic Process Automation-based technology as a digital strategy for business development.", "This will allow for the implementation of safety measures and checks.", "Since Robotic Process Automation credentials are frequently exchanged and utilised, the first thing that needs to be ensured is that they are not left unmodified and unprotected.", "This could expose the system to a future Cyber attack wherein the passwords are stolen and utilized for malicious reasons[10].", "However, in order to maximise their technological investments, business leaders must also comprehend and evaluate the possible hazards of Robotic Process Automation.", "Although Robotic Process Automation can drive innovation and optimise competitiveness, organisations frequently establish unreasonable goals and expectations for Robotic Process Automation implementation, or misuse it for a one-off, isolated area.", "These factors can lower profitability, damage employee productivity, and disrupt company workflows.", "As a result, any Robotic Process Automation efforts suffer from under-resourcing because Robotic Process Automation is unable to live up to its promise of delivering greater value.", "Organizations that just use Robotic Process Automation to reduce costs by lowering FTE manpower rather than utilising it to innovate and improve how work is done, lack any strategic goal or end-point design in their Robotic Process Automation projects.", "A sound, future-proof target operating model must be put in place, and the appropriate intelligent process automation tools must be used to reduce the risk associated with Robotic Process Automation.", "approach[22]." 
], [ "RQ2:What can be considered as some of the best Robotic Process Automation practices to mitigate the risk in cyber security?", "Some Robotic Process Automation security best practices are summarised below.", "Choose Robotic Process Automation carefully.", "Robotic Process Automation developers vary greatly from one another.", "Information security needs to be taken into account along with functional specifications when choosing a new Robotic Process Automation technology.", "Malicious code or security flaws could be present in a bot with inadequate coding[17].", "Create a security governance framework for Robotic Process Automation.", "Regular risk analyses and audits of Robotic Process Automation processing activities must be part of an Robotic Process Automation governance structure.", "Employees in charge of the Robotic Process Automation must have a comprehensive understanding of their security obligations, which include restricting access to the environment, logging and tracking activity, and more[23].", "Avoid using hard-coded access rights.", "Robot scripts must replace all hard-coded access permissions with API calls, with each request linked directly to the required access privileges kept in a single repository.", "An additional layer of defence is added, decreasing the likelihood of an attack [24].", "Build in-error handling Automation can be halted by errors like unsuccessful login attempts, missing directories, or running out of disc space.", "Automation may also be slowed down by glitches like a timed-out application, inaccurate data, or a new screen inside the application.", "Workflows should include error handling for this reason.", "An Robotic Process Automation programmer should programme the automation to manage the exception and respond appropriately depending on the type of exception that happens, whether it is a business or application exception.", "For instance, the Robotic Process Automation bot should log the business error and set up the environment to handle queue item number three if it happens on item number two in the queue.", "The bot should bounce back from errors and keep working through all the transactions[25]." 
], [ "RQ3:What can be the best use of the Robotic Process Automation tools in cyber-security?", "Based on the features they offer and the feedback from users, the following manufacturers offer some of the best Robotic Process Automation solutions available on the market.", "UiPath With the aim of streamlining, accelerating, and optimising digital transformation for businesses, UiPath is an amazing and user-friendly Robotic Process Automation platform that enables users to automate their manual operations fast and efficiently.", "By automatically analysing a company's operations, UiPath can decide which ones should be automated.", "In addition to automating mundane tasks like data entry, email marketing, and site scraping, it also takes care of recurring obligations like notice, documentation, and set-up follow-ups.", "UiPath provides capabilities like encryption and role-based access control in addition to automation that is simple to set up.", "It can also manage processes of any size or complexity[26].", "Blue Prism An Robotic Process Automation tool called Blue Prism has the ability to create a software-powered virtual workforce.", "This enables businesses to automate business processes in a flexible and economical way.", "The application features a visual designer with drag-and-drop functionality and is based on the Java programming language.", "Blue Prism provides a visual designer that is free of any interference, recorders, or scripts [27].", "Kofax The workflows are referred to as robots in Kofax Robotic Process Automation.", "You are free to explore the applications as you combine them while building a robot.", "You can log into programmes, extract data from pages, fill out forms or search boxes with information, choose options from menus, and scroll through numerous pages.", "Additionally, your robot has access to databases, files, web services, APIs, and other robots.", "It can export data from one application and load it into another, altering it as necessary in the process.", "You can automate Windows and Java programmes on your network devices with the help of Kofax Robotic Process Automation's Desktop Automation feature.", "Desktop automation replaces manual operations by invoking a desktop or terminal application [28].", "Pega Robotic Process Automation Organizations may automate those tiresome, time-consuming manual operations using Pega Robotic Process Automation (RPA).", "Robotics connects old systems and gets away of tedious manual data entry.", "By converting manual processes to digital ones, low-value operations become much more predictable, allowing workers to concentrate on more strategic responsibilities.", "Businesses may use bots that produce dependable outcomes by using the power of Pega Robotic Process Automation.", "Using a visual interface, workflows can be rapidly and effectively drawn out and updated as your organisation grows [29]." 
], [ "Future research directions of Robotic Process Automation ", "We provide the following research directions for robotic process automation for cyber security that need additional study based on the findings of this survey and our observations: Integration of additional tools and SPA Artificial intelligence and machine learning are becoming a part of Robotic Process Automation.", "We may anticipate that in the near future, Robotic Process Automation will support both straightforward judgment-based automation and the processing of unstructured data.", "This will assist Robotic Process Automation in moving past rule-based technology.", "Robotic Process Automation will increasingly be integrated with other tools and technologies as businesses embrace it to automate their activities.", "To improve the features and simplify automation, other tools will be incorporated with it.", "Smart Process Automation is the abbreviation.", "Robotic Process Automation is currently having some difficulty automating the process of handling unstructured data.", "The unstructured data process will be automated with the aid of SPA, which combines a number of various technologies like machine learning, AI, and cloud technology[19], [30], [31].", "RPA's effective evolution Robotic Process Automation will eventually be able to recognise and enhance processes within and across your systems without the need for human interaction as a result of technological improvements.", "In other words, your company will be able to completely get in front of processes rather than just automate them.", "Process management and Robotic Process Automation will soon be used interchangeably.", "The automation perspective will be applied to every business function.", "Leading analysts forecast that Robotic Process Automation will soon become a common tool for boosting productivity.", "The Robotic Process Automation tool's ability to work alongside intelligent enterprise automation, a group of integrated technologies that may include intelligent capture, artificial intelligence, machine learning, case management, workflow, low-code capabilities, and cloud-based content services, will be a key differentiator[32], .", "Boosting security with RPA Software robots that automate manual tasks can increase productivity, decrease errors, increase income, and provide a host of other advantages for businesses.", "But the use of Robotic Process Automation in cybersecurity is one of its most attractive and significant applications.", "By automating many of the manual processes that these professionals still utilise, Robotic Process Automation may have a big positive impact while enabling them to contribute their own expertise and insight when it matters most.", "Robotic Process Automation may, of course, offer a significant level of automation to the overall cybersecurity workflow, but it's also critical to make sure the Robotic Process Automation platform is safe.", "The platform should also work well with other security measures already in place, such as user authentication and permission systems, to guarantee the security of any manual activities it automates." 
], [ "Conclusion and Future Work", "When implementing RPA, security must be considered; it cannot be added as a \"bolt-on\" feature later.", "In a summary, meticulous planning should go into RPA installation.", "This includes choosing a software vendor or platform that is well-known and has the security features previously mentioned, as well as implementing or incorporating your RPA users in corporate governance and security protocols.", "Constant oversight to guarantee compliance[33].", "By institutionalising crucial security features like identity verification, access control, data encryption, deployment security, and bot monitoring, one may leverage critical automation to help any organisation save money and become more productive while maintaining security[34].", "RPA will be crucial in the future for creating a seamless organisational context because it has the ability to lower errors and boost efficiency.", "Repetitive jobs will be finished more quickly and efficiently, allowing people to focus on abilities that are more human-centric, such reasoning, judgement, and emotional intelligence[35], [2], [36]." ], [ "Declarations of interest", "None." ] ]
2212.05544
[ [ "Context-aware 6D Pose Estimation of Known Objects using RGB-D data" ], [ "Abstract 6D object pose estimation has been a research topic in the field of computer vision and robotics.", "Many modern world applications like robot grasping, manipulation, autonomous navigation etc, require the correct pose of objects present in a scene to perform their specific task.", "It becomes even harder when the objects are placed in a cluttered scene and the level of occlusion is high.", "Prior works have tried to overcome this problem but could not achieve accuracy that can be considered reliable in real-world applications.", "In this paper, we present an architecture that, unlike prior work, is context-aware.", "It utilizes the context information available to us about the objects.", "Our proposed architecture treats the objects separately according to their types i.e; symmetric and non-symmetric.", "A deeper estimator and refiner network pair is used for non-symmetric objects as compared to symmetric due to their intrinsic differences.", "Our experiments show an enhancement in the accuracy of about 3.2% over the LineMOD dataset, which is considered a benchmark for pose estimation in the occluded and cluttered scenes, against the prior state-of-the-art DenseFusion.", "Our results also show that the inference time we got is sufficient for real-time usage." ], [ "Pose estimation of objects with 6 Degree of Freedom (6DoF) is among the most important initial steps for many modern real-world applications like robotic hand grasping and manipulation of objects , , autonomous navigation , , and the exciting domain of augmented reality , .", "We see that robotic hands are being used in the automation of manufacturing industries, but these robots are not intelligent enough as they get their input in a strictly defined manner, which means the coordinates and orientation is kept fixed and even for a slight change in the orientation or the shape and size of the input, the whole setup may need to be redefined.", "In the domain of autonomous navigation, we need proper lighting on the obstructions for them to be recognized and dealt with.", "In the world of augmented reality, the occluded area of the components in the scene also plays a very important role we can't simply ignore it as we need to map the augmented imagery with the real world based on its coordinates and orientation.", "From the above use cases, we can infer that we need a solution that can deal with the objects that are of varying shape and size along with the different types of surface properties i.e; textures.", "Also, the solution should be able to deal with the objects having heavy occlusion, varying light exposure, imperfect lighting and sensor noise.", "All this needs to be done in real-time in order to cope up with the needs of the real-world applications requirement.", "With the advancement in sensor building technology we now have cheap sensors that can capture RGB-D images efficiently and in real-time, so we can now device better approaches in solving our problems using RGB-D data as opposed to when we only use RGB data.", "Initially, the classical approaches were to somehow extract the features from the RGB-D data, perform the corresponding grouping and, then using it for hypothesis verification , , , , .", "But these methods were used to rely on handcrafted features and a fixed matching procedure which resulted in limited performances in presence of light variation and occlusion.", "Then after success in visual recognition gave rise to 
data-driven methods that used deep multi-layered perceptron networks for pose estimation of known objects from RGB-D data, some examples are PoseCNN and MCN .", "But the major shortcoming of all these methods was that they required time-consuming post-processing refinement steps to completely use the essence of 3D information, such as ICP (Iterative Closest Point) in PoseCNN and a multilevel-view hypothesis verification in MCN.", "All these refinement steps were not possible to be made efficient and real-time performing.", "Moreover, in the domain of autonomous driving, some promising technologies that have proved their efficacy are Frustum PointNet and PointFusion .", "These models have shown good performance and in real-time too but these methods were found not much effective in heavy occlusion which is normal in the manipulation domains.", "After these methods, a more advanced architecture was proposed named DenseFusion , where an end-to-end deep learning based architecture is used for the practically efficient solution for estimating the 6-DoF pose of known objects from RGB-D inputs.", "Before this, image crops to compute global features , or 2D boxes that bound the objects were used, but in the DenseFusion the core approach was to somehow take the essence of both RGB and Depth information and fuse their information and embed both to obtain the collective information at the per-pixel level.", "This fusion helped to reason about local appearance and orientation which in turn makes it able to handle heavy occlusions as well.", "And after the fusion, there is an integrated end-to-end learning framework that replaced the all expensive post-hoc refinement in terms of time.", "This is an iterative method that performs pose refinement.", "In this work, we propose an enhanced model that is based on the core idea of DenseFusion, which identifies the context information present in the dataset which was unused in the previous methods and use it to obtain much better accuracy.", "The symmetric and non-symmetric objects are treated separately so that one does not alter the information gain from the other.", "We propose a separate pose estimator and refiner architecture that has shown improved performance in terms of accuracy keeping the inference time in the same order.", "We have improved the accuracy by 3.2 % over the DenseFusion model for the LineMOD dataset which is one of the most popular benchmarks for 6D pose estimation.", "Our improved results signify the higher performance in highly cluttered scenes, scenes with variable lighting, and for occluded objects.", "In summary, the contribution of our work is to improve the accuracy for 6D pose estimation of known objects using RGB-D data and its context information.", "The core architecture of DenseFusion is used as the starting point and then enhanced significantly to achieve higher accuracy without hindering the real-time speed being offered." 
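The LineMOD accuracy referred to above is conventionally reported with the ADD metric for non-symmetric objects and the ADD-S variant for symmetric ones, both of which reappear later in the paper. As a point of reference, a minimal NumPy sketch of the two distances is given below; the function names and inputs are illustrative, not the authors' implementation.

import numpy as np

def add_metric(pts, R, t, R_hat, t_hat):
    """Average distance (ADD) between corresponding model points under the
    ground-truth pose (R, t) and the predicted pose (R_hat, t_hat)."""
    gt   = pts @ R.T + t          # (N, 3) model points under the true pose
    pred = pts @ R_hat.T + t_hat  # (N, 3) model points under the predicted pose
    return np.linalg.norm(gt - pred, axis=1).mean()

def add_s_metric(pts, R, t, R_hat, t_hat):
    """ADD-S: for each point under the true pose, distance to the closest point
    under the predicted pose; used for symmetric objects, where the point
    correspondence is ambiguous."""
    gt   = pts @ R.T + t
    pred = pts @ R_hat.T + t_hat
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=2)  # (N, N) pairwise distances
    return d.min(axis=1).mean()

On LineMOD a predicted pose is usually counted as correct when this distance is below 10% of the object's diameter.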
], [ "The field of Pose estimation is relatively new as the advent of RGB-D sensors and the Machine Learning based techniques are new which have shown promise in this domain.", "Upon researching one can find that there have been several efforts made in the past to correctly predict the pose of an object.", "Many have tried only RGB data, many only depth information and some have tried using a combination of both.", "Initial methods that took shape using RGB data only, used to rely on key-points detection and matching with known object models , , .", "After some progress we started seeing methods that tried to solve the challenge in hand by learning to predict the 2D keypoints , , , and solve the pose by PnP that uses RANSAC (RANdom SAmple Consensus), but it could not cope with the speed demanding tasks that needs to be performed in real time when the input is low textured or low resolution.", "Other methods that uses only RGB image to predict pose is by using CNN-based architectures .", "Some of the methods are Xiang et al.", ", which clusters 3D features from object models and then learns to predict pose according to the viewpoints.", "Mousavian et al.", "recovers poses by single view geometric constraints and predicts 3D object parameters.", "Sundermeyer et al.", "creates a codebook which contains encoding of orientations in a latent space then finds the best match from the codebook as the prediction.", "Despite putting in this much effort it was concluded that something extra along with RGB image is required in order to estimate object poses in even 3D let alone 6D.", "Various efforts have been done over the time where the problem in hand i.e; pose estimation or 3D object detection is tried to be solved using depth information only which we also refer to as point clouds.", "As an example Song et al.", ", estimates the poses of objects by featuring inputs with 3D ConvNets to generate 3D bounding box.", "It has been observed that these methods can pretty much accurately encode geometric information , but as a trade-off they are implicitly expensive, takes around 20 seconds which makes them unfit for real time applications.", "Recently deep learning architectures have evolved very much such that it has enabled methods that can directly predict the poses of objects on point clouds or 3D data.", "For example Frustrum PointNet and VoxelNet uses PointNet like structure and have proven their performances as state-of-the-art on KITTI benchmark .", "DenseFusion also uses similar kind of architecture but unlike urban driving applications, datasets like YCB- video dataset also require keen observation on appearance along with geometric information, which is implemented in DenseFusion by 2D-3D sensor fusion architecture.", "The DenseFusion architecture is further improved in our work by segregating separate prediction networks for symmetric and non symmetric objects keeping the point cloud and colour embedding part unchanged.", "Classical approaches use the image processing principles in which 3D features are extracted from the RGB-D input data then they perform hypothesis verification after doing correspondence grouping , , , , , .", "However, these features are chosen based on the bag of words principle that are hardcoded beforehand or are learned by optimizing surrogate objectives , .", "Certain newer methods took shape that directly estimates 6D poses from image data such as PoseCNN and some try to utilize the depth information by fussing the depth value as an additional channel such as Li et 
al.", ".", "But all such methods performance further confirms that depth information also needs to be handled carefully and not just appended with image data.", "However, these methods need expensive post-processing steps to rely on to reach their full capacity.", "The latest addition to this domain is DenseFusion which doesn't require such post-processing steps but also uses the RGB data along with depth.", "It uses a novel fusion mechanism to map colour and depth information which yielded much better results but still falls just short to provide accuracy that could be termed reliable for real world applications.", "Then our work which keeping uncompromised the speed of the DenseFusion and further improves the performance in terms of accuracy on the LineMOD dataset." ], [ "Our main focus here is on the real-world problem of estimating the 6D pose of the objects whose shape properties are known to us beforehand using the information collected as the colour image as well as the depth information.", "Lets now break down the problem statement to discuss it in details.", "Figure: The object coordinate system and the camera coordinate system are described.The 6D pose essentially means the location of the object in the 3D space i.e; the coordinates of the object, along with the orientation in the 3D space given as the angle of rotation in the 3-axes.", "Since the coordinates and orientation of anything in the world are defined relative to other object, here we define it relative to the camera coordinates that captures the colour image as well as the depth sensor as shown in Fig.", "REF (considering both to be on the same hardware).", "6D pose estimation then means the prediction of the correct magnitude of translation T as well as the rotation R with the 3-axes concerning the point of reference (in this case the camera sensors ).", "All the prediction is done using the RGB-D data, RGB means Red, Green, and Blue the 3 primary colours which are used to store the real world image digitally on the computers and the depth information is the distance map from the camera to any location corresponding to that pixel.", "So, the predictions are done via examining the images of the objects along with the depth information about the same image.", "So to summarize the problem statement, the 6D estimation of the location of the object along with the orientation (6D means 6 Degree of freedom i.e; the translation on 3-axes as well as the rotation about the 3-axes) in the 3D world of an object whose intrinsic properties and appearance are known to us beforehand using the colour information captured by the camera sensor along with the depth information captured by the Depth sensor and at last its type like whether the object is symmetric or non-symmetric.", "Figure: Architectural overview of proposed method" ], [ "Our work proved an enhanced mechanism over DenseFusion that uses context information that improves the accuracy on the LineMOD dataset.", "The modified hybrid architecture used has yielded better accuracy.", "This section explains the logical motivation and the modification in detail.", "Motivation: So, the motivation for the current hybrid architecture comes after analysing the intrinsic properties of the types of objects present in the dataset.", "There are two types of objects present in the dataset, they are symmetric and non-symmetric.", "There are also two types of loss functions defined for each i.e; ADD for non-symmetric and ADD-S for symmetric objects.", "Although both types of distances can be 
taken care of in the same model but they tend to hinder the full potential of the network to learn them.", "In a way we can infer that the prediction for symmetric objects is a bit less stricter compared to non-symmetric objects.", "As for symmetric objects, same pose also fits with different orientations if we change the object about the axis of symmetry.", "This laid the basis for our model.", "In our work, when we tried assessing and observing the model, we observed that for the symmetric objects the learning was faster and also reaching the saturation point in lesser epoch iteration and also observed that the prediction for non-symmetric objects were taking greater time to reach saturation or not attaining the proper saturation in the 6D pose estimator phase (i.e; without refinement step).", "This laid the idea to train two separate models.", "Since the extraction and fusion of features had apparently no effect by the type of object, these steps are still common, but the estimation and refinement networks are separated for each type of objects.", "Structure: A hybrid architecture is used for the pose estimation and refinement steps.", "Since we discussed that the learning of non symmetric objects were harder than symmetric, that is why a deeper network is used that consists of ResNet18 network.", "Additional convolution and pooling layers were added making it deeper, keeping the tradition pooling technique used here intact i.e; the average pooling and finally the output layer outputs the quaternion values, the translation values along with the confidence score for each pixel that helps us choose the best pose after each of the pixel has voted the pose according to their features.", "For the symmetric objects it was evident from the results published in that the original architecture was good, so similar network is used like with fine tweaking to enhance the prediction.", "And for the Non-symmetric objects, a deeper network is used in an attempt to learn the information more accurately and without interference from the other.", "Then after the pose estimation step comes the pose refinement step.", "Since the refinement model is used to refine the output by the estimator model, a CNN network is used.", "But there arises a decision whether to use same refinement network for both estimators or to use separate refiner for each.", "This decision were made again by considering the fact that both type of objects have their respective intrinsic properties as well as different evaluation measures (loss functions) so separate refinement models were required for both Estimator models.", "So for both estimators separate refiners were used, basic structure of a typical CNN was used here but the refiner for non-symmetric objects contains additional layers to cope up with the new estimator." 
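, "As a rough illustration of this routing (a minimal sketch of our own, not code from the paper; module names, feature dimension and head sizes are assumptions, and the real estimators are ResNet18-style CNNs rather than the small MLP heads used here), the hybrid estimator can be organised in PyTorch-style Python as follows:
import torch
import torch.nn as nn

class PoseHead(nn.Module):
    # toy per-pixel pose head: fused feature -> (quaternion, translation, confidence)
    def __init__(self, feat_dim=512, depth=2):
        super().__init__()
        layers, d = [], feat_dim
        for _ in range(depth):               # a deeper head is used for the harder case
            layers += [nn.Linear(d, 256), nn.ReLU()]
            d = 256
        self.backbone = nn.Sequential(*layers)
        self.rot = nn.Linear(d, 4)           # quaternion
        self.trans = nn.Linear(d, 3)         # translation
        self.conf = nn.Linear(d, 1)          # per-pixel confidence score

    def forward(self, fused):                # fused: (num_pixels, feat_dim)
        h = self.backbone(fused)
        return self.rot(h), self.trans(h), torch.sigmoid(self.conf(h))

class HybridEstimator(nn.Module):
    # routes the fused per-pixel features to the estimator matching the object type
    def __init__(self, feat_dim=512):
        super().__init__()
        self.sym_head = PoseHead(feat_dim, depth=2)     # original-style head for symmetric objects
        self.nonsym_head = PoseHead(feat_dim, depth=4)  # deeper head for non-symmetric objects

    def forward(self, fused, is_symmetric):
        head = self.sym_head if is_symmetric else self.nonsym_head
        return head(fused)

# example: rot, trans, conf = HybridEstimator()(torch.randn(500, 512), is_symmetric=False)
"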
], [ "Our main focus in this work is on the 6D pose estimation of known objects that are present in the scene of RGB-D image data.", "Unlike many other approaches, we are focusing on variable lighting, occluded objects, and cluttered scenes.", "The first thing to make a model is to represent our output.", "Basic architecture of DenseFusion is used as a starting point then an enhanced architecture is proposed in this work.", "Since we are predicting the poses of known objects present in the scene in the image, we need something that can express the orientation and the coordinate in the 3D space referenced about a fixed point.", "Since our data is captured using Depth and image sensor then we can safely assume that both the sensors are practically at the same point (although the distance between them is not zero but compared to the distance of objects from the camera it could be neglected), so the pose of an object is defined with respect to the coordinates and frame of the camera.", "The pose of any object needs to be defined before we can prepare to predict it.", "So for any object, we know its basic structure hence any pose is the combination of rotation in the 3-axes as well as the translation in the 3D space.", "So, mathematically it can be denoted as p $\\in $ SE(3), where p = $[R|t]$ and R $\\in $ SO(3) and t $\\in \\mathbb {R}^3$ .", "As we have discussed up until now that the estimation of 6D poses in the occluded area and poorly or overly exposed to lighting is possible with high efficiency only by combining the depth and color information intelligently.", "But since both types of data are different, just combining them is not enough instead we need some method that can extract the essence from each type of data and combine them to be meaningful also need the individual essential information contained within them.", "These challenges were acknowledged and have been pretty accurately taken care in DenseFusion , so our model is based on similar architecture to use the essence of data but in the later stage of the architecture a hybrid model is used.", "The challenges mentioned above are tackled by : By a heterogeneous architecture that takes in the colour and depth information separately keeping their individual essential information intact in both the spaces (discussed in section Dense Feature Extraction) and By intelligently combining their intrinsic mapping between the data sources, it is done by pixel wise dense fusion which utilises the intrinsic camera parameters to map each 3D point cloud to colour pixel.", "Then after the process described above, we predict the initial poses of objects.", "There are two separate routes defined for two different kinds of objects that are present in the datasets.", "First is regular non-symmetric objects i.e; the objects that do not contain any line of symmetry or axis of symmetry at which we can divide the object into two exactly similar sub-parts.", "These kinds of objects are treated differently than symmetric objects.", "After the fusion of color and depth information is done by a pixel-wise fusion of both types of data, they are fed to their pose estimator model.", "For non symmetric objects, the basic ResNet18 with 2 additional layers is used as pose estimator model.", "For the symmetric objects, only the basic model of ResNet18 is used, as it showed promising performance on symmetric objects in .", "All these changes were added after observing the nature of both the objects, after training the models for a sufficiently long time we 
observed that the output accuracy for symmetric objects was getting saturated but for non-symmetric objects, there were still some fluctuations.", "So separate predictor models were defined and they showed better performance.", "After the pose prediction, it is further improved by the integrated iterative refinement network.", "It is a multi layered CNN network followed by 3 fully connected layer and output layer that further refines the pose predicted by the pose estimator model.", "Each of the types of objects i.e; have their separate refinement network.", "And this refinement step is integrated with the architecture so no expensive post-processing techniques (, ) are required." ], [ "The overall architecture shown in Fig.", "REF , can be broken down into two major stages based on the function they perform.", "The first stage is the semantic segmentation of each known object in the color image.", "The segmentation provides us with an $(N+1)$ channeled map, where $N = $ number of objects in the dataset.", "This semantic segmentation map is used to generate a bounding box which in turn is used to extract cropped patch of the object.", "Then this patch along with the masked cloud or masked depth pixel is forwarded to second stage.", "The second stage is the estimator or predictor stage where actual prediction takes place.", "This stage further comprises of various steps / components.", "These components are : A CNN based network that works on the images crops generated by semantic segmentation stage and maps the colour information into colour feature embedding.", "A PointNet like network that takes in the masked (according to image crop) 3D point cloud as input and gives us the geometric feature embedding.", "After the features are embedded from both the spaces i.e; colour and point cloud then we need to combine them to create global features, it is done by a fusion network that performs pixel wise combination and prediction based on self-supervised confidence scoring .", "But here unlike the Dense Fusion there are separate networks for symmetric and non-symmetric object types that has helped obtain better accuracy.", "Then this pose prediction is fed to iterative refinement network in a curriculum learning manner.", "Here also there are separate iterative refinement networks for both types of objects to support their respective estimators such that each estimator network and iterative network form a separate pair for each type of object.", "The detailed explanation about each stage and component is given below: To predict any object's pose we first need to find out that particular object from the scene.", "Segmentation helps us to classify a scene into various segments.", "Semantic segmentation means recognizing and labeling each known object in the scene.", "Since it is the prerequisite for our stage II and already many efficient models exist for segmentation, a preexisting vanilla segmentation model is used.", "Vanilla segmentation is an encoder-decoder-based architecture that first takes the color image as input then encodes the information into smaller dimension features then decodes them into N+1 channeled segmentation map where each channel is a binary mask and true pixels indicate the presence of that particular object.", "One extra channel is to denote the background or no object.", "Since in this work, the focus is on pose estimation rather than segmentation we use an existing architecture ." 
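, "The hand-off from the first stage to the second can be illustrated with the following minimal numpy sketch (our own illustration; the array shapes, the channel layout of the segmentation map and the function name are assumptions): one object channel of the (N+1)-channel map is turned into a bounding box for the colour crop and into the set of masked depth pixels passed to the point-cloud branch.
import numpy as np

def crop_from_mask(rgb, depth, seg_map, obj_id):
    # seg_map: (N+1, H, W) binary maps, channel 0 assumed to be the background
    mask = seg_map[obj_id].astype(bool)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                      # object not present in this frame
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    rgb_crop = rgb[y0:y1, x0:x1]         # image patch fed to the colour CNN
    depth_pixels = depth[mask]           # masked depth values fed to the point-cloud branch
    pixel_coords = np.stack([ys, xs], 1) # needed later to back-project to a 3D point cloud
    return rgb_crop, depth_pixels, pixel_coords

# usage with random stand-in data:
# rgb = np.zeros((480, 640, 3)); depth = np.ones((480, 640))
# seg = np.zeros((14, 480, 640)); seg[3, 100:200, 150:260] = 1
# crop, dpix, coords = crop_from_mask(rgb, depth, seg, obj_id=3)
"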
], [ " Dense Feature Extraction: Since we are using both image data and depth data, we need to extract meaningful information out of it.", "Some of the previous methods used depth information as an additional channel and then used CNN based architecture to directly predict poses, but the main shortcoming about this approach is that we are considering both types of data i.e; image and depth to be the same hence neglecting their respective implicit structure even though they lie in different spaces.", "In DenseFusion this problem is recognized and critical architecture was used which is being reused in our work as well.", "First, the 3D data is converted into a 3D point cloud by using intrinsic camera properties (a concept from image processing) then PointNet like structure is used.", "The PointNet was able to perform well in this segment as they used the symmetric max pooling function to get permutation in-variance in unordered point sets.", "Similar architecture is used here just the symmetric function is replaced by average pooling in place of max pooling which is commonly used.", "Secondly, the colour image data needs to be embedded into features, it is done by using a CNN based encoder decoder network that converts H X W X 3 space into H X W X $d_{rgb}$ space which means each pixel will now get $d_{rgb}$ dimension feature vector.", "Dense Feature Fusion: Now that we have obtained separate feature embedding, we need to find out a way to combine them to produce effective features.", "Our discussion up until now have made it clear that many have tried to use the RGB-D data for pose estimation but what is required is actually how we fuse both types of data since they lie in different spaces.", "The work in DenseFusion proves effective in this particular segment.", "Instead of treating both RGB and Depth information similar and blindly fusing them, which in turn will result in degraded performance, the proposed a novel pixel-wise fusion network for this purpose.", "Due to segmentation errors, occlusion, and variable lighting, directly associating color info with corresponding depth info on the same pixel will not retain the 3D behavior.", "So, projection (image processing concept) is used, which first projects the color information of a pixel in the segment into 3D space by using intrinsic camera parameters then at each pixel associates it with a geometric feature.", "So now we have got a pair of colors and geometric features.", "Now that we have got hold of each of the data sources' intrinsic properties, we need a way to extract information that is the property of combined data.", "This is done by using an MLP network with a symmetric function (in this case average pooling ) and then concatenated to our original per pixel features.", "So now we have for each pixel color features, geometric features, and global features.", "This whole process of forming per pixel features is named as Dense Fusion of features in DenseFusion.", "Now after obtaining the features, the actual estimation is remaining, the prediction is based on the work of Xu et al.", "in which for each pixel feature there is a prediction of pose along with a confidence score.", "Then we predict the final pose in this that has the highest confidence score among all the per-pixel prediction.", "6D Pose Estimation: Our work puts emphasis on this component to obtain better accuracy.", "While the DenseFusion used a single ResNet18 based network for both types of data, we modified its design after observing the difference between 
symmetric and non-symmetric objects.", "For a symmetric object there can be multiple orientations that are all correct for a particular pose: rotating the object about its axis of symmetry produces no observable change, whereas non-symmetric objects have no such property, so predicting the pose of a symmetric object is less strict than for a non-symmetric one.", "Taking this fundamental difference into account, we propose separate networks for each type of object; more details are given below.", "To train the model we first need a learning rule, and the learning rule is based on the loss function, i.e., the mathematical expression measuring the difference between the predicted and the desired output.", "The loss used here for non-symmetric objects is defined as: $ L_{i}^{p} = \\frac{1}{M} \\sum _{j} || (Rx_{j} + t) - (\\hat{R}_{i}x_{j} + \\hat{t}_{i}) ||$ where $x_j$ denotes the $j^{th}$ point of the $M$ points randomly selected from the object model, p = $[R|t]$ is the ground-truth pose and $\\hat{p}_i = [\\hat{R}_i|\\hat{t}_i]$ is the prediction obtained from the features of the $i^{th}$ pixel.", "As discussed above, for symmetric objects the same pose is consistent with more than one orientation, so the loss above becomes ambiguous and a different loss function is required; the one used here is defined as: $ L_{i}^{p} = \\frac{1}{M} \\sum _{j} \\min _{0<k<M} || (Rx_{j} + t) - (\\hat{R}_{i}x_{k} + \\hat{t}_{i}) ||$ which measures the distance to the closest model point instead of to the corresponding one.", "Now that the per-pixel loss is defined, the overall loss is given by: $ L = \\frac{1}{N} \\sum _{i} ( L_{i}^{p}c_{i} - w\\log ({c_{i}}) )$ where $N$ is the number of pixel features randomly sampled from the $P$ elements of the segment, $c_i$ is the confidence of the $i^{th}$ prediction, and $w$ is a regularization weight.", "This loss balances high-confidence and low-confidence predictions: a low confidence $c_i$ down-weights the pose error $L_i^p$ but incurs a higher penalty through the $-w\\log(c_i)$ term.", "At inference we choose the pose with the highest confidence score.", "Iterative Refinement: Many approaches refine poses with post-processing techniques such as ICP, used in , , which improve accuracy but are too costly for real-time applications.", "The solution proposed in DenseFusion , which we also adopt, is an integrated CNN-based iterative refinement module that further improves the estimate produced by the previous component.", "The refinement network does not predict a new pose from scratch; it refines the output of the main network.", "It takes the pose predicted in the previous iteration and, together with the global feature from the feature-fusion stage, estimates a pose residual; the input point cloud is transformed by the previously predicted pose as an estimate of the canonical frame.", "Initially the refinement network cannot be trained usefully because the estimator's predictions are too noisy, so its training starts only after the prediction attains a certain accuracy." 
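, "To make the loss terms above concrete, the following PyTorch-style sketch (our own illustration, not the released implementation; the tensor names and the default regularization weight are assumptions) computes the per-pixel ADD loss, the closest-point ADD-S loss and the confidence-weighted total loss defined above:
import torch

def add_loss(R_hat, t_hat, R, t, pts):
    # pts: (M, 3) model points; non-symmetric objects: mean point-to-point distance
    pred = pts @ R_hat.T + t_hat
    gt = pts @ R.T + t
    return (pred - gt).norm(dim=1).mean()

def adds_loss(R_hat, t_hat, R, t, pts):
    # symmetric objects: distance from each ground-truth point to the closest predicted point
    pred = pts @ R_hat.T + t_hat
    gt = pts @ R.T + t
    return torch.cdist(gt, pred).min(dim=1).values.mean()

def total_loss(per_pixel_losses, conf, w=0.015):
    # confidence-weighted average with the -w*log(c_i) regularizer; w is an assumed value
    return (per_pixel_losses * conf - w * torch.log(conf)).mean()
"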
], [ "This section is dedicated for the discussion of the experiments conducted, the performance analysis of the model, comparison with preexisting work, the datasets specifications, etc.", "For the evaluation of the model we need dataset on which the training and testing is to be done, the measurement unit which can express the performance into real numbers, and the comparisons with existing models to compare the accuracy as a benchmark.", "So all these things are discussed in details below." ], [ "One of the most popular datasets used for the evaluation of pose estimation task, namely LineMOD dataset.", "Each of which comprises images videos (which is simply the collection of images) containing some known objects in each image.", "The LineMOD dataset from Hinterstoisser et al.", "comprises 13 low textured objects spanning 13 videos.", "Many classical, as well as the modern learning-based, , , has adopted this for training, testing, and evaluation purposes.", "The training and testing partition used is the same as some of the prior learning works , , without appending synthetically generated data.", "The 3D models of objects are also provided in the dataset.", "This dataset is considered the benchmark for highly cluttered object's pose estimation.", "Fig.", "REF and Fig.", "REF shows RGB samples and Depth samples respectively from LineMOD dataset." ], [ "There are two matrices to measure the prediction.", "ADD and ADD-S, both are a measures of average point wise distance.", "Both of them operates on the predicted pose $[R^{^}|t^{^}]$ and the ground truth $[R|t]$ ." ], [ "It is the Average distance of Model Points, which means the distance between the predicted location and the actual location of the model points selected randomly.", "This metric is applicable only for non-symmetric objects as the symmetric object will have ambiguous orientation pertaining to the same pose due to symmetric property.", "Its mathematical representation is given by (REF )." ], [ "ADD-S is similar to ADD only difference being that the distance between predicted and actual is calculated using the closest point only.", "This makes it fit and non-ambiguous for symmetric objects as well.", "The ADD-S below 2cm is considered as correctly predicted as it is considered as the threshold for robot manipulation tasks.", "Mathematically, it is denoted by (REF ).", "For the LineMOD dataset both metrics are used, ADD for non symmetric and ADD-S for symmetric objects." ], [ "In this section we compare the performance of our model with that of some of the most popular methods as show in TABLE REF .", "On the LineMOD dataset the DenseFusion’s accuracy were 86.2% without refinement and 94.3% with 2 iterations of refinement which got saturated after 2 iterations, whereas in our experiment we obtained 96.4% accuracy with 2 refinement steps while 97.5% with 10 iteration of refinement, which is 2.2% more with 2 refinement iteration and 3.2% more with 10 iteration.", "In Fig.", "REF , we have visualized the estimated 6D pose for some of the objects of the LineMOD dataset." 
], [ "Along with accuracy, time taken to process one frame is also important if we wish to use the model in real time application.", "So, we also compared the time taken for the model to process and output for one single frame.", "PoseCNN + ICP took around 10.6 seconds for one frame, DenseFusion took around 0.06 seconds and our model took around 0.065 seconds for the same which is approximately the same as DenseFusion.", "Table: Comparison of time taken for each frame (in seconds).So according to the TABLE REF , our model takes around 0.065 sec which when converted into video frames per sec gives around 13-15 fps which is pretty sufficient for real time applications." ], [ "In this work, a hybrid context-aware architecture is introduced that performs the pose estimation of objects in cluttered areas efficiently by using the inherent difference in properties of two types of objects present in the data.", "It is done by treating symmetric and non-symmetric objects separately.", "Our method has achieved an accuracy of 97.52% on the LineMOD dataset, which makes it 3.2% accurate than DenseFusion.", "Our model also keeps the inference time very low to make it an efficient choice for real-time applications.", "The present research is partially funded by the I-Hub foundation for Cobotics (Technology Innovation Hub of IIT-Delhi setup by the Department of Science and Technology, Govt.", "of India).", "The authors declare that they have no conflict of interest." ] ]
2212.05560
[ [ "Doubly and triply extended MSRD codes" ], [ "Abstract In this work, doubly extended linearized Reed--Solomon codes and triply extended Reed--Solomon codes are generalized.", "We obtain a general result in which we characterize when a multiply extended code for a general metric attains the Singleton bound.", "We then use this result to obtain several families of doubly extended and triply extended maximum sum-rank distance (MSRD) codes that include doubly extended linearized Reed--Solomon codes and triply extended Reed--Solomon codes as particular cases.", "To conclude, we discuss when these codes are one-weight codes." ], [ "Introduction", "Let $ \\mathbb {F}_q $ denote the finite field of size $ q $ , and denote by $ \\mathbb {F}_q^n $ and $ \\mathbb {F}_q^{m \\times n} $ the spaces of row vectors of length $ n $ and matrices of size $ m \\times n $ , respectively, over $ \\mathbb {F}_q $ , for positive integers $ m $ and $ n $ .", "We also denote $ \\mathbb {N} = \\lbrace 0,1,2,\\ldots \\rbrace $ and $ [n] = \\lbrace 1,2, \\ldots , n \\rbrace $ for a positive integer $ n $ .", "The Hamming metric in $ \\mathbb {F}_q^n $ is given by $ {\\rm d}_H(\\mathbf {c}, \\mathbf {d}) = | \\lbrace i \\in [n] \\mid c_i \\ne d_i \\rbrace | $ , for $ \\mathbf {c}, \\mathbf {d} \\in \\mathbb {F}_q^n $ .", "Doubly extended Reed–Solomon codes [6] [9] are the linear codes in $ \\mathbb {F}_q^{n+2} $ given by the generator matrix $\\left( \\begin{array}{cccc|cc}1 & 1 & \\ldots & 1 & 1 & 0 \\\\a_1 & a_2 & \\ldots & a_n & 0 & 0 \\\\a_1^2 & a_2^2 & \\ldots & a_n^2 & 0 & 0 \\\\\\vdots & \\vdots & \\ddots & \\vdots & \\vdots & \\vdots \\\\a_1^{k-2} & a_2^{k-2} & \\ldots & a_n^{k-2} & 0 & 0 \\\\a_1^{k-1} & a_2^{k-1} & \\ldots & a_n^{k-1} & 0 & 1 \\\\\\end{array} \\right) \\in \\mathbb {F}_q^{k \\times (n+2)},$ for $ k \\in [n] $ and distinct $ a_1, a_2, \\ldots , a_n \\in \\mathbb {F}_q^* $ (hence $ n \\le q-1 $ and $ n+2 \\le q+1 $ , where equalities may be attained).", "One may show by using conventional polynomial results that the doubly extended Reed–Solomon code above is maximum distance separable (MDS).", "See [6].", "In other words, it attains the Singleton bound for the Hamming metric.", "Furthermore, these codes may have length $ q+1 $ , which is conjectured to be maximum for most values of the code dimension $ k $ .", "This is the well-known MDS conjecture (see [6]), which has been proven for $ q $ prime [2].", "Recently, a generalization of this result was given in [15] for the sum-rank metric, a metric that simultaneously generalizes the Hamming metric and the rank metric [4], [5], [17].", "The generalization of Reed–Solomon codes to the sum-rank metric is called linearized Reed–Solomon codes, introduced in [11], which are maximum sum-rank distance (MSRD) codes, i.e., they attain the Singleton bound for the sum-rank metric.", "More general families of linear MSRD codes exist [12], [14].", "The authors of [15] introduced doubly extended linearized Reed–Solomon codes and showed, using geometric tools, that they are also MSRD.", "In this work, we show how one may extend codes attaining the Singleton bound for any metric given by a weight.", "The metric considered for the extended codes is obtained by adding Hamming-metric components, as was done for the sum-rank metric in [15] (Section ).", "In Section , we provide necessary and sufficient conditions for multiply extended codes to attain the Singleton bound based on the original codes.", "In Sections and , we apply double and triple extensions, 
respectively, to the general MSRD codes obtained in [12], which include linearized Reed–Solomon codes (and therefore classical Reed–Solomon codes and Gabidulin codes [5], [17]).", "In Section , we study what happens when the extended portion is not considered with Hamming-metric components, but by considering the rank metric in the whole added block, and show that doubly extended codes are no longer MSRD in general.", "Finally, in Section , we investigate when the obtained doubly and triply extended MSRD codes are one-weight codes." ], [ "The Singleton bound for sums of metrics", "In this manuscript, we consider metrics given by weights.", "Here, a weight function is a function $ {\\rm wt} : \\mathbb {F}_q^n \\longrightarrow \\mathbb {N} $ satisfying the following properties: $ {\\rm wt}(\\mathbf {c}) \\ge 0 $ and it equals 0 if, and only if, $ \\mathbf {c} = \\mathbf {0} $ , for all $ \\mathbf {c} \\in \\mathbb {F}_q^n $ .", "$ {\\rm wt}(\\lambda \\mathbf {c}) = {\\rm wt}(\\mathbf {c}) $ , for all $ \\mathbf {c} \\in \\mathbb {F}_q^n $ and all $ \\lambda \\in \\mathbb {F}_q^* $ .", "$ {\\rm wt}(\\mathbf {c} + \\mathbf {d}) \\le {\\rm wt}(\\mathbf {c}) + {\\rm wt}(\\mathbf {d}) $ , for all $ \\mathbf {c}, \\mathbf {d} \\in \\mathbb {F}_q^n $ .", "Its associated metric is the function $ {\\rm d} : (\\mathbb {F}_q^n)^2 \\longrightarrow \\mathbb {N} $ given by $ {\\rm d}(\\mathbf {c}, \\mathbf {d}) = {\\rm wt}(\\mathbf {c} - \\mathbf {d}) $ , for $ \\mathbf {c}, \\mathbf {d} \\in \\mathbb {F}_q^n $ .", "It is straightforward to prove that a metric given by a weight as above is indeed a metric (see [6]).", "As usual, we define the minimum distance of a code $ \\mathcal {C} \\subseteq \\mathbb {F}_q^n $ (a code is just a set) with respect to $ {\\rm d} $ as $ {\\rm d}(\\mathcal {C}) = \\min \\lbrace {\\rm d}(\\mathbf {c}, \\mathbf {d}) \\mid \\mathbf {c}, \\mathbf {d} \\in \\mathcal {C}, \\mathbf {c} \\ne \\mathbf {d} \\rbrace .", "$ It is well-known that, if $ \\mathcal {C} $ is linear (i.e., an $ \\mathbb {F}_q $ -linear subspace of $ \\mathbb {F}_q^n $ ), then $ {\\rm d}(\\mathcal {C}) = \\min \\lbrace {\\rm wt}(\\mathbf {c}) \\mid \\mathbf {c} \\in \\mathcal {C} \\setminus \\lbrace \\mathbf {0} \\rbrace \\rbrace $ , where $ {\\rm wt} $ is the weight giving the metric $ {\\rm d} $ .", "We will say that a metric $ {\\rm d} $ satisfies the Singleton bound if ${\\rm d}(\\mathcal {C}) \\le n - k + 1,$ where $ k = \\log _q|\\mathcal {C}| $ , for any code $ \\mathcal {C} \\subseteq \\mathbb {F}_q^n $ .", "Any metric given by a weight that is upper bounded by the Hamming weight satisfies the Singleton bound.", "Many examples exist, including the Hamming metric itself, the rank metric [4], [5], the sum-rank metric [11], the cover metric [17] and the multi-cover metric [13], among others.", "Some of these metrics, e.g., the sum-rank metric, the multi-cover metric or the Hamming metric itself, are given by sums of other metrics.", "In general, given weights $ {\\rm wt}_i $ in $ \\mathbb {F}_q^{n_i} $ , for $ i \\in [\\ell ] $ , we may define their sum as $ {\\rm wt}_{{\\rm sum}}(\\mathbf {c}) = {\\rm wt}_1(\\mathbf {c}_1) + {\\rm wt}_2(\\mathbf {c}_2) + \\cdots + {\\rm wt}_\\ell (\\mathbf {c}_\\ell ), $ for $ \\mathbf {c} = (\\mathbf {c}_1, \\mathbf {c}_2, \\ldots , \\mathbf {c}_\\ell ) \\in \\mathbb {F}_q^n $ , where $ n = n_1 + n_2 + \\cdots + n_\\ell $ and $ \\mathbf {c}_i \\in \\mathbb {F}_q^{n_i} $ , for $ i \\in [\\ell ] $ .", "Clearly, $ {\\rm wt}_{{\\rm sum}} $ is a weight.", "We denote 
similarly the corresponding associated metric.", "It is easy to see that $ {\\rm d}_{{\\rm sum}} $ satisfies the Singleton bound if so do the metrics $ {\\rm d}_i $ , for $ i \\in [\\ell ] $ .", "In the remainder of the manuscript, we will only consider metrics $ {\\rm d}: (\\mathbb {F}_q^n)^2 \\longrightarrow \\mathbb {N} $ given by weights and satisfying the Singleton bound (REF )." ], [ "Multiply extended codes", "In this section, we give a definition of multiply extended codes for general metrics and show that they attain the Singleton bound if so do certain codes related to the original code and the metric is extended by adding a Hamming-metric component.", "In Sections and , we will particularize these results to construct doubly and triply extended MSRD codes.", "Theorem 1 Let $ \\mathbf {g}_1, \\mathbf {g}_2, \\ldots , \\mathbf {g}_k \\in \\mathbb {F}_q^n $ be linearly independent, and let $ t \\in [k] $ .", "Consider the $ k $ -dimensional linear code $ \\mathcal {C}_e \\subseteq \\mathbb {F}_q^{n+t} $ with generator matrix $ G_e = \\left( \\begin{array}{c|cccc}\\mathbf {g}_1 & 1 & 0 & \\ldots & 0 \\\\\\mathbf {g}_2 & 0 & 1 & \\ldots & 0 \\\\\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\\mathbf {g}_t & 0 & 0 & \\ldots & 1 \\\\\\hline \\mathbf {g}_{t+1} & 0 & 0 & \\ldots & 0 \\\\\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\\mathbf {g}_k & 0 & 0 & \\ldots & 0\\end{array} \\right) \\in \\mathbb {F}_q^{k \\times (n+t)}.", "$ Define also the linear codes $ \\mathcal {C}_I = \\langle \\lbrace \\mathbf {g}_i \\mid i \\in I \\rbrace \\rangle + \\langle \\mathbf {g}_{t+1}, \\ldots , \\mathbf {g}_k \\rangle $ , and set $ d_I = {\\rm d}(\\mathcal {C}_I) $ , for $ I \\subseteq [t] $ .", "Here $ \\langle \\cdot \\rangle $ denotes linear span.", "Then it holds that $ {\\rm d}_e (\\mathcal {C}_e) = \\min \\lbrace d_I + |I| \\mid I \\subseteq [t] \\rbrace $ , where the metric $ {\\rm d}_e : (\\mathbb {F}_q^{n+t})^2 \\longrightarrow \\mathbb {N} $ is given by $ {\\rm d}_e( (\\mathbf {c}_1, \\mathbf {c}_2), (\\mathbf {d}_1, \\mathbf {d}_2)) = {\\rm d}(\\mathbf {c}_1,\\mathbf {d}_1) + {\\rm d}_H(\\mathbf {c}_2,\\mathbf {d}_2), $ for $ \\mathbf {c}_1 , \\mathbf {d}_1 \\in \\mathbb {F}_q^n $ and $ \\mathbf {c}_2,\\mathbf {d}_2 \\in \\mathbb {F}_q^t $ .", "Let $ \\mathbf {e}_1, \\mathbf {e}_2, \\ldots , \\mathbf {e}_t \\in \\mathbb {F}_q^t $ denote the canonical basis.", "A codeword in $ \\mathcal {C}_e $ is of the form $ \\mathbf {c} = \\left( \\sum _{i \\in I} \\lambda _i \\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j, \\sum _{i \\in I} \\lambda _i \\mathbf {e}_i \\right), $ where $ I \\subseteq [t] $ , $ \\lambda _i \\in \\mathbb {F}_q^* $ , for $ i \\in I $ , and $ \\lambda _j \\in \\mathbb {F}_q $ , for $ j = t+1, \\ldots , k $ .", "Note that possibly $ I = \\varnothing $ .", "Since $ \\lambda _i \\ne 0 $ for $ i \\in I $ , we deduce that $\\begin{split}{\\rm wt}_e (\\mathbf {c}) & = {\\rm wt} \\left( \\sum _{i \\in I} \\lambda _i \\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j \\right) + {\\rm wt}_H \\left( \\sum _{i \\in I} \\lambda _i \\mathbf {e}_i \\right) \\\\& = {\\rm wt} \\left( \\sum _{i \\in I} \\lambda _i \\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j \\right) + |I| \\\\& \\ge {\\rm d}_I + |I|.\\end{split}$ Therefore, we have that $ {\\rm d}_e(\\mathcal {C}_e) \\ge \\min \\lbrace d_I + |I| \\mid I \\subseteq [t] \\rbrace $ .", "Now, consider a subset $ I \\subseteq [t] $ and take $ \\mathbf {d} = \\sum _{i \\in I} \\lambda _i 
\\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j \\in \\mathcal {C}_I $ such that $ {\\rm wt}(\\mathbf {d}) = d_I $ , where $ \\lambda _i \\in \\mathbb {F}_q $ for $ i \\in I \\cup \\lbrace t+1, \\ldots , k \\rbrace $ .", "If $ J \\subseteq I $ is such that $ \\mathbf {d} = \\sum _{i \\in J} \\lambda _i \\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j $ (i.e., $ \\lambda _i = 0 $ if $ i \\in I \\setminus J $ ), then $ \\mathbf {d} \\in \\mathcal {C}_J $ and thus $ d_J \\le {\\rm wt}(\\mathbf {d}) = d_I \\le d_J.", "$ Hence we also have $ {\\rm wt}(\\mathbf {d}) = d_J $ .", "Thus there exist $ I \\subseteq [t] $ and a codeword $ \\mathbf {d} = \\sum _{i \\in I} \\lambda _i \\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j $ with $ {\\rm wt}(\\mathbf {d}) = d_I $ , $ \\lambda _{t+1}, \\ldots , \\lambda _k \\in \\mathbb {F}_q $ and $ \\lambda _i \\in \\mathbb {F}_q^* $ , for all $ i \\in I $ .", "Therefore, $ {\\rm wt}_e \\left( \\sum _{i \\in I} \\lambda _i \\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j , \\sum _{i \\in I} \\lambda _i \\mathbf {e}_i \\right) = d_I + |I|.", "$ Considering all of the subsets $ I \\subseteq [t] $ such that $ d_I = {\\rm wt}(\\mathbf {d}) $ for some $ \\mathbf {d} = \\sum _{i \\in J} \\lambda _i \\mathbf {g}_i + \\sum _{j=t+1}^k \\lambda _j \\mathbf {g}_j $ , with $ \\lambda _i \\in \\mathbb {F}_q^* $ for $ i \\in I $ and $ \\lambda _j \\in \\mathbb {F}_q $ for $ j \\in \\lbrace t+1, \\ldots , k \\rbrace $ , we conclude that $ {\\rm d}_e(\\mathcal {C}_e) = \\min \\lbrace d_I + |I| \\mid I \\subseteq [t] \\rbrace $ .", "We now deduce the following result on multiply extended codes that attain the Singleton bound.", "Corollary 1 With notation as in Theorem REF , the code $ \\mathcal {C}_e $ attains the Singleton bound for $ {\\rm d}_e $ if, and only if, so do the codes $ \\mathcal {C}_I $ for $ {\\rm d} $ , for all $ I \\subseteq [t] $ .", "Note that $ \\dim (\\mathcal {C}_e) = k $ and $ \\dim (\\mathcal {C}_I) = k + |I| - t $ , for $ I \\subseteq [t] $ .", "Hence $ \\mathcal {C}_I $ attains the Singleton bound for $ {\\rm d} $ if, and only if, $ d_I = n - (k+|I|-t) + 1 = (n+t) - k - |I| + 1 .", "$ We also have that $ \\mathcal {C}_e $ attains the Singleton bound if, and only if, $\\begin{split}{\\rm d}_e (\\mathcal {C}_e) & = \\min \\lbrace d_I + |I| \\mid I \\subseteq [t] \\rbrace \\\\& = (n+t) - k + 1 \\\\& = \\min \\lbrace (n+t) - k - |I| + 1 + |I| \\mid I \\subseteq [t] \\rbrace ,\\end{split}$ and the result follows.", "Remark 2 Setting $ t = k $ and $ {\\rm d} = {\\rm d}_H $ (i.e., $ {\\rm d}_e = {\\rm d}_H $ ), then Corollary REF recovers the well-known characterization of systematic generator matrices of MDS codes from [9].", "In other words, when $ t = k $ and $ {\\rm d} = {\\rm d}_H $ , Corollary REF states that $ \\mathcal {C}_e $ is MDS if, and only if, every square submatrix of $ G $ is invertible, where $ G $ is the matrix whose rows are $ \\mathbf {g}_1, \\mathbf {g}_2, \\ldots , \\mathbf {g}_k \\in \\mathbb {F}_q^n $ .", "Corollary REF extends this result to any $ t \\in [k] $ and any metric $ {\\rm d} $ given by a weight satisfying the Singleton bound.", "Finally, we note that we have a lattice of linear codes $ \\mathcal {C}_I \\subseteq \\mathbb {F}_q^n $ , for $ I \\subseteq [t] $ , with respect to inclusions or, equivalently, unions and intersections, i.e., we have the following inclusion graph: $ \\begin{array}{ccccc}& & \\mathcal {C}_{I \\cup J} & & \\\\& \\nearrow & & \\nwarrow & 
\\\\\\mathcal {C}_I & & & & \\mathcal {C}_J \\\\& \\nwarrow & & \\nearrow & \\\\& & \\mathcal {C}_{I \\cap J} .", "& &\\end{array} $ By taking systematic generator matrices, we deduce that the existence of a linear code in $ \\mathbb {F}_q^{n + t} $ attaining the Singleton bound for $ {\\rm d}_e $ is equivalent to the existence of a lattice of linear codes $ \\mathcal {C}_I \\subseteq \\mathbb {F}_q^n $ , for $ I \\subseteq [t] $ , as above, attaining the Singleton bound for $ {\\rm d} $ .", "This property also holds for the dual codes, as stated in the following proposition.", "Here, we define the dual of a linear code $ \\mathcal {C} \\subseteq \\mathbb {F}_q^n $ as usual: $ \\mathcal {C}^\\perp = \\lbrace \\mathbf {d} \\in \\mathbb {F}_q^n \\mid \\mathbf {c} \\cdot \\mathbf {d}^\\intercal = 0, \\forall \\mathbf {c} \\in \\mathcal {C} \\rbrace $ .", "Proposition 3 Let $ \\mathcal {C}_I \\subseteq \\mathbb {F}_q^n $ , for $ I \\subseteq [t] $ , be a family of linear codes such that the map $ I \\mapsto \\mathcal {C}_I $ is a lattice isomorphism.", "Define now the linear codes $ \\mathcal {D}_I = (\\mathcal {C}_{I^c})^\\perp \\subseteq \\mathbb {F}_q^n $ , for $ I \\subseteq [t] $ , where $ I^c = [t] \\setminus I $ denotes the complement of $ I $ in $ [t] $ .", "Then the map $ I \\mapsto \\mathcal {D}_I $ is also a lattice isomorphism.", "Simply notice that, for $ I,J \\subseteq [t] $ , we have $ \\mathcal {D}_I + \\mathcal {D}_J = (\\mathcal {C}_{I^c})^\\perp + (\\mathcal {C}_{J^c})^\\perp = (\\mathcal {C}_{I^c} \\cap \\mathcal {C}_{J^c})^\\perp = (\\mathcal {C}_{I^c \\cap J^c})^\\perp = (\\mathcal {C}_{(I \\cup J)^c})^\\perp = \\mathcal {D}_{I \\cup J}, $ $ \\mathcal {D}_I \\cap \\mathcal {D}_J = (\\mathcal {C}_{I^c})^\\perp \\cap (\\mathcal {C}_{J^c})^\\perp = (\\mathcal {C}_{I^c} + \\mathcal {C}_{J^c})^\\perp = (\\mathcal {C}_{I^c \\cup J^c})^\\perp = (\\mathcal {C}_{(I \\cap J)^c})^\\perp = \\mathcal {D}_{I \\cap J}.", "$ Assume that $ {\\rm d} $ is a metric such that a linear code attains the Singleton bound if, and only if, so does its dual code.", "In such a case, Proposition REF states that we do not need to check the conditions in Corollary REF for both the primary and dual codes, but only for one of them.", "This is the case of the sum-rank metric [10], and thus of the Hamming and rank metrics in particular." 
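, "As a sanity check of the results above in their simplest instance (the Hamming metric, $ t=2 $ , $ k=3 $ , $ q=5 $ ), the following short brute-force Python sketch (our own illustration, not part of the original arguments) builds the extended generator matrix with the unit columns attached to the rows $ \\mathbf {g}_1, \\mathbf {g}_2 $ and verifies that the extended code and all the codes $ \\mathcal {C}_I $ attain the Singleton bound, in agreement with Corollary REF ; up to a row permutation this is the doubly extended Reed–Solomon matrix recalled in the Introduction:
import itertools

q = 5                                   # small prime field, chosen only for the toy example
a = [1, 2, 3, 4]                        # distinct nonzero evaluation points
g1 = [1, 1, 1, 1]                       # row of 0-th powers (gets the first unit column)
g2 = [pow(x, 2, q) for x in a]          # row of (k-1)-th powers (gets the second unit column)
g3 = [x % q for x in a]                 # remaining row (first powers)

def min_distance(rows, q):
    # brute-force minimum Hamming weight of the linear code spanned by the given rows over F_q
    k, n = len(rows), len(rows[0])
    best = n
    for msg in itertools.product(range(q), repeat=k):
        if all(m == 0 for m in msg):
            continue
        cw = [sum(m * r[j] for m, r in zip(msg, rows)) % q for j in range(n)]
        best = min(best, sum(1 for x in cw if x != 0))
    return best

# doubly extended generator matrix (t = 2): unit columns attached to g1 and g2
Ge = [g1 + [1, 0], g2 + [0, 1], g3 + [0, 0]]
print(min_distance(Ge, q))              # 4 = (n + t) - k + 1, so the extended code is MDS

# the codes C_I of the theorem and corollary above, all of which must be MDS as well
print(min_distance([g3], q))            # C_empty : 4
print(min_distance([g1, g3], q))        # C_{1}   : 3
print(min_distance([g2, g3], q))        # C_{2}   : 3
print(min_distance([g1, g2, g3], q))    # C_{1,2} : 2
"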
], [ "Doubly extended MSRD codes", "In this section, we generalize the construction of doubly extended linearized Reed–Solomon codes from [15] to the general family of MSRD codes from [12].", "Using Corollary REF , we will show that such doubly extended MSRD codes are again MSRD.", "Recall that the sum-rank metric [16] in $ \\mathbb {F}_{q^m}^n $ over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ is defined as a sum of rank metrics, i.e., sum-rank weights are given by $ {\\rm wt}_{SR}(\\mathbf {c}) = \\sum _{i=1}^g {\\rm wt}_R \\left( \\mathbf {c}^{(i)} \\right), $ for $ \\mathbf {c} = \\left( \\mathbf {c}^{(1)}, \\mathbf {c}^{(2)}, \\ldots , \\mathbf {c}^{(g)} \\right) \\in \\mathbb {F}_{q^m}^n $ , where $ \\mathbf {c}^{(i)} \\in \\mathbb {F}_{q^m}^r $ , for $ i \\in [g] $ , and $ n = gr $ .", "Recall that rank weights in $ \\mathbb {F}_{q^m}^r $ are given by $ {\\rm wt}_R(\\mathbf {d}) = \\dim _{\\mathbb {F}_q}( \\langle d_1, d_2, \\ldots , d_r \\rangle _{\\mathbb {F}_q} ) $ , for $ \\mathbf {d} = (d_1, d_2, \\ldots , d_r) \\in \\mathbb {F}_{q^m}^r $ .", "We now give the definition of extended Moore matrices from [12].", "Definition 4 (Extended Moore matrices [12]) Fix positive integers $ \\ell $ and $ \\eta $ .", "Let $ \\mathbf {a} = ( a_1, a_2, \\ldots , a_\\ell ) \\in (\\mathbb {F}_{q^m}^*)^\\ell $ be such that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) \\ne N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_j) $ if $ i \\ne j $ , where $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a) = a \\cdot a^q \\cdots a^{q^{m-1}} $ , for $ a \\in \\mathbb {F}_{q^m} $ .", "For any $ \\beta = ( \\beta _1, \\beta _2, \\ldots , \\beta _\\eta ) \\in \\mathbb {F}_{q^m}^{\\eta } $ and $ k \\in [\\ell \\eta ] $ , we define the extended Moore matrix $ M_k(\\mathbf {a}, \\beta ) \\in \\mathbb {F}_{q^m}^{k \\times (\\ell \\eta )} $ by $ M_k(\\mathbf {a}, \\beta ) = $ $\\left( \\begin{array}{lll|c|lll}\\beta _1 & \\ldots & \\beta _\\eta & \\ldots & \\beta _1 & \\ldots & \\beta _\\eta \\\\\\beta _1^q a_1 & \\ldots & \\beta _\\eta ^q a_1 & \\ldots & \\beta _1^q a_\\ell & \\ldots & \\beta _\\eta ^q a_\\ell \\\\\\beta _1^{q^2} a_1^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^2} a_1^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _1^{q^2} a_\\ell ^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^2} a_\\ell ^{\\frac{q^2-1}{q-1}} \\\\\\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots \\\\\\beta _1^{q^{k-1}} a_1^{\\frac{q^{k-1}-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^{k-1}} a_1^{\\frac{q^{k-1}-1}{q-1}} & \\ldots & \\beta _1^{q^{k-1}} a_\\ell ^{\\frac{q^{k-1}-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^{k-1}} a_\\ell ^{\\frac{q^{k-1}-1}{q-1}} \\\\\\end{array} \\right),$ and we denote by $ \\mathcal {C}_k(\\mathbf {a}, \\beta ) \\subseteq \\mathbb {F}_{q^m}^{\\ell \\eta } $ the $ k $ -dimensional linear code generated by $ M_k(\\mathbf {a}, \\beta ) $ (i.e., the rows of $ M_k(\\mathbf {a}, \\beta ) $ generate the vector space $ \\mathcal {C}_k(\\mathbf {a}, \\beta ) $ ).", "The following result [12] characterizes when a code with an extended Moore matrix as generator or parity-check matrix is MSRD.", "Theorem 2 ([12]) Let $ \\mathbf {a} = ( a_1, a_2, \\ldots , a_\\ell ) \\in (\\mathbb {F}_{q^m}^*)^\\ell $ be as in Definition REF .", "Let $ \\beta = (\\beta _1, \\beta _2, \\ldots , \\beta _{\\mu r}) \\in \\mathbb {F}_{q^m}^{\\mu r} $ , for positive integers $ \\mu $ and $ r $ , and set $ g = \\ell \\mu $ .", "Define the $ \\mathbb {F}_q $ -linear subspace $\\mathcal {H}_i = \\left\\langle \\beta _{(i-1)r+1}, 
\\beta _{(i-1)r+2}, \\ldots , \\beta _{ir} \\right\\rangle _{\\mathbb {F}_q} \\subseteq \\mathbb {F}_{q^m},$ for $ i \\in [\\mu ] $ .", "Given $ k \\in [gr] $ , the code $ \\mathcal {C}_k(\\mathbf {a}, \\beta ) $ from Definition REF is MSRD over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ if, and only if, the following two conditions hold for all $ i \\in [\\mu ] $ : $ \\dim _{\\mathbb {F}_q}(\\mathcal {H}_i) = r $ , and $ \\mathcal {H}_i \\cap \\left( \\sum _{j \\in \\Gamma } \\mathcal {H}_j \\right) = \\lbrace 0 \\rbrace $ , for any set $ \\Gamma \\subseteq [\\mu ] $ , such that $ i \\notin \\Gamma $ and $ |\\Gamma | \\le \\min \\lbrace k,\\mu \\rbrace -1 $ .", "Several constructions of MSRD codes based on Theorem REF were obtained in [12].", "These include linearized Reed–Solomon codes [11] by taking $ \\mu = 1 $ (in that case, Condition 2 is empty and Condition 1 means that $ \\beta _1, \\beta _2, \\ldots , \\beta _r $ are $ \\mathbb {F}_q $ -linearly independent).", "For our purposes, we also need to consider the $ k $ -dimensional linear codes $ \\mathcal {D}_k(\\mathbf {a}, \\beta ) \\subseteq \\mathbb {F}_{q^m}^{\\ell \\eta } $ with generator matrices $ M^\\prime _k(\\mathbf {a}, \\beta ) = $ $\\left( \\begin{array}{lll|c|lll}\\beta _1^q a_1 & \\ldots & \\beta _\\eta ^q a_1 & \\ldots & \\beta _1^q a_\\ell & \\ldots & \\beta _\\eta ^q a_\\ell \\\\\\beta _1^{q^2} a_1^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^2} a_1^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _1^{q^2} a_\\ell ^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^2} a_\\ell ^{\\frac{q^2-1}{q-1}} \\\\\\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots \\\\\\beta _1^{q^k} a_1^{\\frac{q^k-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^k} a_1^{\\frac{q^k-1}{q-1}} & \\ldots & \\beta _1^{q^k} a_\\ell ^{\\frac{q^k-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^k} a_\\ell ^{\\frac{q^k-1}{q-1}} \\\\\\end{array} \\right),$ for $ k \\in [\\ell \\eta ] $ .", "Observe that we have the following inclusion graph: $ \\begin{array}{ccccc}& & \\mathcal {C}_k(\\mathbf {a}, \\beta ) & & \\\\& \\nearrow & & \\nwarrow & \\\\\\mathcal {C}_{k-1}(\\mathbf {a}, \\beta ) & & & & \\mathcal {D}_{k-1}(\\mathbf {a}, \\beta ) \\\\& \\nwarrow & & \\nearrow & \\\\& & \\mathcal {D}_{k-2}(\\mathbf {a}, \\beta ) .", "& &\\end{array} $ The codes $ \\mathcal {C}_k(\\mathbf {a}, \\beta ) $ are MSRD given Conditions 1 and 2 in Theorem REF .", "We now show that the same conditions turn the codes $ \\mathcal {D}_k(\\mathbf {a}, \\beta ) $ into MSRD codes.", "Lemma 5 Let $ \\ell $ , $ \\mu $ and $ r $ be positive integers, let $ \\mathbf {a} = ( a_1, a_2, \\ldots , a_\\ell ) \\in (\\mathbb {F}_{q^m}^*)^\\ell $ and $ \\beta = (\\beta _1, \\beta _2, \\ldots , \\beta _{\\mu r}) \\in \\mathbb {F}_{q^m}^{\\mu r} $ as in Theorem REF , and set $ g = \\ell \\mu $ .", "For $ k \\in [g r] $ , $ \\mathcal {C}_k(\\mathbf {a}, \\beta ) $ is MSRD if, and only if, so is $ \\mathcal {D}_k(\\mathbf {a}, \\beta ) $ , in both cases over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ .", "For $ a, \\beta \\in \\mathbb {F}_{q^m} $ and a positive integer $ i $ , we have that $ \\beta ^{q^i} a^{\\frac{q^i-1}{q-1}} = \\beta ^{q^i} a^{q^{i-1}} \\cdots a^q \\cdot a = \\left( \\beta ^{q^{i-1}} a^{q^{i-2}} \\cdots a^q \\cdot a \\right)^q a = \\left( \\beta ^{q^{i-1}} a^{\\frac{q^{i-2}-1}{q-1}} \\right)^q a .", "$ Hence it holds that $ M^\\prime _k(\\mathbf {a}, \\beta ) = M_k(\\mathbf {a}, \\beta )^q {\\rm diag}(a_1, \\ldots , a_1 | \\ldots | a_\\ell , \\ldots , a_\\ell 
), $ where $ M_k(\\mathbf {a}, \\beta )^q $ means that we raise every entry of $ M_k(\\mathbf {a}, \\beta ) $ to the $ q $ th power, and $ {\\rm diag}(\\cdot ) $ denotes diagonal matrix.", "In particular, the same holds for the corresponding codes, i.e., $ \\mathcal {D}_k(\\mathbf {a}, \\beta ) = \\mathcal {C}_k(\\mathbf {a}, \\beta )^q {\\rm diag}(a_1, \\ldots , a_1 | \\ldots | a_\\ell , \\ldots , a_\\ell ), $ where $ \\mathcal {C}_k(\\mathbf {a}, \\beta )^q $ means that we raise every component of every codeword of $ \\mathcal {C}_k(\\mathbf {a}, \\beta ) $ to the $ q $ th power.", "Now, observe that the map $ \\phi : \\mathbb {F}_{q^m}^{gr} \\longrightarrow \\mathbb {F}_{q^m}^{gr} $ given by $ \\phi \\left( c_1 , \\ldots , c_{\\mu r} | \\ldots | c_{(\\ell - 1)(\\mu r) + 1}, \\ldots , c_{\\ell (\\mu r)} \\right) = \\left( c_1^q a_1 , \\ldots , c_{\\mu r}^q a_1 | \\ldots | c_{(\\ell - 1)(\\mu r) + 1}^q a_\\ell , \\ldots , c_{\\ell (\\mu r)}^q a_\\ell \\right) $ is a semilinear isometry for the sum-rank metric over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ , since $ a_i \\ne 0 $ , for $ i \\in [\\ell ] $ (see [1]).", "Hence the result follows.", "Therefore, we are in the situation of Corollary REF for the sum-rank metric.", "For this reason, we define the following codes.", "Definition 6 Let $ \\mathbf {a} = ( a_1, a_2, \\ldots , a_\\ell ) \\in (\\mathbb {F}_{q^m}^*)^\\ell $ be as in Definition REF .", "Let $ \\beta = ( \\beta _1, \\beta _2, \\ldots , \\beta _\\eta ) \\in \\mathbb {F}_{q^m}^{\\eta } $ be arbitrary, for a positive integer $ \\eta $ .", "For $ k = 2,3, \\ldots , \\ell \\eta $ , we define the doubly extended Moore matrix $ M^e_k(\\mathbf {a}, \\beta ) \\in \\mathbb {F}_{q^m}^{k \\times (\\ell \\eta + 2)} $ by $ M^e_k(\\mathbf {a}, \\beta ) = $ $\\left( \\begin{array}{lll|c|lll|cc}\\beta _1 & \\ldots & \\beta _\\eta & \\ldots & \\beta _1 & \\ldots & \\beta _\\eta & 1 & 0 \\\\\\beta _1^q a_1 & \\ldots & \\beta _\\eta ^q a_1 & \\ldots & \\beta _1^q a_\\ell & \\ldots & \\beta _\\eta ^q a_\\ell & 0 & 0 \\\\\\beta _1^{q^2} a_1^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^2} a_1^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _1^{q^2} a_\\ell ^{\\frac{q^2-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^2} a_\\ell ^{\\frac{q^2-1}{q-1}} & 0 & 0 \\\\\\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\vdots & \\vdots \\\\\\beta _1^{q^{k-1}} a_1^{\\frac{q^{k-1}-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^{k-1}} a_1^{\\frac{q^{k-1}-1}{q-1}} & \\ldots & \\beta _1^{q^{k-1}} a_\\ell ^{\\frac{q^{k-1}-1}{q-1}} & \\ldots & \\beta _\\eta ^{q^{k-1}} a_\\ell ^{\\frac{q^{k-1}-1}{q-1}} & 0 & 1 \\\\\\end{array} \\right),$ and we denote by $ \\mathcal {C}^e_k(\\mathbf {a}, \\beta ) \\subseteq \\mathbb {F}_{q^m}^{\\ell \\eta + 2} $ the $ k $ -dimensional linear code generated by $ M^e_k(\\mathbf {a}, \\beta ) $ .", "Thus, by Corollary REF and Lemma REF , we deduce the following.", "Corollary 7 Let $\\ell $ , $ \\mu $ and $ r $ be positive integers, define $ g = \\ell \\mu $ and $ n = gr $ , and let $ \\mathbf {a} = ( a_1, a_2, \\ldots , a_\\ell ) \\in (\\mathbb {F}_{q^m}^*)^\\ell $ and $ \\beta = (\\beta _1, \\beta _2, \\ldots , \\beta _{\\mu r}) \\in \\mathbb {F}_{q^m}^{\\mu r} $ as in Theorem REF .", "For $ k = 2,3, \\ldots , n $ , $ \\mathcal {C}_k(\\mathbf {a}, \\beta ) \\subseteq \\mathbb {F}_{q^m}^n $ is MSRD (i.e., Conditions 1 and 2 in Theorem REF hold) if, and only if, $ \\mathcal {C}^e_k(\\mathbf {a}, \\beta ) \\subseteq \\mathbb {F}_{q^m}^{n + 2} $ is MSRD for the extended 
sum-rank metric $ {\\rm d}_e( (\\mathbf {c}, c_{n+1}, c_{n+2}), (\\mathbf {d}, d_{n+1}, d_{n+2})) = {\\rm d}_{SR}(\\mathbf {c},\\mathbf {d}) + {\\rm d}_H((c_{n+1}, c_{n+2}), (d_{n+1}, d_{n+2})), $ for $ \\mathbf {c} , \\mathbf {d} \\in \\mathbb {F}_{q^m}^n $ and $ c_{n+1}, c_{n+2}, d_{n+1}, d_{n+2} \\in \\mathbb {F}_{q^m} $ , where $ {\\rm d}_{SR} $ denotes the sum-rank metric in $ \\mathbb {F}_{q^m}^n $ over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ .", "In particular, if $ \\ell = q-1 $ and $ \\beta = (\\beta _1, \\beta _2, \\ldots , $ $ \\beta _{\\mu r}) $ $ \\in \\mathbb {F}_{q^m}^{\\mu r} $ satisfies Conditions 1 and 2 in Theorem REF , then the doubly extended code $ \\mathcal {C}^e_k(\\mathbf {a}, \\beta ) \\subseteq \\mathbb {F}_{q^m}^{n + 2} $ is MSRD as in the corollary above, where $ n = (q-1)\\mu r $ and where we consider in $ \\mathbb {F}_{q^m}^n $ the sum-rank metric over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ , $ g = (q-1)\\mu $ .", "See [12] for seven concrete explicit families of MSRD codes constructed in this way.", "All of them can be doubly extended as mentioned in this paragraph while preserving their MSRD property.", "In particular, choosing $ \\mu = 1 $ , Corollary REF recovers [15] as a particular case for linearized Reed–Solomon codes, which in turn recovers the classical result [6] for classical Reed–Solomon codes." ], [ "Triply extended MSRD codes", "In contrast to the case of doubly extended MSRD codes (Section ), triply extended MSRD codes are not always MSRD, as we show in this section.", "We will only consider 3-dimensional codes.", "We start with cases where triple extension preserves the MSRD property.", "Notice that the case of (3-dimensional) classical Reed–Solomon codes and the Hamming metric in characteristic 2 [9] is recovered from the following theorem by taking $ m=\\mu =r=1 $ and $ \\beta _1 = 1 $ .", "Theorem 3 Let $ m $ be odd, let $ q $ be even, and set $ n = (q-1)\\mu r $ for positive integers $ \\mu $ and $ r $ .", "Let $ \\beta = (\\beta _1, \\ldots , \\beta _{\\mu r}) \\in \\mathbb {F}_{q^m}^{\\mu r} $ satisfy Conditions 1 and 2 in Theorem REF .", "Let $ \\mathbf {a} = ( a_1, a_2, \\ldots , a_{q-1} ) \\in (\\mathbb {F}_{q^m}^*)^{q-1} $ be such that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) \\ne N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_j) $ if $ i \\ne j $ .", "The triply extended code $ \\mathcal {C}_e \\subseteq \\mathbb {F}_{q^m}^{n + 3} $ with generator matrix $ G_e = \\left( \\begin{array}{ccc|c|ccc|ccc}\\beta _1 & \\ldots & \\beta _{\\mu r} & \\ldots & \\beta _1 & \\ldots & \\beta _{\\mu r} & 1 & 0 & 0 \\\\a_1 \\beta _1^q & \\ldots & a_1 \\beta _{\\mu r}^q & \\ldots & a_{q-1} \\beta _1^q & \\ldots & a_{q-1} \\beta _{\\mu r}^q & 0 & 1 & 0 \\\\a_1^{q+1} \\beta _1^{q^2} & \\ldots & a_1^{q+1} \\beta _{\\mu r}^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _1^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _{\\mu r}^{q^2} & 0 & 0 & 1\\end{array} \\right) \\in \\mathbb {F}_{q^m}^{3 \\times (n + 3)} $ is MSRD for the extended sum-rank metric $ {\\rm d}_e( (\\mathbf {c}, \\mathbf {c}^\\prime ), (\\mathbf {d}, \\mathbf {d}^\\prime )) = {\\rm d}_{SR}(\\mathbf {c},\\mathbf {d}) + {\\rm d}_H(\\mathbf {c}^\\prime , \\mathbf {d}^\\prime ), $ for $ \\mathbf {c} , \\mathbf {d} \\in \\mathbb {F}_{q^m}^n $ and $ \\mathbf {c}^\\prime , \\mathbf {d}^\\prime \\in \\mathbb {F}_{q^m}^3 $ , where $ {\\rm d}_{SR} $ denotes the sum-rank metric in $ \\mathbb {F}_{q^m}^n $ over $ \\mathbb {F}_q $ for the length partition $ (g, r) $ , where $ g = (q-1) 
\\mu $ .", "By Corollary REF and Lemma REF , we only need to show that the code with generator matrix $ G = \\left( \\begin{array}{ccc|c|ccc}\\beta _1 & \\ldots & \\beta _{\\mu r} & \\ldots & \\beta _1 & \\ldots & \\beta _{\\mu r} \\\\a_1^{q+1} \\beta _1^{q^2} & \\ldots & a_1^{q+1} \\beta _{\\mu r}^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _1^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _{\\mu r}^{q^2}\\end{array} \\right) \\in \\mathbb {F}_{q^m}^{2 \\times n} $ is MSRD over $ \\mathbb {F}_q $ for the length partition $ (g, r) $ .", "First, since $ q $ is even, if $ a,b \\in \\mathbb {F}_q $ are such that $ a \\ne b $ , then $ a^2 - b^2 = (a-b)^2 \\ne 0 $ , hence $ a^2 \\ne b^2 $ .", "Therefore if $ i \\ne j $ , since $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_i) \\ne N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_j) $ , we deduce that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_i^{q+1}) = N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_i)^2 \\ne N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_j)^2 = N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_j^{q+1}) $ .", "Second, since $ m $ is odd, the map $ \\tau : \\mathbb {F}_{q^m} \\longrightarrow \\mathbb {F}_{q^m} $ given by $ \\tau (a) = a^{q^2} $ , for $ a \\in \\mathbb {F}_{q^m} $ , is a field automorphism such that $ \\lbrace a \\in \\mathbb {F}_{q^m} \\mid a^{q^2} = a \\rbrace = \\mathbb {F}_q $ .", "In particular, $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a) = a \\tau (a) \\cdots \\tau ^{m-1}(a) $ , for $ a \\in \\mathbb {F}_{q^m} $ .", "Hence the generator matrix $ G $ is an extended Moore matrix (Definition REF ) satisfying the conditions in Theorem REF , and therefore the code it generates is MSRD and we are done.", "On the other hand, when $ m $ is even or $ q $ is odd, a triply extended (full-length) linearized Reed–Solomon code is never MSRD.", "Proposition 8 Let $ \\beta _1, \\beta _2 , \\ldots , \\beta _m \\in \\mathbb {F}_{q^m} $ be $ \\mathbb {F}_q $ -linearly independent and let $ \\mathbf {a} = ( a_1, a_2, $ $ \\ldots , $ $ a_{q-1} ) \\in (\\mathbb {F}_{q^m}^*)^{q-1} $ be such that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) \\ne N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_j) $ if $ i \\ne j $ .", "Set $ n = (q-1)m $ .", "If $ m $ is even or $ q $ is odd, then the triply extended code $ \\mathcal {C}_e \\subseteq \\mathbb {F}_{q^m}^{n + 3} $ with generator matrix $ G_e = \\left( \\begin{array}{ccc|c|ccc|ccc}\\beta _1 & \\ldots & \\beta _m & \\ldots & \\beta _1 & \\ldots & \\beta _m & 1 & 0 & 0 \\\\a_1 \\beta _1^q & \\ldots & a_1 \\beta _m^q & \\ldots & a_{q-1} \\beta _1^q & \\ldots & a_{q-1} \\beta _m^q & 0 & 1 & 0 \\\\a_1^{q+1} \\beta _1^{q^2} & \\ldots & a_1^{q+1} \\beta _m^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _1^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _m^{q^2} & 0 & 0 & 1\\end{array} \\right) \\in \\mathbb {F}_{q^m}^{3 \\times (n + 3)} $ is not MSRD for the extended sum-rank metric $ {\\rm d}_e( (\\mathbf {c}, \\mathbf {c}^\\prime ), (\\mathbf {d}, \\mathbf {d}^\\prime )) = {\\rm d}_{SR}(\\mathbf {c},\\mathbf {d}) + {\\rm d}_H(\\mathbf {c}^\\prime , \\mathbf {d}^\\prime ), $ for $ \\mathbf {c} , \\mathbf {d} \\in \\mathbb {F}_{q^m}^n $ and $ \\mathbf {c}^\\prime , \\mathbf {d}^\\prime \\in \\mathbb {F}_{q^m}^3 $ , where $ {\\rm d}_{SR} $ denotes the sum-rank metric in $ \\mathbb {F}_{q^m}^n $ over $ \\mathbb {F}_q $ for the length partition $ (q-1, m) $ .", "We first consider the case where $ m $ is even.", "Since $ \\mathbb {F}_{q^2} \\subseteq \\mathbb {F}_{q^m} $ in this case, there exists an invertible matrix $ A \\in \\mathbb {F}_q^{m \\times m} $ such 
that the first two components of $ (\\beta _1, \\beta _2, \\ldots , \\beta _m) A \\in \\mathbb {F}_{q^m}^m $ lie in $ \\mathbb {F}_{q^2} $ .", "Since such a multiplication constitutes a linear sum-rank isometry, we may assume that $ \\beta _1, \\beta _2 \\in \\mathbb {F}_{q^2} $ without loss of generality.", "Let $G = \\left( \\begin{array}{ccc|c|ccc}\\beta _1 & \\ldots & \\beta _m & \\ldots & \\beta _1 & \\ldots & \\beta _m \\\\a_1^{q+1} \\beta _1^{q^2} & \\ldots & a_1^{q+1} \\beta _m^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _1^{q^2} & \\ldots & a_{q-1}^{q+1} \\beta _m^{q^2}\\end{array} \\right) \\in \\mathbb {F}_{q^m}^{2 \\times n}.$ Since $ \\beta _i - \\beta _i^{q^2} = 0 $ ($ \\beta _i \\in \\mathbb {F}_{q^2} $ ), for $ i = 1,2 $ , we conclude that the codeword $ ( a_1^{q+1}, -1 ) G $ has sum-rank weight at most $ n-2 $ , hence the code generated by $ G $ is not MSRD over $ \\mathbb {F}_q $ for the length partition $ (q-1,m) $ .", "Thus the code generated by $ G_e $ is not MSRD with respect to $ {\\rm d}_e $ by Corollary REF .", "We now consider the case where both $ q $ and $ m $ are odd.", "By assumption, we have that $ \\lbrace N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) \\mid i \\in [q-1] \\rbrace = \\mathbb {F}_q^* $ .", "Since $ q $ is odd, there exist $ 1 \\le i < j \\le q-1 $ such that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) = - N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_j) $ .", "In particular, $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_i^{q+1}) = N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_i)^2 = N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_j)^2 = N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_j^{q+1}).", "$ Consider the matrix $ G $ as in (REF ).", "Since $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_i^{q+1}) = N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q} (a_j^{q+1}) $ and 2 and $ m $ are coprime, there exists $ \\beta \\in \\mathbb {F}_{q^m}^* $ such that $ a_i^{q+1} \\beta = a_j^{q+1} \\beta ^{q^2} $ by Hilbert's Theorem 90 [7].", "Now, there exist invertible matrices $ A_i, A_j \\in \\mathbb {F}_q^{m \\times m} $ such that 1 is the first component of $ (\\beta _1, \\beta _2, \\ldots , \\beta _m) A_i $ and $ \\beta $ is the first component of $ (\\beta _1, \\beta _2, \\ldots , \\beta _m) A_j $ .", "Let $ A_l = I_m $ for $ l \\in [q-1] \\setminus \\lbrace i,j \\rbrace $ .", "Denoting $ {\\rm diag}\\left(A_1,A_2, \\ldots , A_{q-1} \\right) = \\left( \\begin{array}{cccc}A_1 & 0 & \\ldots & 0 \\\\0 & A_2 & \\ldots & 0 \\\\\\vdots & \\vdots & \\ddots & \\vdots \\\\0 & 0 & \\ldots & A_{q-1}\\end{array} \\right) \\in \\mathbb {F}_q^{n \\times n} , $ we deduce that $ G \\cdot {\\rm diag}\\left(A_1,A_2, \\ldots , A_{q-1} \\right) $ contains the submatrix $ \\left( \\begin{array}{cc}1 & \\beta \\\\a_i^{q+1} & a_j^{q+1} \\beta ^{q^2}\\end{array} \\right), $ which is not invertible since $ a_i^{q+1} \\beta = a_j^{q+1} \\beta ^{q^2} $ .", "Since multiplying by the invertible block diagonal matrix $ {\\rm diag} \\left(A_1,A_2, \\ldots , A_{q-1} \\right) \\in \\mathbb {F}_q^{n \\times n} $ constitutes a linear sum-rank isometry, we deduce that the code generated by $ G $ is not MSRD over $ \\mathbb {F}_q $ for the length partition $ (q-1,m) $ .", "Thus the code generated by $ G_e $ is not MSRD with respect to $ {\\rm d}_e $ by Corollary REF .", "Remark 9 Notice that Proposition REF works with the same proof in more general cases, where we consider the sum-rank metric in $ \\mathbb {F}_{q^m}^n $ , $ n = gr $ , for the length partition $ (g,r) $ , $ g = (q-1)\\mu $ , using $ \\beta = (\\beta _1, \\ldots , \\beta
_{\\mu r}) \\in \\mathbb {F}_{q^m}^{\\mu r} $ satisfying Conditions 1 and 2 in Theorem REF , under the following assumptions: 1) $ m $ is even and $ \\mathbb {F}_{q^2} \\subseteq \\mathcal {H}_i $ for some $ i \\in [\\mu ] $ ; or 2) $ q $ and $ m $ are odd and $ \\bigcup _{i=1}^\\mu \\mathcal {H}_i = \\mathbb {F}_{q^m} $ .", "Here, we define $ \\mathcal {H}_i $ , for $ i \\in [\\mu ] $ , as in (REF ).", "Since the linearized Reed–Solomon code case is $ \\mu = 1 $ , both conditions on the (single) subspace $ \\mathcal {H}_1 $ hold when $ m $ is even or $ q $ and $ m $ are odd." ], [ "A negative result in the sum-rank metric", "Up to this point, we have studied extensions of a metric $ {\\rm d} $ by adding a Hamming-metric component $ {\\rm d}_H $ .", "The reader may wonder if the results in Section also hold if we extend $ {\\rm d} $ by adding another metric, for instance, the rank metric.", "In this section, we give a negative answer to this question by trying to doubly extend MSRD codes as in Theorem REF (for the largest value of $ \\ell $ , i.e., $ \\ell = q-1 $ ) by adding a non-trivial rank-metric block and showing that the resulting code is not MSRD even if the conditions in Corollary REF hold.", "Proposition 10 Let $ a_1, a_2, \\ldots , a_{q-1} \\in \\mathbb {F}_{q^m}^* $ be such that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) \\ne N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_j) $ if $ i \\ne j $ .", "Let $ \\beta = (\\beta _1, \\beta _2, \\ldots , \\beta _{\\mu r}) \\in \\mathbb {F}_{q^m}^{\\mu r} $ and $ \\mathcal {H}_i = \\langle \\beta _{(i-1)r+1} , \\ldots , \\beta _{ir} \\rangle _{\\mathbb {F}_q} \\subseteq \\mathbb {F}_{q^m} $ , for $ i \\in [\\mu ] $ , satisfy Conditions 1 and 2 in Theorem REF .", "Consider the extended sum-rank metric $ {\\rm d}_e( (\\mathbf {c}, c_{n+1}, c_{n+2}), (\\mathbf {d}, d_{n+1}, d_{n+2})) = {\\rm d}_{SR}(\\mathbf {c},\\mathbf {d}) + {\\rm d}_R((c_{n+1}, c_{n+2}), (d_{n+1}, d_{n+2})), $ for $ \\mathbf {c} , \\mathbf {d} \\in \\mathbb {F}_{q^m}^n $ and $ c_{n+1}, c_{n+2}, d_{n+1}, d_{n+2} \\in \\mathbb {F}_{q^m} $ , where $ {\\rm d}_{SR} $ denotes the sum-rank metric in $ \\mathbb {F}_{q^m}^n $ over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ , where $ g = (q-1)\\mu $ and $ n = gr $ .", "Let $ a,b,c,d \\in \\mathbb {F}_{q^m} $ with $ (0,0) \\notin \\lbrace (a,c), (b,d), (a,b), (c,d) \\rbrace $ .", "Then the extended 2-dimensional code $ \\mathcal {C}_e $ with generator matrix $ G_e = \\left( \\begin{array}{ccc|ccc|c|ccc|cc}\\beta _1 & \\ldots & \\beta _{\\mu r} & \\beta _1 & \\ldots & \\beta _{\\mu r} & \\ldots & \\beta _1 & \\ldots & \\beta _{\\mu r} & a & c \\\\a_1 \\beta _1^q & \\ldots & a_1 \\beta _{\\mu r}^q & a_2 \\beta _1^q & \\ldots & a_2 \\beta _{\\mu r}^q & \\ldots & a_{q-1} \\beta _1^q & \\ldots & a_{q-1} \\beta _{\\mu r}^q & b & d\\end{array} \\right) $ is MSRD for $ {\\rm d}_e $ if, and only if, $ - \\tau ^{-1} \\notin \\bigcup _{i=1}^{q-1} \\left\\lbrace a_i \\beta ^{q-1} \\left| \\beta \\in \\bigcup _{j=1}^\\mu \\mathcal {H}_j \\setminus \\lbrace 0 \\rbrace \\right.", "\\right\\rbrace , $ for every $ \\tau \\in \\mathbb {F}_{q^m}^* $ such that $ a + \\tau b $ and $ c + \\tau d $ are $ \\mathbb {F}_q $ -linearly dependent.", "In particular, if $ \\bigcup _{j=1}^\\mu \\mathcal {H}_j = \\mathbb {F}_{q^m} $ , then $ \\mathcal {C}_e $ is not MSRD for all $ a,b,c,d \\in \\mathbb {F}_{q^m} $ .", "First of all, the reader may verify that there exists $ \\tau \\in \\mathbb {F}_{q^m}^* $ such that $ a + \\tau b $ and $ c + \\tau d $ are $ 
\\mathbb {F}_q $ -linearly dependent, since $ (0,0) \\notin \\lbrace (a,c), (b,d), (a,b), (c,d) \\rbrace $ .", "Let $ \\mathbf {g}_1, \\mathbf {g}_2 \\in \\mathbb {F}_{q^m}^n $ be the first and second rows of $ G_e $ , respectively, projected on the first $ n $ coordinates.", "If $ \\tau \\in \\mathbb {F}_{q^m}^* $ is such that $ a + \\tau b $ and $ c + \\tau d $ are $ \\mathbb {F}_q $ -linearly independent, then we have that $ {\\rm wt}_e (\\mathbf {g}_1 + \\tau \\mathbf {g}_2, a + \\tau b, c + \\tau d) \\ge n+1.", "$ Therefore $ \\mathcal {C}_e $ is not MSRD if, and only if, $ {\\rm wt}_{SR}(\\mathbf {g}_1 + \\tau \\mathbf {g}_2) = n-1 $ , for some $ \\tau \\in \\mathbb {F}_{q^m}^* $ such that $ a + \\tau b $ and $ c + \\tau d $ are $ \\mathbb {F}_q $ -linearly dependent.", "Fix one such $ \\tau $ .", "We have $ {\\rm wt}_{SR}(\\mathbf {g}_1 + \\tau \\mathbf {g}_2) = n-1 $ if, and only if, there exist $ \\lambda _1, \\ldots , \\lambda _r \\in \\mathbb {F}_q $ , not all zero, such that $ \\sum _{k=1}^r \\lambda _k \\beta _{(j-1)r+k} + \\tau a_i \\sum _{k=1}^r \\lambda _k \\beta _{(j-1)r+k}^q = 0, $ for some $ j \\in [\\mu ] $ and some $ i \\in [q-1] $ .", "Let $ \\beta = \\sum _{k=1}^r \\lambda _k \\beta _{(j-1)r+k} \\in \\mathcal {H}_j \\setminus \\lbrace 0 \\rbrace $ .", "Then the equation above is simply $ -\\tau ^{-1} = a_i \\beta ^{q-1} $ .", "This is possible for some $ i \\in [q-1] $ and some $ \\beta \\in \\mathcal {H}_j \\setminus \\lbrace 0 \\rbrace $ if, and only if, $ -\\tau ^{-1} \\in \\bigcup _{i=1}^{q-1} \\left\\lbrace a_i \\beta ^{q-1} \\left| \\beta \\in \\bigcup _{j=1}^\\mu \\mathcal {H}_j \\setminus \\lbrace 0 \\rbrace \\right.", "\\right\\rbrace , $ and we are done.", "Finally, assume that $ \\bigcup _{j=1}^\\mu \\mathcal {H}_j = \\mathbb {F}_{q^m} $ .", "For $ \\tau \\in \\mathbb {F}_{q^m}^* $ , there exists $ i \\in [q-1] $ such that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(-\\tau ^{-1}) $ $ = N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) $ .", "By Hilbert's Theorem 90, there exists $ \\beta \\in \\mathbb {F}_{q^m}^* = \\bigcup _{j=1}^\\mu \\mathcal {H}_j \\setminus \\lbrace 0 \\rbrace $ such that $ -\\tau ^{-1} = a_i \\beta ^{q-1} $ and we conclude that $ \\mathcal {C}_e $ is not MSRD when $ \\bigcup _{j=1}^\\mu \\mathcal {H}_j = \\mathbb {F}_{q^m} $ .", "In the case where $ (\\beta _1,\\beta _2, \\ldots , \\beta _{\\mu r}) $ is constructed using field reduction (as in the following lemma, see also [12]), we have the following easy criterion to determine when $ \\bigcup _{j=1}^\\mu \\mathcal {H}_j = \\mathbb {F}_{q^m} $ .", "Lemma 11 Let $ m = r \\rho $ , for positive integers $ r $ and $ \\rho $ , and let $ (\\beta _{(j-1)r+1} , \\ldots , \\beta _{jr}) = \\gamma _j (\\alpha _1, \\ldots , \\alpha _r) $ , for $ j \\in [\\mu ] $ , where $ \\alpha _1, \\ldots , \\alpha _r \\in \\mathbb {F}_{q^r} $ are $ \\mathbb {F}_q $ -linearly independent, and $ \\gamma _1, \\ldots , \\gamma _\\mu \\in \\mathbb {F}_{q^m}^* $ are such that $ \\gamma _i $ and $ \\gamma _j $ are $ \\mathbb {F}_{q^r} $ -linearly independent if $ i \\ne j $ .", "Define $ \\mathcal {H}_j = \\langle \\beta _{(j-1)r+1} , \\ldots , \\beta _{jr} \\rangle _{\\mathbb {F}_q} \\subseteq \\mathbb {F}_{q^m} $ , for $ j \\in [\\mu ] $ .", "In this setting, we have $ \\bigcup _{j=1}^\\mu \\mathcal {H}_j = \\mathbb {F}_{q^m} $ if, and only if, $ \\mu = (q^m-1) / (q^r-1) $ .", "In this case, the condition $ \\bigcup _{j=1}^\\mu \\mathcal {H}_j = \\mathbb {F}_{q^m} $ holds if, and only if, $ \\lbrace [\\gamma _1], \\ldots , 
[\\gamma _\\mu ] \\rbrace = \\mathbb {P}_{\\mathbb {F}_{q^r}}(\\mathbb {F}_{q^m}) $ , where $ [\\gamma ] = \\lbrace \\lambda \\gamma \\mid \\lambda \\in \\mathbb {F}_{q^r}^* \\rbrace $ is the projective point associated to $ \\gamma \\in \\mathbb {F}_{q^m}^* $ over $ \\mathbb {F}_{q^r} $ .", "Now since $ \\gamma _i $ and $ \\gamma _j $ are $ \\mathbb {F}_{q^r} $ -linearly independent if $ i \\ne j $ , then $ [\\gamma _1], \\ldots , [\\gamma _\\mu ] $ are distinct projective points.", "Therefore they form the whole projective space if, and only if, there are $ (q^m-1) / (q^r-1) $ of them.", "This implies that Proposition REF holds for 2-dimensional (full-length) linearized Reed–Solomon codes (the case $ r = m $ and $ \\mu = \\rho = 1 $ , see [12]) and the more general family of MSRD codes obtained from Hamming codes given in [12], which are the longest known 2-dimensional linear MSRD codes.", "In other words, those two families of 2-dimensional MSRD codes may not be doubly extended as in Proposition REF .", "In the case $ r = 2 $ , it was known that the latter family could not be doubly extended as in Proposition REF since their number of blocks (the parameter $ g = (q-1) \\mu $ ) attains the upper bound from [3] since $ g = (q-1)(q^m-1) / (q^r-1) - 1 $ in this case.", "The fact that it may not be doubly extended for $ r \\ge 3 $ is new." ], [ "One-weight codes", "In this section, we give necessary and sufficient conditions for the doubly extended MSRD codes from Corollary REF to be one-weight codes (or constant-weight codes), that is, such that all of their codewords have the same weight (thus equal to the minimum distance of the code).", "The next proposition recovers [15] for linearized Reed–Solomon codes by taking $ \\mu = 1 $ .", "Proposition 12 Let $ a_1, a_2, \\ldots , a_{q-1} \\in \\mathbb {F}_{q^m}^* $ be such that $ N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_i) \\ne N_{\\mathbb {F}_{q^m}/\\mathbb {F}_q}(a_j) $ if $ i \\ne j $ .", "Let $ \\beta = (\\beta _1, \\beta _2, \\ldots , \\beta _{\\mu r}) \\in \\mathbb {F}_{q^m}^{\\mu r} $ and $ \\mathcal {H}_i = \\langle \\beta _{(i-1)r+1} , \\ldots , \\beta _{ir} \\rangle _{\\mathbb {F}_q} \\subseteq \\mathbb {F}_{q^m} $ , for $ i \\in [\\mu ] $ , satisfy Conditions 1 and 2 in Theorem REF .", "Consider the extended sum-rank metric $ {\\rm d}_e( (\\mathbf {c}, c_{n+1}, c_{n+2}), (\\mathbf {d}, d_{n+1}, d_{n+2})) = {\\rm d}_{SR}(\\mathbf {c},\\mathbf {d}) + {\\rm d}_H((c_{n+1}, c_{n+2}), (d_{n+1}, d_{n+2})), $ for $ \\mathbf {c} , \\mathbf {d} \\in \\mathbb {F}_{q^m}^n $ and $ c_{n+1}, c_{n+2}, d_{n+1}, d_{n+2} \\in \\mathbb {F}_{q^m} $ , where $ {\\rm d}_{SR} $ denotes the sum-rank metric in $ \\mathbb {F}_{q^m}^n $ over $ \\mathbb {F}_q $ for the length partition $ (g,r) $ , where $ g = (q-1) \\mu $ and $ n = gr $ .", "Then the extended 2-dimensional MSRD code $ \\mathcal {C}_e $ with generator matrix $ G_e = \\left( \\begin{array}{ccc|ccc|c|ccc|cc}\\beta _1 & \\ldots & \\beta _{\\mu r} & \\beta _1 & \\ldots & \\beta _{\\mu r} & \\ldots & \\beta _1 & \\ldots & \\beta _{\\mu r} & 1 & 0 \\\\a_1 \\beta _1^q & \\ldots & a_1 \\beta _{\\mu r}^q & a_2 \\beta _1^q & \\ldots & a_2 \\beta _{\\mu r}^q & \\ldots & a_{q-1} \\beta _1^q & \\ldots & a_{q-1} \\beta _{\\mu r}^q & 0 & 1\\end{array} \\right) $ is a one-weight code for $ {\\rm d}_e $ if, and only if, $ \\bigcup _{i=1}^\\mu \\mathcal {H}_i = \\mathbb {F}_{q^m} $ .", "Let $ \\mathbf {g}_1, \\mathbf {g}_2 \\in \\mathbb {F}_{q^m}^{n+2} $ be the first and second rows of $ G_e $ , respectively.", 
"Since $ {\\rm d}_e(\\mathcal {C}_e) = n+1 $ , we need to show that $ {\\rm wt}_e(\\mathbf {g}_1 + \\lambda \\mathbf {g}_2) = n+1 $ , for all $ \\lambda \\in \\mathbb {F}_{q^m}^* $ .", "Fix $ \\lambda \\in \\mathbb {F}_{q^m}^* $ .", "We need to show that there exist $ \\lambda _1, \\lambda _2, \\ldots , \\lambda _r \\in \\mathbb {F}_q $ , not all zero, such that $ \\sum _{k=1}^r \\lambda _k \\beta _{(j-1)r+k} + \\lambda a_i \\sum _{k=1}^r \\lambda _k \\beta _{(j-1)r+k}^q = 0, $ for some $ j \\in [\\mu ] $ and some $ i \\in [q-1] $ .", "Let $ \\beta = \\sum _{k=1}^r \\lambda _k \\beta _{(j-1)r+k} \\in \\mathcal {H}_j \\setminus \\lbrace 0 \\rbrace $ .", "Then the equation above is simply $ -\\lambda ^{-1} = a_i \\beta ^{q-1} $ .", "This is possible for all $ \\lambda \\in \\mathbb {F}_{q^m}^* $ if, and only if, $\\mathbb {F}_{q^m}^* = \\bigcup _{i=1}^{q-1} \\left\\lbrace a_i \\beta ^{q-1} \\left| \\beta \\in \\bigcup _{j=1}^\\mu \\mathcal {H}_j \\setminus \\lbrace 0 \\rbrace \\right.", "\\right\\rbrace .$ Since $ \\beta ^{q-1} = \\gamma ^{q-1} $ holds for $ \\beta ,\\gamma \\in \\mathbb {F}_{q^m}^* $ if, and only if, $ \\beta /\\gamma \\in \\mathbb {F}_q^* $ , it is easy to see that (REF ) holds if, and only if, $ \\bigcup _{i=1}^\\mu \\mathcal {H}_i = \\mathbb {F}_{q^m} $ , and we are done.", "In the case where $ \\beta $ is constructed using field reduction as in Lemma REF , we see that the extended 2-dimensional MSRD code $ \\mathcal {C}_e $ is a one-weight code for $ {\\rm d}_e $ if, and only if, $ \\mu = (q^m-1) / (q^r-1) $ .", "In other words, 2-dimensional doubly extended linearized Reed–Solomon codes and the doubly extended MSRD codes based on Hamming codes as in [12] are all one-weight codes for the extended metric $ {\\rm d}_e $ .", "Finally, we show that triply extended MSRD codes are never one-weight codes for $ q = 2 $ .", "Due to the results from Section , we only consider the case where $ m $ is odd.", "Notice that in this case the vector $ \\mathbf {a} $ is of length one and we may simply consider it as $ \\mathbf {a} = (1) $ .", "Proposition 13 Let $ q = 2 $ , let $ m \\ge 3 $ be odd and set $ n = \\mu r $ for positive integers $ \\mu $ and $ r $ .", "Let $ \\beta = (\\beta _1, \\beta _2, \\ldots , \\beta _{\\mu r}) \\in \\mathbb {F}_{2^m}^{\\mu r} $ satisfy Conditions 1 and 2 in Theorem REF .", "The triply extended code $ \\mathcal {C}_e \\subseteq \\mathbb {F}_{2^m}^{n + 3} $ with generator matrix $ G_e = \\left( \\begin{array}{cccc|ccc}\\beta _1 & \\beta _2 & \\ldots & \\beta _{\\mu r} & 1 & 0 & 0 \\\\\\beta _1^2 & \\beta _2^2 & \\ldots & \\beta _{\\mu r}^2 & 0 & 1 & 0 \\\\\\beta _1^4 & \\beta _2^4 & \\ldots & \\beta _{\\mu r}^4 & 0 & 0 & 1\\end{array} \\right) \\in \\mathbb {F}_{2^m}^{3 \\times (n + 3)} $ is MSRD but not a one-weight code for the extended sum-rank metric $ {\\rm d}_e( (\\mathbf {c}, \\mathbf {c}^\\prime ), (\\mathbf {d}, \\mathbf {d}^\\prime )) = {\\rm d}_{SR}(\\mathbf {c},\\mathbf {d}) + {\\rm d}_H(\\mathbf {c}^\\prime , \\mathbf {d}^\\prime ), $ for $ \\mathbf {c} , \\mathbf {d} \\in \\mathbb {F}_{2^m}^n $ and $ \\mathbf {c}^\\prime , \\mathbf {d}^\\prime \\in \\mathbb {F}_{2^m}^3 $ , where $ {\\rm d}_{SR} $ denotes the sum-rank metric in $ \\mathbb {F}_{2^m}^n $ over $ \\mathbb {F}_2 $ for the length partition $ (\\mu , r) $ .", "The fact that $ \\mathcal {C}_e $ is MSRD for $ {\\rm d}_e $ is Theorem REF .", "Now, since $ {\\rm d}_e(\\mathcal {C}_e) = n-2 $ , it is enough to show that there exists a codeword $ \\mathbf {c} \\in \\mathcal {C}_e $ with 
$ {\\rm wt}_e (\\mathbf {c}) = n $ .", "For $ \\lambda , \\nu \\in \\mathbb {F}_{2^m}^* $ , let $ \\mathbf {c}_{\\lambda ,\\nu } = ( \\lambda \\beta _1 + \\nu \\beta _1^2 + \\beta _1^4, \\ldots , \\lambda \\beta _{\\mu r} + \\nu \\beta _{\\mu r}^2 + \\beta _{\\mu r}^4, \\lambda , \\nu , 1 ) \\in \\mathcal {C}_e.", "$ Since $ \\lambda \\ne 0 \\ne \\nu $ , it holds that $ {\\rm wt}_e (\\mathbf {c}_{\\lambda ,\\nu }) < n $ if, and only if, there exists an index $ i \\in [\\mu ] $ and scalars $ \\lambda _1, \\lambda _2, \\ldots , \\lambda _r \\in \\mathbb {F}_2 $ , not all zero, such that $ \\sum _{j=1}^r \\lambda _j \\left( \\lambda \\beta _{(i-1)r+j} + \\nu \\beta _{(i-1)r+j}^2 + \\beta _{(i-1)r+j}^4 \\right) = 0.", "$ By considering $ \\beta = \\sum _{j=1}^r \\lambda _j \\beta _{(i-1)r+j} \\in \\mathbb {F}_{q^m}^* $ , we have that $ {\\rm wt}_e (\\mathbf {c}_{\\lambda ,\\nu }) < n $ if, and only if, there exists an index $ i \\in [\\mu ] $ and $ \\beta \\in \\mathcal {H}_i = \\langle \\beta _{(i-1)r+1}, \\beta _{(i-1)r+2}, \\ldots , \\beta _{ir} \\rangle _{\\mathbb {F}_2} \\setminus \\lbrace 0 \\rbrace $ such that $ \\lambda \\beta + \\nu \\beta ^2 + \\beta ^4 = 0 $ , that is, $ \\beta ^3 + \\nu \\beta + \\lambda = 0 $ .", "Now, since by [8] there are $ N_{2^m}(3) = (2^{3m} - 2^m)/3 > 2 \\cdot 2^{2m} $ irreducible polynomials in $ \\mathbb {F}_{2^m}[x] $ , then there is at least one irreducible polynomial $ f = x^3 + a x^2 + b x + c \\in \\mathbb {F}_{2^m}[x] $ such that $ b \\ne a^2 $ and $ b \\ne 1 $ .", "Furthermore, $ c \\ne 0 $ since $ f $ is irreducible.", "Define $ g = f(x+a) = x^3 + (a^2+b)x + c(b+1) $ , which is irreducible since so is $ f $ .", "Let $ \\nu = a^2+b $ and $ \\lambda = c(b+1) $ , which satisfy $ \\lambda \\ne 0 \\ne \\nu $ .", "Since $ g $ is irreducible of degree 3, there is no $ \\beta \\in \\mathbb {F}_{2^m} $ such that $ g(\\beta ) = \\beta ^3 + \\nu \\beta + \\lambda = 0 $ .", "In other words, the codeword $ \\mathbf {c}_{\\lambda ,\\nu } \\in \\mathcal {C}_e $ as above satisfies $ {\\rm wt}_e (\\mathbf {c}_{\\lambda ,\\nu }) = n $ , and we are done." ], [ "Acknowledgement", "The author gratefully acknowledges the support from a María Zambrano contract by the University of Valladolid, Spain (Contract no.", "E-47-2022-0001486)." ] ]
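As a quick, self-contained sanity check of Proposition 13 above (this sketch and its parameter choices are additions for illustration, not taken from the paper or its references), the smallest admissible instance q = 2, m = 3, mu = 1, r = 3 (so n = 3 and the code lives in F_8^6) can be brute-forced in a few lines of Python. Arithmetic in F_8 is hand-rolled with modulus x^3 + x + 1, and beta = (1, alpha, alpha^2) is assumed to satisfy Conditions 1 and 2 of the construction (for mu = 1 these amount to F_2-linear independence, as in the hypotheses of Proposition 8). Enumerating all 511 nonzero codewords, one expects a minimum extended weight of n + 1 = 4 (the MSRD property) together with codewords of weight n + 3 = 6 (so the code is not one-weight).

# Illustrative brute-force check, assuming beta = (1, alpha, alpha^2) is admissible for mu = 1.
from itertools import product

MOD = 0b1011  # x^3 + x + 1, used to build GF(8); elements are stored as 3-bit integers

def gf_mul(a, b):
    # Carry-less multiplication in GF(8) followed by reduction modulo x^3 + x + 1.
    res = 0
    for i in range(3):
        if (b >> i) & 1:
            res ^= a << i
    for i in range(5, 2, -1):  # clear any bits of degree >= 3
        if (res >> i) & 1:
            res ^= MOD << (i - 3)
    return res

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def rank_gf2(rows):
    # Rank over F_2 of a list of GF(8) elements viewed as vectors in F_2^3.
    rank = 0
    for bit in (4, 2, 1):
        pivot = next((r for r in rows if r & bit), None)
        if pivot is None:
            continue
        rows = [r ^ pivot if (r & bit) else r for r in rows if r != pivot]
        rank += 1
    return rank

beta = [1, 2, 4]  # 1, alpha, alpha^2 in the integer encoding
G = [beta + [1, 0, 0],                                  # generator matrix G_e of Proposition 13
     [gf_pow(b, 2) for b in beta] + [0, 1, 0],
     [gf_pow(b, 4) for b in beta] + [0, 0, 1]]

weights = set()
for lam in product(range(8), repeat=3):
    if lam == (0, 0, 0):
        continue
    cw = [gf_mul(lam[0], G[0][j]) ^ gf_mul(lam[1], G[1][j]) ^ gf_mul(lam[2], G[2][j])
          for j in range(6)]
    wt = rank_gf2(cw[:3]) + sum(1 for x in cw[3:] if x)  # sum-rank block + Hamming tail
    weights.add(wt)

print("minimum extended weight:", min(weights))   # expected: 4 = n + 1 (MSRD)
print("weights that occur:", sorted(weights))     # expected to include 6 = n + 3 (not one-weight)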
2212.05528
[ [ "From Bloch Oscillations to a Steady-State Current in Strongly Biased\n Mesoscopic Devices" ], [ "Abstract It has long been known that quantum particles in a periodic lattice exhibit an oscillatory motion that is solely driven by a constant and uniform force field.", "In a strongly biased mesoscopic device, this would appear as an ongoing time-dependent current oscillation (a Bloch oscillation) but, even when electrons can move coherently and without scattering, a steady-state regime of charge transport (a Landauer current) have been seen to quickly emerge.", "Here, we theoretically investigate the non-equilibrium current dynamics of a strongly biased two-terminal mesoscopic device, in order to show that such a system can exhibit Bloch oscillations as a transient regime that relaxes into a Landauer steady-state from charge being drained into the leads.", "Analytical results from the one-dimensional Wannier-Stark ladder problem are combined with numerical quantum time-evolution of a tight-binding toy model with finite leads to characterize the decay times of transient Bloch oscillations and establish the conditions under which they can occur." ], [ "Introduction", "One way the quantum nature of electrons gets fully revealed is within solid-state systems in the mesoscopic regime [1], [2].", "Generally, these are nanoscale devices at very low temperatures, in which the electronic wavefunctions can maintain their phase-coherence throughout the device, thus allowing for quantum interference effects to be observed.", "A  paradigmatic  mesoscopic  physics' experiment measures the charge carrier transport across a device that is connected to a few ideal metallic contacts (or leads) that host different electrostatic potentials and are placed in some geometrical arrangement.", "As  first  shown  by  Landauer [3], [4] (and later generalized by Büttiker [5]), the steady-state current between any pair of leads in this non-equilibrium scenario is proportional to the quantum transmittance of the device: a non-local property that is sample-specific and strongly depends on the precise geometry of  the  device [6], [7].", "Assuming that the device is in a steady-state of charge transport, the (Landauer) current can be derived from its Hamiltonian [8], [9] through recursive methods in which the leads enter as boundary conditions [10], [11], [12], [13], [14], [15], [16], [17], [18] that locally connect the system to free fermion baths [19], [20].", "Additionally,  in the absence of bound states [21], [22], this non-equilibrium steady-state is naturally achieved by time-evolving the mesoscopic system when acted by a perturbation that drives the current.", "This has been theoretically demonstrated in systems with and without inelastic mechanisms, assuming that the current was driven by either the lead-sample couplings (partitioned setup) [23] or a static electric field that is being applied across the device (partition-free setup) [24], [25], [26].", "Besides the transport steady-state, the preceding transient dynamics is also the subject of an increasing interest [22], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38], [39], having been recently deemed important to unveil some quantum features that would be washed out in the long-time limit.", "Two remarkable examples of this are:  (i)  the  ability  to distinguish Andreev and quasi-Majorana states from quantum transport data in superconducting nano-wires [33],  and  (ii)  the description of time-dependent radiation from biased 
nano-antennas [37].", "All  in  all,  these studies were made possible by the recent development of numerical time-dependent Landauer-Büttiker methods [40], [28], [41], [32], [36], [42], such as the ones that involve coupling the mesoscopic system to “finite leads” which are then evolved as a whole isolated quantum system.", "Although steady-state currents can certainly display many signatures of quantum coherence in the underlying motion, the potential for oscillatory currents driven by static electric fields is possibly the most astonishing quantum feature of electrons traveling through a crystal.", "This is a long-established phenomenon called Bloch oscillation [43], [44] (BO) and is expected for any quantum particle that moves across a periodic background potential in the presence of an uniform driving force (see Glück et al.", "[45] for an extensive review).", "Despite being theoretically well understood, the experimental observation of BOs remains an outstanding challenge in solid-state systems, where they are washed out by unavoidable effects of lattice disorder and inelastic scattering by phonons or other electrons [46].", "This happens because the period of a BO (inversely proportional to the applied field) is typically much larger that the electron scattering times, thus leading to a loss of phase-coherence before a single current oscillation can be finalized.", "Therefore, BOs were never detected in normal solid-state devices, whether mesoscopic or not; instead, they could be observed in synthetic semiconducting superlattices [47], [48], [49], [50], as well as in alternative platforms such as modulated photonic waveguides [51], [52], [53], [54], [55], [56], arrays of coupled acoustic cavities [57], [58], ultra-cold atoms in optical potentials [59], [60], [61], and even in superconducting q-bit arrays [62].", "Figure: Depiction of the 1D mesoscopic device used throughoutthis work.", "The red line represents the space profile of the appliedelectric potential, with ΔV\\Delta V being the bias voltage and tt(t l t_{l}) the hopping parameter inside the central sample (each ofthe leads).In this paper, we revisit the non-equilibrium current dynamics in a single-band mesoscopic partition-free setup consisting of a small tight-binding chain connected between two metallic leads and subject to a strong potential bias $\\Delta V$ (see scheme of Fig.", "REF ).", "This paradigmatic system has been previously studied by Popescu and Croy [63] who found out that raising the bias voltage ($\\Delta V$ ) beyond the leads' bandwidth $(B_{l})$ causes the non-equilibrium dynamics to change from the slow-starting onset of a constant Landauer current [32], [36] to a stable current BO confined within the central  sample [64].", "Two  different mechanisms are responsible for this change in  behavior.", "On  the  one hand, no steady current can flow through the entire system if $\\Delta V>B_{l}$ , as there are no open transmission channels that go from one lead to the other.", "On the other hand, an increase in the bias electric field also leads to a more localized motion of the electrons undergoing BOs within the sample (see, for example, Hartmann et al.", "[64]) which reduces their capacity to escape through the leads.", "In what follows, we present an alternative interpretation of the aforementioned findings which is based on the theory of localization in Wannier-Stark ladders [65], [45].", "By separating variations in the applied electric field from the presence of inter-lead transmission channels, we 
significantly expand the study of this setup demonstrating a previously unobserved regime of temporary Bloch oscillations (tBOs) inside the central sample.", "In this regime, the tBOs emerge as a long-lived transient state that dynamically relaxes to a transport steady-state after some characteristic time that is determined by the system's parameters.", "The remaining of this paper is structured as follows: In Sec.", ", we review theoretical elements of 1D Wannier-Stark ladders (and their connection to BOs) that are crucial for grasping the implications of our numerical results.", "In Sec.", "we outline the model Hamiltonian that served as the basis for our work, as well as the numerical procedure we used for simulating the quantum time-evolution.", "The  main numerical results showing the tBO regime are presented in Sec.", ", which also includes the re-interpretation of the emergent BOs found by Popescu and Croy [63].", "The tBOs' decay times are computed perturbatively in Sec.", "and compared to numerical measurements.", "Finally,  in Sec.", "we summarize our key findings and offer some potential future research." ], [ "Bloch Oscillations and the Wannier-Stark States", "An important theoretical cornerstone for our study is the solution to the problem of a free electron (of charge $-e$ ) hopping on an infinite tight-binding chain (of lattice parameter $a$ ) while it is subject to a scalar potential ramp of the form $V_{n}\\!=\\!aeEn$ .", "Such a system has been dubbed a Wannier-Stark ladder, and can be described by the following Hamiltonian: $\\!\\!\\mathcal {H}_{{\\scriptscriptstyle \\text{WS}}}\\!=\\!-t\\!\\!\\!\\sum _{n=-\\infty }^{+\\infty }\\!\\!\\!\\left(\\!\\left|n\\right\\rangle \\!\\left\\langle n\\!+\\!1\\right|\\!+\\!\\left|n\\!+\\!1\\right\\rangle \\!\\left\\langle n\\right|-\\!\\frac{aeE}{t}n\\left|n\\right\\rangle \\!\\left\\langle n\\right|\\!\\right),\\!\\!$ where $\\left|n\\right\\rangle $ are local Wannier orbitals, $t$ is the nearest-neighbor hopping, and $E$ is the electric field being imposed to the system.", "The energy levels of $\\mathcal {H}_{{\\scriptscriptstyle \\text{WS}}}$ are non-degenerate, discrete and equally space ($\\varepsilon _{m}\\!=\\!maeE$ for $m\\in \\mathbb {Z}$ ) and are associated to the Wannier-Stark states (WSS) [64], $\\left|\\Psi _{m}\\right\\rangle \\!=\\!\\sqrt{\\frac{a}{2\\pi }}\\sum _{k}\\exp \\left[-iamk\\!+\\!\\frac{2t}{iaeE}\\sin ka\\right]\\left|\\phi _{k}\\right\\rangle ,$ which have conveniently been expressed in a momentum space basis, $\\left|\\phi _{k}\\right\\rangle $ , with $-\\pi \\le ka<\\pi $ .", "With these exact eigenstate, one can write down the full time-evolution operator, $\\mathcal {U}_{\\tau }\\!", "& =\\!\\!\\int _{-\\frac{\\pi }{a}}^{\\frac{\\pi }{a}}\\!\\!\\!\\!\\!\\!dk\\exp \\!\\left[\\!\\frac{2t}{iaeE}\\!\\sin \\!\\left(\\!ka\\!-\\!\\frac{eaE\\tau }{\\hbar }\\!\\right)\\right]\\\\& \\qquad \\hfill \\qquad \\times \\exp \\left[-\\!\\frac{2t}{iaeE}\\!\\sin ka\\right]\\!\\left|\\phi _{k-eE\\tau /\\hbar }\\right\\rangle \\!\\left\\langle \\phi _{k}\\right|,\\nonumber $ where $\\tau $ is the time parameter, and which can now be used to determine the dynamics of any given initial state of this system.", "For example, if we start with a thermalized state of the system in the absence of the electric field (i.e., $E=0$ ), our initial state would be described by the reduced density matrix, $\\rho _{0}=\\!\\!\\!\\int _{-\\frac{\\pi }{a}}^{\\frac{\\pi }{a}}\\!\\!\\!\\!\\!dk\\,f\\left(\\varepsilon _{k}\\right)\\left|\\phi 
_{k}\\right\\rangle \\!\\left\\langle \\phi _{k}\\right|,$ where $f(\\varepsilon )$ is the appropriate Fermi-Dirac distribution function, and $\\varepsilon _{k}\\!=\\!-2t\\cos ka$ are the energy eigenvalues in the absence of the applied  field.", "In  such  case, the time-dependent expectation value of the electric current operator, $\\begin{aligned}\\mathcal {J}\\!", "& =\\frac{2eat}{\\hbar }\\int _{-\\frac{\\pi }{a}}^{\\frac{\\pi }{a}}\\!\\!\\!\\!\\!dk\\,\\sin ka\\left|\\phi _{k}\\right\\rangle \\!\\left\\langle \\phi _{k}\\right|\\end{aligned},$ can be shown to yield the following oscillatory result, $\\!\\!\\!\\begin{aligned}J(\\tau )\\!", "& =\\!\\text{Tr}\\left[\\rho _{o}\\mathcal {U}_{\\tau }\\mathcal {J}\\mathcal {U}_{\\tau }^{\\dagger }\\right]\\!=\\!-\\frac{ea}{\\hbar }\\sin \\!\\frac{eEa\\tau }{\\hbar }\\!\\!\\int _{-\\frac{\\pi }{a}}^{\\frac{\\pi }{a}}\\!\\!\\!\\!\\!\\!dk\\,f(\\varepsilon _{k})\\varepsilon _{k},\\end{aligned}\\!\\!$ Equation (REF ) clearly demonstrates that, in stark contrast to our (classically biased) intuition, the application of an uniform static electric field to a periodic tight-binding chain leads to an electric current that oscillates in time with the period, $T_{{\\scriptscriptstyle \\textrm {BO}}}\\!=\\!\\frac{2\\pi \\hbar }{aeE}.$ While the presence of current BOs is directly derived from this momentum-space formulation, a different perspective can be obtained by examining the Wannier-Stark states (WSS) in a real-space representation.", "From this point-of-view, the existence of BOs in a Wannier-Stark ladder can seen as a localization phenomenon induced by a strong electric field.", "To see this, we start by re-expressing the WSS of Eq.", "(REF ) in terms of local Wannier orbitals, thus yielding [66], [67] $\\left|\\Psi _{m}\\right\\rangle \\!=\\!\\!\\!\\sum _{n=-\\infty }^{\\infty }\\!\\!\\!J_{n-m}\\!\\left(\\frac{2t}{aeE}\\right)\\!\\left|n\\right\\rangle $ in terms of Bessel functions of the first-kind [68].", "Since any WSS can be obtained from $\\left|\\Psi _{0}\\right\\rangle $ by means of a lattice translation, one may focus solely on the $m\\!=\\!0$ case and recap that $\\Psi _{0}(n)=\\left\\langle n\\mid \\Psi _{0}\\right\\rangle $ , is (i) mainly localized within the interval $\\left|n\\right|\\!\\le \\!\\ell _{{\\scriptscriptstyle \\textrm {WS}}}\\!=\\!T_{{\\scriptscriptstyle \\textrm {BO}}}t\\,/\\,\\pi \\hbar $ , (ii) it has the alternating inversion symmetry $\\Psi _{0}(-n)\\!=\\!", "(-1)^{n}\\Psi _{0}(n)$ , and (iii) it decays outside this interval as, Figure: Wavefunction associated to a Wannier-Starkstate (centered in site x=0x\\!=\\!0) and its modulus squared, respectively.", "$\\left|\\Psi _{0}(n)\\right|\\propto \\exp \\left[-n\\ln \\!\\left(\\frac{aeE}{2t}\\right)\\right].$ The last equation allows for the definition of another characteristic length scale (different from $\\ell _{{\\scriptscriptstyle \\textrm {WS}}}$ ) associated to the decaying tails, $ $ $\\xi _{{\\scriptscriptstyle \\textrm {WS}}}\\!=\\!\\frac{1}{\\ln \\!\\left(\\frac{aeE}{2t}\\right)},$ which becomes much smaller than $\\ell _{{\\scriptscriptstyle \\textrm {WS}}}$ in the strong bias regime ($aE\\gg t$ ).", "To sum up, we have seen that an infinite Wannier-Stark ladder supports BOs, while featuring an unbounded, non-degenerate and uniform discrete energy spectrum, whose spacing is proportional to the applied electric field.", "In real-space, there is an exponentially localized wavefunction associated to each energy level, which is centered on a lattice site and 
only has significant amplitude in a scale of $2T_{{\\scriptscriptstyle \\textrm {BO}}}t\\,/\\,\\pi \\hbar $ around its center.", "For completeness, a WSS wavefunction is plotted in Fig.", "REF , where the state's localized character is made evident." ], [ "Mesoscopic Setup and Numerical Methods", "After detouring into the exact treatment of the infinite Wannier-Stark ladder, we now return to studying our basic one-dimensional mesoscopic device depicted in Fig.", "REF .", "This system can be generically described by the time-dependent Hamiltonian, $\\mathcal {H}(\\tau )\\!=\\!\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}(\\tau )+\\sum _{\\alpha =L,R}\\left[\\mathcal {H}_{{\\scriptscriptstyle \\alpha }}(\\tau )+\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}\\alpha }}(\\tau )\\right],$ where $\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}$ describes a non-disordered central sample, $\\mathcal {H}_{{\\scriptscriptstyle L(R)}}$ is the Hamiltonian for the left (right) lead, and $\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}L(R)}$ is the coupling term between the left (right) lead and the central sample From now on, these coupling terms will be included in the respective lead's Hamiltonian..", "Assuming that the central sample has $2L\\!+\\!1$ lattice sites and we are working in units of $\\hbar ,e,a,t\\!=\\!1$ , the different parts of the Hamiltonian can be written simply as follows: $\\!\\!\\!\\!\\begin{aligned}\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}(\\tau ) & \\!=\\!\\!\\sum _{n=-L}^{L}\\!\\!V_{n}^{\\tau }\\!\\left|n\\right\\rangle \\!\\left\\langle n\\right|-\\!\\!\\!\\!\\sum _{n=-L}^{L-1}\\!\\!\\!\\left(\\,\\left|n\\right\\rangle \\!\\left\\langle n\\!+\\!1\\right|\\!+\\!\\left|n\\!+\\!1\\right\\rangle \\!\\left\\langle n\\right|\\,\\right),\\\\\\mathcal {H}_{{\\scriptscriptstyle L}}(\\tau ) & \\!=\\!\\!\\!\\sum _{n=-\\infty }^{-L-1}\\!\\!\\!\\left(\\,V_{n}^{\\tau }\\!\\left|n\\right\\rangle \\!\\left\\langle n\\right|\\!+\\!t_{l}\\left|n\\right\\rangle \\!\\left\\langle n\\!+\\!1\\right|\\!+\\!t_{l}\\left|n\\!+\\!1\\right\\rangle \\!\\left\\langle n\\right|\\,\\right),\\\\\\mathcal {H}_{{\\scriptscriptstyle R}}(\\tau ) & \\!=\\!\\sum _{n=L}^{\\infty }\\left(\\,V_{n}^{\\tau }\\!\\left|n\\right\\rangle \\!\\left\\langle n\\right|\\!+\\!t_{l}\\left|n\\right\\rangle \\!\\left\\langle n\\!+\\!1\\right|\\!+\\!t_{l}\\left|n\\!+\\!1\\right\\rangle \\!\\left\\langle n\\right|\\,\\right),\\end{aligned}$ where $t_{l}$ is the hopping strength (in units of $t$ ) to/from any site in the leads.", "At this point, it is important to emphasize the fact that, unlike Popescu and Croy [63], Santos Pires et al.", "[36] and Pal et al.", "[32], here we will not necessarily take $t_{l}\\!=\\!1$ .", "Now that we have devised the Hamiltonian, we employ a partition-free approach [24] to evolve it in time.", "This means that the unbiased mesoscopic sample starts off connected to, and in thermal equilibrium with the leads.", "This means that $V_{n}^{\\tau =0}=0$ and also that the initial mixed state of the entire system is $\\rho _{0}=\\sum _{\\alpha }f\\left(\\varepsilon _{\\alpha }\\right)\\left|\\psi _{\\alpha }\\right\\rangle \\!\\left\\langle \\psi _{\\alpha }\\right|,$ where $\\varepsilon _{\\alpha }$ is the $\\alpha ^{\\text{th}}$ eigenvalue of the initial Hamiltonian, $\\mathcal {H}_{0}\\!=\\!\\mathcal {H}(\\tau \\!=\\!0)$ , corresponding to the eigenstate $\\left|\\psi _{\\alpha }\\right\\rangle $ .", "At $\\tau =0$ , the system potential profile shown in Fig.", "REF , i.e, 
$\\begin{aligned}V_{n}^{\\tau >0}\\!", "& ={\\left\\lbrace \\begin{array}{ll}\\frac{\\Delta V}{2}\\frac{n}{L}, & \\!\\!\\left|n\\right|\\,\\le \\,L,\\\\\\!-\\frac{\\Delta V}{2}, & n<\\!-L,\\\\\\:\\:\\,\\frac{\\Delta V}{2}, & n\\,>\\,\\,L,\\end{array}\\right.", "}\\end{aligned}$ with $E\\!=\\!\\Delta V/2L$ being the electric field applied to the central sample.", "Note that inside the central sample, the potential landscape is exactly a ramp, which allows one to use the limiting results of Sec.", "for the infinite Wannier-Stark ladder to explain some of our forthcoming results." ], [ "Method of Quantum Time-Evolution", "Our numerical study simulates the time-dependent charge current that traverses a bond in the system, once the bias potential [Eq.", "(REF )] has been turned on.", "The local current going from site $n\\!\\rightarrow \\!n\\!+\\!1$ is represented by the operator, $\\mathcal {J}_{{\\scriptscriptstyle n,n+1}}=-i\\left(\\left|n\\!+\\!1\\right\\rangle \\!\\left\\langle n\\right|\\!-\\!\\left|n\\right\\rangle \\!\\left\\langle n\\!+\\!1\\right|\\right),$ whose expectation value over time, starting from the partition-free equilibrium state of Eq.", "(REF ), is given as, $J_{{\\scriptscriptstyle n,n+1}}(\\tau )\\!", "& =\\!\\text{Tr}\\left[\\rho _{0}\\,e^{i\\mathcal {H}_{{\\scriptscriptstyle +}}\\tau }\\mathcal {J}_{{\\scriptscriptstyle n,n+1}}\\,e^{-i\\mathcal {H}_{{\\scriptscriptstyle +}}\\tau }\\right]\\\\& =\\!2\\,\\Im \\!\\left\\langle n\\right|\\!e^{i\\mathcal {H}_{{\\scriptscriptstyle +}}\\tau }\\,\\rho _{0}\\,e^{-i\\mathcal {H}_{{\\scriptscriptstyle +}}\\tau }\\left|n\\!+\\!1\\right\\rangle ,\\nonumber $ where $\\mathcal {H}_{{\\scriptscriptstyle +}}\\!\\equiv \\!\\mathcal {H}(\\tau \\!>\\!0)$ is the perturbed Hamiltonian.", "In Eq.", "(REF ), the essential elements are (i) the perturbed time-evolution operators and (ii) the initial reduced density matrix of the  system.", "Respectively,  these  are functions of the biased and unbiased Hamiltonians and, therefore, amenable to be represented as quickly converging Chebyshev series [70] in the following way: $\\!\\!\\!\\exp \\left[i\\mathcal {H}_{{\\scriptscriptstyle +}}\\tau \\right]\\!\\approx \\!\\!\\sum _{m=0}^{M_{\\tau }}\\!\\frac{2(-i)^{m}}{1+\\delta _{m,0}}J_{m}(\\Delta \\tau )\\,T_{m}\\!\\left(\\!\\frac{\\mathcal {H}_{{\\scriptscriptstyle +}}}{\\Delta }\\!\\right),$ $\\!\\!\\!\\rho _{0}=\\frac{1}{1\\!+\\!e^{\\beta \\left(\\mathcal {H}_{0}-\\mu \\right)}}\\!", "& \\approx \\!\\!\\sum _{m=0}^{M_{\\rho }}\\frac{2\\mu _{m}^{\\rho }}{1+\\delta _{m,0}}\\,T_{m}\\!\\left(\\!\\frac{\\mathcal {H}_{0}}{\\Delta }\\!\\right),$ where $\\beta $ is the inverse temperature of the system Note that $1/\\beta $ will be taken as smaller than the mean level spacing of the system, effectively reproducing a zero temperature limit., $\\mu $ is its chemical potential, $\\Delta $ is a positive energy scale that normalizes the Hamiltonian to place its spectrum within $[-1,1]$ , $J_{m}(x)$ is a Bessel function of the first kind, $T_{m}(x)$ is a Chebyshev polynomial of the first kind, and $M_{\\rho }$ /$M_{\\tau }$ indicate the truncation order of each expansion.", "We also point out that even though the expansion coefficients for the time-evolution operator can be determined analytically [72], the values of $\\mu _{m}^{\\rho }$ must be determined through numerical quadrature of the integral, $\\mu _{m}^{\\rho }=\\int _{-1}^{1}\\!\\!du\\frac{T_{m}(u)}{\\pi \\sqrt{1\\!-\\!u^{2}}\\left[1+e^{\\beta \\left(\\lambda u-\\mu \\right)}\\right]},$ which is 
not challenging.", "A crucial additional approximation must be made on top of expanding the operators in Eq.", "(REF ): truncate the leads to a finite size, thus turning the Hamiltonian into an $N\\!\\times \\!N$ finite-dimensional matrix.", "According to Santos Pires et al.", "[36], the main trade-off for truncating the leads is to introduce reflections at the open boundaries, which only have a long-term impact on the time-evolution.", "From this point forward, we will keep all reflections outside our window of time-evolution.", "Using the expansions of Eqs.", "(REF ) and (REF ), we can calculate $J_{{\\scriptscriptstyle n,n+1}}(\\tau )$ in all instants of a discrete mesh of $N_{\\tau }$ times — $\\lbrace 0,\\delta \\tau ,2\\delta \\tau ,\\cdots ,\\tau _{\\text{max}}\\rbrace $ — with a time step of $\\delta \\tau $ .", "For that, we begin with two initial vectors, $\\left|\\Psi _{0}^{n}\\right\\rangle \\!=\\!\\left|n\\right\\rangle $ and $\\left|\\Psi _{0}^{n+1}\\right\\rangle \\!=\\!\\left|n\\!+\\!1\\right\\rangle $ , and apply to them the following iterative routine: ${\\left\\lbrace \\begin{array}{ll}\\left|\\Psi _{\\tau }^{n}\\right\\rangle \\!\\rightarrow & \\!\\!\\!\\!\\!\\!\\left|\\Psi _{\\tau +\\delta \\tau }^{n}\\right\\rangle \\!\\twoheadrightarrow \\!\\left|\\chi _{\\tau +\\delta \\tau }^{n}\\right\\rangle \\\\\\left|\\Psi _{\\tau }^{n+1}\\right\\rangle \\, & \\quad \\!\\longrightarrow \\quad \\left|\\Psi _{\\tau +\\delta \\tau }^{n+1}\\right\\rangle \\end{array}\\right.", "}\\!\\!\\Rightarrow \\!\\!\\!\\begin{array}{c}\\\\\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!J_{{\\scriptscriptstyle n,n+1}}(\\tau \\!+\\!\\delta \\tau )\\\\\\:=2\\,\\Im \\left\\langle \\Psi _{\\tau +\\delta \\tau }^{n+1}\\mid \\Psi _{\\tau +\\delta \\tau }^{n}\\right\\rangle ,\\\\\\\\\\end{array}$ with the different arrows, $\\rightarrow $ and $\\twoheadrightarrow $ , representing the following linear transformations: $\\left|\\Psi _{\\tau }^{n}\\right\\rangle \\!", "& =\\!\\!\\sum _{m=0}^{M_{\\tau }}\\!\\frac{2(-i)^{m}}{1+\\delta _{m,0}}J_{m}(\\Delta \\,\\delta \\tau )T_{m}\\!\\left(\\!\\frac{\\mathcal {H}_{{\\scriptscriptstyle +}}}{\\Delta }\\!\\right)\\left|\\Psi _{\\tau -\\delta \\tau }^{n}\\right\\rangle \\\\\\left|\\,\\chi _{\\tau }^{n}\\right\\rangle \\!", "& =\\!\\!\\sum _{m=0}^{M_{\\rho }}\\frac{2\\mu _{m}^{\\rho }}{1+\\delta _{m,0}}T_{m}\\left(\\!\\frac{\\mathcal {H}_{0}}{\\Delta }\\!\\right)\\left|\\Psi _{\\tau }^{n}\\right\\rangle .$ Note that, given any operator $\\mathcal {M}$ , the vectors $T_{m}(\\mathcal {M})\\left|\\Psi _{\\tau }^{n}\\right\\rangle $ is efficiently calculated through the three-point recursive property of the Chebyshev polynomials, Figure: Cartoon depiction of the delocalizationcaused in the WSS of a mesoscopic sample, as it transition from aregime of BOs and a regime of steady-state Landauer transport.", "$\\!\\!\\!\\!T_{{\\scriptscriptstyle m+2}}(\\mathcal {M})\\left|\\Psi _{\\tau }^{n}\\right\\rangle \\!=\\!2\\,\\mathcal {M}\\,T_{{\\scriptscriptstyle m+1}}(\\mathcal {M})\\left|\\Psi _{\\tau }^{n}\\right\\rangle \\!-\\!T_{{\\scriptscriptstyle m}}(\\mathcal {M})\\left|\\Psi _{\\tau }^{n}\\right\\rangle ,\\!\\!$ starting with $T_{0}(\\mathcal {M}\\!", ")\\!\\left|\\Psi _{\\tau }^{n}\\right\\rangle \\!\\!=\\!\\!\\left|\\Psi _{\\tau }^{n}\\right\\rangle $  and $T_{1}(\\mathcal {M}\\!", ")\\!\\left|\\Psi _{\\tau }^{n}\\right\\rangle \\!\\!=\\!\\!\\!\\mathcal {M}\\!\\left|\\Psi _{\\tau }^{n}\\right\\rangle $ .", "Overall, this method has 
a computational complexity of $\\mathcal {O}\\!\\left(N_{\\tau }\\,N\\,M_{\\tau }\\,M_{\\rho }\\right)$ , for sparse Hamiltonians." ], [ "Bloch Oscillations Within a Mesoscopic Device", "Section  clearly demonstrated that the electric current in an infinite tight-binding chain has an oscillatory behavior driven by a static electric field.", "Figure: Plots of the current time-evolution measured inside (a) and outside (b) of a central sample with $L\\!=\\!128$ and compared to their Landauer values in dashed grey, respectively.", "(c) Quantum transmittance of the biased sample, as a function of the Wannier-Stark localization length divided by the size of the sample, for various sizes.", "The dashed grey lines depict the place where $\\ell _{{\\scriptscriptstyle \\textrm {WS}}}\\!=\\!\\frac{1}{2}$ and $\\ell _{{\\scriptscriptstyle \\textrm {WS}}}\\!=\\!1$ .", "In practice, we are interested in seeing how these BOs appear in a biased mesoscopic device, in which the electric field is applied to only a small part of it (the central sample).", "For this purpose, it turns out to be very useful to take the real-space viewpoint and use the properties of Wannier-Stark states we have recapped in Sec. .", "The reasoning is outlined in the scheme of Fig.", "REF .", "If the bias potential is very strong, the exact eigenstates within the central sample will be indistinguishable from the WSS of the infinite Wannier-Stark ladder and, surely, there will be BOs inside the central sample.", "As the bias gets decreased, this picture will remain largely unchanged deep within the sample but, at its borders, the WSS will appear deformed by hybridization with propagating states of the closest lead.", "Despite this, there will be no eigenstates bridging the two leads and, therefore, BOs will persist inside the sample albeit appearing deformed (clipped) in time.", "Finally, if the bias becomes too small, the central WSS will eventually couple both leads, bridging them together and inducing a transport steady-state current.", "Assuming that $t_{l}\\!=\\!1$ , this reasoning determines that BOs should be present within a mesoscopic sample if $\\ell _{{\\scriptscriptstyle \\textrm {WS}}}\\!<\\!L\\Leftrightarrow \\Delta V\\!>\\!4.$ Note that this real-space description allows the emergence (or disappearance) of BOs in mesoscopic devices, found by Popescu and Croy [63], to be interpreted as a localization (or delocalization) transition of Wannier-Stark states due to an applied electric field that gets stronger (or weaker).", "In order to support this interpretation, we present [Fig.", "REF a] some numerical simulation results that show the time-dependent current traversing a central bond in the sample, by employing the model and methods described in Sec. .", "As can be seen, the different curves, obtained for increasing values of $\\Delta V$ , display all the aforementioned stages of BOs.", "As a further demonstration, we also track down the time-dependent current outside of the central sample [Fig.", "REF b].", "In this case, BOs are never detected and instead, after an initial transient, the current stabilizes into a constant value, which is zero when BOs exist inside the central sample and finite otherwise.", "In the latter case, the current stabilizes at the non-linear Landauer current associated with the central sample, in full agreement with the conclusions of Santos Pires et al.
[36].", "Overall, in Fig.", "REF  c, this behavior is summarized by a plot of the quantum transmittance (obtained using the Kwant package [17]) for different central sample sizes and different values of $\\ell _{{\\scriptscriptstyle \\textrm {WS}}}$ ." ], [ "Temporary Bloch oscillations in the presence of wide leads", "By this point, it appears to be settled that a mesoscopic device either supports BOs or steady-state currents (Landauer transport) but never both at the same time.", "From the physical picture conveyed by Fig.", "REF , such seems inevitable in the regime of strong bias, for which the localized central WSS is an effective bound-state of the sample, fully uncoupled from the leads, and thereby unable to serve as a transmission channel.", "In actuality, this statement is only approximate because the central WSS always displays exponential tails extending bilaterally towards the leads.", "The fact that this does not affect the regime of clipped BOs, however, is nothing more than a result of the WSS hybridizing with lead states that are outside of their band of propagating states [as shown in Fig.", "REF a].", "Therefore, the resulting states are evanescent inside the lead and also unable to transport charge away from the central sample.", "It is this model-specific feature, arising only because $t_{l}\\!=\\!1$ , that explains why clipped BO do not eventually decay into a finite steady-state of charge transport  between  the  leads.", "In order to decouple the effect of localizing the central WSS from the closure of propagating channels in the leads, we now retrace the previous analysis by choosing $t_{l}\\!>\\!1.$ As illustrated in Fig.", "REF  b, such a simple alteration in the Hamiltonian completely bypasses this issue and allows for a small, but finite, inter-lead transmission to be mediated by the central WSS within the clipped BO regime.", "Besides creating a steady-state current that coexists with oscillatory currents inside the central sample, this altered model further supports  a  new  regime  of temporary Bloch oscillations (tBOs) that consists of in-sample BOs that decay exponentially in time.  In  Fig.", "REF , we demonstrate this behavior through representative simulation results obtained from the time-evolution of the altered system.", "Before moving into a more precise description, we stress that this tBO regime can be fully understood as a perturbative effect of the metallic leads to the central WSS of the strongly biased central sample.", "The extent to which the leads affect this central state can be gauged by the expectation value of the leads self-energy operator ($\\Sigma $) in the said state.", "In Sec.", ", we will use this idea to devise an analytical framework that, not only explains the decay of BOs over time, but also provides a way to calculate their characteristic decay time in the weak coupling regime.", "Figure: Scheme describing the coupling of the centralWannier-Stark state with the leads having (a) t l =1t_{l}\\!=\\!1and (b) t l >0t_{l}\\!>\\!0.", "In both cases, the central sampleis in the strong bias regime, i.e., ΔV>4\\Delta V\\!>\\!4." 
], [ "Theory of Decaying Bloch Oscillations in Mesoscopic Devices", "As we previously noted, the decay of BOs with time is inextricably linked to the fact that, if $t_{l}$ is allowed to exceed ${\\Delta V}{4}$ , the central sample has a finite transmission.", "Therefore, our theory starts off from a study of the central sample's quantum transmittance, $\\mathcal {T}_{\\varepsilon }$ , in the strong bias regime.", "As first shown by Caroli et al.", "[8], this quantity can be expressed using a Green's functions formalism, as follows: $\\mathcal {T}_{\\varepsilon }\\!=\\!\\text{Tr}\\left[\\mathbf {G}_{\\varepsilon }^{\\dagger }\\cdot \\Gamma _{\\varepsilon }^{\\text{R}}\\cdot \\mathbf {G}_{\\varepsilon }\\cdot \\Gamma _{\\varepsilon }^{\\text{L}}\\right],$ where $\\varepsilon $ is the energy (in units of $t$ ), $\\text{Tr}\\left[\\cdots \\right]$ is a trace over the central sample's Hilbert space, and $\\mathbf {G}_{\\varepsilon }$ is the Green's function of the central sample connected to the leads.", "The latter is defined as Figure: Showcase of how a mesoscopic device withstable BOs can transition to decaying BOs by increasing t l t_{l}.", "$\\mathbf {G}_{\\varepsilon }=\\left[\\varepsilon \\!-\\!\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}\\!-\\!\\Sigma _{\\varepsilon }^{\\text{R}}\\!-\\!\\Sigma _{\\varepsilon }^{\\text{L}}\\right]^{-1},$ in terms of the isolated central sample's Hamiltonian $\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}$ [defined in Eq.", "(REF ) for $\\tau \\!>\\!0$ ] and the self-energies introduced by the connected semi-infinite leads, $\\Sigma _{\\varepsilon }^{\\text{R/L}}$ .", "Note that, in Eq.", "(REF ), the functions $\\Gamma _{\\varepsilon }^{\\text{R/L}}\\!\\!=\\!\\!-2\\,\\Im \\,\\Sigma _{\\varepsilon }^{\\text{R/L}}$  are called level-width operators and roughly describe how the exact energy levels of the isolated sample are broadened as it is coupled to the metallic leads.", "Since the leads in this problem are simple semi-infinite tight-binding chains, one can analytically determine the induced self-energy, arriving at the following result [1], [36]: $\\!\\!\\Sigma _{\\varepsilon }^{\\text{L}} & \\!\\!=\\!\\!", "{\\left\\lbrace \\begin{array}{ll}\\frac{1}{4}\\left(2\\varepsilon \\!-\\!2i\\alpha _{{\\scriptscriptstyle \\text{L}}}^{\\varepsilon }\\!+\\!\\Delta V\\right)\\left|-L\\right\\rangle \\!\\left\\langle -L\\right| & \\!\\!\\!\\kappa _{{\\scriptscriptstyle \\text{L}}}\\!>\\!0\\\\\\frac{1}{4}\\left(2\\varepsilon \\!-\\!2i\\,\\text{sgn}\\left(\\varepsilon \\!+\\!\\Delta V\\right)\\beta _{{\\scriptscriptstyle \\text{L}}}^{\\varepsilon }\\!+\\!\\Delta V\\right)\\left|-L\\right\\rangle \\!\\left\\langle -L\\right| & \\!\\!\\!\\kappa _{{\\scriptscriptstyle \\text{L}}}\\!\\le \\!0\\end{array}\\right.", "}\\\\\\!\\!\\Sigma _{\\varepsilon }^{\\text{R}} & \\!\\!=\\!\\!", "{\\left\\lbrace \\begin{array}{ll}\\frac{1}{4}\\left(2\\varepsilon \\!-2i\\alpha _{{\\scriptscriptstyle \\text{R}}}^{\\varepsilon }\\!-\\!\\Delta V\\right)\\left|L\\right\\rangle \\!\\left\\langle L\\right| & \\;\\;\\;\\kappa _{{\\scriptscriptstyle \\text{R}}}\\!>\\!0\\\\\\frac{1}{4}\\left(2\\varepsilon \\!-\\!2i\\,\\text{sgn}\\left(\\varepsilon \\!-\\!\\Delta V\\right)\\beta _{{\\scriptscriptstyle \\text{R}}}^{\\varepsilon }\\!-\\!\\Delta V\\right)\\left|L\\right\\rangle \\!\\left\\langle L\\right| & \\;\\;\\;\\kappa _{{\\scriptscriptstyle \\text{R}}}\\!\\le \\!0\\end{array}\\right.", "},$ for the right and left leads, respectively.", "Note that, to write down Eqs.", "(REF ) and (), 
we have defined $\\kappa _{{\\scriptscriptstyle \\text{R/L}}}\\!=\\!", "(2\\varepsilon \\!\\pm \\!\\Delta V)/4t_{l}$ and also the auxiliary functions, $\\alpha _{{\\scriptscriptstyle \\text{R/L}}}^{\\varepsilon }\\!=\\!\\sqrt{4t_{l}^{2}\\!-\\!\\left(\\varepsilon \\!\\pm \\!", "{\\Delta V}{2}\\right)^{2}}$  and $\\beta _{{\\scriptscriptstyle \\text{R/L}}}^{\\varepsilon }\\!\\!=\\!\\!\\sqrt{\\left(\\varepsilon \\!\\pm \\!", "{\\Delta V}{2}\\right)^{2}\\!\\!-\\!4t_{l}^{2}}$ .", "By plugging the self-energy operators into Eq.", "(REF ), we can now easily express $\\mathbf {G}_{\\varepsilon }$ in terms of right and left eigenvectors Since the effective Hamiltonian, $\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}\\!+\\!\\Sigma _{\\varepsilon }^{\\text{R}}\\!+\\!\\Sigma _{\\varepsilon }^{\\text{L}}$ , is not hermitian, the right-eigenstates $\\text{$\\left|\\Phi _{n}^{R}\\right\\rangle $}$ do not necessarily form a complete orthogonal basis and, therefore, we must also consider eft-eigenstates $\\text{$\\left|\\Phi _{n}^{L}\\right\\rangle $}$ as its dual basis.", "as $\\mathbf {G}_{\\varepsilon }\\!=\\!\\sum _{n}\\frac{\\left|\\Phi _{n}^{R}\\right\\rangle \\!\\left\\langle \\Phi _{n}^{L}\\right|}{\\varepsilon -\\epsilon _{n}-i\\gamma _{n}},$ where the summation is over the entire Hilbert subspace of the central sample, $\\epsilon _{n}\\!=\\!\\Re \\left\\langle \\Phi _{n}^{L}\\right|\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}\\!+\\!\\Sigma _{\\varepsilon }^{\\text{R}}\\!+\\!\\Sigma _{\\varepsilon }^{\\text{L}}\\left|\\Phi _{n}^{R}\\right\\rangle $ , and $\\gamma _{n}\\!=\\!\\Im \\left\\langle \\Phi _{n}^{L}\\right|\\Sigma _{\\varepsilon }^{\\text{R}}\\!+\\!\\Sigma _{\\varepsilon }^{\\text{L}}\\left|\\Phi _{n}^{R}\\right\\rangle $ .", "Physically, one may interpret $\\epsilon _{n}$ as the energy levels of the connected sample (slightly shifted from the Wannier-Stark levels) and $\\gamma _{n}$ as the corresponding spectral broadenings.", "Obviously, this interpretation must be taken carefully as both $\\epsilon _{n}$ and $\\gamma _{n}$ may explicitly depend on the energy parameter.", "As for the sample's Green function, we may also use Eqs.", "(REF ) and () to write down the level-width operators as $\\Gamma _{\\varepsilon }^{{\\scriptscriptstyle \\text{L}}}\\!=\\!\\alpha _{{\\scriptscriptstyle \\text{L}}}^{\\varepsilon }\\!\\left|-L\\right\\rangle \\!\\left\\langle -L\\right|$ and $\\Gamma _{\\varepsilon }^{{\\scriptscriptstyle \\text{R}}}\\!=\\!\\alpha _{{\\scriptscriptstyle \\text{R}}}^{\\varepsilon }\\!\\left|L\\right\\rangle \\!\\left\\langle L\\right|$ and therefore express the entire quantum transmittance [Eq.", "(REF ) $\\rightarrow $ $\\mathcal {T}_{\\varepsilon }\\!=\\!\\alpha _{{\\scriptscriptstyle \\text{L}}}^{\\varepsilon }\\alpha _{{\\scriptscriptstyle \\text{R}}}^{\\varepsilon }\\left|\\left\\langle -L\\right|\\!\\mathbf {G}_{\\varepsilon }\\!\\left|L\\right\\rangle \\right|^{2}$ ] in terms of the eigenstates of the effective Hamiltonian, $\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}\\!+\\!\\Sigma _{\\varepsilon }^{\\text{R}}\\!+\\!\\Sigma _{\\varepsilon }^{\\text{L}}$ , i.e., $\\begin{aligned}\\mathcal {T}_{\\varepsilon }\\!", "& =\\!\\alpha _{{\\scriptscriptstyle \\text{L}}}^{\\varepsilon }\\alpha _{{\\scriptscriptstyle \\text{R}}}^{\\varepsilon }\\!\\sum _{n}\\frac{\\left\\langle -L\\!\\mid \\!\\Phi _{n}^{R}\\right\\rangle ^{\\!2}\\!\\left\\langle \\Phi _{n}^{L}\\!\\mid \\!L\\right\\rangle ^{\\!2}}{\\left(\\varepsilon \\!-\\!\\epsilon _{n}\\right)^{2}\\!+\\gamma 
_{n}^{2}}+\\\\& \\qquad +\\alpha _{{\\scriptscriptstyle \\text{L}}}^{\\varepsilon }\\alpha _{{\\scriptscriptstyle \\text{R}}}^{\\varepsilon }\\!\\sum _{m}\\sum _{n\\ne m}\\frac{\\left\\langle -L\\mid \\Phi _{n}^{R}\\right\\rangle \\!\\!\\left\\langle \\Phi _{n}^{L}\\mid L\\right\\rangle }{\\varepsilon -\\epsilon _{n}-i\\gamma _{n}}\\\\& \\qquad \\qquad \\qquad \\quad \\qquad \\times \\frac{\\left\\langle -L\\mid \\Phi _{m}^{R}\\right\\rangle \\!\\!\\left\\langle \\Phi _{m}^{L}\\mid L\\right\\rangle }{\\varepsilon -\\epsilon _{m}-i\\gamma _{m}}\\end{aligned}$ After arriving at this exact expression, it is time to make some simplifying assumptions that will allow us to describe our regime of interest: the central WSS of the sample is only weakly coupled to the leads.", "In spectral terms, this essentially means that the quantum transmittance function of the sample is made up of a set of sharp lorentzian-shaped peaks centered at around the exact Wannier-Stark energies.", "To describe this weak coupling limit, we must make two assumptions about the spectrum of the effective Hamiltonian:  (i)  the broadenings vary slowly across the spectrum around each peak, i.e., $\\gamma _{n}(\\varepsilon )\\!\\simeq \\!\\gamma _{n}(\\epsilon _{n})$ , and (ii)  these peaks are sufficiently far apart so that they do not overlap significantly, that is $\\epsilon _{n}\\!-\\!\\epsilon _{m}\\!\\ge \\!\\gamma _{n}$ for every $n\\!\\ne \\!m$ .", "These two conditions guarantee that, for any value of $\\varepsilon $ at least one of the factors in each term of the double summation is close to zero.", "Hence, the quantum transmittance is approximately given by a sum of lorentzian peaks (resonances), of the form Figure: Transmission coefficient curves as function of energyfor a sample of L=12L\\!=\\!12 (25 sites) and different lead hoppingsfor a) ΔV=6\\Delta V\\!=\\!6 and b) ΔV=5\\Delta V\\!=\\!5.", "The broadeningof the peaks with the decreasing of the potential difference is clear.The transmittance seems to vanish whenever the resonances become toonarrow to be seen.", "$\\mathcal {T}_{\\varepsilon }\\!\\approx \\!\\alpha _{{\\scriptscriptstyle \\text{L}}}^{\\varepsilon }\\alpha _{{\\scriptscriptstyle \\text{R}}}^{\\varepsilon }\\!\\sum _{n}\\frac{\\left\\langle -L\\!\\mid \\!\\Phi _{n}^{R}\\right\\rangle ^{\\!2}\\!\\left\\langle \\Phi _{n}^{L}\\!\\mid \\!L\\right\\rangle ^{\\!2}}{\\left(\\varepsilon \\!-\\!\\epsilon _{n}\\right)^{2}\\!+\\gamma _{n}^{2}}.$ Unsurprisingly, this approximate expression clearly identifies $\\gamma _{n}$ as the half-width at half maximum (HWHM) of the Lorentzian resonances that compose the sample's transmission spectrum.", "Additionally, it is also important to mention that the previous approximation does not imply that $t_{l}\\!\\gg \\!1$  (as for the well-known wide band limit) but rather that the major effect of the leads in the Wannier-Stark energy levels is to endow them with a finite lifetime.", "As we shall shortly see, the lifetimes of the WSSs placed around $\\varepsilon \\!=\\!0$ are directly related to the lifetime of the decaying BOs shown in Fig.", "REF , which is one of the main results of this paper.", "However, prior to analyzing the Wannier-Stark lifetimes in a greater detail, we complement our study of the quantum transmittance by presenting some numerical results obtained using the $\\texttt {Kwant}$ package [17] for strongly biased one-dimensional samples.", "These data are summarized in the plots of Fig.", "REF , where it is clear that (i)  the transmittance spectrum is 
a comb of sharp resonances provided $t_{l}$ is large enough for the propagation bands of both leads to overlap, and (ii)  an increase in $\\Delta V$ leads to a sharpening of the resonances that reflects a decrease in the size of the sample's WSS (i.e., a smaller $\\ell _{{\\scriptscriptstyle \\text{WS}}}\\!=\\!4L/\\Delta V$ ).", "Overall, the numerical results support the validity of the approximate expression in Eq.", "(REF ), provided the central sample lies in the strong bias regime ($\\ell _{{\\scriptscriptstyle \\text{WS}}}\\!\\ll \\!L$ )." ], [ "The Lifetime of a Wannier-Stark State in a Mesoscopic Device", "To determine the lifetime of a WSS inside a strongly biased sample, we are not required to know its full transmittance as a function of energy.", "In effect, that information is all contained in the Green's function of the connected sample [defined in Eq.", "(REF )] which, within the  aforementioned weak coupling limit, can be approximately written as, $\\mathbf {G}_{\\varepsilon }\\simeq \\sum _{n}\\frac{\\left\\langle \\Psi _{n}\\mid \\Phi _{n}^{R}\\right\\rangle \\!\\!\\left\\langle \\Phi _{n}^{L}\\mid \\Psi _{n}\\right\\rangle }{\\varepsilon -\\varepsilon _{n}-\\delta _{n}-i\\gamma _{n}}\\left|\\Psi _{n}\\right\\rangle \\!\\left\\langle \\Psi _{n}\\right|,$ where $\\left\\lbrace \\left|\\Psi _{n}\\right\\rangle \\right\\rbrace $  are the WSS defined in Eq.", "(REF ), $\\varepsilon _{n}\\!\\!=\\!\\!n\\Delta V/2L$ are the exact Wannier-Stark energies, and $\\delta _{n}$ is the small energy-shift caused by the leads in the $n^{\\text{th}}$ - level (a constant real self-energy).", "Note also that the diagonal form of Eq.", "(REF ) further assumes that there is a one-to-one correspondence between the WSS of the central sample and the eigenstates $\\left\\lbrace \\left|\\Phi _{n}^{R}\\right\\rangle \\right\\rbrace $ of the effective Hamiltonian, such that $\\left\\langle \\Psi _{m}\\mid \\Phi _{n}^{R}\\right\\rangle \\simeq C_{n}\\delta _{mn}$ for a complex number  $C_{n}$ .", "Our earlier calculations of the quantum transmittance attest for the accuracy of these assumptions.", "Since we are looking into lifetimes, it becomes useful to Fourier transform $\\mathbf {G}_{\\varepsilon }$ into the time-domain, which yields $\\smash{\\begin{aligned}\\mathbf {G}_{\\tau >0}\\!", "& \\simeq \\!\\sum _{n}\\mathcal {A}_{n}e^{-i\\left(\\epsilon _{n}\\!-\\!\\delta _{n}\\right)\\tau }e^{-\\gamma _{n}\\tau }\\left|\\Psi _{n}\\right\\rangle \\!\\!\\left\\langle \\Psi _{n}\\right|\\end{aligned}}$ with $\\mathcal {A}_{n}\\!\\!=\\!\\left\\langle \\Psi _{n}\\mid \\Phi _{n}^{R}\\right\\rangle \\!\\!\\left\\langle \\Phi _{n}^{L}\\mid \\Psi _{n}\\right\\rangle $ being a complex normalization factor.", "Equation (REF ) means that each $\\gamma _{n}$ is to be physically interpreted as a decaying rate of the corresponding Wannier-Stark state inside the sample, as it gets drained into the leads.", "As an example, we can take a particle to be in the central Wannier-Stark state ($\\left|\\Psi _{0}\\right\\rangle $ ) of the mesoscopic device [Fig.", "REF ] and time-evolve it by using the Chebyshev expansion described in Sec.", "REF .", "The probability for the particle to remain in this state, defined as $G_{00}(\\tau )\\!=\\!\\left\\langle \\Psi _{0}\\right|\\exp \\left[i\\mathcal {H}_{{\\scriptscriptstyle +}}\\tau \\right]\\left|\\Psi _{0}\\right\\rangle ,$ is shown over time as red plots in Fig.", "REF , where different strong values of $\\Delta V$ were used.", "As can be seen, in all three cases, this 
probability has an exponential decay for long times, which agrees with the supposition that these states have constant decay rates (i.e., $\\gamma _{n}$ ) associated with the coupling to the leads.", "Figure: a), b) and c) - $\\mathbf {G}_{00}(t)$ evolution with time for different $\\Delta V$ with a logarithmic scale in the y axis.", "The black dashed line is the exponential function $e^{-t/\\tau _{0}}$ with $\\tau _{0}$ being the characteristic decay time for the corresponding system.", "d) - Fourier transform of $\\mathbf {G}_{00}(t)$ for $\\Delta V\\!=\\!6$ , which originates a Lorentzian with width $1/\\tau _{0}$ .", "Having identified $\\tau _{n}\\!=\\!1/\\gamma _{n}$ as the lifetime of the corresponding WSS, we must now find a way to calculate them.", "By definition, we have $\\gamma _{n}\\!=\\!\\Im \\left\\langle \\Phi _{n}^{L}\\right|\\Sigma _{\\varepsilon _{n}}^{\\text{R}}\\!+\\!\\Sigma _{\\varepsilon _{n}}^{\\text{L}}\\left|\\Phi _{n}^{R}\\right\\rangle ,$ with the self-energy operators being defined in Eqs.", "(REF ) and ().", "Since we are working in the weak coupling regime, it is sensible to try computing this quantity using the following ansatz: $\\!\\left\\langle \\Phi _{n}^{L}\\right|\\Sigma _{\\varepsilon _{n}}^{\\text{R}}\\!\\!+\\!\\Sigma _{\\varepsilon _{n}}^{\\text{L}}\\left|\\Phi _{n}^{R}\\right\\rangle \\!\\simeq \\!h_{n}\\!\\left(t_{l},\\Delta V,L\\right)\\left\\langle \\Psi _{n}\\right|\\Sigma _{\\varepsilon _{n}}^{\\text{R}}\\!\\!+\\!\\Sigma _{\\varepsilon _{n}}^{\\text{L}}\\left|\\Psi _{n}\\right\\rangle \\!,$ where $h_{n}\\left(t_{l},\\Delta V,L\\right)$ are undetermined real-valued functions of $\\Delta V$ , $L$ , and $t_{l}$ .", "The advantage of this hypothesis is that $\\left\\langle \\Psi _{n}\\right|\\Sigma _{\\varepsilon _{n}}^{\\text{R}}\\!+\\!\\Sigma _{\\varepsilon _{n}}^{\\text{L}}\\left|\\Psi _{n}\\right\\rangle $ is nothing but the level-width function associated with the Wannier-Stark state of energy $\\varepsilon _{n}$ , which promptly gives $\\Im \\left\\langle \\Psi _{n}\\right|\\Sigma _{\\varepsilon _{n}}^{\\text{R}}\\!+\\!\\Sigma _{\\varepsilon _{n}}^{\\text{L}}\\left|\\Psi _{n}\\right\\rangle \\!", "& =\\left(\\alpha _{\\text{L}}^{\\varepsilon _{n}}\\!+\\!\\alpha _{\\text{R}}^{\\varepsilon _{n}}\\right)J_{L+n}^{2}\\left(\\!\\frac{4t\\left(L\\!+\\!1\\right)}{\\Delta V}\\!\\right)\\nonumber \\\\& +\\left(\\alpha _{\\text{L}}^{\\varepsilon _{n}}\\!+\\!\\alpha _{\\text{R}}^{\\varepsilon _{n}}\\right)J_{L-n}^{2}\\left(\\!\\frac{4t\\left(L\\!+\\!1\\right)}{\\Delta V}\\!\\right)$ upon employing Eq.", "(REF ).", "By this point, we can greatly simplify the result of Eq.", "(REF ) provided the system obeys the condition $L\\gg 4\\left(L+1\\right)/\\Delta V$ .", "In such a case, we could use the fact that $J_{n}(\\alpha n)\\underset{n\\rightarrow \\infty }{\\longrightarrow }\\frac{1}{\\sqrt{2\\pi n}}\\left(\\frac{e\\alpha }{2}\\right)^{n}$ , which is a known property of first-kind Bessel functions if $\\alpha \\!\\ll \\!1$ (see Abramowitz and Stegun [68]).", "We will now proceed to apply this asymptotic expression to the central-most WSSs $\\left|\\Psi _{0}\\right\\rangle $ , $\\left|\\Psi _{1}\\right\\rangle $ and $\\left|\\Psi _{-1}\\right\\rangle $ , as these are of particular importance for the characterization of the transient current regime.", "Firstly, for $\\left|\\Psi _{0}\\right\\rangle $ we should note that, if $L\\gtrsim 10$ , the condition for this asymptotic expression to be valid is exactly that the central sample is in 
the strong bias regime ($\\Delta V\\!\\gg \\!4$ ).", "Therefore, defining $\\alpha \\equiv \\alpha _{L}\\left(0\\right)=\\alpha _{R}\\left(0\\right)$ , we are entitled to write down $\\begin{aligned}\\Im \\left\\langle \\Psi _{0}\\right|\\Sigma _{0}^{\\text{R}}\\!+\\!\\Sigma _{0}^{\\text{L}}\\left|\\Psi _{0}\\right\\rangle & \\!\\approx \\!\\frac{2\\alpha }{\\pi L}\\left(\\frac{2te\\left(L+1\\right)}{\\Delta VL}\\right)^{2L}\\end{aligned},$ which displays a strong inverse power-law dependence on $\\Delta V$ .", "Now, using Eq.", "(REF ), one can determine the $h_{0}$ -function by using data obtained from numerically simulating the system's time-evolution, and obtain that $h_{0}\\left(\\alpha \\Delta V,\\Delta V,L\\right)=C\\left(\\alpha \\right)$ , meaning that $h_{0}\\left(t_{l}/\\Delta V\\right)$ will be solely a function of $t_{l}/\\Delta V$ .", "By plotting $h$ as a function of $t_{l}/\\Delta V$ for different values of $\\Delta V$ (Fig ), one can actually conclude that $h_{0}\\left(t_{l}/\\Delta V\\right)=A\\left(\\frac{t_{l}}{\\Delta V}\\right)^{-\\nu },$ where $\\nu \\simeq 2$ .", "A small dependence of the amplitude coefficient $A$ on $\\Delta V$ can be observed, but this will ultimately be negligible when compared to the power-law dependence of the other term obtained in Eq.", "(REF ), and thus the amplitude assumes the value of $A\\simeq 0.1514712$ .", "By combining Eqs.", "(REF ) and (REF ), one can then rebuild an expression for the decay time $\\tau _{0}$ as $\\tau _{0}\\simeq \\frac{2t_{l}^{2}}{A_{0}}\\frac{\\pi L\\left(\\frac{L}{2te\\left(L+1\\right)}\\right)^{2L}}{\\sqrt{\\left(\\frac{4t_{l}}{\\Delta V}\\right)^{2}-1}}\\left(\\Delta V\\right)^{2L-3},$ which, for high enough $\\Delta V$ , scales with the potential bias as a power law with exponent $2L-3$ .", "The validity of this analytical result is confirmed numerically by plotting the expression against the HWHM of the central Lorentzian of the transmission function for various potential biases and system sizes, showing a good correspondence (Fig.", "REF  b)).", "Figure: (a) $h\\left(\\alpha \\right)$ as a function of $t_{l}/\\Delta V$ with $t_{l}=\\alpha \\Delta V$ for different values of $\\Delta V$ , with the dashed lines being curve fits of the form given in Eq. 
().", "(b) Inverted central Lorentzian's HWHM coefficients from the transmission functions (points) against the inverse of the analytical expression obtained via perturbation theory (dashed lines) as a function of $\\Delta V$ for various system sizes, with $t_{l}\\!=\\!2\\Delta V$ .", "Furthermore, it is now seen in this figure that $\\tau _{0}$ varies over 6 orders of magnitude with $\\Delta V$ spanning from $11.8$ to 22, which completely overwhelms the dependence of the amplitude coefficient $A$ on the potential bias seen in Fig.", "REF  a).", "Finally, obtaining larger times is capped by machine precision (which is of the order of $10^{-16}$ ), making it impossible to probe higher potential biases.", "With a similar analysis, we may also obtain the decay time of the states $\\left|\\Psi _{-1}\\right\\rangle $ and $\\left|\\Psi _{1}\\right\\rangle $ for large $\\Delta V$ as $\\tau _{1}\\simeq \\frac{4t_{l}^{2}}{A_{1}}\\frac{\\pi \\left(L-1\\right)\\left(\\frac{L-1}{2te\\left(L+1\\right)}\\right)^{2L-2}}{\\sqrt{\\left(\\frac{4t_{l}}{\\Delta V}\\right)^{2}-1}}\\left(\\Delta V\\right)^{2L-6},$ where $A_{1}$ is the amplitude of the corresponding $h_{1}$ -function, which is also a function of $\\frac{t_{l}}{\\Delta V}$ and is written as $h_{1}\\left(t_{l}/\\Delta V\\right)=A_{1}\\left(\\frac{t_{l}}{\\Delta V}\\right)^{-2}.$ We see from Eq.", "(REF ) that $\\tau _{1}$ also scales as a power law with $\\Delta V$ ." ], [ "The Decay Times of Temporary Bloch Oscillations", "We now have everything we need to characterize the transient regime of the central average current evolution for $\\Delta V>4t$ and a proper choice of $t_{l}$ that mitigates the bound states inside the sample.", "We may use the sample's Green function $\\mathbf {G}_{\\tau }$ to obtain the contribution to the average current evolution through a specific bond from the states that are escaping the sample.", "It is worth noticing that this quantity cannot account for incoming states from the leads, and therefore will not describe the constant asymptotic Landauer prediction for the current $J\\left(\\tau \\rightarrow +\\infty \\right)$ .", "We will call this specific current the transient current, and define it as $J_{n,n+1}^{\\text{Trans}}\\left(\\tau \\right)\\equiv J_{n,n+1}\\left(\\tau \\right)-J\\left(\\tau \\rightarrow +\\infty \\right)$ , where $J_{n,n+1}\\left(\\tau \\right)$ is the time-dependent average current one would obtain if the incoming states of the leads were also accounted for.", "This current can then be expressed as $J_{n,n+1}^{\\text{Trans}}\\left(\\tau \\right)=\\text{Tr}\\left[f(\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}^{0})\\mathbf {G}_{\\tau }\\mathcal {J}_{{\\scriptscriptstyle n,n+1}}\\mathbf {G}_{\\tau }^{\\dagger }\\right],$ where $\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}^{0}$ is the unperturbed Hamiltonian and $f(\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}^{0})$ is the Fermi-Dirac distribution of the initially thermalized state $f(\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}^{0})=\\sum _{\\sigma }\\frac{1}{1\\!+\\!e^{-\\beta \\left(\\varepsilon _{\\sigma }-\\mu \\right)}}\\left|\\psi _{\\sigma }\\right\\rangle \\left\\langle \\psi _{\\sigma }\\right|,$ where $\\left|\\psi _{\\sigma }\\right\\rangle $ is the initial eigenbasis of $\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}^{0}$ .", "Using the Wannier-Stark basis to perform the trace, it can be seen that the Fermi-Dirac distribution will be responsible for the mixing of the different components of the two Green functions, 
coupling different exponentials with different decay times.", "Assuming that the Wannier-Stark energies $\\varepsilon _{n}$ are much larger than the energetic shift introduced by the leads $\\tilde{\\varepsilon }_{n}$ (which is a valid assumption for sufficiently large $\\Delta V$ ), the transient current can thus be written as $J_{n,n+1}^{Trans}\\left(\\tau \\right)\\sim \\sum _{nm}A_{nm}e^{-i\\left(\\varepsilon _{n}-\\varepsilon _{m}\\right)\\tau }e^{-\\left(1/\\tau _{n}+1/\\tau _{m}\\right)\\tau }.$ The dominating decay time scale should therefore be $\\tau _{0}/2$ , since the central Wannier-Stark state is the least coupled to the leads.", "However, one should remember that Wannier-Stark states are real-valued states, meaning that they themselves do not carry any current, i.e., $\\left\\langle \\Psi _{n}\\right|\\mathcal {J}_{{\\scriptscriptstyle n,n+1}}\\left|\\Psi _{n}\\right\\rangle =0$ .", "This leads to $A_{nn}=\\left\\langle \\Psi _{n}\\right|f\\left(\\mathcal {H}_{{\\scriptscriptstyle \\textrm {C}}}^{0}\\right)\\left|\\Psi _{n}\\right\\rangle \\!\\left\\langle \\Psi _{n}\\right|\\mathcal {J}_{{\\scriptscriptstyle n,n+1}}\\left|\\Psi _{n}\\right\\rangle =0.$ Therefore, we can never see a decay with time scale $\\tau _{0}/2$ in the current, and the largest non-zero decay time scale is given by $\\left(1/\\tau _{0}+1/\\tau _{1}\\right)^{-1}=\\tau _{0}\\tau _{1}/\\left(\\tau _{0}+\\tau _{1}\\right)$ .", "By plotting the average central current time evolution for different potential biases with an appropriate choice of the leads' hopping parameter (in this case, $t_{l}=\\left(1/4+1/16\\right)\\Delta V$ ), it is clear that Bloch Oscillations cease to be a permanent phenomenon and instead appear as a transient phase, decaying to the corresponding Landauer value [Fig.", "REF  a)].", "Figure: a) Average current evolution divided by the respective Landauer value for different potential biases (in units of $t$ ) for a sample of size $L=25a$ .", "b) Same current evolutions with time scaled by the inverse of HWHM coefficients and amplitudes adjusted.", "The dashed brown line denotes Current/Landauer = 1.", "Furthermore, by scaling the time by the respective $\\tau _{0}\\tau _{1}/\\left(\\tau _{0}+\\tau _{1}\\right)$ value, it is possible to obtain a similar decay behavior across the different curves (Fig.", "REF  b)).", "It is to be noted that the BOs should not be expected to collapse onto the same curve, because their frequency depends only on $\\Delta V$ while the decay time depends on both $\\Delta V$ and $t_{l}$ .", "We would further like to substantiate the claim that the observed oscillations are in fact Bloch Oscillations decaying exponentially and that $\\tau _{0}\\tau _{1}/\\left(\\tau _{0}+\\tau _{1}\\right)$ is indeed the dominating time scale of this decay.", "If this is true, then for large times the average current should be a sinusoidal function with the frequency of the BOs modulated by an exponential, i.e., $j_{n}\\left(\\tau \\gg 0\\right)\\simeq A\\cos \\left(\\Omega \\tau +\\phi \\right)e^{-\\tau /\\tau _{\\text{eff}}}+j\\left(\\tau \\!\\rightarrow \\!+\\infty \\right).$ We can then remove the Landauer value from the current and try to fit such a function to the current curves.", "Unfortunately, sinusoidal functions are known to be prone to many fitting errors because of their periodic nature.", "A more sophisticated analysis may then be put forward by Fourier transforming the signal.", "It can be shown from a simple calculation that the real and imaginary parts of the Fourier 
transform of the current are given by $\\text{Re}\\left[j\\left(\\omega \\right)\\right] & =A\\tau _{eff}\\frac{\\cos \\phi +\\tau _{eff}\\sin \\phi \\left(\\omega -\\Omega \\right)}{1+\\tau _{eff}^{2}\\left(\\omega -\\Omega \\right)^{2}}\\\\\\text{Im}\\left[j\\left(\\omega \\right)\\right] & =A\\tau _{eff}\\frac{\\sin \\phi -\\tau _{eff}\\cos \\phi \\left(\\omega -\\Omega \\right)}{1+\\tau _{eff}^{2}\\left(\\omega -\\Omega \\right)^{2}}.$ The result of fitting these expressions to the Fourier transform is shown in Fig.", "REF , where it is seen that the fits are quite good and improve for stronger potential biases.", "Figure: a) Real and b) imaginary part of the Fourier transform of the current for different potential biases (in units of $t$ ) for a system of size $L=25a$ .", "The dashed lines are the corresponding fits of the functions in Eqs.", "() and ().", "c) $\\Omega $ values obtained from the fit for different $\\Delta V$ (blue points) compared to the frequency of BOs for the corresponding $\\Delta V$ (dashed blue line).", "d) $\\tau _{\\text{eff}}$ values obtained from the fit for different $\\Delta V$ (blue points) compared to the corresponding values of $\\tau _{0}\\tau _{1}/\\left(\\tau _{0}+\\tau _{1}\\right)$ (dashed blue lines).", "From these fits, $\\Omega $ may be extracted and compared to the BO frequency $2\\pi /T_{BO}$ .", "As seen in Fig.", "REF  c), these frequencies are in full accordance with the frequency of usual BOs, allowing us to identify these oscillations as such.", "The $\\tau _{\\text{eff}}$ coefficient may also be extracted, and it corresponds to the respective values of $\\tau _{0}\\tau _{1}/\\left(\\tau _{0}+\\tau _{1}\\right)$ [Fig.", "REF d)]." ], [ "Conclusions and Outlook", "As a compact way to summarize the results, we have constructed a phase diagram depicting the different states of the transport phenomenon: “Stable” and “Unstable BOs” refer to Bloch Oscillations which persist indefinitely or occur as a transient phenomenon, whereas “Landauer” is a regime where no BOs are to be detected, i.e., all Wannier-Stark states are completely delocalized from the sample.", "In this phase diagram, four different phases are then identifiable.", "The dashed line between the “Unstable BOs” and the “Landauer” phases denotes a continuous cross-over rather than a phase transition.", "Throughout this work, we sought to connect the Landauer constant current value to the observed oscillations for high potential biases.", "This was successfully done when considering a system without bound states, i.e., where the leads' energy bands overlap, and we have concluded that the oscillations will eventually decay to the Landauer value.", "The dominating time scale of this decay was demonstrated to scale as a power law with the potential bias, and the corresponding exponent increases with the size of the sample.", "One does not need very long samples and strong biases to have a system where the decay time is longer than the coherence time, rendering the Landauer current useless even in the mesoscopic regime.", "Therefore, because of the strong localization of the Wannier-Stark states, not only is this Landauer value small but the time taken to reach the steady state may be virtually infinite, making it unfeasible to detect in an experiment.", "Landauer steady state transport and Bloch oscillations are two contrasting quantum transport regimes, which are typically seen as mutually exclusive.", "In a partition-free setup where the leads are identical to the sample, a small 
potential drop across the leads leads to steady state transport without Bloch oscillations.", "The transient quickly dies off without oscillating and a steady state current develops.", "As the potential is increased beyond the bandwidth of the system, the transient current suddenly starts oscillating, developing into full Bloch oscillations, with zero net current being transmitted across the device.", "This is the regime of “Stable Bloch oscillations”.", "This abrupt change in behaviour is due to a coincidence: precisely when the potential drop is equal to the bandwidth, the Wannier-Stark states have a localization length equal to the size of the sample, thus preventing any of them from connecting propagating states from one lead to the other.", "In this work, we developed a way to avoid this coincidence of scales by changing the hopping of the leads relative to the sample, and found a new transient regime dubbed “Unstable Bloch Oscillations”.", "In this new regime, the current is allowed to oscillate before reaching a non-zero steady state, in contrast to the usual picture of transition between Bloch Oscillations and Landauer transport.", "The ratio $t_{l}/t$ appears naturally as the order parameter mediating this transition, and the exponential tails of the localized WSS are the key to understanding this new regime.", "The diagram of Fig.", "REF summarizes the main findings.", "It is drawn in terms of $t_{l}/t$ and $\\delta B/t$ , where $\\delta B$ is the total overlap between the leads' bands, given by $\\delta B=-\\Delta V+4t_{l}$ .", "When $\\Delta V$ is sufficiently large, there exist WSS entirely localized within the sample ($\\Delta V>4t$ ), but with exponential tails which are still able to reach the leads.", "If $t_{l}=t$ , then there are no states inside either lead with which the WSS can hybridize, so the tails remain exponentially localized and only BOs inside the sample are visible.", "This is the “Stable BO” regime.", "As $t_{l}$ is increased, the leads’ bands get broader and when $4t_{l}>\\Delta V>4t$ , there will be states available in both leads that can simultaneously hybridize with some of the WSS.", "If the hybridization to both leads is sufficiently small (but nonzero), these hybridized WSS will resemble normal WSS and the initial transient current will be oscillatory.", "However, because the charge in these states is now allowed to escape into both leads, the BO will eventually disappear and steady state transport will be reached across the device.", "This is the “Unstable BO” regime, and the decay time is thus inversely proportional to the amount of hybridization and controllable via tuning $\\Delta V$ .", "In the opposite limit $4t>\\Delta V>4t_{l}$ , there are no open transport channels between the leads because $\\Delta V>4t_{l}$ , but since $\\Delta V<4t$ , no WSS will be entirely localized within the sample, so no BO will exist either.", "As $\\Delta V$ is decreased below $4t_{l}$ , the bands of the two leads overlap and so Landauer transport is possible after a brief non-oscillatory transient.", "While temporary Bloch oscillations (tBO) were only studied here in the simple 1D TB chain, this new regime brings about broader implications.", "The necessity of $t\\ne t_{l}$ suggests that if contact effects are ignored, tBO could in principle be induced by connecting a sample to leads of a different material from the sample and adjusting $\\Delta V$ accordingly.", "The decay time would provide a measurement of the hybridization between the WSS and the leads.", "In practice, however, this would prove more 
difficult to achieve experimentally than regular Bloch oscillations and this decay time scale would compete with other decay channels such as phonon scattering.", "While the decay time scale could be parametrically changed via $\\Delta V$ and contrasted against the competing time scales, the construction of such a device would pose a much more challenging endeavour.", "Figure: Phase diagram in terms of the leads' hopping and lead overlap.", "Work supported by the Portuguese Foundation for Science and Technology (FCT) within the Strategic Funding UIDB/04650/2020 and through projects No.", "POCI-01-0145-FEDER-028887 (J.P.S.P., S.M.J and J.M.V.P.L.)", "and No.", "CEECIND/02936/2017 (B.A.).", "J.P.S.P.", "and S.M.J are funded by FCT grants No.", "PD/BD/142774/2018 and PD/BD/142798/2018, respectively." ] ]
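The transmission calculation outlined in this section is straightforward to reproduce numerically. The sketch below is not the authors' code and was not used for any of the figures; the parameter values, the hopping sign convention, and the choice of the decaying branch for the out-of-band self-energy are illustrative assumptions, but the structure follows the Caroli formula and the analytic lead self-energies quoted above (a biased $2L+1$-site chain with a Wannier-Stark ramp, coupled to two semi-infinite leads of hopping $t_{l}$ held at $\\mp \\Delta V/2$).

```python
# Minimal sketch: Caroli transmission of a strongly biased tight-binding chain
# coupled to two semi-infinite leads (illustrative parameters, not the paper's data).
import numpy as np

t = 1.0      # intra-sample hopping (energy unit)
t_l = 2.0    # lead hopping (assumed value)
L = 12       # sample has 2L + 1 = 25 sites
dV = 6.0     # potential difference across the sample

def lead_self_energy(eps, v_lead, t_lead):
    """Retarded surface self-energy of a semi-infinite chain held at potential v_lead."""
    x = eps - v_lead
    if abs(x) <= 2.0 * t_lead:  # propagating: inside the lead band
        return 0.5 * (x - 1j * np.sqrt(4.0 * t_lead**2 - x**2))
    # evanescent: outside the band, pick the decaying (retarded) real branch
    return 0.5 * (x - np.sign(x) * np.sqrt(x**2 - 4.0 * t_lead**2))

def transmission(eps):
    n = 2 * L + 1
    sites = np.arange(-L, L + 1)
    h = np.diag(sites * dV / (2.0 * L)).astype(complex)      # Wannier-Stark ramp
    h += np.diag(-t * np.ones(n - 1), 1) + np.diag(-t * np.ones(n - 1), -1)
    sig_l = lead_self_energy(eps, -dV / 2.0, t_l)             # left lead at -dV/2
    sig_r = lead_self_energy(eps, +dV / 2.0, t_l)             # right lead at +dV/2
    h[0, 0] += sig_l                                          # site -L touches the left lead
    h[-1, -1] += sig_r                                        # site +L touches the right lead
    g = np.linalg.inv(eps * np.eye(n) - h)                    # retarded Green's function
    gamma_l, gamma_r = -2.0 * sig_l.imag, -2.0 * sig_r.imag   # level-width functions
    return gamma_l * gamma_r * abs(g[0, -1])**2               # Caroli formula

energies = np.linspace(-1.0, 1.0, 4001)   # window where both lead bands overlap for these parameters
trans = np.array([transmission(e) for e in energies])
print("maximum transmission in the overlap window:", trans.max())
```

Scanning $\\Delta V$ and $t_{l}$ in this sketch should reproduce the qualitative behaviour discussed above: a comb of narrow Lorentzian resonances whose widths shrink rapidly as $\\Delta V$ grows, provided the lead bands overlap ($4t_{l}>\\Delta V$).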
2212.05574
[ [ "Nonequilibrium polariton condensation in biannular optically induced\n traps" ], [ "Abstract We report the mean field model of nonequilibrium polariton condensation in annular effective non-Hermitian potential traps, stemming from incoherent optically induced excitonic reservoirs of annular shape.", "We solve the linearized extended Gross-Pitaevskii equation in the approximation of two delta-function effective shell potentials for complex spectra of trapped polariton modes and calculate corresponding condensation threshold optical pumping powers.", "The exhaustive map of condensate quantum number transitions in the multi-dimensional space of trap parameters, including a cascade of topological charge increments, is drastically different from the single annular trap case in topology and the range of accessible condensate states." ], [ "Introduction", "Exciton-polaritons as mixed light-matter quasi-particles, formed in optical microcavities in the regime of strong coupling of cavity photon and exciton modes, present a peculiar platform supporting interacting bosonic condensates and low-threshold bosonic lasing [1].", "In particular, nonequilibrium bosonic polariton condensates in effective potential traps, generated with profiled nonresonant optical pumping, can be simultaneously confined and populated via stimulated scattering by incoherent excitonic reservoirs [2].", "This system supports surprisingly rich physical phenomenology, including Berry phase stemming from exceptional [3] and diabolical points [4], formation of vortex lattices [5], spin bifurcations [6], [7], spin ordering [8], [9] and topological transitions [10] in trap lattices.", "Unlike their equilibrium counterparts, such polariton condensates are not necessarily formed in the ground state of a potential trap if one of its excited states is strongly populated by the reservoir due to a better overlap with the latter [11], [12].", "In annular shaped traps this leads to formation of persistent counter-rotating polariton currents due to condensation in high angular momentum states [13], [14].", "In the nonlinear regime, where polariton interactions play a significant role, such condensates can evolve into space-time ordered phases [15] and persistent vortex states [16], whose topological charge can be controlled optically [17], [18] or, as in the case of spinor rotating condensates [19], with magnetic field [20].", "Overall, nonequilibrium optically trapped polariton condensates offer a versatile platform for generation of coherent light with optically controllable spatiotemporal parameters, including angular momentum and topology.", "The model suitable for description of polariton condensation in annular traps depends on their size, shape, and particular applications.", "Elliptic traps of small radii, where condensation occurs at the ground state or at low excited states, are well described with the parabolic effective potential model [21].", "In the case of larger traps the step-like effective potential model is reproduces the condensate angular momentum increasing with the trap size via a cascade of transitions from confined modes to continuum and to reproduce the superlinear dependence of the condensation threshold on the trap radius [15].", "Delta function shell potential model also results in a ladder of angular momentum transitions in trap size and, in addition, allows assessing dissipative coupling of adjacent traps via overlapping evanescent condensate wavefunction tails [22].", "In both step-like and delta function 
potential models the azimuthal periodicity of condensate density is explained in terms of counter-rotating vortex degeneracy lifted by imperfections of the cavity itself or the pumping profile.", "In this work we address polariton condensation in biannular traps, formed by two concentric narrow shell potentials.", "Such optically induced traps demonstrated possibility of condensation in the first excited radial quantum number state [13].", "We apply the non-Hermitian mean field model based on the extended Gross-Pitaevskii equation to compute complex energy spectra of the system.", "Investigating the spectrum dependence on the density of the excitonic reservoir forming the trap, we then calculate the lasing threshold and identify the condensate parameters at this threshold.", "The paper is organized as follows.", "The theoretical model based on matching the solutions of extended Gross-Pitaevskii equation is described in Section .", "Section describes numerical solutions of the model and presents phase diagrams of condensate parameters.", "Finally, the results and their implications are discussed in Section ." ], [ "Theoretical model", "The nonequilibrium polariton condensate is described with the two-dimensional Gross-Pitaevskii equation (see, for example, [23]) $ \\textbf {i}\\hbar \\displaystyle {\\frac{\\partial \\Psi (t,\\mathbf {r}) }{\\partial t }} = \\left[-\\frac{\\hbar ^2}{2m_p}\\left( {\\partial ^2 \\over \\partial x^2} + {\\partial ^2 \\over \\partial y^2}\\right)+\\frac{\\alpha +\\textbf {i}\\beta }{2}N(t,\\mathbf {r})-\\textbf {i}\\frac{\\hbar \\Gamma }{2}\\right]\\Psi (t,\\mathbf {r}).$ Here $\\Psi (t,\\mathbf {r})$ and $N(\\mathbf {r})$ are the condensate wave function and the reservoir density respectively, $m_p$ is the effective polariton mass, $\\alpha $ and $\\beta $ are the condensate-reservoir interaction parameters describing repulsion and stimulated scattering from the reservoir into the condensate respectively, and $\\Gamma $ is the polariton decay rate.", "Eq.", "(REF ) is coupled to the semiclassical rate equation on the reservoir density $ \\displaystyle {\\frac{d N(t,\\mathbf {r}) }{d t }}=P(\\mathbf {r})-(\\beta |\\Psi |^2+\\gamma )N(t,\\mathbf {r}).$ Here $\\gamma $ is the excitonic reservoir decay rate and $P(\\mathbf {r})$ is the spatially non-uniform pumping power.", "In the vicinity of the condensation threshold pumping the condensate density may be neglected in Eq.", "(REF ) on the reservoir density.", "In the adiabatic approximation the reservoir density relaxation time is assumed to be short compared to the characteristic timescales of the condensate dynamics.", "The density of the reservoir thus instantly adjusts to the slowly evolving condensate, which allows expressing the quasi-stationary reservoir density as $N(t,\\mathbf {r})=P(\\mathbf {r})/(\\beta |\\Psi |^2+\\gamma )$ .", "Substituting this expression, linearized in weakly populated condensate density $|\\Psi |^2\\ll \\gamma /\\beta $ , in Eq.", "(REF ) results in local anti-Hermitian dissipative nonlinear terms in the effective Hamiltonian.", "However, we are primarily interested in stationary states, where both condensed and reservoir parts of the polariton system are stationary.", "Reservoir density in this case inherits the spatial profile of the pump and may be approximated by a superposition of two concentric delta function rings: $ N(r)=c[\\delta (r-r_{1})+\\delta (r-r_{2})],$ Given the circular symmetry of the system the variables are separated in Eq.", "(REF ) with substitution $\\Psi 
(t,r,\\varphi )=e^{-\\textbf {i}E t/\\hbar + \\textbf {i}m\\varphi }R(r)$ , where $\\varphi $ and $r$ are polar coordinates, $m$ is the angular momentum, $E$ is the complex energy, and the radial wavefunction $R$ obeys the stationary dimensionless equation $ \\rho ^2\\displaystyle {\\frac{d ^2R }{d \\rho ^2 }}+\\rho \\displaystyle {\\frac{d R }{d \\rho }}-\\left\\lbrace \\rho ^2\\cdot Z(\\rho )+m^2\\right\\rbrace R=0.$ Here $\\rho =r\\sqrt{2\\Gamma m_p/\\hbar }$ is the normalized radius and $Z(\\rho )$ , in the case of a biannular trap with the radial profile formed by two delta functions, is given by $ Z(\\rho )= \\frac{(\\varkappa +\\textbf {i})}{2}\\eta \\left[\\delta (\\rho -\\rho _1)+\\delta (\\rho -\\rho _2)\\right]-\\left(\\frac{\\textbf {i}}{2}+\\epsilon \\right),$ where $\\varkappa ={\\alpha }/{\\beta }$ , $\\epsilon ={E}/{(\\hbar \\Gamma )}$ , and $\\eta =c\\beta /\\Gamma $ is the normalized pumping power.", "The radial wavefunction converging at $\\rho =0$ and $\\rho \\rightarrow \\infty $ may be piecewise defined in three regions: $ R(\\rho )=\\left\\lbrace \\begin{array}{c}R_I(\\rho ) = A J_{m}\\left(\\rho \\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right), \\quad \\rho \\le \\rho _{1}\\\\R_{II}(\\rho ) = C J_{m}\\left(\\rho \\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)+D Y_{m}\\left(\\rho \\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right),\\quad \\rho _{1} \\le \\rho \\le \\rho _{2}\\\\R_{III}(\\rho ) = F H_{m}\\left(\\rho \\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right), \\quad \\rho _{2} \\le \\rho \\end{array}\\right.$ The wavefunction continuity requires that $R_{\\text{I}}(\\rho _{1})=R_{II}(\\rho _{1})$ and $R_{II}(\\rho _{2})=R_{III}(\\rho _{2})$ .", "The other two conditions on the coefficients may be obtained by integrating Eq.", "(REF ) in infinitesimal vicinities of $\\rho _1$ and $\\rho _2$ : $ \\rho _1^2\\left[\\frac{d R_{II}(\\rho _1)}{d \\rho }-\\frac{d R_{I}(\\rho _1)}{d \\rho }\\right]-\\frac{\\varkappa +\\textbf {i}}{2}\\,\\eta \\,\\rho _1^2\\,R_I(\\rho _1)=0, \\qquad \\rho _2^2\\left[\\frac{d R_{III}(\\rho _2)}{d \\rho }-\\frac{d R_{II}(\\rho _2)}{d \\rho }\\right]-\\frac{\\varkappa +\\textbf {i}}{2}\\,\\eta \\,\\rho _2^2\\,R_{II}(\\rho _2)=0.$ The above conditions form a linear system of equations on the coefficients of the wavefunction.", "This results in a relation between the complex energy $\\epsilon $ and the pumping power $\\eta $ : $ \\det (M(\\epsilon ,\\eta ))=0,$ where the matrix $M$ is explicitly given in Appendix A." ], [ "Numerical approach", "The resonance condition Eq.", "(REF ) was numerically solved using two different approaches.", "In the first approach, complex discrete spectra of energies $\\epsilon $ were computed for varying values of pumping power $\\eta $ .", "This allowed qualitatively illustrating the spectrum evolution with increasing pumping power and specifying the particular state that first crosses the condensation threshold $\\mathrm {Im}\\lbrace \\epsilon \\rbrace =0$ .", "Alternatively, mode-specific critical pumping powers were computed for a wide range of quantum numbers by solving (REF ) for real-valued $\\epsilon $ and $\\eta $ .", "The condensation threshold and the condensate parameters were studied in depth in this second approach by identifying the minimal critical pumping among all modes.", "The results of both approaches are presented separately in the following subsections."
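As a concrete illustration of the matching procedure (a sketch, not the authors' implementation), the code below assembles the $4\\times 4$ matrix $M(\\epsilon ,\\eta )$ acting on the coefficients $(A, C, D, F)$ from the continuity and derivative-jump conditions above, and then searches for a mode-specific critical pumping by solving $\\det M=0$ for real $\\epsilon $ and $\\eta $ (the second of the two approaches described in this section). The jump rows are written here with the region-I and region-II solutions; by continuity this is equivalent to the form listed in Appendix A. The parameter values and the initial guess are assumptions for illustration only, and the $(n,m)$ branch reached by the root finder has to be identified afterwards from the reconstructed radial wavefunction.

```python
# Minimal sketch: matching matrix for the biannular delta-shell trap and a
# threshold search det M(eps, eta) = 0 with real eps, eta (illustrative values).
import numpy as np
from scipy.special import jv, yv, hankel1, jvp, yvp, h1vp
from scipy.optimize import root

m = 0                 # angular quantum number
rho1, k = 1.7, 1.1    # inner radius and radii ratio (assumed values)
rho2 = k * rho1
kappa = 1.0           # varkappa = alpha / beta

def matching_matrix(eps, eta):
    """Rows: continuity at rho1 and rho2, derivative jumps at rho1 and rho2.
    Columns: coefficients (A, C, D, F) of the piecewise radial solution."""
    s = np.sqrt(eps + 0.5j)           # complex radial wave number, principal branch
    g = 0.5 * (kappa + 1j) * eta      # strength of each delta shell
    x1, x2 = s * rho1, s * rho2
    M = np.zeros((4, 4), dtype=complex)
    M[0] = [jv(m, x1), -jv(m, x1), -yv(m, x1), 0.0]
    M[1] = [0.0, jv(m, x2), yv(m, x2), -hankel1(m, x2)]
    M[2] = [-s * jvp(m, x1) - g * jv(m, x1), s * jvp(m, x1), s * yvp(m, x1), 0.0]
    M[3] = [0.0, -s * jvp(m, x2) - g * jv(m, x2),
            -s * yvp(m, x2) - g * yv(m, x2), s * h1vp(m, x2)]
    return M

def det_re_im(z):
    eps, eta = z
    d = np.linalg.det(matching_matrix(eps, eta))
    return [d.real, d.imag]   # two real equations for the two real unknowns

# Critical pumping of one mode: Re and Im of det M must vanish simultaneously.
sol = root(det_re_im, x0=[1.0, 3.0])   # initial guess selects the branch (assumption)
print("converged:", sol.success, "  (eps, eta_crit) =", sol.x)
```

Repeating this search over a grid of $\\rho _1$ and $\\varkappa $ for several $m$ (and for initial guesses selecting different radial branches), and keeping the minimal $\\eta $ at each point, yields the condensation threshold and the quantum number maps discussed in the following subsections.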
], [ "Complex spectrum computation", "Here we illustrate the behavior of polariton complex spectra in biannular optically induced traps.", "Polariton complex spectrum in the presence of varying complex potential is discrete and may be labelled with integer non-negative angular number $m$ and radial number $n$ .", "Graphically represented evolution of the spectrum with increasing pumping power illustrates the competition of modes and the mechanism of polariton nonequilibrium condensation.", "Figure (REF ) shows the evolution of the complex spectrum for $m=0$ , $\\rho _1 = 1.7$ , $k =\\rho _2/\\rho _1= 1.1$ , and $\\varkappa = 1$ with increasing pumping power.", "Note that each value of $n$ corresponds to a series of spectral points on the complex plane.", "While the real part of the complex energy is monotonously increasing with pumping power, the imaginary part, corresponding to the growth rate ($\\mathrm {Im}\\lbrace \\epsilon \\rbrace >0$ ) or decay rate ($\\mathrm {Im}\\lbrace \\epsilon \\rbrace <0$ ), reaches its maximal value at a certain pumping power and decreases at further growing power.", "If this maximal value is positive, the mode is characterized with a critical pumping power, where $\\mathrm {Im}\\lbrace \\epsilon \\rbrace =0$ and may form the condensate if this critical power is minimal among all modes governed by the quantum numbers $n$ and $m$ .", "These critical powers were numerically computed and are discussed in the following subsection.", "For pumping powers above the condensation threshold multiple modes can simultaneously have positive imaginary parts of complex energies, which results in mode competition among such modes.", "Although this regime is outside of the linearized model validity range, one may still expect the mode with the slowest effective decay rate (corrected for stimulated pumping by the reservoir) to form a stable condensate state if interactions weakly affect mode stability.", "If, however, interactions play a significant role, multiple competing modes can dynamically mix, resulting in complex nonlinear behaviour.", "Figure: Numerically computed complex spectrum evolution with increasing pumping power indicated with colour.The three visible series correspond to radial modes n=0,1,2n=0,1,2.Intersection of each line with the real axis ϵ 𝑖𝑚 \\epsilon _\\textit {im} corresponds to a mode-specific critical pumping power and signifies possibility of polariton condensation at the mode.Parameters: m=0m=0, ρ 1 =1.7\\rho _{1}=1.7, ϰ=1.1\\varkappa =1.1, k=1.0k=1.0." 
], [ "Critical pumping powers and quantum number maps", "In this part, we present the numerical results obtained by solving Eq. (REF ) for real-valued $\\epsilon $ and $\\eta $ using the Levenberg–Marquardt algorithm.", "We numerically computed the critical pumping values corresponding to various angular numbers $m$ in a range of trap radius values $\\rho _1$ for fixed values of the radii ratio $k$ and $\\varkappa = \\alpha /\\beta $ .", "Recovering the radial wavefunction $R(\\rho )$ with the extracted coefficients of the piecewise definition (REF ), we then assigned these states radial quantum numbers $n=0,1,2,...$ .", "Finally, we computed the condensation threshold pumping power and the corresponding condensate quantum numbers $n$ and $m$ by minimizing critical pumping power values over all states in a region of the parameter space set by $\\rho _1$ and $\\varkappa $ for fixed values of $k$ .", "Figure: Polariton condensation critical pumping power $\\eta $ dependence on the trap inner radius $\\rho _{1}$ for angular momentum modes $m=0,1,...,5$ and $n = 0,1$ .", "The minimal value of $\\eta $ among all modes at a given trap size $\\rho _1$ quantifies the condensation threshold and specifies the condensate quantum numbers ($n,m$).", "An example of a transition between two condensate modes $(0,0)$ and $(0,1)$ , with spatial density distributions shown in the insets, is indicated with the red dot.", "Parameters: $k=1.1$ , $\\varkappa = 1$ .", "Figure REF shows the critical pumping dependence on the inner circle radius $\\rho _1$ for fixed values of $k=1.1$ and $\\varkappa = 1$ .", "The red dot indicates crossing of two graphs corresponding to $m=0$ and $m=1$ ($n=0$ for both) and illustrates switching between two condensate modes at $\\rho _1\\approx 1.1$ .", "For smaller traps the condensation threshold corresponds to the lowest mode ($n=0$ , $m=0$ ), while in larger traps the condensate forms at the mode ($n=0$ , $m=1$ ).", "The condensate density spatial distribution $|\\Psi |^2$ of both modes is shown in the insets.", "Further condensate quantum number switching is visible for larger traps at $\\rho _1\\approx 1.6$ .", "Figure: Angular quantum number distribution in the parameter space, formed by the trap inner radius $\\rho _1$ and the interaction parameter ratio $\\varkappa =\\alpha /\\beta $ in the case $k=\\rho _2/\\rho _1 = 1.1$ .", "The radial quantum number remains fixed ($n=0$ ) in the entire region of parameters.", "Spatial condensate density distribution is shown with round insets for selected points on the plane.", "The case $k=1.1$ , where the two circle radii are close, is qualitatively similar to the single annular trap, where the condensate only forms at the ground radial state $n=0$ [15], [22].", "This is illustrated in Figure REF showing the distribution of the condensate quantum numbers on the parameter plane $(\\rho _1,\\varkappa )$ for $k=1.1$ .", "Similarly to the single annular trap case, the switching cascade leads to the angular momentum $m$ increasing with the trap radius $\\rho _1$ and $\\varkappa $ .", "Insets illustrate the spatial density distribution of the corresponding condensate wavefunctions at selected parameter plane points.", "Figure: Distribution of quantum numbers $m$ (shown with colour) and $n$ in the parameter space, formed by the trap inner radius $\\rho _1$ and the interaction parameter ratio $\\varkappa =\\alpha /\\beta $ in the case of a higher ratio of radii $k=\\rho _2/\\rho _1 = 2$ .", "Spatial condensate density 
distribution is shown with round insets for selected points on the plane.", "Figure: The form of wave functions near and at the critical point for a double delta-like potential.", "The graph shows the real parts of the energy levels sorted in ascending order.", "For higher values of $k$ , the condensate quantum number distribution map changes significantly.", "Figure REF shows the condensate quantum number distribution in the case $k=2$ .", "Most importantly, condensation at states with nonzero radial numbers $n\\ne 0$ is possible in certain domains of the parameter plane, in agreement with the experimental findings of Ref.", "[13], which have not previously been reproduced in theory.", "At the same time, monotonic growth of the azimuthal quantum number $m$ with $\\rho _1$ and $\\varkappa $ is replaced with a non-monotonic cascade of transitions, where both quantum numbers $n$ and $m$ change by varying increments.", "One may also note that the distance between neighbouring transitions on the parameter plane generally increases.", "Finally, in addition to continuous lines separating domains of condensate quantum numbers, triple points lying on the edges of three domains emerge, in drastic contrast to the single annular trap case.", "The nontrivial behaviour of mode switching in this case is underlined by the fact that the eigenstate wavefunction antinodes do not necessarily overlap with the two potential peaks.", "This is illustrated in Fig.", "REF , showing the real parts of the trapped polariton mode energies and corresponding probability density radial profiles.", "Figure: Numerically computed approximation for the distance between neighbouring azimuthal nodes of condensate density $d=\\pi \\rho _1/m$ in the same region of parameter space as in Fig.", "($k=2$ ).", "Finally, the average distance between neighbouring azimuthal nodes of the polariton condensate density spatial distribution was numerically estimated in the same region of parameter space.", "Qualitatively, this parameter is approximately proportional to the ratio of the trap circumference $2\\pi \\rho _1$ and the number of azimuthal nodes $2m$ .", "The $\\rho _1/m$ ratio distribution is presented in Fig.", "REF .", "Interestingly, although within each quantum number domain this value increases linearly with the trap radius $\\rho _1$ , on larger scales spanning many transitions the internodal distance decreases with the trap size, in qualitative agreement with the results of Refs.", "[12], [13]."
], [ "Discussion", "Polariton condensation was modelled in biannular optically induced complex potential traps within the framework of the mean-field generalized Gross-Pitaevskii equation.", "The model of a biannular delta-function complex potential, emerging due to the excitonic density generated by external nonresonant pumping, allowed a quantitative in-depth analysis of nonequilibrium polariton condensation.", "A drastic qualitative modification of the polariton condensation picture in comparison to the single annular trap case was discovered.", "This model allowed us to reproduce for the first time, and to give a qualitative interpretation of, the formation of polariton condensates in excited radial quantum number states of the trap, experimentally observed in Ref.", "[13].", "The developed numerical method allowed constructing phase diagrams of polariton condensate quantum numbers in arbitrary regions of the multidimensional space of optical trap parameters.", "This paves the way towards harnessing the radial degree of freedom of polariton laser emission via all-optical control of polariton condensate angular and radial quantum numbers.", "One of the most intriguing predictions of the model is the modification of the condensate quantum number distribution map topology with increasing ratio of the biannular trap radii.", "It supplements the layered cascade of incremental quantum number transitions with triple points in the angular quantum number, and quadruple points if the radial quantum number is taken into account.", "Topological characteristics of such points and the possibility of exceptional point emergence in this system remain unexplored and may be studied within the developed model framework.", "It should be noted that the discussed phenomena rely on both Hermitian and anti-Hermitian terms of the effective Hamiltonian, stemming from exciton-exciton repulsive interactions and bosonic stimulated particle exchange between the reservoir and the condensate, respectively.", "This is illustrated by the qualitative dependence of the condensate mode switching patterns on the parameter $\\varkappa $ , which quantifies the ratio of the two types of condensate-reservoir interaction.", "Being interacting non-Hermitian systems, nonresonantly pumped nonequilibrium exciton-polariton condensates present an opportunity to vary this parameter by independently controlling the exciton-exciton repulsion and stimulated scattering rates with the Hopfield coefficient on the one hand, and the energy dispersion anti-crossing shape on the other."
], [ "Appendix A", "The general view of the resulting matrix on the spectrum can be given as $ M=\\left(\\begin{array}{cccc}M_{11}& M_{21} & M_{31} & 0 \\\\0 & M_{22} & M_{32} & M_{42} \\\\M_{13}& M_{23} & M_{33} & 0 \\\\0 & M_{24} & M_{34} & M_{44}\\end{array}\\right).$ First row $M_{11}= J_{m}\\left(\\rho _{1}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right), M_{21}=-J_{m}\\left(\\rho _{1}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ , $M_{31}=-Y_{m}\\left(\\rho _{1}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ Second row $M_{22}= J_{m}\\left(\\rho _{2}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right) $ , $M_{32}=Y_{m}\\left(\\rho _{2}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ , $M_{42}=-H_{m}\\left(\\rho _{2}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ Third row $M_{13}= -J^{^{\\prime }}_{m}\\left(\\rho _{1}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)-\\frac{\\varkappa +\\textbf {i}}{2}\\alpha _{1}J_{m}\\left(\\rho _{1}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right) $ , $M_{23}=J^{^{\\prime }}_{m}\\left(\\rho _{1}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ , $M_{33}=Y^{^{\\prime }}_{m}\\left(\\rho _{1}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ Fourth row $M_{24}=-J^{^{\\prime }}_{m}\\left(\\rho _{2}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ , $M_{34}=-Y^{^{\\prime }}_{m}\\left(\\rho _{2}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$ , $M_{44}=H^{^{\\prime }}_{m}\\left(\\rho _{2}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)-\\frac{\\varkappa +\\textbf {i}}{2}\\alpha _{1}H_{m}\\left(\\rho _{2}\\sqrt{\\epsilon +\\frac{\\textbf {i}}{2}}\\right)$" ] ]
2212.05533
[ [ "Local Limit of Nonlocal Gravity: A Teleparallel Extension of General\n Relativity" ], [ "Abstract We describe a general constitutive framework for a teleparallel extension of the general theory of relativity.", "This approach goes beyond the teleparallel equivalent of general relativity (TEGR) by broadening the analogy with the electrodynamics of media.", "In particular, the main purpose of this paper is to investigate in detail a local constitutive extension of TEGR that is the local limit of nonlocal gravity (NLG).", "The cosmological implications of this theory are briefly explored; specifically, the modified flat cosmological model is discussed in some detail." ], [ "Introduction", "To explain current astronomical observations, dark matter is apparently necessary to describe the dynamics of galaxies, clusters of galaxies, and structure formation in cosmology.", "Similarly, dark energy seems necessary to explain the accelerated expansion of the universe.", "In the benchmark model of cosmology, the energy content of the universe comprises about 70% dark energy, about 25% dark matter and about 5% visible matter.", "The physical nature of dark matter and dark energy is unknown at present.", "One or both dark aspects could conceivably be characteristic of the gravitational interaction.", "It therefore seems reasonable to attempt to modify Einstein's general relativity (GR) on the scales of galaxies and beyond in order to account for the observational data without any need for the dark content of the universe.", "To this end, many modified gravity theories have been proposed [1].", "This paper is about a certain constitutive extension of teleparallel gravity.", "The resulting theory can be described as the local limit of nonlocal gravity.", "Nonlocal gravity (NLG) is a classical nonlocal generalization of GR patterned after the nonlocal electrodynamics of media.", "To motivate this approach to the modification of GR, let us note that a postulate of locality runs through the special and general theories of relativity.", "In special relativity, Lorentz transformations are applied point by point along the world line of an accelerated observer in Minkowski spacetime in order to determine what the observer measures [2].", "However, to measure the properties of electromagnetic waves, one must take their intrinsic nonlocal nature into account in accordance with the Huygens principle.", "Moreover, Bohr and Rosenfeld [3] have pointed out that electromagnetic fields cannot be measured at a spacetime event; instead, a certain spacetime averaging procedure is necessary for this purpose.", "To extend the postulate of locality to the measurement of wave phenomena in Minkowski spacetime, one must consider the past history of the accelerated observer and this leads to nonlocal special relativity theory in which the observer's memory of its past acceleration is properly taken into account [4].", "The locality postulate plays an essential role in Einstein's local principle of equivalence in rendering observers pointwise inertial in a gravitational field.", "The gravitational field equations in general relativity are thus partial differential equations.", "The intimate connection between inertia and gravitation, clearly revealed via Einstein's development of the general theory of relativity, implies that the universal gravitational interaction could be nonlocal as well.", "That is, the gravitational field equations could allow for the gravitational memory of past events in such a way that the local 
gravitational field would then satisfy partial integro-differential field equations.", "Einstein's GR, as a field theory of gravitation, was modeled after Maxwell's electrodynamics.", "A nonlocal extension of general relativity theory has been developed that is modeled after the nonlocal electrodynamics of media [5], [6].", "The nonlocal constitutive kernel in the electrodynamics of media has its origin in atomic physics [7], [8], [9]; however, no analogous atomic medium exists in the gravitational case.", "Therefore, the nonlocal kernel in the gravitational case must be determined from observation.", "A comprehensive account of the resulting nonlocal gravity (NLG) theory is contained in Ref. [10].", "A significant observational consequence of this classical nonlocal generalization of Einstein's theory of gravitation is that the nonlocal aspect of gravity in the Newtonian regime of the theory appears to simulate dark matter [11], [12], [13], [14], [15].", "The classical nonlocal extension of GR can be accomplished through the framework of teleparallelism.", "Briefly, we start with GR and consider a gravitational field where events are characterized by an admissible system of spacetime coordinates $x^\\mu $ with metric $ds^2 = g_{\\mu \\nu }\\,dx^\\mu \\, dx^\\nu \\,.$ Free test particles and null rays follow geodesics in this spacetime manifold.", "Here, Greek indices run from 0 to 3, while Latin indices run from 1 to 3; moreover, the signature of the metric is +2.", "We use units such that $c = 1$ , unless specified otherwise.", "In this spacetime, we choose a preferred set of observers with adapted tetrads $e^\\mu {}_{\\hat{\\alpha }}(x)$ that are orthonormal; that is, $g_{\\mu \\nu }(x) \\, e^\\mu {}_{\\hat{\\alpha }}(x)\\, e^\\nu {}_{\\hat{\\beta }}(x)= \\eta _{\\hat{\\alpha } \\hat{\\beta }}\\,.$ In our convention, hatted indices enumerate the tetrad axes in the local tangent space, while indices without hats are normal spacetime indices; moreover, $\\eta _{\\alpha \\beta }$ is the Minkowski metric tensor given by diag$(-1,1,1,1)$ .", "We now employ our preferred tetrad frame field to define a nonsymmetric Weitzenböck connection [16] $\\Gamma ^\\mu _{\\alpha \\beta }=e^\\mu {}_{\\hat{\\rho }}~\\partial _\\alpha \\,e_\\beta {}^{\\hat{\\rho }}\\,.$ This curvature-free connection is such that $\\nabla _\\nu \\,e_\\mu {}^{\\hat{\\alpha }}=0$ , where $\\nabla $ denotes covariant differentiation with respect to the Weitzenböck connection.", "Our preferred tetrad frames are thus parallel throughout spacetime; that is, the Weitzenböck connection renders spacetime a parallelizable manifold.", "In this teleparallelism framework [17], [18], [19], distant vectors can be considered parallel if they have the same local components with respect to the preferred frame field.", "Furthermore, the Weitzenböck connection is metric compatible, $\\nabla _\\nu \\, g_{\\alpha \\beta }=0$ , since the metric can be defined through the tetrad orthonormality relation.", "In our extended pseudo-Riemannian structure of GR, we have both the Levi-Civita and Weitzenböck connections that are compatible with the same spacetime metric $g_{\\mu \\nu }$ .", "Curvature and torsion are basic tensors associated with a given connection.", "The symmetric Levi-Civita connection is given by the Christoffel symbols ${^0}\\Gamma ^\\mu _{\\alpha \\beta }= \\frac{1}{2} g^{\\mu \\nu } (g_{\\nu \\alpha ,\\beta }+g_{\\nu \\beta ,\\alpha }-g_{\\alpha \\beta ,\\nu })\\,.$ This connection is torsion free, but has Riemannian curvature 
${^0}R_{\\alpha \\beta \\gamma \\delta }$ that represents the gravitational field in GR.", "In our convention, a left superscript “0\" is used to refer to geometric quantities directly related to the Levi-Civita connection.", "In particular, the gravitational field equations of GR can be expressed as [2] ${^0}G_{\\mu \\nu } + \\Lambda \\, g_{\\mu \\nu }=\\kappa \\,T_{\\mu \\nu }\\,,$ where ${^0}G_{\\mu \\nu }$ is the Einstein tensor ${^0}G_{\\mu \\nu } := {^0}R_{\\mu \\nu }-\\frac{1}{2} g_{\\mu \\nu }\\,{^0}R\\,.$ We denote the symmetric energy-momentum tensor of matter by $T_{\\mu \\nu }$ ; moreover, $\\Lambda $ is the cosmological constant and $\\kappa :=8 \\pi G/c^4$ .", "In Eq.", "(REF ), the Ricci tensor ${^0}R_{\\mu \\nu } = {^0}R^{\\alpha }{}_{\\mu \\alpha \\nu }$ is the trace of the Riemann tensor and the scalar curvature ${^0}R = {^0}R^{\\mu }{}_{\\mu }$ is the trace of the Ricci tensor.", "The spacetime torsion tensor associated with the Weitzenböck connection is given by $C_{\\mu \\nu }{}^{\\alpha }=\\Gamma ^{\\alpha }_{\\mu \\nu }-\\Gamma ^{\\alpha }_{\\nu \\mu }=e^\\alpha {}_{\\hat{\\beta }}\\Big (\\partial _{\\mu }e_{\\nu }{}^{\\hat{\\beta }}-\\partial _{\\nu }e_{\\mu }{}^{\\hat{\\beta }}\\Big )\\,.$ Furthermore, the difference between two connections on the same manifold is always a tensor.", "Hence, we define the contorsion tensor $K_{\\mu \\nu }{}^\\alpha = {^0} \\Gamma ^\\alpha _{\\mu \\nu } - \\Gamma ^\\alpha _{\\mu \\nu }\\,.$ The metric compatibility of the Weitzenböck connection means that contorsion is related to torsion; that is, $K_{\\mu \\nu \\rho } = \\frac{1}{2}\\, (C_{\\mu \\rho \\nu }+C_{\\nu \\rho \\mu }-C_{\\mu \\nu \\rho })\\,.$ The contorsion tensor is antisymmetric in its last two indices; in contrast, the torsion tensor is antisymmetric in its first two indices.", "It turns out that the curvature of the Levi-Civita connection ${^0}R_{\\mu \\nu \\rho \\sigma }$ and the torsion of the Weitzenböck connection $C_{\\mu \\nu \\rho }$ are complementary aspects of the gravitational field in this extended framework [10].", "It is important to point out a certain natural connection between the gravitational field strength $C_{\\mu \\nu \\rho }$ and the electromagnetic field strength $F_{\\mu \\nu } = \\partial _\\mu A_\\nu - \\partial _\\nu A_\\mu $ .", "Writing the torsion tensor as $C_{\\mu \\nu }{}^{\\hat{\\alpha }}=e_\\rho {}^{\\hat{\\alpha }}C_{\\mu \\nu }{}^{\\rho }= \\partial _{\\mu }e_{\\nu }{}^{\\hat{\\alpha }}-\\partial _{\\nu }e_{\\mu }{}^{\\hat{\\alpha }}\\,,$ we note that for each ${\\hat{\\alpha }}={\\hat{0}}, {\\hat{1}}, {\\hat{2}}, {\\hat{3}}$ , we have here an analog of the electromagnetic field tensor defined in terms of the vector potential $ A_\\mu = e_{\\mu }{}^{\\hat{\\alpha }}$ .", "The traditional construction of the GR field equations is based on the Levi-Civita connection.", "On the other hand, the Levi-Civita connection is the sum of the Weitzenböck connection and the contorsion tensor in accordance with Eq.", "(REF ).", "Hence, GR field equations can be expressed in terms of the torsion tensor.", "This formulation of Einstein's theory is naturally analogous to Maxwell's electrodynamics.", "Indeed, the result is the teleparallel equivalent of general relativity (TEGR) to which we now turn.", "It is possible to express the Einstein tensor as [5], [6], [10] ${^0}G_{\\mu \\nu }=\\frac{\\kappa }{\\sqrt{-g}}\\Big [e_\\mu {}^{\\hat{\\gamma }}\\,g_{\\nu \\alpha }\\, \\frac{\\partial }{\\partial x^\\beta }\\,\\mathfrak {H}^{\\alpha \\beta 
}{}_{\\hat{\\gamma }}-\\Big (C_{\\mu }{}^{\\rho \\sigma }\\,\\mathfrak {H}_{\\nu \\rho \\sigma }-\\frac{1}{4}\\,g_{\\mu \\nu }\\,C^{\\alpha \\beta \\gamma }\\,\\mathfrak {H}_{\\alpha \\beta \\gamma }\\Big ) \\Big ]\\,,$ where the auxiliary torsion field $\\mathfrak {H}_{\\mu \\nu \\rho }$ is defined by $\\mathfrak {H}_{\\mu \\nu \\rho }:= \\frac{\\sqrt{-g}}{\\kappa }\\,\\mathfrak {C}_{\\mu \\nu \\rho }\\,, \\qquad \\mathfrak {C}_{\\alpha \\beta \\gamma } :=C_\\alpha \\, g_{\\beta \\gamma } - C_\\beta \\,g_{\\alpha \\gamma }+K_{\\gamma \\alpha \\beta }\\,.$ The auxiliary torsion tensor $\\mathfrak {C}_{\\alpha \\beta \\gamma }$ , where $C_\\mu :=C^{\\alpha }{}_{\\mu \\alpha } = - C_{\\mu }{}^{\\alpha }{}_{\\alpha }$ is the torsion vector, is antisymmetric in its first two indices just like the torsion tensor.", "Moreover, as in GR, $g:=\\det (g_{\\mu \\nu })$ and $\\sqrt{-g}=\\det (e_{\\mu }{}^{\\hat{\\alpha }})$ .", "Einstein's field equations (REF ) expressed in terms of torsion thus become the TEGR field equations $\\frac{\\partial }{\\partial x^\\nu }\\,\\mathfrak {H}^{\\mu \\nu }{}_{\\hat{\\alpha }}+\\frac{\\sqrt{-g}}{\\kappa }\\,\\Lambda \\,e^\\mu {}_{\\hat{\\alpha }} =\\sqrt{-g}\\,(T_{\\hat{\\alpha }}{}^\\mu + \\mathbb {T}_{\\hat{\\alpha }}{}^\\mu )\\,,$ where $\\mathbb {T}_{\\mu \\nu }$ is the traceless energy-momentum tensor of the gravitational field and is defined by $\\sqrt{-g}\\,\\mathbb {T}_{\\mu \\nu } :=C_{\\mu \\rho \\sigma }\\, \\mathfrak {H}_{\\nu }{}^{\\rho \\sigma }-\\frac{1}{4} g_{\\mu \\nu }\\,C_{\\rho \\sigma \\delta }\\,\\mathfrak {H}^{\\rho \\sigma \\delta }\\,.$ The antisymmetry of $\\mathfrak {H}^{\\mu \\nu }{}_{\\hat{\\alpha }}$ in its first two indices leads to the law of conservation of total energy-momentum tensor, namely, $\\frac{\\partial }{\\partial x^\\mu }\\,\\Big [\\sqrt{-g}\\,(T_{\\hat{\\alpha }}{}^\\mu + \\mathbb {T}_{\\hat{\\alpha }}{}^\\mu -\\frac{\\Lambda }{\\kappa }\\,e^\\mu {}_{\\hat{\\alpha }})\\Big ]=0\\,,$ which follows from taking partial derivative $\\partial /\\partial x^\\mu $ of Eq.", "(REF ).", "Finally, it is important to note that while GR is based on the metric tensor $g_{\\mu \\nu }$ , TEGR is based on the orthonormal tetrad frame field $e^{\\mu }{}_{\\hat{\\alpha }}(x)$ that is globally parallel via the Weitzenböck connection.", "The teleparallel equivalent of general relativity (TEGR) is the gauge theory of the Abelian group of spacetime translations [20], [21], [22].", "Though nonlinear, TEGR is therefore in a certain sense analogous to Maxwell's equations in a medium with a simple constitutive relation [9].", "That is, $C_{\\mu \\nu }{}^{\\hat{\\alpha }}$ is, as noted before, similar to the electromagnetic field $F_{\\mu \\nu }$ , where $(\\mathbf {E}, \\mathbf {B})\\mapsto F_{\\mu \\nu }$ , while, $\\mathfrak {H}_{\\mu \\nu }{}^{\\hat{\\alpha }}$ is similar to the electromagnetic excitation $H_{\\mu \\nu }$ , where $(\\mathbf {D}, \\mathbf {H}) \\mapsto H_{\\mu \\nu }$ .", "Moreover, we can look upon Eq.", "(REF ), namely, $\\mathfrak {H}_{\\alpha \\beta \\gamma } = \\frac{\\sqrt{-g}}{\\kappa }\\,\\mathfrak {C}_{\\alpha \\beta \\gamma } = \\frac{\\sqrt{-g}}{\\kappa }\\,\\left[\\frac{1}{2}(C_{\\gamma \\beta \\alpha } + C_{\\alpha \\beta \\gamma } -C_{\\gamma \\alpha \\beta }) +C_\\alpha \\, g_{\\beta \\gamma } - C_\\beta \\,g_{\\alpha \\gamma }\\right]\\,,$ as the local constitutive relation of TEGR, since it connects $\\mathfrak {H}_{\\alpha \\beta \\gamma }$ to $C_{\\alpha \\beta \\gamma }$ .", "When we study the 
electrodynamics of media, we keep the fundamental field equations intact and change only the constitutive relation appropriate for the medium at hand.", "We adopt this approach in extending TEGR; that is, we simply modify Eq.", "(REF ) in the rest of this paper.", "This approach differs from other extensions of TEGR that involve, for instance, the introduction of scalar fields into the theory.", "That is, scalar-torsion theories of gravity, which are analogues of scalar-tensor theories that extend GR, have been studied by a number of authors; for recent reviews, see [23], [24].", "It is important to recognize that we could have arrived at TEGR using any other smooth orthonormal tetrad frame field $\\lambda ^{\\mu }{}_{\\hat{\\alpha }}(x)$ .", "This circumstance is natural, since GR only depends on the metric tensor $g_{\\mu \\nu }$ .", "At each event, the two tetrad frame fields $\\lambda ^\\mu {}_{\\hat{\\alpha }}(x)$ and $e^\\mu {}_{\\hat{\\alpha }}(x)$ are related by a six-parameter element of the local Lorentz group involving three boosts and three rotations; that is, $\\lambda ^\\mu {}_{\\hat{\\alpha }}(x) = \\mathcal {L}_{\\hat{\\alpha }}{}^{\\hat{\\beta }}(x)\\, e^\\mu {}_{\\hat{\\beta }}(x)$ .", "This pointwise 6-fold degeneracy is generally removed when we modify the constitutive relation of TEGR.", "In our teleparallel extension of GR, the modified theory is then invariant only under the global Lorentz group." ], [ "Constitutive Extension of TEGR", "To maintain the analogy with electrodynamics, we retain the gravitational field equations of TEGR, while the constitutive relation is modified.", "This means, in effect, that we replace $\\mathfrak {H}$ in Eqs.", "(REF ) and (REF ) by $\\mathcal {H}$ given by $\\mathcal {H}_{\\mu \\nu \\rho } = \\frac{\\sqrt{-g}}{\\kappa }(\\mathfrak {C}_{\\mu \\nu \\rho }+ N_{\\mu \\nu \\rho })\\,,$ where $N_{\\mu \\nu \\rho } = - N_{\\nu \\mu \\rho }$ is a tensor that is related to the torsion tensor $C_{\\mu \\nu \\rho }$ .", "For the moment, let us find the new extension of GR based on the new tensor field $N_{\\mu \\nu \\rho }$ .", "The gravitational field equations take the form $\\frac{\\partial }{\\partial x^\\nu }\\,\\mathcal {H}^{\\mu \\nu }{}_{\\hat{\\alpha }}+\\frac{\\sqrt{-g}}{\\kappa }\\,\\Lambda \\,e^\\mu {}_{\\hat{\\alpha }} =\\sqrt{-g}\\,(T_{\\hat{\\alpha }}{}^\\mu + \\mathcal {T}_{\\hat{\\alpha }}{}^\\mu )\\,,$ where $\\mathcal {T}_{\\mu \\nu }$ is the traceless energy-momentum tensor of the gravitational field in this case.", "We have $\\kappa \\,\\mathcal {T}_{\\mu \\nu } = \\kappa \\,\\mathbb {T}_{\\mu \\nu } + Q_{\\mu \\nu }\\,,$ where $Q_{\\mu \\nu }$ is a traceless tensor defined by $Q_{\\mu \\nu } := C_{\\mu \\rho \\sigma } N_{\\nu }{}^{\\rho \\sigma }-\\frac{1}{4}\\, g_{\\mu \\nu }\\,C_{ \\delta \\rho \\sigma }N^{\\delta \\rho \\sigma }\\,.$ The total energy-momentum conservation law can now be expressed as $\\frac{\\partial }{\\partial x^\\mu }\\,\\Big [\\sqrt{-g}\\,(T_{\\hat{\\alpha }}{}^\\mu + \\mathcal {T}_{\\hat{\\alpha }}{}^\\mu -\\frac{\\Lambda }{\\kappa }\\,e^\\mu {}_{\\hat{\\alpha }})\\Big ]=0\\,.$ To find the modified GR field equations, let us start with Eq.", "(REF ) and substitute $\\mathfrak {H}_{\\mu \\nu \\rho } = \\mathcal {H}_{\\mu \\nu \\rho } - \\frac{\\sqrt{-g}}{\\kappa } N_{\\mu \\nu \\rho }\\,$ in the Einstein tensor (REF ) to get $^{0}G_{\\mu \\nu } + \\mathcal {N}_{\\mu \\nu } = \\kappa T_{\\mu \\nu } - \\Lambda g_{\\mu \\nu } + Q_{\\mu \\nu }\\,,$ where we have used Eq.", "(REF ).", "Here, $\\mathcal 
{N}_{\\mu \\nu }$ is a tensor defined by $\\mathcal {N}_{\\mu \\nu } = g_{\\nu \\alpha } e_\\mu {}^{\\hat{\\gamma }} \\frac{1}{\\sqrt{-g}} \\frac{\\partial }{\\partial x^\\beta }\\,(\\sqrt{-g}N^{\\alpha \\beta }{}_{\\hat{\\gamma }})\\,.$ It is natural to split the modified GR field equations into its symmetric and antisymmetric parts; that is, we have the modified Einstein equations $^{0}G_{\\mu \\nu } = \\kappa T_{\\mu \\nu } - \\Lambda g_{\\mu \\nu } - \\mathcal {N}_{(\\mu \\nu )} + Q_{(\\mu \\nu )}\\,$ and the constraint equations $\\mathcal {N}_{[\\mu \\nu ]} = Q_{[\\mu \\nu ]}\\,.$ We thus have 16 field equations for the 16 components of the tetrad frame field.", "Of the 16 components of the fundamental tetrad $e^\\mu {}_{\\hat{\\alpha }}$ , 10 fix the components of the metric tensor $g_{\\mu \\nu }$ via the orthonormality condition, while the other 6 are local Lorentz degrees of freedom (i.e., boosts and rotations).", "Similarly, the 16 field equations of modified GR for the 16 components of the fundamental tetrad $e^\\mu {}_{\\hat{\\alpha }}$ naturally split into 10 modified Einstein equations plus 6 integral constraint equations for the new tensor $N_{\\mu \\nu \\rho }$ .", "We have here a general framework for the teleparallel extension of GR.", "It remains to specify the exact connection between $N_{\\mu \\nu \\rho }$ and $C_{\\mu \\nu \\rho }$ .", "A nonlocal relation has led to nonlocal gravity(NLG) theory [5], [6], [10].", "The local limit of this nonlocal relation is the main focus of the present investigation." ], [ "Nonlocal Gravity", "In nonlocal gravity (NLG), we assume, in close analogy with the nonlocal electrodynamics of media, that the components of $N_{\\mu \\nu \\rho }$ , as measured by the fundamental observers of the theory with adapted tetrads $e^\\mu {}_{\\hat{\\alpha }}$ , must be physically related to the corresponding measured components of $X_{\\mu \\nu \\rho }$ that is directly connected to the torsion tensor [10], [22], [25].", "That is, $N_{\\hat{\\mu }\\hat{\\nu }\\hat{\\rho }}(x) = \\int \\mathcal {K}(x, x^{\\prime })\\,X_{\\hat{\\mu }\\hat{\\nu }\\hat{\\rho }}(x^{\\prime }) \\sqrt{-g(x^{\\prime })}\\, d^4x^{\\prime } \\,,$ where $\\mathcal {K}(x, x^{\\prime })$ is the basic causal kernel of NLG and $X_{\\hat{\\mu }\\hat{\\nu }\\hat{\\rho }}= \\mathfrak {C}_{\\hat{\\mu }\\hat{\\nu }\\hat{\\rho }}+ \\check{p}\\,(\\check{C}_{\\hat{\\mu }}\\, \\eta _{\\hat{\\nu }\\hat{\\rho }}-\\check{C}_{\\hat{\\nu }}\\, \\eta _{\\hat{\\mu }\\hat{\\rho }})\\,.$ Here, $\\check{p}\\ne 0$ is a constant dimensionless parameter and $\\check{C}^\\mu $ is the torsion pseudovector defined via the Levi-Civita tensor $E_{\\alpha \\beta \\gamma \\delta }$ by $\\check{C}_\\mu :=\\frac{1}{3!}", "C^{\\alpha \\beta \\gamma }\\,E_{\\alpha \\beta \\gamma \\mu }\\,.$ It is important to remark that the only known exact solution of NLG is the trivial solution, namely, we recover Minkowski spacetime in the absence of the gravitational field.", "Thus far, it has only been possible to show that de Sitter solution is not an exact solution of NLG [25].", "There are indeed many other nonlocal models of gravity and cosmology; see, for instance [26], [27], [28], [29] and the references cited therein." 
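As an illustrative aside added here (not part of the original text), the short Python sketch below mimics the structure of the nonlocal constitutive relation in one spatial dimension: a field is convolved with a kernel of finite width, and as the width shrinks toward a Dirac delta weighted by a susceptibility S(x) the integral collapses to pointwise multiplication, which is precisely the local limit studied in the next section. The grid, the field X and the function S are arbitrary toy choices.

import numpy as np

# One-dimensional toy analog of N(x) = int K(x, x') X(x') dx'.
# The grid, the "torsion-like" field X and the susceptibility S are
# arbitrary illustrative choices, not quantities from the theory.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
X = np.exp(-x**2)
S = 0.3 / (1.0 + x**2)

def N_nonlocal(width):
    # Gaussian kernel of finite width, normalized so that it tends to
    # S(x) * delta(x - x') as width -> 0.
    K = S[:, None] * np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * width**2)) \
        / np.sqrt(2.0 * np.pi * width**2)
    return K @ X * dx

N_local = S * X  # local limit: the convolution collapses to a product

for w in (1.0, 0.3, 0.05):
    err = np.max(np.abs(N_nonlocal(w) - N_local))
    print(f"kernel width {w:4.2f}:  max |N_nonlocal - N_local| = {err:.2e}")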
], [ "Local Limit of NLG", "The local limit of Eq.", "(REF ) can be obtained by assuming that the kernel is proportional to the 4D Dirac delta function, namely, $\\mathcal {K}(x, x^{\\prime }) := \\frac{S(x)}{\\sqrt{-g(x)}}\\,\\delta (x-x^{\\prime })\\,,$ where $S(x)$ is a dimensionless scalar function.", "In this case, the nonlocal constitutive relation (REF ) reduces to $N_{\\mu \\nu \\rho }(x) = S(x) X_{\\mu \\nu \\rho } = S(x)\\,[\\mathfrak {C}_{\\mu \\nu \\rho }(x) + \\check{p}\\,(\\check{C}_\\mu \\, g_{\\nu \\rho }-\\check{C}_\\nu \\, g_{\\mu \\rho })]\\,.$ Moreover, Eq.", "(REF ) takes the form $\\mathcal {H}_{\\mu \\nu \\rho } = \\frac{\\sqrt{-g}}{\\kappa }[(1+S)\\,\\mathfrak {C}_{\\mu \\nu \\rho }+ S\\,\\check{p}\\,(\\check{C}_\\mu \\, g_{\\nu \\rho }-\\check{C}_\\nu \\, g_{\\mu \\rho })]\\,.$ If $S(x) = 0$ , we recover TEGR; otherwise, we have a generalization of GR.", "Of course, the local and linear relation (REF ) can be generalized; that is, $N_{\\mu \\nu \\rho }(x) = \\frac{1}{2} \\chi _{\\mu \\nu \\rho }{}^{\\alpha \\beta \\gamma }(x) X_{\\alpha \\beta \\gamma }(x)\\,.$ Such six-index gravitational constitutive tensors as $\\chi _{\\mu \\nu \\rho }{}^{\\alpha \\beta \\gamma }$ have been studied and classified in [17].", "The new local constitutive relation enlarges TEGR, which is equivalent to a pure spin-2 theory, namely, GR, by the addition of a scalar function.", "It is clear from Eq.", "(REF ) that $1+S \\ne 0$ ; otherwise, the new theory will not have GR as a limit.", "There is no field equation for $S(x)$ .", "Therefore, it remains to determine this function.", "In nonlocal gravity, we must ultimately determine the fundamental nonlocal kernel of the theory from comparison of the theory with observational data [11], [12], [13], [14].", "In a similar way, we may be able to fix $S(x)$ on the basis of certain consistency conditions in analogy with the electrodynamics of media." 
], [ "Analogy with the Electrodynamics of Media", "In the electrodynamics of media, especially magnetic media, the phenomena associated with hysteresis cannot be ignored; therefore, the constitutive relations are in general nonlocal [8].", "However, in most applications of the electrodynamics of media, one uses the simple relations $\\mathbf {D} = \\epsilon (x) \\mathbf {E}$ and $\\mathbf {B} = \\mu (x) \\mathbf {H}$ .", "Presumably, these local limits of nonlocal constitutive relations of linear media capture some important aspects of the general nonlocal problem.", "The same is expected to hold in the gravitational case.", "We can partially compensate for the lack of exact solutions of NLG in the case of strong gravitational fields by searching for solutions of the new local theory in the areas of cosmology and black hole physics.", "The local quantities $\\epsilon (x)$ and $\\mu (x)$ are characteristics of the medium in electrodynamics; similarly, $S(x)$ is characteristic of the background spacetime in the new local theory.", "The functional form of $S(x)$ must therefore be consistent with the nature of the background spacetime.", "In analogy with electrodynamics, we call $S(x)$ the susceptibility function and tentatively call the new theory “Modified TEGR\", since we simply add an extra term to the constitutive relation of TEGR.", "It must be mentioned that other modified teleparallel theories of gravity have been considered by a number of authors; see, for instance [30], [31], [32], [33] and the references cited therein.", "To gain insight into the nature of this theory, we must solve its field equations which are obtained from the substitution of the auxiliary torsion field (REF ) in Eq.", "(REF ).", "The resulting field equations are indeed satisfied in the case of Minkowski spacetime with Cartesian coordinates $x^\\mu $ and preferred tetrad frame field $e^\\mu {}_{\\hat{\\alpha }}(x) = \\delta ^\\mu _\\alpha $ in the absence of any sources ($T_{\\mu \\nu } = 0$ and $\\Lambda =0$ ); here, $S(x) \\ne 0$ is a function of the background Cartesian coordinates.", "Let us therefore look for an approximate solution of modified TEGR field equations with $S(x) \\ne 0$ and $\\Lambda =0$ that is a first-order perturbation about Minkowski spacetime.", "The purpose of this section is to develop and study the resulting general linear weak-field solution of modified TEGR." 
], [ "Linearization ", "We begin by linearizing the theory about Minkowski spacetime; that is, we assume the fundamental frame field of the theory is given by [10], [34] $e_\\mu {}^{\\hat{\\alpha }}=\\delta _\\mu ^{\\alpha }+\\psi ^{\\alpha }{}_\\mu \\,, \\qquad e^\\mu {}_{\\hat{\\alpha }}=\\delta ^\\mu _{\\alpha } -\\psi ^\\mu {}_{\\alpha }\\,,$ where in the linear perturbing field $\\psi _{\\mu \\nu }(x)$ the distinction between spacetime and tetrad indices can be ignored at this level of approximation.", "Here, the 16 components of $\\psi _{\\mu \\nu }$ represent the gravitational potentials of a finite source of mass-energy that is at rest in a compact region of space.", "We thus break the invariance of the theory under the global Lorentz group by fixing the background inertial frame of reference to be the rest frame of the source.", "We define the symmetric and antisymmetric components of $\\psi _{\\mu \\nu }$ by $h_{\\mu \\nu }:=2\\psi _{(\\mu \\nu )}\\,, \\qquad \\phi _{\\mu \\nu }:=2\\psi _{[\\mu \\nu ]}\\,.$ The tetrad orthonormality condition then implies $g_{\\mu \\nu }=\\eta _{\\mu \\nu } + h_{\\mu \\nu }\\,.$ As in GR, we introduce the the trace-reversed potentials $\\bar{h}_{\\mu \\nu }=h_{\\mu \\nu }-\\frac{1}{2}\\,\\eta _{\\mu \\nu }h\\,, \\qquad h:=\\eta _{\\mu \\nu }h^{\\mu \\nu }\\,,$ where $\\bar{h}=-h$ and $\\psi _{\\mu \\nu }=\\frac{1}{2}\\,\\bar{h}_{\\mu \\nu }+\\frac{1}{2}\\,\\phi _{\\mu \\nu }-\\frac{1}{4}\\,\\eta _{\\mu \\nu }\\,\\bar{h}\\,.$ The gravitational potentials are now in suitable form to calculate the gravitational field quantities to first order in the perturbation.", "The linearized torsion tensor is $C_{\\mu \\nu \\sigma }=\\partial _\\mu \\psi _{\\sigma \\nu }-\\partial _\\nu \\psi _{\\sigma \\mu }$ and the torsion vector and pseudovector are given by $C_{\\mu }=\\frac{1}{4}\\, \\partial _\\mu \\bar{h} + \\frac{1}{2}\\, \\partial _\\nu (\\bar{h}^{\\nu }{}_\\mu +\\phi ^{\\nu }{}_\\mu )\\,, \\qquad \\check{C}^\\mu = \\frac{1}{6}\\, \\epsilon ^{\\mu \\nu \\rho \\sigma }\\,\\phi _{\\nu \\rho , \\sigma }\\,.$ Similarly, the auxiliary torsion tensor is $\\mathfrak {C}_{\\mu \\sigma \\nu }=-\\bar{h}_{\\nu [\\mu ,\\sigma ]}-\\eta _{\\nu [\\mu }\\bar{h}_{\\sigma ]\\rho ,}{}^\\rho +\\frac{1}{2}\\,\\phi _{\\mu \\sigma , \\nu }+\\eta _{\\nu [\\mu } \\phi _{\\sigma ] \\rho ,}{}^\\rho \\,,$ and the Einstein tensor can be written as $^{0}G_{\\mu \\nu }=\\partial _\\sigma \\mathfrak {C}_{\\mu }{}^{\\sigma }{}_{\\nu }=-\\frac{1}{2}\\,\\Box \\,\\bar{h}_{\\mu \\nu }+\\bar{h}^\\rho {}_{(\\mu ,\\nu )\\rho }-\\frac{1}{2}\\,\\eta _{\\mu \\nu }\\bar{h}^{\\rho \\sigma }{}_{,\\rho \\sigma }\\,,$ where $\\Box :=\\eta ^{\\alpha \\beta }\\partial _\\alpha \\partial _\\beta $ and $\\partial _{\\nu }\\, ^{0}G^{\\mu \\nu }=0$ , since the auxiliary torsion tensor is antisymmetric in its first two indices.", "Let us recall that in general the linearized form of modified GR field equations (REF ) and (REF ) are given by [10] $^{0}G_{\\mu \\nu }+\\frac{1}{2}\\, \\partial _\\sigma \\,(N_{\\mu }{}^{\\sigma }{}_{\\nu }+N_{\\nu }{}^{\\sigma }{}_{\\mu })= \\kappa \\, T_{\\mu \\nu }\\,$ and $\\partial _\\sigma \\,N_{\\mu }{}^{\\sigma }{}_{\\nu }= \\partial _\\sigma \\, N_{\\nu }{}^{\\sigma }{}_{\\mu }\\,,$ respectively.", "These imply, just as in GR, the energy–momentum conservation law for mass–energy, namely, $\\partial _{\\nu } T^{\\mu \\nu }=0$ .", "Writing Eq.", "(REF ) as $N_{\\mu }{}^{\\sigma }{}_{\\nu }= S(x) X_{\\mu }{}^{\\sigma }{}_{\\nu }(x)\\,,$ the modified GR field equations 
become $^{0}G_{\\mu \\nu }(x) + \\partial _\\sigma \\,[S(x)\\,X_{(\\mu }{}^{\\sigma }{}_{\\nu )}(x)] = \\kappa \\, T_{\\mu \\nu }(x)\\,$ and $\\partial _\\sigma \\,[S(x)\\, X_{[\\mu }{}^{\\sigma }{}_{\\nu ]}(x)] =0\\,.$ With $X_{\\mu \\sigma \\nu }$ given by Eq.", "(REF ) and $\\check{p} \\ne 0$ , we find that in the linear regime, $X_{(\\mu }{}^{\\sigma }{}_{\\nu )} = \\mathfrak {C}_{(\\mu }{}^{\\sigma }{}_{\\nu )}+ \\check{p}\\,\\big [\\check{C}_{(\\mu }\\delta ^\\sigma _{\\nu )}-\\check{C}^\\sigma \\eta _{\\mu \\nu }\\big ]\\,, \\qquad X_{[\\mu }{}^{\\sigma }{}_{\\nu ]}=\\mathfrak {C}_{[\\mu }{}^{\\sigma }{}_{\\nu ]}+ \\check{p}\\,\\check{C}_{[\\mu }\\delta ^\\sigma _{\\nu ]}\\,.$ The torsion pseudovector $\\check{C}^\\sigma $ is the dual of $C_{[\\mu \\nu \\rho ]}$ ; moreover, in our linear approximation scheme $C_{[\\mu \\nu \\rho ]}=-\\phi _{[\\mu \\nu , \\rho ]}$ and $\\check{C}^{\\sigma }{}_{,\\sigma }=0$ .", "Thus the part of the constitutive relation proportional to $\\check{p}$ is given exclusively by the derivatives of the antisymmetric tetrad potentials $\\phi _{\\mu \\nu }$ and vanishes for $\\phi _{\\mu \\nu }=0$ .", "Using $^{0}G_{\\mu \\nu }=\\partial _\\sigma \\mathfrak {C}_{(\\mu }{}^{\\sigma }{}_{\\nu )}$ and $\\partial _\\sigma \\mathfrak {C}_{[\\mu }{}^{\\sigma }{}_{\\nu ]} = 0$ , the field equations take the form $(1+S)\\, ^{0}G_{\\mu \\nu }(x) + S_{,\\sigma }\\,X_{(\\mu }{}^{\\sigma }{}_{\\nu )}(x)+ S \\check{p}\\,\\check{C}_{(\\mu , \\nu )} = \\kappa \\, T_{\\mu \\nu }(x)\\,$ and $S_{,\\sigma }\\,X_{[\\mu }{}^{\\sigma }{}_{\\nu ]}(x)+ S \\check{p}\\,\\check{C}_{[\\mu , \\nu ]} =0\\,.$ To have regular second-order field equations, we must have $1+S(x) \\ne 0$ .", "It is simple to see that source-free (i.e., $T_{\\mu \\nu } = 0$ and $\\Lambda = 0$ ) linearized field equations (REF ) and (REF ) are satisfied for $\\phi _{\\mu \\nu } = 0$ and $S = -1$ .", "Extending this result to the nonlinear case is not so simple; that is, the general source-free field equations are satisfied for $S = -1$ provided $\\check{C}_\\mu = 0$ and $Q_{\\mu \\nu } = 0$ .", "It is very possible that no such solution exists.", "The general field equations of modified TEGR for $S = -1$ are first-order partial differential equations that have a peculiar form.", "It seems that there is no reasonable spacetime that could satisfy these conditions.", "The gravitational potentials $\\psi _{\\mu \\nu }$ are gauge dependent.", "Under an infinitesimal coordinate transformation, $x^\\mu \\mapsto x^{\\prime \\mu }=x^\\mu -\\epsilon ^\\mu (x)$ , we find to linear order in $\\epsilon ^\\mu $ , $\\psi _{\\mu \\nu } \\mapsto \\psi ^{\\prime }_{\\mu \\nu }=\\psi _{\\mu \\nu }+\\epsilon _{\\mu ,\\nu }$ .", "Therefore, $\\bar{h}^{\\prime }_{\\mu \\nu }=\\bar{h}_{\\mu \\nu }+\\epsilon _{\\mu ,\\nu }+\\epsilon _{\\nu ,\\mu }-\\eta _{\\mu \\nu }\\epsilon ^\\alpha {}_{,\\alpha }\\,, \\qquad \\phi ^{\\prime }_{\\mu \\nu }=\\phi _{\\mu \\nu }+\\epsilon _{\\mu ,\\nu }-\\epsilon _{\\nu ,\\mu }$ and $\\bar{h}^{\\prime }=\\bar{h}-2\\epsilon ^\\alpha {}_{,\\alpha }$ .", "On the other hand, the gravitational field tensors $C_{\\mu \\nu \\rho }$ and $\\mathfrak {C}_{\\mu \\nu \\rho }$ as well as the gravitational field equations are gauge invariant, as expected.", "In general, we can impose the transverse gauge condition $\\bar{h}^{\\mu \\nu }{}_{, \\nu }=0\\,,$ which does not completely fix the gauge.", "We can still use four functions $\\epsilon ^\\mu $ such that $\\Box \\, \\epsilon ^\\mu = 0$ .", "With the transverse gauge condition, we have $^{0}G_{\\mu \\nu } = -\\frac{1}{2}\\, \\Box \\, \\bar{h}_{\\mu \\nu }\\,,$ which simplifies Eq.", "(REF )." ], [ " Newtonian Limit", "In this limit, we consider weak fields and slow motions; moreover, we can formally let $c \\rightarrow \\infty $ .", "As in Ref.", "[34], we assume the transverse gauge condition holds and $\\phi _{\\mu \\nu } = 0$ .", "Furthermore, we assume $S = S_0$ is constant.", "A detailed examination of the field equations (REF ) and (REF ) reveals that only the $ \\mu = \\nu = 0$ case is important with $\\bar{h}_{0 0} = -4 \\Phi /c^2$ and $T_{00} = \\rho \\, c^2$ , where $\\Phi $ is the Newtonian gravitational potential and $\\rho $ is the density of matter [10], [34].", "The background is the Newtonian space and time; hence, $S = S_0$ must be a constant.", "We find from Eq.", "(REF ) that $\\nabla ^2\\,\\Phi (x) = \\frac{1}{1+S_0}4 \\pi G \\rho \\,$ and the second field equation disappears.", "Let us assume that $(1+S_0) (1+Q_0) = 1\\,, \\qquad S_0\\in (-1, 0]\\,, \\qquad Q_0 \\in [0, \\infty )\\,.$ Then, we can write the modified Poisson Eq.", "(REF ) as $\\nabla ^2\\,\\Phi (x) = 4 \\pi G (\\rho + \\rho _D)\\,, \\qquad \\rho _D = Q_0 \\rho \\,,$ where $\\rho _D$ has the interpretation of the density of dark matter in this framework.", "For a point mass $m$ , say, the dark matter associated with the point mass $m$ is located at the point mass and has magnitude $Q_0 \\,m$ .", "In the Newtonian regime of nonlocal gravity (NLG), the density of dark matter is the convolution of the density of matter with a spherically symmetric reciprocal kernel.", "In the local limit of NLG in the Newtonian regime, the folding with the reciprocal kernel reduces to multiplication by a constant, namely, $\\rho _D = Q_0 \\rho $ .", "For a point mass, the static spherical cocoon of effective dark matter associated with a point mass in NLG [13], [14] settles on the point particle in the local limit of NLG.", "In this Newtonian framework, the force of gravity on a particle of mass $m$ is given by $\\mathbf {F} = - m\\,\\nabla \\Phi $ .", "Thus the gravitational force between two point masses has the Newtonian form except that it is augmented by a constant factor of $Q_0 > 0$ ."
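For orientation, the mapping between S_0 and the effective dark-matter parameter Q_0, and the resulting constant boost of the two-body force, can be evaluated in a few lines; the script below is an illustrative numerical aside added here, and the particular value of S_0 (as well as the use of SI units and a solar-mass point source) is arbitrary.

import numpy as np

G = 6.674e-11                      # SI units, purely for illustration
M = 1.989e30                       # a solar-mass point source (illustrative)
S0 = -0.2                          # an arbitrary choice in (-1, 0]
Q0 = 1.0 / (1.0 + S0) - 1.0        # from (1 + S0)(1 + Q0) = 1

r = np.logspace(9, 12, 4)          # radii in meters
Phi = -(1.0 + Q0) * G * M / r      # solution of the modified Poisson equation
v_circ = np.sqrt(-Phi)             # circular speed, boosted by sqrt(1 + Q0)

print(f"S0 = {S0:+.2f}  ->  Q0 = {Q0:.3f}")
print("force and v_circ^2 boost factor (1 + Q0):", 1.0 + Q0)
print("v_circ [km/s]:", np.round(v_circ / 1e3, 2))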
], [ " Free Gravitational Waves", "In nonlocal gravity, free gravitational waves satisfy the nonlocal gravitational wave equation [10] $\\mathchoice{\\hrule height.4pt\\hbox{\\vrule width.4ptheight6pt \\hspace{6.0pt}\\vrule width.4pt}\\hrule height.4pt}{}{}{}$  hij(x) + W(x-y)   hij,0(y) d4y = 0 , where $W$ is a certain kernel of NLG.", "This result follows from the source-free linearized field equations once the gauge conditions $\\bar{h}^{\\mu \\nu }{}_{, \\nu }=0$ , $\\bar{h}_{0\\mu } = 0$ and $\\phi _{\\mu \\nu } = 0$ are imposed.", "The nonlocal wave Eq.", "(REF ) is reminiscent of a damped oscillator whose velocity would be represented by $\\partial \\bar{h}_{ij}/\\partial t$ .", "The wave amplitude decays as the wave propagates due to fading memory in NLG.", "In the case of source-free field equations (REF ) and (REF ), we can simply impose the transverse gauge condition and $\\phi _{\\mu \\nu } = 0$ .", "To enforce the additional gauge conditions $\\bar{h}_{0\\mu } = 0$ , we must require that $S$ is only a function of time, i.e.", "$S = S(t)$ .", "With these assumptions, the field equations then become $(1+S)\\mathchoice{\\hrule height.4pt\\hbox{\\vrule width.4ptheight6pt \\hspace{6.0pt}\\vrule width.4pt}\\hrule height.4pt}{}{}{}$  h + dSdt [h,0 - h(0, )] = 0 , $\\frac{dS}{dt} \\bar{h}_{[\\mu }{}^{0}{}_{,\\nu ]} = 0\\,,$ which are consistent with $\\bar{h}_{0\\mu } = 0$ .", "Here, we have used $\\mathfrak {C}_{\\mu }{}^{\\sigma }{}_{\\nu } = -\\frac{1}{2} \\bar{h}_{\\mu \\nu ,}{}^\\sigma + \\frac{1}{2} \\bar{h}_{\\nu }{}^{\\sigma }{}_{, \\mu }\\,$ and $\\mathfrak {C}_{(\\mu }{}^{\\sigma }{}_{\\nu )} = -\\frac{1}{2} \\bar{h}_{\\mu \\nu ,}{}^\\sigma + \\frac{1}{2} \\bar{h}_{(\\nu }{}^{\\sigma }{}_{, \\mu )}\\,, \\qquad \\mathfrak {C}_{[\\mu }{}^{\\sigma }{}_{\\nu ]} = - \\frac{1}{2}\\bar{h}_{[\\mu }{}^{\\sigma }{}_{,\\nu ]}\\,.$ On the other hand, for $\\bar{h}_{ij}$ , we find $\\mathchoice{\\hrule height.4pt\\hbox{\\vrule width.4ptheight6pt \\hspace{6.0pt}\\vrule width.4pt}\\hrule height.4pt}{}{}{}$  hij - (t) hijt = 0 , where $\\bar{\\gamma }(t) = \\frac{1}{1+S}\\frac{dS}{dt} = \\frac{d}{dt}\\ln (1+S)\\,.$ The transverse gauge condition reduces to $\\bar{h}^{ij}{}_{, j}=0$ , which is consistent with the propagation Eq.", "(REF ).", "To solve Eq.", "(REF ), we assume that each component of the wave function $\\bar{h}_{ij}$ has the form $\\mathbb {H}(t) e^{i \\bar{\\mathbf {k}}\\cdot \\mathbf {x}}\\,,$ where $\\mathbb {H}$ satisfies a harmonic oscillator equation with time-dependent damping.", "That is, $\\frac{d^2\\mathbb {H}}{dt^2} + \\bar{k}^2 \\mathbb {H} + \\bar{\\gamma }(t) \\frac{d\\mathbb {H}}{dt} = 0\\,,$ where $\\bar{k} = |\\bar{\\mathbf {k}}|$ .", "For a positive constant $\\bar{\\gamma }$ , we have the standard damped harmonic oscillator.", "On the other hand, the solution will always be damped if $\\bar{\\gamma }(t) > 0$ , since the energy associated with the harmonic oscillator constantly decays, namely, $\\frac{d}{dt}\\left[\\frac{1}{2} \\left(\\frac{d\\mathbb {H}}{dt}\\right)^2 + \\frac{1}{2} \\bar{k}^2 \\mathbb {H}^2\\right] = - \\bar{\\gamma }(t) \\left(\\frac{d\\mathbb {H}}{dt}\\right)^2\\,.$ As in NLG, free gravitational waves in our modified TEGR theory are indeed damped provided $dS/dt > 0$ , since $1+S > 0$ .", "The WKB treatment of wave Eq.", "(REF ) is contained in Appendix A.", "Beyond linearized NLG, no exact nonlinear solution of nonlocal gravity is known at present; in fact, some of the difficulties have been discussed in [35].", "In the local limit of 
NLG, we have the prospect of finding exact solutions that explore strong field regimes involving cosmological models and black holes.", "Furthermore, parity-violating solutions may exist in which the torsion pseudovector $\\check{C}$ that appears in the constitutive relation (REF ) of modified TEGR may be nonzero.", "Modified theories that exhibit gravitational parity violation have been of current interest [36], [37].", "The rest of this paper is devoted to finding the simplest exact cosmological models of modified TEGR.", "The current benchmark model of cosmology is the flat FLRW solution; therefore, we focus on the way this model is modified in our approach.", "We begin with the simplest modified conformally flat spacetimes." ], [ "Modified TEGR: Conformally Flat Spacetimes", "We are interested in exact solutions of modified TEGR that have a conformally flat metric $ds^2= e^{2U} \\, \\eta _{\\mu \\nu }\\, dx^\\mu \\,dx^\\nu \\,,$ where $x^\\mu = (\\eta , x^i)$ and $U(x)$ is a scalar.", "We use conformal time $x^0 = \\eta $ in this section to agree with standard usage in cosmology and choose preferred observers that are at rest in space and their adapted tetrad axes point along the coordinate directions $e^\\mu {}_{\\hat{\\alpha }}=e^{-U}\\,\\delta ^\\mu _{\\alpha }\\,, \\qquad e_\\mu {}^{\\hat{\\alpha }}=e^{U}\\,\\delta _\\mu ^{\\alpha }\\,.$ We seek an exact solution of the field equations (REF ) and (REF ) in this case.", "The torsion tensor can be simply computed and is given by $C_{\\alpha \\beta }{}^{\\mu }=U_{\\alpha }\\, \\delta ^\\mu _\\beta - U_{\\beta }\\, \\delta ^\\mu _\\alpha \\,, \\quad C_{\\alpha \\beta \\gamma }=e^{2U}\\,(U_{\\alpha }\\, \\eta _{\\beta \\gamma }- U_{\\beta }\\, \\eta _{\\gamma \\alpha })\\,,$ where $U_{\\mu }:= \\partial _{\\mu }U$ in our convention.", "Similarly, the torsion vector, the contorsion tensor and the auxiliary torsion tensor are $C_{\\alpha }=-3\\,U_{\\alpha }$ , $K_{\\alpha \\beta \\gamma }=C_{\\beta \\gamma \\alpha }$ and $ \\mathfrak {C}_{\\alpha \\beta \\gamma }=-2\\,C_{\\alpha \\beta \\gamma }$ , respectively.", "Moreover, in this case, $\\check{C}_\\alpha =0$ ; therefore, $X_{\\mu \\nu \\rho } = \\mathfrak {C}_{\\mu \\nu \\rho }\\,.$ The Einstein tensor for conformally flat spacetimes is given by [38] $^0G_{\\mu \\nu }=-2\\,(U_{\\mu \\nu }-U_\\mu \\,U_\\nu )+\\eta _{\\mu \\nu }\\eta ^{\\alpha \\beta }(U_\\alpha \\,U_\\beta +2\\,U_{\\alpha \\beta })\\,,$ where $U_{\\mu \\nu } = \\partial _\\mu \\partial _\\nu U$ .", "We assume that the energy-momentum tensor of matter is given by a perfect fluid of energy density $\\rho $ and pressure $P$ such that $T_{\\mu \\nu }=\\rho \\,u_\\mu \\,u_\\nu +P\\,(g_{\\mu \\nu }+u_\\mu \\,u_\\nu )\\,,$ where $u^\\mu $ is the 4-velocity vector of the perfect fluid.", "As in the standard cosmological models, we assume the fundamental particles are comoving with the preferred observers and are thus spatially at rest, namely, $u^\\mu = e^{\\mu }{}_{\\hat{0}} = e^{-U}\\delta ^\\mu _0\\,, \\qquad u_\\mu = e^U\\eta _{\\mu 0}\\,$ and $\\rho $ and $P$ are functions of time $\\eta $ .", "Let us return to the field equations (REF ) and (REF ) and note that the constitutive relation (REF ) in this case reduces to $N_{\\mu \\rho \\nu } = S \\, \\mathfrak {C}_{\\mu \\rho \\nu } = -2S\\, C_{\\mu \\rho \\nu } = -2Se^{2U}(U_\\mu \\eta _{\\rho \\nu }-U_\\rho \\eta _{\\mu \\nu })\\,.$ Therefore, $N_{\\mu }{}^{\\rho }{}_{\\nu } = - 2S (U_\\mu \\delta ^\\rho _\\nu -\\eta ^{\\rho \\alpha }U_\\alpha \\eta _{\\mu \\nu })\\,, 
\\qquad N_{\\mu }{}^{\\rho }{}_{\\rho } = - 6S U_\\mu \\,.$ Using these relations in $Q_{\\mu \\nu }$ and $\\mathcal {N}_{\\mu \\nu }$ , $Q_{\\mu \\nu }=U_\\mu \\,N_{\\nu }{}^{\\rho }{}_\\rho -U_\\rho \\,N_{\\nu }{}^{\\rho }{}_\\mu -\\frac{1}{2}\\,g_{\\mu \\nu }\\,U_\\alpha \\,N^{\\alpha \\beta }{}_{\\beta }\\,, \\quad \\mathcal {N}_{\\mu \\nu }=e^{-U} \\,\\eta _{\\nu \\alpha }\\,\\frac{\\partial }{\\partial x^\\beta }\\,\\Big (e^{3\\,U}\\,N^{\\alpha \\beta }{}_{\\mu }\\Big )\\,,$ we find $Q_{\\mu \\nu }$ is symmetric $Q_{\\mu \\nu }= - 4 S\\,U_\\mu U_\\nu + S \\,\\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }U_\\alpha U_\\beta \\,,$ and $\\mathcal {N}_{\\mu \\nu }= - 2 (S_\\mu U_\\nu -\\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }S_\\alpha U_\\beta ) - 2 S\\,(U_\\mu U_\\nu -\\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }U_\\alpha U_\\beta ) - 2S(U_{\\mu \\nu } - \\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }U_{\\alpha \\beta })\\,.$ The second field equation (REF ) implies $ \\mathcal {N}_{[\\mu \\nu ]} = 0$ , which means $S_\\mu U_\\nu = S_\\nu U_\\mu \\,.$ Hence, $dS \\wedge dU = 0$ .", "This equation has the natural solution that $dS$ is proportional to $dU$ , which means $S = S(U)\\,.$ The symmetric field equation (REF ) is simply Einstein's field equation together with a new source $Q_{\\mu \\nu } - \\mathcal {N}_{\\mu \\nu } = - 2 \\left(S -\\frac{dS}{dU}\\right)\\,U_\\mu U_\\nu - \\left(S +2\\frac{dS}{dU}\\right)\\,\\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }U_\\alpha U_\\beta + 2S(U_{\\mu \\nu } - \\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }U_{\\alpha \\beta })\\,.$ Let us write Eq.", "(REF ) in the form $^{0}G_{\\mu \\nu } - Q_{\\mu \\nu } + \\mathcal {N}_{\\mu \\nu } = \\kappa T_{\\mu \\nu } - \\Lambda g_{\\mu \\nu }\\,.$ The left side of this equation can be written as $-2(1+S)(U_{\\mu \\nu } - \\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }U_{\\alpha \\beta }) + 2 \\left(1+S -\\frac{dS}{dU}\\right)\\,U_\\mu U_\\nu + \\left(1+S +2\\frac{dS}{dU}\\right)\\,\\eta _{\\mu \\nu }\\,\\eta ^{\\alpha \\beta }U_\\alpha U_\\beta \\,.$ To proceed, we note that in the context of standard flat cosmology, the natural choice for $U(x)$ would be to assume $U = U(\\eta )$ .", "Therefore, $S = S(\\eta )$ as well, in agreement with the time dependent nature of the background.", "Moreover, we introduce the scale factor $a$ , $e^U = a(\\eta )\\,, \\qquad a^{\\prime }:= \\frac{da}{d\\eta }\\,, \\qquad U_\\mu = \\frac{a^{\\prime }}{a}\\delta ^0_\\mu \\,.$ With these assumptions, Eq.", "(REF ) has nonzero contributions for indices $(\\mu ,\\nu ) = (0 ,0)$ and $(\\mu , \\nu ) = (i , j)$ .", "We find $\\frac{3}{a^2}(1+S) \\left( \\frac{a^{\\prime }}{a}\\right)^2 = \\Lambda + 8\\pi G \\rho \\,$ and $2(1+S) \\left( \\frac{a^{\\prime }}{a}\\right)^{\\prime } + (1+S) \\left( \\frac{a^{\\prime }}{a}\\right)^2 = a^2(\\Lambda - 8\\pi G P) - 2 \\frac{dS}{dU} \\left( \\frac{a^{\\prime }}{a}\\right)^2\\,,$ respectively.", "When $S = 0$ , we have the standard GR results, as expected.", "It is useful to express these equations in the more traditional form of standard flat cosmology." 
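The index identities quoted in this section (the torsion vector equal to minus three times the gradient of U, and the auxiliary torsion tensor equal to minus twice the torsion tensor) are easy to check by brute force. The sympy sketch below is our own verification, added for the reader's convenience; it follows the definitions of the torsion, contorsion and auxiliary torsion tensors given earlier in the paper.

import sympy as sp

xs = sp.symbols('x0:4')
U = sp.Function('U')(*xs)                    # generic conformal factor U(x)
eta = sp.diag(-1, 1, 1, 1)
g = sp.exp(2 * U) * eta
ginv = g.inv()
dU = [sp.diff(U, x) for x in xs]

# C_{ab}^{c} = U_a delta^c_b - U_b delta^c_a ; lower the last index with g
Cud = [[[dU[a] * sp.KroneckerDelta(c, b) - dU[b] * sp.KroneckerDelta(c, a)
         for c in range(4)] for b in range(4)] for a in range(4)]
C = [[[sp.simplify(sum(g[c, m] * Cud[a][b][m] for m in range(4)))
       for c in range(4)] for b in range(4)] for a in range(4)]

# torsion vector C_mu = C^a_{mu a} = g^{ab} C_{b mu a}; expect C_mu = -3 U_mu
Cvec = [sp.simplify(sum(ginv[a, b] * C[b][m][a]
                        for a in range(4) for b in range(4))) for m in range(4)]
print([sp.simplify(Cvec[m] + 3 * dU[m]) for m in range(4)])        # -> [0, 0, 0, 0]

# contorsion and auxiliary torsion; expect frakC_{abc} = -2 C_{abc}
K = [[[(C[m][r][n] + C[n][r][m] - C[m][n][r]) / 2
       for r in range(4)] for n in range(4)] for m in range(4)]
frakC = [[[sp.simplify(Cvec[a] * g[b, c] - Cvec[b] * g[a, c] + K[c][a][b])
           for c in range(4)] for b in range(4)] for a in range(4)]
print(all(sp.simplify(frakC[a][b][c] + 2 * C[a][b][c]) == 0
          for a in range(4) for b in range(4) for c in range(4)))  # -> True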
], [ "Modified Standard Flat Cosmology", "In the traditional flat model, we assume a metric of the form $ds^2 = -dt^2 + a^2(t)\\,\\delta _{ij}\\,dx^i\\,dx^j\\,$ in terms of cosmic time $t$ .", "With $dt = a d\\eta $ , the metric can then be written as $ds^2 = a^2 \\eta _{\\mu \\nu }\\, dx^\\mu \\,dx^\\nu $ , just as in the previous section.", "Let us write $\\dot{a} := \\frac{da}{dt} = \\frac{1}{a} a^{\\prime }\\,.$ The traditional form of the dynamical equations of our model becomes $3(1+S) \\left( \\frac{\\dot{a}}{a}\\right)^2 = \\Lambda + 8\\pi G \\rho \\,$ and $2(1+S) \\frac{\\ddot{a}}{a} + (1+S) \\left( \\frac{\\dot{a}}{a}\\right)^2 = \\Lambda - 8\\pi G P - 2 \\frac{dS}{dt} \\frac{\\dot{a}}{a}\\,,$ respectively.", "Next, differentiating Eq.", "(REF ) with respect to cosmic time $t$ and using Eq.", "(REF ), we find $\\frac{d\\rho }{dt} = - 3(\\rho + P) \\frac{\\dot{a}}{a} - \\frac{3}{8 \\pi G} \\frac{dS}{dt}\\left( \\frac{\\dot{a}}{a}\\right)^2\\,.$ To interpret these results physically, let us first consider Eq.", "(REF ).", "We assume that $S$ monotonically increases with time $t$ , as before.", "Writing $(1+S)(1+Q) = 1\\,, \\qquad Q = -\\frac{S}{1+S}\\,, \\qquad \\frac{dQ}{dt} = -\\frac{1}{(1+S)^2}\\frac{dS}{dt}\\,,$ we note that $Q$ monotonically decreases as the universe expands.", "Thus Eqs.", "(REF ) and (REF ) take the standard form—except for the extra term proportional to $dS/dt$ in the latter—if $\\Lambda $ , $\\rho $ and $P$ are replaced by $(1+Q)\\Lambda = \\Lambda + Q\\Lambda $ , $(1+Q)\\rho = \\rho + \\rho _D$ and $(1+Q)P$ .", "There is dark energy $\\Lambda + Q\\,\\Lambda $ as well as dark matter $Q\\,\\rho $ in this theory, but both decrease as the universe expands, in agreement with the concept of fading memory in nonlocal gravity [10].", "Next, we consider Eq.", "(REF ), where the important additional term due to $dS/dt$ acts like added pressure.", "For $S= {\\rm constant}$ , the basic thermodynamic relation for adiabatic processes, namely, $d\\mathbb {U} = - P d\\mathbb {V}$ is satisfied.", "That is, imagine an amount of energy $\\mathbb {U}$ of the background perfect fluid contained within a local sphere of radius $\\ell $ that expands with the universe; then, with $\\mathbb {U} := \\rho \\mathbb {V}\\,, \\qquad \\mathbb {V} = \\frac{4\\pi }{3} \\ell ^3\\,,\\qquad \\frac{\\dot{\\ell }}{\\ell } = \\frac{\\dot{a} }{a}\\,,$ we have $\\frac{d\\rho }{dt} = - 3(\\rho + P) \\frac{\\dot{a}}{a}\\,.$ On the other hand, the variation in $S$ is related to variation in entropy $\\mathbb {S}$ .", "More generally, the basic thermodynamic relation for a nonadiabatic process is $d\\mathbb {U} = \\mathbb {T} d\\mathbb {S} - P d\\mathbb {V}\\,,$ where $\\mathbb {T}$ is the temperature and $\\mathbb {S}$ is the entropy.", "Writing the change in heat as $\\delta \\mathbb {Q} := \\mathbb {T} d\\mathbb {S}$ , we find in this case $\\delta \\mathbb {Q} = - \\frac{1}{2G} (dS) a \\dot{a}^2\\,.$ Usually, heat is generated by friction.", "As the universe expands and $S$ increases, the corresponding entropy decreases.", "For a discussion of entropy variation in the context of cosmology, see [39].", "Within the context of standard cosmology, we expect that entropy increases as the universe expands.", "However, we deal here with comoving space and it can be shown that the local comoving entropy of matter and radiation remains constant in the standard model of cosmology and one can write the local law of energy conservation (or energy continuity equation) without heat flow.", "On the other hand, 
we find in our modified cosmology that there is more deceleration accompanied with negative entropy production.", "Normally, this should mean more order and less chaos.", "Finally, with the Hubble ($H$ ) and deceleration ($q$ ) parameters defined by $H := \\frac{\\dot{a}}{a}\\,, \\qquad qH^2 := - \\frac{\\ddot{a}}{a}\\,,$ we have $3(1+S) H^2 = \\Lambda + 8\\pi G \\rho \\,$ and $3(1+S) q H^2 = -\\Lambda + 4\\pi G (\\rho + 3 P) + 3 \\frac{dS}{dt}H\\,.$ To illustrate these results, we introduce a toy model that is dominated by dark matter and has a nonzero cosmological constant." ], [ " Toy Model", "Let us introduce the cosmological redshift $z$ via $z := \\frac{a_0}{a(t)} -1\\,, \\qquad \\frac{dz}{dt} = - (1+z) H\\,,$ where $a_0 = a(t_0)$ is the scale factor at the present epoch $t = t_0$ .", "We can set $a_0 = 1$ with no loss in generality.", "In terms of the cosmological redshift, Eq.", "(REF ) can be written as $\\frac{d\\rho }{dz} = 3 \\rho \\frac{1+w}{1+z} - \\frac{3}{8 \\pi G}H^2 \\frac{dS}{dz}\\,,$ where we have defined $w$ via $P = w \\rho $ .", "Let us recall that we must have $dS/dz < 0$ with $1+S > 0$ .", "We must solve Eq.", "(REF ) together with Eq.", "(REF ), namely, $3(1+S) H^2 = \\Lambda + 8\\pi G \\rho $ .", "To simplify matters, we let $w = 0$ , so that pressure vanishes.", "Let us define the quantity $\\mathbb {L}(z) := \\frac{H^2[1+S(z)]^2}{(1+z)^3}\\,;$ then, we find that the equations for the toy model imply $\\frac{d\\mathbb {L}(z)}{dz} := -\\Lambda \\,\\frac{[1+S(z)]}{(1+z)^4}\\,.$ If we set $\\Lambda = 0$ , we have $\\frac{H^2}{H_0^2}= \\left(\\frac{1+S_0}{1+S}\\right)^2\\,(1+z)^3\\,.$ Here, $H_0 = H(t_0)$ and $S_0 = S(t_0)$ at the present epoch.", "Moreover, the deceleration parameter in this case is given by $q = \\frac{1}{2} -\\frac{1+z}{1+S}\\,\\frac{dS}{dz}\\,.$ The model has extra deceleration due to $S \\ne 0$ .", "With a suitable function $S = S(z) > -1$ with $dS/dz < 0$ , the model parameters can be worked out.", "For $S = 0$ , our toy model reduces to the Einstein–de Sitter model with $a = (t/t_0)^{2/3}$ , $H = 2/(3t)$ and $q = 1/2$ ." 
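Readers who wish to experiment with the toy model can evaluate the closed-form expressions above directly. The short script below is an illustrative addition (the particular susceptibility S(z), chosen only to satisfy 1+S>0 and dS/dz<0, is arbitrary); it prints H/H_0 and the deceleration parameter q at a few redshifts for the case of vanishing cosmological constant.

import numpy as np

def S_of_z(z, S0=0.5, zs=2.0):
    # an arbitrary illustrative susceptibility: S > -1 everywhere, dS/dz < 0
    return S0 / (1.0 + z / zs)

def toy_model(z, S0=0.5, zs=2.0):
    S = S_of_z(z, S0, zs)
    dSdz = -S0 / (zs * (1.0 + z / zs) ** 2)
    # H^2/H0^2 = [(1 + S0)/(1 + S)]^2 (1 + z)^3 for Lambda = 0
    H_over_H0 = ((1.0 + S_of_z(0.0, S0, zs)) / (1.0 + S)) * (1.0 + z) ** 1.5
    # q = 1/2 - (1 + z)/(1 + S) dS/dz ; dS/dz < 0 gives extra deceleration
    q = 0.5 - (1.0 + z) / (1.0 + S) * dSdz
    return H_over_H0, q

for z in (0.0, 0.5, 1.0, 2.0):
    H_over_H0, q = toy_model(z)
    print(f"z = {z:3.1f}   H/H0 = {H_over_H0:6.3f}   q = {q:5.3f}")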
], [ "Modified FLRW Cosmology", "In our modified TEGR framework, it may be possible to extend the flat model of the previous section to the standard cosmological models.", "We begin with $ds^2 = -dt^2 + \\frac{a^2(t)}{K^2(r)}\\,\\delta _{ij}\\,dx^i\\,dx^j\\,,$ where $a(t)$ is the scale factor, $K(r) = 1 + \\frac{1}{4}\\, k\\, r^2\\,, \\qquad r^2 = \\delta _{ij}\\, x^i x^j\\,,$ and $k = 1, -1$ , or 0 for the closed, open, or flat FLRW model, respectively.", "The fundamental observers are at rest in space and carry adapted tetrads that point along the coordinate directions.", "That is, $e^\\mu {}_{\\hat{0}}= \\delta ^\\mu _{0}\\,, \\qquad e^\\mu {}_{\\hat{i}}= \\frac{K(r)}{a(t)}\\delta ^\\mu _{i}\\,$ and $e_\\mu {}^{\\hat{0}}= \\delta _\\mu ^{0}\\,, \\qquad e_\\mu {}^{\\hat{i}}= \\frac{a(t)}{K(r)}\\delta _\\mu ^{i}\\,.$ After some work, details of which we relegate to Appendix B, we find that the constraint equations imply $k\\dot{S} = 0$ .", "Therefore, the equations of modified TEGR are consistent with $S = S(t)$ in this case provided $k = 0$ and we thus recover the flat model of the previous section.", "On the other hand, it is possible that $k \\ne 0$ , in which case $S$ must be constant.", "In the latter case, the main equations of FLRW model become $\\frac{3}{c^2}(1+S) \\left( \\frac{\\dot{a}}{a}\\right)^2 + 3(1+S)\\,\\frac{k}{a^2} = \\Lambda + \\frac{8\\pi G}{c^2} \\rho \\,$ and $\\frac{2}{c^2}(1+S) \\frac{\\ddot{a}}{a} +\\frac{1}{c^2} (1+S) \\left( \\frac{\\dot{a}}{a}\\right)^2 + (1+S)\\,\\frac{k}{a^2} = \\Lambda - \\frac{8\\pi G}{c^4} P\\,.$ For $S = 0$ , we get the standard equations of FLRW model.", "In a spatially homogeneous and isotropic background that varies with time, we expect that the susceptibility $S$ would be time dependent as well, in close analogy with the electrodynamics of media.", "It is therefore interesting that in this case only the flat model of standard cosmology is allowed within the framework of the modified theory of gravitation under consideration here." 
], [ "Discussion", "We have investigated the local limit of nonlocal gravity (NLG).", "This modified TEGR can also provide hints regarding NLG.", "We have worked out the modified version of the standard FLRW cosmological models.", "In particular, we have studied the modification of the standard flat cosmological model.", "Within the $\\Lambda $ CDM framework, the standard benchmark cosmological model has had significant success in explaining the vast majority of observations in connection with the cosmic microwave background (CMB) [41] and large scale structure formation [42].", "However, in recent years some discrepancies have been observed [43].", "These discrepancies could be hints for physics beyond the standard benchmark model of cosmology.", "The most recent discrepancy that is now under frequent discussion is related to the measurement of $H_0$ , the relative expansion rate of the Universe at the present time [44].", "There is a 4-5 $\\sigma $ discrepancy between the measurement of the Hubble constant using local studies of the nearby supernovas, for instance, and the measurement of the recession rate using the CMB on the basis of the $\\Lambda $ CDM model [45].", "This inconsistency has opened up a new arena in cosmological studies.", "Many alternative models have been suggested to reconcile this tension, from the early dark energy models to late-time modified gravity theories [45].", "The constitutive relation of the NLG permits a different cosmic expansion history as compared to the prediction of the standard $\\Lambda $ CDM benchmark model.", "Consequently, the modified TEGR cosmological models can give rise to a rich phenomenology to be confronted with observation data.", "It should be mentioned that besides the $H_0$ tension, there is a less significant discrepancy known as $\\sigma _8$ tension.", "The problem is related to cosmological observations that suggest a smaller amplitude in the matter perturbation in the late time universe as compared to the prediction of $\\Lambda $ CDM on the basis of CMB data.", "The late-time observations that show this tension are mainly in the domain of weak lensing results [46], [47], [48], and the growth rate measurements [43].", "The fading memory effect of modified TEGR can be a hint that the model can address these tensions as well.", "In future work, we plan to investigate the cosmological implications of this model at both the background and perturbation levels." ], [ "ACKNOWLEDGMENTS", "S. B. has been partially supported by the Abdus Salam International Centre for Theoretical Physics (ICTP) under the junior associateship scheme.", "B. M. is grateful to Yuri Obukhov for valuable discussions." ], [ "Eikonal Approximation for Wave Eq. 
(", "The general connection between the wave equation and the motion of classical particles in spacetime is established via the eikonal (“WKB\") approximation method.", "In the present case, we are interested in the high-frequency solution of $(1+S)\\mathchoice{\\hrule height.4pt\\hbox{\\vrule width.4ptheight6pt \\hspace{6.0pt}\\vrule width.4pt}\\hrule height.4pt}{}{}{}$  hkl - dSdt hklt = 0 , where $S$ is a real function of time $t$ .", "For the solution of the wave equation, we assume an eikonal expansion of the form $\\bar{h}^{kl} = e^{i \\, \\mathcal {S}(x)/} \\sum _{j = 0}^{\\infty } ^j \\mathbb {F}^{kl}_j(x)\\,,$ where the action $ \\mathcal {S}(x)$ is a real scalar function of the spacetime coordinates and $= c/\\omega $ approaches zero.", "In the asymptotic series in Eq.", "(REF ), $\\mathbb {F}^{kl}_j$ , $j = 0, 1, 2, ...$ , are slowly varying functions such that $\\mathbb {F}^{kl}_0 \\ne 0$ by assumption.", "The wave function $\\bar{h}^{kl}$ represents a real gravitational wave and the wave equation (REF ) is linear; therefore, we can simply deal with $\\bar{h}^{kl}$ as a complex quantity with the proviso that its real part has physical significance.", "The substitution of ansatz (REF ) in the second-order wave equation (REF ) results in an expansion in increasing powers of $$ beginning with $^{-2}$ .", "The equation of motion in the WKB approximation is obtained by setting the coefficients of $^{-2}$ and $^{-1}$ terms equal to zero.", "We note that the wavelength of the radiation is not a scalar quantity and the traditional eikonal approach is not generally covariant.", "It turns out, however, that the covariant eikonal approximation is physically meaningful only in the actual limit of zero wavelength [40].", "Substituting ansatz (REF ) in the wave equation, we find that the vanishing of the coefficient of $^{-2}$ term results in $\\eta ^{\\mu \\nu } \\mathcal {S}_{,\\mu } \\, \\mathcal {S}_{,\\nu } = 0\\,,$ since $\\mathbb {F}^{kl}_0$ does not vanish by assumption.", "Thus the radiation follows a straight null path in the eikonal limit and for the solution of Eq.", "(REF ) we can write $\\mathcal {S} = \\mathcal {S}_0 ( t - \\mathbf {n} \\cdot \\mathbf {x} )\\,,$ where $\\mathbf {n}$ is a constant unit vector and $\\mathcal {S}_0$ is a nonzero constant.", "Next, the vanishing of the coefficient of $^{-1}$ term results in $2 (1+S) \\frac{\\partial \\mathbb {F}^{kl}_0}{\\partial t} + 2 (1+S)n^i \\frac{\\partial \\mathbb {F}^{kl}_0}{\\partial x^i} + \\frac{dS}{dt} \\, \\mathbb {F}^{kl}_{0} = 0\\,,$ once we take Eq.", "(REF ) into account.", "Moreover, the wave function satisfies the transverse gauge condition $\\bar{h}^{kl}{}_{,l}=0$ , which in this case simply means $\\mathbb {F}^{kl}_{0}\\, n_l = 0$ .", "Let $\\mathbf {e}_1$ and $\\mathbf {e}_2$ , $\\mathbf {e}_1 \\cdot \\mathbf {e}_2 = 0$ , be constant unit vectors that are orthogonal to the direction of propagation of the null ray $\\mathbf {n}$ ; moreover, let $\\mathbf {e}$ be a linear combination of $\\mathbf {e}_1$ and $\\mathbf {e}_2$ .", "Then, the solution of Eq.", "(REF ) can be written as $\\mathbb {F}^{kl}_{0}(t, \\mathbf {x}) = [f_{11}\\, e_1^k e_1^l + f_{12}\\, (e_1^k e_2^l + e_2^k e_1^l) + f_{22}\\, e_2^k e_2^l]\\, \\mathcal {F}(\\mathbf {e} \\cdot \\mathbf {x})\\,[1+S(t)]^{-1/2}\\,,$ where $f_{11}$ , $f_{12}$ and $f_{22}$ are constants and $\\mathcal {F}$ is a smooth function that varies slowly in the plane transverse to the direction of motion of the ray.", "This result is consistent with the temporal decay of 
the wave amplitude along the ray when $S$ monotonically increases with time." ], [ "Modified Standard Cosmological Models", "Starting with the tetrads given in Eqs.", "(REF ) and (REF ), one can show that $C_{\\mu \\nu 0} = 0$ and the only nonzero components of the torsion tensor are given by $C_{0ij} = - C_{i0j} = \\frac{a \\dot{a}}{K^2} \\delta _{ij}\\,, \\qquad C_{ijk} = - C_{jik} = \\frac{1}{2}\\,\\frac{ka^2}{K^3} (\\delta _{ik} x^j - \\delta _{jk} x^i)\\,.$ It follows that the torsion vector can be expressed as $C_{0} = -3\\,\\frac{\\dot{a}}{a}\\,, \\qquad C_{i} = \\frac{k x^i}{K}\\,.$ Moreover, for the contorsion tensor we have $K_{0\\mu \\nu } = 0$ and the only nonzero components can be written as $K_{i0j} = - K_{ij0} = C_{0ij} = \\frac{a \\dot{a}}{K^2} \\delta _{ij} \\,, \\qquad K_{ijk} = - K_{ikj} = C_{jki} = \\frac{1}{2}\\frac{ka^2}{K^3} (\\delta _{ij} x^k - \\delta _{ik} x^j)\\,.$ Similarly, for the auxiliary torsion tensor we have $\\mathfrak {C}_{ij0} = 0$ , while the only nonzero components are $\\mathfrak {C}_{0i0} = - \\mathfrak {C}_{i00} = C_i\\,, \\qquad \\mathfrak {C}_{0ij} = - \\mathfrak {C}_{i0j} = -2\\, C_{0ij}\\,, \\qquad \\mathfrak {C}_{ijk} = - \\mathfrak {C}_{jik} = - C_{ijk}\\,.$ Finally, the torsion pseudovector vanishes due to the symmetries of the torsion tensor in this particular case.", "With $\\check{C} = 0$ , the constitutive relation (REF ) reduces to $N_{\\mu \\nu \\rho } = S \\mathfrak {C}_{\\mu \\nu \\rho }$ ; hence, $N_{ij0} = 0$ and the only nonzero components of $N_{\\mu \\nu \\rho }$ are given by $N_{0i0} = - N_{i00} = S C_i = \\frac{kSx^i}{K}\\,, \\qquad N_{0ij} = - N_{i0j} = -2\\, S C_{0ij} = - 2\\, \\frac{Sa \\dot{a}}{K^2} \\delta _{ij}\\,$ and $N_{ijk} = - N_{jik} = - S C_{ijk} = - \\frac{1}{2}\\,\\frac{kSa^2}{K^3} (\\delta _{ik} x^j - \\delta _{jk} x^i)\\,.$ Next, we must employ Eqs.", "(REF ) and (REF ) to calculate $Q_{\\mu \\nu }$ and $\\mathcal {N}_{\\mu \\nu }$ , respectively.", "Then, the modified Einstein's field equations (REF ) can be expressed as $^{0}G_{\\mu \\nu } + \\Lambda g_{\\mu \\nu } - \\kappa T_{\\mu \\nu } = \\mathcal {R}_{\\mu \\nu }\\,,$ where $\\mathcal {R}_{\\mu \\nu } := Q_{\\mu \\nu } - \\mathcal {N}_{\\mu \\nu }\\,.$ To compute $Q_{\\mu \\nu }$ in this case, we first note that $C_{\\mu \\nu \\rho }N^{\\mu \\nu \\rho } = 12 S \\left(\\frac{\\dot{a}}{a}\\right)^2 - S\\, \\frac{k^2 r^2}{a^2}\\,.$ We find $Q_{00} = -3\\, S \\left(\\frac{\\dot{a}}{a}\\right)^2 - \\frac{1}{4}\\,S\\, \\frac{k^2 r^2}{a^2}\\,,$ $Q_{0i} = S \\,\\frac{\\dot{a}}{a} \\frac{kx^i}{K}\\,, \\qquad Q_{i0} = 2\\,S \\,\\frac{\\dot{a}}{a} \\frac{kx^i}{K}\\,,$ and $Q_{ij} = - S \\left(\\frac{\\dot{a}}{K}\\right)^2 \\delta _{ij} - \\frac{1}{4}\\, \\frac{k^2 S x^ix^j}{K^2}\\,.$ Similarly, for $\\mathcal {N}_{\\mu \\nu }$ , we have $\\mathcal {N}_{00} = \\frac{kS}{a^2}(3K -kr^2)\\,, \\qquad \\mathcal {N}_{0i} = Q_{0i} + \\frac{kx^i}{K} \\frac{dS}{dt}\\,, \\qquad \\mathcal {N}_{i0} = Q_{i0}\\,$ and $\\mathcal {N}_{ij} = - 2\\,\\frac{1}{K^2}\\frac{\\partial (Sa\\dot{a})}{\\partial t} \\delta _{ij} - \\frac{Sk}{K^2}\\,\\delta _{ij} - \\frac{1}{4}\\, \\frac{k^2 S x^ix^j}{K^2}\\,.$ Finally, we can compute the components of $\\mathcal {R}_{\\mu \\nu }$ .", "The results are $\\mathcal {R}_{0i} = -(x^i/K) k\\dot{S}$ , $\\mathcal {R}_{i0} = 0$ and $\\mathcal {R}_{00} = -3\\, S \\left(\\frac{\\dot{a}}{a}\\right)^2 -3\\,\\frac{Sk}{a^2}\\,,\\quad \\mathcal {R}_{ij} = -S \\left(\\frac{\\dot{a}}{a}\\right)^2\\delta _{ij} + 2\\,\\frac{1}{K^2}\\frac{\\partial 
(Sa\\dot{a})}{\\partial t} \\delta _{ij} + \\frac{Sk}{K^2}\\,\\delta _{ij}\\,.$ Substituting these results in Eq.", "(REF ), we find that $k \\dot{S} = 0$ .", "Therefore, either $k = 0$ , as in the flat model investigated in Section V or $S = {\\rm constant}$ .", "In the latter case, the field equations reduce to Eqs.", "(REF ) and (REF ) of Section VI." ] ]
2212.05536
[ [ "Modeling halo and central galaxy orientations on the SO(3) manifold with\n score-based generative models" ], [ "Abstract Upcoming cosmological weak lensing surveys are expected to constrain cosmological parameters with unprecedented precision.", "In preparation for these surveys, large simulations with realistic galaxy populations are required to test and validate analysis pipelines.", "However, these simulations are computationally very costly -- and at the volumes and resolutions demanded by upcoming cosmological surveys, they are computationally infeasible.", "Here, we propose a Deep Generative Modeling approach to address the specific problem of emulating realistic 3D galaxy orientations in synthetic catalogs.", "For this purpose, we develop a novel Score-Based Diffusion Model specifically for the SO(3) manifold.", "The model accurately learns and reproduces correlated orientations of galaxies and dark matter halos that are statistically consistent with those of a reference high-resolution hydrodynamical simulation." ], [ "Introduction", "Future wide-field astronomical imaging surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Timehttps://www.lsst.org/, Roman Space Telescopehttps://roman.gsfc.nasa.gov/ High Latitude Survey and Euclidhttps://www.euclid-ec.org/ will provide precise constraints on cosmological parameters by imaging billions of galaxies.", "Deriving physical understanding from these data will require increasingly costly large-volume simulations with high resolution to test and validate analysis pipelines [3], [4], [12] and to constrain cosmology via Simulation-Based Inference [9].", "In this regard generative machine learning approaches represent an interesting avenue as they could serve as fast and robust emulators to greatly accelerate parts of the simulation pipelines.", "In particular, they could be used to populate realistic galaxies in large volume dark matter only simulations.", "Most machine learning methods in this line of research have been concerned with modeling scalar properties of galaxies, however in this work we are particularly interested in modeling the 3D orientations of galaxies and their host dark matter halos in simulations.", "These intrinsic orientations can indeed contaminate measurements of weak gravitational lensing in upcoming surveys and constitute a major source of systematic errors if not accounted for [10].", "Diffusion models are flexible in their domains that the datal ives, we want to jointly model various properties that live on various different spaces/manifolds Currently, score-based diffusion models represent the state-of-the-art in generative tasks such as: image, audio and molecules generation.", "[6].", "Modeling distributions on the manifold of 3D rotations is however a non trivial task, and to address this problem we develop a new type of score-based diffusion model specifically for the SO(3) manifold, by extending the Euclidean framework introduced in [22].", "We chose diffusion models due to their flexibility to model data that live on various different spaces (e.g.", "scalars and rotation matrices) compared normalizing flows and due to their stability compared to Generative Adversarial Networks.", "Based on these developments, we build a conditional generative model on SO(3) which allows us to sample from the posterior distribution of 3D orientations of galaxies and dark matter halo given information about their surrounding gravitational tidal field." 
], [ "Related Work", "Machine learning approaches have been adopted in astrophysics and cosmology in various contexts, including emulation methods, inference and forward modeling [5].", "In particular, deep generative models have been implemented in the works of [8] for generative modeling of correlated galaxy properties, such as shapes and orientations, with graph-based generative adversarial networks.", "Our work takes the next step to build generative models for various galaxy properties associated with galaxy and halo orientations (which are described by a non-Euclidean manifold) with score-based denoising diffusion models." ], [ "Score-Based Generative Model on SO(3)", "Here we briefly outline our novel approach for modeling distributions on SO(3), heavily inspired by the diffusion framework developed in [22].", "The idea behind diffusion models is to introduce a noising process that perturbs the data distribution until it reaches a nearly pure noise distribution.", "Consider the following Stochastic Differential Equation (SDE) on the SO(3) manifold: $\\mathop {}\\!\\mathrm {d}X = \\mathbf {f}(X, t) \\mathop {}\\!\\mathrm {d}t + g(t) \\mathop {}\\!\\mathrm {d}W, $ where $W$ is a Brownian process on SO(3), $\\mathbf {f}(\\cdot \\ , t): \\text{SO(3)} \\rightarrow T_{X}$ SO(3) is a drift term, and $g(\\cdot ): \\mathbb {R} \\rightarrow \\mathbb {R}$ is a diffusion term.", "Given samples $X(0) \\sim p_\\text{data}$ from an empirical data distribution $p_\\text{data}$ at time $t=0$ , the marginal distribution of samples $X(t)$ evolved under this SDE at a subsequent time $t > 0$ will be denoted $p_t$ , and will converge for large $t=T$ towards a given predetermined distribution $p_T$ typically chosen to be easy to sample from.", "On SO(3), a natural choice for $p_T$ is $\\mathcal {U}_{SO(3)}$ , the uniform distribution on SO(3).", "The key realization of [22] is that under mild regularity conditions this noising process of the data process can be reversed, in particular through the following so-called probability flow Ordinary Differential Equation (ODE): $\\mathrm {d} X = [\\mathbf {f}(X, t) - g(t)^2 \\nabla \\log p_t(X)] \\mathrm {d} t .$ [2] recently extended this result to compact Riemannian manifolds, which include in particular SO(3).", "This deterministic process is entirely defined as soon as the score function $\\nabla \\log p_t(X) \\in T_X \\text{SO(3)}$ is known, and running this ODE backward in time from samples $X(T) \\sim p_T$ down to $t=0$ will yield samples $X(0) \\sim p_0 = p_\\text{data}$ .", "Training such a generative model will therefore boil down to estimating this score function with a neural network.", "While these results are direct analogs of Euclidean diffusion models [22], implementing similar models on SO(3) brings practical difficulties: Unlike in the Euclidean case where the Gaussian is a closed-form solution of heat diffusion (a key element in Euclidean SGMs), there is no closed-form solution on general Riemannian manifolds.", "Our contribution is to propose solutions to these issues in order to implement efficient score-based diffusion models on SO(3).", "On SO(3), although the exact heat kernel is only available as an infinite series [19], it can be robustly approximated in practice either by truncating this series or by using a closed form expression [15], depending on the width of the kernel.", "It is used to define the so-called Isotropic Gaussian Distribution on SO(3), $\\mathcal {IG}_{\\text{SO(3)}}(R, \\epsilon )$ [19], [15], [13], where $R \\in 
\\text{SO(3)}$ is a mean rotation matrix, and $\\epsilon $ a scale parameter.", "$\\mathcal {IG}_{\\text{SO(3)}}$ enjoys tractable likelihood evaluation and sampling, and most importantly is closed under convolution.", "We can now define a noise kernel $p_\\epsilon (X | X^\\prime )=\\mathcal {IG}_{\\text{SO(3)}}(X ; X^\\prime , \\epsilon )$ which can be used to convolve the data distribution such that $ p_\\epsilon (X) = \\int _{SO(3)} p_\\text{data}(X^\\prime ) p_\\epsilon (X | X^\\prime ) \\mathop {}\\!\\mathrm {d}X^\\prime $ .", "For simplicity, we further make the following specific choice for the diffusion SDE eqn:forwardsde: $\\mathbf {f}(X, t) = 0$ , $g(t)=\\sqrt{\\frac{\\mathop {}\\!\\mathrm {d}\\epsilon (t)}{\\mathop {}\\!\\mathrm {d}t}}$ where $\\epsilon (t)$ is a given noise schedule (e.g.", "$\\epsilon (t)=t$ ).", "We then recover that convolving the data distribution with an $\\mathcal {IG}_{\\text{SO(3)}}$ of scale $\\epsilon (t)$ corresponds to the marginal distribution of the SDE at time $t$ : $p_{\\epsilon (t)} = p_t$ .", "This noise kernel allows us to use the usual Denoising Score-Matching loss on SO(3) at no extra complexity compared to the Euclidean case.", "To learn the score function we introduce a neural score estimator $s_\\theta (X, \\epsilon ) : \\text{SO(3)}\\times \\mathbb {R}^{+ \\star } \\rightarrow \\mathbb {R}^3$ , which we train under the following loss: $\\mathcal {L}_{DSM} = \\mathbb {E}_{p_\\text{data}(X)} \\mathbb {E}_{\\epsilon \\sim \\mathcal {N}(0, \\sigma _\\epsilon ^2)} \\mathbb {E}_{p_{|\\epsilon |}(\\tilde{X} | X )} \\left[ |\\epsilon | \\ \\parallel s_\\theta (\\tilde{X}, \\epsilon ) - \\nabla \\log p_{|\\epsilon |}( \\tilde{X} | X) \\parallel _2^2 \\right]$ where, at training time, we sample random noise scales $\\epsilon \\sim \\mathcal {N}(0, \\sigma _\\epsilon ^2)$ similarly to [21].", "The minimum of this loss will be achieved for $s_\\theta (X, \\epsilon ) = \\nabla \\log p_{\\epsilon }(X)$ .", "Once the score function is learned through $\\mathcal {L}_{DSM}$ , we can plug it into eqn:probabilityflowODE, yielding the following sampling procedure given our specific choices for the SDE terms: $X_T \\sim \\mathcal {U}_\\text{SO(3)} \\qquad ; \\qquad \\mathop {}\\!\\mathrm {d}X_t = -\\frac{1}{2} \\frac{\\mathop {}\\!\\mathrm {d}\\epsilon (t)}{\\mathop {}\\!\\mathrm {d}t} s_\\theta (X_t, \\epsilon (t)) \\mathop {}\\!\\mathrm {d}t \\;.", "$ We solve this ODE down to $t=0$ to yield samples from the learned distribution.", "We illustrate this process in tbl:toydensity.", "Note that this is a manifold-valued ODE, which we solve using Runge-Kutta-Munthe-Kaas (RK-MK) algorithms for ODEs on Lie Groups (and we direct the interested reader to [7] for a review).", "Finally, we note that this generative model can trivially be made conditional, by conditioning $s_\\theta (X, t, y)$ on external information $y$ during training and sampling.", "Figure: Learned synthetic density on SO(3).", "On the left, starting from uniform noise on the sphere at $t=T$, solving the ODE eqn:samplingode transports noise samples back into the target density at $t=0$." 
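To make the sampling procedure above concrete, the following is a minimal sketch (not the authors' released code) of integrating the probability-flow ODE on SO(3) with a first-order Lie-group (Lie-Euler) update under the linear schedule eps(t) = t. The score network is replaced by a hypothetical stand-in, toy_score, whereas the paper uses a trained neural estimator s_theta and higher-order RK-MK solvers; identifying the R^3 score output with a tangent direction through the exponential map is one common convention and is an assumption of this sketch.

```python
# A minimal sketch (not the authors' code) of the sampling ODE above:
# start from X_T ~ U_SO(3) and integrate
#   dX_t = -1/2 * eps'(t) * s_theta(X_t, eps(t)) dt
# backward in time with a first-order Lie-group (Lie-Euler) step, eps(t) = t.
import numpy as np

def hat(v):
    """Map a vector in R^3 to the corresponding skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(v):
    """Rodrigues' formula: exponential map from R^3 ~= so(3) to SO(3)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def sample_uniform_so3(rng):
    """Uniform rotation matrix from a random unit quaternion (w, x, y, z)."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w), 2 * (x * z + y * w)],
        [2 * (x * y + z * w), 1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w), 2 * (y * z + x * w), 1 - 2 * (x * x + y * y)],
    ])

def sample(score_fn, n_steps=500, T=1.0, seed=0):
    """Integrate the probability-flow ODE from t=T down to t=0 (eps(t) = t)."""
    rng = np.random.default_rng(seed)
    X = sample_uniform_so3(rng)                # X_T ~ U_SO(3)
    dt = T / n_steps
    for k in range(n_steps):
        t = T - k * dt
        v = score_fn(X, t)                     # tangent vector in R^3 ~= so(3)
        # backward-in-time step: delta = -1/2 * eps'(t) * v * (-dt) = +dt/2 * v
        X = X @ exp_so3(0.5 * dt * v)
    return X

def toy_score(X, eps):
    """Crude stand-in for s_theta: nudges samples toward the identity rotation."""
    w = np.array([X[2, 1] - X[1, 2], X[0, 2] - X[2, 0], X[1, 0] - X[0, 1]])
    return -0.5 * w / max(eps, 1e-3)

R = sample(toy_score)
print(np.round(R, 3), np.linalg.det(R))
```

In practice the stand-in score would be replaced by a network trained with the denoising score-matching loss above, using perturbations drawn from the isotropic Gaussian noise kernel on SO(3).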
], [ "Application: Emulating Galaxy Intrinsic Alignments in the Illustris-TNG simulations", "Weak gravitational lensing occurs when light rays from distant galaxies get deflected due to the presence of massive objects along their trajectory [1].", "By measuring the coherent shape distortions of ensembles of galaxies, we can study the lensing effect caused by the distribution of matter in the Universe, and thereby learn about dark energy [11].", "One important systematic to model when measuring lensing is the intrinsic alignments (IA) of galaxy shapes; IAs arise due to galaxies tending to point coherently towards other galaxies due to gravitational tidal effects, which mimics a coherent lensing effect [24].", "For cosmological measurements, IA must be taken into account, which means that realistic models for it must be included in synthetic galaxy catalogs." ], [ "Cosmological Simulation", "We will explore the efficacy of our model using the hydrodynamical TNG100-1 run at $z=0$ from the IllustrisTNG simulation suite [18], [20], [23], [16], [14], [17].", "We employ a stellar mass threshold of $ \\log _{10}(M_*/M_\\odot ) \\ge 9 $ for all galaxies, using the stellar mass from their SUBFIND catalog, and select the central galaxies from each group for our analysis.", "The corresponding host dark matter halos were used to study halo alignments.", "Figure: The two-point ED correlation function, ω(r)\\omega (r), which captures the correlation between position and the axis direction,of all galaxy (right) and dark matter halo (left) axes with galaxy positions: the solid lines show the measured values from the TNG simulation, while the dashed lines show the generated values from the SGM.", "The top panels show measured ω(r)\\omega (r) values, and the bottom panels show the ratio ω(r)\\omega (r) from the SGM to that measured in TNG.", "The SGM curve was shifted by 5 per cent to the left for visual clarity.", "For the ellipsoid, we denote the major, intermediate, and minor axes as aa, bb, and cc, respectively." 
], [ "Results ", "Throughout the section we refer to the sample generated from the diffusion model as the SGM sample, and the sample from TNG100 as the TNG sample.", "The inputs to the model are the TNG100 gravitational tidal field (obtained from the 3D tidal tensor which carries some information about the alignment at large scales), and the outputs are the 3D orientations of halos and central galaxies: the model generates the orientations of halos and galaxies conditioned on the tidal field.", "We test our model using the 3D orientation-position correlation function, $\\omega (r)$ , often referred to as the ED correlation.", "It captures the correlation between overdensity (galaxy positions) and orientations of the selected halo/galaxy axes (modeling the halos/galaxies as ellipsoids and selecting either the major, intermediate, or minor axis).", "Positive $\\omega (r)$ values indicate that the selected halo/galaxy axis exhibits a coherent alignment towards the positions of nearby galaxies.", "The ED correlation functions for all three axes of the halos and galaxies are presented in Fig.", "REF .", "Here, the errorbars were calculated using the jackknife estimator.", "In general, the qualitative trend of ED as a function of 3D separation is captured by the SGM for both DM halos and central galaxies.", "For small scales (below $r \\le 1 $ Mpc/h), there is a general deviation from the measured values, which may be explained by the highly complex non-linear processes that might not have been captured by the neural network.", "Quantitatively, for the major axes of both halos and central galaxies, the generated samples agree well with the simulation.", "For the intermediate axes of DM halos and central galaxies, the signal is very weak, though the SGM managed to captured the correlation with statistical consistency.", "However, for the minor axes, the SGM model slightly underestimates the correlation and overestimates it for central galaxies at small scales.", "Overall, the SGM model can describe synthetic densities with high statistical correlations (as illustrated in tbl:toydensity), and those with low statistical correlations, as shown in the case of galaxy/halo alignments.", "Regarding the limitations, the model did not capture the correlations at small scales to a good quantitative agreement, for which adding a graph-based layer may help [8]." 
], [ "Conclusions", "We have introduced a novel score-based generative model for the SO(3) manifold, and applied it in an astrophysical context to the modeling of the 3D orientation of galaxies and dark matter halos in the TNG100 hydrodynamical simulations.", "Predicting galaxy properties given a dark matter halo, or vice versa, is known as the galaxy-halo connection.", "Deep generative models show promise in tackling this high-dimensional multivariate problem.", "We have demonstrated that a smaller subset of the problem of modeling halo/galaxy orientation given the tidal field can be addressed with score-based denoising diffusion models.", "The diffusion model generates orientations that have statistical correlations consistent with those of the cosmological simulation, in addition to reproducing high-correlation synthetic densities on SO(3).", "In the future, we would like to extend this work by implementing a graph layer in order to fully capture the correlation at non-linear (small) scales and extend the number of halo and galaxy properties predicted by the model.", "Applying our model to a large volume cosmological simulation, to test the ability to model these alignments, will be highly useful for future weak lensing surveys." ], [ " Broader Impact", "The proposed methodology of deep manifold learning on SO(3) will be practical in many disciplines outside of astrophysics/cosmology.", "For instance, in robotics the problem of estimating poses of objects is an intensely studied problem and our method provides a way of tackling this problem from a generative perspective with diffusion models.", "Additionally, in biochemistry it is often hard to find the optimal angle for molecular docking; with our proposed method biochemists could efficiently find the optimal angle that minimizes the potential energy.", "We do not believe that our work poses any negative societal impacts or ethics-related issues.", "This work was supported in part by a grant from the Simons Foundation (Simons Investigator in Astrophysics, Award ID 620789) and by the NSF AI Institute: Physics of the Future, NSF PHY- 2020295." ] ]
2212.05592
[ [ "Random Feature Models for Learning Interacting Dynamical Systems" ], [ "Abstract Particle dynamics and multi-agent systems provide accurate dynamical models for studying and forecasting the behavior of complex interacting systems.", "They often take the form of a high-dimensional system of differential equations parameterized by an interaction kernel that models the underlying attractive or repulsive forces between agents.", "We consider the problem of constructing a data-based approximation of the interacting forces directly from noisy observations of the paths of the agents in time.", "The learned interaction kernels are then used to predict the agents behavior over a longer time interval.", "The approximation developed in this work uses a randomized feature algorithm and a sparse randomized feature approach.", "Sparsity-promoting regression provides a mechanism for pruning the randomly generated features which was observed to be beneficial when one has limited data, in particular, leading to less overfitting than other approaches.", "In addition, imposing sparsity reduces the kernel evaluation cost which significantly lowers the simulation cost for forecasting the multi-agent systems.", "Our method is applied to various examples, including first-order systems with homogeneous and heterogeneous interactions, second order homogeneous systems, and a new sheep swarming system." ], [ "Introduction", "Agent-based dynamics typically employ systems of ordinary differential equations (ODEs) to model a wide range of complex behaviors such as particle dynamics, multi-body celestial mechanics, flocking dynamics, species interactions, opinion dynamics, and many other physical or biological systems.", "Fundamentally, agent-based dynamical systems model the collective behavior between multiple objects of interest by using a kernel that represent their interactions and thus provide equations that govern their motion.", "They are able to reproduce a variety of collective phenomena with a minimal set of parameters which make them especially accessible for modelers [10], [40], [62], [33], [15].", "Such models of collective behavior are numerically slow to solve and difficult to analyze which often leads to issues in trying to extract parameters, i.e.", "the kernels.", "This difficulty is typically circumvented by passing to continuum limits in order to aid computations [63] and understand facets of the dynamics of the discrete system such as emergent patterning [25], [5], [61] or properties of the asymptotic dynamics [8], [3], [4], [39], [59], [18], [2].", "These interaction kernels provide information on the behavior between agents as well as the agents' self-driven forces.", "However, in practice these kernels may not be fully known and thus one needs to be able to approximate them from observation of the agents' states.", "In this work, we develop a sparse random feature model that uses a randomized Gaussian basis to approximate the interaction kernels from the data and use it to forecast future states and behavior.", "Learning dynamical systems from data is an important modeling problem in which one approximates the underlying equations of motion governing the evolution of some unknown system [6], [55].", "This is essentially a model selection and parameter estimation problem, where the equations of interest are those that define a differential equation.", "One of the main challenges is to construct methods that do not overfit on the data, can handle noise and outliers, and can approximate a wide 
enough class of dynamics.", "While there have been many works on this topic in the last decade, we highlight some of the related work along the direction of interpretable models, i.e.", "those that provide not only an approximation but a meaningful system of equations.", "For a detailed overview of the approximation and inference of physical equations from data, see [17] and the citations within.", "The SINDy algorithm [7] is a popular method for extracting governing equations from data using a sparsity-promoting method.", "The key idea is to learn a sparse representation for the dynamical system using a dictionary of candidate functions (e.g.", "polynomials or trigonometric functions) applied to the data.", "This approach has led to many related algorithms and applications, see [9], [66], [48], [24], [21], [57], [36], [37] and the references within.", "The sparse optimization framework for learning governing equations was proposed in [50] along with an approach to discovering partial differential equations using a dictionary of derivatives.", "The sparse optimization and compressive sensing approaches for learning governing equations include $\\ell ^1$ based approaches for high-dimensional ODE [54], [53], [52] and noisy robust $\\ell ^0$ approaches using the weak form [51].", "Another approach is the operator inference technique [41] which can be used to approximate high-dimensional dynamics by learning the ODEs that govern the coefficients of the dynamics projected onto a linear subspace of small dimension.", "This results in a model for the governing equation in the reduced space that does not require computing terms in the full dimension.", "While the methods of [7], [50], [41], [34] can be applied to a wide range of problems, they are not optimal for the problems considered here.", "A nonparametric approach for learning multi-agent systems was developed in [32] based on the assumption that the interaction kernel $g:\\mathbb {R}_+\\rightarrow \\mathbb {R}$ is a function of the pairwise distances between agents.", "Their method builds a piecewise polynomial approximation of the interaction kernel by partitioning the set of possible distances (i.e.", "the input space for $g$ ) and (locally) estimating the kernel using the least squares method (see Section ).", "The training problem is similar in setup to other learning approaches for dynamical systems [41], [7], [60], [50] which use trajectory based regression; however, since the ODE takes a special form, the training problem becomes a one dimensional regression problem and thus piecewise estimators do not incur the curse of dimensionality.", "The theoretical results in [32] focused on the first-order homogeneous case (one type of agent), but recent works generalize and extend the theory and numerical model to heterogeneous systems (i.e.", "different types of agents and different interaction kernels) [30], stochastic systems [31], and second-order systems [38].", "In [67], the accuracy and consistency of the approach from [32] were numerically verified on a range of synthetic examples, including predicting emergent behaviors at large timescales.", "A theoretical analysis of the identifiability of interaction systems was presented in [27] which relies on a coercivity condition that is related to the conditions in [32], [30], [31].", "In this work, we will construct a random feature model (RFM) for learning the interaction kernel and thus the governing system for multi-agent dynamics.", "RFMs are a class of nonparametric methods used in machine 
learning [43], [44], [45], which are related to kernel approximations and shallow neural networks.", "The standard RFM is a shallow two-layer fully connected neural network whose (sole) hidden layer is randomized and then fixed [43], [44], [45] while its output layer is trained with some optimization routine.", "From the perspective of dictionary learning, an RFM uses a set of candidate functions that are parameterized by a random weight vector, as opposed to the standard polynomial or trigonometric basis whose constructions use a fixed set of functions.", "Theoretical results on RFMs establish various error estimates, including uniform error estimates [44], generalization bounds related to an $L^\\infty $ like space [45], and generalization bounds for reproducing kernel Hilbert spaces [47], [28], [35], [1].", "A detailed comparison between the approaches, training problems, and results can be found in [29]." ], [ "Learning Multi-Agent Systems", "Consider an $n$ -agent interacting system and define the $n$ time-dependent state variables $\\lbrace \\mathbf {x}_i(t)\\rbrace _{i=1}^n$ , $\\mathbf {x}_i(t) \\in \\mathbb {R}^d$ for all $1\\le i \\le n$ , whose dynamics are governed by the following system of ordinary differential equations (ODEs): $ {\\left\\lbrace \\begin{array}{ll}&\\frac{d}{dt} {\\mathbf {x}}_i(t) = \\frac{1}{n} \\sum \\limits _{i^{\\prime }=1}^n g(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\mathbf {r}_{i^{\\prime },i}(t)\\\\&\\mathbf {r}_{i^{\\prime },i}(t) = \\mathbf {x}_{i^{\\prime }}(t)-\\mathbf {x}_i(t)\\end{array}\\right.", "}$ where the interaction kernel is $g:\\mathbb {R}_+\\rightarrow \\mathbb {R}$ .", "The velocity of each agent $\\mathbf {x}_i(t)$ defined by (REF ) depends on the $g$ -weighted average of the displacements to all other agents.", "Note that the velocity only depends on the relative distances between agents and not on their overall locations in space or time.", "We will assume that $g$ is continuously differentiable and has either compact support or sufficient decay, which guarantees the well-posedness of (REF ).", "In addition, we assume that $g$ is a positive definite function, which leads to a specific representation (see Section REF ) that is used to derive the proposed model, although these assumptions will be relaxed for applications.", "For simplicity, denote $\\mathbf {x}(t):= (\\mathbf {x}_1(t),\\dots ,\\mathbf {x}_n(t))$ and $\\mathbf {r}_{:,i}(t) = (\\mathbf {x}_1(t)-\\mathbf {x}_i(t),\\dots ,\\mathbf {x}_n(t)-\\mathbf {x}_i(t))$ .", "Equation (REF ) can be derived from the gradient flow of the potential energy $\\mathcal {U}(\\mathbf {x}(t))=\\sum _{i\\ne i^{\\prime }} G(\\mathbf {r}_{i^{\\prime },i}(t))$ where $g(r) := \\frac{G^{\\prime }(r)}{r}$ .", "Such gradient flow systems have proven effective as a minimal framework to model a variety of chemical and living systems, such as micelle formation [58] or the aggregation and swarming of living things, even without the addition of stochastic noise.", "The pairwise interaction kernel can be derived from physical constraints on the potential energy in non-living systems, but this type of first-principles approach becomes unreasonable for deriving the governing equations of biological systems." 
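As a quick illustration of the dynamics defined by (REF ), the sketch below integrates the first-order system for a user-supplied kernel g with SciPy's stiff BDF solver. The bounded attraction kernel used here is purely an illustrative choice, not one of the kernels studied later, and kernels that blow up at r = 0 (such as Lennard-Jones) would need the diagonal terms handled explicitly.

```python
# A brief sketch (not the authors' code) of simulating the first-order system
#   dx_i/dt = (1/n) sum_{i'} g(||x_{i'} - x_i||) (x_{i'} - x_i)
# for a user-supplied interaction kernel g, using SciPy's stiff BDF solver.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x_flat, g, n, d):
    """Right-hand side of the interacting system for the flattened state."""
    x = x_flat.reshape(n, d)
    diff = x[None, :, :] - x[:, None, :]        # diff[i, j] = x_j - x_i
    r = np.linalg.norm(diff, axis=-1)           # pairwise distances (0 on diagonal)
    # the diagonal contributes nothing because diff[i, i] = 0 and g is bounded here;
    # kernels that diverge at r = 0 (e.g. Lennard-Jones) need the diagonal masked
    return (g(r)[:, :, None] * diff).sum(axis=1).ravel() / n

g = lambda r: 1.0 / (1.0 + r**2)                # bounded attraction (illustrative only)
n, d = 20, 2
x0 = np.random.default_rng(0).uniform(-1.0, 1.0, size=(n, d))
sol = solve_ivp(rhs, (0.0, 5.0), x0.ravel(), args=(g, n, d),
                method="BDF", rtol=1e-5, atol=1e-6)
print(sol.y.shape)                              # (n*d, number of accepted time steps)
```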
], [ "New Random Features", "Motivated by the characterization of radial positive definite functions on $\\mathbb {R}^d$ from [56] and the earlier work on spherical random feature models [42], we propose approximating the interacting kernel $g$ in (REF ) using a random radial feature space defined by Gaussians centered at zero.", "[[56]] A continuous function $g:\\mathbb {R}_+\\rightarrow \\mathbb {R}$ is radial and positive definite on $\\mathbb {R}^d$ for all $d$ if and only if it is of the form $g(r) = \\int _0^\\infty e^{-r^2\\omega ^2}\\;d\\nu (\\omega )$ where $\\nu $ is a finite non-negative Borel measure on $\\mathbb {R}_+$ .", "We consider a weighted integral representation, namely, we assume that $g(r) = \\int _0^\\infty \\alpha (\\omega ) e^{-r^2\\omega ^2}\\;d\\nu (\\omega )$ , then (REF ) becomes $& \\frac{d}{dt} {\\mathbf {x}}_i(t) = \\frac{1}{n} \\sum _{i^{\\prime }=1}^n\\limits g(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\mathbf {r}_{i^{\\prime },i}(t) \\\\&= \\frac{1}{n} \\left(\\sum _{i^{\\prime }=1}^n \\int _0^\\infty \\alpha (\\omega ) e^{-\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2^2\\, \\omega ^2}\\;d\\nu (\\omega )\\right)\\mathbf {r}_{i^{\\prime },i}(t)\\\\&= \\int _0^\\infty \\left(\\frac{\\alpha (\\omega )}{n} \\sum \\limits _{i^{\\prime }=1}^n e^{- \\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2^2\\, \\omega ^2}\\, \\mathbf {r}_{i^{\\prime },i}(t) \\right) \\;d\\nu (\\omega ) \\\\&= \\int _0^\\infty \\phi ( \\mathbf {r}_{1,i}(t), \\ldots , \\mathbf {r}_{n,i}(t), \\omega ) \\;d\\nu (\\omega )\\\\&=: F( \\mathbf {r}_{1,i}(t), \\ldots , \\mathbf {r}_{n,i}(t)),$ for all $i \\in [n]:=\\lbrace 1, \\ldots , n\\rbrace $ .", "In (REF ), the integrand $\\phi $ is parameterized by the scalar $\\omega \\in \\mathbb {R}_+$ with respect to the measure $\\nu $ .", "To approximate (REF ), we build a set of $N$ random features by sampling $\\omega $ from a user defined probability measure $\\theta $ , i.e.", "$\\omega _k \\sim \\theta (\\omega )$ for $k \\in [N]$ .", "The function $F$ does not explicitly depend on the agents' locations or time and instead is a function of the radial distances.", "Thus for simplicity consider $F( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})$ , i.e.", "a function of $n$ inputs, then the random feature approximation for this problem becomes $F_N( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n}) &= \\frac{1}{N} \\sum \\limits _{k=1}^N c_k\\, \\phi _k( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})$ where $\\mathbf {c}=(c_1, \\ldots , c_N)$ are the coefficients of the expansion of $F_N$ with respect to the proposed radial feature space $\\left\\lbrace \\phi _k\\, \\bigg |\\, \\phi _k( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})= \\frac{\\alpha (\\omega _k)}{n} \\sum \\limits _{i=1}^n e^{- \\Vert \\mathbf {r}_i\\Vert _2^2\\, \\omega _k^2}\\, \\mathbf {r}_{i}\\right\\rbrace .$ The weight $\\alpha (\\omega )$ is chosen to normalize the basis functions $\\phi _k$ , i.e.", "we set $\\alpha (\\omega )=\\left(\\int _0^\\infty \\frac{1}{n} \\sum \\limits _{i=1}^n e^{- \\Vert \\mathbf {\\mathbf {r}} \\Vert _2^2\\, \\omega ^2}\\, r_{j} \\, \\;dr_{1} \\ldots \\;dr_{n} \\right)^{-1}=\\frac{2^n\\omega ^{n+1}}{\\pi ^{\\frac{n-1}{2}}}.$ The motivation for rescaling the terms is to avoid the ill-conditioning which occurs for larger $\\omega _k$ , that is, the Gaussians become nearly zero which leads to poor generalization.", "We observed that rescaling by an increasing function of $\\omega $ , e.g.", "$\\omega ^p$ for $p\\ge 1$ was sufficient to 
obtain meaningful results; however, for theoretical consistency we use the normalization factor (REF ).", "Although the random feature representation for the interaction kernel $g$ is similar to [42], the approximation to $F$ differs since we are considering a vectorized system of $n$ -agents and a different functional form.", "In Figure REF , we compare the formulation (REF ) with random Fourier features (RFF) [43], [44], [45].", "The global and oscillatory nature of RFF does not match the typical behavior of the interaction kernels, i.e.", "the specific growth and decay properties either near the origin or in far-field.", "This leads to a smooth but oscillatory approximation of the kernel that causes a large accumulation of errors when forecasting the states using the learned dynamical system.", "In the zoomed-in plot (for region $r\\in [40,60]$ ), we see that the RFF produces a non-monotone and “noisy” fit.", "In this example, we see that our approach outperforms the RFF by matching the structural assumptions.", "Figure: Radial Features versus Fourier Features: The plot includes the true interaction kernel (the black curve) and the learned interaction kernels using the radial features defined by () (the red curve) and the random Fourier features (blue curve).", "The purple area displays the empirical distribution of the radial distances estimated on the trajectories." ], [ "Least Squares Learning Problem", "The training problem is to approximate $F$ in (REF ) by $F_N$ from (REF ) given a set of discrete trajectory paths.", "That is, given $J$ observation timestamps $\\lbrace t_j \\rbrace _{j\\in [J]}$ with $t_j \\in [0,T]$ and $t_{j}<t_{j+1}$ for all $j \\in [J]$ and a total of $L$ initial conditions (IC) $\\mathbf {x}^\\ell (0) = (\\mathbf {x}_1^\\ell (0),\\dots ,\\mathbf {x}_n^\\ell (0))$ , we observe $L$ trajectories indexed by $\\ell \\in [L]$ and stored as $ \\mathbf {X}^{\\ell }=[\\mathbf {x}^\\ell (t_j)]_{j \\in [J], i \\in [n]}\\in \\mathbb {R}^{dnJ}$ .", "The entire training set is the concatenation of the $L$ trajectories $\\mathbf {X}=[\\mathbf {X}^{\\ell }]_{\\ell \\in [L]} \\in \\mathbb {R}^{n_\\text{tot}}$ , where $n_\\text{tot} = dnJL$ , i.e.", "the product between the dimension of each agent, the number of agents, the number of timestamps, and the total number of trajectories.", "The IC $\\mathbf {x}^\\ell (0)$ are drawn independently from a prescribed probability measure $\\mathbf {\\mu }_0$ on $\\mathbb {R}^{dn}$ .", "The velocity can be estimated by a finite difference method and concatenated to form the velocity dataset $\\mathbf {V}=[\\mathbf {V}^{\\ell }_{i,j}]_{\\ell \\in [L]} \\in \\mathbb {R}^{n_\\text{tot}}$ , where $\\mathbf {V}^{\\ell }$ is an approximation to $\\frac{d}{dt} \\mathbf {x}_{i}^\\ell (t_j)$ .", "We used the central difference formula in all examples unless otherwise stated.", "To train the system, we want to minimize the risk with respect to the unknown non-negative Borel measure $\\nu $ : $\\mathcal {L}(\\nu ) := \\frac{1}{nLT}\\sum \\limits _{i \\in [n], \\ell \\in [L]} \\int _0^T {\\frac{d}{dt} \\mathbf {x}_{i}^\\ell (t) - F( \\mathbf {r}^\\ell _{:,i}(t); \\nu )}_2^2 \\text{dt},$ which measures the error between the velocity and the RHS of the ODE (REF ) where $F( \\cdot \\, ; \\nu )$ is used to denote the dependence of $F$ on $\\nu $ .", "This is referred to as a trajectory based regression problem.", "The empirical risk is given by $ \\mathcal {L}_N(\\mathbf {c}) := \\frac{1}{nJL}\\sum \\limits _{i \\in [n], j \\in [J], \\ell \\in [L]} 
\\left|\\mathbf {V}_{i,j}^\\ell - F_N( \\mathbf {r}^{\\ell }_{:,i}(t_j); \\mathbf {c}) \\right|^2,$ which measures the error between the estimated (or observed) velocity and the random feature model $F_N( \\cdot \\, ; \\mathbf {c})$ from (REF ) (which depends on the vector $\\mathbf {c}$ ).", "The sum in (REF ) is evaluated on a finite set of timestamps.", "To simplify the expression for $F_N$ , let $\\mathbf {A} \\in \\mathbb {R}^{n_\\text{tot}\\times N}$ be the random radial feature matrix whose elements are defined by $a_{\\tilde{i},k} = \\phi _k(\\textbf {r}^\\ell _{:,i}(t_j))$ where $\\tilde{i}$ represents the triple $(i,j,\\ell )$ after reindexing to a single index.", "Then (REF ) can be written as $ \\mathcal {L}_N(\\mathbf {c}) = \\frac{1}{nJL} \\Vert \\mathbf {V} - \\mathbf {A} \\mathbf {c}\\Vert _2^2,$ using vector notation.", "We can obtain the trained coefficient vector by minimizing the empirical loss directly, i.e.", "$ \\mathbf {c}= \\arg \\min _{\\mathbf {c}^{\\prime } \\in \\mathbb {R}^N}\\, \\Vert \\mathbf {V} - \\mathbf {A} \\mathbf {c}^{\\prime }\\Vert _2^2,$ i.e.", "the ordinary least squares problem.", "Note that, for a positive definite function, the coefficients $\\mathbf {c}$ should be constrained to be non-negative based on Theorem REF .", "However, by relaxing the assumptions we find that the approximation is still valid in practice without the additional computational cost incurred by adding a constraint.", "In Figure REF , we compare the approximation of two interaction kernels (the true kernels are the black curves) using the unconstrained optimization formulation (in red) and the non-negative constrained optimization formulation (in blue).", "The (empirical) density function of radial distances is represented by the purple region in all figures.", "The first plot in Figure REF shows that for a radial and positive definite kernel $g$ , the non-negative constrained problem produces a smoother approximation that agrees better with the true kernel for small $r>0$ .", "However, the regions where $g_N$ does not agree with $g$ have low sampling density and are thus less likely events in the dynamics.", "The second plot in Figure REF provides an example where the true interaction kernel $g$ does not satisfy the assumptions of Theorem REF .", "In this case, the kernel $g(r)\\rightarrow -\\infty $ as $r\\rightarrow 0^+$ , thus for visualization we do not include the large (negative) values.", "The non-negative constrained solution is trivial since positive coefficients add more deviation for $0<r<1$ than the potential fit gained for the region $r>1$ .", "Based on this example, we argue that for positive definite and radial kernels the discrepancies between the two formulations are not statistically significant, while for non-positive definite kernels the non-negative constraint can lead to large errors.", "Thus, without prior knowledge about the system, we use the unconstrained formulation.", "Figure: Comparing Constraints: The plots compare the true interaction kernels (the black curves) and the learned interaction kernels using the radial features without the non-negative constraints (the red curves) and with the non-negative constraints (the blue curves).", "The purple region plots the empirical distribution of the radial distances estimated on the trajectories.", "The first graph shows that (within the region of observed radial distances) the two kernels obtain similar accuracy when the true kernel satisfies the constraints.", "The second graph shows that if the true kernel does not satisfy the constraints, then the unconstrained approach is preferred.", "The training problem (REF ) measures the “stationary” error, since it compares the data and model using the observed velocity at each point in time.", "Another way to state this is that the features, i.e.", "the columns of $\\mathbf {A}$ , are applied to the data directly and do not depend on the learned solution generated by the trained system.", "To evaluate the effectiveness of the trained model, we want to measure the error of the model (i.e.", "the equations of motion) and the error that is incurred over the path generated by the trained ODE system: $ {\\left\\lbrace \\begin{array}{ll}&\\frac{d}{dt} \\tilde{\\mathbf {x}}_i(t) = F_N( \\tilde{\\mathbf {r}}_{1,i}(t), \\ldots , \\tilde{\\mathbf {r}}_{n,i}(t); c)\\\\&\\tilde{\\mathbf {r}}_{i^{\\prime },i}(t) = \\tilde{\\mathbf {x}}_{i^{\\prime }}(t)-\\tilde{\\mathbf {x}}_i(t).\\end{array}\\right.", "}$ To define the generalization error of the model, consider the probability measure $\\rho $ on $\\mathbb {R}_+$ that defines the distribution of pairwise distances between agents.", "This gives us the needed distribution for the values of $r$ that define the input distribution to the true interaction kernel $g$ and the trained interaction kernel defined by $g_N({\\mathbf {r}}) = \\frac{2^n}{\\pi ^{\\frac{n-1}{2}}\\, N} \\sum \\limits _{k=1}^N c_k\\, \\omega _k^{n+1} \\,e^{- \\Vert {\\mathbf {r}}\\Vert _2^2\\, \\omega _k^2}.$", "Formally, for any interval $U\\subseteq \\mathbb {R}_+$ , $\\rho (U)$ is the probability that the pairwise distance between two agents is in $U$ , for all IC sampled from $\\mathbf {\\mu }_0$ and for $t\\in [0,T]$ , i.e.", "$\\rho (r) = \\frac{1}{\\binom{n}{2}T}\\int _{0}^T \\mathbb {E}_{\\mathbf {x}(0)\\sim \\mathbf {\\mu }_0}\\left[\\sum \\limits _{1\\le i^{\\prime }<i \\le n} \\delta _{r_{i^{\\prime },i}(t)}(r) \\right]\\;dt.$", "We define the $L^2(\\rho )$ kernel generalization error as $\\Vert G^{\\prime }-G^{\\prime }_N\\Vert _{L^2(\\rho )}$ where $G^{\\prime }(r) = g(r) r$ and $G_N^{\\prime }(r) = g_N(r) r$ .", "In practice, the expectation and supremum cannot be computed directly, so instead one can consider the empirical probability density by replacing the integrals and expectations via averages.", "Lastly, the path-wise generalization error can be defined as $\\mathbf {E}(c)=\\mathbb {E}_{\\mathbf {x}(0)\\sim \\mathbf {\\mu }_0}\\left[\\sup _{t\\in [0,\\tilde{T}],i\\in [n]} \\Vert \\mathbf {x}_i(t) - \\tilde{\\mathbf {x}}_i(t) \\Vert _2\\right]$ where $\\tilde{\\mathbf {x}}$ is the simulated path using (REF ) and thus depends implicitly on $c$ .", "When the final forecast time $\\tilde{T}=T$ , $\\mathbf {E}$ measures how close the simulated path is to the training data, noting that this is not the training error used in (REF ) since the training problem minimizes the stationary risk and not the discrepancy between the data and the simulated trajectory.", "For all experiments, we use $\\tilde{T}>T$ to evaluate the performance of the model in the extrapolation regime, i.e.", "how well the model predicts future states on a test dataset.", "The error (REF ) is approximated numerically by averaging over a finite set of randomly sampled paths and maximizing over a finite set of timestamps.", "The errors depend on the ability of the training problem to approximate the interaction kernel $g$ .", "For functions in the appropriate reproducing kernel Hilbert spaces, it was shown that under some technical assumptions, a random feature approximation obtained from minimizing the empirical risk with a ridge 
regression penalty (i.e.", "adding the term $\\lambda \\Vert \\mathbf {c}\\Vert _2^2$ to (REF )) produces generalization errors that scale like $\\mathcal {O}(m^{-\\frac{1}{2}})$ when the number of random features scales like $N=\\mathcal {O}({m}^{\\frac{1}{2}} \\, \\log {m})$ where $m$ is the total number of data samples [47].", "For the ordinary least squares case, [35], [11] analyzed the behavior of the approximation and its associated risk as a function of the dimension of the data, the size of the training set, and the number of random features.", "In the overparameterized setting, i.e.", "when there are many more features than data samples, [35] concluded that the ordinary least squares model may produce the optimal approximation amongst all kernel methods.", "In [11], [12], the random feature matrix was shown to be well-conditioned in the sense that its singular values were bounded away from zero with high probability.", "In addition, the risk associated with underparameterized random feature methods scales like $\\mathcal {O}(N^{-1}+m^{-\\frac{1}{2}})$ .", "Although the assumptions in [47], [35], [11] do not directly hold in our setting, their results do indicate the expected behavior of using an RFM for learning interacting dynamics in the regime where the number of agents or the number of random initial conditions is sufficiently large (thus matching the random sampling needed in the theory for random feature methods).", "We leave a detailed theoretical examination of this problem for future work." ], [ "Sparse Learning Problem", "The theory for RFMs suggests that $N$ must be sufficiently large to obtain high accuracy models.", "However, the cost of simulating the path using (REF ) requires at least one query of the random feature model for each time-step and thus scales linearly in $N$ .", "Therefore, for computational efficiency one does not want $N$ to be too large, but for accuracy a large $N$ is needed.", "This motivates us to use a sparse random feature regression problem; in particular, we replace (REF ) with the $\\ell _0$ constrained least squares problem: $ {\\mathbf {c}}= \\arg \\min _{\\begin{array}{c}\\mathbf {c}^{\\prime } \\in \\mathbb {R}^N, \\Vert \\mathbf {c}^{\\prime }\\Vert _0 \\le s\\end{array}}\\ \\Vert \\mathbf {V} - \\mathbf {A} \\mathbf {c}^{\\prime }\\Vert _2^2,$ where $\\Vert \\mathbf {c}\\Vert _0$ is the number of nonzero entries of $\\mathbf {c}$ .", "To solve (REF ), we use a greedy approach called the hard thresholding pursuit (HTP) algorithm [16] ${\\left\\lbrace \\begin{array}{ll}& S^{h+1} = \\lbrace \\text{indices of }s\\ \\text{largest (in magnitude) entries of} \\\\& \\hspace{56.9055pt} \\mathbf {c}^h + \\mathbf {A}^*(\\mathbf {V} - \\mathbf {A}\\mathbf {c}^h) \\rbrace \\\\& \\mathbf {c}^{h+1} = \\arg \\min \\lbrace \\Vert \\mathbf {V} - \\mathbf {A}\\mathbf {c}\\Vert _2^2,\\quad \\text{supp}(\\mathbf {c})\\subseteq S^{h+1}\\rbrace \\end{array}\\right.}", "$ which generates a sequence $\\mathbf {c}^{h}$ indexed by $h\\in [0,H]$ .", "The two-step approach iteratively sparsifies and then re-fits the coefficients with respect to the random radial feature matrix $\\mathbf {A}$ applied to the training dataset.", "The motivation for using sparsity is that it enables us to find the most dominant modes from a large feature space.", "Theoretically, we expect that the sparse coefficients $\\mathbf {c}$ concentrate near the largest values of $\\nu (\\omega )$ when taking into account the random sampling density $\\theta (\\omega )$ .", "To verify the sparsity assumption, we measure the (relative) generalization errors produced by the random radial basis approximation obtained from the HTP algorithm (REF ) with various sparsity levels $s$ in the range $[1,150]$ ; see Figure REF .", "Both relative errors generally decay as $s$ increases and we see that they plateau for $s\\ge 80$ ; thus, no accuracy gains are made by increasing the number of terms used in the approximation.", "When $s$ is close to 40, the errors are within an order of magnitude of the optimal error.", "Thus, it would be advantageous to choose a sparsity level $s$ smaller than the dimension of the full feature space in order to lower the simulation cost of (REF ).", "Figure: Motivation for Sparsity: The plot shows the relative generalization errors for the kernel and learned trajectories as a function of the coefficient sparsity level $s$ for the Lennard-Jones system.", "The errors plateau for $s\\ge 80$ , indicating that adding more features does not necessarily improve the error in this example.", "There have been several recent random feature methods that either use an $\\ell ^1$ or an $\\ell ^0$ penalty to obtain sparse representations.", "In [20], the sparse random feature expansion was proposed which utilized an $\\ell ^1$ basis pursuit denoising formulation in order to obtain sparse coefficients with statistical guarantees, see also [11].", "The results in [20] match the risk bounds of the $\\ell ^2$ formulation, i.e.", "an error of $\\mathcal {O}\\left(N^{-1} + m^{-\\frac{1}{2}}\\right)$ , when the sparsity level is set to $s=N$ and the number of measurements $m= \\mathcal {O}(N \\log (N))$ .", "When the coefficient vector is compressible, i.e.", "it can be well-represented by a sparse vector, then $s<N$ gives similar results with a lower model complexity than in the $\\ell ^2$ case and with fewer samples $m= \\mathcal {O}(s \\log (N))$ .", "In [65], a step-wise LASSO approach was used to iteratively increase the number of random features by adding a sparse subset from multiple random samples.", "In [64], an iterative magnitude pruning approach was used to extract sparse representations for RFM.", "Our proposed model is related to the hard-ridge random feature model [49] where it was shown that for certain feature functions $\\phi _k$ with Gaussian data and weights, the iterative method converges to a solution with a given error bound, see also [46].", "Although both the $\\ell ^2$ and the sparse random feature models come with various theoretical properties, those results do not hold directly here.", "In the setting of learning multi-agent interaction kernels, the data samples are typically less ideal than in the theoretical setting, namely, trajectories are highly correlated and so they are not independent and identically distributed random variables.", "In fact, the randomness in the data is obtained from the sampling of various initial conditions.", "Additionally, the trajectories may contain noise and outliers, which leads to non-standard biasing in the velocity estimates in the training problem.", "Empirically, we observe that the two approaches behave in a similar fashion, and that the sparsity-promoting approach often avoids overfitting on the data." 
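For reference, a compact sketch of the HTP iteration displayed above is given below; it is a simplified illustration with unit step size, the adjoint A^* taken as the transpose, and a support-stabilization stopping rule, rather than the released implementation.

```python
# A compact sketch of the HTP iteration above: alternate between selecting the
# s largest entries of a gradient step and re-fitting by least squares on that
# support. A^* is taken as the transpose; the loop stops once the support repeats.
import numpy as np

def htp(A, V, s, n_iter=50):
    """Approximately solve min ||V - A c||_2 subject to ||c||_0 <= s."""
    n_feat = A.shape[1]
    c = np.zeros(n_feat)
    support = np.array([], dtype=int)
    for _ in range(n_iter):
        grad_step = c + A.T @ (V - A @ c)              # c^h + A^*(V - A c^h)
        new_support = np.sort(np.argsort(np.abs(grad_step))[-s:])
        if np.array_equal(new_support, support):       # support has stabilized
            break
        support = new_support
        c = np.zeros(n_feat)
        c[support], *_ = np.linalg.lstsq(A[:, support], V, rcond=None)
    return c

# Toy usage: recover a 5-sparse coefficient vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 100)) / np.sqrt(200)
c_true = np.zeros(100)
c_true[rng.choice(100, size=5, replace=False)] = rng.normal(size=5)
c_hat = htp(A, A @ c_true, s=5)
print(np.max(np.abs(c_hat - c_true)))
```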
], [ "Numerical Method", "The datasets are created by approximating the solutions to the systems of ODE (see Section ) in MATLAB using the ODE15s solver with relative tolerance $10^{-5}$ and absolute tolerance $10^{-6}$ to handle the stiffness of the system.", "The train and test datasets have the same size collection of $L$ independent trajectories computed over the time interval $[0,T]$ .", "The $L$ initial conditions are independently sampled from the user prescribed probability measure $\\mathbf {\\mu }_0$ , i.e.", "$\\mathbf {x}^{\\ell }(0) \\sim \\mathbf {\\mu }_0$ for $\\ell \\in [L]$ which is specified in each example.", "The observations are the state variables (i.e.", "we have measurements of $\\mathbf {x}(t)$ and we do not have direct measurements of the velocity $\\frac{d}{dt}\\mathbf {x}(t)$ ) with multiplicative uniform noise of magnitude $\\sigma _{\\text{noise}}$ at $J$ equidistant time-stamps in the interval $[0,T]$ .", "An approximation of the unknown distribution $\\rho (r)$ (defined in (REF )) is obtained by using a large number $L^{\\prime }$ ($L^{\\prime }\\gg L$ ) of independent trajectories and averaging these trials to estimate $\\rho (r)$ empirically.", "This is used for visualizing the distribution of the data and assessing the error of the learned kernel." ], [ "Loss Function for First-order Homogeneous Systems", "Given $n$ agents $\\lbrace \\mathbf {x}_i(t)\\rbrace _{i=1}^n,\\mathbf {x}_i(t)\\in ^d$ for all $i\\in [n]$ , the first-order homogeneous system is governed by the following system of ODEs: ${\\left\\lbrace \\begin{array}{ll}&\\frac{d}{dt} {\\mathbf {x}}_i(t) = \\frac{1}{n} \\sum \\limits _{i^{\\prime }=1}^n g(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\mathbf {r}_{i^{\\prime },i}(t) =: F(\\mathbf {r}_{1,i}(t),\\dots ,\\mathbf {r}_{n,i}(t))\\\\&\\mathbf {r}_{i^{\\prime },i}(t) = \\mathbf {x}_{i^{\\prime }}(t)-\\mathbf {x}_i(t)\\end{array}\\right.", "}$ where $g$ is the interaction kernel.", "Consider the following radial feature space $\\left\\lbrace \\phi _k\\, \\bigg |\\, \\phi _k( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})= \\frac{\\alpha (\\omega _k)}{n} \\sum \\limits _{i=1}^n e^{- \\Vert \\mathbf {r}_i\\Vert _2^2\\, \\omega _k^2}\\, \\mathbf {r}_{i}\\right\\rbrace .$ We seek to approximate $F$ using a linear combination of elements from the above radial feature space: $F_N( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n}) &= \\frac{1}{N} \\sum \\limits _{k=1}^N c_k\\, \\phi _k( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n}).$ Denote the (approximated) velocity dataset by $\\mathbf {V} =[\\mathbf {V}_{i,j}^\\ell ]_{\\ell \\in [L]}= \\left[\\frac{d}{dt}\\mathbf {x}_i^\\ell (t_j)\\right]\\in ^{n_\\text{tot}}$ and the radial feature matrix for space $\\lbrace \\phi _kk\\in [N]\\rbrace $ by $\\mathbf {A} = [a_{\\tilde{i},k}] = [\\phi _k(\\textbf {r}^\\ell _{:,i}(t_j))]\\in ^{n_\\text{tot}\\times N}$ where $\\tilde{i}$ where $\\tilde{i}$ represents the triple $(i,j,\\ell )$ after reindexing to a single index, the loss function is $\\mathcal {L}_N(\\mathbf {c}) &= \\frac{1}{nJL}\\sum \\limits _{i \\in [n], j \\in [J], \\ell \\in [L]} \\left|\\mathbf {V}_{i,j}^\\ell - F_N( \\mathbf {r}^{\\ell }_{:,i}(t_j); \\mathbf {c}) \\right|^2\\\\&= \\frac{1}{nJL} {\\mathbf {V} - \\mathbf {A} \\mathbf {c}}_2^2.$" ], [ "Loss Function for First-order Heterogeneous Systems", "In a first-order heterogeneous system, each of the $n$ agents $\\lbrace \\mathbf {x}_i(t)\\rbrace _{i=1}^n$ has a type label $k_i\\in \\lbrace 1,2\\rbrace $ .", "We use $n_1$ and $n_2$ to 
denote the number of agents of type 1 and 2.", "The system is governed by the following system of ODEs: ${\\left\\lbrace \\begin{array}{ll}&\\frac{d}{dt} {\\mathbf {x}}_i(t) = \\sum \\limits _{i^{\\prime }=1}^n \\frac{1}{n_{k_{i^{\\prime }}}} g_{k_ik_{i^{\\prime }}}(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\mathbf {r}_{i^{\\prime },i}(t) =: F^\\text{heterog.", "}(\\mathbf {r}_{:,i}(t))\\\\&\\mathbf {r}_{i^{\\prime },i}(t) = \\mathbf {x}_{i^{\\prime }}(t)-\\mathbf {x}_i(t)\\end{array}\\right.", "}$ where $g_{k_ik_{i^{\\prime }}}$ is the interaction kernel governing how agents of type $k_{i^{\\prime }}$ influence agents of type $k_i$ .", "Consider the following radial feature space $\\left\\lbrace \\phi _k^\\text{heterog.", "}\\, \\bigg |\\, \\phi _k^\\text{heterog.", "}( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})= \\sum \\limits _{i=1}^n \\frac{\\beta (\\omega _k)}{n_{k_{i}}} e^{- \\Vert \\mathbf {r}_i\\Vert _2^2\\, \\omega _k^2}\\, \\mathbf {r}_{i}\\right\\rbrace .$ We seek to approximate $F^\\text{heterog.", "}$ using a linear combination of elements from the above radial feature space: $F_N^\\text{heterog.", "}( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n}) &= \\frac{1}{N} \\sum \\limits _{k=1}^N c_k\\, \\phi _k^\\text{heterog.", "}( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})$ where $\\beta (\\omega _k) = \\frac{\\sqrt{\\pi }}{2\\omega _k}$ is a normalization factor.", "Denote the (approximated) velocity dataset by $\\mathbf {V} =[\\mathbf {V}_{i,j}^\\ell ]_{\\ell \\in [L]}= \\left[\\frac{d}{dt}\\mathbf {x}_i^\\ell (t_j)\\right]\\in ^{n_\\text{tot}}$ and the radial feature matrix for space $\\lbrace \\phi _k^\\text{heterog.", "}k\\in [N]\\rbrace $ by $\\mathbf {A} = [a_{\\tilde{i},k}] = [\\phi _k^\\text{heterog.", "}(\\textbf {r}^\\ell _{:,i}(t_j))]\\in ^{n_\\text{tot}\\times N}$ , the loss function is $\\mathcal {L}_N^\\text{heterog.", "}(\\mathbf {c}) &= \\frac{1}{nJL}\\sum \\limits _{i \\in [n], j \\in [J], \\ell \\in [L]} \\left|\\mathbf {V}_{i,j}^\\ell - F_N^\\text{heterog.", "}( \\mathbf {r}^{\\ell }_{:,i}(t_j); \\mathbf {c}) \\right|^2\\\\&= \\frac{1}{nJL} {\\mathbf {V} - \\mathbf {A} \\mathbf {c}}_2^2.$" ], [ "Loss Function for Second Order Homogeneous Systems", "Given $n$ agents $\\lbrace \\mathbf {x}_i(t)\\rbrace _{i=1}^n,\\mathbf {x}_i(t)\\in ^d$ for all $i\\in [n]$ , the second order homogeneous system is governed by the following system of ODEs: ${\\left\\lbrace \\begin{array}{ll}&\\frac{d^2}{dt^2} {\\mathbf {x}}_i(t) = \\frac{1}{n} \\sum \\limits _{i^{\\prime }=1}^n g(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\frac{d}{dt}\\mathbf {r}_{i^{\\prime },i}(t)=:F^\\text{sec.", "}\\left(\\mathbf {r}_{:,i}(t),\\frac{d}{dt}\\mathbf {r}_{:,i}(t)\\right)\\\\&\\mathbf {r}_{i^{\\prime },i}(t) = \\mathbf {x}_{i^{\\prime }}(t)-\\mathbf {x}_i(t)\\end{array}\\right.", "}$ where $g$ is the interaction kernel.", "Consider the following radial feature space $\\left\\lbrace \\phi _k^\\text{sec.", "}\\, \\bigg |\\, \\phi _k^\\text{sec.", "}( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})= \\frac{\\beta (\\omega _k)}{n} \\sum \\limits _{i=1}^n e^{- \\Vert \\mathbf {r}_i\\Vert _2^2\\, \\omega _k^2}\\, \\frac{d}{dt}\\mathbf {r}_{i}\\right\\rbrace .$ We seek to approximate $F^\\text{sec.", "}$ using a linear combination of elements from the above radial feature space: $F_N^\\text{sec.", "}\\left( \\mathbf {r}_{:},\\frac{d}{dt}\\mathbf {r}_{:,i}(t)\\right)= \\frac{1}{N} \\sum \\limits _{k=1}^N c_k\\, \\phi _k^\\text{sec.", "}\\left( \\mathbf {r}_{:},\\frac{d}{dt}\\mathbf 
{r}_{:,i}(t)\\right).$ Denote the (approximated) acceleration dataset by $\\mathbf {V} =[\\mathbf {V}_{i,j}^\\ell ]_{\\ell \\in [L]}= \\left[\\frac{d^2}{dt^2}\\mathbf {x}_i^\\ell (t_j)\\right]\\in ^{n_\\text{tot}}$ and the radial feature matrix for space $\\lbrace \\phi _k^\\text{sec.", "}k\\in [N]\\rbrace $ by $\\mathbf {A} = [a_{\\tilde{i},k}] = \\left[\\phi _k^\\text{sec.", "}\\left(\\mathbf {r}^\\ell _{:,i}(t_j),\\frac{d}{dt}\\mathbf {r}^\\ell _{:,i}(t_j)\\right)\\right]\\in ^{n_\\text{tot}\\times N}$ , the loss function is $\\mathcal {L}_N^\\text{sec.", "}(\\mathbf {c}) &= \\frac{1}{nJL}\\sum \\limits _{i \\in [n], j \\in [J], \\ell \\in [L]} \\left|\\mathbf {V}_{i,j}^\\ell - F_N^\\text{sec.", "}\\left( \\mathbf {r}^{\\ell }_{:,i}(t_j),\\frac{d}{dt}\\mathbf {r}^{\\ell }_{:,i}(t_j); \\mathbf {c}\\right) \\right|^2\\\\&= \\frac{1}{nJL} {\\mathbf {V} - \\mathbf {A} \\mathbf {c}}_2^2.$" ], [ "Remarks", "Given training set $\\mathbf {X}^\\ell = [\\mathbf {x}^\\ell (t_j)]_{j\\in [J],i\\in [n]}\\in ^{dnJ}$ for $\\ell \\in [L]$ , we first obtain the velocity dataset $\\mathbf {V} =[\\mathbf {V}_{i,j}^\\ell ]_{\\ell \\in [L]}= [\\frac{d}{dt}\\mathbf {x}_i^\\ell (t_j)]\\in ^\\text{tot}$ using the central difference scheme: $\\frac{d}{dt}\\mathbf {x}_i^\\ell (t_j)\\approx \\frac{\\mathbf {x}_i^\\ell (t_{j+1})-\\mathbf {x}_i^\\ell (t_{j-1})}{t_{j+1}-t_{j-1}},$ and we do not assume that the samples are evenly spaced in time.", "After having the state and (approximated) velocity information as training data, we randomly sample $\\omega _k$ from $(0,\\sigma ^2)$ (where $\\sigma ^2$ is user-defined) to form the radial feature space for learning: $\\left\\lbrace \\phi _k\\, \\bigg |\\, \\phi _k( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n})= \\frac{\\alpha (\\omega _k)}{n} \\sum \\limits _{i=1}^n e^{- \\Vert \\mathbf {r}_i\\Vert _2^2\\, \\omega _k^2}\\, \\mathbf {r}_{i}\\right\\rbrace .$ We then assemble them into a random radial feature matrix $\\mathbf {A}\\in ^{n_\\text{tot}\\times N}$ whose elements are defined by $a_{\\tilde{i},k} = \\phi _k(\\textbf {r}^l_{:,i}(t_j))$ .", "The empirical loss $\\mathcal {L}_N(\\mathbf {c}) = \\frac{1}{nJL} {\\mathbf {V} - \\mathbf {A} \\mathbf {c}}_2^2$ is minimized using either least squares or HTP with sparsity $s$ to obtain coefficients $\\mathbf {c}$ , and we form the approximations $g_N\\approx g$ and $F_N\\approx F$ via $g_N({\\mathbf {r}}) &= \\frac{2^n}{\\pi ^{\\frac{n-1}{2}}\\, N} \\sum \\limits _{k=1}^N c_k\\, \\omega _k^{n+1} \\,e^{- \\Vert {\\mathbf {r}}\\Vert _2^2\\, \\omega _k^2}\\\\F_N( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n}) &= \\frac{1}{N} \\sum \\limits _{k=1}^N c_k\\, \\phi _k( \\mathbf {r}_{1}, \\ldots , \\mathbf {r}_{n}) .$ These approximations allow us to compute kernel and path-wise generalization errors." 
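The steps summarized in this section can be strung together as in the sketch below. It makes several simplifying assumptions: the trajectory data are a random stand-in rather than solutions of the interacting system, the fit uses ordinary least squares (the sparse variant would swap in HTP), omega_k is taken in absolute value so the normalization alpha(omega) stays positive, and the 1/N factor from F_N is folded into the feature columns so that the fitted coefficients and the reconstructed kernel g_N are mutually consistent.

```python
# An end-to-end sketch of the pipeline summarized above, with stand-in data:
# central-difference velocities, random radial features, a least-squares fit,
# and evaluation of the learned kernel g_N. The 1/N factor of F_N is folded
# into the columns of A so that the fit and g_N are mutually consistent.
import numpy as np

rng = np.random.default_rng(0)
n, d, J, N, sigma = 5, 2, 40, 200, 1.0
t = np.linspace(0.0, 2.0, J)
X = rng.normal(size=(J, n, d)).cumsum(axis=0) * 0.05     # stand-in trajectories

# Central differences for the velocities (interior timestamps only).
V = (X[2:] - X[:-2]) / (t[2:] - t[:-2])[:, None, None]
Xc = X[1:-1]

omega = np.abs(rng.normal(0.0, sigma, size=N))           # |omega_k|, omega_k ~ N(0, sigma^2)
alpha = 2.0**n * omega**(n + 1) / np.pi**((n - 1) / 2)   # normalization from the text

rows, targets = [], []
for j in range(Xc.shape[0]):
    diff = Xc[j][None, :, :] - Xc[j][:, None, :]         # r_{i',i} = x_{i'} - x_i
    r2 = (diff**2).sum(-1)                                # squared pairwise distances
    for i in range(n):
        # phi_k / N = (alpha_k / (n N)) sum_{i'} exp(-||r_{i',i}||^2 omega_k^2) r_{i',i}
        weights = np.exp(-r2[i][:, None] * omega[None, :]**2)   # (n, N)
        rows.append((diff[i].T @ weights) * (alpha / (n * N)))  # (d, N) block
        targets.append(V[j, i])
A = np.vstack(rows)                                       # (n_tot, N) feature matrix
y = np.concatenate(targets)                               # velocity data

c, *_ = np.linalg.lstsq(A, y, rcond=None)                 # ordinary least squares fit

def g_N(r):
    """Learned interaction kernel at radial distances r."""
    r = np.atleast_1d(r)
    return (alpha * c * np.exp(-r[:, None]**2 * omega[None, :]**2)).sum(-1) / N

print(g_N(np.array([0.5, 1.0, 2.0])))
```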
], [ "Hyperparameters", "In each of the examples, the following hyperparameters are specified in the associated table: the initial conditions' probability distributions $\\mathbf {\\mu }_0$ , the number of training trajectories $L$ , the number of trajectories used to approximate the empirical data distribution $L^{\\prime }$ , the number of time-stamps (i.e.", "the observation times) $J$ , the training time range $T$ , the generalization time range $\\tilde{T}$ , the uniform multiplicative noise parameter $\\sigma _\\text{noise}$ , the number of agents $n$ , the number of features $N$ , the probability distribution for the random features $\\theta $ , and the sparsity $s$ .", "For the random features, the probability distribution is set to the normal distribution $(0,\\sigma ^2)$ , and the tunable learning parameters are $N$ , $s$ and $\\sigma $ which were determined through a coarse grid search.", "All codes are available at https://github.com/felix-lyx/random_feature_interacting_system." ], [ "Experimental Results", "For visualization purpose, we choose $d=2$ for our examples.", "In all tests, we use the random radial expansion where the trained coefficients are obtained using the least squares method (RRE) or using the sparse approximation (SRRE).", "The main indication of success is based on the path-wise error, which is reported for each experiment.", "Note that the data is acquired over a time interval $T$ which is smaller than the total time $\\tilde{T}$ used in the prediction.", "All of the experiment include uniform multiplicative noise on the state variables with scaling parameter $\\sigma _\\text{noise}$ .", "In all tests, the algorithm does not know what the true interaction kernels are and thus must learn them from observations of the dynamics.", "We test the two approaches on known interacting kernels so that we can provide empirical error bounds.", "The Lennard–Jones system is a first-order homogeneous interacting system that models intermolecular dynamics.", "The governing equations are given by the following system of ODEs [22], [23], [26]: ${\\left\\lbrace \\begin{array}{ll}&\\frac{d}{dt} {\\mathbf {x}}_i(t) = \\frac{1}{n} \\sum \\limits _{i^{\\prime }=1}^n g(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\mathbf {r}_{i^{\\prime },i}(t)\\\\&\\mathbf {r}_{i^{\\prime },i}(t) = \\mathbf {x}_{i^{\\prime }}(t)-\\mathbf {x}_i(t)\\end{array}\\right.", "}$ where $g(r)= \\frac{G^{\\prime }(r)}{r}$ and $G(r)$ is the Lennard–Jones potential given by $G(r) = 4\\left[\\left(\\frac{\\sigma }{r}\\right)^{12}-\\left(\\frac{\\sigma }{r}\\right)^6\\right],$ where $=10$ is the depth of the potential well (dispersion energy) and $\\sigma =1$ is the distance at which the potential between particles is 0.", "The $r^{-12}$ term accounts for repulsion at short ranges and the $r^{-6}$ term accounts for attraction at long ranges.", "The trajectories form a specific configuration of states, which can limit the available information when training on observations of the configuration (see Figure REF where each curve is the path of one agent in time).", "This is observed numerically in Figure REF , where the empirical density of radial distances $\\rho (r)$ (the purple region) concentrates around $r=1$ with very few measurements outside of $r\\in \\left[\\frac{1}{2},5\\right]$ .", "Even though the data distribution does not provide a rich sampling of $r$ , the RRE (blue curve on the left) and SRRE (blue curve on the right) provide close approximations within the entire interval of interest.", 
"Both the RRE and SRRE approximations produce similar relative kernel errors of $8.64\\cdot 10^{-4}$ and $3.00\\cdot 10^{-4}$ , respectively.", "Using the learned kernels from Figure REF , in Figure REF the simulated paths are compared to the true paths starting from new initial data.", "The total prediction time is $50\\times $ longer than the interval in which we observe the training trajectories.", "The path-wise errors using random ICs are $2.22\\cdot 10^{-2}\\pm 6.48\\cdot 10^{-2}$ (RRE) and $8.20\\cdot 10^{-3}\\pm 2.44\\cdot 10^{-2}$ (SRRE).", "The learned paths require at least one query of the learned kernels for each time-step, thus the SRRE approach leads to significant improvements in run time cost for forecasting and predictions.", "For this experiment, the error is slightly lower in the SRRE case with a $6.67\\times $ speed up in queries at each step.", "Table REF contains the hyperparameters used for this experiment.", "Figure: Lennard–Jones System: Plots of the true interaction kernel (the black curves) and the learned interaction kernels (the blue curves).", "The first graph uses the RRE model and the second uses the SRRE model.Figure: Lennard–Jones System: Comparison of the true paths (left column) and the learned paths (right column, top row is RRE and the bottom row is SRRE) on the test data using the trained interacting kernels.", "Each curve is the path of one agent in time (denoted by the colorbar).Table: Hyperparameters used in the Lennard–Jones system example.The Cucker–Smale system is a second-order homogeneous system.", "The dynamics are governed by the following systems of ODEs [13], [14]: ${\\left\\lbrace \\begin{array}{ll}&\\frac{d^2}{dt^2} {\\mathbf {x}}_i(t) = \\frac{1}{n} \\sum \\limits _{i^{\\prime }=1}^n g(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\frac{d}{dt}\\mathbf {r}_{i^{\\prime },i}(t)\\\\&\\mathbf {r}_{i^{\\prime },i}(t) = \\mathbf {x}_{i^{\\prime }}(t)-\\mathbf {x}_i(t)\\end{array}\\right.", "}$ where $g(r) = (1+r^2)^{-\\frac{1}{4}}$ is an alignment-based interaction kernel.", "Table REF contains the hyperparameters used for the system.", "In this setting, the RRE obtains a smaller relative kernel error than the SRRE, that is $1.30\\cdot 10^{-3}$ and $1.74\\cdot 10^{-1}$ , respectively, since the SRRE kernel produces a small oscillation within the support set of the distribution of observed radial distance.", "This is likely due to an instability formed from the second order model which can be resolved by increasing the sparsity.", "However, the learned and true trajectories shown in Figure REF indicate that the overall model agrees with the true governing system.", "In fact, path-wise errors using random ICs are close to each other: $6.39\\cdot 10^{-1}\\pm 5.40\\cdot 10^{-2}$ (RRE) and $6.36\\cdot 10^{-1}\\pm 5.58\\cdot 10^{-2}$ (SRRE).", "Table: Hyperparameters used in the Cucker–Smale system example.Figure: Cucker–Smale System: Comparison of the true paths (left column) and the learned paths (right column, top row is RRE and the bottom row is SRRE) on the test data using the trained interacting kernels.", "Each curve is the path of one agent in time (denoted by the colorbar).Figure: Cucker–Smale System: Plots of the true interaction kernel (the black curves) and the learned interaction kernels (the blue curves).", "The first graph uses the RRE model and the second uses the SRRE model.Single Predator-Swarming Prey Interactions is an example of a first-order heterogeneous particle system with four interaction kernels.", "Each agent 
$\\mathbf {x}_i$ has a type label $k_i\\in \\lbrace 1,2\\rbrace $ .", "We use $n_1$ and $n_2$ to denote the number of agents of type 1 (prey) and 2 (predator), and $g_{k_ik_{i^{\\prime }}}$ is the interaction kernel governing how agents of type $k_{i^{\\prime }}$ influence agents of type $k_i$ .", "The dynamics are governed by the following system: ${\\left\\lbrace \\begin{array}{ll}&\\frac{d}{dt} {\\mathbf {x}}_i(t) = \\sum \\limits _{i^{\\prime }=1}^n \\frac{1}{n_{k_{i^{\\prime }}}} g_{k_ik_{i^{\\prime }}}(\\Vert \\mathbf {r}_{i^{\\prime },i}(t)\\Vert _2)\\, \\mathbf {r}_{i^{\\prime },i}(t)\\\\&\\mathbf {r}_{i^{\\prime },i}(t) = \\mathbf {x}_{i^{\\prime }}(t)-\\mathbf {x}_i(t)\\end{array}\\right.", "}$ where the four interaction kernels (prey-prey, prey-predator, predator-prey, predator-predator) are given by Table: NO_CAPTIONand all terms will be learned.", "One of the potential difficulties with this example is that the path-wise error depends on learning each of the kernels to within the same accuracy.", "In addition, the density of radial distances between the predator-prey (and by symmetry the prey-predator) concentrates at $r=1.5$ with a rapid decay outside of the peak.", "This example tests the methods ability to obtain accurate models with a limited sampling distribution.", "Both the RRE and SRRE produce similar relative kernel errors for all kernels, with their path-wise errors using random ICs being comparable: $3.70\\cdot 10^{-2}\\pm 2.49\\cdot 10^{-1}$ for RRE and $8.19\\cdot 10^{-3}\\pm 4.63\\cdot 10^{-2}$ for SRRE.", "The true and learned trajectories are plotted in Figure REF , where the dots represent the transition point between the length of time used in the training set and the additional time “extrapolated” by the learned dynamical system.", "The data in Figure REF is a new set of conditions that are not from the training set.", "Table REF contains the hyperparameters used for the experiment.", "Table: Hyperparameters used in the Single Predator-Swarming Prey system example.Table: Kernel and generalization errors of Sheep-Food interacting system example.Figure: Single Predator-Swarming Prey Interactions: Comparison of the true paths (left column) and the learned paths (right column, top row is RRE and the bottom row is SRRE) on the test data using the trained interacting kernels.", "Each curve is the path of one agent in time (denoted by the colorbar).", "The warm tone is the predator and the cool tone is used for the prey.", "The dots represent the transition point between the length of time used in the training set and the additional time in the test.Figure: Single Predator-Swarming Prey Interactions: Plots of the true interaction kernels (the black curves) and the learned interaction kernels (the blue curves).", "The first four graphs use the RRE model and the last four graphs use the SRRE model.Sheep-Food Interacting System: Inspired by art which captures large flocks of sheep swarming various grain patterns [19], we propose a sheep-food interacting model and try to discover the interactions based on observations.", "Our approach to model the sheep is based off of the prey dynamics in [10].", "Similarly, the sheep take on the role of predator in their interaction with the stationary food source.", "The sheep both want to cluster together for safety, but also are drawn to seek out the food at the cost of the group cohesion.", "We manually set the dynamics to be governed by a first-order heterogeneous system: $(\\ref {eq:first_heterog_system})$ where Table: 
where type 1 is the food and type 2 are the sheep.", "The food is the stationary “prey”; it is arranged in a “heart” shape and plotted as targets in Figure REF .", "The sheep (the predator swarm) are initialized randomly below the food, near $y=-10$ .", "The challenge is to correctly learn the dynamics from the early-time interactions in order to forecast the entire path the sheep take around the heart configuration.", "Note that although the training set only includes the interaction in the blue region, up to $T=100$ , the forecasted paths capture the full geometry.", "The errors for the kernels and paths are displayed in Table REF .", "The RRE approximation produces smaller errors, but the overall kernels are similar; see Figure REF .", "Figure: Sheep-Food Interacting System: Comparison of the true paths (left column) and the learned paths (right column, top row is RRE and the bottom row is SRRE) on the test data using the trained interacting kernels. Figure: Sheep-Food System: Plots of the true interaction kernels (the black curves) and the learned interaction kernels (the blue curves).", "The first four graphs use the RRE model and the last four graphs use the SRRE model. Table: Hyperparameters used in the sheep-food interacting system example. Comparison: Figure REF provides a comparison of our approach and the standard RFF [43], [44], [45], showing that our randomized feature construction produces a more accurate approximation to the interacting kernels.", "We compare our algorithm on the Lennard–Jones example with the approximation produced by the piecewise polynomial model from [32].", "The setup is as described in the previous Lennard–Jones example.", "The model from [32] using $L=1000$ initial conditions for the training set produces a relative kernel error of $2.26\cdot 10^{-2}$ and a path-wise generalization error of $7.31\cdot 10^{-2}$ .", "The SRRE using $L=100$ initial conditions for the training set produces a relative kernel error of $3.00\cdot 10^{-4}$ and a path-wise generalization error of $8.20 \cdot 10^{-3}$ .", "This highlights two advantages of our approach.", "First, the RRE and SRRE methods need an order of magnitude fewer samples since they use a global smooth candidate set rather than a local basis.", "Second, the generalization error is lower due to the regularizing nature of the method, i.e.", "only retaining the dominant random features."
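To make the dense versus sparse training distinction concrete, the sketch below fits the Cucker–Smale kernel $g(r)=(1+r^2)^{-1/4}$ from noisy samples using a randomized radial feature matrix, once by plain least squares (the RRE analogue) and once with a basic hard-thresholding-pursuit loop (the SRRE analogue). For simplicity the kernel is sampled directly rather than inferred from trajectories as in the experiments, and the Gaussian-bump feature map, the gradient step and every hyperparameter below are placeholders chosen only for illustration; they are not the authors' feature construction or HTP implementation (their code is at the repository linked earlier).

```python
import numpy as np

def radial_features(r, widths, centers):
    # Placeholder radial feature map: Gaussian bumps exp(-w_j * (r - b_j)^2),
    # standing in for the paper's random radial expansion.
    return np.exp(-np.abs(widths)[None, :] * (r[:, None] - centers[None, :]) ** 2)

def htp(A, y, s, iters=50):
    # Basic hard thresholding pursuit: keep the s most significant features,
    # then refit by least squares on that support.
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # conservative gradient step
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        u = c + mu * A.T @ (y - A @ c)            # gradient step on the residual
        support = np.argsort(np.abs(u))[-s:]      # hard threshold to sparsity s
        c = np.zeros_like(c)
        c[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return c

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 5.0, size=400)                                # observed radial distances
y = (1.0 + r**2) ** (-0.25) + 0.01 * rng.standard_normal(r.shape)  # noisy kernel values

N, s, sigma = 200, 20, 1.0                                # feature count, sparsity, scale
widths = sigma * rng.standard_normal(N)
centers = rng.uniform(0.0, 5.0, size=N)
A = radial_features(r, widths, centers)

c_dense = np.linalg.lstsq(A, y, rcond=None)[0]            # RRE analogue
c_sparse = htp(A, y, s)                                   # SRRE analogue
print(np.count_nonzero(c_sparse), "active features out of", N)
```

With the sparse coefficients, each forecasting query only touches the $s$ retained features, which is the run-time advantage quoted for the SRRE above.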
], [ "Discussion", "We developed a random feature model and learning algorithm for approximating interacting multi-agent systems.", "The method is constructed using an integral based model of the interacting velocity field which is approximated by a RFM directly on observations of state variables.", "The resulting functions, i.e.", "the sums-of-Gaussians, led to a new radial randomized feature space which is theoretically justified and was shown to produce a more consistent and stable result than other RFMs.", "We employ a randomized feature space rather than a pre-defined set of candidate functions, so that the algorithm would not require prior knowledge on the system or its interaction kernels.", "This is motivated by the applications, since manually modeling and training complex multi-agent systems is often computationally infeasible.", "We used a sparse random feature training algorithm based on the HTP algorithm.", "We note that when the sparsity parameter is set to the feature space size, i.e.", "$s=N$ , the training reverts to the least squares problem and is applicable for some of the problems examined in this work.", "In particular, when one has a sufficiently rich dataset, least squares based RFMs produce accurate solutions.", "On the other hand, it is likely that there is only a limited set of observations of the system.", "In that case there are several benefits to incorporating sparsity in the training of RFM, i.e.", "extracting a RFM that only uses a small number of randomized features.", "First, sparsity lowers the cost of forecasting for high-dimensional multi-agent system, since the cost scales with $s$ rather than $N$ .", "Since the trained systems are used to forecast the multi-agent dynamical system, a large number of repeated queries are needed, which would limit the use of standard RFM and overparameterized neural networks.", "Sparse regression also leads to less overfitting when one has a scarce or limited training set.", "Lastly, although SRRE may lead to models which are less interpretable than fixed candidate dictionaries such as [7], [50], [41], the SRRE approach does provide some insight into the structure of the interaction kernel and can be used to guide analysis of the underlying system.", "Our approach obtained accurate predictions for popular first-order system with homogeneous and heterogeneous interactions, second-order systems, and a new sheep-food interacting system from noisy measurements.", "A comparison with a similar approach showed that the SRRE produced lower path-wise generalization error and lower kernel error while also needing an order of magnitude fewer training points.", "As with most approximation techniques for training dynamical systems, one limitation of the SRRE is with high levels of noise and outliers.", "One way to alleviate this is to preprocess the data, possibly using a neural network model, to approximate the underlying training paths before extracting its collective behavior.", "However, the choice of preprocessing algorithms may bias the trained dynamics.", "Another limitation is its inability to train highly accurate predictive models when the underlying interacting system incorporates additional terms outside of the assumptions.", "A future direction may be to train only the interacting kernel component of the system and develop a closure approach for obtaining the remaining aspects of the dynamics." 
], [ "Acknowledgements", "S.G.M.", "would like to acknowledge the support of NSF grants $\\#1813654$ , $\\#2112085$ and the Army Research Office (W911NF-19-1-0288).", "H.S.", "was supported in part by AFOSR MURI FA9550-21-1-0084 and NSF DMS-1752116." ] ]
2212.05591
[ [ "Periodic and compacton travelling wave solutions of discrete nonlinear\n Klein-Gordon lattices" ], [ "Abstract We prove the existence of periodic travelling wave solutions for general discrete nonlinear Klein-Gordon systems, considering both cases of hard and soft on-site potentials.", "In the case of hard on-site potentials we implement a fixed point theory approach, combining Schauder's fixed point theorem and the contraction mapping principle.", "This approach enables us to identify a ring in the energy space for non-trivial solutions to exist, energy (norm) thresholds for their existence and upper bounds on their velocity.", "In the case of soft on-site potentials, the proof of existence of periodic travelling wave solutions is facilitated by a variational approach based on the Mountain Pass Theorem.", "The proof of the existence of travelling wave solutions satisfying Dirichlet boundary conditions establishes rigorously the presence of compactons in discrete nonlinear Klein-Gordon chains.", "Thresholds on the averaged kinetic energy for these solutions to exist are also derived." ], [ "Introduction", "Travelling wave solutions (TWSs) in lattice dynamical systems have attracted considerable interest due to their fundamental importance in numerous physical contexts such as the energy transfer in biomolecules, the mobility of dislocations in crystalline materials and the propagation of pulses in optical systems, to name a few [1]-[5].", "A variety of fundamental models for the description of such phenomena has been studied, including Fermi-Pasta-Ulam-Tsingou, discrete nonlinear Klein-Gordon (DKG) and discrete nonlinear Schrödinger (DNLS) models [1]-[27], as well as, spatially discrete reaction diffusion systems [39]-[46] which are relevant to pattern formation resulting from phase transitions.", "Discrete systems are appropriate when the scale of decompositions are too small to be effectively described by continuous approximations.", "The existence of solitary travelling wave solutions has been rigorously treated with various approaches from nonlinear analysis [6],[7],[8], based on direct and minimax variational methods [11]-[19], reduction to finite dimensional manifolds and normal form techniques [20],[21] and fixed-point methods[22],[23],[26].", "For a presentation of several functional-analytic methods and corresponding background for their implementation in nonlinear lattices we refer to [27].", "On the other hand, the problem of the existence of periodic TWSs is much less explored, in particular for second order in time lattice dynamical systems [28], such as DKG lattices.", "In the present work, by combining a fixed-point approach and variational methods, we study the existence of periodic TWSs for general DKG systems with anharmonic on-site potentials in one-dimensional lattices.", "In particular, we study general DKG systems described by the following set of coupled oscillator equations $\\frac{d^2 q_n}{dt^2}=\\kappa (q_{n+1}-2q_n+q_{n-1})-V^{\\prime }(q_n).$ The prime $^{\\prime }$ stands for the derivative with respect to $q_n$ , the latter being the coordinate of the oscillator at site $n$ evolving in an anharmonic on-site potential $V(q_n)$ .", "Each oscillator interacts with its neighbours to the left and right and the strength of the interaction is regulated by the value of the parameter $\\kappa $ .", "This system has a Hamiltonian structure related to the energy $H=\\sum _{n}\\left(\\frac{1}{2}p_n^2+V(q_n)+\\frac{\\kappa }{2}(q_{n+1}-q_{n})^2\\right),$ and it is 
time-reversible with respect to the involution $p\\mapsto -p$ .", "In the case of a finite lattice, the system (REF ) describes the dynamics of an arbitrary number of $N+1$ oscillators, which are placed equidistantly on the interval $\\Omega =(-L,L)$ of length $2L$ .", "The quantity $\\kappa =1/2h^2$ serves as the discretisation parameter, with $h=2L/N$ defining the lattice spacing.", "The position of the oscillators is given by the discrete spatial coordinate $x_n=-L+nh, \\quad n= 0,1,2,\\ldots ,N.$ For finite lattices (REF ) we impose periodic boundary conditions $q_n = q_{n+N}.$ We consider TWSs of the form: $q_n(t)=Q\\left(n-c\\,t\\right)=Q(z),$ with a $2L-$ periodic function $Q(z)$ , $z=n-c\\,t$ , satisfying $\\int _{-L}^LQ(z)dz=0,$ where $c \\in {\\mathbb {R}}\\setminus \\lbrace 0\\rbrace $ is the velocity.", "The solutions (REF ) satisfy the advance-delay equation $c^2{Q}^{\\prime \\prime }(z)=\\kappa (Q(z+1)-2Q(z)+Q(z-1))-V^{\\prime }(Q(z)).$ As the system (REF ) possesses the time-reversibility symmetry it suffices to consider $c>0$ .", "The presentation of the results is as follows: Section deals with the existence of periodic TWSs in finite lattices with hard on-site potentials.", "Imposing periodic boundary conditions, the existence of periodic TWSs is proved by a fixed-point approach, based on Schauder's fixed point theorem and contraction mapping principles.", "The Schauder's fixed point method was made use of in our recent works [34], [35] in order to establish the existence of breather solutions.", "It is extended herein to equation (REF ) in order to treat the existence of periodic TWSs.", "This method, combined with a contraction mapping argument, yields the identification of a ring in the energy space where nontrivial solutions may exist, together with the derivation of energy thresholds for their existence, and an upper bound for their velocity.", "For the latter, the contraction mapping argument is applied on a map defined by a suitable auxiliary linear, non-homogeneous problem stemming from (REF ), whose solvability is guaranteed by the Friedrichs extension theorem.", "We should remark that while the current literature on the energy threshold problem concerns mainly breather solutions for DNLS models [36],[37] (see also [24],[25],[26],[38] and references therein), to the best of our knowledge, the results of the present work are the first on energy thresholds for TWSs.", "In section we consider the case of soft on-site potentials.", "This time, for the proof of existence of periodic TWSs on finite lattices supplemented with periodic or Dirichlet boundary conditions, we implement a variational approach based on the Mountain Pass Theorem.", "Notably, our proof of existence of nontrivial solitary TWSs satisfying Dirichlet boundary conditions, also establishes the presence of discrete compactons [29], [30], [31], [32], [33], as localised waves with compact support (finite wave length) in nonlinear DKG lattices, that is, systems with anharmonic on-site potentials and linear interactions.", "This has to be distinguished from the numerical and analytical studies of compactons in nonlinear discrete systems with no on-site potential and nonlinear interactions as in [32] and [33], respectively.", "Again threshold values for the average kinetic energy of TWSs are derived and this time with the aid of the contraction mapping principle.", "We remark that the derivation of the kinetic energy thresholds is valid for finite as well as infinite lattices.", "In fact, we present the 
proof for the latter case where the TWSs satisfy vanishing boundary conditions." ], [ "Hard on-site potentials", "In this section we consider hard on-site (uniformly convex) potentials $V(x)$ assuming that the following assumption holds on them: A: $V:{\\mathbb {R}}\\rightarrow {\\mathbb {R}}$ is non-negative and at least twice continuously differentiable and is characterised by the following properties: $V(0)=V^\\prime (0)=0,\\,\\,\\,V^{\\prime \\prime }(0)>0.$ The unique equilibrium at $x=0$ is a global minimum of $V(x)$ .", "Further, we assume that $V$ satisfies for some positive constants $\\overline{m}$ $\\alpha $ , $\\beta $ , and $K$ , the conditions $|V^{\\prime \\prime }(x)|&\\le & \\overline{m}|x|^\\alpha ,\\;\\;\\forall x\\in \\mathbb {R},\\\\|V^\\prime (x_1)-V^\\prime (x_2)|&\\le & K(|x_1|^\\beta +|x_2|^\\beta )|x_1-x_2|.", "$ In what follows, we reformulate the original problem (REF ) and (REF ) as a fixed point problem for a suitably defined operator in a Banach space.", "Motivated by our recent works [34],[35] applying Schauder's Fixed Point Theorem [8] to prove the existence of breathers in discrete nonlinear KG lattices, we show that the same Theorem is applicable, in order to prove the existence of periodic travelling waves.", "We will use the following version of Schauder's Fixed Point Theorem (see e.g.", "in [8]): Theorem 2.1 Let $G$ be a non-empty closed convex subset of the Banach space $X$ .", "Suppose $U:\\,\\,G\\rightarrow \\,G$ is a compact map, then $U$ has a fixed point in $G$ ." ], [ "Existence of periodic TWSs", "We start by introducing appropriate function spaces on which our methods will be employed.", "We consider first the space ${{\\mathcal {X}}}_0\\,=\\,\\left\\lbrace \\,Q\\in L^{2}_{per}(-L,L)\\,\\vert \\,\\,\\,\\int _{-L}^LQ(s)ds=0\\,\\right\\rbrace \\,,$ that is, the space of $L^2_{per}(-L,L)$ , $2L-$ periodic, square integrable functions of zero mean, endowed with the norm, $|| Q||_{{{\\mathcal {X}}}_0}=\\left(\\frac{1}{2L}\\,\\int _{-L}^L\\,(Q(s))^2ds\\right)^{1/2}= \\left(\\sum _{k\\in {\\mathbb {Z}}\\setminus \\lbrace 0\\rbrace }|\\hat{{Q}}_{k}|^2 dk\\right)^{1/2}.$ $\\hat{Q}_{k}$ determines the Fourier-coefficient in the Fourier series expansion: $Q(s)&=&\\sum _{k\\in {\\mathbb {Z}}\\setminus \\lbrace 0 \\rbrace }\\hat{Q}_{k} \\exp \\left( i\\Omega k s\\right),\\,s\\in (-L,L),\\,\\,\\,\\Omega =\\frac{\\pi }{L},\\\\{\\hat{Q}}_{k}&=&\\frac{1}{2L} \\int _{-L}^{L} Q(s)\\exp (-i \\Omega k s)ds,\\,\\,\\,\\hat{Q}_{k}=\\overline{\\hat{Q}}_{-k},$ of $Q(s)$ and $\\overline{x}$ denotes the complex conjugate of $x$ .", "Evidently, $\\mathcal {X}_0$ is a closed subspace of $L^{2}_{per}(-L,L)$ .", "We shall also use the spaces ${{\\mathcal {X}}}_1\\,=\\,\\left\\lbrace \\,Q\\in H^{1}_{per}(-L,L)\\,\\vert \\,\\,\\,\\int _{-L}^LQ(s)ds=0\\,\\right\\rbrace \\,,$ endowed with the scalar product and induced norm $(P,Q)_{\\mathcal {X}_1}=\\int _{-L}^{L}P^{\\prime }(z)Q^{\\prime }(z)d{z},\\;\\;||Q||^2_{\\mathcal {X}_1}=\\int _{-L}^{L}Q^{\\prime }(z)^2d {z}.$ It can be easily checked that the Poincaré inequality $\\int _{-L}^{L}(Q(z))^2dz\\le {C}(L)\\int _{-L}^{L}(Q^{\\prime }(z))^2dz,$ holds with the optimal constant ${C}(L)={L^2}/{\\pi ^2}$ when the Hilbert space $\\mathcal {X}_1$ is used.", "Of use will also be the space ${{\\mathcal {X}}}_2\\,=\\,\\left\\lbrace \\,Q\\in H^{2}_{per}(-L,L)\\,\\vert \\,\\,\\,\\int _{-L}^LQ(s)ds=0\\,\\right\\rbrace .$ For ${{\\mathcal {X}}}_2$ we shall use the norms $|| Q ||_{\\mathcal {X}_2}^2&=&\\frac{1}{2L}\\int 
_{-L}^{L}\\,\\left((Q(s))^2+(DQ(s))^2+(D^2Q(s))^2\\right)ds=\\sum _{k\\in {\\mathbb {Z}}\\setminus \\lbrace 0\\rbrace }\\left(1+(k\\Omega )^2+(k\\Omega )^4\\right)|\\hat{{Q}}_{k}|^2.$ It can be readily seen that $\\mathcal {X}_{2}$ is a closed subspace of $H_{per}^{2}(-L,L)$ .", "Furthermore, $\\mathcal {X}_2$ is compactly embedded in $\\mathcal {X}_0$ ($\\mathcal {X}_2\\Subset \\mathcal {X}_{0}$ ).", "In some cases, we will facilitate variational methods.", "In particular, solutions of (REF ) will be considered as critical points of the action functional $S:\\,\\mathcal {X}_1 \\rightarrow {\\mathbb {R}}$ given by $S(Q)=\\int _{-L}^L\\left[\\frac{1}{2}\\left(cQ^\\prime (z)\\right)^2-V(Q(z))-\\frac{1}{2}\\kappa \\,[Q(z+1)-Q(z)]^2\\right]dz.\\nonumber $ Related with the action functional is the (total) energy functional $E$ on $\\mathcal {X}_1$ : $E(Q)=\\int _{-L}^L\\left[\\frac{1}{2}\\left(Q^\\prime (z)\\right)^2+V(Q(z))+\\frac{\\kappa }{2}\\,[Q(z+1)-Q(z)]^2\\right]dz,$ which will be also involved in the derivation of several bounds for the solutions.", "Let us denote by $C_{0,*}$ the constant of the embedding $\\mathcal {X}_2\\Subset \\mathcal {X}_0$ , and by $C_{2,*}$ the constant of the embedding $\\mathcal {X}_2\\subset L^{\\infty }([-L,L])$ .", "We now proceed to the statement and proof of the result on the existence of periodic TWSs: Theorem 2.2 Let assumption A hold.", "If $\\frac{4\\kappa }{c^2}<\\Omega ^2,$ then there exists a periodic TWS $q_n(t)=Q\\left(n-\\,ct\\right)\\equiv Q(z)$ of (REF ) with $Q \\in H^2(-L,L)$ and $||Q||_{\\mathcal {X}_0}\\le \\left( \\frac{c^2\\Omega ^2-4\\kappa }{{\\overline{m}}C_{3,*}^{\\beta }}\\right)^{1/\\beta }:={R}_{max},$ where the constant $C_{3,*}$ is given by $C_{3,*}:=\\frac{C_{2,*}}{C_{0,*}}, $ such that $Q(z+2L)=Q(z),\\,\\,\\forall z \\in {\\mathbb {R}}.$ Proof: We seek $2L-$ periodic functions $Q\\in H^{2}(-L,L)$ satisfying (REF ) and solve Eq.", "(REF ).", "For the following discussions it is suitable to re-write Eq.", "(REF ) as: ${Q}^{\\prime \\prime }(z)-\\frac{\\kappa }{c^2} (Q(z+1)-2Q(z)+Q(z-1))=-\\frac{1}{c^2}V^{\\prime }(Q(z)).$ Thus only the right-hand side of (REF ) features terms nonlinear in $Q$ .", "Ultimately, (REF ) shall be expressed as a fixed point equation in $Q$ .", "To proceed, we recall first some basic facts for the intersection of Banach spaces [10]: For two Banach spaces $X$ and $Y$ , the intersection $X\\cap Y$ is a Banach space endowed with the norm $||x||_{X\\cap Y}=||x||_{X}+||x||_{Y},\\;\\;\\mbox{for all $x\\in X\\cap Y$.", "}$ Clearly, from (REF ), we have that $||x||_{X}\\le ||x||_{X\\cap Y},\\;\\;\\mbox{and}\\;\\;||x||_{Y}\\le ||x||_{X\\cap Y},\\;\\;\\mbox{for all $x\\in X\\cap Y$.", "}$ Moreover, in the particular case where $Y$ is continuously embedded in $X$ with an embedding constant $c_0$ , that is $||x||_{x}\\le c_0||x||_{Y}$ , for all $x\\in Y$ , then, due to (REF ) and (REF ), $||x||_{X}\\le ||x||_{X\\cap Y}\\le c_0^*||x||_{Y},\\;\\;\\mbox{for all $x\\in Y$,}\\;\\;c_0^*=1+c_0.$ Hence, if $x$ is in a closed ball of $Y$ of radius $r$ , then due to (REF ), $x$ is in the closed ball of $X\\cap Y$ of radius $c_0^{*}r$ , and in the closed ball of the same radius of $X$ .", "With these preparations we apply the above for $X=\\mathcal {X}_0$ and $Y=\\mathcal {X}_2$ , and we define a convex set of $\\mathcal {X}_0\\cap \\mathcal {X}_2$ , as follows: First, we consider arbitrary elements $P\\in \\mathcal {X}_0\\cap \\mathcal {X}_2$ , such that $||P||_{\\mathcal {X}_2}\\le \\varrho .", "$ Since $P\\in \\mathcal 
{X}_2\\cap \\mathcal {X}_0$ and $\\mathcal {X}_2\\Subset \\mathcal {X}_0$ with compact embedding, it holds by application of (REF ), that $||P||_{\\mathcal {X}_0}\\le ||P||_{\\mathcal {X}_0\\cap \\mathcal {X}_2}\\le C_{0,*}||P||_{\\mathcal {X}_2}\\le C_{0,*}\\varrho ,$ Then, we set for simplicity, $R=C_{0,*}\\varrho ,$ and we consider the convex set of $\\mathcal {X}_0\\cap \\mathcal {X}_2$ as ${\\mathcal {Y}}_0\\,=\\,\\left\\lbrace \\,P\\in \\mathcal {X}_0\\cap \\mathcal {X}_2\\,\\,:\\,\\,|| P||_{\\mathcal {X}_0}\\le R\\,\\right\\rbrace .$ Evidentially, the set $\\mathcal {Y}_0$ is well defined and non-empty due to (REF ) and (REF ).", "As we will prove below, $R$ (and thus, $\\varrho $ , due to (REF )), will be explicitly determined, giving rise to the estimate (REF ) for the TWs.", "We relate the left-hand side of (REF ) to the linear mapping: $M\\,:\\,\\mathcal {X}_2\\,\\rightarrow \\,\\mathcal {X}_{0}$ : $M(P)={P}^{\\prime \\prime }(z)-\\frac{\\kappa }{c^2}(P(z+1)-2P(z)+P(z-1)).$ As a next step, we establish the invertibility of this mapping and derive an upper bound for the norm of its inverse.", "Applying the operator $M$ to the Fourier elements $\\exp (i\\,\\Omega ks)$ in (REF ), results in $M\\exp (i\\,\\Omega ks)=\\nu _k \\exp (i\\,\\Omega ks),$ where $\\nu _k= -\\Omega ^2\\,k^2+4\\,\\frac{\\kappa }{c^2}\\sin ^2\\left(\\frac{\\Omega }{2}k\\right).$ The hypothesis (REF ) guarantees that $\\nu _k \\ne 0$ , for all $k \\in {\\mathbb {Z}}\\setminus \\lbrace 0\\rbrace $ .", "Therefore, the mapping $M$ possesses an inverse obeying $M^{-1}\\exp (i\\,\\Omega ks)=(1/\\nu _k)\\exp (i\\,\\Omega ks)$ .", "For the norm of the linear operator $M^{-1}:\\,\\,\\mathcal {X}_{0} \\rightarrow \\mathcal {X}_2$ , one gets, by using the hypothesis (REF ), the upper bound: ${ || M^{-1} ||_{\\mathcal {X}_{0},\\mathcal {X}_2}=\\sup _{0 \\ne Q \\in \\mathcal {X}_0}\\frac{|| M^{-1}\\,Q ||_{\\mathcal {X}_2}}{|| Q ||_{\\mathcal {X}_0}}}\\nonumber \\\\&=&\\sup _{0 \\ne Q \\in \\mathcal {X}_0} \\,\\frac{\\left(\\frac{1}{2L}\\int _{-L}^L\\,\\left((M^{-1}Q(s))^2 + (DM^{-1}{Q}(s))^2+(D^2M^{-1}{Q}(s))^2\\right)ds\\right)^{1/2}}{|| Q ||_{\\mathcal {X}_0}}\\nonumber \\\\&=&\\sup _{0 \\ne Q \\in \\mathcal {X}_0}\\frac{\\left(\\frac{1}{2L}\\int _{-L}^L\\, \\left(| \\sum _{l}^\\prime \\frac{1}{\\nu _l}\\hat{Q}_{l}\\exp (i\\,l\\Omega s)|^2+| \\sum _{l}^\\prime \\frac{i\\Omega \\,l}{\\nu _l}\\hat{Q}_{l}\\exp (i\\,l\\Omega s)|^2+|\\sum _{l}^\\prime \\frac{(i\\Omega \\,l)^2}{\\nu _l}\\hat{Q}_{l}\\exp (i\\,l\\Omega s)|^2\\right) ds\\right)^{1/2}}{|| Q ||_{\\mathcal {X}_0}}\\nonumber \\\\&\\le & \\sup _{l\\in {\\mathbb {Z}}\\setminus \\lbrace 0 \\rbrace } \\frac{\\sqrt{1+(\\Omega \\,l)^2+(\\Omega \\,l)^4} }{|\\nu _l|}\\,\\cdot \\sup _{0 \\ne Q \\in \\mathcal {X}_0}\\frac{\\left(\\sum _{l}^\\prime |\\hat{Q}_{l}|^2\\right)^{1/2}}{|| Q ||_{\\mathcal {X}_0}}\\nonumber \\\\&=&\\sup _{l\\in {\\mathbb {Z}}\\setminus \\lbrace 0 \\rbrace } \\frac{\\sqrt{1+(\\Omega \\,l)^2+(\\Omega \\,l)^4}}{|-\\Omega ^2\\,l^2+\\frac{4\\kappa }{c^2}\\sin ^2\\left(\\frac{\\Omega }{2}l\\right)|}\\nonumber \\\\&\\le &\\frac{1+\\Omega ^2}{\\,\\Omega ^2-\\frac{4\\kappa }{c^2}},$ which verifies the boundedness of $M^{-1}$ ; note that we have used the notation $\\sum _l^\\prime = \\sum _{l\\in {\\mathbb {Z}}\\setminus \\lbrace 0 \\rbrace }$ .", "Obviously, the same estimate (REF ) verifies the boundedness of $M^{-1}$ as an operator $M^{-1}:\\mathcal {X}_2\\cap \\mathcal {X}_0\\rightarrow \\mathcal {X}_0$ On the other hand, we remark for later use, that if we 
consider only the lower-order terms of the estimate (REF ), we get that $||M^{-1}||_{\\mathcal {X}_0,\\mathcal {X}_0}\\le \\frac{c^2}{c^2\\Omega ^2-4\\kappa }.$ Associated with the right-hand side of (REF ) we introduce the nonlinear operator $N\\,:\\,\\mathcal {Y}_{0}\\,\\rightarrow \\,\\mathcal {Y}_{0}$ , as $N(P)= -\\frac{1}{c^2}V^{\\prime }(P).$ To verify continuity of the operator $N$ on $\\mathcal {X}_{0}$ , for any $P\\in \\mathcal {Y}_0$ , (which implies by the definition of $\\mathcal {Y}_0$ that $P\\in \\mathcal {X}_{0}\\cap \\mathcal {X}_2$ ), we prove that $N$ is Frechet differentiable with a locally bounded derivative.", "Using (REF ) we have $N^{\\prime }(P):\\,h\\in \\mathcal {X}_{0}\\,\\mapsto N^{\\prime }(P)[h] =-\\frac{1}{c^2}V^{\\prime \\prime }(P)h \\in \\mathcal {X}_{0}.$ Then, since $P\\in \\mathcal {X}_2\\cap \\mathcal {X}_0$ and $\\mathcal {X}_2\\subset L^{\\infty }([-L,L])$ with continuous embedding, we have that $|| N^{\\prime }(P)[h]||_{\\mathcal {X}_{0}}&=&|| -\\frac{1}{c^2}V^{\\prime \\prime }(P)\\,h ||_{\\mathcal {X}_0}\\nonumber \\\\&=& \\frac{1}{c^2}\\left(\\frac{1}{2L}\\int _{-L}^L| V^{\\prime \\prime }(P(z))\\,h(z)|^2dz\\right)^{1/2}\\le \\frac{1}{c^2}\\left(\\overline{m}^2 \\max _{-L\\le z\\le L}|P(z)|^{2\\alpha }\\frac{1}{2L}\\int _{-L}^L|h(z)|^2dz\\right)^{1/2}\\nonumber \\\\&\\le & \\frac{\\overline{m}}{c^2}\\,\\max _{-L\\le z\\le L}|P(z)|^{\\alpha }\\,|| h||_{\\mathcal {X}_{0}}\\nonumber \\\\&\\le & \\frac{\\overline{m}}{c^2}C_{2,*}^{\\alpha }||P||_{\\mathcal {X}_2}^{\\alpha }|| h||_{\\mathcal {X}_{0}}\\le A(\\overline{m},c^2,C_{2,*},\\varrho )|| h||_{\\mathcal {X}_{0}},$ for the constant $A=A(\\overline{m},c^2,C_{2,*},\\varrho )=\\frac{\\overline{m}}{c^2}C_{2,*}^{\\alpha }\\varrho ^{\\alpha },$ proving the local boundedness of the differential.", "Then, one concludes that the Frechet derivative is locally bounded as $|| N^{\\prime }(P)||_{{\\cal {L}}(\\mathcal {X}_{0},\\mathcal {X}_{0})}\\le A.$ With the aid of the locally bounded derivative, we can prove the local Lipschitz continuity of $N$ as follows: $|| N(P)-N(Q) ||_{\\mathcal {X}_{0}}&\\le & \\sup _{S \\in [P,Q]} || N^{\\prime }(S)||_{{\\cal {L}}(\\mathcal {X}_{0},\\mathcal {X}_{0})}|| P-Q||_{\\mathcal {X}_{0}}\\nonumber \\\\&\\le &A|| P-Q||_{\\mathcal {X}_{0}}.$ As the range of $N$ is concerned, we derive that for any $P\\in \\mathcal {Y}_0$ , $|| N(P) ||_{\\mathcal {X}_{0}}=\\frac{1}{c^2}|| \\,-V^{\\prime }(P)\\, ||_{\\mathcal {X}_0}&=&\\frac{1}{c^2}\\left(\\frac{1}{2L}\\int _{-L}^L|-V^{\\prime }(P(z))|^2du\\right)^{1/2}\\le \\frac{1}{c^2}\\left(\\frac{1}{2L}\\int _{-L}^L\\overline{m}^2| P(z)|^{2(\\beta +1)}dz\\right)^{1/2}\\nonumber \\\\& \\le & \\frac{\\overline{m}}{c^2}\\,\\max _{-L\\le z \\le L}|P(z)|^{\\beta }|| P ||_{\\mathcal {X}_{0}}\\nonumber \\\\& \\le &\\frac{\\overline{m}}{c^2}C_{2,*}^{\\beta }||P||_{\\mathcal {X}_2}^{\\beta }||P||_{\\mathcal {X}_0}\\nonumber \\\\& \\le &\\frac{\\overline{m}}{c^2}C_{2,*}^{\\beta }C_{0,*}||P||_{\\mathcal {X}_2}^{\\beta +1}\\nonumber \\\\&\\le &\\frac{\\overline{m}}{c^2}C_{2,*}^{\\beta }C_{0,*}\\varrho ^{\\beta +1}= \\frac{\\overline{m}}{c^2}\\left(\\frac{C_{2,*}}{C_{0,*}}\\right)^{\\beta }R^{\\beta +1}.$ Thus, we proved in (REF ), that for any $P\\in \\mathcal {Y}_0$ , $|| N(P) ||_{\\mathcal {X}_{0}}\\le \\frac{\\overline{m}C_{3,*}^{\\beta }}{c^2}R^{\\beta +1},$ with the constant $C_{3,*}=\\frac{C_{2,*}}{C_{0,*}}$ , as defined in (REF ).", "At last, we express the problem (REF ) as a fixed point equation in terms of a mapping $\\mathcal 
{Y}_0\\,\\rightarrow \\, \\mathcal {Y}_{0}$ : $Q=M^{-1}\\circ N(Q)\\equiv {L}(Q).$ Using (REF ) and (REF ), we have $||{L}(Q)||_{\\mathcal {X}_0}&=&|| M^{-1}(\\,N(Q))||_{\\mathcal {X}_0}\\le || M^{-1} ||_{\\mathcal {X}_0,\\mathcal {X}_0}|| \\,N(Q)||_{\\mathcal {X}_0}\\nonumber \\\\& \\le &\\frac{{\\overline{m}}C_{3,*}^{\\beta }}{c^2\\Omega ^2-4\\kappa }R^{\\beta +1}\\le R,$ assuring by assumptions (REF ) and (REF ), that indeed, ${L}(\\mathcal {Y}_0)\\subseteq \\mathcal {Y}_0.$ Furthermore, since it holds that ${L}(Q)\\in \\mathcal {X}_2$ for all $Q \\in \\mathcal {Y}_0\\subseteq \\mathcal {X}_0$ , one has ${L}(\\mathcal {Y}_0)\\subseteq \\mathcal {X}_2 \\cap \\mathcal {Y}_0$ , and as the embedding of $\\mathcal {X}_2$ in $\\mathcal {X}_0$ is compact, the operator ${L}$ is compact.", "It remains to prove that ${L}$ is continuous on $\\mathcal {Y}_{0}$ : For arbitrary $P_1,P_2\\in \\mathcal {Y}_{0}$ , we have $||{L}(P_1)-{L}(P_2)||_{\\mathcal {X}_0}&=&|| M^{-1}(N(P_1))- M^{-1}(N(P_2))||_{\\mathcal {X}_2}\\le || M^{-1}||_{\\mathcal {X}_{0},\\mathcal {X}_{0}}\\,|| N(P_1)- N(P_2)||_{\\mathcal {X}_{0}}\\nonumber \\\\&\\le & \\frac{Ac^2}{c^2\\Omega ^2-4\\kappa } ||P_1-P_2||_{\\mathcal {X}_0}<\\epsilon ,$ if $|| P_1-P_2||_{\\mathcal {X}_0} < \\delta =\\frac{c^2\\Omega ^2-4\\kappa }{Ac^2}\\epsilon ,$ for any given $\\epsilon >0$ , verifying that ${L}(Q)$ is continuous on $\\mathcal {Y}_0$ .", "Thus, all the assumptions of Schauder’s fixed point theorem are satisfied, and hence, the fixed point equation $Q = {L}(Q)$ has at least one solution.", "$\\square $ Remark 2.1 (Regularity of travelling waves).", "By the Sobolev embeddings, the obtained $H^2$ -travelling wave solutions $Q$ are $C^{1}$ .", "Therefore, due to Eq.", "(REF ) it holds that ${Q}^{\\prime \\prime } \\in C^{1}$ and conclusively $Q\\in C^{3}$ , that is, they are classical solutions." 
], [ "Existence of non-trivial TWSs and an energy threshold", "In this section, we consider the problem of existence and non-existence of non-trivial TWSs with frequencies satisfying the strengthened condition $\\Omega ^2>\\frac{4\\kappa +\\overline{m}C_{3,*}^{\\beta }}{c^2},$ if compared to (REF ).", "The motivation for assuming (REF ), is explained in Theorem REF concluding the section.", "We start by stating a result on non-existence of non-trivial TWSs with frequencies satisfying (REF ).", "Proposition 2.1 Suppose that conditions A and (REF ) hold and that $||Q||_{\\mathcal {X}_0}\\le R<\\left(\\frac{c^2\\Omega ^2-4\\kappa }{{\\overline{m}}C_{3,*}^{\\beta }}\\right)^{1/(1+\\beta )}:={R}_{crit}.$ Then the equation $Q={L}(Q)$ has only the trivial solution, that is, there exist no non-trivial TWSs.", "Proof: If (REF ) holds, we get $|| {L}(Q) ||_{\\mathcal {X}_0}=|| M^{-1}(N(Q))||_{\\mathcal {X}_0} \\le || M^{-1}||_{\\mathcal {X}_0,\\mathcal {X}_0}\\cdot || N(Q)||_{\\mathcal {X}_0}<1$ .", "Thus ${L}$ is a contraction, and the Contraction Mapping Theorem implies that there is a unique function $Q$ that solves the equation $Q={L}(Q)$ .", "Since ${L}(0)=0$ , this unique solutions is the trivial one.", "Note that due to (REF ) the potential $V(x)$ can be expressed as $V(x)=(1/2)x^2+W(x)$ , with $W$ satisfying (REF ) (see also [35]).", "Furthermore, let us observe that as the (conserved) energy functional (REF ) is coercive, it can be used to bound the $L^2(-L,L)$ -norm of the TWSs as follows $E(Q)&=&\\int _{-L}^L\\left[\\frac{1}{2}\\left(Q^\\prime (z)\\right)^2+\\frac{1}{2}Q^2(z)+W(Q(z))+\\frac{\\kappa }{2}\\,[Q(z+1)-Q(z)]^2\\right]dz\\nonumber \\\\&=& \\int _{-L}^L\\left[\\frac{1}{2}\\left(Q^\\prime (z)\\right)^2+W(Q(z))+\\frac{\\kappa }{2}\\,[Q(z+1)-Q(z)]^2\\right]dz+\\frac{1}{2}||Q||_{L^2(-L,L)}^2,$ so that $|| Q||_{\\mathcal {X}_0}\\le \\sqrt{2E}.$ In conclusion, if the energy of the lattice system is less than $E_{crit}(c,\\kappa ,\\overline{m},\\Omega )=\\sqrt{2R_{crit}}$ , no TWSs of given values for $c,\\kappa , \\overline{m},\\Omega $ exist.", "$\\square $ Combining the existence Theorem REF and Proposition REF we may identify a ring in the phase space $\\mathcal {X}_0$ with quantified radii in which the non-trivial travelling wave solutions with frequencies satisfying the enhanced condition (REF ), exist.", "This is illustrated in the cartoon of Fig.", "REF .", "Theorem 2.3 Let the assumption A and the condition (REF ) hold.", "The system (REF ) may possess non-trivial TWSs only if ${R}_{crit}= \\left(\\frac{c^2\\Omega ^2-4\\kappa }{{\\overline{m}}C_{3,*}^{\\beta }}\\right)^{1/(1+\\beta )}\\le ||Q||_{\\mathcal {X}_0}\\le {R}_{max}=\\left(\\frac{c^2\\Omega ^2-4\\kappa }{{\\overline{m}}C_{3,*}^{\\beta }}\\right)^{1/\\beta }.$ Proof: On the one hand, Theorem REF establishes the existence of solutions when $||Q||_{\\mathcal {X}_0}\\le {R}_{max}$ .", "On the other hand, according to Proposition REF , if $||Q||_{\\mathcal {X}_0}< {R}_{crit}$ only the trivial solution exists.", "Thus, a non-trivial solution exists only if ${R}_{crit}<||Q||_{\\mathcal {X}_0}\\le {R}_{max},$ that is, when (REF ) is satisfied.", "For the latter to be valid, we require ${R}_{crit}<{R}_{max}$ , motivating the condition (REF ) on the frequencies of the TWSs.", "$\\square $ Figure: Illustration of the statement of Theorem : Non-trivial travelling wave solutions exist in the ring 𝐄=ℬ(0,R max ) ¯∖ℬ(0,R crit ) ¯\\mathbf {E}=\\overline{\\mathcal {B}(0,{R}_{max})}\\setminus \\overline{\\mathcal {B}(0,{R}_{crit})} of 
the space $\mathcal {X}_0$ [light (green) coloured area ($\mathbf {E}$)].", "The radius ${R}_{crit}$ of the closed ball $\overline{\mathcal {B}(0,{R}_{crit})}$ follows from the contraction mapping Theorem (see Proposition ) and defines a threshold for non-existence of non-trivial solutions [in the darker (red) coloured area ($\mathbf {N}-\mathbf {E}$)].", "The radius ${R}_{max}$ of the closed ball $\overline{\mathcal {B}(0,{R}_{max})}$ results from Schauder's fixed point theorem (see Theorem )." ], [ "Upper bound for the velocity", "By using a fixed point approach, we may derive upper bounds for the velocity $c$ of TWSs.", "For this purpose, we rewrite equation (REF ) in the following operator form: $-Q^{\prime \prime }(z)={\frac{1}{c^2}\left\lbrace \kappa \left([Q(z)-Q(z-1)]-[Q(z+1)-Q(z)]\right)+V^\prime (Q(z))\right\rbrace }.$ We recall some basic auxiliary results, starting with the Friedrichs extension Theorem.", "Theorem 2.4 Let $\mathcal {L}_0:D(\mathcal {L}_0)\subseteq {X}_0\rightarrow {X}_0$ be a linear symmetric operator on the Hilbert space ${X}_0$ with its domain $D(\mathcal {L}_0)$ being dense in ${X}_0$ .", "Assume that there exists a constant $c>0$ such that $(\mathcal {L}_0v,v)_{{X}_0}\ge c||v||^2_{{X}_0}\;\;\mbox{for all}\;\;v\in D(\mathcal {L}_0).$ Then $\mathcal {L}_0$ has a self-adjoint extension $\mathcal {L}:D(\mathcal {L})\subseteq {X}_1\subseteq {X}_0\rightarrow {X}_0$ where ${X}_1$ denotes the energetic Hilbert space endowed with the energetic scalar product $(v,w)_{{X}_1}=(\mathcal {L}v,w)_{{X}_0}$ for all $v,w\in {X}_1$ and the energetic norm $||v||_{{X}_1}^2=(\mathcal {L}v,v)_{{X}_0}$ .", "Furthermore, the operator equation $\mathcal {L}v=f, f\in {X}_0,$ has a unique solution $v\in D(\mathcal {L})$ .", "In addition, if $\hat{\mathcal {L}}:{X}_1\rightarrow {X}_1^*$ denotes the energetic extension of $\mathcal {L}$ , then $\hat{\mathcal {L}}$ is the canonical isomorphism from ${X}_1$ to its dual ${X}_1^*$ and the operator equation $\hat{\mathcal {L}}v=f, f\in {X}_1^*,$ has also a unique solution $v\in {X}_1$ .", "With Theorem REF in hand, we discuss the left-hand side of Eq.", "(REF ).", "It is well known that Theorem REF is applicable to the operator $\mathcal {L}_0:D(\mathcal {L}_0)\subseteq L^2(-L,L)\rightarrow L^2(-L,L)$ , $\mathcal {L}_0Q=-Q^{\prime \prime }(z)$ , with domain of definition, $D(\mathcal {L}_0)$ , the space of $C^{\infty }$ -functions on $(-L,L)$ .", "Since $D(\mathcal {L}_0)$ is dense in $\mathcal {X}_0$ , and inequality (REF ) holds, the Friedrichs extension of $\mathcal {L}_0$ is the operator $\mathcal {L}:D(\mathcal {L})\rightarrow \mathcal {X}_0$ where $D(\mathcal {L})=\left\lbrace v\in \mathcal {X}_1\;:\;\mathcal {L}v\in L^2(-L,L)\right\rbrace .$ Consequently, the equation $-Q^{\prime \prime }(z)=f,\;\;\mbox{for every}\;\;f\in L^2(-L,L),$ has a unique solution in $D(\mathcal {L})$ .", "Thus, we shall consider the right-hand side of Eq.", "(REF ) as a suitable mapping on $L^2(-L,L)$ .", "For its linear part, we have the following lemma proved in [12].", "Lemma 2.1 The linear operators $A_{1}[Q(z)]=Q(z+1)-Q(z)=\int _{z}^{z+1}Q^{\prime }(s)ds,\;\;A_{2}[Q(z)]=Q(z)-Q(z-1)=\int _{z-1}^{z}Q^{\prime }(s)ds,$ are continuous from $\mathcal {X}_1$ to $ L^2(-L,L)\cap L^{\infty }(-L,L)$ and $||A_{i}Q||_{L^\infty }\le ||Q||_{{\mathcal {X}_1}}$ , $||A_iQ||_{{\mathcal {X}_0}}\le ||Q||_{{\mathcal {X}_1}}$ , 
$i=1,2$ .", "Using Theorem REF , we treat the following auxiliary linear, non-homogeneous problem $-Q^{\\prime \\prime }(z)=\\frac{1}{c^2}\\left\\lbrace \\kappa \\left([\\Psi (z)-\\Psi (z-1)]-[\\Psi (z+1)-\\Psi (z)]\\right)+V^\\prime (\\Psi (z))\\right\\rbrace .$ for some arbitrary fixed $\\Psi \\in \\mathcal {X}_1$ , as an equation of the form (REF ).", "In particular, we have the following result: Proposition 2.2 For any $\\Psi \\in \\mathcal {X}_1$ , the equation (REF ) has a unique solution $Q\\in D(\\mathcal {L})\\subset \\mathcal {X}_1$ .", "Proof: Equation (REF ) can be rewritten in the form $-Q^{\\prime \\prime }(z)=\\frac{1}{c^2}\\left\\lbrace \\kappa {\\left([A_2[\\Psi (z)]-A_1[\\Psi (z)]\\right)}+V^\\prime (\\Psi (z))\\right\\rbrace :=\\mathcal {F}[\\Psi (z)].$ Due to the continuous embedding $\\mathcal {X}_1\\subset L^{r}(-L,L)$ for any $1\\le r\\le \\infty $ and condition (), we have that for some constant $C=C(K,\\beta )>0$ , $||V^\\prime (\\Psi )||^2_{\\mathcal {X}_0}=\\int _{-L}^{L}(V^\\prime (\\Psi (z)))^{2}dz\\le K^2 \\int _{-L}^{L}|\\Psi (z)|^{2(\\beta +1)}dz\\le C||\\Psi ||_{\\mathcal {X}_1}^{2(\\beta +1)},$ Then, for the right-hand side of Eq.", "(REF ) we get $||\\mathcal {F}[\\Psi ]||_{\\mathcal {X}_0}&\\le & \\frac{1}{c^2}\\left\\lbrace \\kappa ||A_2[\\Psi ]-A_1[\\Psi ]||_{\\mathcal {X}_0} +||V^\\prime (\\Psi (z))||_{{\\mathcal {X}_0}}\\right\\rbrace \\nonumber \\\\&\\le &\\frac{1}{c^2}\\left\\lbrace 2\\kappa ||\\Psi ||_{\\mathcal {X}_1}+C ||\\Psi ||_{\\mathcal {X}_1}^{2(\\beta +1)}\\right\\rbrace .$ Thus, ${ \\mathcal {F}[\\Psi ] \\in L^2(-L,L)}$ , and by virtue of Theorem REF Eq.", "(REF ) has a unique solution $Q\\in D(\\mathcal {L})$ .", "$\\square $ We now proceed with the implementation of the fixed point argument.", "To this aim we consider for some $R>0$ the closed ball of $\\mathcal {X}_1$ , $\\mathcal {B}_R:=\\left\\lbrace \\psi \\in \\mathcal {X}_{1}\\,:\\,||\\psi ||_{{\\mathcal {X}_1}}\\le R\\right\\rbrace $ .", "Proposition REF shows that the map $\\mathcal {T}:\\mathcal {X}_1\\rightarrow \\mathcal {X}_1$ defined as $\\mathcal {T}[\\Psi ]=Q,$ where $Q$ is the unique solution of the auxiliary problem (REF ), is well defined.", "Hence we may introduce $\\Psi _1,\\Psi _2\\in \\mathcal {B}_R$ such that $Q=\\mathcal {T}[\\Psi _1]\\;\\;\\mbox{and}\\;\\;P=\\mathcal {T}[\\Psi _2].$ Then the difference $Y=Q-P$ satisfies the equation $-Y^{\\prime \\prime }(z)&=&\\mathcal {F}[\\Psi _1(z)]-\\mathcal {F}[\\Psi _2(z)]\\nonumber \\\\&=&\\frac{1}{c^2}\\left\\lbrace \\kappa \\left(A_1[\\Psi _2(z)]-A_1[\\Psi _1(z)]+A_2[\\Psi _1(z)]-A_2[\\Psi _2(z)]\\right)\\right\\rbrace \\nonumber \\\\&+&\\left.V^\\prime (\\Psi _1(z))-V^\\prime (\\Psi _2(z))\\right\\rbrace .$ From Lemma REF the linear operators $A_i:\\mathcal {X}_1\\rightarrow L^2(-L,L)\\cap L^{\\infty }(-L,L)$ are globally Lipschitz, $||A_i\\psi _1-A_i\\psi _2||_{\\mathcal {X}_0}\\le ||\\psi _1-\\psi _2||_{\\mathcal {X}_1},\\\\||A_i\\psi _1-A_i\\psi _2||_{L^\\infty }\\le ||\\psi _1-\\psi _2||_{\\mathcal {X}_1},\\,\\,\\, {{{i=1,2.", "}}}\\nonumber $ To estimate the difference of the remaining terms in Eq.", "(REF ), we use () and the embedding $\\mathcal {X}_1\\subset L^{\\infty }(-L,L)$ with its embedding constant $C_*$ , $\\int _{-L}^{L}|V^\\prime (\\Psi _1(z))-V^\\prime (\\Psi _2(z))|^2dz&\\le & K^2\\int _{-L}^{L}(|\\Psi _1(z)|^\\beta +|\\Psi _2(z)|^\\beta )^2|\\Psi _1(z)-\\Psi _2(z)|^2dz\\nonumber \\\\&\\le & K^2\\max _{-L\\le z \\le L}(|\\Psi _1(z)|^\\beta +|\\Psi _2(z)|^\\beta )\\int _{-L}^{L}|\\Psi _1(z)-\\Psi 
_2(z)|^2dz\nonumber \\&=&2K^2 C_*^{2\beta } R^{2\beta }||\Psi _1(z)-\Psi _2(z)||_{\mathcal {X}_0}^2$ Hence, for the right-hand side of (REF ) we get $||\mathcal {F}[\Psi _1(z)]-\mathcal {F}[\Psi _2(z)]||_{\mathcal {X}_0}\le M||\Psi _1-\Psi _2||_{\mathcal {X}_1},$ where the constant $M$ is given by $M=\frac{2}{c^2}\left(\kappa +KC_*R^\beta \right).$ Next, by taking the $ L^2(-L,L)$ -scalar product of (REF ) with $Y$ and using the Cauchy-Schwarz inequality and Young's inequality, we get the estimate $||Y||_{\mathcal {X}_1}^2&\le & ||\mathcal {F}[\Psi _1(z)]-\mathcal {F}[\Psi _2(z)]||_{\mathcal {X}_0}\,||Y||_{\mathcal {X}_0}\nonumber \\&\le &M\sqrt{C(L)}||\Psi _1-\Psi _2||_{\mathcal {X}_1}\,||Y||_{\mathcal {X}_1}\nonumber \\&\le &\frac{1}{2}||Y||_{\mathcal {X}_1}^2+\frac{M^2C(L)}{2}||\Psi _1-\Psi _2||_{\mathcal {X}_1}^2.$ Note that the Poincaré inequality (REF ) has been used.", "From (REF ) we derive $||Y||_{\mathcal {X}_1}^2=||\mathcal {T}[\Psi _1]-\mathcal {T}[\Psi _2]||_{\mathcal {X}_1}^2\le M^2C(L)||\Psi _1-\Psi _2||_{\mathcal {X}_1}^2.$ We conclude that if the Lipschitz constant satisfies $M\sqrt{C(L)}<1,$ then the map $\mathcal {T}:\mathcal {B}_R\rightarrow \mathcal {B}_R$ is a contraction.", "Hence the map $\mathcal {T}$ satisfies the assumptions of the Banach Fixed Point Theorem and has a unique fixed point.", "By the assumptions we have that $\mathcal {F}(0)=0$ .", "Therefore we deduce that if (REF ) holds, then the unique fixed point is the trivial one.", "Thus nontrivial solutions exist only if (REF ) is violated, that is, when $M\sqrt{C(L)}>1.$ Regarding the upper bound for the velocity, we summarise the result in the following theorem: Theorem 2.5 An upper bound for the velocity $c$ of nontrivial periodic TWSs $q_n(t)=Q(n-ct)=Q(z)$ of prescribed norm $||Q||_{\mathcal {X}_1}\le R$ for the system (REF ), on the periodic lattice $-L\le n \le L$ , is given by $c^2<2(\kappa +KC_*R^\beta )C(L).$" ], [ "The estimates on the TWSs proved in Theorems REF , REF and REF exhibit a coherent dependence on the lattice parameters, the frequency $\Omega $ , the radius $R$ and the velocity $c$ .", "Motivated by the discussion of [35], we aim to discuss the potential physical relevance of these estimates: For fixed $\overline{m}$ and $\kappa $ , we observe that $\lim _{\Omega \rightarrow \infty }{R}_{max}&=&\lim _{\Omega \rightarrow \infty }{R}_{crit}=\infty ,\;\;\mbox{for fixed $c$},\\\lim _{c\rightarrow \infty }{R}_{max}&=&\lim _{c\rightarrow \infty }{R}_{crit}=\infty ,\;\;\mbox{for fixed $\Omega $}.$ Both limits are physically relevant in the sense that, in the limit of arbitrarily large frequency or velocity, a type of “energy” of the solution, measured herein in the norm of $\mathcal {X}_0$ , should also become arbitrarily large.", "This behavior can be relevant to energy localization phenomena [47] or the notion of quasi-collapse [48].", "Note that in the second limit as $c\rightarrow \infty $ , the growth of the norm in $\mathcal {X}_0$ implies, due to the Poincaré inequality (REF ) (or due to Sobolev embeddings), the growth of the kinetic energy of the TWSs, which is consistent with the growth of $c$ .", "The above coherent dependence on the lattice parameters is also evident in the derived upper bound for the velocity (REF ).", "Theorem REF shows that TWSs of given “energy”, measured by the norm of $\mathcal {X}_1$ , can travel only with velocities satisfying the upper bound (REF )." 
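The parameter dependence described above can also be visualised directly from the closed-form expressions. The sketch below evaluates $R_{max}$, $R_{crit}$ and the velocity bound of Theorem REF over a range of velocities; all constants, and in particular the embedding constants $C_{3,*}$ and $C_*$ which are set to one here, are illustrative assumptions rather than values computed in the text.

```python
import numpy as np

# Illustrative lattice and potential parameters; embedding constants set to 1 (assumption).
L, kappa, Omega = 1.0, 0.5, np.pi
m_bar, beta, K = 1.0, 2.0, 1.0
C3, Cstar = 1.0, 1.0
CL = L**2 / np.pi**2                                  # optimal Poincare constant C(L)

c = np.linspace(1.0, 10.0, 10)
base = (c**2 * Omega**2 - 4 * kappa) / (m_bar * C3**beta)
R_max = base ** (1.0 / beta)                          # radius from Schauder's theorem
R_crit = base ** (1.0 / (1.0 + beta))                 # radius from the contraction argument
print(np.all(R_crit < R_max))                         # the ring is non-empty; both radii grow with c

R = 2.0                                               # prescribed norm bound in the X_1 norm
c_upper = np.sqrt(2.0 * (kappa + K * Cstar * R**beta) * CL)
print(c_upper)                                        # upper bound on the speed of such TWSs
```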
], [ "Soft on-site potentials", "In this section we study the KG lattice with soft on-site potentials of the form $V(x)=-\\frac{\\omega _0^2}{2}x^2+\\frac{a}{p+1}x^{p+1},\\,\\,\\,a>0,\\,\\,p>1.$ The standard quartic double-well potential is obtained for $p=3$ .", "We prove the existence of periodic TWSs on finite lattices with imposed periodic boundary conditions and compacton TWSs on finite lattices with Dirichlet boundary conditions, utilising the Mountain Pass Theorem." ], [ "Periodic TWSs: Existence by the Mountain Pass Theorem", "If $p+1=2r>0$ , the on-site potentials $V(x)$ possess the reflection symmetry $V(x)=V(-x)$ .", "For such potentials we first consider periodic TWSs satisfying (REF ), that is, solutions $Q(z)$ performing oscillations about $Q=0$ so that the associated energy $E> 0$ .", "Hence, the Poincaré inequality applies.", "These periodic TWSs, as solutions of (REF ), are critical points of the action functional $S:\\,\\mathcal {X}_1 \\rightarrow {\\mathbb {R}}$ given by $S(Q)=\\int _{-L}^{L}\\left[\\frac{c^2}{2}\\left(Q^\\prime (z)\\right)^2+\\frac{\\omega _0^2}{2}Q^2(z)-\\frac{a}{p+1}Q^{p+1}(z)-\\frac{\\kappa }{2}[Q(z+1)-Q(z)]^2\\right]dz.\\nonumber $ We get $<S^\\prime (Q),P>&=& \\int _{-L}^{L}\\left[c^2Q^\\prime (z) P^\\prime (z)+\\omega _0^2 Q(z) P(z)-a Q^p(z)P(z)\\right.\\nonumber \\\\&+&\\left.\\kappa [Q(z+1)-2Q(z)+Q(z-1)]P(z)\\right]dz,\\,\\,\\, P,Q \\in \\mathcal {X}_1,$ where $<\\cdot ,\\cdot >$ is the standard duality bracket between $\\mathcal {X}_1$ and its dual $\\mathcal {X}_1^*$ ; by the definition of the derivative $S^{\\prime }$ , for any $Q\\in \\mathcal {X}_1$ , the functional $S^{\\prime }(Q):\\mathcal {X}_1\\rightarrow \\mathbb {R}$ is a linear functional acting on any $P\\in \\mathcal {X}_1$ as $S^{\\prime }(Q)[P]= <S^\\prime (Q),P>$ [7].", "To prove the existence of TWSs we facilitate the Mountain Pass Theorem (MPT).", "We recall [[7], Definition 4.1, p. 130] (Palais-Smale (PS) condition) and [[7], Theorem 6.1, p. 
140] (Mountain Pass Theorem (MPT) of Ambrosetti-Rabinowitz [6]).", "Definition 3.1 Let $X$ be a Banach space and ${\bf E}:X\rightarrow \mathbb {R}$ be $C^1$ .", "We say that ${\bf E}$ satisfies condition (PS) if, for any sequence $\lbrace u_n\rbrace \in X$ such that $|{\bf E}(u_n)|$ is bounded and ${\bf E}^\prime (u_n) \rightarrow 0$ as $n \rightarrow \infty $ in $X^*$ , there exists a convergent subsequence.", "Theorem 3.1 Let ${\bf E}:X\rightarrow \mathbb {R}$ be $C^1$ and satisfy (A) ${\bf E}(0)=0$ , (B) $\exists \rho >0$ , $\alpha >0$ : $||u||_X=\rho $ implies ${\bf E}(u)\ge \alpha $ , (C) $\exists u_1 \in X$ : $||u_1||_X\ge \rho $ and ${\bf E}(u_1)<\alpha $ .", "Define $\Gamma =\left\lbrace \gamma \in C^0([0,1],X)\,:\,\gamma (0)=0,\, \gamma (1)=u_1\right\rbrace .$ Let $F_{\gamma }=\lbrace \gamma (t)\in X\,:\,0\le t\le 1\rbrace $ and $\mathcal {L}=\lbrace F_\gamma \,:\,\gamma \in \Gamma \rbrace .$ If ${\bf E}$ satisfies (PS), then $\beta :=\inf _{F_\gamma \in \mathcal {L}}\,\sup \lbrace {\bf E}(v)\,:\,v\in F_\gamma \rbrace \ge \alpha ,$ is a critical value of the functional ${\bf E}$ .", "We proceed by proving the validity of the assumptions of the MPT: Lemma 3.1 Assume either (i) $\omega _0^2\ge 4\kappa $ , or (ii) $\omega _0^2<4\kappa $ and $c^2>C(L)(4\kappa -\omega _0^2)$ .", "Then the functional $S$ satisfies the PS-condition.", "Proof: Assume $S(Q_m)$ is bounded, i.e.", "$|S(Q_m)|\le M$ for all $m\in {\mathbb {N}}$ , and $S^\prime (Q_m)\rightarrow 0,\;\;\mbox{as $m\rightarrow \infty $ in $\mathcal {X}_1^*$}.$ Recall also that $|<S^\prime (Q_m),Q_m>|\le ||S^\prime (Q_m)||_{\mathcal {X}_1^*}||Q_m||_{\mathcal {X}_1}, $ by the standard inequality for the duality bracket [9].", "Then, using (REF ), we deduce that for any $\epsilon >0$ , there exists $N(\epsilon ) \in \mathbb {N}$ such that $||S^{\prime }(Q_m)||_{\mathcal {X}_1^*}\le \epsilon ,\;\;\mbox{for $m>N(\epsilon )$}.$ Applying (REF ) for $\epsilon \le 1$ and using the inequality (REF ), we deduce that $|<S^\prime (Q_m),Q_m>|\le ||Q_m||_{\mathcal {X}_1},\;\;\mbox{for $m>N(\epsilon )$}. $ Hence, for chosen $b\in (1/(p+1),1/2)$ , we may deduce from (REF ) that $b \,|<S^\prime (Q_m),Q_m>|\le ||Q_m||_{\mathcal {X}_1}<||Q_m||_{\mathcal {X}_1}+1,\;\;\mbox{for $m>N(\epsilon )$}. $ We will use (REF ) to prove that $Q_m$ is bounded in $\mathcal {X}_1$ , as follows: First, we derive the inequality $1+M+|| Q_m||_{\mathcal {X}_1}&\ge &M-b <S^\prime (Q_m),Q_m>\ge S(Q_m)-b <S^\prime (Q_m),Q_m>\nonumber \\&=&\left(\frac{1}{2}-b\right)\int _{-L}^{L}\left[c^2\left(Q^\prime _m(z)\right)^2+\omega _0^2Q_m^2(z)-\kappa [Q_m(z+1)-Q_m(z)]^2\right]dz\nonumber \\&-&\frac{a}{p+1}(1-(p+1)b)\int _{-L}^{L}Q_m^{p+1}(z)dz\nonumber \\&\ge & \left(\frac{1}{2}-b\right)\left[c^2 || Q^\prime _m||_{L^2}^2 +(\omega _0^2-4\kappa ) || {Q}_m||_{L^2}^2\right].$ If assumption (i) holds, then $1+M+|| Q_m||_{\mathcal {X}_1}\ge \left(\frac{1}{2}-b\right)c^2 || {Q}_m||_{\mathcal {X}_1}^2,$ and if assumption (ii) holds, then $1+M+|| Q_m||_{\mathcal {X}_1}\ge \left(\frac{1}{2}-b\right)\left(c^2 -C(L)(4\kappa -\omega _0^2)\right)|| {Q}_m||_{\mathcal {X}_1}^2,$ implying that $Q_m$ is bounded in $\mathcal {X}_1$ .", "Hence, there is a subsequence of $Q_m$ (not relabeled) and a $Q\in \mathcal {X}_1$ such that $Q_{m}\rightarrow Q$ weakly in $\mathcal {X}_1$ , so that by Sobolev compact embedding, one has 
the strong convergence $Q_{m} \\rightarrow Q$ in $L^2(-L,L)$ (and in $C([-L,L])$ ).", "Using Hölder's inequality and the embedding $\\mathcal {X}_1 \\subset L^\\infty (-L,L)$ , we obtain $|| Q_m-Q||_{\\mathcal {X}_1}^2&=&\\frac{1}{c^2}\\left(<S^\\prime (Q_m)-S^\\prime (Q),Q_m-Q>\\right.\\nonumber \\\\&-&\\frac{1}{c^2}\\int _{-L}^{L}\\left[\\left(\\omega _0^2(Q_m(z)-Q(z))^2-\\kappa [Q_m(z+1)-Q(z+1)-(Q_m(z)-Q(z))]^2\\right.\\right.\\nonumber \\\\&-&\\left.\\left.a(Q_m(z)-Q(z))^{p+1}\\right]dz\\right)\\nonumber \\\\&\\le & \\frac{1}{c^2}|<S^\\prime (Q_m)-S^\\prime (Q),Q_m-Q>|\\nonumber \\\\&+&\\frac{1}{c^2}\\left((\\omega _0^2+4\\kappa )||Q_m-Q||_{L^2}+a ||Q_m^p-Q^p||_{L^2(-L,L)}\\right) ||Q_m-Q||_{L^2(-L,L)}.$ The first term on the right-hand side of (REF ) converges to zero because by assumption $<S^\\prime (Q_m)-S^\\prime (Q),Q_m-Q>\\rightarrow 0$ as $m\\rightarrow \\infty $ .", "The last converges to zero by strong convergence.", "Thus, $||Q_m-Q||_{\\mathcal {X}_1}=0$ so that $(Q_m)_{m\\in {\\mathbb {Z}}}$ has a strongly convergent subsequence and the proof is finished.", "$\\square $ Lemma 3.2 The functional $S$ is $C^1$ on $\\mathcal {X}_1$ .", "Proof: The functional $S$ can be expressed as $S(q)=\\frac{c^2}{2}(Q,Q)+\\Gamma (Q),$ where $\\Gamma (Q)=\\int _{-L}^{L}\\left[\\frac{\\omega _0^2}{2}Q^2(z)-\\frac{a}{p+1}Q^{p+1}(z)-\\frac{\\kappa }{2}(Q(z+1)-Q(z))^2\\right]dz.$ Continuity of the quadratic term $(Q,Q)$ is obvious.", "By using The embedding $\\mathcal {X}_1\\subset L^\\infty (-L,L)$ which implies that $Q^p\\in \\mathcal {X}_0$ and the Poincaré inequality (REF ) we get the estimate, $||Q^{p}||^2_{\\mathcal {X}_0}=\\int _{-L}^L |Q(z)|^{2p}dz\\le C(L) ||Q||_{\\mathcal {X}_1}^{2p}.$ Then, we have that $\\left|\\Gamma (Q)\\right|\\le \\frac{\\omega _0^2}{2}||Q||_{L^2}^2+\\frac{a}{p+1}C(L) ||Q||_{\\mathcal {X}_1}^{p+1}+2\\kappa ||Q||_{L^2}^2\\le C (L)\\left( \\left(\\frac{\\omega _0^2}{2}+2\\kappa \\right)||Q||_{\\mathcal {X}_1}^2+\\frac{a}{p+1}||Q||_{\\mathcal {X}_1}^{p+1}\\right)<\\infty .$ The Gateaux derivative of $\\Gamma $ exists and is given by $<\\Gamma ^\\prime (Q),h>&=& \\int _{-L}^{L}\\left[\\omega _0^2 Q(z)-a Q^p(z)\\right.\\nonumber \\\\&+&\\left.\\kappa (A_1[Q(z)]-A_2[Q(z)])\\right]h(z)dz.$ To prove that $\\Gamma ^\\prime $ is continuous, we let $||h||_{\\mathcal {X}_1}\\le 1$ and $Q_m \\rightarrow Q$ in $\\mathcal {X}_1$ .", "Then $\\left|<\\Gamma ^\\prime (Q_m)-\\Gamma ^\\prime (Q),h>\\right|&=&\\left|\\int _{-L}^{L}\\left[\\omega _0^2(Q_m(z)-Q(z))-a(Q_m^{p}(z)-Q^{p}(z))\\right.\\right.\\nonumber \\\\&+&\\left.\\left.\\kappa (A_1[Q_m(z)]-A_2[Q_m(z)]-(A_1[Q(z)]-A_2[Q(z)]))\\right]h(z)dz\\right|\\nonumber \\\\&\\le & \\omega _0^2||Q_m-Q||_{L^2}^2+a C\\left(||Q_m||_{L^\\infty },||Q||_{L^\\infty }\\right)||Q_m-Q||_{L^2(-L,L)}^2+2\\kappa ||Q_m-Q||_{L^2(-L,L)}^2\\nonumber \\\\&=& \\left(\\omega _0^2+aC\\left(||Q_m||_{L^\\infty },||Q||_{L^\\infty }\\right)+2\\kappa \\right)||Q_m-Q||_{L^2(-L,L)}^2\\le C \\epsilon ,\\nonumber $ for $m$ sufficiently large.", "Hence $\\left|<\\Gamma ^\\prime (Q_m)-\\Gamma ^\\prime (Q),h>\\right|\\rightarrow 0\\,\\,\\, {\\rm as}\\,\\,\\,m\\rightarrow \\infty ,\\nonumber $ and the proof of the lemma is completed.", "$\\square $ Obviously, $S(0)=0$ , therefore condition (A) of the MPT is satisfied.", "For the remaining conditions (B) and (C), the proofs are given as follows: Proof of (B):      We distinguish the cases (i) $\\omega _0^2\\ge 4\\kappa $ , and (ii) $\\omega _0^2<4\\kappa $ , $c^2> C(L)(4\\kappa -\\omega _0^2)$ .", "(i) With the aid of 
the estimate $\\int _{-L}^{L}\\left[\\frac{a}{p+1}Q^{p+1}(z)+\\frac{\\kappa }{2}[Q(z+1)-Q(z)]^2\\right]dz \\le \\frac{a}{p+1}C(L)||Q||_{\\mathcal {X}_1}^{p+1} +2\\kappa ||Q||_{L^2(-L,L)}^2,\\nonumber $ we get the sufficient condition $\\frac{c^2}{2}||Q||_{\\mathcal {X}_1}^2> \\frac{a}{p+1}C(L) ||Q||_{\\mathcal {X}_1}^{p+1},\\nonumber $ for $S(Q)$ being positive.", "Conclusively, for $||Q||_{\\mathcal {X}_1}$ small enough, say $||Q||_{\\mathcal {X}_1}=\\rho $ , and $0<\\rho <\\left(\\frac{(p+1)c^2}{2a C(L)}\\right)^{1/{(p-1)}}$ there is a $\\alpha >0$ with $S(Q)\\ge \\alpha $ for all $||Q||_{\\mathcal {X}_1}=\\rho $ .", "(ii) In this case we use the estimate $\\int _{-L}^{L}\\left[\\frac{a}{p+1}Q^{p+1}(z)+\\frac{\\kappa }{2}[Q(z+1)-Q(z)]^2\\right]dz \\le \\frac{a}{p+1}C(L)||Q||_{\\mathcal {X}_1}^{p+1} +2\\kappa C(L)||Q||_{\\mathcal {X}_1}^2.\\nonumber $ Using the Poincaré inequality we derive the following condition for $S>0$ : $\\frac{1}{2}\\left(c^2-(4\\kappa -\\omega _0^2)C(L)\\right)||Q||_{\\mathcal {X}_1}^2> \\frac{a}{p+1}C(L) ||Q||_{\\mathcal {X}_1}^{p+1}.\\nonumber $ Hence, for $||Q||_{\\mathcal {X}_1}$ small enough, say $||Q||_{\\mathcal {X}_1}=\\rho $ , and $0<\\rho <\\left(\\frac{(p+1)}{2a C(L)}(c^2-(4\\kappa -\\omega _0^2))\\right)^{1/{(p-1)}},$ there is a $\\alpha >0$ with $S(Q)\\ge \\alpha $ for all $||Q||_{\\mathcal {X}_1}=\\rho $ .", "Proof of (C):      For $||Q||_{\\mathcal {X}_0}\\ne 0$ we observe that $S(tQ)=\\int _{-L}^{L}\\left[\\frac{c^2}{2}t^2\\left(Q^\\prime (z)\\right)^2+\\frac{\\omega _0^2}{2}t^2Q^2(z)-\\frac{a}{p+1}t^{p+1}Q^{p+1}(z)-\\frac{\\kappa }{2}t^2[Q(z+1)-Q(z)]^2\\right]dz \\rightarrow -\\infty ,$ as $t\\rightarrow \\infty $ .", "To summarise, by virtue of the MPT we may state Theorem 3.2 Let either (i) $\\omega _0^2\\ge 4\\kappa $ , or (ii) $\\omega _0^2<4\\kappa $ and $c^2>C(L)(4\\kappa -\\omega _0^2)$ .", "Then the system (REF ) on the periodic lattice $-L\\le n \\le L$ with an on-site potential (REF ) has at least one nontrivial periodic TWS.", "We conclude this section with the remark that positive (negative) periodic TWSs associated with oscillations of $Q(z)$ about the minima $\\tilde{Q}_{\\pm }=\\pm (\\omega _0^2/a)^{1/(p-1)}$ of the potential $V(Q)$ , possessing reflection symmetry $V(Q)=V(-Q)$ , can also be treated using the approach above.", "These TWSs are characterised by oscillations of $Q$ without changes of sign and possess energy $E<0$ .", "One only needs to consider the system (REF ) expressed in the shifted variable $Q\\rightarrow Q\\mp \\tilde{Q}_{\\pm }$ .", "Note that the linear operators $A_{1,2}$ are invariant with respect to the shift operator $Q\\rightarrow Q+Q_0$ ." 
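To make the geometric condition (B) concrete, the admissible mountain-pass radius can be evaluated for a given set of lattice parameters. The short Python sketch below does this for case (i) and checks the hypothesis of case (ii); all numerical values (the speed c, the nonlinearity a, the exponent p, the coupling kappa, the on-site frequency omega_0, and the Poincaré constant C(L)) are illustrative placeholders, not values taken from the paper.

```python
# Minimal numerical sketch of the mountain-pass geometry in condition (B).
# All parameter values below are illustrative placeholders, not values from the paper.

def rho_max_case_i(c, a, p, C_L):
    """Case (i), omega_0^2 >= 4*kappa: rho must satisfy rho < ((p+1)*c^2 / (2*a*C(L)))**(1/(p-1))."""
    return ((p + 1) * c**2 / (2.0 * a * C_L)) ** (1.0 / (p - 1))

def case_ii_hypothesis(c, kappa, omega0, C_L):
    """Case (ii) requires omega_0^2 < 4*kappa together with c^2 > C(L)*(4*kappa - omega_0^2)."""
    return omega0**2 < 4.0 * kappa and c**2 > C_L * (4.0 * kappa - omega0**2)

if __name__ == "__main__":
    c, a, p, C_L = 1.5, 1.0, 3, 2.0      # wave speed, nonlinearity strength, exponent, Poincare constant
    kappa, omega0 = 0.2, 1.0             # chosen so that omega_0^2 >= 4*kappa, i.e. case (i) applies
    print("case (i) admissible radius: 0 < rho <", rho_max_case_i(c, a, p, C_L))
    print("case (ii) hypothesis satisfied:", case_ii_hypothesis(c, kappa, omega0, C_L))
```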
], [ "Thresholds for the average kinetic energy of TWSs of prescribed speed", "For the derivation of lower bounds of the average kinetic energy of TWSs we utilise the fixed point method outlined in Section .", "In the present case, using Theorem REF , we treat the following auxiliary linear, non-homogeneous problem $-Q^{\\prime \\prime }(z)=\\frac{\\kappa }{c^2}\\left\\lbrace [\\Psi (z)-\\Psi (z-1)]-[\\Psi (z+1)-\\Psi (z)]-\\omega _0^2\\Psi (z)+a[\\Psi (z)]^p\\right\\rbrace .$ for some arbitrary fixed $\\Psi \\in \\mathcal {X}_1$ , as an equation of the form (REF ).", "We have the following result.", "Proposition 3.1 For any $\\Psi \\in \\mathcal {X}_1$ , the equation (REF ) has a unique solution $Q\\in D(\\mathcal {L})\\subset \\mathcal {X}_1$ .", "Proof: We reformulate Eq.", "(REF ) as $-Q^{\\prime \\prime }(z)=\\frac{1}{c^2}\\left\\lbrace \\kappa {\\left([A_2[\\Psi (z)]-A_1[\\Psi (z)]\\right)}-\\omega _0^2\\Psi (z)+a[\\Psi (z)]^p\\right\\rbrace :=\\mathcal {F}[\\Psi (z)].$ We may use again the estimate (REF ) for $\\Psi $ , $||\\Psi ^p||^2_{\\mathcal {X}_0}=\\int _{-L}^{L}[\\Psi (z)]^{2p}dz\\le C(L)||\\Psi ||_{\\mathcal {X}_1}^{2p}.$ Then, for the right-hand side of Eq.", "(REF ) we get $||\\mathcal {F}[\\Psi ]||_{\\mathcal {X}_0}&\\le & \\frac{1}{c^2}\\left\\lbrace \\kappa ||A_2[\\Psi ]-A_1[\\Psi ]||_{\\mathcal {X}_0} +C(L)\\omega _0^2||\\Psi ||_{\\mathcal {X}_1}+C_{{{0}}} a||\\Psi ||^{2p}_{\\mathcal {X}_1}\\right\\rbrace \\nonumber \\\\&\\le &\\frac{1}{c^2}\\left\\lbrace 2\\kappa ||\\Psi ||_{\\mathcal {X}_1}+C_1\\omega _0^2||\\Psi ||_{\\mathcal {X}_1}+C_{{{0}}}a ||\\Psi ||^{2p}_{\\mathcal {X}_1}\\right\\rbrace .$ Thus, ${ \\mathcal {F}[\\Psi ] \\in L^2[-L,L]}$ , and due to Theorem REF , Eq.", "(REF ) has a unique solution $Q\\in D(\\mathcal {L})$ .", "$\\square $ We proceed along the lines in Section REF by adapting the steps for the use of the fixed point argument to the present case, that is replacing the hard potential $V(Q)$ in the expressions (REF ) by the soft potential (REF ).", "In particular, we need the following estimate of the difference of the power nonlinearity terms $\\int _{-L}^{L}|s_1^{p}-s_2^{p}|^2&\\le & p^2\\int _{-L}^{L}\\left\\lbrace \\int _0^1|\\xi |^{p-1}|s_1-s_2|d\\theta \\right\\rbrace ^2,$ where $s_1,\\;s_2\\in \\mathbb {R}$ , and $\\xi =\\theta s_1+(1-\\theta )s_2$ , $\\theta \\in (0,1)$ .", "Then, we have that $\\int _{-L}^{L}\\left|[\\Psi _1(z)]^{p}-[\\Psi _2(z)]^{p}\\right|^2dz&\\le & p^2||\\xi ||_{L^\\infty }^{2(p-1)}\\int _{-L}^{L}|\\Psi _1(z)-\\Psi _2(z)|^2dz\\nonumber \\\\&\\le &p^2||\\xi ||_{L^\\infty }^{2(p-1)}C\\int _{-L}^{L}|\\Psi _1^{\\prime }(z)-\\Psi _2^{\\prime }(z)|^2dz,$ for $\\xi (z)=\\theta \\Psi _1(z)+(1-\\theta )\\Psi _2(z)$ .", "For the norm $||\\xi ||_{L^\\infty }$ we have the estimate $||\\xi ||_{L^\\infty }\\le \\theta ||\\Psi _1||_{L^\\infty }+(1-\\theta )||\\Psi _2||_{L^\\infty }\\le \\theta C_*||\\Psi _1||_{\\mathcal {X}_1}+(1-\\theta )C_*||\\Psi _2||_{\\mathcal {X}_1},$ where the constant $C_*>0$ denotes the optimal constant of the embedding $\\mathcal {X}_1\\subset L^{\\infty }(-L,L)$ .", "Therefore, since $\\Psi _1,\\Psi _2\\in \\mathcal {B}_R$ , we have that $||\\xi ||_{L^\\infty }\\le C_*R$ .", "Thus by using (REF ) and (REF ) we deduce the inequality $||[\\Psi _1(z)]^{p}-[\\Psi _2(z)]^{p}||_{\\mathcal {X}_0}\\le p\\sqrt{C}(C_*R)^{p-1}||\\Psi _1-\\Psi _2||_{\\mathcal {X}_1}.$ Then the Lipschitz constant $M$ in the equation equivalent to (REF ) in REF is determined by $M=\\frac{1}{c^2}\\left({{{2}}}\\kappa +{{{C(L)}}}\\omega 
_0^2+p\\sqrt{C(L)} a C_*^{p-1}R^{p-1}\\right).$ Using again the fixed point argument we end up with the following statement: If $M\\sqrt{C(L)}<1$ holds, the unique fixed point is the trivial one.", "Nontrivial solutions exist only if $M\\sqrt{C(L)}>1.$ From Eq.", "(REF ) we derive the following condition for the existence of nontrivial solutions: $R^2>\\left[\\frac{c^2-\\sqrt{C(L)}({{{2}}}\\kappa +{{{C(L)}}}\\omega _0^2)}{C(L)}\\right]^{\\frac{2}{p-1}}\\left(\\frac{1}{p a C_*^{p-1}}\\right)^{\\frac{2}{p-1}}:=T_{\\mathrm {thresh}}.$ We conclude: Theorem 3.5 Consider the system (REF ) on the periodic lattice of $2L$ particles, $-L\\le n\\le L$ with a soft on-site potential (REF ) with reflection symmetry $V(x)=V(-x)$ .", "Every nontrivial periodic travelling wave solution $q_n(t)=Q(n-ct)=Q(z)$ with speed $c$ satisfying $c^2>\\sqrt{C(L)}({{{2}}}\\kappa +{{{C(L)}}}\\omega _0^2)=c_{\\mathrm {crit}}^2,$ must have average kinetic energy $T(Q)=\\frac{1}{2}\\int _{-L}^{L}U^{\\prime }(z)^2dz$ satisfying the lower bound $T_{\\mathrm {thresh}}<2T(Q).$ The relation (REF ) can be regarded as a threshold value criterion for the average kinetic energy in order that travelling waves of speed $c>c^*$ exist." ], [ " TWSs in the case of the finite lattice with Dirichlet boundary conditions ", "In this section we consider solitary TWSs on the finite lattice with Dirichlet boundary conditions.", "Again, the $N+1$ -oscillators of the lattice are placed equidistantly on the interval $\\Omega =(-L,L)$ of length $2L$ , and the boundary conditions read as $q_0=q_N=0.$ Then the natural phase space is the standard Sobolev space $H^1_0(\\Omega ):=\\left\\lbrace Q:\\Omega \\rightarrow \\mathbb {R}\\;\\;:\\;\\;\\Vert Q\\Vert _{H^1_0(\\Omega )}^2=\\int _{\\Omega }|Q(z)|^2dz+\\int _{\\Omega }|Q^{\\prime }(z)|^2dz<\\infty \\right\\rbrace ,$ endowed with the scalar product and induced norm $(Q,P)_{H^1_0(\\Omega )}=\\int _{\\Omega }Q(z)P(z)dz+\\int _{\\Omega }Q^{\\prime }(z)P^{\\prime }(z)dz,\\;\\;||Q||^2_{H^1_0(\\Omega )}=\\int _{\\Omega }Q(z)^2dz+\\int _{\\Omega }Q^{\\prime }(z)^2dz,$ which is the closure of the infinitely differentiable functions with compact support on $\\Omega $ denoted by $C^{\\infty }_0(\\Omega )$ , in the norm (REF ).", "In this setup, we may use the standard compact Sobolev embedding $H^1_0(\\Omega )\\subset L^2(\\Omega )$ and the Poincaré inequality $||Q||^2_{L^2(\\Omega )}\\le C(L) ||Q^{\\prime }||^2_{L^2(\\Omega )},$ which implies the equivalence of norm (REF ) with the norm $||Q||^2_{H^1_0(\\Omega )}=\\int _{\\Omega }Q^{\\prime }(z)^2dz.$ It is also useful to recall that the embedding $H^1_0(\\Omega )\\subset L^p(\\Omega ), \\;\\;1\\le p\\le \\infty ,$ is compact.", "In this setting, solutions of (REF ) are considered as critical points of the action functional $S:\\,H^1_0(\\Omega ) \\rightarrow \\mathbb {R}$ given by $S(Q)=\\int _{\\Omega }\\left[\\frac{c^2}{2}\\left(Q^\\prime (z)\\right)^2+\\frac{\\omega _0^2}{2}Q^2(z)-\\frac{a}{p+1}Q^{p+1}(z)-\\frac{\\kappa }{2}[Q(z+1)-Q(z)]^2\\right]dz.$" ], [ "Existence of TWSs and the Mountain Pass Theorem", "Concerning the existence of solitary TWSs we have the following: Lemma 3.3 Let $\\omega _0^2>4\\kappa $ .", "Then the functional (REF ) satisfies the PS-condition.", "Proof: Assume $S(Q_m)$ is bounded, i.e.", "$|S(Q_m)|\\le M$ for all $m\\in {\\mathbb {N}}$ , and $S^\\prime (Q_m)\\rightarrow 0$ as $m\\rightarrow \\infty $ in $\\left(H_0^1(\\Omega )\\right)^*$ .", "For chosen $b\\in (1/(p+1),1/2)$ and $m$ sufficiently large, we have $1+M+|| 
Q_m||_{H_0^1(\\Omega )}&\\ge &S(Q_m)-b <S^\\prime (Q_m),Q_m>\\nonumber \\\\&=&\\left(\\frac{1}{2}-b\\right)\\int _{\\Omega }\\left[c^2\\left(Q^\\prime _m(z)\\right)^2+\\omega _0^2Q_m^2(z)-\\kappa [Q_m(z+1)-Q_m(z)]^2\\right]dz\\nonumber \\\\&-&\\frac{a}{p+1}(1-(p+1)b)\\int _{\\Omega }Q_m^p(z)dz\\nonumber \\\\&\\ge & \\left(\\frac{1}{2}-b\\right)\\left[c^2 || Q^\\prime _m||_{L^2(\\Omega )}^2 +(\\omega _0^2-4\\kappa ) || {Q}_m||_{L^2(\\Omega )}^2\\right]\\nonumber \\\\&\\ge & \\left(\\frac{1}{2}-b\\right)\\underline{M} || {Q}_m||_{H_0^1(\\Omega )}^2,\\,\\,\\,\\, \\underline{M}=\\min \\left\\lbrace c^2,\\omega _0^2-4\\kappa \\right\\rbrace ,$ implying that $Q_m$ is bounded in $H_0^1(\\Omega )$ .", "Hence, there is a subsequence of $Q_m$ (not relabeled) and a $Q\\in H_0^1(\\Omega )$ such that $Q_{m}\\rightarrow Q$ weakly in $H_0^1(\\Omega )$ , and by Sobolev embedding one has strong convergence $Q_{m} \\rightarrow Q$ in $L^2(\\Omega )$ .", "Furthermore using the Hölder inequality we obtain $|| Q_m-Q||_{H_0^1(\\Omega )}^2&=&\\frac{1}{c^2}\\left(<S^\\prime (Q_m)-S^\\prime (Q),Q_m-Q>\\right.\\nonumber \\\\&-&\\frac{1}{c^2}\\int _{\\Omega }\\left[\\left(\\omega _0^2(Q_m(z)-Q(z))^2-\\kappa [Q_m(z+1)-Q(z+1)-(Q_m(z)-Q(z))]^2\\right.\\right.\\nonumber \\\\&-&\\left.\\left.a(Q_m(z)-Q(z))^{p+1}\\right]dz\\right)\\nonumber \\\\&\\le & \\frac{1}{c^2}|<S^\\prime (Q_m)-S^\\prime (Q),Q_m-Q>|+\\frac{1}{c^2}\\left((\\omega _0^2+4\\kappa )||Q_m-Q||_{L^2(\\Omega )}\\right.\\nonumber \\\\&+& \\left.a ||Q_m^p-Q^p||_{L^{2}(\\Omega )}\\right) ||Q_m-Q||_{L^2(\\Omega )}.$ The two expressions on the right-hand side of (REF ) converge to zero; the first one by assumption and the last one by strong convergence.", "Thus, $||Q_m-Q||_{H_0^1(\\Omega )}=0$ so that $(Q_m)_{m\\in {\\mathbb {N}}}$ has a strongly convergent subsequence and the proof is finished.", "$\\square $ Lemma 3.4 Assume $\\omega _0^2>4\\kappa $ .", "Then the functional (REF ) satisfies the conditions for an application of the MPT.", "Proof: For condition (A) we note that $S(0)=0$ .", "Regarding the remaining conditions (B) and (C) of the MPT we have the following proofs: Proof of (B):      For any $p>1$ , the embedding (REF ) yields $||Q||_{L^{p+1}(\\Omega )}^{p+1}||\\le {C}\\,||Q^\\prime ||_{L^2(\\Omega )}^{p+1}.$ Then we get the estimate $\\int _{\\Omega }\\left[\\frac{a}{p+1}Q^{p+1}(z)+\\frac{\\kappa }{2}[Q(z+1)-Q(z)]^2\\right]dz &\\le & \\frac{a}{p+1}C||Q^\\prime ||_{L^2(\\Omega )}^{p+1}+2\\kappa ||Q||_{L^2(\\Omega )}^2,\\nonumber \\\\&\\le & \\frac{a}{p+1}C||Q||_{H^1(\\Omega )}^{p+1}+2\\kappa ||Q||_{L^2(\\Omega )}^2,$ giving rise to the condition $\\frac{c^2}{2}||Q^\\prime ||_{L^2(\\Omega )}^2+\\frac{\\omega _0^2}{2}||Q||_{L^2(\\Omega )}^2> \\frac{a}{p+1}C||Q||_{H_0^1(\\Omega )}^{p+1}+2\\kappa ||Q||_{L^2(\\Omega )}^2$ for $S$ being positive.", "We infer that if $\\frac{\\underline{m}}{2}||Q||_{H_0^1(\\Omega )}^2> \\frac{a}{p+1} C||Q||_{H_0^1(\\Omega )}^{p+1},\\,\\,\\,\\, \\underline{M}=\\min \\left\\lbrace c^2,\\omega _0^2-4\\kappa \\right\\rbrace , \\nonumber $ is satisfied, then $S(Q)>0$ .", "That is, for $||Q||_{H^1(\\Omega )}$ small enough, say $||Q||_{H_0^1(\\Omega )}=\\rho $ , and $0<\\rho <\\left(\\frac{(p+1)\\underline{M}}{2a C}\\right)^{1/(p-1)}$ there is an $\\alpha >0$ with $S(Q)\\ge \\alpha $ for all $||Q||_{H_0^1(\\Omega )}=\\rho $ .", "Proof of (C):      For $||Q||_{\\mathcal {X}_0}$ one has $S(tQ)=\\int _{\\Omega }\\left[\\frac{c^2}{2}t^2\\left(Q^\\prime (z)\\right)^2+\\frac{\\omega 
_0^2}{2}t^2Q^2(z)-\\frac{a}{p+1}t^4Q^{p+1}(z)-\\frac{\\kappa }{2}t^2[Q(z+1)-Q(z)]^2\\right]dz \\rightarrow -\\infty ,$ as $t\\rightarrow \\infty $ .", "$\\square $ Conclusively, by virtue of the MPT we state: Theorem 3.6 The system (REF ) on the finite lattice with Dirichlet boundary conditions (REF ) and an on-site potential (REF ) has at least one nontrivial solitary TWS.", "While in this paper we do not establish the existence of TWSs in the case of the infinite lattice, we nevertheless may prove the existence of suitable energy thresholds for such solutions if they exist, by implementing again a fixed point approach.", "Such thresholds also hold in the case of the finite lattice with Dirichlet boundary conditions, whose existence was shown in Theorem REF .", "For the brevity of the presentation we focus only on the case of the infinite lattice with the vanishing boundary conditions $\\lim _{|n|\\rightarrow \\infty }q_n=0,$ associated with the energy level $E=0$ .", "Then the natural phase space is the standard Sobolev space $H^1(\\mathbb {R}):=\\left\\lbrace Q:\\mathbb {R}\\rightarrow \\mathbb {R}\\;\\;:\\;\\;\\Vert Q\\Vert _{H^1(\\mathbb {R})}^2=\\int _{\\mathbb {R}}|Q(z)|^2dz+\\int _{\\mathbb {R}}|Q^{\\prime }(z)|^2dz<\\infty \\right\\rbrace ,$ endowed with the scalar product and induced norm $(Q,P)_{H^1(\\mathbb {R})}=\\int _{\\mathbb {R}}Q(z)P(z)dz+\\int _{\\mathbb {R}}Q^{\\prime }(z)P^{\\prime }(z)dz,\\;\\;||Q||^2_{H^1(\\mathbb {R})}=\\int _{\\mathbb {R}}Q(z)^2dz+\\int _{\\mathbb {R}}Q^{\\prime }(z)^2dz.$ In this case, we may use the standard continuous Sobolev embedding $H^1(\\mathbb {R})\\subset L^2(\\mathbb {R})$ and the inequality $||Q||^2_{L^2(\\mathbb {R})}\\le C||Q||^2_{H^1(\\mathbb {R})}.$ In order to derive an energy threshold criterion for the existence of solitary TWSs we conveniently write Eq.", "(REF ) as $-Q^{\\prime \\prime }(z)+{\\frac{\\omega _0^2}{c^2}}Q(z)&=&\\frac{{{1}}}{c^2}\\left\\lbrace \\kappa \\left([Q(z)-Q(z-1)]-[Q(z+1)-Q(z)]\\right)+a[Q(z)]^p\\right\\rbrace \\nonumber \\\\&=&\\mathcal {F}]Q(z)],$ so that the left side defines a strongly monotone operator on $L^2$ .", "Hence, Theorem REF can be applied.", "For every $Q\\in C_0^{\\infty }(\\mathbb {R})$ , setting this time $\\mathcal {L}_0Q=-Q^{\\prime \\prime }(z)+ {{\\omega _0^2}/{c^2}} Q(z)$ , one observes that $(\\mathcal {L}_0Q,Q)_{L^2(\\mathbb {R})}=||Q^{\\prime }||^2_{L^2(\\mathbb {R})}+{\\frac{\\omega _0^2}{c^2}}||Q||^2_{L^2(\\mathbb {R})}\\ge {\\frac{\\omega _0^2}{c^2}}||Q||^2_{L^2(\\mathbb {R})}.$ Obviously, the left-hand side of (REF ) defines an equivalent norm on $H^1(\\mathbb {R})$ , since $\\omega _1^2||Q||_{H^1(\\mathbb {R})}^2\\le (\\mathcal {L}_0Q,Q)_{L^2(\\mathbb {R})}\\le \\omega _2^2||Q||_{H^1(\\mathbb {R})}^2,\\;\\;\\mbox{for every}\\;\\;Q\\in H^1(\\mathbb {R}),$ with $\\omega _1^2=\\min \\left\\lbrace 1,{{\\omega _0^2}/{c^2}}\\right\\rbrace $ and $\\omega _2^2=\\max \\left\\lbrace 1,{{\\omega _0^2}/{c^2}}\\right\\rbrace $ .", "With these preparations we can apply Theorem REF with $\\mathcal {X}_1=H^1(\\mathbb {R})$ and $D(\\mathcal {L}_0)=H^2(\\mathbb {R})$ , and repeat the main lines of proofs of Propositions REF and REF with the following similarities and modifications: Lemma REF remains unchanged due to the continuous embedding (REF ) and the proofs of the respective counterparts of Proposition REF and Theorem REF are almost identical, however some constants will quantitatively change.", "Due to (REF ), here $C(L)=1$ .", "Moreover, the optimal constant $C_*$ will be replaced by 
the optimal constant $C_{1,*}$ resulting from the embedding $H^1(\\mathbb {R})\\subset L^{\\infty }(\\mathbb {R})$ .", "Then, the constant $M$ given in (REF ) is modified as $M_1=\\frac{1}{c^2}\\left({{2}}\\kappa +p a C_{1,*}^{p-1}R^{p-1}\\right).$ Furthermore, due to (REF ) and (REF ), the estimate (REF ) becomes $\\omega _1^2||Y||_1^2\\le ||Y^{\\prime }||^2_{L^2(\\mathbb {R})}+\\omega _0^2||Y||^2_{L^2(\\mathbb {R})}&\\le & ||\\mathcal {F}[\\Psi _1(z)]-\\mathcal {F}[\\Psi _2(z)]||_0\\,||Y||_0\\nonumber \\\\&\\le &M_1||\\Psi _1-\\Psi _2||_1\\,||Y||_1,$ and therefore $||Y||_1^2\\le \\frac{1}{2}||Y||_1^2+\\frac{M_1^2}{2\\omega _1^2}||\\Psi _1-\\Psi _2||_1^2.$ From (REF ), we derive that $||Y||_1^2=||\\mathcal {T}[\\Psi _1]-\\mathcal {T}[\\Psi _2]||_1^2\\le \\frac{M_1^2}{\\omega _1^2}||\\Psi _1-\\Psi _2||_1^2.$ Hence, in the case of vanishing boundary conditions we have Theorem 3.7 Consider the system (REF ) on the infinite lattice with vanishing boundary conditions.", "Every (nontrivial) solitary TWS $q_n(t)=Q(n-ct)=Q(z)$ with speed $c$ satisfying $c^2>\\frac{\\kappa +2\\omega _0^2}{\\omega _1^2}=c_{\\mathrm {crit}}^2,$ must have average kinetic energy $T(Q)=\\frac{1}{2}\\int _{-L}^{L}Q^{\\prime }(z)^2dz$ satisfying the lower bound $2T(Q)>T_{\\mathrm {thresh}}:=\\left[\\frac{c^2\\omega _1^2-(\\kappa +2\\omega _0^2)}{p a C_{1,*}^{p-1}}\\right]^{\\frac{2}{p-1}}.$ In the case of the finite lattice, we have the following Corollary 3.1 The result of Theorem REF remains valid in the case of the finite lattice supplemented with Dirichlet boundary conditions (REF ) (modulo the modifications of constants of Sobolev embeddings).", "We remark that further useful quantifications of the norm and energy thresholds can be provided as explicit values of the optimal constants $C_*$ and $C_{1,*}$ of the Sobolev embeddings used in our proofs (see in [49]).", "Data Availability Statement The article has no associated data.", "Authors Declarations The authors have no conflicts to disclose.", "Authors Contributions Statement All authors contributed equally to the study conception, design and writing of the manuscript.", "Material preparation, data collection and analysis were performed equally by all authors.", "All authors read and approved the final manuscript." ] ]
2212.05575
[ [ "Nearby voids and their galaxies: recent progress and prospects" ], [ "Abstract Voids occupy about 3/4 of the volume of the Universe and contain about 15% of its mass.", "Due to various observational selection effects, these structure elements and galaxies populating voids, are highly under-explored.", "This especially relates to the lowest mass galaxies which comprise the main void population.", "Studying the nearby voids allows us to improve our understanding of the most elusive void objects.", "We present the brief overview of the current status and the prospects of the study of the nearest voids and their galaxies.", "First, we summarize the pioneer study of a hundred galaxies residing in the nearby Lynx-Cancer void which clearly evidence for the slower evolution of void galaxies and finds also the unusual very metal-poor and gas-rich dwarfs.", "Then we describe the recently defined sample of the nearby voids within the sphere with R = 25 Mpc and a sample of 1350 galaxies residing in these voids (~20% of all galaxies within this volume).", "We discuss the current results obtained for several directions of the study of this sample.", "They include: the search for Very Young Galaxies, the study of HI properties, the clustering of void galaxies and its relation to the void substructures, and the unbiased study of 260 void galaxies within the Local Volume (R < 11 Mpc).", "Altogether, this opens a perspective way to address the suggested peculiarities of the void galaxy formation and evolution.", "Finally, we briefly overview the expected advancements in the void galaxy studies related to the upcoming new facilities." ], [ "Introduction. Voids and their galaxies", "Voids represent one of the four types of cosmic web elements, ranked according to the decreasing matter density: nodes, filaments, walls, and voids (e.g., Cautun et al., 2014 [1]).", "Having a matter density of $\\sim $ 1/5 of the mean density of the Universe, voids are unique objects for cosmology since they have substructure still growing in the linear regime so that they can be treated as “time machines” and “cosmological microscopes” (Aragon-Calvo & Szalay, 2013 [2]).", "The Nearby Void Galaxy Sample (d $<$ 25 Mpc) (Pustilnik et al., 2019 [3]) and its subsample within the Local Volume (d $<$ 11 Mpc) directly relate to the near-field cosmology.", "They allow us to probe in the greatest detail the processes of the Mpc-scale structure formation from the initial perturbations and to witness the recent galaxy build-up." ], [ "Three main directions in void studies", "1.", "Statistics of observed voids as an ensemble of separate entities.", "Comparison to cosmological models and simulations.", "2.", "Studying of void substructure and dynamics.", "This is based on model simulations.", "To address these issues observationally, it takes to identify many galaxies per void as tracers of the substructure.", "Currently, this is beyond the available opportunities.", "However, the expected technological advancements, related to the upcoming new instruments (mentioned later), will open this observational direction.", "3.", "Formation and evolution of void galaxies.", "Currently, we already have reasonable observational support of these studies with large telescopes.", "However, the future new facilities will greatly advance this direction as well.", "This will allow us, in particular, to address the issue of “dark” protogalaxies and the minimal baryonic mass of galaxies survived cosmic reionization." 
], [ "Previous studies of voids and their galaxies", "To study void structures and galaxy evolution, we need large and deep samples of void galaxies.", "Previous mass studies of void galaxies dealt mainly with the large “distant” voids ($D\\sim 100$ –20 Mpc), in which the SDSS-based galaxy samples probed the top of the void luminosity function ($-20 < M_{\\rm B,r} < -17$) (e.g.", "Rojas et al.", "2005 [4], Kreckel et al.", "2012 [5]).", "The caveats of these works are the shallowness in galaxy luminosities and the small number of galaxies per void.", "Differences in the properties of void and wall galaxies for the brighter void galaxies are small: void objects are more gas-rich, and have higher SFR.", "This suggests that the “massive” part of void galaxies are less sensitive to the global environment.", "Density of galaxies grows from the void centre to its border.", "The central density is $\\sim $ 0.1 of the mean density in the Universe.", "The main (dwarf, say M$_{\\rm B} > -16$ ) void population remained till the recent time largely unexplored.", "However, our earlier study evidences for the slower average evolution and on the unusual properties of a part of void objects (e.g.", "Pustilnik et al.", "2010, 2011, 2016 [6], [7], [8])." ], [ "Importance of the studying of nearby voids", "To study void structure, we need many galaxies which delineate it.", "Due to the raising galaxy luminosity function, the number of galaxies grows with decreasing luminosity.", "The effect of environment also gets stronger for lower mass galaxies.", "Therefore, to study substructure of voids, and to understand the diversity of their galaxy properties and evolutionary scenarios, we need large and deep galaxy samples.", "In typical wide-field redshift surveys, the limiting apparent magnitudes of B$_{\\rm tot} \\sim 18-19$ allows one to collect faint void dwarfs (to M$_{\\rm B} = -10$ , $-12$ mag) only in the surroundings of the Local Volume (LV), at distances of $\\lesssim $ 20–25 Mpc.", "Thus, studying of low-mass dwarfs in voids dictates the need of defining void regions adjacent the LV." 
], [ "Brief summary of the study of the Lynx–Cancer void galaxy sample", "A sample of a hundred galaxies in the nearby Lynx–Cancer void (d$_{\\rm centre} \\sim $ 18 Mpc) was formed (Pustilnik & Tepliakova 2011 [9]) and studied in a series of 10 papers (Pustilnik et al.", "2016 [8] and references therein).", "The main results and conclusions of this study are as follows: (a) the major fraction of void galaxies are low surface brightness (LSB) dwarfs; (b) the average M(gas)/L$_{\\rm B}$ is a factor of $\\sim $ 1.4 larger than that of the reference sample; (c) gas metallicity Z(gas) at a fixed M$_{\\rm B}$ is reduced, on average, by a factor of $\\sim $ 1.4 (in comparison to the LV reference sample from Berg et al., 2012 [10]); (d) in addition, there exists a small group of “unusual” low-mass dwarfs.", "They have extremely low Z(gas) $\\sim $ Z$_{\\odot }$ /50–Z$_{\\odot }$ /30, reduced by a factor of 2–5 for their luminosity, very high M(gas)/M(bary) $\\sim $ 0.97–0.99, and blue colours of the periphery, corresponding to the ages of the oldest visible stellar population of 1–3 Gyr.", "Conclusions.", "(a) Void galaxies as a whole, on average appear less evolved.", "This implies either the slower secular evolution or/and delayed galaxy formation.", "(b) There exists a small number ($\\sim $ 10%) of the least massive void dwarfs with very low gas metallicities and other extreme properties, indicating their early stages of evolution." ], [ "eXtremely Metal-Poor (XMP) galaxies. Relation to voids", "XMP galaxies (conditionally with Z(gas) $<$ Z$_{\\odot }$ /30) are very rare in the nearby Universe.", "Since the discovery in 1970 of the first such object, the blue compact galaxy IZw18 (Sargent & Searle, 1970 [11]), there have been numerous attempts to find more such galaxies.", "The main motivation was that they present a unique opportunity to study in detail the processes in metal-poor gas and stars typical of the conditions in galaxies in the early Universe.", "Only a handful of such galaxies were identified among hundreds of thousands star-forming emission-line galaxies, mainly from the SDSS spectral survey (e.g., Izotov et al.", "2019 [12]).", "As the follow-up analysis of the spatial positions of the known XMP dwarfs has shown, the majority of them, including the prototype IZw18, reside in voids.", "Having in mind also our results on the galaxy population in the Lynx-Cancer void, it was clear that the search for XMP galaxies in voids was promising." 
], [ "Advancement in the studying of nearby voids and their galaxies", "Based on the above results, it was tempting to extend the sample of a hundred galaxies in the Lynx-Cancer void and to study the phenomenon of void galaxies on a much greater statistical ground.", "1.", "In 2017, we identified 25 voids within a sphere with R $<$ 25 Mpc and formed a sample of 1350 galaxies residing in these voids (Nearby Void Galaxies, NVG, Pustilnik et al.", "2019 [3]).", "2.", "In 2017–2020, we undertook a search for new XMP dwarfs among the least luminous NVG objects, namely those with M$_{\\rm B} > -14.2$  mag.", "From 60 additionally preselected candidates, we found via BTA and SALT spectroscopy 30 new dwarfs with Z(gas) = (0.02–0.04) Z$_{\\odot }$ (Pustilnik et al.", "2020, 2021 [13], [14]).", "These dwarfs serve as a basement for studying the diversity of low-metallicity galaxies and various issues of galaxy evolution in voids.", "Part of them resemble in their properties predicted in simulations the so-called Very Young Galaxies (VYG, Tweed et al.", "2018 [15]), in which the majority of stars are formed during the last $\\sim $ 1 Gyr." ], [ "New project: studying void galaxies in the Local Volume", "The Local Volume (LV, R $<$ 11 Mpc) and the sample of the LV $\\sim $ 1250 galaxies (Karachentsev et al.", "2013 [16]) represent a very important reference sample used for comparison with models of the nearby Universe in numerical cosmological simulations.", "However, there is a significant biasLow-redshift lowest-metallicity star-forming galaxies in the SDSS DR14 in the observational studies of the sample galaxies, which mostly include those from typical groups.", "This bias was caused by the absence of the information on voids in the LV, with except of the well known giant Tully void with $\\sim $ 15 galaxies catalogued within its boundaries.", "The creation of the nearby void galaxies sample [3] allowed us to trace the part of the nearby voids filling the Local Volume.", "As a result, we separated a subsample of 260 void galaxies residing within the LV.", "The study of this sample is being conducted in several stages.", "The first stage (2020–2022), which is almost finished, started from a comparative study of gas metallicity in 61 the least luminous (M$_{\\rm B} > -13$  mag) void galaxies within the LV with that for the reference sample of the late-type galaxies in the LV from [10].", "The second, ongoing, stage (2022–2024?)", "includes the spectral study of all the 130 LV void galaxies with M$_{\\rm B} > -14.3$  mag, which is the median of the luminosity distribution for this sample.", "For the great majority of the studied void galaxies, their Z(gas) falls into the low-metallicity region (conditionally, Z(gas) $<$ Z$_{\\odot }$ /5, or 12+log(O/H) $<$ 8.0 dex).", "However, only for a minor part of the studied void galaxies, O/H(gas) was determined via the direct method when the temperature-sensitive faint line [Oiii]$\\lambda $ 4363 was detected.", "For the remaining objects, O/H was determined via empirical estimators based on the strong lines of Oxygen and Hydrogen.", "Intermediate summary To date, spectra of $\\sim $ 76 galaxies of this subsample are available, mostly thanks to our observations.", "Their gas metallicity appears to be in the range of 12+log(O/H) = 7.05–8.0 dex.", "Among others, several new XMP dwarfs are discovered.", "In Figure REF we show the relation of 12+log(O/H) versus M$_{\\rm B}$ for 73 LV void galaxies with the most reliable data in comparison with the reference sample 
of late-type galaxies from Berg et al.", "2012 [10].", "It is drawn as the solid line, along with two parallel dash-dotted lines illustrating the scatter of the log(O/H) values of the reference sample around the linear regression ($\\pm $ 0.15 dex).", "As discussed in Pustilnik et al.", "(2016, 2021) [8], [14], void galaxies have, on average, a reduced value of gas O/H.", "Furthermore, its scatter is substantially elevated.", "Figure: Relation of 12+log(O/H) versus M$_{\\rm B}$ for 73 void galaxies residing in the LV: black octagons with error bars.", "The red solid line shows the linear regression for the reference sample of late-type LV galaxies [10].", "Two parallel dash-dotted lines show the r.m.s.", "scatter of the reference sample around the linear regression ($\\pm $ 0.15 dex)." ], [ "Large scatter of metallicity in void galaxies. Clustering in voids and its possible implications", "The elevated scatter of gas metallicity for a given M$_{\\rm B}$ in void galaxies seemingly indicates additional factors affecting secular evolution.", "It is also possible that for a fraction of the observed void galaxies their reduced gas metallicity can be a short-term effect due to the localized induced star formation episodes related to the accretion of a “primordial” gas blob (e.g.", "Ceverino et al., 2016 [17]).", "The galaxies of the reference sample from Berg et al.", "(2012) [10] belong mostly to the typical groups of the LV.", "Their mutual interactions induce elevated star formation and, thus, lead to additional production of metals.", "Void galaxies cluster similarly to non-void ones, but with substantially reduced amplitudes on all scales.", "Depending on their local environment, their properties can vary significantly.", "That is, the observed properties of void galaxies are the product of the interplay between the global and local environments.", "There are several topics for a deeper insight into the void galaxy clustering and the related peculiarities of their evolution.", "They include the following: the identification of various galaxy associations (pairs, triplets, quartets, and other aggregates, including a dominant host galaxy) as the probable nodes of the void web; the identification of the unbound galaxies at mutual distances of $<$ 0.5–1 Mpc as the probable tracers of void filaments.", "Chemical evolution in the “massive” aggregates in voids may be more similar to that in the non-void groups.", "E.g., sub-luminous hosts in voids can trigger SF and accelerate their companions' evolution via tidal interaction.", "There are hints of the existence of XMP dwarf associations in voids.", "That is, some void regions may turn out to be the birthplaces of “young” galaxies.", "This would be an important finding.", "However, a much larger sample of such XMP dwarf associations is needed to get reliable results."
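The metallicity comparison discussed above can be reproduced schematically: fit a linear relation of 12+log(O/H) against M_B to a reference sample and measure each void galaxy's offset from it. In the sketch below all data arrays are made-up placeholders (not the published measurements), and numpy.polyfit stands in for whatever fitting procedure was actually used.

```python
import numpy as np

# Schematic version of the figure comparison: linear fit of 12+log(O/H) against M_B for a
# reference sample, then offsets of void galaxies from that regression.
# The arrays below are made-up placeholders, NOT the published measurements.
MB_ref  = np.array([-18.0, -16.5, -15.0, -13.5, -12.0])
OH_ref  = np.array([ 8.30,  8.10,  7.95,  7.80,  7.60])
MB_void = np.array([-14.5, -13.0, -12.5, -11.0])
OH_void = np.array([ 7.70,  7.45,  7.60,  7.10])

slope, intercept = np.polyfit(MB_ref, OH_ref, 1)      # reference regression
offsets = OH_void - (slope * MB_void + intercept)      # void-galaxy residuals

print(f"reference relation: 12+log(O/H) = {slope:.3f}*M_B + {intercept:.2f}")
print("void offsets [dex]:", np.round(offsets, 2))
print(f"mean offset = {offsets.mean():.2f} dex, scatter = {offsets.std(ddof=1):.2f} dex")
```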
], [ "Gas in void galaxies", "Atomic gas (Hi and HeI) in the majority of void galaxies appears to be the dominant component of their baryonic matter.", "Therefore, its study is an obligatory step in order to understand the properties of void objects.", "We mapped tens gas-rich void dwarfs in the Hi21-cm line using the Giant Metrewave Radio Telescope (GMRT).", "In Figure REF we present such maps for three isolated XMP dwarfs in the nearby voids.", "The disturbed morphology of Hi gas in AGC124629, AGC208397, and AGC239144 indicates its non-equilibrium state and hints the continuing build-up of their gas body.", "This observed phenomenon seems to get support from modern high-resolution cosmological simulations of void galaxies.", "At the same time, this unprocessed “primordial” inflowing gas should reduce the total galaxy metallicity, playing as a factor of slower secular evolution." ], [ "Study of nearby voids: new horizons", "The new facilities coming during the next 5–10 years will qualitatively advance studies in the area.", "1.", "The SKA and ngVLA will increase the sensitivity to the low-baryon-mass objects by orders of magnitude, increasing the current $\\sim $ 1300 known Local Volume galaxies, 260 of them in voids, to tens of thousands LV objects and thousands of void galaxies with masses M(HI) $\\lesssim 10^{6}$ M$_{\\odot }$ (if they exist).", "This progress will make it possible: – to probe the mass function to this limit, – to uncover the tenuous substructure of voids, – to discover “dark” galaxies (conditionally with M*/M(gas) $<$ 0.01), – to address SF in very atypical conditions.", "2.", "The JWST and then the E-ELT and TMT will expand the tip-RGB distances to at least D$\\sim $ 25 Mpc, greatly improving the accuracy of void galaxy positions, and will allow us to probe the detailed SF histories and fractions of the old and younger stellar populations.", "New deep optical and NIR surveys: 3.", "The DES imaging survey and its clones and deep NIR surveys will allow us to better determine the parameters of global stellar populations via multi-colour photometry and SED fitting.", "4.", "Extending of the SDSS spectral survey to the Southern hemisphere will substantially improve spectral data and open the opportunity to address the metallicity of many new void dwarfs and improve the completeness of the census of nearby voids." ], [ "Conclusion", "The deep studying of dwarf galaxies in the nearby voids is in many aspects challenging even for the largest ground-based telescopes.", "However, thanks to their proximity, this gives us valuable information in intimate details of galaxy evolution and star formation at the extreme conditions partly related to the conditions in the early Universe.", "Acknowledgements The work on the presented research programs has been supported during the recent years by grants of the Russian Foundation for Basic Research (No.", "18-52-45008) and the Russian Science Foundation (No.", "22-22-00654, the part related to the Local Volume void galaxies)." ] ]
2212.05640
[ [ "Gravity from the Determinant of the Energy-Momentum: Astrophysical\n Implications" ], [ "Abstract Determinants of the second-rank tensors stand useful in forming generally invariant terms as in the case of the volume element of the gravitational actions.", "Here, we extend the action of the matter fields by an arbitrary function $f(D)$ of the determinants of their energy-momentum, and the metric, $D=|\\textbf{det}.T|/|\\textbf{det}.g|$.", "We derive the gravitational field equations and examine the nonlinear terms induced by the determinant, specifically, the inverse of the energy-momentum tensor.", "We also show that these extensions require a nonzero stress-energy tensor for the vacuum.", "We propose a scale-free model, $f(D)=\\lambda D^{1/4}$, and show how it induces the familiar invariant terms formed by the trace of the energy-momentum tensor by expanding the action around the stress-energy of the vacuum.", "We study the hydrostatic equilibrium equations for a neutron star by providing relevant values of the dimensionless constant $\\lambda$.", "We show that the differences from the predictions of general relativity, in the mass-radius relations, which are sensitive to the equations of state, are conspicuous for $\\lambda \\sim -10^{-2}$.", "We also show that the model does not affect the predictions on the primordial nucleosynthesis when it is applied to the early radiation era.", "This novel and unfamiliar type of gravity-matter coupling can lead to a rich phenomenology in gravitational physics." ], [ "Introduction", "Despite the great success of General Relativity (GR) as a theory of gravity, it is still facing various challenges.", "Theoretical problems such as spacetime singularities and the incomplete description of the theory itself at the quantum level, as well as from the observational data which brings to the forefront the problems of the dark sector, may be an indication to the need for going beyond GR [1], [2], [3].", "There have been many attempts to modify GR in the last few years.", "First of all, gravity, as presently understood, is identified by the curvature of spacetime which in turn results from the distribution of matter fields via their energy-momentum tensor $T_{\\mu \\nu }$ .", "Therefore, there are ways to modify GR either from the pure geometric (curvature) or from the matter (energy-momentum) parts.", "One line of thought seeks to modify the geometric sector solely and consider scalar curvature invariant terms added to the Einstein-Hilbert action.", "The resulting theories such as the famous $f(R)$ gravity have been extensively studied and considered for a possible explanation of various cosmological problems [4].", "The other line of thought is to modify the matter sector.", "In this way, the extension largely follows from the energy-momentum tensor of matter fields by adding a general function of its trace $T$ or its square $T_{\\mu \\nu }T^{\\mu \\nu }$ .", "Among these, are the familiar $f(R, T, T_{\\mu \\nu }T^{\\mu \\nu }, \\ldots )$ constructed from both spacetime curvature $R$ and the stress-energy momentum tensor [5].", "The shared property of all these terms is that they are formed by contracting spacetime indices, a standard procedure that allows the covariant character of the theories to manifest both generally and locally.", "However, it is known that the determinant of the spacetime metric tensor is also crucial for the overall actions to be generally invariant.", "It turns out that not only the metric but any tensor of second rank can be 
considered.", "Thereby, the determinant of rank-two tensors are supported by the principle of general invariance, and therefore are allowed in the generalized gravitational actions unless forbidden by a specific symmetry.", "For instance, the square-root of the determinant of the Ricci tensor (usually referred to Eddington action) leads to the same field equations which can be derived from the standard Einstein-Hilbert action.", "Interestingly, the Ricci-determinant can be combined with the metric-determinant to finally form a scalar that is also relevant to gravity [6], [7].", "Motivated by these determinant-based constructions, it is worth considering the matter sector as well by establishing a new type of matter-geometry interaction through its stress-energy momentum tensor.", "In this paper, we will consider the determinant of the energy-momentum tensor $|\\textbf {det}.T|$ for the possible extension of the matter sector.", "The common practice is to build the volume element from the metric density, $|\\textbf {det}.g|^{1/2}$ as in GR, and therefore gravity from the determinant of the energy momentum will be based on a general function $f({D})$ of the scalar ${D}=|\\textbf {det}.T|/|\\textbf {det}.g|$ .", "Besides the fundamental difference from the usual extensions, the presence of the determinant will induce the inverse of the energy-momentum of matter fields in the equations of motion.", "In the cosmological context where the sources are approximated by perfect fluids, this contribution will in turn induce the inverse of the equation of state of the species [8].", "Generally speaking, the inverse of the energy-momentum tensor is defined only when the determinant does not vanish everywhere, and therefore one has to consider the nonzero vacuum energy (the cosmological constant) in the absence of matter fields.", "Thus, at large scales, the solutions of these models are not completely flat but (anti-) de Sitter spacetimes.", "After setting up the general framework, we propose a specific model in which the coupling constant is dimensionless, i.e.", "$f({D})=\\lambda \\, {D}^{1/4}$ .", "We adapt the resulting field equation for a static spherically symmetric background and study the corresponding stellar structure.", "In order to constrain our coupling constant, we apply the model to the famed mass-radius relation of a neutron star.", "The latter has been a viable accommodation for testing gravity models at the strong field regime.", "However, since the equation of state is not rigorously constrained, we will study our model through various, mainly four different equations of state.", "We will solve the stellar structure equations numerically, and based on the resulting mass-radius relations we show how the predictions of our model become distinguishable from those of GR.", "This obviously depends on the strength of matter-gravity coupling induced by the determinant of the stress energy.", "Guided by the maximum measured mass and radius of the neutron star, we deduce the relevant constraints on the free parameter of the model.", "We will also highlight the possible effects on primordial nucleosynthesis.", "The rest of the article is organized as follows: in section REF we set up the general formalism starting from the action principle that includes the determinant of the energy-momentum.", "In section REF , we propose the scale-independent model and apply it to the perfect fluid sources.", "We then study the stellar structure in REF and conclude in section .", "In this section, we will 
introduce our total gravitational action which includes the standard GR action (Einstein-Hilbert) plus matter and extend it with the determinant of the energy-momentum.", "We will derive the associated field equations by varying the action with respect to the main field (the metric tensor.)", "This can also be realized in the Palatini formalism if the GR action is written in terms of both metric and a symmetric connection as independent variables.", "In this paper, we will be interested only in the former.", "The invariant action reads $S =&&\\int d^{4}x \\sqrt{|\\textbf {det}.g|}\\,\\left\\lbrace \\frac{1}{2\\kappa }\\left(R-2\\Lambda _{0}\\right) + L^{\\text{m}}[g]\\right\\rbrace \\nonumber \\\\&&+\\int d^{4}x \\sqrt{|\\textbf {det}.g|}\\,f({D}),$ where $\\kappa =8\\pi G$ , with $G$ being Newton's constant.", "In the first line of this action we have the Ricci scalar $R$ , the cosmological constant $\\Lambda _{0}$ , and the Lagrangian density of matter fields $L^{\\text{m}}[g]$ .", "The latter describes the minimal coupling of matter to gravity, i.e.", "through only the metric.", "The novel quantity in the above action is $f({D})$ which is a general function of the scalar ${D} \\equiv \\frac{|\\textbf {det}.T|}{|\\textbf {det}.g|}~.$ where $\\textbf {det}.T$ is the determinant of the energy-momentum tensor $T_{\\alpha \\beta }$ .", "This is given by its definition $\\textbf {det}.T=\\frac{1}{4!", "}\\epsilon ^{\\alpha \\beta \\gamma \\rho }\\epsilon ^{\\bar{\\alpha }\\bar{\\beta }\\bar{\\gamma }\\bar{\\rho }}T_{\\alpha \\bar{\\alpha }}T_{\\beta \\bar{\\beta }}T_{\\gamma \\bar{\\gamma }}T_{\\rho \\bar{\\rho }},$ where $\\epsilon ^{\\alpha \\beta \\gamma \\rho }$ is the anti-symmetric Levi-Civita (permutation) symbol.", "Like the determinant of any second rank tensor, the quantity (REF ) transforms identically to $\\textbf {det}.g$ , thus, the quantity ${D}$ transforms like a scalar and then the total action is invariant under general coordinate transformation.", "In varying the action, one has to take into account that the energy-momentum depends on the metric, hence, it is worth giving the expression for the variation of its determinant.", "This reads $\\delta _{g}\\textbf {det}.T=\\textbf {det}.T\\left(T^{\\text{inv}}\\right)^{\\mu \\nu }\\delta _{g}T_{\\mu \\nu },$ where $\\left(T^{\\text{inv}}\\right)^{\\mu \\nu }$ is the inverse of $T_{\\mu \\nu }$ such that $\\left(T^{\\text{inv}}\\right)^{\\nu \\alpha }T_{\\mu \\alpha }=\\delta ^{\\nu }_{\\,\\mu }$ , and it is given by its definition as $\\left(T^{\\text{inv}}\\right)^{\\mu \\nu }=\\frac{1}{3!", "}\\frac{1}{\\textbf {det}.T}\\epsilon ^{\\mu \\alpha \\beta \\gamma }\\epsilon ^{\\nu \\bar{\\alpha }\\bar{\\beta }\\bar{\\gamma }}T_{\\alpha \\bar{\\alpha }}T_{\\beta \\bar{\\beta }}T_{\\gamma \\bar{\\gamma }}.$ Therefore, by varying the total action (REF ), the gravitational field equations are obtained as $G_{\\mu \\nu }=-\\Lambda _{0} g_{\\mu \\nu }+\\kappa T_{\\mu \\nu }+\\kappa f({D})g_{\\mu \\nu }+2\\kappa {D}f^{\\prime }({D})\\mathcal {T}_{\\mu \\nu } \\nonumber \\\\$ where $G_{\\mu \\nu }$ is the Einstein tensor, $T_{\\mu \\nu }=L^{\\text{m}}g_{\\mu \\nu }-2\\delta L^{\\text{m}}/\\delta g^{\\mu \\nu }$ is the energy-momentum tensor of matter, and $f^{\\prime }({D})=df/d{D}$ .", "The last term is given by $\\mathcal {T}_{\\mu \\nu }=&&-g_{\\mu \\nu }+L^{\\text{m}}\\left(T^{\\text{inv}}_{\\mu \\nu } -\\frac{1}{2}g_{\\mu \\nu }T^{\\text{inv}} \\right)+\\frac{1}{2}T^{\\text{inv}} T_{\\mu \\nu }\\nonumber \\\\&&+ 2 
(T^{\\text{inv}})^{\\alpha \\beta }\\frac{\\delta ^{2}L^{\\text{m}}}{\\delta g^{\\alpha \\beta }\\delta g^{\\mu \\nu }}.$ where $T^{\\text{inv}}$ is the trace of $(T^{\\text{inv}})^{\\mu \\nu }$ (not to be confused with the inverse of the trace of $T_{\\mu \\nu }$ ), and $T^{\\text{inv}}_{\\mu \\nu }=g_{\\alpha \\mu }g_{\\beta \\nu }\\left(T^{\\text{inv}}\\right)^{\\alpha \\beta }$ .", "Therefore, invoking the determinant of the stress-energy tensor in the gravitational action modifies the Einstein's field equations as follows: First, the geometrical part (curvature terms) is not altered since matter field are assumed to be coupled only to the metric.", "Then, the Bianchi identity, $\\nabla ^{\\mu }G_{\\mu \\nu }=0$ , can be easily applied.", "However, this will certainly lead to an extended energy-momentum conservation equation.", "An important contribution to the modified matter sector originates from the inverse of the energy-momentum tensor.", "Hence, the determinant of the energy-momentum tensor should not vanish.", "This is guaranteed if one considers the non-vanishing cosmological constant which is highly supported by observations.", "Indeed, the total energy-momentum tensor contains, in addition to matter fields' sources, the stress-energy tensor of the vacuum.", "$T_{\\mu \\nu }^{(\\text{tot})}= T_{\\mu \\nu }^{\\text{(vac)}}+T_{\\mu \\nu }^{(\\text{i})},$ where $T_{\\mu \\nu }^{(\\text{i})}$ is the energy-momentum tensor of all fluid types (matter and radiation) in the universe.", "The tensor $T_{\\mu \\nu }^{\\text{(vac)}}=\\mathcal {E}g_{\\mu \\nu }$ is the stress-energy of the vacuum where $\\mathcal {E}$ being the vacuum energy density.", "The latter contains the bare cosmological constant $\\Lambda _{0}$ and all non-vanishing contributions from the ground states of the quantum fields as well as from phase transitions (see Refs [9] for detailed reviews).", "Cosmological observations imply a value around $10^{-10}\\, \\text{erg}/\\text{cm}^{3}$ but not zero for the vacuum energy density.", "Thus, even in the absence of matter (and radiation) sources, i.e., when $T_{\\mu \\nu }^{(\\text{i})}=0$ , the energy-momentum tensor and its determinant do not vanish thanks to the cosmological constant term (the vacuum energy)." 
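The tensor algebra used above is easy to check numerically: Jacobi's formula gives the first-order variation of the determinant as det T times the inverse contracted with the perturbation, and the Levi-Civita (cofactor) expression for the inverse coincides with the ordinary matrix inverse for an invertible T. A minimal NumPy sketch with a random symmetric tensor, not tied to any specific matter model, is given below.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random, symmetric, non-degenerate 4x4 "stress tensor" (pure algebra check, no physics input).
A = rng.normal(size=(4, 4))
T = 0.5 * (A + A.T) + 4.0 * np.eye(4)       # diagonal shift keeps det(T) safely away from zero
assert abs(np.linalg.det(T)) > 1e-6

detT = np.linalg.det(T)
T_inv = np.linalg.inv(T)                    # equals the cofactor (Levi-Civita) expression for the inverse

# First-order variation check: delta(det T) ~ det T * (T_inv)^{mu nu} * delta T_{mu nu}
B = rng.normal(size=(4, 4))
dT = 1e-6 * 0.5 * (B + B.T)
lhs = np.linalg.det(T + dT) - detT
rhs = detT * np.einsum("mn,mn->", T_inv, dT)
print("relative error:", abs(lhs - rhs) / abs(lhs))   # small, i.e. the difference is second order in dT
```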
], [ "The case of perfect fluids", "Before proposing any particular model $f({D})$ , one can calculate and examine the quantity ${D}$ in terms of all sources of gravity at hand.", "If the latter are described by perfect fluids then one writes $T_{\\mu \\nu }=\\sum _{a} (\\rho _{a} + P_{a})u_{\\mu }u_{\\nu }+\\sum _{a}P_{a}g_{\\mu \\nu },$ which contains all the content of the universe, namely, nonrelativistic matter and radiation as well as the vacuum (cosmological constant).", "With all the contributions, one finds ${D}=\\left| \\left(\\sum _{a}\\rho _{a}\\right)\\left(\\sum _{a}P_{a}\\right)^{3}\\right|.$ We notice here that unlike the case of models extended by functions of the trace or the square of the energy-momentum, the contribution of the pressure controls this new quantity at the level of the action.", "However, it is worth noticing again that the inverse of both pressure and energy density will come out due to the emergence of the inverse of the energy-momentum after the variation of the action.", "That being said, we should emphasize again that, in general, the total energy-momentum tensor involves a nonzero vacuum energy.", "In fact, the last expression of the determinant can be written for matter and radiation ($a=\\text{i}$ ), and the vacuum ($a=\\text{vac}$ ) as ${D}=\\left|\\left(\\sum _{\\text{i}}\\rho _{\\text{i}} +\\mathcal {E}\\right)\\left(\\sum _{\\text{i}}P_{\\text{i}} -\\mathcal {E}\\right)^{3}\\right|.$ Hence, for matter (or radiation) such as the case of the interior of massive objects, the cosmological constant is negligible, whilst in the vacuum (where $\\rho _{\\text{i}}$ and $P_{\\text{i}}$ vanish), the cosmological constant becomes important, and in this case ${D}= \\mathcal {E}^{4}$ is nonzero.", "In the subsequent section we will propose a scale free-model; the first model of the energy-momentum-determinant gravity that we consider in practice." 
], [ "Scale-free model", "As in extended gravity theories, one may think of ${D}^{n}$ models where $n$ does not need to be an integer.", "In contrast, dimensional analysis favors an inverse of an integer.", "However, the high dimension of the determinant requires a high dimensional constant for the action to be dimensionless.", "With a mass scale $M$ , this requires a general model of the form $f({D})=M^{4(1-4n)}{D}^{n}.$ Powers of ${D}$ are not the only interesting models one may consider.", "In fact, from the field equations (REF ) we notice the presence of the term ${D}f^{\\prime }({D})$ which can be absorbed by a logarithmic model, and leads to a simple dynamics.", "Nevertheless, as a first application of our setup based on the energy-momentum determinant, we will consider the scale-independent model, $n=1/4$ and $M^{0}\\equiv \\lambda $ (a dimensionless constant) which takes the form $f({D})=\\lambda {D}^{1/4}.$ Based on the generic case we studied above, the model (REF ) is worth studying because it is the only resulting case that does not require a higher (mass) dimensional constant in the gravitational action (REF ).", "Additionally, with the non-vanishing stress-energy of the vacuum (see the above discussion), this particular model can lead to interesting implications.", "In fact, the determinant structure (REF ) can induce the familiar invariant terms formed by the trace of the energy-momentum tensor as an approximation.", "This can be realized if we expand the energy-momentum tensor around the stress-energy of the vacuum as $T_{\\mu \\nu } \\rightarrow \\mathcal {E}g_{\\mu \\nu } + T_{\\mu \\nu },$ where $T_{\\mu \\nu }$ is a small perturbation.", "This leads to $\\textbf {det}.T\\simeq \\mathcal {E}^{4} \\times \\textbf {det}.g\\times \\Bigg \\lbrace 1&&+\\frac{T_{\\,\\,\\,\\nu }^{\\nu }}{\\mathcal {E}}+\\left(\\frac{T_{\\,\\,\\,\\nu }^{\\nu }}{2\\mathcal {E}}\\right)^{2}-\\frac{T_{\\,\\,\\,\\beta }^{\\nu } T_{\\,\\,\\,\\nu }^{\\beta }}{2\\mathcal {E}^{2}}\\nonumber \\\\&& + \\mathcal {O}\\left(\\frac{T_{\\,\\,\\,\\nu }^{\\nu }}{\\mathcal {E}}\\right)^{3}\\Bigg \\rbrace ,$ and finally ${D}^{1/4} \\simeq \\mathcal {E}&&+\\frac{1}{4}\\,T+\\left(\\frac{1}{16\\mathcal {E}}\\right)\\,T^{2}-\\left(\\frac{1}{8\\mathcal {E}}\\right)\\,T_{\\mu \\nu }T^{\\mu \\nu } \\nonumber \\\\&&+\\mathcal {O}(T/\\mathcal {E})^{3}.$ The first term in this expansion is the vacuum energy with a theoretical estimation of $\\mathcal {E}\\sim \\Lambda _{\\text{UV}}^{4}$ , where $\\Lambda _{\\text{UV}}$ being an Ultra-Violet cutoff, defined as the energy scale up to which one trusts quantum field theory.", "At the level of the action (REF ), this term would simply contribute to the cosmological constant.", "The terms proportional to the trace of the energy-momentum tensor have been considered separately in the literature as possible models to be applied to cosmology and astrophysics, such as the $f(R,T)$ and energy-momentum squared gravity [5].", "These models seem to arise from the determinant structure we proposed here.", "Next, we will explore the astrophysical implication of this model where we will focus on the stellar structure for neutron stars.", "We will give a thorough phenomenological study and examine the new effects by putting relevant constraints on our free parameter.", "In this section we restrict the study to the interior of a neutron star where the cosmological term is ignored compared to matter.", "To simplify the calculation, we simply use $\\rho _{\\text{i}}=\\rho $ and 
$P_{\\text{i}}=P$ for the single fluid describing the star.", "Then, from (REF ), one writes ${D}=\\rho P^{3}$ which does not vanish inside the star.", "In this case, the gravitational field equations (REF ) read (taking $\\kappa =1$ ) $G_{\\mu \\nu }=&&\\left(\\rho +P \\right)u_{\\mu }u_{\\nu }+Pg_{\\mu \\nu } \\nonumber \\\\&&+\\lambda {D}^{1/4}\\left\\lbrace g_{\\mu \\nu }+\\left(1+\\frac{P}{4\\rho }+\\frac{3\\rho }{4P}\\right)u_{\\mu }u_{\\nu }\\right\\rbrace $ However, as stated above, outside the star the energy-momentum tensor will be described in terms of the cosmological constant.", "In this vacuum case, the Lagrangian $L^{\\text{vac}}=P^{\\text{vac}}=-\\mathcal {E}$ and the energy-momentum tensor $T^{\\text{vac}}_{\\mu \\nu }=\\mathcal {E}g_{\\mu \\nu }$ .", "Hence, one can easily show that the gravitational action (REF ) leads to the field equations in vacuum $G_{\\mu \\nu }=-\\Lambda _{\\text{eff}}\\,g_{\\mu \\nu },$ where $\\Lambda _{\\text{eff}}=\\Lambda _{0}+\\kappa (1-\\lambda ) \\mathcal {E}$ is simply a nonzero effective cosmological term.", "If the contribution to the vacuum energy comes only from the bare cosmological constant $\\Lambda _{0}$ , i.e., when $\\mathcal {E}=\\Lambda _{0}/\\kappa $ , one would simply have $\\Lambda _{\\text{eff}}=(1-\\lambda )\\Lambda _{0}$ .", "Here, the factor of two in $\\Lambda _{0}$ that would arise from the second term in (REF ) is ignored because in this case the (similar) contributions from the Lagrangian $L^{\\text{vac}}$ and the term $\\Lambda _{0}$ in the action describe the same source.", "The nonzero effective cosmological constant (REF ) implies (anti-) de Sitter spacetime solution." ], [ "Weak-field limit", "The weak field limit (or the Newtonian regime) of the previous equations will be derived now by considering small perturbations about all the quantities.", "We expand the energy density and pressure around their background values $\\bar{\\rho }$ and $\\bar{P}$ as $\\rho \\rightarrow \\bar{\\rho }+\\rho , \\quad P\\rightarrow \\bar{P}+P,$ and apply that for the perturbed spacetime metric in which the time-time component is given by the Newtonian potential $|\\Phi | \\ll 1$ as $g_{00}\\simeq -1 -2\\Phi $ .", "In the weak field limit, one considers tiny stresses $T_{ij}$ (or pressure) compared to the energy density $T_{00}$ , i.e.", "$|T_{ij}|/T_{00} \\ll 1$ .", "Therefore, we will neglect all the terms proportional to $P/\\rho $ for both the background and first order terms.", "To that end, at first order, the time-time component of the gravitational equations (REF ) reads $\\nabla ^{2}\\Phi =4\\pi G_{N}\\left(1+\\frac{3\\lambda }{4}\\left(\\frac{\\bar{\\rho }}{\\bar{P}}\\right)^{1/4}\\right)\\rho $ which describes the deviation from the standard Poisson equation when $\\lambda \\ne 0$ .", "It is worth noting here that, unlike most of the familiar modified theories, this deviation is proportional to the inverse of the pressure which results from the inverse of the stress-energy tensor.", "This contribution is significant in the case of small pressure (compared to the energy density) as in the case of the Solar system which we consider here.", "The quantity $\\bar{\\rho }/\\bar{P}$ in the second term can be constrained from the strength of the Newtonian gravitational potential of the Solar system which is nowhere larger than $10^{-5}$ [11].", "In fact, the pressure to which the system is subjected is comparable to $\\rho |\\Phi |$ , hence, one has $\\frac{\\bar{P}}{\\bar{\\rho }} \\sim |\\Phi | \\sim 10^{-5},$ which leads to 
the modified Poisson equation $\\nabla ^{2}\\Phi \\simeq 4\\pi G_{N}\\left(1+0.13\\times 10^{2} \\lambda \\right)\\rho .$ In the following section we study the stellar structure equations for a neutron star based on the energy-momentum determinant gravity model." ], [ "Stellar structure equations for a neutron star", "In what follows we apply our field equations (REF ) to a static spherically symmetric background $ds^{2} = -{\\rm e}^{2\\Phi (r)}\\, dt^{2} + {\\rm e}^{2\\Psi (r)}\\,dr^{2}+r^{2}\\,d\\theta ^{2} + r^{2}\\sin ^{2}\\theta \\, d\\phi ^{2}$ where $\\Phi (r)$ and $\\Psi (r)$ are functions of the radial coordinate.", "The following main equations can be easily derived by the same method used in GR [12].", "The main differences are in the right hand side of (REF ) which includes the nonlinear terms in the energy density and pressure.", "Therefore, we find it unnecessary to put here all the details of the derivation.", "Using the field equations (REF ), one finds the equation for the potential $\\Phi (r)$ as $\\frac{d \\Phi }{dr}=\\frac{m+4\\pi r^{3}P}{r(r-2m)}+\\lambda {D}^{1/4}\\left(\\frac{4\\pi r^{3}}{r(r-2m)}\\right),$ where we have introduced the mass $m(r)$ of the sphere of radius $r$ .", "As we have seen previously, outside the star (at large $r$ ), the spacetime approaches not the flat but (anti) de Sitter solution due to the presence of the cosmological constant.", "Hence, the exterior vacuum solution implies ${\\rm e}^{2\\Psi (r)}=\\left(1-\\frac{2M}{r}+\\frac{\\Lambda _{\\text{eff}}}{3}r^{2}\\right)^{-1}.$ Needless to say, although the term $\\Lambda _{\\text{eff}}$ is required in our theories in order to prevent the inverse of the energy-momentum from going singular, its effect however is purely cosmological, thus it must not affect the stellar structure.", "On the other hand, the radial component of (REF ) gives the equation for the mass as $\\frac{d m}{dr}=4\\pi r^{2} \\rho \\left\\lbrace 1+\\frac{\\lambda }{4}{D}^{1/4}\\left(\\frac{P}{\\rho ^{2}}+\\frac{1}{P}\\right)\\right\\rbrace .$ The extended TOV equation is derived by applying the Bianchi identity on (REF ).", "This in turn leads to a generalized continuity equation from which one gets the equation for the pressure $\\frac{d P}{dr}=-&&\\left\\lbrace \\frac{m+4\\pi r^{3}P}{r(r-2m)}+\\lambda {D}^{1/4}\\frac{4\\pi r^{3}}{r(r-2m)}\\right\\rbrace (\\rho +P)\\nonumber \\\\&&\\times \\left\\lbrace 1+\\frac{\\lambda }{4}{D}^{1/4} \\left(\\frac{1}{\\rho } +\\frac{3}{P}\\right)\\right\\rbrace \\nonumber \\\\&& \\times \\left\\lbrace 1+\\frac{\\lambda }{4}{D}^{1/4}\\left(\\frac{1}{c_{s}^{2}\\rho }+\\frac{3}{P}\\right)\\right\\rbrace ^{-1},$ where $c_{s}^{2}=dP/d \\rho $ is the speed of the sound wave.", "The set of equations (REF )-(REF ) describes the stellar structure where their GR-limit is clearly understood for $\\lambda =0$ .", "We notice that the new contributions, induced by the determinant of the stress-energy of matter, are nonlinear in the energy density and pressure.", "Thus, analytical expressions of these quantities cannot be found easily.", "This necessitates a numerical solution which we present below." ], [ "Constraints from neutron stars", "In this section we will put a constraint on the coupling constant $\\lambda $ by studying the effects of the scale-free model (REF ) on the mass-radius relation of the neutron stars.", "The latter have been used extensively in constraining various models of gravity in strong field regimes [13].", "Figure: Mass vs. 
radius of neutron stars from model ().", "Each panel corresponds to a different equation of state.The gray shaded region corresponds to R<2GM/c 2 R< 2GM/c^2 (Black hole).", "The maximum measured mass M=2.14 -0.09 +0.10 M = 2.14_{-0.09}^{+0.10} for a neutron star is shown by the horizontal line.", "The measured radius R=12.39 -0.98 1.30 km R = 12.39_{-0.98}^{1.30}\\,{\\rm km} is illustrated by the cyan colored shaded region.", "Departure from GR is clear for values around λ∼-10 -2 \\lambda \\sim -10^{-2} for all the cases.", "However, the model is compatible with the maximum measured mass and radius in the case of MPA1.Here, we briefly describe the numerical method used to solve the hydrostatic equilibrium equations (REF )-(REF ).", "First, one has to close the system of equations by providing an equation of state relating pressure to the energy density, $P=P(\\rho )$ .", "In the case of neutron stars, the equations of state of the dense matter is not accurately constrained by the nucleon scattering experiments.", "Nevertheless, with various assumptions on the compositions and the nucleon-nucleon interactions, several equations of state are provided [14].", "In our case, we will solve the main equations for four different equations of state, namely, AP4 [15], SLY4 [16], MPA1 [17], and MS1 [18], by inspecting the effects of the novel contributions for each case.", "For the numerical solution, we use the second order Runge-Kutta method (midpoint method).", "We also use the adaptive radial step-sizes [19] $\\Delta r = 0.01 \\left(\\frac{1}{m}\\frac{dm}{dr} - \\frac{1}{P}\\frac{dP}{dr} \\right)^{-1},$ which is adapted via the local mass and pressure gradients.", "This technique permits for acceptable radial resolution at the regions of high pressure gradients.", "Furthermore, we keep the steps up to $10^3\\,{\\rm cm}$ , but not larger.", "First, we choose a central pressure $P_{\\rm c}$ (at the origin $r=0$ ) and $m(0)=0$ as a boundary condition for our system.", "We then integrate outwards up to the surface of the star where the pressure takes a very small value.", "We take for the surface pressure $P \\sim 10^{10}\\, \\text{dyne}/\\text{cm}^{2}$ which can be considered negligible compared to the pressure at the center, but yet is non-vanishing.", "This point corresponds to the radius of the star $R$ and its total mass $M$ .", "Last but not least, we obtain both mass and radius by varying the central pressure from $3\\times 10^{33}$ to $9\\times 10^{36}\\,{\\rm dyne\\, cm^{-2}}$ .", "Since our equations include a free parameter, we repeat the above process for various values of $\\lambda $ , and the above equations of state.", "An important condition on $\\lambda $ can be found from the requirement that the mass $m(r)$ should increase while integrating outwards starting from the center.", "Therefore, for general energy density and pressure, $dm/dr>0$ implies $\\left\\lbrace 1-\\frac{\\rho +P}{2\\rho }\\left(1 + \\frac{3\\rho -P}{2P}\\right)\\right\\rbrace \\left(\\frac{P}{\\rho }\\right)^{\\frac{3}{4}}\\lambda <1.$ At the center of a typical neutron star one has $P/\\rho \\sim 0.2$ which leads to $\\lambda > -0.8$ .", "Therefore, the constraints on $\\lambda $ from the measurements of the mass-radius relation (see below) must not fall below this negative value.", "In Figure REF , we have depicted the mass of the neutron star versus its radius for both GR and the model (REF ) for different values of $\\lambda $ and for four different equations of state.", "In all four cases, we notice that the 
departure from GR becomes conspicuous starting from values $\\lambda \\lesssim -10^{-2}$ .", "However, this deviation from GR falls within the region of the maximum measured mass and radius of the neutron star PSR J0740+6620, $M = 2.14_{-0.09}^{+0.10}$ and $R = 12.39_{-0.98}^{+1.30}\\,{\\rm km}$ [20], for the case MPA1." ], [ "Effects on the big bang nucleosynthesis", "One of the most important predictions of the standard hot big bang cosmology is the determination of the abundances of the light elements formed in the early universe.", "Below we briefly discuss the possible deviations from the standard big bang-predicted abundances for $\\mathrm {D}$ and $^{4}\\mathrm {He}$ .", "The possible deviations can be described by the expansion rate parameter $S\\equiv H/H_{\\text{SBBN}}$ , where $H$ is the Hubble parameter associated with the present model and $H_{\\text{SBBN}}$ that of the standard big bang nucleosynthesis [21].", "These parameters are calculated during the early radiation era, where the energy density of the universe is dominated by relativistic particles.", "From the gravitational field equations (REF ), one can show that the energy density of the radiation evolves with the redshift $z$ as [8] $\\rho _{\\text{r}}=\\rho _{\\text{r} 0}(1+z)^{\\frac{4+10\\times (1/3)^{3/4}\\lambda }{1+7\\times (1/3)^{7/4}\\lambda }},$ whereas the expansion rate is given by $3H^{2}=\\kappa \\left(1+\\frac{7}{3}\\left(\\frac{1}{3} \\right)^{4/3}\\lambda \\right)\\rho _{\\text{r}}.$ The changes from the standard big bang nucleosynthesis arise when $S\\ne 1$ , where $S=\\frac{2\\left(9+7\\times 3^{1/4}\\lambda \\right)}{3\\left(6+5\\times 3^{1/4}\\lambda \\right)}.$ In [22], limited ranges on the baryon density and the parameter $S$ are provided as $4\\lesssim \\eta _{10}\\lesssim 8$ and $0.85 \\lesssim S \\lesssim 1.15$ , respectively.", "Interestingly, the values of $\\lambda $ predicted from the mass-radius relations of the neutron star (see Figure REF above) are compatible with this range.", "Hence, the scale-free model in (REF ) does not alter the cosmological constraints on the abundances of the light elements, although it leads to deviations from the standard hot big bang nucleosynthesis.", "We leave a thorough study of further cosmological implications of this model to a separate work [8].", "In this paper, we have considered a new type of matter-gravity coupling which is based on the determinant of the energy-momentum of matter fields.", "The main motivation is inherited from the fact that, like scalar invariants formed by contracting the covariant indices, determinantal actions (mainly of second-rank tensors) are also useful in curved spacetimes.", "In a general framework, we have extended the standard matter Lagrangian by an arbitrary function $f({D})$ that involves the determinant of the energy-momentum and the metric so that the overall quantity, ${D} \\equiv |\\textbf {det}.T|/|\\textbf {det}.g|$ , transforms as a scalar under general coordinate transformations.", "We have derived the most general field equations from the action principle and examined the effects of the new terms induced by the determinant.", "Unlike the familiar extensions of gravity, we noticed the emergence of the inverse of the stress-energy tensor.", "We have discussed how the theory coincides with general relativity plus an effective cosmological constant in the case where the energy-momentum involves only the vacuum energy.", "To apply the work to an astrophysical phenomenon, we have proposed a model in which the free parameter is a dimensionless constant, i.e.", "a scale-independent model where $f({D})=\\lambda {D}^{1/4}$ .", "When matter fields are approximated as perfect fluids, we have seen that, as expected, the novel contributions are nonlinear in the energy density and pressure, and involve their inverses.", "We derived the hydrostatic equilibrium equations and solved them numerically by assuming various equations of state for a neutron star.", "Then, we have presented our predictions in plots showing the important mass-radius relation for different values of the coupling constant $\\lambda $ .", "In this respect, we have focused on values that bring out a noticeable departure from the predictions of general relativity, namely, values around $\\lambda \\sim - 10^{-2}$ .", "We have also shed light on the possible effects of the scale-free model on the predictions of primordial nucleosynthesis.", "In this regard, we have shown that although the model induces slight deviations from the abundances of light elements predicted by the standard big bang nucleosynthesis, it remains consistent with observations.", "A separate work will be devoted to a more detailed study of the cosmological implications of this model [8].", "Gravity models where matter fields enter the action principle through the determinant of their energy-momentum can lead to further interesting phenomenology.", "Therefore, various scale-dependent models different from the one presented here should be explored in various theoretical and phenomenological contexts.", "On the other hand, the rapid growth of gravitational wave astronomy via LIGO, LISA, and other experiments is expected to probe gravitational physics precisely [23].", "This will provide a test for the validity of various extended theories of gravity, including the stress-energy determinant gravity we explored in the present work [24]." ], [ "acknowledgments", "HA is indebted to Yavuz Ekşi for the help with the numerical solutions, and to Daniela Doneva for useful comments.", "He also thanks Glenn Starkman, Fabian Schmidt, and Albert Stebbins for fruitful discussions.", "This work is supported by the UAEU under UPAR Grant No.", "12S004." ] ]
2212.05585
[ [ "Accelerating Self-Supervised Learning via Efficient Training Strategies" ], [ "Abstract Recently the focus of the computer vision community has shifted from expensive supervised learning towards self-supervised learning of visual representations.", "While the performance gap between supervised and self-supervised has been narrowing, the time for training self-supervised deep networks remains an order of magnitude larger than its supervised counterparts, which hinders progress, imposes carbon cost, and limits societal benefits to institutions with substantial resources.", "Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods by various model-agnostic strategies that have not been used for this problem.", "In particular, we study three strategies: an extendable cyclic learning rate schedule, a matching progressive augmentation magnitude and image resolutions schedule, and a hard positive mining strategy based on augmentation difficulty.", "We show that all three methods combined lead up to 2.7 times speed-up in the training time of several self-supervised methods while retaining comparable performance to the standard self-supervised learning setting." ], [ "Introduction", "Learning representations without manual human annotations that can be successfully transferred to various downstream tasks has been a long standing goal in machine learning [17], [48].", "Self-supervised learning (SSL) aims at learning such representations discriminatively through pretext tasks such as identifying the relative position of image patches [14] and solving jigsaw puzzles [36].", "The recent success of SSL methods [23], [6], [8] builds on contrastive learning where the representations are in a latent space invariant to various image transformations such as cropping, blurring and colour jittering.", "Contrastive learned representations have been shown to obtain on par performance with their supervised counterparts when transferred to various vision tasks including image classification, object detection, semantic segmentation [7], [22], and extended to medical imaging [1] as well as multi-view [45] and multi-modal learning [37].", "Figure: Classification accuracy v.s.", "training cost of various SSL methods, which are trained on a subset of ImageNet by using ResNet50 (see subsec:modelecl for experimental details).", "Our method (drawn in blue) successfully significantly accelerates the SSL methods (drawn in red) without any significant drop in their performance.Despite the remarkable progress, an important downside of SSL methods is their high training cost which hampers the development and adoption of these promising techniques.", "Even the most efficient SSL methods require at least an order of magnitude more computation to reach the performance of supervised methods, e.g.", "training BYOL [22] requires 23 times more computation resulting from 8.8 times more iterations and 2.6 times more computation at each iteration than its supervised counterpart to reach similar accuracy.", "The large training cost is largely due to the challenging task of learning invariant representations over a large set of images and various augmentation transforms.", "In this paper, we focus on developing algorithms that can speed up the training of SSL while maintaining their performance.", "Prior work in efficient supervised training reduced the cost of training by gradually increasing the training resolution together with the augmentations magnitude using Progressive 
Learning [44].", "While augmentation is a regularization mechanism in supervised learning it is the main source of supervision in SSL and therefore plays a much more important role.", "Additionally, Super Convergence [41] is introduced which uses a cyclic learning rate and anti-phased momentum schedule that is used to accelerate convergence in supervised learning tasks.", "The longer duration of training makes the application of Super Convergence more difficult and using it together with Progressive Learning causes instabilities in the early stage of training.", "In this work, we aim to increase the training efficiency of self-supervised training methods by optimizing the learning rate, resolution, and augmentation schedules to allow reaching the same level of performance with a smaller computational budget.", "Inspired from [41] and [44] that were used in supervised learning we propose a combined learning rate and resolution schedule for self-supervised learning.", "Additionally, we modify the augmentation strategy used in self-supervised learning for faster training.", "Our contributions are three folds.", "Fixed 1-cycle Learning Rate schedules (see sec:sc) allows Super-Convergence [41] to take place for longer training times by fixing the duration of its warm-up phase while extending the decay phase and comparing it against alternative learning rate schedules used in self-supervised learning.", "Super Progressive Learning (see sec:sps) proposes a suitable Progressive Learning schedule [44] for SSL by adding a warm-up stage with full resolution at the beginning of the resolution schedule while increasing the augmentation magnitude gradually during training so that it works well with our proposed learning rate schedule.", "Hard Augment (see sec:ha) selects a hard augmentation pair dynamically from multiple low resolution augmentations in order to boost the learning signal and regularize the training.", "We show that our method accelerates the training of different self-supervised learning methods and architectures while maintaining comparable performance to standard training.", "We provide ablation studies for analysing the contribution of different methods and determining the optimum hyperparameters of our methods and add theoretical justifications." 
], [ "Related Works", "Generative models like DBN [25], VAE [32] and GANs [20] have been used for unsupervised representation learning however discriminative SSL have proven to be more effective.", "While early work on SSL focused on designing various pretext tasks such as solving jigsaw puzzles [36], colouring [51], [33], [52] and predicting rotation [18], more recent techniques focused on learning augmentation invariant representation [49] [46] to replace the loss functions that require manual supervision.", "Recently, Contrastive learning methods[23], [6], [8] have shown promising results for various target tasks and caused renewed interest in this area.", "However, at least an order of magnitude, more computation is required when training these methods in order to get comparable performance to supervised learning.", "In fact, we demonstrate our method in these recent models and show that SimSiam [9], BYOL [22], MoCov2 [8], SimCLR [6] and DINO [4] training can significantly be sped up.", "By adaptation gradient noise during training large scale data parallel training has been used for increasing training speed in large deep learning methods.", "Using many accelerators together and increasing the total batch size training speed can be scaled up almost linearly with the number of accelerators [35].", "This way, supervised ImageNet training can be done in minutes [42] or hours [21] instead of days even though the total training cost remains roughly the same.", "This required adapting the hyperparameters so that gradient noise was managed properly [43], [28], [5] by using the linear relationship between the batch size and learning rate [21], [42].", "It was shown that training with larger learning rates had also a regularization effect [28], [5], [34] and prevented convergence to sharper minima [31].", "We use the relationship between gradient noise and learning rate schedules to optimize for training efficiency instead of speed where we synchronize the learning rate schedule with our resolution schedule to reduce the total cost of training.", "We use a cyclic learning rate schedule [41] which combines cyclic learning rate and cyclic momentum adaptation together as our learning rate schedule due to its efficient training performance and adapt it to longer training regimes common in SSL training.", "Curriculum learning [2] has been used to accelerate convergence for deep learning methods and when applying it to self-supervised learning, augmentation strength and image resolution are the most relevant parameters.", "Image resolution directly affects accuracy [26], [47] while reducing run-time quadratically.", "Previous work [27] on efficient supervised learning gradually increased the image resolution in the DAWN benchmark [11] to accelerate training while having slightly lower performance.", "Recently [44] proposed gradually increasing the training image resolution while increasing augmentations strength to allow for both faster and accurate training.", "While augmentations are used for regularization in supervised learning they are the main source of supervision for recent self-supervised methods and determine task difficulty.", "We build on this intuition and adapt Progressive Learning for efficient self-supervised training since in order to reduce the training cost especially in the initial phase while increasing the difficulty of the task gradually.", "To the best of our knowledge having a good schedule for augmentation magnitude and its relation to resolution hasn't been examined for 
self-supervised learning.", "Hard positive mining is another technique that can be used to accelerate training.", "Importance sampling based on sample loss has been shown to accelerate supervised learning [30], [29].", "In object detection [38] selects regions with the highest loss in order to select useful positives and balance positive to negative regions effectively.", "[19] applied multiple augmentations and back-propagated only the augmented sample with the maximum loss in order to improve the adversarial robustness of their method while increasing supervised performance.", "[3] have introduced a multi-crop strategy for self-supervised learning that combines low resolution and high resolution crops in order to show increased performance while trying to keep the computational cost limited.", "We mine hard augmentations during training by utilizing the loss based importance sampling technique to dynamically select the most useful augmentations.", "Since our focus is on efficient training, we used down sampled versions of the augmented images in the selection pass to minimize the overhead." ], [ "Method", "In this section we introduce the techniques used to accelerate self-supervised training.", "Each technique can be used to accelerate training on its own but they can be combined together in a synergistic way to enable faster training.", "We introduce Fixed 1-cycle Learning Rate Schedule in sec:sc, Super Progressive Learning in sec:sps and Hard Augment in sec:ha.", "We formulate a contrastive loss function that measures difference between pairs of images and optimizing this loss allows us to learn parameters $\\theta $ for a deep neural network $f_{\\theta }$ that produces similar representations for augmentations of the same image and have representation that is going to be useful for various target tasks.", "Let $D$ be an unlabeled dataset consisting of $|D|$ images with resolution $r$ .", "We randomly sample two image transformations $\\tau $ and $\\bar{\\tau }$ from the transformation space $T$ for each training image $\\mathbf {x}$ , apply them to $\\mathbf {x}$ to obtain two views $\\mathbf {v}$ and $\\mathbf {\\bar{v}}$ , extract their features through the deep neural network $\\mathbf {z}=f_{\\theta }(\\mathbf {v})$ and $\\mathbf {\\bar{z}}=f_{\\theta }(\\mathbf {\\bar{v}})$ respectively.", "To learn the network weights $\\theta $ , we minimize the loss function $L$ that represents the mismatch between two representations $\\mathbf {z}$ and $\\mathbf {\\bar{z}}$ over the dataset and transformation space: $\\min _{\\theta } \\mathbb {E}_{x\\sim D, (\\tau ,\\bar{\\tau }) \\sim T} L( \\mathbf {z}, \\mathbf {\\bar{z}})$ We use minibatch Stochastic Gradient Descent (SGD) optimizer with momentum: $L^{(B)}(\\theta )= \\frac{1}{|B|} \\sum _{\\mathbf {x}\\sim B, (\\tau ,\\bar{\\tau })\\sim T} L(\\mathbf {z}, \\bar{\\mathbf {z}} ),$ where the minibatch $B$ consists of $|B|$ randomly sampled images from $D$ and the loss is averaged over the samples to obtain a noisy yet unbiased estimate of the true gradient.", "The update rule for $\\theta $ is given as: $\\begin{aligned}\\mu _{t} & = \\beta _t \\mu _{t-1} - \\epsilon _t \\nabla _{\\theta _t} L^{(B)}(\\theta _t) \\\\\\theta _{t+1} & = \\theta _t - \\mu _t\\end{aligned}$ where $ \\epsilon _t $ is the learning rate and $\\beta _t$ is the momentum weighs at step $t$ .", "The values for the learning rate and momentum is given by a learning rate scheduler $(\\epsilon _t, \\beta _t) = S(t) $ .", "eq:sgdup can be interpreted as stochastic 
differential equation ([43]): $\\begin{aligned}\\frac{d\\theta }{dt} & = \\mu , & \\frac{d\\mu }{dt} & = \\beta _t\\mu -\\frac{dL}{d\\theta } + \\eta _t\\end{aligned}$ where $\\eta (t) \\sim \\mathcal {N} (0, g\\mathbf {F}(\\theta )/|D|)$ is an additive Gaussian noise originating from the stochasticity and $\\mathbf {F}(\\theta )$ is the covariance matrix for gradients of the samples and $g$ is the “noise scale” which is given by $g\\approx \\frac{\\epsilon _t |D|}{b(1-\\beta _t)}$ .", "For our analysis we will utilize the noise in the gradients which is proportional with the learning rate $\\epsilon _t$ while being inversely proportional with batch size $b$ and $1-\\beta _t$ .", "This relationship plays an important role when adapting hyperparameters to different setups and for giving us a theoretically grounded understanding for learning rate schedules and their relationship with Progressive Learning." ], [ "Fixed 1-cycle Learning Rate Schedule", "Cosine annealing learning rate schedule which has been used widely in SSL [23], [8], [9] decays the learning rate starting from a maximum learning rate value $\\epsilon _{max}$ using a cosine function and can be described as, $\\epsilon _t = \\frac{1}{2}\\epsilon _{max}(\\cos ({\\frac{t}{L} \\pi )}+1),$ where $L$ is the total number of iterations.", "Methods that perform better with larger batch sizes train with larger learning rates in order to maintain the gradient “noise scale” $g \\propto \\frac{\\epsilon _t}{b}$ during training.", "In order to prevent instabilities in the early stages of training due to rapidly changing parameters values [21] a linear warm-up phase for the learning rate is added [6], [22], [10], [50], [3], [4], $\\epsilon _t={\\left\\lbrace \\begin{array}{ll}\\frac{t}{t_w} \\epsilon _{max}, & t < t_{w} \\\\\\frac{1}{2}\\epsilon _{max}(\\cos ({\\frac{t-t_w}{L-t_w} \\pi )}+1), & \\text{otherwise}\\end{array}\\right.", "}$ where $t_w$ is the number of warm up steps.", "On the other hand, Cyclic Learning Rate (CLR) schedule introduced by [39] and later refined as the 1-cycle learning rate (1-CLR) schedule [41], [40] starts with a small learning rate and than increases the learning rate to a very high value and than decays it gradually to a very small value while changing the momentum at the opposite direction as the learning rate.", "The learning rates and momentum values at each step using a cosine window can be calculated by $\\epsilon _t={\\left\\lbrace \\begin{array}{ll}\\epsilon _{max} - \\frac{1}{2}\\epsilon _{max}(\\cos ({\\frac{t}{\\rho L} \\pi )}+1) & t < \\rho L \\\\\\frac{1}{2}\\epsilon _{max}(\\cos ({\\frac{t-\\rho L}{L(1-\\rho )} \\pi )}+1) & \\text{otherwise}\\end{array}\\right.", "}$ $\\beta _t={\\left\\lbrace \\begin{array}{ll}\\beta _{l}+\\frac{1}{2}(\\beta _{h}-\\beta _{l})(\\cos ({\\frac{t}{\\rho L} \\pi )}+1) & t < \\rho L \\\\\\beta _{h}-\\frac{1}{2}(\\beta _{h}-\\beta _{l})(\\cos ({\\frac{t-\\rho L}{L(1-\\rho )} \\pi )}+1) & \\text{otherwise}\\end{array}\\right.", "}$ where $\\rho $ is the time percentage of time allocated for the first phase, while $\\beta _l$ and $\\beta _h$ are lower and higher limits for momentum respectively.", "We use the learning rate (LR) range test proposed by [40] to set the maximum learning rate $\\epsilon _{max}$ please see the details in supplementary material .", "A phenomenon called Super Convergence has been demonstrated in supervised classification where using larger learning rates and the 1-CLR schedule training time to reach a specified performance has been decreased 
dramatically [27].", "However, unlike supervised learning where longer training times do not generally result in better performance and can sometimes even cause worse performance due to over-fitting, in SSL the quality of the representation typically improves with longer training time [8], [6].", "A problem with the current 1-CLR is that the percentage of the first phase is being determined in proportion to the full training duration which causes an extremely long warm-up phase in longer training settings wasting compute time.", "To address this problem, we propose to extend the annealing phase of 1CLR while keeping the warm up length $t_w$ the same which we call Fixed 1-cycle Learning Rate (F1-CLR) schedule, $\\epsilon _t={\\left\\lbrace \\begin{array}{ll}\\epsilon _{max} - \\frac{1}{2}\\epsilon _{max}(\\cos ({\\frac{t}{t_w} \\pi )}+1 & t < t_w \\\\\\frac{1}{2}\\epsilon _{max}(\\cos ({\\frac{t-t_w}{L-t_w} \\pi )}+1) & \\text{otherwise}\\end{array}\\right.", "}$ $\\beta _t={\\left\\lbrace \\begin{array}{ll}\\beta _{l}+\\frac{1}{2}(\\beta _{h}-\\beta _{l})(\\cos ({\\frac{t}{t_w} \\pi )}+1) & t < t_w \\\\\\beta _{h}-\\frac{1}{2}(\\beta _{h}-\\beta _{l})(\\cos ({\\frac{t-t_w}{L-t_w} \\pi )}+1).", "& \\text{otherwise}\\end{array}\\right.", "}$ fig:sclrmomentum shows a comparison between cosine schedule and F1-CLR schedule in terms of learning rate (left) and momentum (middle) as well as illustrating the F1-CLR learning rate schedule for different lengths of training (right).", "The anti-phased movement of momentum in F1-CLR allows larger learning rates to be achieved while keeping gradient noise in check ($g \\propto \\epsilon _t/(1-\\beta _t)$ see eqn:sgd-sde).", "Note that we do not extend the warm up phase, as the warm up is used as a stabilizer, while the learning rate and gradient noise scale are increasing ($g \\propto \\epsilon _t$ see eqn:sgd-sde)." 
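To make the schedule concrete, the following is a minimal sketch (not the authors' released code) of the F1-CLR learning rate and momentum schedule defined by the equations above; the function name and the default momentum limits beta_low/beta_high are our own illustrative choices.

import math

def f1_clr(t, L, t_w, eps_max, beta_low=0.85, beta_high=0.95):
    # Fixed 1-cycle schedule: a warm-up of fixed length t_w followed by a cosine
    # decay over the remaining L - t_w steps; momentum moves in anti-phase with
    # the learning rate (beta_low/beta_high defaults are illustrative).
    if t < t_w:  # warm-up phase
        w = 0.5 * (math.cos(math.pi * t / t_w) + 1.0)                 # goes 1 -> 0
        eps = eps_max - eps_max * w                                   # 0 -> eps_max
        beta = beta_low + (beta_high - beta_low) * w                  # beta_high -> beta_low
    else:        # extended annealing phase
        w = 0.5 * (math.cos(math.pi * (t - t_w) / (L - t_w)) + 1.0)   # goes 1 -> 0
        eps = eps_max * w                                             # eps_max -> 0
        beta = beta_high - (beta_high - beta_low) * w                 # beta_low -> beta_high
    return eps, beta

For instance, with eps_max = 0.2, t_w = 80 epochs and L = 480 epochs (the Imagenette setting reported in the implementation details), the warm-up keeps its fixed length while the decay phase stretches with the total training duration, which is the behaviour illustrated for different training lengths in fig:sclrmomentum (right).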
], [ "Super Progressive Learning", "Input resolution is very important factor that effects training time.", "Typically image resolution is a trade-off hyperparameter between performance and computational load, i.e.", "higher resolution, higher performance but also more computations.", "In SSL, the standard practice [23], [8], [9], [6], [22], [10], [50], [3], [4] is to train with fixed resolution and fixed augmentation settings.", "We hypothesize that higher resolution inputs are only necessary when there is small amount of noise in the gradients, and hence, propose to employ a learning rate aware progressive learning strategy inspired by [44].", "However, as starting training with small input resolution as in [44] results in additional noise in the gradients which prevent it from being trivially incorporated to F1-CLR due to the instabilities in the warm-up phase.", "Hence, we propose train with full resolution images during the warm-up phase before going into the linearly increasing resolution schedule while we gradually increase the augmentation magnitude as shown in fig:spsschedule (left, middle).", "We call this strategy Super Progressive Learning.", "When the learning rate is high there is a large amount of noise in the gradients $g \\propto \\epsilon _t$ and this allows us to use low resolution images at that point without resulting in bad performance however since the training starts with small learning rates we need to adapt our schedule and use large resolution images.", "Note that the speedup of this strategy depends on the input resolution schedule ($r_t$ ).", "Assuming a quadratic relationship between training time and resolution, the speedup $M$ can be calculated as: $M = \\frac{1}{L} \\sum _{t} (\\frac{r_{max}}{r_t})^2,$ where $r_t$ is the input image resolution at time $t$ , $r_\\text{max}$ is the final resolution.", "We discretize the resolution steps by 32 due to the fact that most network architectures have 5 pooling/dilation layers.", "The minimum value for the resolution schedule is determined empirically." 
], [ "Hard Augment", "So far we focused on varying the effective step size and input resolution.", "Informativeness of training samples is another important factor that can speed up the training by providing more efficient gradients.", "We propose Hard Augment to boost the learning signal in self-supervised learning.", "We generate $m$ augmentations $\\tau ^m = \\lbrace \\tau _{1..m} : \\tau _{1..m} \\sim T\\rbrace $ for each image and evaluate the loss for all pairs of augmentations $p=\\binom{\\tau ^m}{2}$ using a forward pass to select the augmentation pair that results in the largest amount of loss for back-propagation.", "By focusing on the samples that produce the most lost and ignoring a large fraction of the augmentations that produce a small loss and do not make a significant contribution to the training we can accelerate training dramatically.", "The overall objective that we optimize is $L^{(B)}_{ha}(\\theta )= \\frac{1}{|B|} \\sum _{\\mathbf {x} \\sim D, \\tau _{1..m} \\sim T} \\left[ \\max _{ \\tau _p \\in p } L( \\mathbf {z_i}, \\mathbf {z_j}) \\right].$ [19] have shown that using the maximum of multiple augmentations applies a regularisation on the gradient-norm with respect to the input images $\\left\\Vert \\nabla _{\\mathbf {x}} L( \\mathbf {z_i}, \\mathbf {z_j}) \\right\\Vert _2$ that is in the order of $ \\sigma \\sqrt{\\log m} $ where $\\sigma $ is the strength of the augmentation under the assumption that $\\tau (x) \\sim \\mathcal {N} (x, \\sigma ^2 I)$ .", "The regularization effect in our method can be seen as a corollary of their theorem that uses a pair of augmentations instead with a number of equivalent augmentations as the number of pairs $|p|$ .", "Crucially in order to decrease the overhead of selection we propose to down sample the images during the selection pass to $r_{sel}$ .", "The image pairs that have the highest loss will than be used in training in their original resolution $r$ .", "We do not need to have full resolution for the selection pass as we only need to have sufficient resolution to rank the augmentation pairs with respect to their loss and to find the highest loss pair which requires far less precision than is required for obtaining good quality gradient updates during training.", "$M_{ha} = \\frac{r^2 C}{r^2 C + mr_{sel}^2}$ We find the minimum resolution $r_{sel}$ that we can use for the selection pass empirically.", "On a typical setting in ImageNet where the original resolution is $r=224$ and the reduced selection resolution is $r_{sel}=64$ with $m=4$ augmentations and 6 possible pairs and the fraction for full training iteration cost to a forward pass cost is $C=6$ we only add $1-M_{ha}=\\%5$ overhead on the training while significantly boosting the training speed by focusing on the most valuable augmentations.", "[t] Efficient SSL Training, PyTorch-like whitefullflexiblebcodeblue     # f: backbone + projection layers     # criterion: loss function for SSL method     # aug: augmentation function     # F1CLR: Super Convergence Schedule (Section 3.1)     # SP: Super Progressive Schedule (Section 3.2)       t=0 # iteration     for e in range(epochs):         for x in loader:  # load a minibatch x with |B| samples            eta, beta = F1CLR(t) # Update learning rate and momentum weight            r, m = SP(t) # Update resolution(r) and augmentation magnitude(m)            vs = [aug(x, r, m) for i in range(n)]  # n random augmentation            vi, vj = hard_augment(vs) # select hardes augmentations            zi, zj = f(vi), f(vj) # 
forward-propagation              L = criterion(zi,zj) # Loss calculation            L.backward()         # back-propagation            update(f, eta, beta) # SGD update            t+=1       def hard_augment(vs):  # negative cosine similarity         vs_small = interpolate(vs, r_min) # down sample views to r_min         zs = f(vs)                        # forward prop for all views           indexes = arg_max_loss_pair(zs)   # determine pairs with maximum loss         vi, vj = hard_select(vs, indexes) # select pairs by tensor indexing         return vi, vj We have provided the pseudo code for our method in alg:combined.", "Our method can easily be adopted to different SSL methods without any major difficulty as it operates at the level of augmentations, optimizer parameters and sampling strategies.", "Table: Our efficient strategies applied to different backbone architectures using SimSiam model on Imagenet dataset.", "Speed up is reported based on the reduction in training cost which is measured in terms of petaFLOPsTable: Our efficient strategies applied to different SSL models training on ResNet50 backbone architecture on Imagenette dataset.", "Speed up is reported based on the reduction in training cost which is measured in terms of petaFLOPs" ], [ "Experiments", "In this section, we first study evaluate our method on different backbone architectures in subsec:backbonessl and on various existing SSL methods in subsec:modelecl, finally we analyze the effect of each proposed component in our method in subsec:experimentalsetup and run ablation studies that analyze the hyperparameters and different trade-offs of our method in subsec:ablationsssl.", "We evaluate the experiments on the ImageNet dataset [13] in terms of linear probing accuracy.", "This is done by freezing the backbone after pre-training and training a linear classification layer in a supervised way following [9].", "The evaluation on the Imagenette datasetImageNet subset that consist of 10 classes (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, parachute) avalilable at https://github.com/fastai/imagenette is done using online kNN accuracy of the feature maps.", "When calculating speed up we consider the number of steps used in training and multiply that by the number of floating point calculations made in each step and compare it against the baseline." 
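The pseudo code above (alg:combined) leaves the pair selection (arg_max_loss_pair and hard_select) abstract; the snippet below is one possible concrete implementation of that selection step in PyTorch. It is a sketch rather than the authors' code: the helper name, the use of a per-sample symmetric pairwise loss, and the bilinear down-sampling are our assumptions.

import itertools
import torch
import torch.nn.functional as F

def select_hard_pair(views, model, pair_loss, r_sel=64):
    # views:     list of m tensors of shape (B, C, H, W), the m augmentations of a batch
    # model:     backbone + projection layers
    # pair_loss: loss returning a per-sample value of shape (B,), e.g. a negative
    #            cosine similarity for SimSiam-style methods
    # Returns the two full-resolution views of the hardest pair for each sample.
    with torch.no_grad():                      # selection pass only, no gradients
        small = [F.interpolate(v, size=r_sel, mode="bilinear", align_corners=False)
                 for v in views]
        feats = [model(v) for v in small]      # m cheap forward passes at low resolution
        pairs = list(itertools.combinations(range(len(views)), 2))
        losses = torch.stack([pair_loss(feats[i], feats[j]) for i, j in pairs])  # (P, B)
        hardest = losses.argmax(dim=0).tolist()                                  # hardest pair per sample

    stacked = torch.stack(views)               # (m, B, C, H, W)
    device = stacked.device
    idx_i = torch.tensor([pairs[k][0] for k in hardest], device=device)
    idx_j = torch.tensor([pairs[k][1] for k in hardest], device=device)
    batch = torch.arange(stacked.shape[1], device=device)
    return stacked[idx_i, batch], stacked[idx_j, batch]

The selected pair is then fed through the model again at full resolution with gradients enabled, as in the training loop above.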
], [ "Implementation Details", "Here we provide details on the default parameters that we have used during our experiments where we use the SimSiam [9] model with ResNet50 backbone using the SGD optimizer with .9 momentum.", "On ImageNet experiments we use a batch size of 512 and weight decay of 0.0001.", "For the baseline experiments we use the Cosine Annealing learning rate schedule with learning rate of 0.1 ($\\eta _{max}$ ) and trained them for 200 epochs ($t_{max}$ ) and for our accelerated F1-CLR schedule we use a learning rate of 0.16 ($\\eta _{max}$ ) with 10 epoch warm-up ($t_w$ ) and train them for 120 epochs ($t_{max}$ ).", "For Imagenette experiments we use a batch size of 128, weight decay of 0.0005, learning rate of 0.1 ($\\eta _{max}$ ) with Cosine Annealing scheduler and train them for 800 epochs ($t_{max}$ ).", "For our accelerated setting we use the F1-CLR schedule with learning rate of 0.2 ($\\eta _{max}$ ) with 80 epochs for warm up ($t_w$ ) and train for 480 epochs ($t_{max}$ ).", "For Hard Augment we use 6 augmentations generated using SimCLR[6] transforms and scale the colour jittering linearly where the standard scale is defined as 5.", "For Super Progressive learning we use 96 as the minimum resolution and use 6 stages each a multiple of 32.", "Additional details can be seen in Supplementary Sec.", ".", "We have observed that GPU pre-processing plays an important role in obtaining speedups in real time.", "Since loading images and making multiple augmentations can become a bottleneck when training with faster GPU cards like the Nvidia V100.", "We use the Nvidia DALIavailable at https://github.com/NVIDIA/DALI library for fast loading and augmentation of the images in the accelerators." ], [ "Backbone Architecture Analysis", "In order to show the generality of our method to different different backbone architectures we applied our training strategies to SimSiam [9] model on the ImageNet dataset and report linear evaluation results and speed-up in tab:eclarchitectureexperiment.", "Our results show that our method can accelerate both ResNet-18 and ResNet-50 architectures by 2.3 times while maintaining the baseline accuracy." ], [ "Model Experiments", "We have evaluated our acceleration strategies on different SSL models in tab:eclmodelexperiment by pre-taining on a ResNet 50 backbone on the Imagenette dataset.", "We used the default hyperparameters for SimSiam[9], BYOL[22], MoCov2+ [8], DINO [4] and SimCLR [6] models while we used the same optimizer and augmentation hyper-parameters for all of them which can be seen in the supplementary Sec.", ".", "Table: Comparing the contribution of each component in our experimental setup.", "Speed up is reported based on the reduction in training cost.", "Using SimSiam model with ResNet50 backbone architecture on Imagenette dataset.", "CA stands for the baseline Cosine Annealing learning rate schedule.We have observed that that our method can accelerate the training of different models.", "We note that methods with a momentum encoder typically have a higher accuracy and training cost even when trained with the same number of steps due to the additional forward passes made with the momentum encoder which helps to stabilize the training.", "Our method is especially effective for models with a momentum encoder where we can accelerate the training up to 2.7 times." 
], [ "Experimental Setup", "Here we compare the contribution of each component in our experimental setup in tab:esslexperimentalsetup training with ResNet50 backbone on the Imagenette dataset using default hyperparameters whenever possible.", "We have observed that our improvements are generally compatible with each other and allow us to train close to the same accuracy while reducing the training cost by 2.4 times.", "While Super Progressive Learning is the main driver in reducing computational cost, Hard Augment allows us to improve performance dramatically by prioritizing useful augmentations.", "We note that Super Progressive Learning and Hard Augment also work well energetically since Hard Augment can select from a harder set of alternatives towards the end of the training which further boost performance." ], [ "Ablations", "In the ablation studies use the Imagenette dataset with the parameters described in the previous section.", "We aim to identify various parameters of our model and examine its behaviour in different circumstances." ], [ "Learning Rate Schedule Comparison", "We compared different learning rate schedulers in isolation in order to see what percentage of improvement is attributable to the different learning rate schedulers on the ImageNet dataset in tab:sccomparison.", "We have seen that our Extended Super Convergence method still gives a reliable performance increase in this setup compared to Cosine Annealing and Cosine Annealing with linear warm up.", "Table: Comparing different learning rate schedules.", "Cosine decay with linear warm-up and Super convergence both have 10 epochs of warm-up in order to make the comparison more direct." ], [ "F1-CLR Warm Up", "Here we make an ablation study where we train the SimSiam model for 320 epochs with different number of warm up epoch lengths in tab:scwarmup.", "We observe that 80 epochs is the optimal warm up length and we use this setting in our future experiments and ablations that utilize F1-CLR schedule.", "Table: Super Convergence training where the number of warm up epochs is changed." ], [ "Minimum Resolution", "An important hyper-parameter that determines how much speed-up can be achieved with Super Progressive Learning is the minimum resolution.", "We made a ablation study in tab:spsspeedup to see the maximum speedup that can be achieved with our training strategies by increasing the minimum resolution by 32 increments starting from 64.", "The smallest resolution that does not result in a drop in performance is 96 which we used in our other experiments while 64 was giving better speedups it wasn't maintaining the level of performance we are seeking.", "Table: Ablation study for the Super Progressive learning speedup and accuracy trade-off with different values for minimum resolution using 224 as the maximum resolution." ], [ "F1-CLR Training Length", "To build a better understanding of 1CLR schedule we change the length of the training from 160 epochs to 800 epochs using the cosine annealing learning rate schedule and compare against how the proposed extended super convergence schedule performs.", "fig:scextension shows that the difference in performance is stark when the training duration is short and gradually becomes smaller.", "Additionally, our F1-CLR setting leads to between 1.3x-1.6x speedup of the training to reach equivalent performance on the baseline." 
], [ "Number of Positives", "In order to understand the effect of number of augmentations we have made an ablation study that examines the trade-off between increased overhead and the effect on accuracy in tab:hapositives.", "We have used the SimSiam [9] model with F1-CLR schedule and our default hyperparameters.", "We have observed that 6 positives gives good balance between maximizing accuracy while minimizing the overhead so we use this setting in other experiments.", "Table: Ablation study for the numbers of positives used in Hard Augment.", "6 augmentations result in good performance increase while maintaining reasonable overhead." ], [ "Multi-Crop Comparison", "Multi-crop augmentation proposed by SwAV [3] uses multiple augmentations with a combination of high resolution and low resolution crops where only high resolution crops are used for back-propagation and low resolution ones are used only as extra comparison targets.", "A slight modification to our Hard Augmentation method will produce an intermediate algorithm useful for a comparison where the additional low resolution augmentations that do not have the highest loss can be used as extra targets.", "We made an ablation study that compares the multi-crop augmentation strategy at the last line of tab:hamulticrop against our method and our method with extra targets.", "We train our method with Extended Super Convergence for 400 epochs and used 4 augmentations for each image.", "Table: Progressive learning speedup with different values for minimum to maximum resolution fractionWe have seen that extra targets are useful when 96 resolution is used for the selection pass and hard augment with extra targets and hard augment outperforms multi-crop augmentation.", "We have also seen that the standard resolution of 64 that is enough for ranking loss pairs in Hard Augment is not enough for generating accurate targets.", "Since increasing selection resolution has a direct effect on the overhead for Hard Augment and the additional improvement that can be obtained from extra targets is small we have decided not to include extra targets in our method." 
], [ "Conclusion", "We have demonstrated that self-supervised training can be accelerated by adapting the learning rate, augmentation and resolution schedules for self supervised training and boosting the training signal by hard positive mining on the augmentations.", "Our method have shows an training speed-up between 2.3 and 2.7 times.", "which allows a much wider community to reproduce and contribute to the self-supervised learning literature, reduce the financial and carbon cost of training these models.", "However there is a risk that using efficient training methods the community will adopt larger and computationally more expensive benchmarks which will eliminate some of the intended benefits of our method.", "As future work we aim to apply our method to recent methods [4], [10] that used Vision Transformer [15] backbones.", "Speed Up Calculation While Super Progressive Learning reduces the number of floating point calculations made by using a smaller resolution, Hard Augment adds an additional overhead for training which can be calculated by equations REF and REF respectively.", "Generally we find that we can reduce the number of steps by 1.7 times while reduce the number of floating point operations at each step by 1.4 times.", "Additional Implementation Details Table: Hyper parameters used in various setups.", "CA: Cosan Annealing, ESC: Extended Super ConvergenceWe have trained all our experiments in the single node setting with maximum 8 GPUs which restricted us to use 512 as the maximum batch size.", "Some of the SSL methods like SimCLR [6] and BYOL [22] have been shown to perform better with larger batch sizes which we could not replicate and restricted ourselves to the same training setting in all our experiments.", "In some of the ablations and analysis experiments where we study the effect of a certain parameter different values can be used.", "For Linear evaluation we use freeze the encoder and re-initialize the last layer and train with batch size 2048 and learning rate 0.8 for 90 epochs using the LARS optimizer.", "Our implementation is based on PyTorch Lightning [16] where we define Hard Augmentation and Super Progressive Learning Schedule as callbacks.", "We adapt the OneCycle Learning rate schedule implementationhttps://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html for Extended Super Convergence.", "We will provide the code for our method upon publication.", "See the table for a full list of parameters tab:hyperparams.", "Learning rate range finder We have used the Learning rate range finder test proposed by [41] to find good values for minimum and maximum learning rates.", "This test increase learning rate exponentially and keeps track of on the training and validation loss.", "The minimum value and maximum value are determined by the point at which the validation loss starts to decrease and the point at which is starts to diverge.", "We calculate the validation loss at each step on a batch of 4096 validation samples in order to keep the test manageable.", "In the test shown in Figure REF we increase the learning rate from 1e-3 to 1e0 in 200 steps.", "Both the training and validation losses plateau close to step 135 and learning rate .1 which we use in our experiments on Imagenette dataset.", "Figure: Learning rate finder plot for training (left) and validation (right) losses Augmentation Resolution Relationship In order to examine the relationship between resolution and augmentation magnitude for self-supervised learning we have conducted 
a series of experiments.", "We trained a BYOL [22] model on the Imagenette dataset using the ResNet18 [24] architecture with RandAugment [12] augmentation with the cosine annealing learning rate schedule where we change the augmentation strength $m$ while we change the input image resolution.", "Each entry in the table is a separate experiment with fixed input resolution and augmentation magnitude.", "The results in Table REF confirms that we can apply higher magnitude augmentations only on higher resolution images and the optimum augmentation magnitude increases with resolution.", "This confirms our intuition behind the relationship between resolution and augmentation magnitude in SSL and provides motivation for applying Super Progressive Learning and allows us to confirm the findings of [44] for SSL.", "Table: Resolution and Augmentation Magnitude relationship shown by training a self-supervised learning method on the combination of resolution and magnitudes and measuring the online classification accuracy of a linear layer Progressive Augmentation Curriculum However what values should be used as the minimum and maximum augmentation magnitude is an important question.", "Since we use the linearly scaled SimCLR [6] augmentations in our method we made a hyper-parameter search on these values in order to determine augmentation magnitudes empirically.", "We train our method for 320 epochs with Super Progressive learning schedule using 128 as the minimum resolution for various augmentation magnitudes and compare against the default augmentation magnitude of 5.", "Our experiment shows that a minimum value of 4 and maximum value of 6 perform better in our setting and outperform the fixed augmentation setting.", "Table: Maximum and minimum augmentation magnitude when trained with Super Progressive learning Future Work This idea can be extended to other modalities where resolution is defined in different manners for example on sound with the sampling rate analogue which in a similar sense accelerates the training however than the amount of acceleration and cost benefit calculation will change.", "A similar case can also be made for tabular data where the most important features are processed first and later additional features are added however this is much more difficult to implement would probably require additional structure.", "Similarly training with a small vocabulary and than enlarging that can have a similar effect as well." ], [ "Speed Up Calculation", "While Super Progressive Learning reduces the number of floating point calculations made by using a smaller resolution, Hard Augment adds an additional overhead for training which can be calculated by equations REF and REF respectively.", "Generally we find that we can reduce the number of steps by 1.7 times while reduce the number of floating point operations at each step by 1.4 times." 
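As a quick consistency check, the two reductions reported above combine into roughly the overall speed-ups quoted in the result tables; the short calculation below only restates figures already given in the text.

steps_reduction    = 1.7   # reduction in the number of training steps (reported above)
per_step_reduction = 1.4   # reduction in floating point operations per step (reported above)
print(steps_reduction * per_step_reduction)   # ~2.4x, consistent with the overall speed-ups reported

# e.g. the ImageNet SimSiam setting: 200 baseline epochs vs 120 accelerated epochs
print(200 / 120)                              # ~1.7x fewer steps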
], [ "Additional Implementation Details", "We have trained all our experiments in the single node setting with maximum 8 GPUs which restricted us to use 512 as the maximum batch size.", "Some of the SSL methods like SimCLR [6] and BYOL [22] have been shown to perform better with larger batch sizes which we could not replicate and restricted ourselves to the same training setting in all our experiments.", "In some of the ablations and analysis experiments where we study the effect of a certain parameter different values can be used.", "For Linear evaluation we use freeze the encoder and re-initialize the last layer and train with batch size 2048 and learning rate 0.8 for 90 epochs using the LARS optimizer.", "Our implementation is based on PyTorch Lightning [16] where we define Hard Augmentation and Super Progressive Learning Schedule as callbacks.", "We adapt the OneCycle Learning rate schedule implementationhttps://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html for Extended Super Convergence.", "We will provide the code for our method upon publication.", "See the table for a full list of parameters tab:hyperparams." ], [ "Learning rate range finder", "We have used the Learning rate range finder test proposed by [41] to find good values for minimum and maximum learning rates.", "This test increase learning rate exponentially and keeps track of on the training and validation loss.", "The minimum value and maximum value are determined by the point at which the validation loss starts to decrease and the point at which is starts to diverge.", "We calculate the validation loss at each step on a batch of 4096 validation samples in order to keep the test manageable.", "In the test shown in Figure REF we increase the learning rate from 1e-3 to 1e0 in 200 steps.", "Both the training and validation losses plateau close to step 135 and learning rate .1 which we use in our experiments on Imagenette dataset.", "Figure: Learning rate finder plot for training (left) and validation (right) losses" ], [ "Augmentation Resolution Relationship", "In order to examine the relationship between resolution and augmentation magnitude for self-supervised learning we have conducted a series of experiments.", "We trained a BYOL [22] model on the Imagenette dataset using the ResNet18 [24] architecture with RandAugment [12] augmentation with the cosine annealing learning rate schedule where we change the augmentation strength $m$ while we change the input image resolution.", "Each entry in the table is a separate experiment with fixed input resolution and augmentation magnitude.", "The results in Table REF confirms that we can apply higher magnitude augmentations only on higher resolution images and the optimum augmentation magnitude increases with resolution.", "This confirms our intuition behind the relationship between resolution and augmentation magnitude in SSL and provides motivation for applying Super Progressive Learning and allows us to confirm the findings of [44] for SSL.", "Table: Resolution and Augmentation Magnitude relationship shown by training a self-supervised learning method on the combination of resolution and magnitudes and measuring the online classification accuracy of a linear layer" ], [ "Progressive Augmentation Curriculum", "However what values should be used as the minimum and maximum augmentation magnitude is an important question.", "Since we use the linearly scaled SimCLR [6] augmentations in our method we made a hyper-parameter search on these values in order 
] ]
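As a rough illustration of the progressive augmentation curriculum described above, the sketch below linearly interpolates a SimCLR-style augmentation magnitude between the searched minimum of 4 and maximum of 6 as the input resolution grows from 128 px to its final value. The mapping from a scalar magnitude to concrete torchvision transform parameters, and the convention that 5 is the default strength, are assumptions made for this example, not the paper's exact implementation.

```python
# Minimal sketch of a progressive augmentation curriculum: augmentation
# strength scales linearly with the current input resolution.
import torchvision.transforms as T

def magnitude_for_resolution(res, res_min=128, res_max=224, m_min=4.0, m_max=6.0):
    """Linearly interpolate the augmentation magnitude from m_min to m_max."""
    t = (res - res_min) / float(res_max - res_min)
    t = min(max(t, 0.0), 1.0)
    return m_min + t * (m_max - m_min)

def simclr_augmentation(res, m, m_default=5.0):
    """Build SimCLR-like augmentations whose strengths scale with m / m_default."""
    s = m / m_default
    return T.Compose([
        T.RandomResizedCrop(res, scale=(max(0.08 / s, 0.02), 1.0)),
        T.RandomHorizontalFlip(),
        T.RandomApply([T.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)], p=0.8),
        T.RandomGrayscale(p=0.2),
        T.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0 * s)),
        T.ToTensor(),
    ])

# Rebuild the transform whenever the resolution schedule advances.
for res in (128, 160, 192, 224):
    aug = simclr_augmentation(res, magnitude_for_resolution(res))
```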
2212.05611
[ [ "Galaxies on graph neural networks: towards robust synthetic galaxy\n catalogs with deep generative models" ], [ "Abstract The future astronomical imaging surveys are set to provide precise constraints on cosmological parameters, such as dark energy.", "However, production of synthetic data for these surveys, to test and validate analysis methods, suffers from a very high computational cost.", "In particular, generating mock galaxy catalogs at sufficiently large volume and high resolution will soon become computationally unreachable.", "In this paper, we address this problem with a Deep Generative Model to create robust mock galaxy catalogs that may be used to test and develop the analysis pipelines of future weak lensing surveys.", "We build our model on a custom built Graph Convolutional Networks, by placing each galaxy on a graph node and then connecting the graphs within each gravitationally bound system.", "We train our model on a cosmological simulation with realistic galaxy populations to capture the 2D and 3D orientations of galaxies.", "The samples from the model exhibit comparable statistical properties to those in the simulations.", "To the best of our knowledge, this is the first instance of a generative model on graphs in an astrophysical/cosmological context." ], [ "Introduction", "Upcoming astronomical imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST)https://www.lsst.org/, Roman Space Telescopehttps://roman.gsfc.nasa.gov/ High Latitude Survey (HLS) and Euclidhttps://www.euclid-ec.org/ will aim to answer fundamental questions about the nature of dark matter and dark energy, by precisely measuring the distribution and properties of billions of galaxies.", "The analysis of these surveys will require having an access to large scale cosmological simulations for a variety of applications, ranging from validating analysis pipelines [5], [6] to constraining cosmology through Simulation-Based Inference [10].", "However, as the volume and data quality of future surveys increases, cosmological simulation must cover increasingly large volumes at high resolution [26].", "Full hydrodynamical simulations, which can resolve the formation and evolution of individual galaxies, are extremely expensive and cannot scale to such volumes.", "This motivates the need for emulation methods capable of generating realistic mock galaxy catalogs without relying on a full simulation.", "One traditional solution to this problem has been to (semi-)analytically paint galaxies on N-body (gravity only) simulations.", "However, the assumptions behind these (semi-)analytical models may not be robust and need validation from non-parametric models [22].", "While machine learning would be an appealing solution to this problem, One of the main difficulties in building ML-based non-parametric models for such simulations is the fact that the data to emulate is catalog-based (i.e., a catalog of galaxy positions and properties in the simulation volume) and that each galaxy cannot be treated independently if important correlations between galaxies are to be preserved.", "In this work, we propose to address this emulation problem with a conditional deep generative model, capable of modeling relevant galaxy properties and their inter-dependencies, conditioned on the underlying large-scale structure scaffolding.", "Our model combines a customized graph convolutional network architecture, to model the correlations between galaxies, with a Wasserstein Generative Adversarial 
Network to build a probabilistic model of galaxy properties.", "To the best of our knowledge, this work is the first instance of a deep generative model on graphs introduced in astrophysics.", "We apply our model to the particularly challenging problem of modeling the intrinsic alignments of galaxies.", "The model is able to learn and predict both (a) scalar features such as galaxy shapes, and more importantly, (b) correlated vector orientations in 3D and in 2D to a good quantitative agreement." ], [ "Related Work", "In the literature there is substantial work on galaxy property emulators.", "In some cases [1], [14], [27] the approach has been to 'paint' galaxy properties onto N-body (dark matter – gravity only) simulations.", "However, these methods typically do not model correlations between galaxies and only predict scalar quantities, as opposed to our model which predicts both vector and scalar quantities.", "While graph neural networks have been proposed in the context of cosmological simulations in previous work, it is the first time that they are used to build generative models for galaxy properties.", "[4] trained a graph neural network on a cosmological simulation and then extracted symbolic equations pertaining to physical laws.", "[25], on the other hand, used graph neural networks to infer halo masses." ], [ "Directional Graph Convolutional Networks", "A key feature of our problem is that the neural architecture needs to model direction- and distance-dependent correlations between galaxies.", "Instead of relying on a generic message-passing approach to build a graph neural network, we use our physical insight to build an architecture with explicit dependence on relative distance and orientation between graph nodes, as described below.", "In this work, we are considering undirected and connected graphs: $\\mathcal {G} = (\\mathcal {V} , \\mathcal {E}, \\mathbf {W})$ , where $\\mathcal {V}$ is the set of graphs vertices, with $\\left|\\mathcal {V} \\right|= n$ the number of vertices, $\\mathcal {E}$ is the set of graph edges and $\\mathbf {W} \\in \\mathbb {R}^{n \\times n}$ is the weighted adjacency matrix.", "We adopt a first order approximation to parameterize graph convolutions [12], and define one Graph Convolutional Network layer with an activation $y_i$ for a node $i$ as: $\\forall i \\in \\mathcal {V}, \\ y_i = \\mathbf {b} + \\mathbf {W}_0 h_i + \\sum _{j \\in \\mathcal {N}_i} w_{i, j} \\mathbf {W}_1 h_j$ where $\\mathbf {b} $ represents a vector of bias terms; $\\mathcal {N}_i$ denotes the set of immediate neighborsImmediate neighbors or first neighbors are neighbors that are one hop away from node $i$ .", "of vertex $i$ ; $\\mathbf {W}_0$ are the weights that apply a linear transform to the activation vector $h_i$ of node $i$ (i.e., self connection); $w_{i, j}$ are linear transforms on the activation vectors $h_j$ of the nodes $j$ in the neighborhood of $i$ ; and $\\mathbf {W}_1$ are the set of weights that apply to the immediate neighbors.", "Following an approach proposed in [24], we implement direction-dependent graph convolution layer as $y_i= \\mathbf {b} + \\mathbf {W}_0 h_i + \\sum \\limits _{m=1}^{M} \\frac{1}{|\\mathcal {N}_i|} \\sum _{j \\in \\mathcal {N}_i} q_m(\\mathbf {r}_i, \\mathbf {r}_j) \\mathbf {W}_m h_j.$ Here $|\\mathcal {N}_i|$ denotes the cardinality of the set $\\mathcal {N}_i$ , $M$ is the number of directions, and $\\mathbf {r}_i$ are the 3D Cartesian coordinates of the node.", "The $q_m(\\mathbf {r}_i, \\mathbf {r}_j)$ are normalized so that 
$\\sum _{m=1}^{M} q_m(\\mathbf {r}_i, \\mathbf {r}_j) = 1$ and are defined as: $q_m(\\mathbf {r}_i, \\mathbf {r}_j) \\propto \\exp (- \\mathbf {d}_m^t \\cdot (\\mathbf {r}_j - \\mathbf {r}_i)) \\ g_\\lambda ( \\parallel \\mathbf {r}_i - \\mathbf {r}_j \\parallel _2^2),$ where the $\\lbrace \\mathbf {d}_m \\rbrace _{m \\in [1,M]}$ are a set of directions we want to make the kernel sensitive to, and $g_\\lambda $ is a parametric function of the distance between two vertices.", "This can be seen as a hard-coded direction-dependent attention mechanism allowing the model to gain directional awareness by design.", "We further chose an exponential parametrization of the form $g_\\lambda (r) = \\exp ( - r^2/2\\lambda ^2)$ for the distance-dependence, where $\\lambda $ is fit automatically during training.", "Note that more generic functions could be used, but this empirical parametrization was found to work well for our problem." ], [ "Generative Model with Graphs", "Our goal is to learn and sample from a conditional probability density $p(\\mathbf {y} | \\mathbf {x})$ , where $\\mathbf {y}$ might be an orientation of a galaxy, and $\\mathbf {x}$ would be quantities such as the dark matter mass of a galaxy or the tidal field at its location.", "We model this distribution by employing a conditional Wasserstein generative adversarial network (GAN, [7]).", "GANs were chosen to model complex joint probability densities of all galaxies in a halo, without needing a parametric form/probability model.", "Given a generating function $g_\\theta (z , \\mathbf {x})$ with $z \\sim \\mathcal {N}(0, \\mathbf {I})$ , we aim to adjust the implicit distribution generated by $g_\\theta $ to match our target distribution $p(\\mathbf {y} | \\mathbf {x})$ .", "This can be done by minimizing the Wasserstein 1-distance $\\mathcal {W}$ between these two distributions to find an optimal set of weights $\\theta _{\\star }$ .", "By using an approximate Wasserstein distance, we are solving the following minimax optimization problem: $\\operatornamewithlimits{arg\\,min}\\limits _{\\theta } \\left( \\sup _\\phi \\mathbb {E}_{(x, y)} \\left[ d_\\phi (\\mathbf {x}, \\mathbf {y}) - \\mathbb {E}_{z} \\left[ d_\\phi (g_\\theta (\\mathbf {z}, \\mathbf {x}), \\mathbf {y}) \\right] \\right] \\right)$ Additionally, we must keep the Lipschitz constant bounded, to ensure that $d_\\phi $ indeed parameterizes a Wasserstein distance.", "In [2], the authors have clipped the weights of the model to ensure the Lipschitz condition.", "Later [8] showed that the gradient constraint performs better – thus in this work we adopt a gradient penalty." 
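A minimal PyTorch sketch of the direction-dependent convolution defined above may help make the construction concrete: each neighbour is weighted by a softmax over the M fixed directions, modulated by the learnable Gaussian distance kernel, and the result is averaged over the neighbourhood as in the layer equation. The dense-adjacency implementation, the six axis-aligned directions in the usage comment, and the exact way the normalization of q_m is realized are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class DirectionalGraphConv(nn.Module):
    """One layer of the direction- and distance-aware graph convolution
    sketched above (dense adjacency, single graph, no batching)."""

    def __init__(self, in_dim, out_dim, directions):
        super().__init__()
        self.register_buffer("d", directions)             # (M, 3) direction vectors
        self.self_lin = nn.Linear(in_dim, out_dim)         # W_0 and bias b
        self.nbr_lins = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(len(directions))
        )                                                   # W_m
        self.log_lambda = nn.Parameter(torch.zeros(()))     # lambda is fit in training

    def forward(self, h, pos, adj):
        # h: (n, in_dim) features, pos: (n, 3) positions, adj: (n, n) 0/1 adjacency
        rel = pos[None, :, :] - pos[:, None, :]             # r_j - r_i
        g = torch.exp(-(rel ** 2).sum(-1) / (2 * torch.exp(self.log_lambda) ** 2))
        # directional weights: softmax over the M directions (one reading of the
        # normalization sum_m q_m = 1), modulated by the distance kernel g_lambda
        q = torch.softmax(-torch.einsum("md,ijd->mij", self.d, rel), dim=0)
        q = q * (g * adj)[None]                             # (M, n, n)
        inv_deg = 1.0 / adj.sum(-1).clamp(min=1)             # 1/|N_i|
        out = self.self_lin(h)
        for m, lin in enumerate(self.nbr_lins):
            out = out + inv_deg[:, None] * (q[m] @ lin(h))
        return out

# Example usage with six axis-aligned directions (an illustrative choice):
# dirs = torch.cat([torch.eye(3), -torch.eye(3)])
# layer = DirectionalGraphConv(in_dim=5, out_dim=16, directions=dirs)
```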
], [ "Application: Emulating Galaxy Intrinsic Alignments in Illustris-TNG simulations", "Images of distant galaxies come to us with distortions, excluding camera and atmospheric effects.", "These distortions are caused by a phenomena known as weak gravitational lensing, where light traveling from the galaxy gets deflected due to the presence of a massive objects (like a galaxy cluster) on the light's pathway.", "Weak lensing is measured using statistical ensembles of galaxies and their coherent shape distortions, which are caused by the matter distribution in the Universe coherently distorting space-time.", "However, weak lensing measurements suffer from a number of systematics, one of which is intrinsic alignments - where galaxies are not oriented randomly in the sky in the absence of weak lensing effects, but rather tend to point towards dense regions, including those hosting other galaxies.", "This effect contaminates our desired weak lensing signal and can bias our measurement of dark energy.", "Realistic modeling of these alignments in mock galaxy catalogs is therefore paramount to validate the robustness of analysis pipelines." ], [ "Simulated data", "In this work we are using the hydrodynamical TNG100-1 run from the IllustrisTNG simulation suite [17], [18], [23], [15], [13], [16].", "We employ a minimum stellar mass threshold of $ \\log _{10}(M_*/M_\\odot ) =9 $ for all galaxies, using their stellar mass from the SUBFIND catalog." ], [ "Graph construction", "To construct the graph for the cosmic web (i.e., for the subhalos and the galaxies), we first grouped all subhalos and galaxies based on their parent halo.", "Given a galaxy catalog, an undirected graph based on the 3D positions within the parent halo is built by placing each galaxy on a graph node.", "Then, each node has a list of features such as mass and tidal field.", "To build the graph connection for a given group, the nearest neighbors within a specified radius for a given node are connected via the undirected edges with signals on the graphs representing the alignments.", "A snapshot of the simulation represented as graphs is shown in Fig.", "REF , where each node represents a galaxy and the connections between the nodes are represented by grey lines.", "Figure: The cosmological simulation box modeled as a graph.", "Here every node represents a galaxy and the connections are made using the r-NN algorithm within each gravitationally bound system.", "Graph animation adapted from .Figure: Architecture of the graph convolution GAN model used.", "Here, the input features are typical of medium to high resolution N-body (gravity only) simulations.", "The 𝐞=(e 1 ,e 2 )\\mathbf {e} = (e_1, e_2) is the 2D quantity that parametrizes the galaxy orientation in the sky." 
], [ "Model Architecture", "In Fig.", "REF we outline the architecture of our model.", "We have list of features (orange box) that are relevant for capturing the dependence of intrinsic alignments within a halo (dashed red box), and the tidal fields that are relevant for capturing the dependence of IA for galaxies on matter beyond their halo (dashed purple).", "These inputs are fed into the GAN-Generator (crimson box), which tries to learn the statistical distribution of the desired output labels (yellow box).", "At the end the input and the output from the GAN-Generator are fed into the GAN-Critic (blue box) to determine the performance of the GAN-Generator.", "In our model, the Generator has 5 layers each with {128,128,16,2,2} neurons, while the Critic has 4 layers each with {128,128,64,32} neurons followed by a mean-pooling layer and a single output neuron." ], [ "Training", "We train the model using the Adam optimizer [11] with a learning rate of $10^{-3}$ and exponential decay rates of $\\beta _1 = 0$ and $\\beta _2 = 0.95$ .", "During the adversarial training we train the Generator for 5 steps and the Critic for 1 step with a batch size of 64 (one batch is set of graphs) and a leaky ReLU activation function.", "As is common with GANs, our GAN models do not converge; we arbitrarily stop the training once it reached a reasonable result.", "Our code is available at []." ], [ "Results ", "Throughout the section we refer to the sample generated from the Graph-Convolutional Network-based Generative Adversarial Networks as the GAN sample, and the sample from the TNG100 simulation as the TNG sample.", "Figure: Projected two-point correlation functions w g+ w_{g+} of galaxy positions and the projected 2D ellipticities of all galaxies, split into roughly equal-sized training and testing samples while preserving group membership.", "The top panel shows w g+ w_{g+} as a function of projected galaxy separation r p r_p measured using data from the TNG simulation in yellow and the data generated by the GAN in dotted green, while the bottom panel shows the ratios among the curves as indicated by the label.", "All four curves are in good quantitative agreement, suggesting that the GAN is not significantly overfitting.For our key result, we examine $w_{g+}$ , the density-shape correlation function computed using the ellipticities (can be thought of as the 2D orientation and flattening of a galaxy when modeled as an ellipse), as shown in Fig.", "REF .", "The projected density-shape correlation function $w_{g+}$ captures the correlation between overdensity and projected intrinsic ellipticity, as is commonly used in observational studies.", "Positive values for $w_{g+}$ indicate that galaxies exhibit a coherent alignment towards the locations of other nearby galaxies.", "We split our sample roughly 50/50 into training and testing samples, while preserving group membership of subhalos and galaxies.", "The projected 2D correlation function, $w_{g+}$ , from the GAN agrees quantitatively with the measured one from TNG simulation.", "Here, the errorbars were derived from an analytic estimate of the covariance matrix, which includes Gaussian terms for noise and cosmic variance (for more details see [21], [19]).", "Additionally, our model is also able to predict 3D orientations and scalar quantities to a similar level of quantitative agreement." 
], [ "Conclusions", "In this abstract, we have presented a novel deep generative model for scalar and vector galaxy properties.", "Using the TNG100 hydrodynamical simulation from the IllustrisTNG simulation suite, we have trained the model to accurately predict galaxy orientations.", "For a simulation box of 75 Mpc/h with 20k galaxies, the training takes about 3–4 days on a modern GPU; applying the model to a dataset of equal size is very fast (less than a minute).", "Overall, the Graph Convolution based Generative Adversarial network learns and generates scalar and vector quantities that have statistical properties (distributions and alignment correlations) that agree well with those of the original simulation.", "Learning galaxy orientations is part of a more general problem called Galaxy-Halo connection.", "The problem can be stated as follows: given some properties of a dark matter halo can we predict what type of galaxy it hosts, or vice versa?", "Our results represent a concrete step towards addressing this complex problem with Graph Neural Network-based Deep Generative Models.", "Future work includes applying this model on a much higher volume N-body simulation with lower resolution in order to utilize the power of deep generative models for upcoming cosmological surveys.", "Additionally, incorporating symmetries such as SO(3) or E(3) and making equivariant neural networks for graphs [9], [20] would be a useful development." ], [ "Acknowledgements", "We thank Ananth Tenneti, Tiziana DiMatteo, Barnabas Poczos and Rupert Croft for useful discussion that informed the direction of this work.", "This work was supported in part by the National Science Foundation, NSF AST-1716131 and by a grant from the Simons Foundation (Simons Investigator in Astrophysics, Award ID 620789).", "SS is supported by a McWilliams postdoctoral fellowship at Carnegie Mellon University.", "This work is supported by the NSF AI Institute: Physics of the Future, NSF PHY- 2020295." ] ]
2212.05596
[ [ "Virial expansion for the optical response of doped two-dimensional\n semiconductors" ], [ "Abstract We present a quantum virial expansion for the optical response of a doped two-dimensional semiconductor.", "As we show, this constitutes a perturbatively exact theory in the high-temperature or low-doping regime, where the electrons' thermal wavelength is smaller than their interparticle spacing.", "The virial expansion predicts new features of the photoluminescence, such as a non-trivial shape of the attractive branch related to universal low-energy exciton-electron scattering and an associated shift of the attractive peak from the trion energy.", "Our results are in excellent agreement with recent experiments on doped monolayer MoSe2 [Zipfel et al., Phys.", "Rev.", "B 105, 075311 (2022)] and they imply that the trion binding energy is likely to have been overestimated in previous measurements.", "Our theory furthermore allows us to formally unify two distinct theoretical pictures that have been applied to this system, with the conventional trion picture results emerging as a high-temperature and weak-interaction limit of Fermi polaron theory." ], [ "Virial expansion for the optical response of doped two-dimensional semiconductors Brendan C. Mulkerin B. C. M. and A. T. contributed equally to this work.", "School of Physics and Astronomy, Monash University, Victoria 3800, Australia ARC Centre of Excellence in Future Low-Energy Electronics Technologies, Monash University, Victoria 3800, Australia Antonio Tiene B. C. M. and A. T. contributed equally to this work.", "Departamento de Física Teórica de la Materia Condensada & Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Madrid 28049, Spain School of Physics and Astronomy, Monash University, Victoria 3800, Australia Francesca Maria Marchetti Departamento de Física Teórica de la Materia Condensada & Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Madrid 28049, Spain Meera M. 
Parish School of Physics and Astronomy, Monash University, Victoria 3800, Australia ARC Centre of Excellence in Future Low-Energy Electronics Technologies, Monash University, Victoria 3800, Australia Jesper Levinsen School of Physics and Astronomy, Monash University, Victoria 3800, Australia ARC Centre of Excellence in Future Low-Energy Electronics Technologies, Monash University, Victoria 3800, Australia We present a quantum virial expansion for the optical response of a doped two-dimensional semiconductor.", "As we show, this constitutes a perturbatively exact theory in the high-temperature or low-doping regime, where the electrons' thermal wavelength is smaller than their interparticle spacing.", "The virial expansion predicts new features of the photoluminescence, such as a non-trivial shape of the attractive branch related to universal low-energy exciton-electron scattering and an associated shift of the attractive peak from the trion energy.", "Our results are in excellent agreement with recent experiments on doped monolayer MoSe$_2$ [Zipfel et al., Phys.", "Rev.", "B 105, 075311 (2022)] and they imply that the trion binding energy is likely to have been overestimated in previous measurements.", "Our theory furthermore allows us to formally unify two distinct theoretical pictures that have been applied to this system, with the conventional trion picture results emerging as a high-temperature and weak-interaction limit of Fermi polaron theory.", "The problem of a quantum impurity interacting with a background medium represents a paradigmatic example of a strongly correlated many-body system.", "First considered in the context of electrons moving in a crystal lattice [1], the quantum impurity (polaron) problem has since been generalized to many different systems across a wide range of energy scales, including magnetic impurities [2], quantum mixtures in ultracold atomic gases [3], [4], [5], and even protons in neutron stars [6].", "Most recently, it has provided key insights into the optical response of doped two-dimensional (2D) semiconductors [7], [8], [9], [10], [11], [12], [13], [14], [15], [16].", "Here, an optically excited exciton (bound electron-hole pair) is immersed in a fermionic medium of charge carriers (electrons or holes), leading to the appearance of two peaks in the optical response — a “repulsive polaron” and an “attractive polaron” — at energies which evolve, respectively, into those of the exciton and a trion (charged exciton bound state) in the limit of vanishing doping.", "Such exciton polarons have attracted a large amount of interest since they can be realized in atomically thin transition metal dichalcogenides (TMDs), where there is the prospect of technological applications involving both charge doping and coupling to light [17], [18].", "However, there is an ongoing debate about whether the optical response should be described within a Fermi-polaron picture [7], [8], [9], [10], [11], [12], where excitons are coherently dressed by excitations of the medium to form new polaronic quasiparticles [19], or whether it is better to use a more conventional trion picture [20] which involves independent few-body states (excitons and trions).", "The two pictures have been shown to give indistinguishable results for some observables (e.g., the oscillator strength) at low charge doping, but this requires an intrinsic exciton linewidth that exceeds the Fermi energy of the charge carriers [21].", "On the other hand, in the context of ultracold atoms, it is known that there 
is a density-driven transition between a Fermi polaron and few-body bound states equivalent to trions [4].", "Importantly, the nature of the exciton polaron has implications for other properties such as the transport of optical excitations under an electric field [22].", "In this Letter, we resolve this question and reveal that these two pictures are in fact connected when we account for the crucial role played by temperature.", "We employ a quantum virial expansion [23] for the optical response, which is a perturbatively exact theory when the temperature $T$ greatly exceeds the Fermi energy $E_F$ , and is therefore applicable at high temperature and/or low doping.", "We show that this corresponds to a limit of the Fermi-polaron picture where the coherent dressing cloud of the attractive polaron quasiparticle is destroyed by thermal fluctuations (see Ref.", "[24] for details), in contrast to the situation at lower temperatures.", "We demonstrate that the virial expansion predicts hitherto unrecognized features in photoluminescence (PL) such as a non-trivial behavior of the attractive peak near the trion energy related to 2D resonant exciton-electron scattering, and a Lorentzian repulsive peak, as illustrated in Fig.", "REF (a).", "This in turn implies that the trion binding energy is likely to have been overestimated by about 10$\\%$ in experiment.", "We compare our results to a recent experiment on a doped MoSe$_2$ monolayer [16], and find excellent agreement.", "Finally, we show analytically that the virial expansion reduces to the predictions of the trion picture in the limit where $E_F\\rightarrow 0$ .", "Model.—We model a doped 2D semiconductor using the following Hamiltonian for excitons and excess charge carriers: $\\hat{H}= \\sum _{{\\bf k}} \\left(\\epsilon _{{\\bf k}}\\hat{c}^{\\dag }_{{\\bf k}}\\hat{c}_{{\\bf k}}+\\epsilon _{X{\\bf k}}\\hat{x}^{\\dag }_{{\\bf k}}\\hat{x}_{{\\bf k}} \\right) - \\sum _{{\\bf k}{\\bf k}'{\\bf q}} v_{\\bf q}\\, \\hat{x}^{\\dag }_{{\\bf k}}\\hat{c}^{\\dag }_{{\\bf k}'}\\hat{c}_{{\\bf k}'+{\\bf q}}\\hat{x}_{{\\bf k}-{\\bf q}} \\,.$", "Since the optically generated exciton is tightly bound, we treat it as a structureless boson, with corresponding operator $\\hat{x}^{\\dag }_{{\\bf k}}$ , mass $m_X$ and free-particle dispersion $\\epsilon _{X{\\bf k}}=|{\\bf k}|^2/2m_X \\equiv k^2/2m_X$ , where the energy is measured from that of the 1$s$ exciton at rest.", "The fermionic operator $\\hat{c}^{\\dag }_{{\\bf k}}$ creates charge carriers (electrons or holes) with mass $m$ and dispersion $\\epsilon _{{\\bf k}}=k^2/2m$ .", "For simplicity, we generally assume the charge carriers are electrons, but note that our results equally hold for the hole-doped case.", "We also ignore the spin/valley degree of freedom and consider spin-polarized electrons that are distinguishable from the electron within the exciton, since this is sufficient to describe the polaron and trion physics in TMDs such as MoSe$_2$ monolayers [8].", "Here, and in the following, we set $\\hbar = k_B= 1$ and work in a system of unit area.", "The second term in Eq.", "(REF ) describes the attractive charge-(induced) dipole interactions between electrons and excitons, which give rise to a trion bound state [25], [12].", "Note that we can treat the trion as an effective two-body (electron-exciton) bound state since the exciton binding energy exceeds the trion binding energy $\\varepsilon _T$ by an order of magnitude in TMDs [18].", "Furthermore, the potential $v_{\\bf q}$ is sufficiently short ranged that it can be described with a low-energy $s$ -wave scattering amplitude [26].", "We neglect the interactions between electrons since these are not necessary to describe the trion bound state and they do not contribute to the leading order behavior in the high-temperature limit, $T/E_F\\gg 1$ , as
we discuss below.", "While we formulate our results in the language of excitons in a Fermi sea of charge carriers such as electrons, our results apply more generally to any dilute gas of impurities interacting via short-range interactions with a 2D Boltzmann gas.", "In particular, the features of the spectrum discussed below would also be observable in cold-atom experiments on 2D Fermi gases [27], [28], [29], and our theory can straightforwardly be extended to the three-dimensional case.", "Figure: (a) Schematic illustration (solid [blue] lines) of the key features of PL from a doped semiconductor at low doping and/or high temperature where $E_F\\ll T$. For the attractive branch, this includes the exponential tail related to electron recoil, the shape of the onset due to resonant electron-exciton scattering, and the shift of the peak from the trion energy.", "Here, we have neglected any additional exciton broadening due to effects beyond those described in the Hamiltonian (), such as disorder and radiative recombination.", "The (gray) dashed lines are the predictions from the conventional trion theory .", "(b) Leading-order contribution to the exciton self-energy within the virial expansion, describing the interaction between the exciton (red line) and an electron (blue line).", "(c) Diagrams contributing to the two-body $T$ matrix (square) due to the exciton-electron interaction potential (wavy line).", "Photoluminescence.— The starting point of our analysis is the detailed balance relation [31], [32], [33], [34], [35] between optical absorption, which is proportional to the exciton spectral function $A(\\omega )$ , and photoluminescence $P(\\omega )$ , $P(\\omega )=e^{-\\beta \\omega }A(\\omega )$ , with $\\beta \\equiv 1/T$ .", "This expression is valid (up to an unimportant frequency-independent prefactor [24]) within linear response for a system in thermal equilibrium, under the assumption of a low density of excitons such that they can effectively be treated as uncorrelated.", "The spectral function is related to the (retarded) exciton Green's function $G_X$ via $A(\\omega )=-\\frac{1}{\\pi }\\Im G_X(\\omega +i0)$ , where the factor $+i0$ signifies that here and in the following the poles are shifted slightly into the lower half of the complex $\\omega $ plane [36] (at this stage, we do not explicitly introduce the exciton linewidth which can mask the intrinsic features of the PL).", "The Green's function in turn satisfies the Dyson equation $G_X(\\omega )=\\frac{1}{\\omega -\\Sigma (\\omega )}$ , in terms of the self-energy $\\Sigma (\\omega )$ for the zero-momentum exciton.", "Calculating the PL thus amounts to obtaining the self-energy which, in the general case of a strongly correlated system, can only be done approximately.", "Virial expansion.—A key insight is that we can apply the quantum virial expansion to the exciton self-energy at finite temperature.", "Specifically, this corresponds to a systematic expansion in powers of the fugacity $z=e^{\\beta \\mu }$ , where $\\mu $ is the chemical potential of the background Fermi gas.", "In the high-temperature/low-doping regime $T\\gtrsim E_F$ , we have $z \\lesssim 1$ , allowing us to perform an exact perturbative expansion around the ideal Boltzmann gas limit of the medium (where $z \\simeq \\beta E_F$ ).", "The virial expansion has been extensively used in other contexts.", "For instance, to obtain thermodynamic quantities and quantum corrections to the equation of state in condensed matter physics [37], [38], nuclear physics [39] and ultracold gases [40], [23].", "It has also been used to calculate response
functions for atomic gases [41], [42], [43], [44], [45], [46], [47], magnetic impurities [48], magnons [49] and Coulomb systems [50].", "At lowest order in $z$ , only two-point correlations involving the exciton and electrons are present in the self-energy, corresponding to all the ladder diagrams depicted in Fig.", "REF (b,c).", "Crucially, higher $N$ -point correlations with $N >2$ only enter at higher order in $z$ since they require multiple electrons to be scattered from the medium, where each medium excitation is weighted by $z$  [51], [46].", "This furthermore means that we can neglect electron-electron interactions if we work at lowest order in $z$ , where the medium corresponds to a Boltzmann gas.", "Thus, the leading order exciton self-energy takes the form $\\Sigma (\\omega )= z \\sum _{{\\bf q}} e^{-\\beta \\epsilon _{{\\bf q}}} \\mathcal {T}({\\bf q}_r;{\\bf q},\\omega +\\epsilon _{\\bf q}) \\, .$ Here the $T$ matrix $\\mathcal {T}({\\bf k};{\\bf Q},\\omega )$ describes the sum of repeated scattering processes between an exciton and an electron in vacuum, where ${\\bf Q}$ and $\\omega $ are the total momentum and energy, respectively, while ${\\bf k}$ is the electron-exciton relative momentum (where the incoming and outgoing momenta are equal).", "Note that, due to Galilean invariance, the center-of-mass and relative contributions separate.", "For a zero-momentum exciton with energy $\\omega $ and an electron with kinetic energy $\\epsilon _{{\\bf q}}$ , as in Eq.", "(REF ), the center-of-mass momentum is simply ${\\bf q}$ and the relative momentum ${\\bf q}_r = {\\bf q}m_X/m_T$ , with $m_T = m+m_X$ .", "In principle, one can obtain the self-energy (REF ) for an arbitrary electron-exciton potential $v_{\\bf q}$ by determining the $T$ matrix in Fig.", "REF (c).", "However, since the relevant energy scales in TMDs (i.e., $T$ , $E_F$ , and the trion binding energy $\\varepsilon _T$ ) are smaller than that set by the range of $v_{\\bf q}$ , the $T$ matrix is well approximated by its low-energy $s$ -wave form [52] $\\mathcal {T}({\\bf q}_r;{\\bf q},\\omega ) \\simeq \\mathcal {T}_0({\\bf q},\\omega ) = \\frac{2\\pi }{m_r} \\frac{1}{\\ln \\left[-\\varepsilon _T/(\\omega -\\epsilon _{T{\\bf q}})\\right]}\\,,$ with the reduced electron-exciton mass $m_r=m m_X/m_T$ and the center-of-mass (trion) dispersion $\\epsilon _{T{\\bf q}}=q^2/2m_T$ .", "This is independent of the relative momentum and coincides with the limit of a zero-range potential.", "Together, Eqs.", "(REF -REF ) and (REF ) allow us to straightforwardly calculate the optical response in the virial expansion.", "The expressions in Eqs.", "(REF ) and (REF ) are in fact equivalent to the celebrated Chevy ansatz [19] for the Fermi polaron when we consider its finite-temperature generalization [53] and take the limit $z \\ll 1$ (see also the accompanying paper [24] for details).", "Thus, our approach is continuously connected to the Fermi-polaron picture of excitons in doped semiconductors which is based on the zero-temperature Chevy ansatz [8], [9], [12].", "Features of the photoluminescence spectrum.—The resulting exciton spectral function contains two well-separated peaks, as illustrated in Fig.", "REF (a): a repulsive branch centered close to the exciton at $\\omega =0$ , and an attractive branch that is peaked for frequencies $\\omega \\lesssim -\\varepsilon _T$ .", "For the repulsive branch, the leading order in $z$ is obtained by taking $\\omega =0$ in the self-energy.", "In this case we find, to logarithmic accuracy in $\\beta \\varepsilon _T$ (this assumes that the typical kinetic energy in Eq.", "(REF ) is set by the temperature, with logarithmic corrections neglected;", "the
inclusion of $\\gamma _{\\rm E}$ is found by expanding the special function called $\\nu (x)$ around the limit $T\\rightarrow \\infty $ and utilizing the Laplace transform of the $\\sin (x)$ function), $\\Sigma _{\\rm rep}(0)\\simeq \\frac{E_F\\left(m/m_r\\right)}{\\pi ^2+\\ln ^2\\left(e^{\\gamma _{\\rm E}}\\beta \\varepsilon _T\\right)}\\left[\\ln \\left(e^{\\gamma _{\\rm E}}\\beta \\varepsilon _T\\right)-i\\pi \\right]\\,,$ with $\\gamma _{\\rm E} \\simeq 0.5772$ the Euler-Mascheroni constant.", "In the regime $T \\lesssim \\varepsilon _T$ , which is the situation in most current experiments, the dominant contribution to the attractive branch arises from the pole of the $T$ matrix when $\\omega =-\\varepsilon _T+\\epsilon _{T{\\bf q}}$ , related to the trion bound state.", "Expanding Eq.", "(REF ) around the pole gives $\\mathcal {T}_0({\\bf q},\\omega +i0) \\simeq \\frac{Z_T}{\\omega -\\epsilon _{T{\\bf q}}+\\varepsilon _T+i0}\\,,$ with $Z_T=\\frac{2\\pi \\varepsilon _T}{m_r}$ the residue at the pole.", "This allows us to obtain the self-energy by straightforward contour integration, with the result $\\Sigma _{\\rm att}(\\omega ) \\simeq -z \\varepsilon _T \\left( \\frac{m_T}{m_X} \\right)^2 e^{\\beta \\frac{m_T}{m_X} (\\omega + \\varepsilon _T)} \\left[{\\rm Ei}\\left(- \\beta \\frac{m_T}{m_X} (\\omega + \\varepsilon _T)\\right)+i\\pi \\,\\Theta (-\\omega -\\varepsilon _T)\\right]\\,,$ where $ {\\rm Ei}\\left( x \\right)$ is the exponential integral.", "Note that the pole expansion of the $T$ matrix is exact for the imaginary part of $ \\Sigma _{\\rm att}$ but only approximate for the real part, with the latter becoming exact close to the trion energy.", "Combining these results yields the spectral function $A(\\omega )$ and, from Eq.", "(REF ), the PL $P(\\omega )\\simeq -\\frac{1}{\\pi }e^{-\\beta \\omega }\\,\\Im \\left[\\frac{\\Theta (-\\omega -\\varepsilon _T)}{\\omega -\\Sigma _{\\rm att}(\\omega )}+\\frac{1}{\\omega -\\Sigma _{\\rm rep}(0)}\\right]\\,,$ in terms of the self-energies in Eqs.", "(REF ) and (REF ).", "Equation (REF ) is a key result of this work.", "We see that the repulsive branch is a Lorentzian peak at $\\omega = \\Re \\Sigma _{\\rm rep}(0)$ , where both the width and position scale with $E_F$ , similar to Fermi polaron theories [55], [56], [9].", "However, for the attractive branch, we find that we cannot satisfy the condition $\\omega = \\Re \\Sigma _{\\rm att}(\\omega )$ , indicating that there is no attractive polaron quasiparticle in the limit $z\\ll 1$ , unlike for the quantum degenerate case $z>1$  [24].", "Instead, we have an asymmetric continuum of trion states, with a sharp onset at $\\omega =-\\varepsilon _T$ and an exponential tail involving trions and recoil electrons at finite relative momentum, where $P(\\omega )\\propto e^{\\beta \\omega m/m_X}/\\omega ^2$ for $-\\omega \\gg \\varepsilon _T$ in agreement with Ref. [30].", "Moreover, in the limit of an infinitely heavy exciton, we see that the tail in PL loses its exponential dependence, becoming a power law, unlike in the case of absorption.", "The shape of the onset is dictated by 2D resonant electron-exciton scattering at the trion energy, leading to a universal logarithmic divergence in the self-energy: $\\Sigma _{\\rm att}(\\omega \\lesssim -\\varepsilon _T)\\simeq -z \\varepsilon _T ( \\tfrac{m_T}{m_X} )^2 \\left[\\ln \\left( -e^{\\gamma _{\\rm E}}\\beta \\tfrac{m_T}{m_X}(\\omega +\\varepsilon _T) \\right) +i\\pi \\right]$ .", "Previous trion theories of PL [30], [57], [16] focussed on the imaginary part of the self-energy, as we show below, and thus appear to have missed this divergence in the real part.", "Figure: Photoluminescence in a hole-doped MoSe$_2$ monolayer.", "(a) Frequency difference between attractive and repulsive peaks as a function of temperature.", "The black squares are the experimental peak positions obtained from Ref.
.", "The blue and red shaded regions correspond to the results of the virial expansion using binding energies ε T =22.5\\varepsilon _T=22.5 meV and 23.523.5 meV, respectively.", "The solid lines correspond to the experimental hole density n h =0.5×10 11 n_h=0.5\\times 10^{11} cm -2 ^{-2}, and the lower and upper bounds of each shaded region to densities of 0.25×10 11 0.25\\times 10^{11} cm -2 ^{-2} and 10 11 10^{11} cm -2 ^{-2}, respectively.", "(b) Comparison between theoretical (solid dark) and experimental  (solid light) photoluminescence spectra (arbitrary units and vertical offset) for the attractive branch at different lattice temperatures.", "The theoretical spectra were obtained by convolving Eq.", "() with a Lorentzian of width 1 meV  and using ε T =22.5\\varepsilon _T=22.5 meV, n h =0.5×10 11 n_h=0.5\\times 10^{11} cm -2 ^{-2}, and the MoSe 2 _2 values of the exciton and hole effective masses: m X =1.15m 0 m_X=1.15m_0 and m=0.59m 0 m=0.59m_0, with m 0 m_0 the free electron mass .The experimental PL has been shifted horizontally to match the peaks of the virial expansion.Comparison with experiment.—Recently, the PL originating from a MoSe$_2$ monolayer was measured for the case of a hole doping (per valley) of $n_h\\simeq 0.5\\times 10^{11}$  cm$^{-2}$ and for lattice temperatures $T=5$ –50K [16], corresponding to fugacities in the range $z\\simeq 1$ –$0.1$ .", "Therefore, apart from the very lowest temperatures explored, the experiment was well within the regime of validity of the virial expansion.", "To compare our spectra calculated using Eq.", "(REF ), we apply a Lorentzian broadening of 1 meV, matching the experimental linewidth [16].", "We start by analyzing the distance between the peaks of the attractive and repulsive branches which, primarily due to the non-trivial shape of the attractive branch, does not correspond to $\\varepsilon _T$ even at very low doping.", "Figure REF (a) shows our theoretical result for two values of the trion binding energy, $\\varepsilon _T=22.5$ meV and 23.5 meV, and for a range of densities.", "Even though this is noticeably below the quoted experimental value of 25 meV [8], [16], we see that the virial expansion correctly reproduces the splitting between the peaks when we take $\\varepsilon _T=22.5$ meV.", "Thus, the fact that the attractive branch peak in PL does not correspond to the onset implies that the trion binding energy is likely to have been overestimated by as much as 10$\\%$ in previous works This is also consistent with Ref.", "which has shown that the extracted trion binding energy is sensitive to the shape of the trion peak.. 
We expect corrections to this result to be at most comparable to the Fermi energy [24] which for this experiment is 0.4 meV.", "Figure REF (b) shows the comparison of our results for the attractive branch PL with experiment, using the extracted $\\varepsilon _T$ .", "We see that the agreement is essentially perfect at high temperature, with small discrepancies at lower temperatures.", "Since our theory is fully analytic and contains no free parameters, this is a remarkable agreement.", "The remaining discrepancy could potentially be due to the temperature of the system being different from that of the crystal lattice at low $T$ .", "Connection to the trion picture.—Our results for the attractive branch can be straightforwardly generalized to an arbitrary exciton-electron potential $v_{\\bf q}$ .", "In this case, we obtain $\\mathcal {T}({\\bf q}_r;{\\bf q},\\omega )$ from the spectral representation of the two-particle Green's function close to the trion pole, which finally gives [24] att()zqe-q |qr|2 [P(rqr+T)2+rqr+T-i2 (+rqr+T)], where ${\\mathcal {P}}$ denotes the principal value, $\\eta _{{\\bf q}_r}$ is the trion wave function, and $\\epsilon _{r{\\bf q}_r}=q_r^2/2m_r$ .", "Indeed, for the case of contact interactions, we have $\\eta _{{\\bf q}_r}=\\sqrt{Z_T}/(\\varepsilon _T+\\epsilon _{r{\\bf q}_r})$  [24], which precisely yields Eq.", "(REF ).", "Thus, we can obtain the PL for the attractive branch simply from the trion wave function (a similar approach has been used to calculate absorption [60]).", "It turns out that previous trion theories of PL [30], [57], [16] correspond to the weakly interacting limit of our theory.", "Here, one assumes that the self-energy is sufficiently small such that the Dyson equation (REF ) can be expanded as $G_X(\\omega )\\simeq 1/\\omega +\\Sigma (\\omega )/\\omega ^2$ , which gives Patt()z  e(m-mTT)mX|2mr|+T||2(--T) , where in the last step we used Eq.", "(REF ).", "This precisely matches the result of Refs.", "[30], [57].", "Given the connection between the virial expansion and Fermi-polaron theory, we conclude that the trion picture results corresponds to a weak-coupling and high-temperature/low-doping limit of the Fermi-polaron picture, thus providing a formal unification of these two apparently disparate frameworks.", "Note that the weak-coupling assumption explicitly fails at the onset where the real part of the self-energy in Eq.", "(REF ) diverges, and hence the trion picture only correctly describes the shape of the attractive branch in the limit $E_F \\rightarrow 0$ .", "Likewise, the broadening of the repulsive branch depends on exciton-electron scattering states which are neglected within trion-based theories [60], [57].", "Concluding remarks.— In summary, we have presented a controlled virial expansion for the exciton-polaron problem, which we show corresponds to a thermally incoherent limit of Fermi-polaron theory where the attractive polaron quasiparticle no longer exists.", "Our theory has the advantage of being fully analytic, and it yields excellent agreement with experiment without the need for fitting parameters.", "Our approach is very generic and can potentially be applied to a broad range of systems, for instance emerging designer materials such as moiré superlattices where signatures of polaron physics have already been observed [61], [62], [63].", "We are grateful to Dmitry Efimkin for useful discussions and feedback on our manuscript, and we thank Alexey Chernikov for sharing the data of Ref. 
[16].", "BCM, MMP, and JL acknowledge support from the Australian Research Council Centre of Excellence in Future Low-Energy Electronics Technologies (CE170100039).", "MMP and JL are also supported through the Australian Research Council Future Fellowships FT200100619 and FT160100244, respectively.", "AT and FMM acknowledge financial support from the Ministerio de Ciencia e Innovación (MICINN), project No.", "AEI/10.13039/501100011033 (2DEnLight).", "FMM acknowledges financial support from the Proyecto Sinérgico CAM 2020 Y2020/TCS-6545 (NanoQuCo-CM)." ] ]
2212.05627
[ [ "A non-singular generalized entropy and its implications on bounce\n cosmology" ], [ "Abstract We propose a new five-parameter entropy function that proves to be singular-free during the entire cosmic evolution of the universe, and at the same time, also generalizes the Tsallis, Barrow, R\\'{e}nyi, Sharma-Mittal, Kaniadakis and Loop Quantum Gravity entropies for suitable limits of the parameters.", "In particular, all the above mentioned known entropies become singular (or diverge) when the Hubble parameter vanishes in course of the universe's evolution (for instance, in bounce cosmology at the instant of bounce), while the newly proposed entropy function seems to be singular-free even at $H = 0$ (where $H$ represents the Hubble parameter of the universe).", "Such non-singular behaviour of the entropy function becomes useful in describing bouncing scenario, in which case, the universe undergoes through $H = 0$ at the instant of bounce.", "It turns out that the entropic cosmology corresponding to the singular-free generalized entropy naturally allows symmetric bounce scenarios, such as -- exponential bounce and quasi-matter bounce scenario respectively.", "In the case of exponential bounce, the perturbation modes are in the super-Hubble domain at the distant past, while for the quasi-matter bounce, the perturbation modes generate in the deep sub-Hubble regime far before the bounce and hence resolves the ``horizon issue''.", "Based on this fact, we perform a detailed perturbation analysis for the quasi-matter bounce in the present context of singular-free entropic cosmology.", "Furthermore the entropic cosmology in the present context is shown to be equivalent with the generalized holographic cosmology where the holographic cut-offs are determined in terms of either future horizon and its derivative or the particle horizon and its derivative." 
], [ "Introduction", "Entropy, one of the important and fundamental quantities in physics, seems to depend on the physical system under consideration.", "For example, the entropy in classical thermodynamics is found to be proportional to the volume of the system, while the black hole entropy is proportional to the horizon area.", "This may indicate that we do not possibly understand the fundamental construction of entropy till now, or, there possibly exists a generalized form of entropy that is true irrespective of the choices of the system(s).", "The black body radiation of a black hole is regarded as one of the remarkable discoveries of theoretical physics, which is described by a finite temperature and by a Bekenstein-Hawking entropy function [1], [2] (see [3], [4] for extensive reviews).", "The distinctive property of the Bekenstein-Hawking entropy is that the entropy function is proportional to the horizon area of the black hole, unlike to the classical thermodynamics where the entropy is directly proportional ton the volume of the system under consideration.", "Such unusual behaviour of the black hole entropy results to the recently proposed various forms of entropy functions other than the Bekenstein-Hawking one.", "For instance, the Tsallis [5] and the Rényi [6] entropies have been proposed depending on the non-additive statistics of the system.", "Recently Barrow proposed an entropy function in [7], which encodes the fractal structure of a black hole that may originate from quantum grvaity effects.", "Beside these, the Sharma-Mittal entropy [8], the Kaniadakis entropy [9], [10] and the Loop Quantum Gravity entropy [11], [12] are some well known entropies proposed so far.", "All of these entropies reduce to the Bekenstein-Hawking entropy at certain limit, and moreover, they are monotonic increasing function with respect to the Bekenstein-Hawking entropy variable.", "The thermodynamics laws with Bekenstein-Hawking entropy are extended to the cosmology sector, by which, one can achieve the usual Friedmann equations.", "In accordance, we may argue that the cosmological field equations have a thermodynamic nature.", "In this regard, the holographic cosmology, initiated in [13], [14], [15], gained a lot of attention as it is intimately connected to entropy construction.", "The entropic cosmology corresponding to different entropy functions (for example the Tsallis entropic cosmology, the Rényi entropic cosmology etc.)", "are proved to be equivalent with generalized holographic cosmology where the respective cut-offs are determined in terms of either particle horizon and its derivative or the future horizon and its derivative [16], [17].", "The holographic cosmology stands to be one of the useful theories in describing the dark energy era of the universe, in particular, the dark energy density in a holographic dark energy (HDE) model is sourced from holographic principle, rather than put by hand in the Lagrangian [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49], [50], [51], [52].", "Beside the HDE models, the holographic cosmology is extended to explain the early inflationary phase of the universe where the primordial quantities turn out to be compatible with the recent observational data [53], [54], [55], [56], [57], [58], [59], [60], [61], [62], and more importantly, the holographic cosmology shows to be useful to unify the early inflation with 
the dark energy era of the universe [54], [63].", "From a different theoretical construction, the holographic energy density and the corresponding pressure help to violate the null energy condition during the early phase of the universe, which in turn provides a non-singular bouncing scenario from a holographic point of view [64], [65].", "All of such literatures reveal the immense interest on holographic as well as on entropic cosmology corresponding to different entropy constructions.", "Based on the above arguments, the question that naturally arises is following: Does there exist a generalized entropy function that generalizes the known entropy functions proposed so far, like the Tsallis, Rényi, Barrow, Sharma-Mittal, Kaniadakis and Loop Quantum Gravity entropies?", "Possible explanations of this question has been given in [53], [54], [66].", "In particular, some of our authors proposed a 6-parameter entropy function that generalizes all the aforementioned known entropies [53].", "However in [54], we proposed a different form of generalized entropy containing less parameters, particularly having four-parameters, which is also able to generalize the known entropies from the Tsallis to Loop Quantum Gravity entropies.", "In this regard, we give the following conjecture: “The minimum number of parameters required in a generalized entropy function that can generalize all the aforementioned entropies is equal to four”.", "Moreover, the generalized entropy with four parameters proved to be useful to explain the early inflation and the dark energy era of the universe in an unified manner.", "However at this stage, it deserves mentioning that all of these generalized entropies mentioned in [53], [54] become singular (or diverge) during the cosmic evolution of the universe, particularly when the Hubble parameter of the universe vanishes (for instance, in a bounce cosmology – when the Hubble parameter vanishes at the instant of the bounce).", "Such diverging behaviour also shows to all the known entropies, due to the fact that the Bekenstein-Hawking entropy (given by $S = \\frac{\\pi }{GH^2}$ ) itself diverges at $H = 0$ .", "Thus an immediate question is given by: Does there exist a singular-free generalized entropy that generalizes the Tsallis, Rényi, Barrow, Sharma-Mittal, Kaniadakis and Loop Quantum Gravity entropies, and at the same time, is also singular-free during the cosmological evolution of the universe (even at $H = 0$ where $H$ represents the Hubble parameter) ?", "If so, then what are the cosmological implications of such non-singular entropy function ?", "We will try to address these questions in the present work.", "Here we would like to mention that if such non-singular entropy exists, then it proves to be useful in describing the bounce scenario where the universe undergoes through $H = 0$ at the instant of the bounce, unlike to all the known entropies (including the generalized entropies proposed in [53], [54], [66]) which diverge at $H = 0$ and hence they are unacceptable in the context of bounce cosmology.", "Based on this argument, we will address the possible implications of a non-singular generalized entropy to bounce cosmology." 
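The divergence emphasized above is easy to see numerically: since the Bekenstein-Hawking variable S = pi/(G H^2) blows up as H -> 0, any of the known entropies built monotonically from it inherits the divergence at a bounce. The short sketch below evaluates a few of them in units with G = 1; the parameter values for delta, alpha and K are illustrative choices only, not taken from the text.

```python
import numpy as np

G = 1.0  # gravitational constant in illustrative units

def S_BH(H):
    """Bekenstein-Hawking entropy written in terms of the Hubble parameter."""
    return np.pi / (G * H**2)

def S_tsallis(H, delta=1.1):
    # Tsallis/Barrow-type power law of the Bekenstein-Hawking entropy
    return S_BH(H) ** delta

def S_renyi(H, alpha=0.1):
    return np.log(1.0 + alpha * S_BH(H)) / alpha

def S_kaniadakis(H, K=1e-4):
    return np.sinh(K * S_BH(H)) / K

for H in (1.0, 0.1, 1e-2, 1e-3):
    print(f"H={H:8.3g}  S_BH={S_BH(H):9.3e}  Tsallis={S_tsallis(H):9.3e}  "
          f"Renyi={S_renyi(H):9.3e}  Kaniadakis={S_kaniadakis(H):9.3e}")
```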
], [ "Search for a singuar-free generalized entropy", "In this section, we will propose a generalized entropy function which is singular-free and can lead to various known entropy functions proposed so far.", "Here it deserves mentioning that in one of our previous works [54], we have proposed a generalized four parameter entropy function given by, $S_\\mathrm {g}^{(s)}\\left[\\alpha _+,\\alpha _-,\\beta ,\\gamma \\right] = \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right] \\,,$ that converges to the known entropies (in particular, the Bekenstein-Hawking entropy, Tsallis entropy, Barrow entropy, Rényi entropy, Kaniadakis entropy, Sharma-Mittal entropy and the entropy in the context of Loop Quantum gravity) for suitable choices of the parameters.", "Moreover the entropic cosmology corresponds to the $S_\\mathrm {g}^{(s)}$ results to a viable unification of inflation to the dark energy epoch which are well consistent with the observational constraints, see [54].", "Despite these successes, the above entropy function $S_\\mathrm {g}^{(s)}$ seems to be plagued with singularity for certain cosmological evolution of the universe, in particular, in the context of bounce cosmology.", "The demonstration goes as follows: in the right hand side of Eq.", "(REF ), $S = A/\\left(4G\\right)$ represents the Bekenstein-Hawking entropy, where $A = 4\\pi r_h^2$ is the area of the horizon and $r_h$ is the horizon radius.", "Using $r_h = 1/H$ , with $H$ being the Hubble parameter of the universe, one can represent the Bekenstein-Hawking entropy as $S = \\pi /\\left(GH^2\\right)$ .", "In effect, the $S_\\mathrm {g}^{(s)}$ from Eq.", "(REF ) is equivalently written as, $S_\\mathrm {g}^{(s)}\\left[\\alpha _+,\\alpha _-,\\beta ,\\gamma \\right] = \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\pi \\alpha _+}{\\beta GH^2}\\right)^{\\beta }- \\left(1 + \\frac{\\pi \\alpha _-}{\\beta GH^2}\\right)^{-\\beta }\\right] \\,.$ Clearly in the context of bounce cosmology, $S_\\mathrm {g}^{(s)}$ becomes singular (or diverges) at the instant of bounce when the Hubble parameter of the universe vanishes.", "Therefore in a bounce scenario, the generalized entropy function shown in Eq.", "(REF ) is not physical, and thus, we need to search for a generalized entropy function which can lead to various known entropy functions and is non-singular for the entire cosmological evolution of the universe.", "We propose a new singular-free entropy function given by, $S_\\mathrm {g}\\left[\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right] = \\frac{1}{\\gamma }\\bigg [\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _+}{\\beta }~S\\right)\\right\\rbrace ^{\\beta }- \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _-}{\\beta }~S\\right)\\right\\rbrace ^{-\\beta }\\bigg ]~~,$ where $\\alpha _{\\pm }$ , $\\beta $ , $\\gamma $ and $\\epsilon $ are the parameters which are considered to be positive, and $S$ symbolizes the Bekenstein-Hawking entropy.", "In regard to the number of parameters, we propose a conjecture at the end of this section.", "First we demonstrate that the above entropy function remains finite, and thus is non-singular, during the whole cosmological evolution of a bouncing universe.", "Due to $S = \\pi /\\left(GH^2\\right)$ , the $S_\\mathrm {g}$ from Eq.", "(REF ) can be re-written as, $S_\\mathrm {g}\\left[\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right] 
=\\frac{1}{\\gamma }\\bigg [\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta }- \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta }\\bigg ]~~,$ which, due to the presence of $\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _{\\pm }}{\\beta GH^2}\\right)$ , is clearly finite at $H = 0$ .", "In particular, the $S_\\mathrm {g}$ takes the following form at the instant of bounce: $S_\\mathrm {g}\\left[\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right] =\\frac{1}{\\gamma }\\bigg [\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\right\\rbrace ^{\\beta }- \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\right\\rbrace ^{-\\beta }\\bigg ]~~.$ Having demonstrated the non-singular behaviour of the entropy function, we now show that $S_\\mathrm {g}$ of Eq.", "(REF ), for suitable choices of the parameters, reduces to various known entropies– i.e the Bekenstein-Hawking entropy, Tsallis entropy, Barrow entropy, Rényi entropy, Kaniadakis entropy, Sharma-Mittal entropy and the entropy in the context of Loop Quantum gravity.", "Such entropies are constructed from different viewpoints.", "For instance, the Tsallis entropy is given by $S_T = S^{\\delta }$ (with $\\delta $ being the Tsallis exponent and $S$ represents the Bekenstein-Hawking entropy) which is useful for the systems having long range interactions where the Boltzmann-Gibbs entropy is not applied [5].", "Clearly for $\\delta = 1$ , the Tsallis entropy converges to the Bekenstein-Hawking limit, however for $\\delta \\ne 1$ , it lacks of additivity.", "The Barrow entropy has almost the same form as the Tsallis one, however the motivation for introducing the Barrow entropy is different.", "In particular, the Barrow entropy is given by $S_\\mathrm {B} = \\left(\\frac{A}{A_\\mathrm {Pl}} \\right)^{1+\\Delta /2}$ where $A$ is the usual black hole horizon area, $A_\\mathrm {Pl} = 4G$ is the Planck area and $\\Delta $ is the Barrow exponent which measures the fractal features on the black hole structure that may generate from quantum gravitational effects [7].", "On other side, the Rényi entropy is of the form of $S_\\mathrm {R} = \\frac{1}{\\alpha } \\ln \\left( 1 + \\alpha S \\right)$ which has the Bekenstein-Hawking limit for $\\alpha \\rightarrow 0$ [6].", "The Rényi entropy was proposed as an index specifying the amount of information.", "The Sharma-Mittal entropy has been introduced as a possible combination of Tsallis and Rényi entropies [8], and is given by $S_{SM} = \\frac{1}{R}\\left[\\left(1 + \\delta ~S\\right)^{R/\\delta } - 1\\right]$ .", "The Kaniadakis entropy has the form : $S_K = \\frac{1}{K}\\sinh {\\left(KS\\right)}$ which can be regarded as a possible generalization of the Boltzmann-Gibbs entropy arising in relativistic statistical systems [9], [10].", "In the context of Loop Quantum Gravity, one may get the following entropy as, $S_q = \\frac{1}{\\left(1-q\\right)}\\left[\\mathrm {e}^{(1-q)\\Lambda (\\gamma _0)S} - 1\\right]$ where $\\gamma _0$ is known as the Barbero-Immirzi parameter and $q$ is the entropic index that quantifies the probability of frequent events [11], [12].", "For $\\epsilon \\rightarrow 0$ , $S_\\mathrm {g}$ tends to the following form, $S_\\mathrm {g} = \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right]~~,$ which, for $\\alpha _+ \\rightarrow \\infty $ and 
$\\alpha _- = 0$ , becomes $S_\\mathrm {g} = \\frac{1}{\\gamma }\\left(\\frac{\\alpha _+}{\\beta }\\right)^{\\beta }S^{\\beta } \\,.\\nonumber $ Further considering $\\gamma = \\left(\\alpha _+/\\beta \\right)^{\\beta }$ , the generalized entropy $S_\\mathrm {g}$ reduces to $S_\\mathrm {g} = S^{\\beta } \\,.$ This resembles to the Tsallis entropy [5] or to the Barrow entropy [7] with $\\beta = \\delta $ or $\\beta = 1 + \\Delta $ respectively.", "The limit $\\epsilon \\rightarrow 0$ results to the following form of $S_\\mathrm {g}$ as, $S_\\mathrm {g} = \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right]~~,$ which, for $\\epsilon \\rightarrow 0$ , $\\alpha _- = 0$ , $\\beta \\rightarrow 0$ and $\\frac{\\alpha _+}{\\beta } \\rightarrow \\mathrm {finite}$ , tends to, $S_\\mathrm {g} = \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta } - 1\\right]= \\frac{1}{\\gamma }\\left[\\exp {\\left\\lbrace \\beta \\ln {\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)}\\right\\rbrace } - 1\\right]\\approx \\frac{1}{\\left(\\gamma /\\beta \\right)} \\ln {\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)} \\,.$ This has the form of Rényi entropy [6] with the identifications $\\gamma = \\alpha _+$ and $\\frac{\\alpha _+}{\\beta } = \\alpha $ , in particular, $S_\\mathrm {g} = \\frac{1}{\\alpha }\\ln {\\left(1 + \\alpha ~S\\right)} \\,.$ For $\\epsilon \\rightarrow 0$ and $\\alpha _- \\rightarrow 0$ , the non-singular generalized entropy converges to the following form, $S_\\mathrm {g}&=&\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\gamma }\\bigg [\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _+}{\\beta }~S\\right)\\right\\rbrace ^{\\beta }- \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _-}{\\beta }~S\\right)\\right\\rbrace ^{-\\beta }\\bigg ]= \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right]\\nonumber \\\\&\\longrightarrow &\\lim _{\\alpha _- \\rightarrow 0} \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right]= \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta } - 1\\right]$ Therefore with $\\gamma = R$ , $\\alpha _+ = R$ and $\\beta = R/\\delta $ , the above form of $S_\\mathrm {g}$ becomes similar to the Sharma-Mittal entropy [8].", "For $\\epsilon \\rightarrow 0$ , $\\beta \\rightarrow \\infty $ , $\\alpha _+ = \\alpha _- = \\frac{\\gamma }{2} = K$ , one may write Eq.", "(REF ) as, $S_\\mathrm {g}&=&\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\gamma }\\bigg [\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _+}{\\beta }~S\\right)\\right\\rbrace ^{\\beta }- \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _-}{\\beta }~S\\right)\\right\\rbrace ^{-\\beta }\\bigg ]= \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right]\\nonumber \\\\&\\longrightarrow &\\frac{1}{2K}\\lim _{\\beta \\rightarrow \\infty }\\left[\\left(1 + \\frac{K}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{K}{\\beta }~S\\right)^{-\\beta }\\right]= \\frac{1}{2K}\\left[\\mathrm {e}^{KS} - \\mathrm {e}^{-KS}\\right] = \\frac{1}{K}\\sinh 
{\\left(KS\\right)}$ that is similar to the Kaniadakis entropy [9], [10].", "Finally, with $\\epsilon \\rightarrow 0$ , $\\alpha _- \\rightarrow 0$ , $\\beta \\rightarrow \\infty $ and $\\gamma = \\alpha _+ = (1-q)$ , Eq.", "(REF ) yields to, $S_\\mathrm {g}&=&\\lim _{\\epsilon \\rightarrow 0} \\frac{1}{\\gamma }\\bigg [\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _+}{\\beta }~S\\right)\\right\\rbrace ^{\\beta }- \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\alpha _-}{\\beta }~S\\right)\\right\\rbrace ^{-\\beta }\\bigg ]= \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right]\\nonumber \\\\&\\longrightarrow &\\lim _{\\alpha _- \\rightarrow 0} \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta }- \\left(1 + \\frac{\\alpha _-}{\\beta }~S\\right)^{-\\beta }\\right]= \\frac{1}{\\gamma }\\left[\\left(1 + \\frac{\\alpha _+}{\\beta }~S\\right)^{\\beta } - 1\\right]\\nonumber \\\\&\\longrightarrow &\\frac{1}{(1-q)}\\lim _{\\beta \\rightarrow \\infty }\\left[\\left(1 + \\frac{(1-q)}{\\beta }~S\\right)^{\\beta } - 1\\right]= \\frac{1}{(1-q)}\\left[\\mathrm {e}^{(1-q)S} - 1\\right]$ which resembles with the Loop Quantum Gravity entropy [11], [12].", "Furthermore, the generalized entropy function in Eq.", "(REF ) shares the following properties: (1) $S_\\mathrm {g}\\left[ \\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right]$ tends to zero for $S \\rightarrow 0$ , i.e.", "the non-singular generalized entropy satisfies the generalized third law of thermodynamics.", "(2) Due to the fact that the hyperbolic terms present in the expression of $S_\\mathrm {g}$ increase with $S$ , the generalized entropy $S_\\mathrm {g}\\left[ \\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right]$ turns out to be a monotonically increasing function of $S$ .", "(3) $S_\\mathrm {g}\\left[ \\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right]$ proves to converge to the Bekenstein-Hawking entropy at certain limit of the parameters.", "In particular, by taking $\\epsilon \\rightarrow 0$ and then $\\alpha _+ \\rightarrow \\infty $ , $\\alpha _- = 0$ , $\\gamma = \\left(\\alpha _+/\\beta \\right)^{\\beta }$ and $\\beta = 1$ , one can show that the generalized entropy function in Eq.", "(REF ) becomes similar to the Bekenstein-Hawking entropy.", "At this stage it deserves mentioning that we have proposed two different generalized entropy functions in Eq.", "(REF ) and in Eq.", "(REF ) respectively – the former entropy function contains four independent parameters while the latter one has five parameters.", "Furthermore both the entropies are able to generalize the known entropies mentioned from Eq.", "(REF ) to Eq.", "(REF ) for suitable choices of the respective parameters.", "However as mentioned earlier that the entropy with four parameters becomes singular at $H = 0$ (for instance, in a bounce scenario when the Hubble parameter vanishes at the instant of bounce), while the entropy function having five parameters proves to be singular-free during the whole cosmological evolution of the universe.", "Based on these findings, we give the following conjecture in regard to the non-singular generalized entropy function: A Conjecture: “The minimum number of parameters required in a generalized entropy function that can generalize all the known entropies mentioned from Eq.", "(REF ) to Eq.", "(REF ), and at the same time, is also singular-free during 
the universe's evolution – is equal to five”." ], [ "Modified cosmology corresponding to the generalized entropy", "In this section, we will address the cosmological field equations corresponding to the non-singular entropy $S_\mathrm {g}$ shown in Eq.", "(REF ).", "As we will show, the $S_\mathrm {g}$ introduces an effective energy density and an effective pressure into the Friedmann-Lemaître-Robertson-Walker (FLRW) equations.", "The FLRW metric with flat spatial part will serve our purpose in the present context, i.e.", "$ds^2=-dt^2+a^2(t)\sum _{i=1,2,3} \left(dx^i\right)^2 \, ,$ where $t$ and $a(t)$ are the cosmic time (or proper time for a comoving observer) and the scale factor of the universe respectively.", "The cosmological horizon has the radius $r_\mathrm {H}=\frac{1}{H}\, ,$ where $H = \dot{a}/a$ is the Hubble parameter of the universe.", "The amount of entropy within the cosmological horizon follows the Bekenstein-Hawking relation [67].", "Moreover, the flux of the energy $E$ , or equivalently the heat $Q$ , within the cosmological horizon turns out to be $dQ = - dE = -\frac{4\pi }{3} r_\mathrm {H}^3 \dot{\rho }dt = -\frac{4\pi }{3H^3} \dot{\rho }~dt= \frac{4\pi }{H^2} \left( \rho + p \right)~dt \, ,$ where $\rho $ is the energy density of the normal matter under consideration, and we use the conservation law $0 = \dot{\rho }+ 3 H \left( \rho + p \right)$ in the last equality.", "Then the Hawking temperature [68] $T = \frac{1}{2\pi r_\mathrm {H}} = \frac{H}{2\pi }\, ,$ along with the first law of thermodynamics $TdS = dQ$ , results in $\dot{H} = - 4\pi G \left( \rho + p \right)$ which is identical to the spatial part of the usual Friedmann equation.", "Integrating both sides of this equation (with respect to time) leads to the temporal part of the FRW equation, $H^2 = \frac{8\pi G}{3} \rho + \frac{\Lambda }{3} \, ,$ where $\Lambda $ is the integration constant, and acts as a cosmological constant.", "We now apply the above formalism to the non-singular generalized entropy function $S_\mathrm {g}$ , rather than to the Bekenstein-Hawking entropy.", "In effect, the first law of thermodynamics gives $TdS_\mathrm {g} = dQ~~.$ Due to the fact that the generalized entropy is a function of the Bekenstein-Hawking entropy, i.e.", "$S_\mathrm {g} = S_\mathrm {g}(S)$ where $S$ is the Bekenstein-Hawking entropy, we may equivalently write Eq.", "(REF ) as $T\left(\frac{\partial S_\mathrm {g}}{\partial S}\right)dS = dQ$ .", "By using Eq.", "(REF ) and $S = \frac{\pi }{GH^2}$ , the thermodynamic equation leads to the following evolution of the Hubble parameter, $\dot{H}\left(\frac{\partial S_\mathrm {g}}{\partial S}\right) = -4\pi G\left(\rho + p\right) \,,$ which, due to the explicit form of $S_\mathrm {g}$ shown in Eq.", "(REF ), takes the following form, $\frac{1}{\gamma }&\bigg [&\alpha _{+}~\mathrm {sech}^2\left(\frac{\epsilon \pi \alpha _+}{\beta GH^2}\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \pi \alpha _+}{\beta GH^2}\right)\right\rbrace ^{\beta -1}\nonumber \\&+&\alpha _{-}~\mathrm {sech}^2\left(\frac{\epsilon \pi \alpha _-}{\beta GH^2}\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \pi \alpha _-}{\beta GH^2}\right)\right\rbrace ^{-\beta -1}\bigg ]\dot{H} = -4\pi G\left(\rho + p\right)~~.$ Owing to the conservation equation of the matter fields, in particular $\dot{\rho } + 3H\left(\rho + 
p\\right) = 0$ , the above expression becomes, $\\frac{2}{\\gamma }&\\bigg [&\\alpha _{+}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}\\nonumber \\\\&+&\\alpha _{-}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\bigg ]H~dH = \\left(\\frac{8\\pi G}{3}\\right)d\\rho \\,,\\nonumber $ which, by integration on both sides, $f\\left(H;~\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right) = \\frac{8\\pi G\\rho }{3} + \\frac{\\Lambda }{3} \\,.$ Here the integration constant is symbolized by $\\Lambda $ and the function $f$ has the following form: $f\\left(H;~\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right) =\\frac{2}{\\gamma }\\int &\\bigg [&\\alpha _{+}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}\\nonumber \\\\&+&\\alpha _{-}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\bigg ]H~dH~~.$ In regard to the functional form of $f\\left(H;~\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right)$ , we would like to mention that the integration in Eq.", "(REF ) may not be performed in a closed form, unless certain conditions are imposed.", "For example, we consider $GH^2 \\ll 1$ which is, in fact, valid during the universe's evolution (i.e the Hubble parameter is less than the Planck scale).", "With $GH^2 \\ll 1$ , Eq.", "(REF ) becomes, $f\\left(H;~\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right) =\\frac{8}{\\gamma }\\int \\left\\lbrace \\alpha _{+}\\left(1+\\frac{1}{\\epsilon }\\right)^{\\beta -1}\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)+ \\alpha _{-}\\left(1+\\frac{1}{\\epsilon }\\right)^{-\\beta -1}\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace H~dH~~,$ which can be integrated, and consequently, we get the following form of $f$ : $f\\left(H;~\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right) =\\frac{4}{\\gamma }H^2&\\bigg \\lbrace &\\alpha _{+}\\left(1+\\frac{1}{\\epsilon }\\right)^{\\beta -1}\\left[\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)+ \\left(\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\mathrm {Ei}\\left(-\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right]\\nonumber \\\\&+&\\alpha _{-}\\left(1+\\frac{1}{\\epsilon }\\right)^{-\\beta -1}\\left[\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)+ \\left(\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\mathrm {Ei}\\left(-\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right]\\bigg \\rbrace ~~.$ Therefore as a whole, the general form of $f\\left(H;~\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right)$ is given in Eq.", "(REF ).", "However with the consideration of $GH^2 \\ll 1$ , the integration of Eq.", "(REF ) is performed and hence a closed form of $f\\left(H;~\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right)$ is obtained in Eq.", "(REF ).", "Eq.", "(REF ) and Eq.", "(REF ) are the 
cosmological field equations corresponding to the generalized entropy $S_\mathrm {g}$ .", "The presence of the entropy $S_\mathrm {g}$ effectively produces an energy density and a pressure in the modified Friedmann equations.", "To be more explicit, we now define the following energy density and pressure, $\rho _\mathrm {g} = \frac{3}{8\pi G}\left\lbrace H^2 - f\left(H;~\alpha _{\pm },\beta ,\gamma ,\epsilon \right) \right\rbrace \,,$ and $p_\mathrm {g} = \frac{\dot{H}}{4\pi G}\bigg \lbrace \frac{1}{\gamma }&\bigg [&\alpha _{+}~\mathrm {sech}^2\left(\frac{\epsilon \pi \alpha _+}{\beta GH^2}\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \pi \alpha _+}{\beta GH^2}\right)\right\rbrace ^{\beta -1}\nonumber \\&+&\alpha _{-}~\mathrm {sech}^2\left(\frac{\epsilon \pi \alpha _-}{\beta GH^2}\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \pi \alpha _-}{\beta GH^2}\right)\right\rbrace ^{-\beta -1}\bigg ] - 1\bigg \rbrace - \rho _\mathrm {g} \,,$ respectively.", "As a consequence, Eq.", "(REF ) and Eq.", "(REF ) can be equivalently expressed as, $\dot{H}=&\,-4\pi G\left[\left(\rho + \rho _\mathrm {g}\right) + \left(p + p_\mathrm {g}\right)\right] \,,\nonumber \\H^2=&\, \frac{8\pi G}{3}\left(\rho + \rho _\mathrm {g}\right) + \frac{\Lambda }{3} \,.$ This demonstrates that Eq.", "(REF ) and Eq.", "(REF ) are similar to the usual Friedmann equations, with the total energy density and pressure given by $\rho _\mathrm {T} = \rho + \rho _\mathrm {g}$ and $p_\mathrm {T} = p + p_\mathrm {g}$ respectively.", "Therefore $\rho _\mathrm {g}$ and $p_\mathrm {g}$ denote the energy density and pressure produced by the non-singular generalized entropy $S_\mathrm {g}$ itself.", "The motivation of the present paper is to investigate the possible implications of $\rho _\mathrm {g}$ and $p_\mathrm {g}$ for bounce cosmology.", "In particular, we will show that the modified Friedmann Eq.", "(REF ) naturally allows a non-singular universe.", "However, before moving to such a bounce scenario, we will show that the entropic cosmology corresponding to $S_\mathrm {g}$ can be regarded as equivalent to the generalized holographic cosmology with a suitable cut-off." 
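As a sanity check of the closed-form $f\left(H;~\alpha _{\pm },\beta ,\gamma ,\epsilon \right)$ written above in terms of the exponential integral $\mathrm{Ei}$, the following minimal sketch compares it against a direct numerical quadrature of the approximated integrand. The parameter values and the range of $H$ are arbitrary illustrative assumptions chosen only to exercise the antiderivative identity (the physical approximation itself requires $GH^2 \ll 1$); none of them are taken from the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi   # exponential integral Ei(x)

# Hypothetical parameter choices, in units where G = 1
G, a_p, a_m, beta, gamma, eps = 1.0, 1.0, 0.5, 2.0, 1.0, 0.3

def integrand(H):
    # approximated integrand of f(H) used in the GH^2 << 1 regime
    t_p = a_p * (1 + 1/eps)**(beta - 1) * np.exp(-2*eps*np.pi*a_p/(beta*G*H**2))
    t_m = a_m * (1 + 1/eps)**(-beta - 1) * np.exp(-2*eps*np.pi*a_m/(beta*G*H**2))
    return (8.0/gamma) * (t_p + t_m) * H

def f_closed(H):
    # closed form: (4/gamma) H^2 * sum_i a_i (1+1/eps)^(p_i) [e^{-x_i} + x_i Ei(-x_i)]
    total = 0.0
    for a, p in [(a_p, beta - 1), (a_m, -beta - 1)]:
        x = 2*eps*np.pi*a/(beta*G*H**2)
        total += a * (1 + 1/eps)**p * (np.exp(-x) + x*expi(-x))
    return (4.0/gamma) * H**2 * total

H1, H2 = 2.0, 5.0
numeric, _ = quad(integrand, H1, H2)
print(numeric, f_closed(H2) - f_closed(H1))   # the two values should coincide
```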
], [ "Equivalence between the entropic cosmology and the generalized holographic cosmology", "The holographic energy density, in the realm of holographic principle, comes as, $\\rho _\\mathrm {hol}=\\frac{3c^2}{\\kappa ^2 L^2_\\mathrm {IR}}\\, ,$ where $L_\\mathrm {IR}$ is known as the infrared cut-off, $c$ is a free parameter and $\\kappa ^2 = 8\\pi G$ with $G$ being the gravitational constant.", "Here the particle horizon (symbolized by $L_\\mathrm {p}$ ) or the future event horizon (symbolized by $L_\\mathrm {f}$ ) are defined as, $L_\\mathrm {p}\\equiv a \\int _0^t\\frac{dt}{a} \\,,\\quad L_\\mathrm {f}\\equiv a \\int _t^\\infty \\frac{dt}{a}\\, .$ A differentiation (with respect to $t$ ) of both sides of the above expressions yields the Hubble parameter in terms of particle horizon and its derivative or in terms of future horizon and its derivative as, $H \\left( L_\\mathrm {p} , \\dot{L}_\\mathrm {p} \\right) = \\frac{\\dot{L}_\\mathrm {p}}{L_\\mathrm {p}} - \\frac{1}{L_\\mathrm {p}}\\, ,\\quad H(L_\\mathrm {f} , \\dot{L}_\\mathrm {f}) = \\frac{\\dot{L}_\\mathrm {f}}{L_\\mathrm {f}} + \\frac{1}{L_\\mathrm {f}} \\, .$ In regard to the holographic cut-off, a general form was proposed in [22] as, $L_\\mathrm {IR} = L_\\mathrm {IR} \\left( L_\\mathrm {p}, \\dot{L}_\\mathrm {p},\\ddot{L}_\\mathrm {p}, \\cdots , L_\\mathrm {f}, \\dot{L}_\\mathrm {f}, \\cdots , a\\right)\\, .$ It may be observed that $L_\\mathrm {IR}$ depends on $L_\\mathrm {p}$ , $L_\\mathrm {f}$ and their derivatives, and the scale factor.", "The other dependency of $L_\\mathrm {IR}$ , particularly on the Ricci scalar and its derivatives, are embedded by either $L_\\mathrm {p}$ , $\\dot{L}_\\mathrm {p}$ or $L_\\mathrm {f}$ , $\\dot{L}_\\mathrm {f}$ via Eq.", "(REF ).", "Such a generalized cutoff may correspond to a general covariant gravity model, $S = \\int d^4 \\sqrt{-g} F \\left( R,R_{\\mu \\nu } R^{\\mu \\nu },R_{\\mu \\nu \\rho \\sigma }R^{\\mu \\nu \\rho \\sigma }, \\Box R, \\Box ^{-1} R,\\nabla _\\mu R \\nabla ^\\mu R, \\cdots \\right) \\, .$ Here it deserves mentioning that all the HDE models proposed so far (for example the Tsallis HDE or the Barrow HDE etc.)", "are shown to be different candidates of generalized holographic cosmology, see [16], [17].", "In this section, we will examine whether the entropic cosmology corresponds to the present entropy function $S_\\mathrm {g}$ is equivalent to the generalized holographic cosmology with specific cut-offs.", "Using Eq.", "(REF ) and Eq.", "(REF ), we may argue that the entropic energy density can be thought to be equivalent with the generalized holographic energy density, where the equivalent holographic cutoff $L_\\mathrm {g}$ depends on either the particle horizon and its derivative or the future horizon and its derivative.", "In the former case, $L_\\mathrm {g}$ is given by, $\\frac{3c^2}{\\kappa ^2 L^2_\\mathrm {g}}= \\frac{3}{8\\pi G}\\left\\lbrace \\left( \\frac{\\dot{L}_\\mathrm {p}}{L_\\mathrm {p}} - \\frac{1}{L_\\mathrm {p}} \\right)^2- f_1\\left(L_\\mathrm {p},\\dot{L}_\\mathrm {p}\\right)\\right\\rbrace ~~,$ in terms of $L_\\mathrm {p}$ , $\\dot{L}_\\mathrm {p}$ .", "Here $f_1\\left(L_\\mathrm {p},\\dot{L}_\\mathrm {p} \\right)$ has the following form: $f_1\\left(L_\\mathrm {p},\\dot{L}_\\mathrm {p} \\right)= \\frac{2}{\\gamma }\\int &\\bigg [&\\alpha _{+}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right\\rbrace 
^{\\beta -1}\\nonumber \\\\&+&\\alpha _{-}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\bigg ]H~dH\\bigg |_{H = \\frac{\\dot{L}_\\mathrm {p}}{L_\\mathrm {p}} - \\frac{1}{L_\\mathrm {p}}}~~.$ Similarly, $L_\\mathrm {g}$ in terms of the future horizon and its derivative becomes, $\\frac{3c^2}{\\kappa ^2 L^2_\\mathrm {g}}= \\frac{3}{8\\pi G}\\left\\lbrace \\left( \\frac{\\dot{L}_\\mathrm {f}}{L_\\mathrm {f}} + \\frac{1}{L_\\mathrm {f}} \\right)^2- f_2\\left(L_\\mathrm {f},\\dot{L}_\\mathrm {f}\\right)\\right\\rbrace ~~,$ with $f_2\\left(L_\\mathrm {f},\\dot{L}_\\mathrm {f} \\right)$ is given by, $f_2\\left(L_\\mathrm {p},\\dot{L}_\\mathrm {p} \\right)= \\frac{2}{\\gamma }\\int &\\bigg [&\\alpha _{+}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}\\nonumber \\\\&+&\\alpha _{-}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\bigg ]H~dH\\bigg |_{H = \\frac{\\dot{L}_\\mathrm {f}}{L_\\mathrm {f}} + \\frac{1}{L_\\mathrm {f}}}~~.$ With the condition $GH^2 \\ll 1$ along with Eq.", "(REF ), the integral in Eq.", "(REF ) (or in Eq.", "(REF )) is performed, and thus the $L_\\mathrm {g}$ can be achieved in a closed form as, $\\frac{3c^2}{\\kappa ^2 L^2_\\mathrm {g}}&=&\\frac{3}{8\\pi G}\\left( \\frac{\\dot{L}_\\mathrm {p}}{L_\\mathrm {p}} - \\frac{1}{L_\\mathrm {p}} \\right)^2\\Bigg [1 - \\frac{4}{\\gamma }\\bigg \\lbrace \\alpha _{+}\\left(1+\\frac{1}{\\epsilon }\\right)^{\\beta -1}\\left(\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)+ \\left(\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\mathrm {Ei}\\left(-\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right)\\nonumber \\\\&+&\\alpha _{-}\\left(1+\\frac{1}{\\epsilon }\\right)^{-\\beta -1}\\left(\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)+ \\left(\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\mathrm {Ei}\\left(-\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right)\\bigg \\rbrace \\Bigg ]\\bigg |_{H = \\frac{\\dot{L}_\\mathrm {p}}{L_\\mathrm {p}} - \\frac{1}{L_\\mathrm {p}}}$ in terms of $L_\\mathrm {p}$ and $\\dot{L}_\\mathrm {p}$ , or, $\\frac{3c^2}{\\kappa ^2 L^2_\\mathrm {g}}&=&\\frac{3}{8\\pi G}\\left( \\frac{\\dot{L}_\\mathrm {f}}{L_\\mathrm {f}} + \\frac{1}{L_\\mathrm {f}} \\right)^2\\Bigg [1 - \\frac{4}{\\gamma }\\bigg \\lbrace \\alpha _{+}\\left(1+\\frac{1}{\\epsilon }\\right)^{\\beta -1}\\left(\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)+ \\left(\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\mathrm {Ei}\\left(-\\frac{2\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right)\\nonumber \\\\&+&\\alpha _{-}\\left(1+\\frac{1}{\\epsilon }\\right)^{-\\beta -1}\\left(\\mathrm {exp}\\left(-\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)+ \\left(\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\mathrm {Ei}\\left(-\\frac{2\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right)\\bigg \\rbrace \\Bigg ]\\bigg |_{H = \\frac{\\dot{L}_\\mathrm {f}}{L_\\mathrm {f}} - \\frac{1}{L_\\mathrm {f}}}$ in terms 
of $L_\\mathrm {f}$ and $\\dot{L}_\\mathrm {f}$ .", "Wer now intend to determine the equation of state (EoS) parameter of $\\rho _\\mathrm {hol} = 3c^2/\\left(\\kappa ^2L_\\mathrm {g}^2\\right)$ , i.e for the holographic energy density with the cut-off given by $L_\\mathrm {g}$ .", "In effect of the conservation relation of $\\rho _\\mathrm {hol}$ , one may write the corresponding EoS parameter ($\\Omega _\\mathrm {hol}^{(g)}$ where the superscript '$\\mathrm {g}$ ' denotes that the EoS parameter corresponds to the cut-off $L_\\mathrm {g}$ ) as, $\\Omega _\\mathrm {hol}^{(g)} = -1 - \\left(\\frac{2}{3HL_\\mathrm {g}}\\right)\\frac{dL_\\mathrm {g}}{dt} \\,,$ where $L_\\mathrm {g}$ is shown in Eq.", "(REF ) (or in Eq.", "(REF )).", "Owing to Eq.", "(REF ), the above form of $\\Omega _\\mathrm {hol}^{(g)}$ turns out to be equivalent with $\\omega _\\mathrm {g} = p_\\mathrm {g}/\\rho _\\mathrm {g}$ , i.e.", "the EoS parameter corresponds to the holographic energy density is equivalent with that of corresponds to the entropic energy density.", "In particular, $\\Omega _\\mathrm {hol}^{(g)} \\equiv \\omega _{g} \\,.$ Therefore we may argue that the entropic cosmology corresponds to the non-singular entropy $S_\\mathrm {g}$ can be thought as a candidate of the generalized holographic family where the corresponding holographic cut-off is represented in terms of either $L_\\mathrm {p}$ and $\\dot{L}_\\mathrm {p}$ (see Eq.", "(REF )) or in terms of $L_\\mathrm {f}$ and $\\dot{L}_\\mathrm {f}$ (see Eq.", "(REF ))." ], [ "Generalized entropy on bounce cosmology", "For the first time, we provide a non-singular generalized entropy ($S_\\mathrm {g}$ ), in particular, all the known entropies proposed so far (like Tsallis, Barrow, Rényi, Sharma-Mittal, Kaniadakis and Loop Quantum Gravity entropies) become singular (or diverge) when the Hubble parameter vanishes during the universe's evolution (for instance, in bounce cosmology at the instant of bounce), unlike to the $S_\\mathrm {g}[\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon ]$ which proves to be singular-free at $H = 0$ .", "Such $non-singular$ behaviour of the proposed entropy function is useful in describing bouncing scenario, in which case, the universe undergoes through $H = 0$ at the instant of bounce.", "In this section, we will address the implications of the generalized entropy $S_\\mathrm {g}$ on non-singular bounce cosmology, in particular, we will investigate whether the entropic energy density can trigger a viable bounce during the early stage of the universe that is consistent with the observational constraints.", "For this purpose, we take the matter field and the cosmological constant to be absent, i.e., $\\rho = p = \\Lambda = 0$ .", "In effect, Eq.", "(REF ) and Eq.", "(REF ) becomes, $\\frac{1}{\\gamma }&\\bigg [&\\alpha _{+}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}\\nonumber \\\\&+&\\alpha _{-}~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\bigg ]\\dot{H} = 0~~.$ The parameters $\\left(\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon \\right)$ are positive, and thus the solution of the above equation is given by: $\\dot{H} = 0$ or equivalently $H=\\mathrm {constant}$ .", "Therefore the cosmology 
corresponding to the generalized entropy $S_\mathrm {g}[\alpha _{\pm },\beta ,\gamma ,\epsilon ]$ results in a constant Hubble parameter of the universe.", "Here we would like to mention that the emergence of a constant Hubble parameter is a generic property for all the known entropy functions, like the Tsallis, the Rényi, the Kaniadakis entropy etc.", "Clearly $H=\mathrm {constant}$ does not lead to the correct evolution of the universe.", "Thus, in order to have an acceptable cosmological evolution in the present context, we consider the parameters of $S_\mathrm {g}[\alpha _{\pm },\beta ,\gamma ,\epsilon ]$ to vary with time (see [54], [45], where entropic cosmology with $varying$ exponents was studied).", "The running behavior of such parameters may be motivated by quantum gravity; particularly in the case of gravity, if the space-time fluctuates at high energy scales, the degrees of freedom may increase.", "On the other hand, if gravity becomes a topological theory, the degrees of freedom may decrease.", "In particular, we consider the parameter $\gamma $ to vary with time, while all the other parameters remain fixed, i.e.", "$\gamma = \gamma (N)~~,$ with $N$ being the e-fold number of the universe.", "In such a scenario where $\gamma (N)$ varies with time, the Friedmann equation corresponding to $S_\mathrm {g}[\alpha _{\pm },\beta ,\gamma ,\epsilon ]$ gets modified compared to Eq.", "(REF ), and is given by: $\left(\frac{2\pi }{G}\right)\left(\frac{\partial S_\mathrm {g}}{\partial S}\right)\frac{H^{\prime }(N)}{H^3} =\left(\frac{\partial S_\mathrm {g}}{\partial \gamma }\right)\gamma ^{\prime }(N)~~,$ where the overprime denotes the derivative with respect to $N$ .", "With the explicit form of $S_\mathrm {g}$ , Eq.", "(REF ) takes the following form, $&-&\left(\frac{2\pi }{G}\right)\left(\frac{H^{\prime }(N)}{H^3}\right)\times \nonumber \\&\Bigg [&\frac{\alpha _{+}~\mathrm {sech}^2\left(\frac{\epsilon \alpha _+}{\beta }S\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \alpha _+}{\beta }S\right)\right\rbrace ^{\beta -1}+ \alpha _{-}~\mathrm {sech}^2\left(\frac{\epsilon \alpha _-}{\beta }S\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \alpha _-}{\beta }S\right)\right\rbrace ^{-\beta -1}}{\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \alpha _+}{\beta }S\right)\right\rbrace ^{\beta } -\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \alpha _-}{\beta }S\right)\right\rbrace ^{-\beta }}\Bigg ] = \frac{\gamma ^{\prime }(N)}{\gamma (N)}~.$ The above equation clearly depicts that, due to $\gamma ^{\prime }(N) \ne 0$ , the Hubble parameter is not a constant in this context, and thus it may lead to a viable non-singular bounce.", "Due to $S = \pi /(GH^2)$ , Eq.", "(REF ) turns out to be, $\left[\frac{\alpha _{+}~\mathrm {sech}^2\left(\frac{\epsilon \alpha _+}{\beta }S\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \alpha _+}{\beta }S\right)\right\rbrace ^{\beta -1}+ \alpha _{-}~\mathrm {sech}^2\left(\frac{\epsilon \alpha _-}{\beta }S\right)\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \alpha _-}{\beta }S\right)\right\rbrace ^{-\beta -1}}{\left\lbrace 1 + \frac{1}{\epsilon }\tanh \left(\frac{\epsilon \alpha _+}{\beta }S\right)\right\rbrace ^{\beta } -\left\lbrace 1 + \frac{1}{\epsilon }\tanh 
\\left(\\frac{\\epsilon \\alpha _-}{\\beta }S\\right)\\right\\rbrace ^{-\\beta }}\\right]dS = \\frac{\\gamma ^{\\prime }(N)}{\\gamma (N)}dN$ which can be integrated to get, $\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _+}{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta } -\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha _-}{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta } = \\gamma (N)~~.$ The above equation provides the Hubble parameter in terms of e-fold number, i.e.", "$H = H(N)$ , for a suitable form of $\\gamma (N)$ .", "In order to extract an explicit form of the Hubble parameter from Eq.", "(REF ), we take $\\alpha _+ = \\alpha _- = \\alpha $ (say) without losing any generality.", "In effect, Eq.", "(REF ) yields $H=H(N)$ as, $\\tanh {\\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)} = \\left\\lbrace \\frac{\\gamma (N) + \\sqrt{\\gamma ^2(N) + 4}}{2}\\right\\rbrace ^{1/\\beta } - 1~~.$ Due to the appearance of quadratic power of $H$ , Eq.", "(REF ) allows a positive branch as well as a negative branch of the Hubble parameter.", "This leads to a natural possibility of symmetric bounce in the present context of singular free generalized entropic cosmology.", "Moreover Eq.", "(REF ) also demonstrates that the explicit evolution of $H(N)$ does depend on the form of $\\gamma (N)$ .", "In the following, we will consider two cases where we will determine the form of $\\gamma (N)$ such that it gives two different symmetric bounce scenarios respectively.", "However before examining the possibility of bounce scenarios, here we provide the effective energy density ($\\rho _\\mathrm {eff}$ ) and the effective pressure ($p_\\mathrm {eff}$ ) sourced from the $S_\\mathrm {g}$ where the parameter $\\gamma $ varies with the e-folding number.", "In particular, Eq.", "(REF ) and Eq.", "(REF ) immediately lead to the following forms of $\\rho _\\mathrm {eff}$ and $p_\\mathrm {eff}$ as, $\\rho _\\mathrm {eff}&=&\\left(\\frac{3\\epsilon \\alpha }{4\\beta G^2}\\right)\\left[\\ln {\\left\\lbrace \\frac{1}{2\\left(\\frac{2}{\\gamma (N) + \\sqrt{\\gamma ^2(N) + 4}}\\right)^{1/\\beta } - 1}\\right\\rbrace }\\right]^{-1}~~,\\nonumber \\\\p_\\mathrm {eff} + \\rho _\\mathrm {eff}&=&\\left(\\frac{\\gamma ^{\\prime }(N)}{8\\pi ^2\\gamma (N)}\\right)\\left[\\frac{H^4\\left[\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta } -\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta }\\right]}{\\alpha ~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\left[\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}+ \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\right]}\\right]\\nonumber \\\\$ respectively.", "These expressions will be useful later." 
], [ "Possibility for an exponential bounce", "Here the scale factor is taken as, $a(t) = \\mathrm {exp}\\left(a_0t^2\\right)$ which results to a symmetric bounce at $t = 0$ .", "Here $a_0$ is a constant having mass dimension [+2] – this constant is related with the entropic parameters of $S_\\mathrm {g}$ and thus, without losing any generality, we take $a_0 = \\frac{\\epsilon \\pi \\alpha }{4G\\beta }$ .", "Hence the scale factor takes the following form: $a(t) = \\mathrm {exp}\\left(\\frac{\\epsilon \\pi \\alpha }{4G\\beta }~t^2\\right)~~.$ Consequently the Hubble parameter in terms of e-fold number comes as, $H(N) = \\pm \\sqrt{\\frac{\\epsilon \\pi \\alpha }{G\\beta }}~N^{1/2}~~,$ where we use the relation between the cosmic time and the e-fold number $t(N) = \\pm \\sqrt{\\frac{4G\\beta }{\\epsilon \\pi \\alpha }}N^{\\frac{1}{2}}$ obtained from $N = \\ln {a}$ .", "Eq.", "(REF ) clearly indicates that the Hubble parameter is negative in the negative branch of $t(N)$ , while $H(N) > 0$ at $t(N) > 0$ .", "Therefore the negative and positive branch of $t(N)$ represents the contracting and expanding stage of the universe respectively.", "Clearly the bounce occurs at $t = 0$ or equivalently at $N = 0$ .", "Thus in terms of the e-fold number, the universe starts at $N \\rightarrow +\\infty $ from the distant past, then the bounce happens at $N = 0$ and consequently the universe goes to the distant future again at $N \\rightarrow +\\infty $ .", "By using Eq.", "(REF ), we reconstruct the form of $\\gamma (N)$ which allows the above $H = H(N)$ , and is given by, $\\gamma (N) = \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{1}{N}\\right)\\right\\rbrace ^{\\beta } -\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{1}{N}\\right)\\right\\rbrace ^{-\\beta }~~.$ Therefore the exponential bounce scenario described by the scale factor in Eq.", "(REF ) or equivalently by the Hubble parameter in Eq.", "(REF ) can be achieved from the singular free generalized entropy $S_\\mathrm {g}\\left[\\alpha ,\\beta ,\\epsilon ,\\gamma (N)\\right]$ with $\\gamma (N)$ is given by Eq.", "(REF ).", "At this stage it deserves mentioning that in the case of exponential bounce, the comoving Hubble radius ( defined by $r_\\mathrm {h} = 1/\\left|aH\\right|$ , where $r_\\mathrm {h}$ symbolizes the comoving Hubble radius ) decreases with time and asymptotically goes to zero at both sides of the bounce.", "This indicates that the perturbation modes generate near the bounce when the comoving Hubble radius is infinite in size to contain all the perturbation modes within the sub-Hubble regime.", "In effect, the issue of horizon problem appears as the perturbation modes are in the super-Hubble regime at the distant past.", "Due to such problem in the exponential bounce, we now consider an alternative bounce in the present context of entropic cosmology, which is free from the horizon problem." 
], [ "Possibility for a quasi matter bounce", "In this case, the scale factor is, $a(t) = \\left[1 + a_0\\left(\\frac{t}{t_0}\\right)^2\\right]^n$ which is symmetric about $t = 0$ when the bounce happens.", "The $n$ , $a_0$ and $t_0$ considered in the scale factor are related to the entropic parameters, and we take it as follows: $n = \\sqrt{\\alpha }~~~~~~,~~~~~~~~a_0 = \\frac{\\pi }{4\\beta }~~~~~~~~\\mathrm {and}~~~~~~~~~t_0 = \\sqrt{G/\\epsilon }~~,$ with $G$ being the gravitational constant.", "The relation between ($n$ , $a_0$ , $t_0$ ) with the entropic parameters can be considered in a different way compared to the Eq.", "(REF ), however for a simplified expression of $\\gamma (N)$ we consider the relations as of Eq.", "(REF ).", "Consequently the scale factor has the following form: $a(t) = \\left[1 + \\left(\\frac{\\pi \\epsilon }{4\\beta G}\\right)t^2\\right]^{\\sqrt{\\alpha }}~~.$ For $\\alpha = \\frac{1}{9}$ , the scale factor represents a matter bounce scenario, while a quasi matter bounce is depicted by $\\alpha \\approx \\frac{1}{9}$ .", "At this stage, it deserves mentioning that due to the above scale factor, the comoving Hubble radius asymptotically goes as $r_h \\sim t^{1-2\\sqrt{\\alpha }}$ .", "Therefore for $\\alpha < \\frac{1}{4}$ , the comoving Hubble radius asymptotically diverges to infinity, and hence, the primordial perturbation modes generate far before the bounce during the contracting phase.", "This results to the resolution of the horizon problem as the perturbation modes lie within the sub-Hubble domain at the distant past.", "However for $\\alpha > \\frac{1}{4}$ , similar to the exponential bounce, $r_h$ asymptotically vanishes and consequently the bounce scenario may suffer from the Horizon issue.", "Based on these arguments, we will take $\\alpha < \\frac{1}{4}$ for the scale factor of Eq.", "(REF ), which covers the range required for the quasi matter bounce.", "Eq.", "(REF ) immediately gives the cosmic time in terms of e-fold number as follows: $t(N) = \\pm \\sqrt{\\frac{4\\beta G}{\\pi \\epsilon }}\\left[\\mathrm {e}^{N/\\sqrt{\\alpha }} - 1\\right]^{\\frac{1}{2}}~~.$ We use the above expression to get the Hubble parameter in terms of e-fold number, and is given by, $H(N) = \\pm \\left(\\sqrt{\\frac{\\epsilon \\pi \\alpha }{\\beta G}}\\right)\\mathrm {e}^{-N/\\sqrt{\\alpha }}\\left[\\mathrm {e}^{N/\\sqrt{\\alpha }} - 1\\right]^{\\frac{1}{2}}~~.$ Plugging back the above expression of $H(N)$ into Eq.", "(REF ) yields the respective form of $\\gamma (N)$ as, $\\gamma (N) = \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left[\\mathrm {e}^{-N/\\sqrt{\\alpha }}\\left(\\mathrm {e}^{N/\\sqrt{\\alpha }} - 1\\right)^{\\frac{1}{2}}\\right]\\right\\rbrace ^{\\beta } -\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left[\\mathrm {e}^{-N/\\sqrt{\\alpha }}\\left(\\mathrm {e}^{N/\\sqrt{\\alpha }} - 1\\right)^{\\frac{1}{2}}\\right]\\right\\rbrace ^{-\\beta }~~,$ which triggers the quasi matter bounce described by the scale factor of Eq.", "(REF ) in the present context of singular free generalized entropic cosmology.", "Having described the background evolution, we now focus to the perturbation analysis for the curvature perturbation as well as for the tensor perturbation respectively.", "As mentioned earlier that in the case of exponential bounce the perturbation modes lie outside of the Hubble radius at the distant past, unlike to the case of quasi-matter bounce scenario where the perturbation modes generate far before the bounce and thus the modes remain 
within the deep sub-Hubble regime at the distant past.", "This resolves the horizon problem in the quasi-matter bounce scenario, while the exponential bounce seems to suffer from such problem.", "Thus in the following, we will concentrate on the quasi-matter bounce scenario and perform the perturbation analysis in order to determine the observable quantities like the spectral tilt for the curvature perturbation and the tensor-to-scalar ratio respectively.", "This is the subject of Sec.[].", "In regard to the perturbation analysis, we represent the present entropic cosmology with the ghost free Gauss-Bonnet (GB) theory of gravity proposed in [69].", "The motivation of such representation is due to the rich structure of the Gauss-Bonnet theory in various directions of cosmology [70], [71], [72], [73].", "We briefly demonstrate the essential features of the GB theory, and then we will show that for a certain $\\gamma (N)$ in the context of entropic cosmology, there exists an equivalent set of GB parameters in the side of Gauss-Bonnet cosmology that results to the same cosmological evolution as of the generalized entropy.", "The action for $f(\\mathcal {G})$ gravity is given by [69], $ S=\\int d^4x\\sqrt{-g} \\left(\\frac{1}{2\\kappa ^2}R+ \\lambda \\left( \\frac{1}{2} \\partial _\\mu \\chi \\partial ^\\mu \\chi + \\frac{\\mu ^4}{2} \\right) - \\frac{1}{2} \\partial _\\mu \\chi \\partial ^\\mu \\chi + h\\left( \\chi \\right) \\mathcal {G} - V\\left( \\chi \\right)\\right)\\, ,$ where $\\mu $ is a constant having mass dimension $[+1]$ , $\\lambda $ represents the Lagrange multiplier, $\\chi $ is a scalar field and $V(\\chi )$ is its potential.", "Moreover $\\mathcal {G} = R^2 - 4R_{\\mu \\nu }R^{\\mu \\nu } + R_{\\mu \\nu \\alpha \\beta }R^{\\mu \\nu \\alpha \\beta }$ is the Gauss-Bonnet scalar and $h(\\chi )$ symbolizes the Gauss-Bonnet coupling with the scalar field.", "The above action results to a ghost free action, as shown in [69].", "Varying the Lagrange multiplier, i.e $\\frac{\\delta S}{\\delta \\lambda } = 0$ gives the following constraint equation, $0=\\frac{1}{2} \\partial _\\mu \\chi \\partial ^\\mu \\chi + \\frac{\\mu ^4}{2} \\,$ which clearly indicates that the kinetic term of $\\chi $ is a constant, thus it may be absorbed within the $V(\\chi )$ .", "Therefore the new potential of the $\\chi $ comes as, $\\tilde{V} \\left(\\chi \\right) \\equiv \\frac{1}{2}\\partial _\\mu \\chi \\partial ^\\mu \\chi + V \\left( \\chi \\right)= - \\frac{\\mu ^4}{2} + V \\left( \\chi \\right) \\, ,$ owing to which, the action of Eq.", "(REF ) is equivalently re-written as, $S=\\int d^4x\\sqrt{-g} \\left(\\frac{1}{2\\kappa ^2}R+ \\lambda \\left( \\frac{1}{2} \\partial _\\mu \\chi \\partial ^\\mu \\chi + \\frac{\\mu ^4}{2} \\right) + h\\left( \\chi \\right) \\mathcal {G}- \\tilde{V}\\left( \\chi \\right)\\right) \\, .$ For the above action (REF ), the scalar and the gravitational equations have the following form, $0 =& - \\frac{1}{\\sqrt{-g}} \\partial _\\mu \\left(\\lambda g^{\\mu \\nu }\\sqrt{-g} \\partial _\\nu \\chi \\right)+ h^{\\prime }\\left( \\chi \\right) \\mathcal {G} - {\\tilde{V}}^{\\prime }\\left( \\chi \\right) \\, , \\\\0 =& \\frac{1}{2\\kappa ^2}\\left(- R_{\\mu \\nu }+ \\frac{1}{2}g_{\\mu \\nu } R\\right) - \\frac{1}{2} \\lambda \\partial _\\mu \\chi \\partial _\\nu \\chi - \\frac{1}{2}g_{\\mu \\nu } \\tilde{V} \\left( \\chi \\right)+ D_{\\mu \\nu }^{\\ \\ \\tau \\eta } \\nabla _\\tau \\nabla _\\eta h \\left( \\chi \\right)\\, ,$ where $D_{\\mu \\nu }^{\\ \\ \\tau \\eta }$ is given 
by, $D_{\\mu \\nu }^{\\ \\ \\tau \\eta }=&\\left( \\delta _{\\mu }^{\\ \\tau }\\delta _{\\nu }^{\\ \\eta } + \\delta _{\\nu }^{\\ \\tau }\\delta _{\\mu }^{\\ \\eta }- 2g_{\\mu \\nu }g^{\\tau \\eta } \\right) R + \\left( -4g^{\\rho \\tau }\\delta _{\\mu }^{\\ \\eta }\\delta _{\\nu }^{\\ \\sigma }- 4g^{\\rho \\tau }\\delta _{\\nu }^{\\ \\eta }\\delta _{\\mu }^{\\ \\sigma } + 4g_{\\mu \\nu }g^{\\rho \\tau }g^{\\sigma \\nu } \\right) R_{\\rho \\sigma }\\nonumber \\\\&+4R_{\\mu \\nu }g^{\\tau \\eta } - 2R_{\\rho \\mu \\sigma \\nu } \\left(g^{\\rho \\tau }g^{\\sigma \\nu } + g^{\\rho \\eta }g^{\\sigma \\tau }\\right)\\nonumber $ with having in mind $g^{\\mu \\nu }D_{\\mu \\nu }^{\\ \\ \\tau \\eta } = 4\\left[-\\frac{1}{2}g^{\\tau \\eta }R + R^{\\tau \\eta } \\right]$ .", "Due to the FRW metric shown in Eq.", "(REF ), and assuming that $\\chi $ and $\\lambda $ are functions of $t$ only, Eq.", "(REF ) immediately gives the solution for $\\chi $ as, $ \\chi = \\mu ^2 t \\, .$ Consequently the temporal and spatial components of Eq.", "() become, $0 = & - \\frac{3H^2}{2\\kappa ^2}- \\frac{\\mu ^4 \\lambda }{2} + \\frac{1}{2} \\tilde{V} \\left( \\mu ^2 t \\right)- 12 \\mu ^2 H^3 h^{\\prime } \\left( \\mu ^2 t \\right) \\, , \\\\0 = & \\frac{1}{2\\kappa ^2} \\left( 2 \\dot{H} + 3 H^2 \\right)- \\frac{1}{2} \\tilde{V} \\left( \\mu ^2 t \\right)+ 4 \\mu ^4 H^2 h^{\\prime \\prime } \\left( \\mu ^2 t \\right) + 8 \\mu ^2 \\left( \\dot{H} +H^2 \\right) H h^{\\prime } \\left( \\mu ^2 t \\right) \\, ,$ and moreover, the scalar field equation comes as, $0 = \\mu ^2 \\dot{\\lambda }+ 3 \\mu ^2 H \\lambda + 24 H^2\\left( \\dot{H} + H^2 \\right) h^{\\prime }\\left( \\mu ^2 t \\right)- {\\tilde{V}}^{\\prime }\\left( \\mu ^2 t \\right) \\, .$ Here it may be mentioned that the above field equations are not independent, in particular, Eq.", "() can be achieved from the other two.", "Eq.", "(REF ) and Eq.", "() provide the energy density ($\\rho _\\mathrm {GB}$ ) and the pressure ($p_\\mathrm {GB}$ ) corresponding to this ghost free Gauss-Bonnet gravity theory as: $\\rho _\\mathrm {GB}&=&-\\mu ^4\\lambda + \\tilde{V}(\\chi ) - 24\\mu ^2H^3h^{\\prime }(\\chi )~~,\\nonumber \\\\p_\\mathrm {GB}&=&-\\tilde{V}(\\chi ) + 8\\mu ^4H^2h^{\\prime \\prime }(\\chi ) + 16\\mu ^2\\left(\\dot{H} + H^2\\right)Hh^{\\prime }(\\chi )~~,$ respectively.", "For the Gauss-Bonnet theory, the speed of the gravitational wave ($c_T^2$ ) is, in general, different than unity and the deviation of $c_T^2$ from unity is controlled by the GB coupling function.", "In particular, the $c_T^2$ in the Gauss-Bonnet theory is, $c_T^2 = 1 + \\frac{16\\left( \\ddot{h}-\\dot{h}H \\right)}{\\frac{1}{\\kappa ^2} + 16\\dot{h}H}~~.$ Such deviation of $c_T^2$ from unity is not consistent with the GW170817 event which argues that the gravitational wave propagates with same speed of light.", "Thus in order to be consistent with the GW170817 event, we consider such class of GB coupling functions that satisfy the following condition [71], [73], [74], $\\ddot{h} = \\dot{h}H~~~~~~~~~\\Longrightarrow ~~~~~~~~~~\\dot{h} = h_0a(t)~~,$ where $h_0$ being the integration constant.", "With the above condition, $\\rho _\\mathrm {GB}$ and $p_\\mathrm {GB}$ from Eq.", "(REF ) turn out to be, $\\rho _\\mathrm {GB}&=&-\\mu ^4\\lambda + \\tilde{V}(\\chi ) - 24\\mu ^2H^3h^{\\prime }(\\chi )~~,\\nonumber \\\\p_\\mathrm {GB}&=&-\\tilde{V}(\\chi ) + 8\\mu ^2\\left(2\\dot{H} + 3H^2\\right)Hh^{\\prime }(\\chi )~~,$ respectively.", "To represent the entropic cosmology corresponding to the 
$S_\\mathrm {g}[\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon ]$ as a ghost free Gauss-Bonnet gravity theory compatible with the GW170817 event, we need to compare the above forms of energy density and pressure of the Gauss-Bonnet theory with that of coming from the entropy function as obtained in Eq.", "(REF ), i.e $\\rho _\\mathrm {GB} \\equiv \\rho _\\mathrm {eff}$ and $p_\\mathrm {GB} \\equiv p_\\mathrm {eff}$ respectively.", "By doing so, we obtain – $\\mu ^4\\lambda - \\tilde{V}(\\chi ) + 24\\mu ^2H^3h^{\\prime }(\\chi ) =\\left(\\frac{3\\epsilon \\alpha }{4\\beta G^2}\\right)\\left[\\ln {\\left\\lbrace 2\\left(\\frac{2}{\\gamma (N) + \\sqrt{\\gamma ^2(N) + 4}}\\right)^{1/\\beta } - 1\\right\\rbrace }\\right]^{-1}$ and $&-&\\mu ^4\\lambda + 16\\mu ^2\\dot{H}Hh^{\\prime }(\\chi )\\nonumber \\\\&=&H^4\\left(\\frac{\\gamma ^{\\prime }(N)}{8\\pi ^2\\gamma (N)}\\right)\\left[\\frac{\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta } -\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta }}{\\alpha ~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\left[\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}+ \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\right]}\\right]~.\\nonumber \\\\$ The above two algebraic equations can be solved for $\\tilde{V}(\\chi )$ and $\\lambda (t)$ to get, $\\tilde{V}(\\chi )&=&-8\\pi G~F_1\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]\\left(\\frac{1}{\\kappa ^2} + 8h_0a(t)H(t)\\right) \\bigg |_{t=\\chi /\\mu ^2}\\, ,\\\\\\mu ^4\\lambda (t)&=&-8\\pi G~F_2\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]\\left(\\frac{1}{\\kappa ^2} - 8h_0a(t)H(t)\\right)\\, ,$ where the functions $F_1\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]$ and $F_2\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]$ are given by, $F_1\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]&=&\\left(\\frac{3\\epsilon \\alpha }{4\\beta G^2}\\right)\\left[\\ln {\\left\\lbrace 2\\left(\\frac{2}{\\gamma (N) + \\sqrt{\\gamma ^2(N) + 4}}\\right)^{1/\\beta } - 1\\right\\rbrace }\\right]^{-1} +H^4\\left(\\frac{\\gamma ^{\\prime }(N)}{8\\pi ^2\\gamma (N)}\\right)\\times \\nonumber \\\\&\\Bigg [&\\frac{\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta } -\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta }}{\\alpha ~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\left[\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}+ \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\right]}\\Bigg ]\\nonumber \\\\\\nonumber $ and $F_2\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right] = H^4\\left(\\frac{\\gamma ^{\\prime }(N)}{8\\pi ^2\\gamma (N)}\\right)\\left[\\frac{\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta } -\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta 
GH^2}\\right)\\right\\rbrace ^{-\\beta }}{\\alpha ~\\mathrm {sech}^2\\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\left[\\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{\\beta -1}+ \\left\\lbrace 1 + \\frac{1}{\\epsilon }\\tanh \\left(\\frac{\\epsilon \\pi \\alpha }{\\beta GH^2}\\right)\\right\\rbrace ^{-\\beta -1}\\right]}\\right]\\nonumber $ respectively.", "Eq.", "(REF ) and Eq.", "() clearly depict that for a certain form of $\\gamma (N)$ in the context of entropic cosmology, there exists an equivalent $\\tilde{V}(\\chi )$ and $\\lambda $ in the side of Gauss-Bonnet cosmology that results to the same cosmological evolution as of the generalized entropy.", "therefore we may argue that the entropic cosmology of $S_\\mathrm {g}$ can be equivalently represented by Gauss-Bonnet cosmology where the $\\tilde{V}(\\chi )$ and $\\lambda (t)$ are given by Eq.", "(REF ) and Eq.", "() respectively.", "As mentioned earlier that we will concentrate on the quasi-matter bounce to analyze the perturbation (see the next subsection), in which case, the $\\gamma (N)$ is shown in Eq.", "(REF ), and thus, $F_1\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]$ and $F_2\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]$ come as, $F_1\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]&=&\\left(\\frac{\\epsilon \\pi \\sqrt{\\alpha }}{8\\pi \\beta G^2}\\right)e^{-2N/\\sqrt{\\alpha }}\\left(e^{N/\\sqrt{\\alpha }} - 2\\right)\\nonumber \\\\F_2\\left[\\gamma (N),\\gamma ^{\\prime }(N)\\right]&=&\\left(\\frac{\\epsilon \\pi \\alpha }{8\\pi \\beta G^2}\\right)e^{-2N/\\sqrt{\\alpha }}\\left\\lbrace 3\\left(e^{N/\\sqrt{\\alpha }} - 1\\right) + \\frac{1}{\\sqrt{\\alpha }}\\left(e^{N/\\sqrt{\\alpha }} - 2\\right)\\right\\rbrace ~~.$ Thus as a whole – Eq.", "(REF ) and Eq.", "() establish the equivalence between the entropic cosmology corresponding to the $S_\\mathrm {g}$ and the Gauss-Bonnet cosmology, and moreover, Eq.", "(REF ) shows such equivalence in the case of quasi-matter bounce scenario." 
], [ "Cosmological perturbation and phenomenology of the entropic quasi-matter bounce", "As mentioned earlier that we consider the quasi-matter bounce scenario described by the scale factor (REF ) to analyze the perturbation, where the perturbation modes generate during the contracting phase deep in the sub-Hubble regime, which in turn ensures the resolution of the horizon problem.", "For the scale factor (REF ), the Ricci scalar during the contracting era is given by, $R(t) = \\frac{12n(1-4n)}{t^2}~~,$ Due to this expression of $R = R(t)$ , one can achieve the scale factor, the Hubble parameter and its derivative (during the contracting era) in terms of the Ricci scalar as follows: $a(R)&=&\\frac{a_0^{n}}{\\left(\\widetilde{R}/R_0\\right)^{n}}~~~~,~~~~H(R) = -2n\\widetilde{R}^{1/2}~~~~~\\mathrm {and}~~~~~\\dot{H}(R)=-2n\\widetilde{R}~~,$ where $R_0 = \\frac{1}{t_0^2}$ and $\\widetilde{R}(t) = \\frac{R(t)}{12n(1-4n)}$ .", "It may be noted that $n$ , $a_0$ and $t_0$ are related to the entropic parameters via Eq.", "(REF ).", "Moreover from Eq.", "(REF ), we determine the derivative of the GB coupling function (in terms of the Ricci scalar) as $\\dot{h}(R) = \\frac{(2n+1)}{8\\pi G\\sqrt{R_0}}\\left(\\frac{R_0}{\\widetilde{R}}\\right)^n~~,$ where the integration constant $h_0$ is adjusted in suitable manner.", "With the above expressions of $H(R)$ and $\\dot{h}(R)$ , the functions $Q_i$ in the context of the ghost free Gauss-Bonnet theory of gravity [75], [76] come with the following expressions, $Q_a=&-8\\dot{h}H^2 = -\\frac{4n^2(1+2n)\\sqrt{\\widetilde{R}}}{\\pi G}\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2}-n}\\, , \\nonumber \\\\Q_b=&-16\\dot{h}H = \\frac{4n(1+2n)}{\\pi G}\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2}-n}\\, ,\\nonumber \\\\Q_c=&Q_d = 0 \\, ,\\nonumber \\\\Q_e=&-32\\dot{h}\\dot{H} = \\frac{8n(1+2n)\\sqrt{\\widetilde{R}}}{\\pi G}\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2}-n}\\, ,\\nonumber \\\\Q_f=&16 \\left[ \\ddot{h} - \\dot{h}H \\right] = 0 \\, ,$ respectively.", "The last expression of Eq.", "(REF ), i.e.", "$Q_f = 0$ , is a direct consequence of the fact that the speed of the gravitational wave is unity.", "Finally Eq.", "() gives the Lagrange multiplier function as, $\\mu ^4\\lambda = -\\frac{n\\widetilde{R}}{2\\pi G}\\left[1 - 16n(1+2n)\\left(\\frac{\\widetilde{R}(t)}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, .$ We will frequently use these expressions in the subsequent sections.", "In the comoving gauge, the second order action for primordial curvature perturbation (symbolized by $\\Psi (t,\\vec{x})$ ) comes as [75], [76], $\\delta S_{\\psi } = \\int dt d^3\\vec{x} a(t) z(t)^2\\left[\\dot{\\Psi }^2- \\frac{c_s^2}{a^2}\\left(\\partial _i\\Psi \\right)^2\\right]\\, .$ We have shown that the energy density and the pressure corresponds to the entropy $S_\\mathrm {g}$ , i.e $\\rho _\\mathrm {g}$ and $p_\\mathrm {g}$ , can be represented by a ghost free $f(R,\\mathcal {G})$ gravity theory for suitable forms of scalar field potential and the GB coupling function.", "As a result, $z(t)$ and $c_s^2$ have the following forms [75], $z(t) = \\frac{a(t)}{H + \\frac{Q_a}{2F + Q_b}} \\sqrt{-\\mu ^4\\lambda + \\frac{3Q_a^2 + Q_aQ_e}{2F + Q_b}}$ and $c_{s}^{2} = 1 + \\frac{Q_aQ_e/\\left(2F + Q_b\\right)}{-\\mu ^4\\lambda + 3\\frac{Q_a^2}{2F + Q_b}} \\, ,$ respectively, with and $F=\\frac{1}{16\\pi G}$ .", "Plugging the expressions of $Q_i$ into Eq.", "(REF ) yields the form of $z(t)$ as, $z(t) = -\\frac{a_0^n}{\\kappa 
\\left(\\widetilde{R}/R_0\\right)^{n}}~\\frac{\\sqrt{P(R)}}{Q(R)}$ where $P(R)$ and $Q(R)$ are given by, $P(R) = 4n\\left[1 - 16n(1+2n)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}+ \\mathcal {O}\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{1-2n}\\right]\\, ,$ and $Q(R) = 2n\\left[1 + 16n(1+2n)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}+ \\mathcal {O}\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{1-2n}\\right]\\, ,$ respectively.", "Recall that the perturbation modes generate far before the bounce when the Ricci scalar satisfies the condition $\\frac{\\widetilde{R}}{R_0} \\ll 1$ due to $\\widetilde{R} \\rightarrow 0$ at $t \\rightarrow -\\infty $ .", "This results to $z^2(t) > 0$ which in turn ensures the stability of the curvature perturbation.", "In order to solve the Mukhanov-Sasaki equation for the curvature perturbation, we will use conformal time defined by $\\eta = \\int \\frac{dt}{a(t)}$ which, due to $a(t) \\sim t^{2n}$ , comes as, $\\eta (t) = \\left[\\frac{1}{a_0^n(1-2n)}\\right]t^{1-2n}\\, .$ Clearly $\\eta (t)$ is a monotonic increasing function of $t$ (recall that $n < 1/2$ ).", "With the above expression of $\\eta =\\eta (t)$ , the Ricci scalar can be obtained in terms of conformal time as follows: $\\widetilde{R}(\\eta ) = \\frac{1}{\\left[a_0^n(1-2n)\\right]^{2/(1-2n)}}\\times \\frac{1}{\\eta ^{2/(1-2n)}} \\propto \\frac{1}{\\eta ^{2/(1-2n)}}\\,.$ by using which, the $z(\\eta )$ from Eq.", "(REF ) turns out to be, $z(\\eta ) \\propto \\left(\\frac{\\sqrt{P(\\eta )}}{Q(\\eta )}\\right)\\eta ^{2n/(1-2n)}\\, ,$ with $P(\\eta ) = P(R(\\eta ))$ and $Q(\\eta ) = Q(R(\\eta ))$ .", "Consequently the factor $\\frac{1}{z}\\frac{d^2z}{d\\eta ^2}$ (which demonstrates the interaction of the curvature perturbation with the background evolution in the Mukhanov-Sasaki equation) is given by, $\\frac{1}{z}\\frac{d^2z}{d\\eta ^2} = \\frac{\\xi (\\xi - 1)}{\\eta ^2}\\left\\lbrace 1 + 24\\left(1-4n^2\\right)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}+ \\mathcal {O}\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{1-2n}\\right\\rbrace $ where we use $\\frac{d\\widetilde{R}}{d\\eta } = \\frac{-2}{(1-2n)}\\frac{\\widetilde{R}}{\\eta }$ and moreover $\\xi = \\frac{2n}{(1-2n)}$ in the above expression.", "Furthermore the speed of the scalar perturbation from Eq.", "(REF ) comes with the following expression, $c_s^2 = 1 + \\mathcal {O}\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{1-2n}\\, .$ Having all the necessary ingredients in hand, we now introduce the canonical Mukhanov-Sasaki (MS) variable: $v(\\eta ,\\vec{x}) = z(\\eta )\\Psi (\\eta ,\\vec{x})$ , and consequently the Fourier version of Mukhanov-Sasaki equation is, $\\frac{d^2v_k(\\eta )}{d\\eta ^2} + \\left(c_s^2k^2 - \\frac{1}{z}\\frac{d^2z}{d\\eta ^2}\\right)v_k(\\eta ) = 0\\, ,$ with $v_k(\\eta )$ being the Fourier mode for $v(\\eta ,\\vec{x})$ .", "Eq.", "(REF ) clearly depicts that the dynamics of $v_k(\\eta )$ is controlled by the background quantities like $z^{\\prime \\prime }/z$ and $c_s^2$ .", "As demonstrated after Eq.", "(REF )) that the Ricci scalar satisfies $\\frac{\\widetilde{R}}{R_0} \\ll 1$ during the contracting era, and thus one can retain the leading order term of $\\left(\\widetilde{R}/R_0\\right)^{\\frac{1}{2} - n}$ in the expressions of $z^{\\prime \\prime }/z$ and $c_s^2$ .", "In effect, they take the following forms, $\\frac{1}{z}\\frac{d^2z}{d\\eta ^2}&=&\\frac{\\xi (\\xi - 1)}{\\eta ^2}\\left[1 + 24\\left(1-4n^2\\right)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} 
- n}\\right]~~,\\nonumber \\\\c_s^2&=&1\\, .$ Moreover, due to $n < 1/2$ (required in order to resolve the horizon problem) along with $\\frac{\\widetilde{R}}{R_0} \\ll 1$ , the term $\\left(\\widetilde{R}/R_0\\right)^{\\frac{1}{2} - n}$ within the parenthesis can safely be considered to be small during the contracting era when the perturbation modes cross the horizon.", "In effect, $z^{\\prime \\prime }/z$ becomes proportional to $1/\\eta ^2$ , in particular, $\\frac{1}{z}\\frac{d^2z}{d\\eta ^2} = \\sigma /\\eta ^2$ , where, $\\sigma = \\xi (\\xi - 1)\\left[1 + 24\\left(1-4n^2\\right)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, ,$ which is approximately a constant during the generation era of the perturbation modes in the sub-Hubble regime.", "Using $z^{\\prime \\prime }/z \\propto \\eta ^{-2}$ and $c_s^2 = 1$ , one may solve Eq.", "(REF ) for $v_k(\\eta )$ , which is given by, $v(k,\\eta ) = \\frac{\\sqrt{\\pi |\\eta |}}{2}\\left[c_1(k)H_{\\nu }^{(1)}(k|\\eta |) + c_2(k)H_{\\nu }^{(2)}(k|\\eta |)\\right]\\, ,$ with $\\nu = \\sqrt{\\sigma + \\frac{1}{4}}$ , and, $H_{\\nu }^{(1)}(k|\\eta |)$ and $H_{\\nu }^{(2)}(k|\\eta |)$ symbolize the Hankel functions (of order $\\nu $ ) of first and second kind, respectively.", "The integration constants $c_1$ and $c_2$ can be determined from the Bunch-Davies condition of the MS variable.", "The Bunch-Davies vacuum of the MS variable during $\\eta \\rightarrow -\\infty $ , i.e., $\\lim _{k|\\eta | \\gg 1}v(k,\\eta ) = \\frac{1}{\\sqrt{2k}}e^{-ik\\eta }$ , is well justified due to the fact that the perturbation modes lie within the sub-Hubble radius in the distant past.", "This results in $c_1 = 0$ and $c_2 = 1$ .", "Consequently, the scalar power spectrum for the $k$ th mode (defined by $\\mathcal {P}_{\\Psi }(k,\\eta ) = \\frac{k^3}{2\\pi ^2} \\left| \\frac{v(k,\\eta )}{z(\\eta )} \\right|^2$ ) becomes, $\\mathcal {P}_{\\Psi }(k,\\eta ) = \\frac{k^3}{2\\pi ^2} \\left| \\frac{\\sqrt{\\pi |\\eta |}}{2z (\\eta )}H_{\\nu }^{(2)}(k|\\eta |) \\right|^2\\, ,$ where the solution of $v(k,\\eta )$ is used.", "The horizon crossing condition for the $k$ th mode is given by $k = |aH|$ which, due to Eq.", "(), is determined as, $k = \\frac{1}{\\left| \\eta _h\\right|}\\left(\\frac{2n}{1-2n}\\right) \\quad \\Rightarrow \\quad k\\left| \\eta _h\\right| = \\frac{2n}{1-2n}\\, ,$ where the suffix 'h' symbolizes the instant of horizon crossing.", "The observable quantities, namely the spectral tilt for the curvature perturbation and the tensor-to-scalar ratio, are eventually determined around the large scale modes given by $k = 0.05\\mathrm {Mpc}^{-1}$ .", "Thus Eq.", "(REF ) leads to the horizon crossing instant for $k = 0.05\\mathrm {Mpc}^{-1}$ as $\\eta _h \\approx -13\\,\\mathrm {By}$ .", "This is, however, expected for the following reasons: (1) the universe's evolution is symmetric around the bounce for the scale factor in Eq.", "(REF ), and (2) the large scale modes re-enter the horizon near the present epoch of the universe $\\sim 13.5\\mathrm {By}$ .", "This immediately leads to the estimate of the Ricci scalar at the horizon crossing of the large scale modes as $\\frac{\\widetilde{R}}{R_0} \\sim 10^{-6}$ (with $n = 0.3$ , $R_0 = 1\\mathrm {By}^{-2}$ and $a_0 \\sim \\mathcal {O}(1)$ , which will be shown to be consistent with the viability of the model with respect to the Planck data, see the next subsection).", "This in turn justifies the condition $\\frac{\\widetilde{R}}{R_0} \\ll 1$ considered earlier in determining $z(t)$ .",
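As a cross-check of the mode-function solution above, one can verify numerically that $v(k,\eta )$ built from a Hankel function satisfies the Mukhanov-Sasaki equation with $\sigma = \nu ^2 - 1/4$ . The following is a small sketch of ours (not the authors' code); the values of $n$ , $k$ and the conformal-time grid are illustrative only.

```python
# Consistency check: v(k, eta) = sqrt(pi*|eta|)/2 * H^(2)_nu(k*|eta|) should satisfy
# v'' + (k^2 - sigma/eta^2) v = 0 with sigma = nu^2 - 1/4 (leading order, c_s^2 = 1).
import numpy as np
from scipy.special import hankel2

n = 0.3                                    # quasi-matter bounce exponent (n < 1/2)
xi = 2.0 * n / (1.0 - 2.0 * n)
sigma = xi * (xi - 1.0)                    # leading order, (R~/R0)^(1/2-n) -> 0
nu = np.sqrt(sigma + 0.25)
k = 1.0
eta = -np.linspace(50.0, 5.0, 20001)       # contracting era, eta < 0, uniform grid
v = 0.5 * np.sqrt(np.pi * np.abs(eta)) * hankel2(nu, k * np.abs(eta))
d2v = np.gradient(np.gradient(v, eta), eta)
residual = d2v + (k**2 - sigma / eta**2) * v
# The residual away from the grid edges should be small, limited only by the
# finite-difference step of the second derivative.
print(np.max(np.abs(residual[100:-100])))
```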
"In order to determine the spectral tilt, we need the scalar power spectrum in the super-Hubble regime when the mode satisfies $k|\\eta | < 2n/(1-2n)$ , and consequently, $\\mathcal {P}_{\\Psi }(k,\\eta )$ becomes, $\\mathcal {P}_{\\Psi }(k,\\eta ) = \\left[\\left(\\frac{1}{2\\pi }\\right)\\frac{1}{z\\left|\\eta \\right|}\\frac{\\Gamma (\\nu )}{\\Gamma (3/2)}\\right]^2\\left(\\frac{k|\\eta |}{2}\\right)^{3-2\\nu }\\, ,$ recall that $\\nu = \\sqrt{\\sigma + \\frac{1}{4}}$ .", "The above expression of $\\mathcal {P}_{\\Psi }(k,\\eta )$ leads to the spectral tilt for the curvature perturbation, symbolized by $n_s$ .", "However before calculating the $n_s$ , we perform the tensor perturbation which is required for the other observable quantity namely the tensor-to-scalar ratio ($r$ ).", "The quadratic order action of the tensor perturbation is [75], [76], $\\delta S_{h} = \\int dt d^3\\vec{x} a(t) z_T(t)^2\\left[\\dot{h}_{ij}\\dot{h}^{ij}- \\frac{1}{a^2}\\left(\\partial _kh_{ij}\\right)^2\\right]\\, .$ As demonstrated earlier that the entropic energy density and the entropic pressure corresponding to the $S_\\mathrm {g}$ are represented by Lagrange multiplier $f(R,\\mathcal {G})$ theory, in which case, the function $z_T$ takes the following form [75], $z_T(t) = a\\sqrt{F + \\frac{1}{2}Q_b}\\, ,$ where $F = \\frac{1}{16\\pi G}$ and the $Q_b$ is given in Eq.", "(REF ).", "It may be observed from Eq.", "(REF ) that the speed of the gravitational wave is unity, as the GB coupling function in the present context satisfies $\\ddot{h} = \\dot{h}H$ which leads to $c_T^2 = 1$ and ensures the compatibility of the bounce model with the GW170817 event.", "We use Eq.", "(REF ) to get $z_T$ as follows, $z_T=\\frac{a_0^n}{\\sqrt{2}\\kappa \\widetilde{R}^n}\\left[1 + 16n(1+2n)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, ,$ which depicts that $z_T^2 > 0$ and thus the tensor perturbation is stable.", "Eq.", "(REF ) immediately leads to $z_T$ in terms of the conformal time as, $z_T(\\eta ) \\propto S(R(\\eta ))\\eta ^{2n/(1-2n)}\\, ,$ where $S(R(\\eta ))$ has the following form, $S(R(\\eta )) = 1 + 16n(1+2n)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}\\, .$ Consequently we determine the factor $z_T^{\\prime \\prime }/z_T$ : $\\frac{1}{z_T}\\frac{d^2z_T}{d\\eta ^2} = \\frac{\\xi (\\xi -1)}{\\eta ^2}\\left[1 - 16(1-4n^2)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, .$ The above expression will be useful for solving the tensor Mukhanov-Sasaki equation.", "The condition $\\frac{\\widetilde{R}}{R_0} \\ll 1$ together with $n < 1/2$ clearly demonstrate that the term containing $\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}$ within the parenthesis can be safely considered to be small and slowly varying during the contracting phase.", "In effect of which, $z_T^{\\prime \\prime }/z_T$ becomes proportional to $1/\\eta ^2$ , in particular, $\\frac{1}{z_T}\\frac{d^2z_T}{d\\eta ^2} = \\sigma _T/\\eta ^2$ , with $\\sigma _T = \\xi (\\xi - 1)\\left[1 - 16(1-4n^2)\\left(\\frac{\\widetilde{R}}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, .$ Thus the tensor Mukhanov-Sasaki (MS) equation turns out to be, $\\frac{d^2v_T(k,\\eta )}{d\\eta ^2} + \\left(k^2 - \\frac{\\sigma _T}{\\eta ^2}\\right)v_T(k,\\eta ) = 0\\, ,$ with $v_T(k,\\eta )$ is the Fourier transformed quantity of the tensor MS variable that is defined by $\\left(v_T\\right)_{ij} = z_Th_{ij}$ .", "Here it may be mentioned that both the tensor polarization modes ($+$ and $\\times $ 
polarization modes) obey the same evolution Eq.", "(REF ); this means that the two polarization modes contribute equally to the energy density of the tensor perturbation variable, and thus we will multiply by the factor '2' in the final expression of the tensor power spectrum.", "Solving Eq.", "(REF ), one gets $v_T(k,\\eta ) = \\frac{\\sqrt{\\pi |\\eta |}}{2}\\left[D_1~H_{\\theta }^{(1)}(k|\\eta |) + D_2~H_{\\theta }^{(2)}(k|\\eta |)\\right]\\, ,$ with $\\theta = \\sqrt{\\sigma _T + \\frac{1}{4}}$ , and, $H_{\\theta }^{(1)}(k|\\eta |)$ and $H_{\\theta }^{(2)}(k|\\eta |)$ symbolize the Hankel functions (of order $\\theta $ ) of first and second kind, respectively.", "Here it may be mentioned that the solution of $v_T(k,\\eta )$ can also be written in terms of cylinder functions of order $\\theta $ .", "Moreover $D_1$ and $D_2$ are two integration constants that can be determined from the initial condition of $v_T(k,\\eta )$ .", "Similar to the curvature perturbation variable, the tensor perturbation initiates from the Bunch-Davies vacuum in the distant past, i.e., $\\lim _{k|\\eta | \\gg 1}v_T(k,\\eta ) = \\frac{1}{\\sqrt{2k}}e^{-ik\\eta }$ , which immediately leads to $D_1 = 1$ and $D_2 = 0$ .", "As a result, we obtain the tensor power spectrum for the $k$ th mode in the super-Hubble regime as, $\\mathcal {P}_{T}(k,\\eta ) = 2\\left[\\frac{1}{2\\pi }\\frac{1}{z_T\\left|\\eta \\right|}\\frac{\\Gamma (\\theta )}{\\Gamma (3/2)}\\right]^2 \\left(\\frac{k|\\eta |}{2}\\right)^{3 - 2\\theta }\\, ,$ where $\\theta = \\sqrt{\\sigma _T + \\frac{1}{4}}$ .", "Having obtained the scalar and tensor power spectra, we now evaluate the observable quantities, namely the spectral tilt for the curvature perturbation ($n_s$ ) and the tensor-to-scalar ratio ($r$ ), which are defined by, $n_s = 1 + \\left.", "\\frac{\\partial \\ln {\\mathcal {P}_{\\Psi }}}{\\partial \\ln {k}} \\right|_{h} \\, , \\quad r=\\mathcal {P}_T/\\mathcal {P}_{\\Psi }\\, ,$ where the suffix 'h' denotes the horizon crossing of the large scale modes at which we will calculate $n_s$ and $r$ .", "The observational constraints on $n_s$ and $r$ reported by the latest Planck 2018 data [77] are, $n_s = 0.9649 \\pm 0.0042 \\quad \\mbox{and} \\quad r < 0.064 \\, .$ By using Eq.", "(REF ) and Eq.", "(REF ), we obtain the theoretical expressions of $n_s$ and $r$ as, $n_s = 4 - \\sqrt{1 + 4\\sigma _h} \\, , \\quad r = 2\\left[\\frac{z(\\eta _h)}{z_T(\\eta _h)}\\frac{\\Gamma (\\theta )}{\\Gamma (\\nu )}\\right]^2\\left( k\\left|\\eta _h\\right| \\right)^{2(\\nu -\\theta )}\\, ,$ where the quantities have the following forms, $\\nu =&\\sqrt{\\sigma _h + \\frac{1}{4}}\\, ; \\quad \\sigma _h = \\xi (\\xi - 1)\\left[1 + 24\\left(1-4n^2\\right)\\left(\\frac{\\widetilde{R}_h}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, ,\\nonumber \\\\\\theta =&\\sqrt{\\sigma _{T,h} + \\frac{1}{4}} \\, ; \\quad \\sigma _{T,h} = \\xi (\\xi - 1)\\left[1 - 16(1-4n^2)\\left(\\frac{\\widetilde{R}_h}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, ,\\nonumber \\\\z(\\eta _h)=&-\\frac{1}{\\sqrt{n}}\\left(\\frac{a_0^n}{\\kappa \\widetilde{R}_h^{n}}\\right)\\left[1 - 24n(1+2n)\\left(\\frac{\\widetilde{R}_h}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, ,\\nonumber \\\\z_T(\\eta _h)=&\\frac{1}{\\sqrt{2}}\\left(\\frac{a_0^n}{\\kappa \\widetilde{R}_h^{n}}\\right)\\left[1 + 16n(1+2n)\\left(\\frac{\\widetilde{R}_h}{R_0}\\right)^{\\frac{1}{2} - n}\\right]\\, .$ Here $\\widetilde{R}_h$ represents the Ricci scalar at the horizon crossing, and due to Eq.", "(REF 
), it turns out to be, $\\widetilde{R}_h = \\left[\\frac{1}{a_0^n(1-2n)\\left|\\eta _h\\right|}\\right]^{2/(1-2n)}\\, ,$ where $\\eta _h$ is given by, $\\left|\\eta _h\\right| = \\left(\\frac{2n}{1-2n}\\right)\\frac{1}{k} \\approx \\left(\\frac{2n}{1-2n}\\right)\\times 13\\,\\mathrm {By}\\, .$ In the second equality of the above equation, we use $k = 0.05\\mathrm {Mpc}^{-1}$ , i.e., $1/k \\approx 13\\,\\mathrm {By}$ (recall that the large scale modes cross the horizon at around $-13\\mathrm {By}$ and, since the evolution of the universe is symmetric around the bounce, they re-enter the horizon during the expanding phase at around $+13\\mathrm {By}$ ).", "Using Eq.", "(REF ) and Eq.", "(REF ), one gets $\\widetilde{R}_h$ in terms of $n$ and $a_0$ : $\\widetilde{R}_h = \\left[\\frac{1}{26na_0^n}\\right]^{2/(1-2n)}\\mathrm {By}^{-2}\\, .$ Therefore it is clear that $n_s$ and $r$ in the present context depend on the parameters $n$ and $a_0$ .", "Here we need to recall that $n$ and $a_0$ are related to the entropic parameters as $n = \\sqrt{\\alpha }$ and $a_0 = \\pi /\\left(4\\beta \\right)$ respectively.", "It turns out that the theoretical predictions for $n_s$ and $r$ become simultaneously compatible with the recent Planck data for a small range of the entropic parameters given by: $\\alpha = [0.0938,0.0939]$ and $\\beta = \\frac{\\pi }{16}$ ; this is depicted in Fig.", "[REF ].", "It may be observed that the viable range $\\alpha = [0.0938,0.0939]$ slightly differs from $\\alpha = \\frac{1}{9}$ , which leads to a matter-bounce scenario.", "Therefore, in the present context of entropic cosmology, we may argue that a quasi-matter bounce scenario yields a consistent $n_s$ as well as a consistent $r$ under the Gauss-Bonnet representation.", "This is unlike the case when the entropic cosmology under consideration is given a scalar-tensor representation, in which case the quasi-matter bounce may give a correct $n_s$ ; however, the tensor-to-scalar ratio becomes too large to be consistent with the Planck data.", "Figure: Parametric plot of $n_s$ (along the $x$ -axis) vs. $r$ (along the $y$ -axis) with respect to $n$ .", "Here we take $\\alpha = [0.0938,0.0939]$ and $\\beta = \\frac{\\pi }{16}$ ."
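The parametric plot described above can be reproduced, at least approximately, directly from the closed-form expressions for $n_s$ and $r$ . The following is a minimal sketch of ours (not the authors' code); it assumes $R_0 = 1\,\mathrm{By}^{-2}$ , $n=\sqrt{\alpha }$ , $a_0=\pi /(4\beta )$ and $k\left|\eta _h\right| = 2n/(1-2n)$ as stated in the text, and uses the fact that the factors of $\kappa $ and $a_0^n/\widetilde{R}_h^{\,n}$ cancel in the ratio $z(\eta _h)/z_T(\eta _h)$ .

```python
# Minimal sketch: n_s and r from the closed-form expressions, as functions of the
# entropic parameters alpha and beta (R_0 = 1 By^-2 assumed).
import numpy as np
from scipy.special import gamma as Gamma

def ns_r(alpha, beta):
    n, a0 = np.sqrt(alpha), np.pi / (4.0 * beta)
    xi = 2.0 * n / (1.0 - 2.0 * n)                              # xi = k*|eta_h|
    Rh = (1.0 / (26.0 * n * a0**n))**(2.0 / (1.0 - 2.0 * n))    # R~_h / R_0
    x = Rh**(0.5 - n)
    sig = xi * (xi - 1.0) * (1.0 + 24.0 * (1.0 - 4.0 * n**2) * x)
    sigT = xi * (xi - 1.0) * (1.0 - 16.0 * (1.0 - 4.0 * n**2) * x)
    nu, th = np.sqrt(sig + 0.25), np.sqrt(sigT + 0.25)
    # kappa and a_0^n / R~_h^n cancel in z(eta_h)/z_T(eta_h)
    z_over_zT = -np.sqrt(2.0 / n) * (1.0 - 24.0 * n * (1.0 + 2.0 * n) * x) \
                / (1.0 + 16.0 * n * (1.0 + 2.0 * n) * x)
    ns = 4.0 - np.sqrt(1.0 + 4.0 * sig)
    r = 2.0 * (z_over_zT * Gamma(th) / Gamma(nu))**2 * xi**(2.0 * (nu - th))
    return ns, r

# Should land near n_s between roughly 0.965 and 0.97, with r well below 0.064.
print(ns_r(0.0938, np.pi / 16.0))
```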
], [ "Conclusion", "We propose a new five-parameter entropy function ($S_\\mathrm {g}$ ) that generalizes the Tsallis, Barrow, Rényi, Sharma-Mittal, Kaniadakis and Loop Quantum Gravity entropies for suitable limits of the parameters, and at the same time, is also non-singular during the evolution of the universe.", "The non-singular generalized entropy function obeys the generalized third law of thermodynamics, i.e $S_\\mathrm {g}$ tends to zero for $S \\rightarrow 0$ (with $S$ being the Bekenstein-Hawking entropy), and moreover, $S_\\mathrm {g}$ turns out to be a monotonic increasing function with $S$ .", "Here we would like to mention that beside the five-parameter entropy function, we also proposed a four-parameter entropy function in one of our earlier works [54], which is able to generalize all the known entropies mentioned above.", "However such an entropy with the four parameters becomes singular (or diverges) when the Hubble parameter vanishes (i.e $H = 0$ , for instance, in a bounce scenario at the instant of bounce), unlike to the five-parameter entropy function (proposed in the present work) which proves to be singular-free during the entire cosmological evolution of the universe.", "In this regard, we give the following conjecture – “The minimum number of parameters required in a generalized entropy function that can generalize all the aforementioned known entropies, and is also singular-free during the universe's evolution – is equal to five”.", "It is important to note that unlike to the aforementioned known entropies, the newly proposed five-parameter entropy function ($S_\\mathrm {g}$ ) is singular-free at $H = 0$ .", "Such $non-singular$ behaviour of the entropy function is useful in describing bouncing scenario, in which case, the universe undergoes through $H = 0$ at the instant of bounce.", "Thus we address the cosmological implications of $S_\\mathrm {g}$ during the early phase of the universe.", "In particular, the effective energy density and the effective pressure sourced from the $S_\\mathrm {g}$ modify the Friedmann equations, and consequently, we examine whether the cosmology corresponding to $S_\\mathrm {g}$ can trigger a non-singular bouncing universe.", "It turns out that being the entropic parameters are constant, the entropic cosmology corresponding to $S_\\mathrm {g}$ leads to a constant Hubble parameter as the only possible solution.", "Clearly $H = \\mathrm {constant}$ does not lead to the correct evolution of the universe.", "Thus in order to have an acceptable cosmological evolution in the present context, we consider the parameters of $S_\\mathrm {g}[\\alpha _{\\pm },\\beta ,\\gamma ,\\epsilon ]$ vary with time, in particular, we consider the parameter $\\gamma $ to vary with time, and all the other parameters remain fixed, i.e.", "$\\gamma = \\gamma (N)$ where $N$ represents the e-fold number of the universe.", "With $\\gamma = \\gamma (N)$ , the Friedmann equations corresponding to $S_\\mathrm {g}$ depict the following points – (1) the Hubble parameter is no more a constant during the universe's evolution, and the evolution of $H(N)$ depends on the form of $\\gamma (N)$ , and (2) the Friedmann equations contain a quadratic power of the Hubble parameter, due to which, it allows a positive branch as well as a negative branch solution of $H(N)$ , which in turn leads to a natural possibility of symmetric bounce in the present context of singular-free entropic cosmology.", "Consequently, we determine the form of $\\gamma (N)$ in two different symmetric 
bounce scenarios, in particular, for an exponential bounce and for a quasi-matter bounce, respectively.", "It turns out that, similar to the entropy function, the functional form of $\\gamma (N)$ is of hyperbolic nature in both bounce scenarios.", "Despite such similarity, the evolution of the comoving Hubble radius makes the bounce scenarios qualitatively different from each other.", "In the case of the exponential bounce, the comoving Hubble radius asymptotically goes to zero on both sides of the bounce, and thus the perturbation modes are generated near the bounce, where the comoving Hubble radius diverges and can contain all the perturbation modes within it.", "This may result in the “horizon problem”, which makes the exponential bounce less viable.", "On the other hand, the comoving Hubble radius in the case of the quasi-matter bounce diverges to infinity in the distant past, owing to which the perturbation modes are generated far before the bounce in the deep sub-Hubble regime.", "As a result, the “horizon issue” gets resolved in the quasi-matter bounce, as the perturbation modes remain in the sub-Hubble regime in the distant past.", "Based on these arguments, we concentrate on the quasi-matter bounce scenario and consequently perform a detailed perturbation analysis in order to estimate the observable quantities in the present context of entropic cosmology.", "The observable quantities, namely the spectral tilt for the curvature perturbation ($n_s$ ) and the tensor-to-scalar ratio ($r$ ), are found to depend on the entropic parameters, as expected.", "It turns out that the theoretical predictions of $n_s$ and $r$ are simultaneously compatible with the recent Planck 2018 data for a small range of the entropic parameters given by: $\\alpha _+ = \\alpha _- = [0.0938,0.0939]$ and $\\beta = \\frac{\\pi }{16}$ .", "Furthermore, we show that the entropic cosmology from the proposed singular-free entropy is equivalent to holographic cosmology with suitable forms of holographic cut-offs.", "In particular, the holographic cut-offs are determined in terms of either the particle horizon and its derivative or the future horizon and its derivative." ], [ "Acknowledgments", "This work was supported by MINECO (Spain), project PID2019-104397GB-I00 and also partially supported by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M, Spain (SDO).", "This research was also supported in part by the International Centre for Theoretical Sciences (ICTS) for the online program - Physics of the Early Universe (code: ICTS/peu2022/1) (TP)." ] ]
2212.05531
[ [ "Energy-recurrence Breakdown and Chaos in Disordered\n Fermi-Pasta-Ulam-Tsingou Lattices" ], [ "Abstract In this paper, we consider the classic Fermi-Pasta-Ulam-Tsingou system as a model of interacting particles connected by harmonic springs with a quadratic nonlinear term (first system) and a set of second-order ordinary differential equations with variability (second system) that resembles Hamilton's equations of motion of the Fermi-Pasta-Ulam-Tsingou system.", "In the absence of variability, the second system becomes Hamilton's equations of motion of the Fermi-Pasta-Ulam-Tsingou system (first system).", "Variability is introduced to Hamilton's equations of motion of the Fermi-Pasta-Ulam-Tsingou system to take into account inherent variations (for example, due to manufacturing processes), giving rise to heterogeneity in its parameters.", "We demonstrate that a percentage of variability smaller than a threshold can break the well-known energy recurrence phenomenon and induce localization in the energy normal-mode space.", "However, percentage of variability larger than the threshold may make the trajectories of the second system blow up in finite time.", "Using a multiple-scale expansion, we derive analytically a two normal-mode approximation that explains the mechanism for energy localization and blow up in the second system.", "We also investigate the chaotic behavior of the two systems as the percentage of variability is increased, utilising the maximum Lyapunov exponent and Smaller Alignment Index.", "Our analysis shows that when there is almost energy localization in the second system, it is more probable to observe chaos, as the number of particles increases." ], [ "Introduction", "Debye suggested that thermal conductivity in a crystal is a consequence of atom vibrations in the lattice [1], [2].", "To model thermalization processes in physical media, Fermi, Pasta, Ulam, with Tsingou's help running the computer simulations [3], considered a system of particles connected by harmonic springs with a quadratic nonlinear term, i.e., the so-called FPUT lattice, that is fixed at both ends.", "A purely linear dynamics of the springs keeps energy, given to a single normal mode, localized in that mode.", "However, introducing nonlinear interactions, one would expect that energy introduced to one normal mode, would slowly spread to other normal modes, until the system reaches a state of equipartition of energy, i.e., the system relaxes to a thermal equilibrium.", "Contrary to this expectation, Fermi, Pasta, Ulam, and Tsingou observed in their seminal paper [3] in 1955 recurrences of the energy to its initial state, the so-called FPUT recurrences, a phenomenon that led to numerous discoveries in mathematics and physics [4], [5], [6], [7], [8] thereafter.", "The recurrence phenomenon was explained by Zabusky and Kruskal [9] in real space, who derived the integrable Korteweg-de Vries equation from the continuum limit of the FPUT lattice.", "Introducing energy into one normal mode with wave number $k$ is nothing else but taking the sinusoidal initial condition in real space.", "As time evolves, the state breaks into a series of localized solutions, i.e., solitons that move and interact with the fixed ends, i.e.", "boundaries.", "Upon interacting with the fixed ends, the solitons bounce back and return to their initial positions, i.e., giving rise to FPUT recurrences.", "Another explanation to the inefficient energy transfer among normal modes was provided by Izrailev and Chirikov [10] who used 
the concept of the overlap of nonlinear resonances.", "They associated equipartition of energy with dynamical chaos and were able to estimate a threshold that separates regular from chaotic dynamics.", "Another direction in the study of the FPUT lattice is that of heat conductivity in the presence of disorder.", "The main interest is in its interplay with nonlinearity.", "For harmonic disordered systems, all eigenmodes of the infinite system, i.e., Anderson modes, are known to be localized and form a complete basis [11].", "As a linear superposition of Anderson modes, an initially localized wave in the infinite chain will remain localized at any time.", "Whether this behavior changes qualitatively by the introduction of nonlinearity is still an open question (see for example [12], [13], [14] and references therein).", "Disorder can be introduced in the form of uniformly distributed random variation of particle masses [15], [12], linear coupling constants between nearest neighbours [16], or in the nonlinearity coefficients [17].", "Recently, by viewing FPUT lattices as systems of masses coupled with nonlinear springs, Nelson et al.", "[18] incorporated heterogeneity on a one-dimensional FPUT array to take into account uncertainties (i.e., in the masses, the spring constants, or the nonlinear coefficients) during the manufacturing process of such physical systems.", "They demonstrated numerically that tolerances degrade the observance of recurrences, often leading to a complete loss in moderately-sized arrays.", "Such a variability may therefore provide a plausible explanation to little experimental evidence on FPUT energy recurrences.", "Here, we consider the problem of heterogeneous FPUT systems studied in [18].", "In our work, we perform numerical simulations in great detail to understand the breakdown of FPUT recurrences in the model.", "Indeed, we observe recurrence degradation, where the energy peak of the lowest normal mode is decreasing subsequently.", "For percentage of variability smaller than a threshold that we derive, the energy is then localized in the few lowest normal modes.", "The authors in [19], [20] considered non-equipartition of energy among normal modes and studied time-periodic states that are exponentially localized in the $k$ - (or $q$ - in [19], [20]) space of normal modes.", "Such time-periodic states are referred to as $q$ -breathers.", "Variability in FPUT lattices therefore leads to $q$ -breathers.", "In our work, by transforming the FPUT system into another system in the normal-mode space and considering a two normal-mode approximation, we provide a qualitative explanation for the disappearance of FPUT recurrences.", "In this approximation, $q$ -breathers are periodic solutions centered around an equilibrium point (i.e., time-independent solutions) in the $q$ -space of normal modes.", "We also perform long-term numerical integrations and compute the maximum Lyapunov exponent (mLE) [21] and Smaller Alignment Index (SALI) [22], [23] to show that the trajectories of the heterogeneous system become quickly chaotic, as the number of particles in the system increases for the same percentage of variability.", "In homogeneous FPUT lattices (i.e., in the absence of variability), when recurrences occur, the system reaches a metastable state [24], [25], where only few (low $k$ ) normal modes share the total energy of the system.", "However, it has also been shown that a rather weak diffusion takes place in the highest normal modes of the spectrum [26] that gradually 
leads to equipartition of energy [27].", "Using the result in [10], this weak diffusion process implies weak chaos [28].", "Our work shows that variability enhances the chaotic dynamics of the system.", "In this work, we also show that for percentages of variability bigger than a threshold, solutions may blow up in finite time.", "Using the same two normal-mode approximation, we have been able to explain the blow up phenomenon.", "A bifurcation analysis is further provided that yields a variability threshold for the blow up of solutions.", "The paper is organised as follows: In Sec.", ", we review the original FPUT lattice with a quadratic nonlinearity (i.e., the FPUT$-\\alpha $ system) and discuss energy recurrence.", "We introduce the governing equations of motion in the presence of parameter variability in Sec.", ".", "The phenomena of recurrence breakdown and blow up of solutions are reported in the same section.", "In Sec.", ", a two normal-mode approximation in the normal mode space is derived using multiple-scale analysis.", "Our analytical results explain why energy recurrences breakdown when variability is introduced and provide a qualitative reason why solutions blow up in finite-time after a variability threshold.", "In Sec.", ", we discuss chaos in the FPUT$-\\alpha $ system with or without variability and the mLE and SALI methods that we use to discriminate between ordered and chaotic trajectories.", "Finally, we conclude our study and discuss future work in Sec.", "." ], [ "Mathematical model and dynamics of FPUT$-\\alpha $ lattices", "The Hamiltonian of the FPUT$-\\alpha $ system is given by $H(x,p)=\\frac{1}{2}\\sum _{j=0}^N p_j^2 + \\sum _{j=0}^N \\frac{1}{2} \\left( x_{j+1}-x_j \\right)^2 + \\frac{\\alpha }{3} \\left( x_{j+1}-x_j \\right)^3=E,$ where fixed boundary conditions $x_0 = x_{N+1}=0$ and $p_0=0$ are considered.", "In this context, $\\alpha \\ge 0$ is the nonlinear coupling strength and $E$ the total, fixed, energy of the system.", "By viewing the FPUT$-\\alpha $ lattice as a model of particles coupled with springs, $x_j(t)$ represents the relative displacement of the $j$ th-particle from its equilibrium position at any time $t$ and $p_j(t)$ its corresponding conjugate momentum at any time $t$ .", "The equations of motion that result from Hamiltonian (REF ) (i.e., Hamilton's equations of motion) are then given by $\\ddot{x}_j= & (x_{j+1}-x_j)+\\alpha (x_{j+1}-x_j)^2-(x_{j}-x_{j-1}) - \\alpha (x_{j}-x_{j-1})^2.$ Working in the real space $x_j$ and $p_j$ , one can express Eqs.", "(REF ) in the normal-mode space $Q_j$ and $P_j$ .", "This can be done by writing the position $x_j(t)$ as a superposition of eigenvectors of the linear equation.", "Using the normal mode transformation, $\\mathbf {x} &=A\\mathbf {Q},\\quad \\mathbf {p} =A\\mathbf {P}, $ where $\\mathbf {x}=[x_1~x_2~\\ldots ~x_N]^T$ , $\\mathbf {p}=[p_1~p_2~\\ldots ~p_N]^T$ , $\\mathbf {Q}=[Q_1~Q_2~\\ldots ~Q_N]^T$ , $\\mathbf {P}=[P_1~P_2~\\ldots ~P_N]^T$ , and $A=\\sqrt{\\frac{2}{N+1}}\\begin{bmatrix}\\sin \\left(\\frac{\\pi }{N+1} \\right) & \\sin \\left(\\frac{2\\pi }{N+1} \\right) & \\dots & \\sin \\left(\\frac{N\\pi }{N+1} \\right)\\\\\\sin \\left(\\frac{2\\pi }{N+1} \\right)& \\sin \\left(\\frac{4\\pi }{N+1} \\right) & \\dots & \\sin \\left(\\frac{2N\\pi }{N+1} \\right)\\\\\\vdots & \\vdots & \\ddots & \\vdots \\\\\\sin \\left(\\frac{N\\pi }{N+1} \\right)& \\sin \\left(\\frac{2N\\pi }{N+1} \\right) & \\dots & \\sin \\left(\\frac{N^2\\pi }{N+1} \\right)\\end{bmatrix},$ the Hamiltonian (REF ) becomes 
$H&=\\frac{1}{2}\\sum _{k=1}^N \\left(P_k^2+\\omega _k^2Q_k^2 \\right)+\\alpha H_3(Q_1,Q_2,\\ldots ,Q_N),$ for some nonlinear function $H_3$ , where $\\omega _k=2\\sin \\left(\\frac{k\\pi }{2(N+1)}\\right).$ In this framework, $\\mathbf {Q}$ represents the amplitude of the normal mode, while $\\mathbf {P}$ its velocity.", "The energy of normal mode $k$ for $\\alpha =0$ can then be defined by $E_k&=\\frac{1}{2} \\left(P_k^2+\\omega _k^2Q_k^2 \\right).$ Substituting Eq.", "(REF ) into Eq.", "(REF ), we obtain the equations of motion in normal-mode coordinates as $\\ddot{\\mathbf {Q}} &=D\\mathbf {Q}+A^{-1}\\mathbf {F(Q)},$ where $D=\\begin{bmatrix}-\\omega _1^2 & 0 & \\ldots & 0\\\\0& -\\omega _2^2 & & 0\\\\\\vdots & & \\ddots & \\vdots \\\\0& 0 & \\ldots & -\\omega _N^2\\end{bmatrix},\\;\\;\\mathbf {F(Q)} =\\begin{bmatrix}f_1(\\mathbf {Q})\\\\f_2(\\mathbf {Q})\\\\\\vdots \\\\f_N(\\mathbf {Q})\\end{bmatrix}$ and $A^{-1}$ is the inverse matrix of $A$ , given by Eq.", "(REF ).", "In their seminal work, Fermi, Pasta, Ulam and Tsingou [3] excited the lowest possible normal mode, i.e., the mode with $k=1$ .", "The initial conditions of Eqs.", "(REF ) are then $p_j&=0, \\quad x_j=\\sin \\left( \\frac{\\pi i}{N+1}\\right),\\;j=1,2,\\ldots ,N, $ which are equivalent to solving Eq.", "(REF ) with $Q_1=\\sqrt{(N+1)/2}$ , $Q_k=0$ for $k=2,3,\\dots ,N,$ and $\\dot{Q}_k=0$ for $k=1,2,\\dots ,N$ .", "In Fig.", "REF , we plot the dynamics of $x_j$ of system (REF ) for $N=64$ , for the initial condition (REF ), where $E=0.03795$ .", "Panels (a) and (b) show the dynamics of $x_j(t)$ in real space (infact, what is shown is the oscillation envelope) and the normal mode energy of the first four normal modes of the FPUT$-\\alpha $ system (REF ), respectively.", "Figure: FPUT recurrences of the Hamiltonian system ().", "(a) Dynamics of x j (t)x_j(t) using the initial condition in (), where E=0.03795E=0.03795.", "Panel (a) shows the top view of the oscillation envelope of x j (t)x_j(t) in time.", "(b) Energy of the first four normal modes in the dynamics shown in panel (a).", "Note in panel (b) how almost all of the energy returns to the first normal mode at around t=6×10 4 t=6\\times 10^4, i.e., the appearance of an FPUT recurrence.", "Here we have used N=64N=64 in the computations in both panels.", "The range of values in the vertical axis in panel (a) is between 1 and N=64N=64.In their seminal paper [3], Fermi, Pasta, Ulam and Tsingou expected that the energy $E$ , which was initially used to excite the lowest normal mode only (i.e., $k=1$ ), would slowly drift to the other normal modes until the system reaches thermalization, as predicted by Statistical Mechanics.", "Surprisingly, the numerical experiment showed that that was not the case and that after several periods of the evolution of the mode, almost all energy in the system returned to the first normal mode that was excited initially.", "The authors witnessed the so-called FPUT recurrences.", "An example of such recurrences is given in Fig.", "REF for $N=64$ and $E=0.03795$ ." 
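The recurrence experiment described above is straightforward to reproduce numerically. The following is a minimal sketch of ours (not the original computation); the value $\alpha = 0.25$ is an assumption (the classic FPUT choice, since the value behind the figure is not quoted in this excerpt), and $N = 32$ is used to keep the run short, whereas the figure uses $N = 64$ and $E = 0.03795$ .

```python
# Minimal sketch: integrate the homogeneous FPUT-alpha lattice for the single-mode
# initial condition and monitor the normal-mode energies E_k = (P_k^2 + w_k^2 Q_k^2)/2.
import numpy as np
from scipy.integrate import solve_ivp

N, alpha = 32, 0.25                        # alpha = 0.25 is an assumed (classic) value
j = np.arange(1, N + 1)
A = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(j, j) * np.pi / (N + 1))  # mode matrix; A = A^{-1}
omega = 2.0 * np.sin(j * np.pi / (2.0 * (N + 1)))

def rhs(t, y):
    x, p = y[:N], y[N:]
    xe = np.concatenate(([0.0], x, [0.0]))             # fixed ends x_0 = x_{N+1} = 0
    dr = xe[2:] - xe[1:-1]                              # x_{j+1} - x_j
    dl = xe[1:-1] - xe[:-2]                             # x_j - x_{j-1}
    return np.concatenate((p, dr + alpha * dr**2 - dl - alpha * dl**2))

x0 = np.sin(np.pi * j / (N + 1))                        # excite only the lowest normal mode
sol = solve_ivp(rhs, (0.0, 2.0e4), np.concatenate((x0, np.zeros(N))),
                t_eval=np.linspace(0.0, 2.0e4, 1000), rtol=1e-9, atol=1e-12)

Q, P = A @ sol.y[:N], A @ sol.y[N:]                     # A is symmetric and orthogonal
E_k = 0.5 * (P**2 + (omega[:, None] * Q)**2)
# The energy should slosh among the lowest few modes and nearly return to mode 1.
print(E_k[:4, ::100].round(5))
```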
], [ "Disordered FPUT lattices", "The authors in [18] proposed various disordered FPUT$-\\alpha $ systems that include tolerances $v$ into each particle as a result of variability in a manufacturing process.", "In their study, they claimed to incorporate tolerances into the system in different ways based on manufacturing constraints.", "They first introduced the following system with heterogeneity $\\ddot{x}_j= & (v_{j+1}x_{j+1}-v_jx_j)+\\alpha (v_{j+1}x_{j+1}-v_jx_j)^2-(v_jx_{j}-v_{j-1}x_{j-1}) - \\alpha (v_jx_{j}-v_{j-1}x_{j-1})^2,$ where, again $\\alpha \\ge 0$ is the nonlinear coupling strength.", "It can be shown that this system admits the Hamiltonian function $H(x,p)=\\frac{1}{2}\\sum _{j=0}^N \\frac{p_j^2}{v_j} + \\sum _{j=0}^N \\frac{1}{2} \\left( v_{j+1}x_{j+1}-v_jx_j \\right)^2 + \\frac{\\alpha }{3} \\left( v_{j+1}x_{j+1}-v_jx_j \\right)^3=E.$ The disorder is inserted in a symmetric way between the linear and nonlinear coupling.", "Particularly, the variabilities $v_j$ were generated randomly from a Gaussian distribution, that is for a tolerance $\\tau \\%$ , the values of $v_j$ were drawn from a Gaussian distribution with mean 1 and standard deviation $\\sigma =1/3 \\times 0.01\\tau $ .", "Therefore, the values of $v_j$ would lie in the interval $[1-0.01\\tau ,1+0.01\\tau ]$ .", "The authors in [18] considered also the case where the variabilities $v_j$ are present only in the nonlinear coupling terms, resulting in the following system of second-order ordinary differential equations $\\ddot{x}_j= & (x_{j+1}-x_j)+\\alpha (v_{j+1}x_{j+1}-v_jx_j)^2-(x_{j}-x_{j-1}) - \\alpha (v_jx_{j}-v_{j-1}x_{j-1})^2.$ In this case, the system is no longer Hamiltonian.", "Then, they showed numerically that incorporating variability in the nonlinear coupling terms only has, for a fixed amount of variability, a comparable effect to incorporating it in only the linear coupling terms.", "Although in both setups recurrences such as those in Fig.", "REF disappear for large enough tolerance and the energy localizes in the first few normal modes, more energy is transferred to the lower modes in the latter case than in the former.", "In this work, we consider the effect of disorder in the second scenario of Eq.", "(REF ), which is a toy-dyamical system that does not necessarily relate to a real physical system.", "We have decided to study it as we show in Sec.", ", we can derive a mathematical theory to understand the effect of variability in the localization of energy and its dynamics.", "Throughout the paper, we consider disorder that is generated using the same setup as in [18].", "We show in Fig.", "REF the dynamics of $x_j$ and normal mode energies $E_k$ of the first four modes of the system of equations (REF ) (i.e., for $k=1,\\ldots ,4$ ) for $N=64$ particles and two different percentages of tolerance, i.e., for $\\tau =5\\%$ and $10\\%$ .", "Figure: Energy recurrences in time for the system in Eq.", "() for N=64N=64, similarly to Fig.", "for the Hamiltonian system ().", "Panels (a) and (b) are for τ=5%\\tau =5\\% tolerance and panels (c) and (d) for τ=10%\\tau =10\\%.", "Note that the ranges in the vertical axes in panels (a) and (c) are from 1 to N=64N=64.Comparing panels (a) in Figs.", "REF and REF , one can see that the variability reduces the effectiveness of recurrence, where a subsequent peak of the mode energy $E_1$ is lower than the preceding ones.", "In [18], it was reported that for larger variability, the energy transfer from the lowest to the higher ones becomes ineffective, which 
creates a non-recurrent state, shown in panel (d) in Fig.", "REF .", "This state is localized in the normal mode space, i.e., it is a $q$ -breather [19], [20].", "In other words, disorder promotes the occurrence of $q$ -breathers.", "In Sec.", ", applying a two normal-mode approximation to Eqs.", "(REF ) and using multiple-scale expansions, we show that there is a threshold for the percentage of variability $\\tau _c\\approx 10.0749\\%$ , after which the initial condition (REF ) may lead to finite-time blow up of the solutions.", "This is illustrated in Fig.", "REF , where the effect of the blow up is clearly seen in the abrupt increase of the normal-mode energies $E_k$ (see Eq.", "(REF )) of the first four normal modes (i.e., for $k=1,\\ldots ,4$ ) for $\\tau =20\\%>\\tau _c$ .", "Figure: A similar simulation as in panels (b) and (d) in Fig.", ", for the system in Eq.", "() and N=64N=64, but for the increased tolerance τ=20%\\tau =20\\%, where a finite-time blow up of the solution manifests as the abrupt increase of the energy of the first four normal modes around t=2000t=2000.In the following, we show why the transfer of energy between modes reduces with the increase of the percentage of variability.", "This leads to a localized state in the energy-mode space and to finite-time blow up of solutions if variability is greater than $\\tau _c$ .", "When energy localization occurs, the plots of the normal mode energies suggest that most of the mode coordinates are vanishing in time.", "Therefore, we prefer to work in the normal-mode coordinate system than in the real (physical) space.", "In this framework, the equations of motion (REF ) can be written in the normal-mode coordinates space in a similar manner as in Eq.", "(REF ), namely in the form $\\ddot{\\mathbf {Q}} &=D\\mathbf {Q}+A^{-1}\\mathbf {\\widehat{F}(Q)},$ for some nonlinear, vector-function $\\mathbf {\\widehat{F}(Q)}$ that depends on $\\tau $ , which is different to $\\mathbf {F(Q)}$ in Eq.", "(REF ) in the absence of variability.", "Our main assumption is that we can approximate system (REF ) by considering only the first few modes.", "To illustrate numerically that this assumption is reasonable, we present in Fig.", "REF the normal-mode energy for the set of equations of motion (REF ) for 2, 4 and 8 normal modes and different percentages of variability.", "The parameter values in the set of equations of motion (REF ) are calculated numerically for $N=64$ and the same percentage of variability as in Figs.", "REF and REF , where all remaining modes are set to 0 at all times.", "Particularly, looking at Fig.", "REF , we see that using 2 and 4 modes gives dynamics that are quantitatively different from those in Figs.", "REF and REF , with respect to the recurrence period.", "Nevertheless, even with only 2 modes, we can still observe energy recurrence and localization for increasing percentage of variability.", "Therefore, in the following, we will consider a two normal-mode system in Eq.", "(REF ).", "Figure: Normal-mode energy in time obtained from integrating Eq.", "() using only 2 normal modes in panels (a), (d), (g), 4 modes in panels (b), (e), (h) and 8 modes in panels (c), (f), (i).", "The tolerance is 0%0\\% in panels (a) - (c), 5%5\\% in panels (d) - (f), and 10%10\\% in panels (g) - (i).", "We note that for illustration purposes, we plot in all panels only the normal mode energy of the first four modes and that all tolerances are smaller than τ c \\tau _c.", "Despite the fact that the last four modes are activated for 
0%0\\% and 5%5\\% tolerance, they are essentially zero for 10%10\\% tolerance in panel (i)." ], [ "A two normal-mode system and bifurcation analysis", "Figure REF suggests that when energy localization in the first few normal mode occurs, all higher modes have relatively much smaller energy.", "This gives us the idea that we can approximate Eq.", "(REF ) by setting $Q_k(t)=0$ for $k=3,4,\\dots ,N$ , and obtain the following two normal-mode system $\\ddot{Q}_1 &= - \\omega _1^2 Q_1 + \\epsilon \\left( A_1 Q_1^2+A_2Q_2^2+A_3Q_1Q_2 \\right), \\\\\\ddot{Q}_2 &= - \\omega _2^2 Q_2 + \\epsilon \\left( B_1 Q_1^2+B_2Q_2^2+B_3Q_1Q_2 \\right),$ where $A_i$ , $B_i \\in \\mathbb {R}$ , $i=1,2,3$ and $\\omega _k$ is given in Eq.", "(REF )." ], [ "Multiple-scale expansions", "Since $\\omega _2 = 2 \\omega _1 + \\epsilon $ , $|\\epsilon | \\ll 1$ , we take the following asymptotic series $Q_1 &=X_0(t,T) + \\epsilon X_1(t,T) + \\ldots ,\\\\Q_2 &=Y_0(t,T) + \\epsilon Y_1(t,T) + \\ldots ,$ where $T=\\epsilon t$ is a slow-time variable.", "The leading-order approximations to Eqs.", "(REF ) are given by $X_0 &= q_1(T) e^{i\\omega _1 t} + q_1^*(T) e^{-i \\omega _1 t}, \\quad Y_0 = q_2(T) e^{i\\omega _2 t} + q_2^*(T) e^{-i \\omega _2 t}.", "$ Substituting Eqs.", "(REF ), (REF ) into Eq.", "(), expanding the equations in $\\epsilon $ and applying the standard solvability condition to avoid secular terms appearing (see e.g., [29]), we obtain $i \\frac{dq_1(T)}{d T} &= q_1(T) + \\widetilde{A}q_1^*q_2,\\\\\\quad i \\frac{dq_2(T)}{d T} &= q_2(T) + \\widetilde{B}q_1^2,$ for $q_1$ and $q_2$ , respectively, where $\\tilde{A}=A_3/(2\\omega _1)$ and $\\tilde{B}=B_1/(2\\omega _2)$ .", "In this context, $i$ is the imaginary unit of the complex numbers.", "Following Eqs.", "(REF ), the initial conditions of system (REF ) are given by $q_1(0)&=\\frac{Q_1(0)}{2}, \\\\q_2(0)&=0.", "$ We note that parameters $\\tilde{A}$ and $\\tilde{B}$ depend on $\\tau $ .", "In Fig.", "REF , we plot these parameters as a function of $\\tau $ for $N=64$ particles and 100 realizations.", "These realizations have been computed by fixing $\\tau $ and then opting for 100 sets of $N=64$ randomly generated numbers from the Gaussian distribution with mean 1 and standard deviation $\\sigma =1/3 \\times 0.01\\tau $ .", "Therefore, the $v_j$ s in the 100 sets lie in the interval $[1-0.01\\tau ,1+0.01\\tau ]$ .", "As we can see in panel (a), $\\tilde{A}$ is positive for all $\\tau $ , whereas $\\tilde{B}$ changes sign at around $\\tau =10\\%$ .", "Particularly, $\\tilde{B}$ starts positive for small $\\tau $ values before it becomes negative at around $\\tau =10\\%$ .", "By using polynomial regression, we have been able to fit the mean of the 100 realisations in panel (b) by the function $\\tilde{B}\\approx -0.00893\\tau ^2-0.000084\\tau +0.90728$ , with a sum of square errors (SSE) of $3.46\\times 10^{-19}$ .", "This allowed us to estimate with good accuracy the threshold for the percentage of variability where $\\tilde{B}$ changes sign and found to be given by $\\tau _c\\approx 10.0749\\%$ as $\\tilde{B}(\\tau _c)=0$ .", "In Sec.", "REF , we show that when $\\tilde{B}<0$ , that is for $\\tau >\\tau _c$ , trajectories of Eqs.", "(REF ) may blow up in finite time.", "Figure: Plot of A ˜\\tilde{A} (in panel (a)) and B ˜\\tilde{B} (in panel (b)) as a function of the tolerance obtained numerically for N=64N=64.", "The dash-dotted curve is the mean value over 100 realisations of the same percentage of variability (see the discussion in the text), while the 
lengths of the shaded regions are two standard deviations.", "Using a polynomial regression, the mean is found to be given approximately by A ˜≈0.01739τ 2 -0.00029τ+3.62805\\tilde{A}\\approx 0.01739\\tau ^2-0.00029\\tau +3.62805 and B ˜≈-0.00893τ 2 -0.000084τ+0.90728\\tilde{B}\\approx -0.00893\\tau ^2-0.000084\\tau +0.90728, where the sums of square errors are 2.14×10 -15 2.14\\times 10^{-15} and 3.46×10 -19 3.46\\times 10^{-19} in panels (a) and (b), respectively.", "Note the horizontal black dashed line at B ˜=0\\tilde{B}=0 from which τ c \\tau _c is derived (see text for details).A comparison of the dynamics of the normal modes $Q_1$ and $Q_2$ of Eq.", "() and those of the slow-time variables $q_1$ and $q_2$ of Eqs.", "(REF ) is shown in Fig.", "REF , where one can see that $q_j$ is an envelope of $Q_j$ for $j=1,2$ .", "Figure: Time evolution of the normal mode variables Q 1 Q_1 (blue curve) and Q 2 Q_2 (red curve) with their envelopes q 1 q_1 and q 2 q_2 (black curves) from Eqs.", "() for τ=0%\\tau =0\\% in panel (a) and τ=10%\\tau =10\\% in panel (b).", "Note that in both panels τ<τ c \\tau <\\tau _c, so trajectories do not blow up.Next we explain the cause of localization with the increase of the percentage of variability $\\tau $ .", "Note that from Eqs.", "(REF ), there can be transfer of energy from $q_1(t)$ to $q_2(t)$ through the nonlinear coupling coefficient $\\tilde{B}$ .", "Panel (b) in Fig.", "REF shows that $\\tilde{B}$ decreases from positive values with the increase of $\\tau $ until $\\tau =\\tau _c$ , after which it becomes negative.", "When $\\tilde{B}$ vanishes at $\\tau =\\tau _c$ , there is no transfer of energy and hence localization.", "In the following, we will also show that when $\\tilde{B}<0$ , i.e., for $\\tau >\\tau _c$ , there might be unbounded trajectories that blow up in finite time." 
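The localization mechanism can also be seen directly at the level of the slow-flow equations. The sketch below (ours, not the authors' code) integrates the envelope equations for $q_1$ and $q_2$ with the pairs $(\widetilde{A},\widetilde{B})$ quoted later in the text for $0\%$ and $10\%$ variability with $N=64$ , starting from $q_1(0)=Q_1(0)/2=\sqrt{(N+1)/2}/2$ and $q_2(0)=0$ , and compares how much of $|q_1|^2$ is transferred to $|q_2|^2$ .

```python
# Minimal sketch: slow-flow envelope dynamics, i dq1/dT = q1 + A~ q1* q2,
# i dq2/dT = q2 + B~ q1^2, split into real and imaginary parts.
import numpy as np
from scipy.integrate import solve_ivp

def slow_flow(T, y, At, Bt):
    q1, q2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dq1 = -1j * (q1 + At * np.conj(q1) * q2)
    dq2 = -1j * (q2 + Bt * q1**2)
    return [dq1.real, dq1.imag, dq2.real, dq2.imag]

N = 64
q10 = 0.5 * np.sqrt((N + 1) / 2.0)                     # q_1(0) = Q_1(0)/2, q_2(0) = 0
for At, Bt, label in [(3.63, 0.91, "tau = 0%"), (4.97, 0.05, "tau = 10%")]:
    sol = solve_ivp(slow_flow, (0.0, 20.0), [q10, 0.0, 0.0, 0.0],
                    args=(At, Bt), max_step=1e-3)
    amp2 = sol.y[2]**2 + sol.y[3]**2                   # |q_2|^2 along the trajectory
    print(label, "max |q2|^2 / |q1(0)|^2 =", round(float(np.max(amp2)) / q10**2, 3))
# For B~ close to zero the transfer from |q1|^2 to |q2|^2 is strongly suppressed,
# which is the envelope-level signature of the localization seen in the figures.
```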
], [ "Equilibrium solutions", "We start by analyzing the standing wave solutions of the envelope equations (REF ).", "To do so, it is convenient to write $q_1$ and $q_2$ in polar form $q_1=r_1 e^{i\\phi _1}$ and $q_2=r_2 e^{2i\\phi _2}$ , where $r_1=|q_1|$ , $r_2=|q_2|$ .", "Then, we define the new variables $P&=r_1^2+r_2^2, \\\\\\Delta &= r_1^2-r_2^2, \\\\\\theta &= \\phi _2 - \\phi _1.", "$ These variables satisfy the set of equations (see [30] for a similar derivation) $\\dot{P} &= \\frac{\\widetilde{A}-\\widetilde{B}}{\\widetilde{A}+\\widetilde{B}} \\dot{\\Delta }, \\\\\\dot{\\Delta }&=\\frac{\\sqrt{2\\left(P-\\Delta \\right)}\\sin \\left( 2\\,\\theta \\right) \\left( P+\\Delta \\right) \\left( \\widetilde{A}+\\widetilde{B}\\right)}{2}, \\\\\\dot{\\theta }&=-{\\frac{2\\,\\widetilde{A}\\cos \\left( 2\\,\\theta \\right) (\\Delta -P)+\\widetilde{B}\\cos \\left( 2\\,\\theta \\right) (\\Delta +P)-\\sqrt{2\\left(P-\\Delta \\right)}}{2\\sqrt{2\\left(P-\\Delta \\right)}}}.$ From Eq.", "(REF ), the constant of motion $C$ follows $C&=P-\\frac{\\widetilde{A}-\\widetilde{B}}{\\widetilde{A}+\\widetilde{B}} \\Delta .", "\\nonumber $ System (REF ) is transformed into Eqs.", "(REF ) by using Eqs.", "(REF ), where we assume that $r_1$ and $r_2$ are non negative real numbers.", "Equation (REF ) requires $P-\\Delta >0$ in order to have real-valued solutions, whereas Eq.", "(REF ) requires $P-\\Delta >0$ and $P+\\Delta \\ge 0$ , otherwise $r_1$ and $r_2$ will be complex numbers.", "We call the region which satisfies these two inequalities the well-defined region and denote it by the shaded area in Fig.", "REF .", "$\\Delta _2$ is outside the shaded region in the area below the red curve and above $\\widetilde{B}=0$ .", "This implies that $\\Delta _2$ is the equilibrium of system (REF ) only, but not of system (REF ).", "Figure: Bifurcation diagram of the equilibrium points Δ 1 \\Delta _1 and Δ 2 \\Delta _2 and the regions where the dynamics of system () is well-defined (see text for more details).The latter result implies that the dynamics of Eq.", "(REF ) can be described by the remaining equations () and (), i.e., in terms of $\\Delta $ and $\\theta $ only.", "As discussed before, Eqs.", "(), () are valid only when $P-\\Delta > 0$ .", "However, Eqs.", "(REF ), () imply that $P+\\Delta \\ge 0$ .", "These two inequalities determine the region where Eqs.", "(REF ), () are defined in the $(\\Delta ,\\theta )$ -plane.", "As this region depends on $\\widetilde{A}$ and $\\widetilde{B}$ , we consider the following cases: If $\\frac{\\widetilde{A}-\\widetilde{B}}{\\widetilde{A}+\\widetilde{B}} \\ge 1$ , then $\\Delta > \\text{max}\\left\\lbrace \\Delta _{\\text{crit}}^{1},\\Delta _{\\text{crit}}^{2}\\right\\rbrace $ , where $\\Delta _{\\text{crit}}^{1}=\\frac{C\\left(\\widetilde{A}+\\widetilde{B}\\right)}{2\\widetilde{B}}\\mbox{ and }\\Delta _{\\text{crit}}^{2}=-\\frac{C\\left(\\widetilde{A}+\\widetilde{B}\\right)}{2\\widetilde{A}}.$ If $-1 \\le \\frac{\\widetilde{A}-\\widetilde{B}}{\\widetilde{A}+\\widetilde{B}} < 1$ , then $\\Delta _{\\text{crit}}^{2}\\le \\Delta < \\Delta _{\\text{crit}}^{1}$ .", "If $\\frac{\\widetilde{A}-\\widetilde{B}}{\\widetilde{A}+\\widetilde{B}} < -1$ , then $\\Delta < \\text{min}\\left\\lbrace \\Delta _{\\text{crit}}^{1},\\Delta _{\\text{crit}}^{2}\\right\\rbrace $ .", "The regions, in which the reduced system (REF ) is well-defined, are plotted in the ($\\tilde{A}$ ,$\\tilde{B}$ )-plane in Fig.", "REF .", "To study the reduced system of Eqs.", "(), (), we restrict the phase 
difference $\\theta $ in the interval $0\\le \\theta < \\pi $ and obtain two equilibrium points, namely $(\\theta _j,\\Delta _j)$ , $j=1,2$ , where $\\theta _1=0\\mbox{ or } \\pi /2\\mbox{ and }\\theta _2= \\pi /2.$ Particularly, there are two cases with respect to $\\tilde{B}$ .", "The first one is when $\\widetilde{B}>0$ , in which case $\\theta _1=0$ or $\\theta _2=\\pi /2$ , and the second when $\\widetilde{B}<0$ , in which case $\\theta _1=\\theta _2=\\pi /2$ .", "Then, $\\Delta _{j}&={\\frac{ \\left( 6\\,{\\widetilde{A}}^{2}C-3\\,\\widetilde{A}\\widetilde{B}C-(-1)^j\\sqrt{1+6\\,\\widetilde{A}\\left( \\widetilde{A}+\\widetilde{B} \\right) C}-1 \\right) \\left( \\widetilde{A}+\\widetilde{B} \\right) }{18{\\widetilde{A}}^{2}\\widetilde{B}}} .$ The stability of the equilibrium points is determined by the eigenvalues of the Jacobian matrix of Eqs.", "(), (), evaluated at the equilibrium points, i.e., by $\\lambda ^{(j)}_{1,2} &= \\pm \\frac{\\sqrt{-3-18{\\widetilde{A}}^{2}C-18\\widetilde{A}\\widetilde{B}C+6(-1)^j\\sqrt{1+6\\,\\widetilde{A}\\left( \\widetilde{A}+\\widetilde{B} \\right) C}}}{3}.", "$ From Eq.", "(REF ) it follows that the equilibrium points exist when $1+6\\,\\widetilde{A}\\left( \\widetilde{A}+\\widetilde{B} \\right) C \\ge 0.", "$ For the initial conditions (REF ), (), Eq.", "(REF ) becomes $1+12\\widetilde{A}\\widetilde{B}r_1^2 \\ge 0.", "\\nonumber $ Therefore, the threshold for the existence of the equilibrium is given by $1+12\\widetilde{A}\\widetilde{B}r_1^2= 0,\\nonumber $ which is the blue curve in Fig.", "REF .", "The dashed and solid lines represent the curve below and above the line $\\widetilde{A}+\\widetilde{B}=0$ , respectively.", "Comparing Eqs.", "(REF ) and (REF ), we conclude that when the equilibrium points exist, they are either a centre or a saddle node.", "Particularly, for the initial conditions in Eqs.", "(REF ), (), the thresholds for the eigenvalues that discriminate between a centre and a saddle node are $1+12\\widetilde{A}\\widetilde{B}r_1^2 &= 0, \\\\1-4\\widetilde{A}\\widetilde{B}r_1^2 &= 0.", "$ In Fig.", "(REF ), we plot $\\tilde{A}+\\tilde{B}=0$ , Eqs.", "(REF ) and () as the black dashed, blue and red curves, respectively.", "System (REF ) with parameter values above the red curve in Fig.", "(REF ) is bounded, with $\\Delta _{\\text{crit}}^1$ and $\\Delta _{\\text{crit}}^2$ being the upper and lower bounds, respectively.", "The two equilibria given in Eq.", "(REF ) are both centres, and are therefore stable.", "When the parameter values lie on the red curve, $\\Delta _2=\\Delta _{\\text{crit}}^2$ .", "Furthermore, if the parameter values are below the red curve and $\\widetilde{B}>0$ , system (REF ) is still bounded, but it only shares one equilibrium point $\\Delta _1$ with system (REF ), whereas $\\Delta _2$ does not belong to the well-defined region.", "Equation (REF ), on the other hand, is unbounded when $\\widetilde{B}<0$ .", "In this case, it either extends to $\\Delta \\rightarrow \\infty $ or $-\\infty $ and depending on the value of $\\frac{\\widetilde{A}-\\widetilde{B}}{\\widetilde{A}+\\widetilde{B}}$ , $\\Delta _1$ can be a centre and $\\Delta _2$ a saddle node in this region.", "Additionally, the system has only one equilibrium on the blue curve.", "The location of the equilibrium points $(\\theta _j,\\Delta _j)$ in Eqs.", "(REF ), (REF ) and their nature are shown in Fig.", "REF .", "We also plot the values of $\\Delta _{j}$ in Fig.", "REF .", "To better visualise $\\Delta _{2}$ as it approaches infinity when $\\widetilde{A}$ 
or $\\widetilde{B}$ approaches zero, we plot in Fig.", "REF (b) $\\tanh (\\Delta _{2}/100)$ instead of $\\Delta _{2}$ .", "Figure: Plot of (a) Δ 1 \\Delta _1 and (b) tanh(Δ 2 /100)\\tanh (\\Delta _2/100) as a function of A ˜\\widetilde{A} and B ˜\\widetilde{B}.", "In panel (a), the color bar denotes the values of Δ 1 \\Delta _1 and in panel (b), the values of tanh(Δ 2 /100)\\tanh (\\Delta _{2}/100).", "The black dashed, red and blue curves are discussed in the text and are the same with those in Fig.", ".In the following, we illustrate the phase portrait of the reduced system of Eqs.", "(), () for different percentages of variability $\\tau $ , which correspond to different values of $\\tilde{A}$ and $\\tilde{B}$ .", "When there is no variability (i.e., for $\\tau =0\\%$ ), the parameter values are $\\widetilde{A}=3.63$ and $\\widetilde{B}=0.91$ and the equilibrium points are $(\\theta _1,\\Delta _1)=(0,5.09)$ and $(\\theta _2,\\Delta _2)=(\\pi /2,4.34)$ .", "Both are stable and the phase space in this case is shown in Fig.", "REF (a).", "As we can see in panel (b) in Fig.", "REF , as $\\tau $ increases, $\\widetilde{B}$ decreases and becomes negative for $\\tau >\\tau _c$ .", "The parameter values for $\\tau =10\\%$ variability are $\\widetilde{A}=4.97$ and $\\widetilde{B}=0.05$ and the equilibrium points are $(\\theta _1,\\Delta _1)=(0,6.27)$ and $(\\theta _2,\\Delta _2)=(\\pi /2,4.08)$ .", "Similar to the previous case, both equilibrium points are stable and the phase space is shown in Fig.", "REF (b).", "Note that for the initial conditions (), we have that $\\lim _{\\widetilde{B} \\rightarrow 0} \\Delta _{\\text{crit}}^{1} &= r_1^2,\\quad \\lim _{\\widetilde{B} \\rightarrow 0} \\Delta _{\\text{crit}}^{2} = 0,$ which shows that $\\Delta $ becomes positive as we increase $\\tau $ .", "Indeed, $\\Delta >0$ corresponds to energy localization as the magnitude of $q_1$ remains larger than $q_2$ .", "Figure: Phase portraits of the reduced system of Eqs.", "(), () for (a) τ=0%\\tau =0\\% percentage of variability and (b) τ=10%\\tau =10\\% percentage of variability.As we can see in Fig.", "REF for $\\tau \\approx 10.0833\\%>\\tau _c$ , $\\widetilde{B}$ is negative ($\\widetilde{B}=-0.0015$ ) and the region in the $(\\Delta ,\\theta )$ -space becomes unbounded (see also Fig.", "REF ).", "It extends to either $\\Delta \\rightarrow \\infty $ or $-\\infty $ and depends on $\\widetilde{A}$ .", "In this case, the two equilibrium points are $(\\theta _1,\\Delta _1)=(\\pi /2,9.1274)$ , which is a (stable) center, and $(\\theta _2,\\Delta _2)=(\\pi /2,15.4383)$ , which is a (unstable) saddle point.", "The plot shows that in this case, one may obtain bounded solutions as well as unbounded ones, depending on the initial condition.", "For example, the initial condition of Eqs.", "(REF ) (or Eqs.", "(REF ), ()) results in $\\theta $ and $\\Delta $ values in the unbounded region in Fig.", "REF , where the trajectory is shown as the blue curve and starts at the bottom of the plot.", "Figure: The same as Fig.", ", where the parameter values are A ˜=5.3932\\widetilde{A}=5.3932 and B ˜=-0.0015\\widetilde{B}=-0.0015, which correspond to τ≈10.0833%>τ c \\tau \\approx 10.0833\\%>\\tau _c.", "The blue curve is the trajectory of the initial condition in Eqs.", "(), ()." 
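The equilibrium values and their stability quoted above can be checked directly from the closed-form expressions for $\Delta _j$ and $\lambda ^{(j)}_{1,2}$ . The following sketch (ours) does this for the three parameter sets used in the phase portraits, with the constant of motion evaluated on the initial condition $q_1(0)=\sqrt{(N+1)/2}/2$ , $q_2(0)=0$ , for which $C = 2\widetilde{B}r_1^2/(\widetilde{A}+\widetilde{B})$ .

```python
# Minimal sketch: equilibria Delta_{1,2} and the sign of lambda^2, which separates
# centres (lambda^2 < 0) from saddles (lambda^2 > 0).
import numpy as np

def equilibria(At, Bt, r1sq):
    C = 2.0 * Bt * r1sq / (At + Bt)                    # constant of motion on the IC
    disc = 1.0 + 6.0 * At * (At + Bt) * C              # existence requires disc >= 0
    out = []
    for j in (1, 2):
        num = 6.0 * At**2 * C - 3.0 * At * Bt * C - (-1)**j * np.sqrt(disc) - 1.0
        Delta = num * (At + Bt) / (18.0 * At**2 * Bt)
        lam2 = (-3.0 - 18.0 * At**2 * C - 18.0 * At * Bt * C
                + 6.0 * (-1)**j * np.sqrt(disc)) / 9.0
        out.append((round(float(Delta), 3), "centre" if lam2 < 0 else "saddle"))
    return out

r1sq = (64 + 1) / 8.0                                  # r_1^2 = (N+1)/8 for N = 64
for At, Bt in [(3.63, 0.91), (4.97, 0.05), (5.3932, -0.0015)]:
    print(At, Bt, equilibria(At, Bt, r1sq))
# Up to the rounding of the quoted (A~, B~), this reproduces two centres near
# Delta ~ 5.09 and 4.34 for tau = 0%, two centres near 6.27 and 4.08 for tau = 10%,
# and a centre/saddle pair for tau > tau_c.
```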
], [ "Chaotic behavior", "Energy recurrences arise in the homogeneous FPUT lattice (REF ) when the system remains in the quasi-stationary state for an extremely long time, making the approach to equipartition of energy unobservable.", "In the quasi-stationary state, the FPUT lattice can be viewed as the perturbation of the regular, integrable Toda lattice [26].", "Here we study the effect of variability on the chaotic properties of system (REF ).", "Particularly, we consider lattices of $N=4,8,16,32,64$ particles in systems (REF ) (homogeneous, no variability) and (REF ) (with variability) and use the maximum Lyapunov exponent (mLE) [21] and Smaller Alignment Index (SALI) [22], [23] to discriminate between regular and chaotic dynamics.", "We want to see if energy localization in the first normal mode that we observed in Secs.", "and for $\\tau =10\\%<\\tau _c$ corresponds to chaotic dynamics, by increasing $\\tau $ from 0 to $10\\%$ .", "To compute mLE, we follow the evolution of a trajectory starting at the initial point $\\mathbf {x}(0)=(q_1(0),\\ldots ,q_N(0),p_1(0),\\ldots ,p_N(0)),$ that evolves according to Hamilton's equations of motion $\\dot{\\mathbf {x}}=\\mathbf {f(x)}=\\left[ \\frac{\\partial {H}}{\\partial {\\mathbf {p}}} \\quad -\\frac{\\partial {H}}{\\partial {\\mathbf {q}}} \\right]^T,\\nonumber $ and the evolution of a deviation vector $\\mathbf {w}(0)=(\\delta q_1(0),\\ldots ,\\delta q_N(0),\\delta p_1(0),\\ldots ,\\delta p_N(0)),$ that evolves according to the variational equation $\\dot{\\mathbf {w}}& = \\frac{\\partial {\\mathbf {f}}}{\\partial {\\mathbf {x}}}(\\mathbf {x}(t)) \\cdot \\mathbf {w}.$ Then mLE is defined as $\\lambda = \\lim _{t \\rightarrow \\infty }\\frac{1}{t}\\ln \\frac{||\\mathbf {w}(t)||}{||\\mathbf {w}(0)||}, \\nonumber $ where $\\ln $ is the natural logarithm.", "If mLE converges to zero following the law $1/t$ , then the trajectory is regular, whereas if it converges to a positive value in time, then the trajectory is chaotic [31].", "Hence it is convenient to plot mLE in $\\log _{10}$$-\\log _{10}$ scales as the law $1/t$ becomes then a line with negative slope and serves as a guide to the eye.", "To compute SALI, we follow the evolution of the same initial condition and two deviation vectors $\\mathbf {w}_1(0)$ , $\\mathbf {w}_2(0)$ .", "Then, SALI is defined by $\\textrm {SALI}(t)=\\min \\lbrace \\left\\Vert \\hat{\\mathbf {w}}_1(t) - \\hat{\\mathbf {w}}_2(t)\\right\\Vert ,\\left\\Vert \\hat{\\mathbf {w}}_1(t) + \\hat{\\mathbf {w}}_2(t)\\right\\Vert \\rbrace , $ where $\\hat{\\mathbf {w}}_i(t)= \\frac{\\mathbf {w}_i(t)}{\\left\\Vert \\mathbf {w}_i(t)\\right\\Vert },\\;i=1,2$ , are the two normalized deviation vectors at time $t$ .", "SALI approaches zero exponentially fast in time (as a function of the largest or 2 largest Lyapunov exponents) for chaotic trajectories and non-zero, positive, values for regular trajectories [23].", "First, we consider the case without variability, that is the FPUT$-\\alpha $ system (REF ).", "We integrate the equations of motion (REF ) and its corresponding variational equations (following Eq.", "(REF )) by using the tangent-map method [32] and Yoshida's fourth order symplectic integrator [33].", "We have found that a time step of $0.01$ keeps the relative energy error below $10^{-9}$ .", "In all our computations, the final integration time is $t=10^8$ .", "Here, we use the same initial condition in Eq.", "(REF ) for all $N$ .", "This initial condition then results in different energies for different $N$ , i.e., 
$E=0.4775$ for $N=4$ , $E=0.2714$ for $N=8$ , $E=0.1447$ for $N=16$ , $E=0.0747$ for $N=32$ , and $E=0.0379$ for $N=64$ .", "Our results in Fig.", "REF show that all trajectories for $N=4,8,16,32,64$ are regular up to $t=10^8$ , corroborated by the tendency of the mLEs to converge to zero following the $1/t$ law and SALI to tend to fixed positive values, shown in panels (a) and (b), respectively.", "These results are in agreement with the fact that energy recurrences in the homogeneous FPUT lattice (REF ) arise when it remains in the quasi-stationary state for extremely long times, making the approach to equipartition of energy unobservable.", "Figure: Plot of mLE (panel a)) and SALI (panel b)) in time for a range of NN values seen in the insets (denoted by different colors) in the absence of variability, i.e., of the FPUT system ().", "Note that all axes are logarithmic.", "The black dashed line in panel (a) is the law 1/t1/t of regular trajectories to guide the eye.Finally, we look at the case of $\\tau =10\\%<\\tau _c$ , for which we have observed almost energy localization in the first normal mode in Sec.", ".", "Since in this case we only know the equations of motion (REF ), we integrated them using the DOP853 integrator [34], an explicit Runge-Kutta method of order 8 due to Dormand and Prince, to achieve good numerical accuracy.", "We compute the chaotic indicators for 30 realisations of the same percentage of variability $\\tau =10\\%$ , while keeping the initial conditions fixed for each number of particles $N$ .", "For $N=4$ and 8, all trajectories in panels (a)-(d) in Fig.", "REF appear to be regular up to final integration time $t=10^8$ , corroborated by the tendency of the mLEs to converge to zero following the $1/t$ law and SALI to tend to fixed positive values.", "However, for $N=16$ , two of the 30 trajectories in panels (e), (f) in Fig.", "REF are chaotic as their mLEs converge to positive values at $t=10^8$ and their SALI decrease to zero exponentially fast.", "Figure REF shows that there are more chaotic orbits than those for the smaller values of $N$ in Fig.", "REF .", "We show the percentage of chaotic trajectories (out of the 30 realisations) as a function of $N$ in Fig.", "REF , where the increase from $N=4,8,16$ to $N=32,64$ is apparent.", "These results suggest that in the case of almost complete energy localization, variability promotes chaos in the system as the number of particles increases.", "However, further studies are required to determine whether the increase is monotone.", "Figure: Plot of mLE (panels (a), (c), (e)) and SALI (panels (b), (d), (f)) in time for 30 trajectories (denoted by different colors) and τ=10%\\tau =10\\% (see Eq.", "()).", "Panels (a), (b) are for N=4N=4, panels (c), (d) for N=8N=8 and panels (e), (f) for N=16N=16.", "Note that all axes are logarithmic.", "The black dashed lines in panels (a), (c), (e) are the law 1/t1/t of regular trajectories to guide the eye.Figure: Plot of mLE (panels (a), (c)) and SALI (panels (b), (d)) in time for 30 trajectories (denoted by different colors) and τ=10%\\tau =10\\% (see Eq.", "()).", "Panels (a), (b) are for N=32N=32 and panels (c), (d) for N=64N=64.", "Note that all axes are logarithmic.", "The black dashed lines in panels (a), (c) are the law 1/t1/t of regular trajectories to guide the eye.Figure: Percentage of chaotic trajectories as a function of NN for 30 realisations of variability with τ=10%\\tau =10\\%.", "The black-dash line segments connect the black points and are there to guide the 
eye." ], [ "Conclusions and discussion", "In this paper, we have considered a disordered FPUT$-\\alpha $ system with variations in its parameters (also called variability) to take into account inherent manufacturing processes.", "By using a two normal-mode approximation, we have been able to explain the mechanism for energy localization and blow up of solutions for percentage of variability bigger than a threshold, that we have been able to compute using our theory.", "Moreover, we have also studied the effect of variability in the chaotic behavior of the system calculating the maximum Lyapunov exponent and Smaller Alignment Index for a number of realizations for the same variability percentage that corresponds to energy-localization.", "We have found that, when there is almost energy localization, it is more frequent for the trajectories to be chaotic with the increase of the number of particles $N$ for the same percentage of variability, smaller than the threshold.", "Finally, while it has been shown previously that variability leads to energy-recurrence breakdown and energy localization, we have also shown here that by increasing the percentage of variability beyond a threshold that we determined using our theory, the solutions of the system may blow up in finite-time.", "This is because we have started with the equations of motion without a Hamiltonian that would allow us to keep the energy of the system constant [18].", "The case of the Hamiltonian model with heterogeneity, cf.", "Eq.", "(REF ), will be studied in a future publication." ], [ "Credit authorship contribution statement", "Zulkarnain: Investigation, Writing – Original Draft.", "H. Susanto: Conceptualization, Supervision, Methodology, Writing – review & editing.", "C. Antonopoulos: Conceptualization, Supervision, Methodology, Writing – review & editing.", "The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.", "Z is supported by the Ministry of Education, Culture, Research, and Technology of Indonesia through a PhD scholarship (BPPLN).", "HS is supported by Khalifa University through a Faculty Start-Up Grant (No.", "8474000351/FSU-2021-011) and a Competitive Internal Research Awards Grant (No.", "8474000413/CIRA-2021-065).", "The authors acknowledge the use of the High Performance Computing Facility (Ceres) and its associated support services at the University of Essex in the completion of this work.", "The authors are also grateful to the referees for their comments and feedback that improved the manuscript." ] ]
2212.05644
[ [ "Normal forms of $\\mathbb Z$-graded $Q$-manifolds" ], [ "Abstract Following recent results of A.K.", "and V.S.", "on $\\mathbb Z$-graded manifolds, we give several local and global normal forms results for $Q$-structures on those, i.e.", "for differential graded manifolds.", "In particular, we explain in which sense their relevant structures are concentrated along the zero-locus of their curvatures, especially when the negative part is of Koszul--Tate type.", "We also give a local splitting theorem." ], [ "Introduction", "This article is the sequel of [9], that studied normal forms of ${\\mathbb {Z}}$ -graded manifolds and where the analogue of the Batchelor's theorem has been proven.", "We now equip a ${\\mathbb {Z}}$ -graded manifold with a degree $+1 $ self-commuting vector field $Q$ , thus making it a differential graded (DG) manifold, also called $Q$ -manifold.", "The purpose of this paper is to provide several normal form type results in this setting.", "The paper is organized as follows.", "In Section , we give some precise definitions and fix some usual notations related to graded manifolds.", "Then we proceed with the description of projective systems of algebras (recapitulated in Appendix ), which we specialize to the ${\\mathbb {Z}}$ -graded structure sheaves.", "Section is devoted to the idea that “outside the zero locus of their curvatures, ($\\mathbb {Z}^*$ -graded) $Q$ -manifolds can be made trivial”.", "A more precise statement is that on any open subset where the curvature is different from zero at all points, the dual $\\mathbb {Z}^* $ -graded Lie $\\infty $ -algebroid can be chosen to have all $k$ -ary bracket equal to zero, except for the 0-ary bracket, given by the nowhere vanishing curvature.", "In section we first recall the standard notion of Koszul–Tate resolution, which are examples of negatively graded $Q$ -manifolds.", "Then we construct two structures on the zero locus $\\lbrace \\kappa =0\\rbrace $ of a $Q$ -manifold that are independent of a choice of a splitting: a positively graded $Q$ -structure on the zero locus $\\lbrace \\kappa =0\\rbrace $ and a negatively graded $Q$ -manifold.", "We eventually show that $Q$ -manifolds whose negative part is of Koszul-Tate type are entirely encoded by this positively graded $Q$ -structure on the zero locus.", "Last, in section we choose a point in the zero locus, (on which leaves of the anchor map are well defined) and give a splitting theorem: near a leaf $L$ in the zero locus, a $Q$ -manifold is the direct product of the standard $T[1]L$ and a transverse $Q$ -manifold.", "In the process, we also give some counter-examples to “naive beliefs” about the anchor maps of a $Q$ -manifold.", "We conclude by mentioning some perspectives and potential applications." 
], [ "${\\mathbb {Z}}^{*}$ -graded manifolds", "Let us give (recall) the definition of ${\\mathbb {Z}}^*$ -graded manifolds.", "We start with an important definition: a filtration which is used throughout the paper.", "Definition 1.1 Let ${\\mathcal {O}}= \\oplus _{j\\in {\\mathbb {Z}}} {\\mathcal {O}}_j$ be a $\\mathbb {Z}$ -graded commutative algebra.", "We call negative filtration the filtration $ {\\mathcal {O}}= F^0{\\mathcal {O}}\\supset {\\mathcal {I}}_{-}=F^1{\\mathcal {O}}\\supset \\dots \\supset F^i {\\mathcal {O}}\\supset \\dots .$ defined by $F^i {\\mathcal {O}}= \\mathcal {I}^{-i}_-$ for all $i \\in {\\mathbb {Z}}_{\\ge 0} $ , where $\\mathcal {I}^{-i}_-$ is generated by $\\oplus _{j \\le -i} {\\mathcal {O}}_j$ .", "This filtration allows to define the graded manifolds we are interested in.", "Definition 1.2 ([9]) A ${\\mathbb {Z}}^*$ -graded manifold is a pair $M=(M_0,{\\mathcal {O}})$ , where $M_0$ is a smooth manifold (referred to as base manifold) and $ {\\mathcal {O}}= \\oplus _{i\\in {\\mathbb {Z}}} {\\mathcal {O}}_i$ is a sheaf of ${\\mathbb {Z}}$ -graded commutative algebras (whose sections are referred to as functions), such that each point of $M_0$ has a neighborhood $U \\subset M_0$ over which $\\mathcal {O}(U) $ is isomorphic to $ \\Gamma (\\tilde{S}(\\oplus _{i \\in {\\mathbb {Z}}^*} V_i) )$ , where each $V_i$ is a graded vector bundle of degree $i$ , and “$ \\tilde{S} $ ” stands for the completion of $\\Gamma (S(\\oplus _{i \\in {\\mathbb {Z}}^*} V_i))$ with respect to the negative filtration.", "Remark 1.3 This definition is explained in details in [9]: for this paper to be self-consistent and for further use, we recollect some necessary facts about filtrations and their completions in what follows and in Appendix .", "Note that Definition REF potentially allows a function to be a sum of infinitely many terms, which is also explained in [9].", "$\\square $ Remark 1.4 We write “${\\mathbb {Z}}^*$ -graded” instead of “${\\mathbb {Z}}$ -graded” to insist on the natural assumptions that there are no generators of ${\\mathcal {O}}$ of degree 0 which is not a coordinate function on $M_0$ .", "For instance, Kapranov dg-manifolds [13], [14] are not ${\\mathbb {Z}}^*$ -graded manifolds, the difference is in conventions though.", "For ${\\mathbb {Z}}^*$ -graded manifolds, in contrast to the ${\\mathbb {Z}}_{\\ge 1}$ -graded or $(-{\\mathbb {Z}}_{\\ge 1})$ -graded cases, we do not have an isomorphism $C^\\infty (M_0) \\simeq {\\mathcal {O}}_0 $ .", "For instance, the product of a function in ${\\mathcal {O}}_p$ with a function in ${\\mathcal {O}}_{-p}$ may very well produce a non-zero function: it then belongs to ${\\mathcal {O}}_0$ but can not be considered as an element in $C^\\infty (M_0)$ .", "There is even no canonical inclusion $C^\\infty (M_0) \\hookrightarrow {\\mathcal {O}}_0 $ , but there is a natural projection ${\\mathcal {O}}_0 \\rightarrow C^\\infty (M_0)$ , which corresponds to the inclusion $M_0\\hookrightarrow M$ .", "$\\square $ Remark 1.5 A ${\\mathbb {Z}}^*$ -graded manifold is complete with respect to the topology on $\\mathcal {O} $ given by the negative filtration, see [9].", "$\\square $ Remark 1.6 The negative filtration of ${\\mathcal {O}}$ is compatible with the negative filtration of the symmetric algebras that appear in Definition REF .", "Notice that elements in $F^i {\\mathcal {O}}$ may be of any degree, although its generators have degree less or equal to $-i$ .", "Also, notice that $\\cap _{i \\ge 0} F^i {\\mathcal {O}}= \\lbrace 0\\rbrace 
$ .", "$\\square $ According to [9], there are natural sheaves of graded ideals in ${\\mathcal {O}}$ : the ideal $\\mathcal {I}_+ $ generated by $\\oplus _{i \\ge 1} {\\mathcal {O}}_i $ .", "the ideal $\\mathcal {I}_- =F^1 {\\mathcal {O}}$ generated by $\\oplus _{i \\le -1} {\\mathcal {O}}_i $ .", "The ideal $\\mathcal {I} =\\mathcal {I}_+ +\\mathcal {I}_- $ .", "Let us consider the quotient of ${\\mathcal {O}}$ by these three ideals: The quotient $(M_0, {\\mathcal {O}}/\\mathcal {I}_+)$ is a graded manifold with grading now ranging from 0 to $-\\infty $ that we call the negative part of $(M_0,\\mathcal {O}) $.", "The quotient $(M_0, {\\mathcal {O}}/\\mathcal {I}_-)$ is a graded manifold with grading now ranging from 0 to $+\\infty $ that we call the positive part of $(M_0,\\mathcal {O}) $.", "The quotient $ (M_0,{\\mathcal {O}}/\\mathcal {I})$ is simply the smooth manifold $M_0$ with its sheaf $C^{\\infty }(M_0)$ of smooth functions (and, in particular, is concentrated in degree 0).", "For $i \\ge 0$ and $ j \\in \\mathbb {Z}$ , we denote by $(F^i {\\mathcal {O}})_j $ elements of $F^i {\\mathcal {O}}$ of degree $j$ .", "Then, to a graded manifold $M=(M_0,{\\mathcal {O}})$ , one can associate (canonically) a family $(E_i)_{i \\in {\\mathbb {Z}}^*} $ of vector bundles over $M_0$ , as follows.", "The quotient space $ \\frac{\\mathcal {I}}{\\mathcal {I}^2} = \\bigoplus _{i \\in {\\mathbb {Z}}^*} \\left(\\frac{\\mathcal {I}}{ \\mathcal {I}^2}\\right)_{i}$ is a direct sum of projective $C^\\infty (M_0)$ -modules, hence by Serre–Swan theorem, there exists for all $i \\in {\\mathbb {Z}}^* $ a vector bundle $E_{i}$ such that $\\Gamma (E^*)_{i} \\simeq (\\mathcal {I})_{i} / (\\mathcal {I}^2)_{i} $ .", "We call $E_\\bullet :=\\oplus _{i \\in {\\mathbb {Z}}^*} E_i $ the canonical graded vector bundle of $(M_0,{\\mathcal {O}})$.", "Theorem 1.7 (Batchelor's theorem, [9] – Sections 3.3 and 4.2) Let $(M_0,{\\mathcal {O}})$ be a $\\mathbb {Z}^*$ -graded manifold with canonical $\\mathbb {Z}^*$ -graded bundle $ E_\\bullet $ .", "There exists an isomorphism of sheaves (called splitting ): ${\\mathcal {O}}\\simeq \\Gamma \\left(\\tilde{S} (\\oplus _{i\\in {\\mathbb {Z}}^*}E^*_i) \\right).$ Here, $\\tilde{S}$ again refers to the completion of $\\Gamma \\left(S(\\oplus _{i\\in {\\mathbb {Z}}^*}E^*_i)\\right)$ with respect to the its negative filtration as in Definition REF .", "Remark 1.8 Notice that for every splitting, sections of $E_{-i}^* \\equiv (E_{-i})^* = (E^*)_{i} $ become functions of degree $+i$ in ${\\mathcal {O}}$ .", "$\\square $ Remark 1.9 Although Batchelor's theorem claims that splitting exists, there is no canonical splitting in general.", "In contrast, the vector bundles $(E_i)_{i \\in \\mathbb {Z}^*} $ defined above are canonical.", "$\\square $ Once a splitting is chosen, many different notions of “degree” can be defined, beside the degree that ${\\mathcal {O}}$ is equipped with by definition.", "More precisely, for a section $\\alpha \\in \\Gamma (E^*)_i $ , let us define three different degrees as follows: $ {\\mathrm {deg}}(\\alpha ) = i ,\\hspace{5.69046pt}{\\mathrm {pol}}(\\alpha ) = 1 , \\hspace{5.69046pt} {\\mathrm {deg}}_+(\\alpha )= \\left\\lbrace \\begin{tabular}{rr}i&\\hbox{ for $i\\ge 1$} \\\\ 0& \\hbox{ otherwise,} \\\\\\end{tabular}\\right.", "\\hspace{5.69046pt}, \\hspace{5.69046pt}{\\mathrm {deg}}_-(\\alpha )=\\left\\lbrace \\begin{tabular}{rr}-i& \\hbox{ for $i\\le 1$}\\\\ 0 & \\hbox{ otherwise.}", "\\\\\\end{tabular}\\right.$ Then these degrees extend by multiplicativity 
to $\\Gamma (S(\\oplus _{i \\in \\mathbb {Z}^*} E_{i}^*)) $ .", "To avoid confusion, the degree ${\\mathrm {deg}}$ will be called the total degree, sometimes referred to as the ghost degree.", "It coincides with the degree that $\\mathcal {O}$ is initially equipped with.", "This degree is responsible for all the commutation relations, i.e.", "the Koszul sign rule is defined by its reduction modulo 2.", "(We make this assumption for simplicity of the presentation in this paper, but the constructions work for a more general convention on the relation of the total degree and the (super) parity.)", "The degree ${\\mathrm {deg}}_-$ (resp.", "${\\mathrm {deg}}_+$ ) is called the negative degree (resp.", "positive degree) and plays an important role.", "Also, ${\\mathrm {deg}}={\\mathrm {deg}}_+ - {\\mathrm {deg}}_-.", "$ Last, ${\\mathrm {pol}}$ is the polynomial degree (sometimes referred to as arity) that counts the number of sections in a product.", "Example 1.10 Concretely, for a section of $E_{-5}^* \\odot E_{4}^* \\odot E_{7}^* $ the total degree or ghost degree is $5-4-7=-6$ ; the negative degree is $4+7= +11 $ ; the positive degree is $+5 $ ; the polynomial degree is 3 (it is the product of three sections).", "Remark 1.11 The negative degree is compatible with the filtration $F^i{\\mathcal {O}}$ introduced above in the sense that $ F^i{\\mathcal {O}}=\\lbrace F \\in {\\mathcal {O}}\\, | \\, {\\mathrm {deg}}_-(F) \\ge i\\rbrace $ .", "$\\square $" ], [ "$Q$ -manifolds", "Let us now define $Q$ -manifolds, that is, equip a ${\\mathbb {Z}}^*$ -graded manifold with a differential structure.", "Definition 1.12 A vector field of degree $k$ on a ${\\mathbb {Z}}^*$ -graded manifold $(M,{\\mathcal {O}})$ is a degree $k$ derivation of $ \\mathcal {O}$ .", "The space of vector fields of degree $k$ will be denoted by $\\mathfrak {X}_k({\\mathcal {O}})$ .", "The graded vector space of all vector fields: $ \\mathfrak {X}_\\bullet ({\\mathcal {O}}) = \\bigoplus \\limits _{k \\in \\mathbb {Z}} \\mathfrak {X}_k ({\\mathcal {O}}), $ forms a graded Lie algebra when equipped with the graded commutator $[\\cdot , \\cdot ] $ .", "Definition 1.13 A ${\\mathbb {Z}}^*$ -graded $Q$ -manifold is a triple $(M_0,{\\mathcal {O}},Q) $ , with $M=(M_0,{\\mathcal {O}})$ a ${\\mathbb {Z}}^*$ -graded manifold and $Q$ a degree $+1$ vector field which satisfies $[Q,Q]=0$ .", "Since the degree of $Q $ is $+1$ , we have $Q[\\mathcal {I}_+ ] \\subset \\mathcal {I}_+ $ , so that $Q$ induces a degree $+1$ derivation $ Q^-$ of the quotient $ {\\mathcal {O}}/\\mathcal {I}_+$ , which is by definition the sheaf of functions of the negative part of $(M_0,{\\mathcal {O}})$ .", "This allows the following definition.", "Definition 1.14 We call the $Q$ -manifold $(M_0,{\\mathcal {O}}/\\mathcal {I}_+,Q^-)$ the negative part of the $Q$ -manifold $(M_0,{\\mathcal {O}},Q) $ .", "Remark 1.15 The vector field $Q^-$ is $C^\\infty (M_0)$ -linear, i.e.", "it is a vertical vector field.", "$\\square $" ], [ "An algebraic generalization: $Q$ -varieties over a commutative algebra", "Let $\\mathcal {A} $ be a unital commutative algebra (that may be thought of as the functions over an affine variety $X_0 $ , for instance).", "Definition REF admits a generalization: a differential graded commutative algebra $\\mathcal {O} $ such that there exist finitely generated projective $\\mathcal {A}$ -modules $(\\mathcal {V}_i)_{i \\in \\mathbb {N}} $ and a graded algebra isomorphism: $ \\mathcal {O} \\simeq S_{\\mathcal {A}} (\\oplus _{i \\ge 1} \\mathcal {V}_i).", "$ In particular, the 
following object will be important.", "Definition 1.16 Let $I \\subset C^\\infty (M_0)$ be an ideal.", "A positively graded variety (resp.", "$Q$ -variety) over $C^\\infty (M_0)/I $ is a positively graded commutative algebra $\\mathcal {K}_+ $ (resp.", "positively graded commutative differential algebra $(\\mathcal {K}_+,Q_+)$ ) that admits a splitting, i.e.", "an isomorphism $\\mathcal {K}_+ \\simeq \\Gamma _I(S(\\oplus _{i \\ge 1} E_{-i}^*)) $ for a family of vector bundles $(E_{-i})_{i \\ge 1} $ over $M_0$ .", "Here, for any vector bundle $E \\rightarrow M_0$ , $\\Gamma _I(E) :=\\Gamma (E) \\otimes _{C^\\infty (M_0)} C^\\infty (M_0)/I.", "$ Remark 1.17 There is no need to take completions in the definition above since every function of a given degree is necessarily polynomial with respect to non-zero degree variables." ], [ "Duality: $Q$ -manifolds $\\leftrightarrow $ Lie $\\infty $ -algebroids", "Let $(M_0,{\\mathcal {O}},Q) $ be a $Q$ -manifold.", "Once a splitting ${\\mathcal {O}}\\simeq \\Gamma (\\tilde{S}(\\oplus _{i \\in {\\mathbb {Z}}^*} E_{i}^*)) $ is given, $Q $ can be dualized to a Lie $\\infty $ -algebroid, defined as follows.", "Definition 1.18 [17], [3] A ${\\mathbb {Z}}^*$ -graded Lie $\\infty $ -algebroid structure on a $\\mathbb {Z}^*$ -graded vector bundle is the data of: families indexed by $n \\ge 1$ of vector bundle morphisms $ \\rho _n\\colon S^n( \\oplus _{i \\in {\\mathbb {Z}}^*} E_{i})_{-1} \\longrightarrow TM_0$ called $n$ -anchor maps, families of degree $+1$ maps: $ \\ell _n \\colon S_\\mathbb {R}^n \\left(\\Gamma ( \\oplus _{i \\in {\\mathbb {Z}}^*} E_{i})\\right)_k \\longrightarrow \\Gamma (E_{k+1}) $ called $n$ -brackets, together with a section $\\kappa \\in \\Gamma (E_{+1}) $ called the curvature, that satisfy the higher Jacobi and higher Leibniz identities (see e.g.", "[16]).", "Remark 1.19 It is not easy to attach a single name to the following proposition, which is based on an observation by Pavol Ševera [18], was spelled out in the negative degree case in [2], and can be proven using Theodore Voronov's derived brackets in [17].", "$\\square $ Proposition 1.20 There is a one-to-one correspondence between ${\\mathbb {Z}}^*$ -graded Lie $\\infty $ -algebroid structures on $\\oplus _{i \\in {\\mathbb {Z}}}E_i \\rightarrow M_0 $ and $Q$ -manifold structures with sheaf of functions $\\Gamma (\\tilde{S} (\\oplus _{i \\in {\\mathbb {Z}}^*}E_i^*) )$ ."
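To make the correspondence of Proposition 1.20 concrete in its simplest instance, the following Python sketch (ours, for illustration only) takes the base $M_0$ to be a point and places a Lie algebra in degree $-1$ , so that the dual generators $\\xi ^a$ are degree $+1$ functions and the dual $Q$ -structure is the Chevalley-Eilenberg differential $Q(\\xi ^a)=-\\tfrac{1}{2}f^a_{bc}\\,\\xi ^b\\xi ^c$ ; the script checks $[Q,Q]=0$ for $\\mathfrak {sl}(2)$. All names and data structures in the code are our own choices.

BASIS = ("h", "e", "f")                      # sl(2): [h,e] = 2e, [h,f] = -2f, [e,f] = h
ORDER = {g: i for i, g in enumerate(BASIS)}
F = {("h", "e"): {"e": 2}, ("e", "h"): {"e": -2},
     ("h", "f"): {"f": -2}, ("f", "h"): {"f": 2},
     ("e", "f"): {"h": 1},  ("f", "e"): {"h": -1}}   # structure constants f^a_{bc}

def wedge(m1, m2):
    """Product of two monomials in the odd generators xi (tuples of names);
    returns (Koszul sign, sorted monomial) or None if a generator repeats."""
    if set(m1) & set(m2):
        return None
    merged, sign = list(m1) + list(m2), 1
    for i in range(len(merged)):             # bubble sort: one sign flip per transposition
        for j in range(len(merged) - 1):
            if ORDER[merged[j]] > ORDER[merged[j + 1]]:
                merged[j], merged[j + 1] = merged[j + 1], merged[j]
                sign = -sign
    return sign, tuple(merged)

def add_term(poly, mono, coeff):
    poly[mono] = poly.get(mono, 0) + coeff
    if poly[mono] == 0:
        del poly[mono]

def Q(poly):
    """The odd, degree +1 derivation with Q(xi^a) = -1/2 f^a_{bc} xi^b xi^c,
    extended to polynomials by the graded Leibniz rule (Koszul sign (-1)^i)."""
    out = {}
    for mono, coeff in poly.items():
        for i, a in enumerate(mono):
            left, right = mono[:i], mono[i + 1:]
            for (b, c), image in F.items():  # keys of F always have b != c
                fa = image.get(a, 0)
                if not fa:
                    continue
                s0, m0 = wedge((b,), (c,))   # the monomial xi^b xi^c with its sign
                p1 = wedge(left, m0)
                if p1 is None:
                    continue
                s1, m1 = p1
                p2 = wedge(m1, right)
                if p2 is None:
                    continue
                s2, m2 = p2
                add_term(out, m2, -0.5 * coeff * fa * s0 * s1 * s2 * (-1) ** i)
    return out

# [Q, Q] = 0 <=> Q^2 = 0: for xi^h two cubic terms must cancel, which tests the signs.
for g in BASIS:
    assert Q(Q({(g,): 1})) == {}
print("Q^2 = 0: the Chevalley-Eilenberg differential of sl(2) defines a Q-structure")

In this toy case the curvature and the anchors vanish and the only non-trivial datum is the 2-bracket; the general dictionary of Proposition 1.20 works in the same spirit, degree by degree.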
], [ "Projective systems associated to graded manifolds", "In this section, we give a precise sense to the notion of the flow of a degree 0 vector field on a graded manifold.", "For the standard definitions of projective systems the reader is referred to Appendix , while now we specialize the Proposition REF from there to the context we are interested in.", "Let $(M_0,{\\mathcal {O}}) $ be a $\\mathbb {Z}^*$ -graded $Q$ -manifold over $M_0$ with the sheaf of functions ${\\mathcal {O}}$ .", "This sheaf of functions comes equipped with the (negative) filtration as in Definition REF , so that $ A^{i}:= {\\mathcal {O}}/ F^i {\\mathcal {O}}$ is a projective system of algebras.", "Since $ \\cap _{i \\in {\\mathbb {N}}} F^i {\\mathcal {O}}= \\lbrace 0\\rbrace $ , its projective limit ${A^\\infty }$ is canonically isomorphic to $ {\\mathcal {O}}$ .", "If a degree 0 vector field $\\mathpzc {v}$ such that $\\mathpzc {v}[{\\mathcal {O}}] \\subset F^n {\\mathcal {O}}$ for some $n \\ge 1 $ is given, then for every $i\\in {\\mathbb {N}}$ , the family of endomorphisms $ \\begin{array}{rcl}{\\mathcal {O}}/ F^i {\\mathcal {O}}&\\rightarrow & {\\mathcal {O}}/ F^i {\\mathcal {O}}\\\\ f& \\mapsto & \\sum \\limits _{k \\ge 0} \\frac{t^k}{k!}", "\\mathpzc {v}^k [f] \\end{array}$ is well-defined because the sum is finite, it is an algebra endomorphism for all $ i\\in {\\mathbb {N}}$ , and is a morphism of projective systems of algebras.", "We denote its projective limit by $ e^{t\\mathpzc {v}}$ .", "By construction, for all $s,t \\in {\\mathbb {R}}$ we have $ e^{s\\mathpzc {v}} e^{t\\mathpzc {v}}= e^{(s+t)\\mathpzc {v}} $ and $e^{0\\mathpzc {v}}=\\mathrm {Id}_{{\\mathcal {O}}} $ .", "As a consequence $e^{t\\mathpzc {v}}$ is a diffeomorphism of the graded manifold $(M_0, {\\mathcal {O}})$ .", "Proposition 1.21 Given a family $(\\mathpzc {v}_n)_{n \\in {\\mathbb {N}}}$ of degree zero vector fields on a graded manifold $(M_0,{\\mathcal {O}}) $ such that $ \\mathpzc {v}_n \\colon {\\mathcal {O}}\\rightarrow F^n {\\mathcal {O}}$ the infinite composition $ \\bigcirc _{i\\uparrow \\in {\\mathbb {N}}} e^{ \\mathpzc {v}_i} $ is a diffeomorphism of the graded manifold $(M_0,{\\mathcal {O}})$ , well-defined in the sense of [9]" ], [ "Normal forms outside of the zero locus of the curvature", "For $(M_0,{\\mathcal {O}},Q) $ a $ \\mathbb {Z}^*$ -graded $Q$ -manifoldWe present the results of this section for $Q$ -manifolds with a smooth base.", "All results in section extend to $Q$ -manifolds over affine varieties or $Q$ -manifolds over Stein varieties.This may no longer be true for the results of Section , where we will treat non-smooth cases separately., recall (Equation REF ) that the vector bundle $E_{+1}$ (in fact its dual) is defined by applying the Serre-Swan theorem: $\\Gamma (E_{+1}^*) = \\left( \\frac{\\mathcal {I}}{\\mathcal {I}^2}\\right)_{-1} = \\frac{\\mathcal {I}_{-1}}{\\mathcal {I}^2_{-1}}=\\frac{F^1 {\\mathcal {O}}_{-1} }{F^2 {\\mathcal {O}}_{-1} }.$ Definition 2.1 The composition $ F^1 {\\mathcal {O}}_{-1} \\stackrel{Q}{\\longrightarrow } {\\mathcal {O}}_0 \\longrightarrow {\\mathcal {O}}(M_0) \\simeq \\frac{{\\mathcal {O}}_0}{F^1 {\\mathcal {O}}_0} $ is ${\\mathcal {O}}(M_0) $ -linear and admits $F^2 {\\mathcal {O}}_{-1}$ in the kernel.", "It is therefore given by the contraction with a canonical section of $E_{+1} $ that we call the curvature of the $Q$ -manifold $M$ and denote by $\\kappa $.", "Equivalently, the curvature is defined by the following commutative diagram, whose horizontal lines are exact: $ 
\\xymatrix{ F^2 {\\mathcal {O}}_{-1} \\ar@{^(->}[r] \\ar[d]^{Q} & F^1 {\\mathcal {O}}_{-1} \\ar@{->>}[r] \\ar[d]^{Q} & \\Gamma (E_{+1}^*) \\ar@{..>}[d]^{\\mathfrak {i}_\\kappa } \\\\ F^1 {\\mathcal {O}}_0 \\ar@{^(->}[r] & {\\mathcal {O}}_0 \\ar@{->>}[r] & {\\mathcal {O}}(M_0) }$ Remark 2.2 The previous description of the curvature, although abstract, implies that it is a canonical notion, but it can be described in a more explicit manner, upon choosing a splitting.", "The polynomial degree is then well-defined, and $\\mathfrak {i}_\\kappa $ is the only component of $Q$ of polynomial degree $ -1$ : $ Q = \\mathfrak {i}_\\kappa + \\sum _{i \\ge 0} Q^{[i]} $ where $Q^{[i]} $ is the component of polynomial degree $i$ of $Q$ .", "Also, after having chosen a splitting (which always exists in the smooth case) and local coordinates: $Q = \\sum _{i=1}^{{\\mathrm {rk}}(E_{+1})} \\tilde{\\kappa }_i(x)\\frac{\\partial }{\\partial \\eta _i}+ \\sum _{j=1}^{\\mathrm {dim}(M_0)}f_j \\frac{\\partial }{\\partial x_j} +\\sum _{i \\in \\mathbb {Z} \\backslash \\lbrace 0,1\\rbrace }\\sum _{j=1}^{ \\mathrm {rk}(E_{i})}g_{i,j} \\frac{\\partial }{\\partial \\theta _{i,j}}.$ Here the $x_i$ 's are the variables in the base manifold, the $\\eta _i $ 's are the degree $ -1$ variables, the $\\theta _{i,j} $ 's, $j=1,\\dots ,{\\mathrm {rk}}(E_i)$ , are the degree $-i$ variables for $i\\ne 0,1$ , the functions $\\tilde{\\kappa }_i(x)\\in {\\mathcal {O}}_0 $ are functions whose projections in $C^\\infty (M_0)$ are the components of the section $\\kappa $ , $f_j \\in {\\mathcal {O}}_{1}$ , and $ g_{i,j} \\in {\\mathcal {O}}_{1-i}$ .", "$\\square $ It is well-known [15] that on a super-manifold of dimension $(n,p)$ , around every point where a self-commuting odd vector field $Q$ does not vanish on the zero section, there exist local coordinates $(x_1, \\dots ,x_n, \\eta _1, \\dots , \\eta _p ) $ such that $Q=\\tfrac{\\partial }{\\partial \\eta _1} .$ Below is the equivalent of this statement for the ${\\mathbb {Z}}^*$ -graded case.", "Proposition 2.3 Let $(M_0, {\\mathcal {O}},Q) $ be a $ \\mathbb {Z}^*$ -graded $Q$ -manifold with associated bundles $(E_i)_{i \\in {\\mathbb {Z}}^*} $ over $M_0$ .", "Over every open set $ U \\subset M_0$ over which the curvature $\\kappa \\in \\Gamma (E_1) $ is different from zero at every point, there is a splitting $ {\\mathcal {O}}(U) \\simeq \\Gamma \\left(\\tilde{S} \\left(\\oplus _{i \\in {\\mathbb {Z}}^*} E_i^*\\right)\\right) $ under which $ Q= \\mathfrak {i}_\\kappa , $ i.e.", "the degree $+1$ vector field $Q$ is given by the contraction with the curvature.", "Remark 2.4 In the situation when there is duality (in the sense of Section REF ), Proposition REF may be restated as follows: every open subset on which the curvature $ \\kappa \\in \\Gamma (E_{+1})$ is different from zero at every point admits a dual $ \\mathbb {Z}^*$ -graded Lie $\\infty $ -algebroid for which all the brackets $(\\ell _k)_{k \\ge 1} $ are equal to zero except for the 0-ary bracket (which is $ \\kappa $ ).", "Also, it immediately implies the existence of local coordinates as in Equation (REF ) such that $Q$ takes the form (REF ).", "$\\square $ The proof of Proposition REF goes through the next three lemmas (see Definition REF for the negative part $Q_-$ of the vector field $Q$ ).", "Lemma 2.5 There exists a degree $-1 $ function $\\alpha \\in F^1 {\\mathcal {O}}$ such that $Q_- (\\alpha )=1 \\in {\\mathcal {O}}$ .", "Proof.", "Take any splitting $ {\\mathcal {O}}(U) \\simeq \\Gamma \\left( \\tilde{S}(\\oplus _{i \\in {\\mathbb {Z}}^*} E^*_i) \\right) $ .", "Since the curvature $\\kappa $ is a 
nowhere vanishing section of $ E_{+1}$ , there exists $\\alpha \\in \\Gamma (E_{+1}^*) \\subset F^1 {\\mathcal {O}}$ such that $\\langle \\kappa ,\\alpha \\rangle =1 $ .", "We then have $Q(\\alpha )=\\langle \\kappa ,\\alpha \\rangle + F=1 + F $ for some function $F\\in F^1{\\mathcal {O}}_0={\\mathcal {O}}_0\\cap \\mathcal {I}_-={\\mathcal {O}}_0\\cap \\mathcal {I}_+ $ .", "As a consequence, $ Q_-(\\alpha )= 1$ .", "$\\blacksquare $ Lemma 2.6 There exists a splitting $ {\\mathcal {O}}(U) = \\Gamma _U\\left(\\tilde{S}(\\oplus _{i \\ne 0} E_i^*)\\right) $ such that $Q= Q_- $ .", "Proof.", "The choice of a splitting $ {\\mathcal {O}}(U) \\simeq \\Gamma \\left(\\tilde{S}(\\oplus E^*)\\right) $ allows to decompose functions and vector fields according to their negative degree, and any function of given degree decomposes as a sum $ f= \\sum _{n \\ge 0} f^{(n)}$ with $f^{(n)}$ a function of negative degree $n$ ($deg_-(f^{(n)}) = n$ ).", "For a degree $+1$ vector fields $R$ , we have: $ R= \\sum _{i \\ge -1} R^{(n)} $ with $R^{(n)}$ a vector field of negative degree $n$ .", "Notice that, for instance, $Q_-=Q^{(-1)} $ .", "We construct by induction a sequence $\\Phi _n =e^{\\mathpzc {v}_n}$ (starting at $n=1$ ) of graded manifold isomorphisms that satisfy the following conditions: $\\mathpzc {v}_n$ is a vector field such that $\\mathpzc {v}_n : {\\mathcal {O}}\\rightarrow F^n {\\mathcal {O}}$ for all $n \\in {\\mathbb {N}}$ (i.e.", "$\\mathpzc {v}_n^{(i)} =0$ for $ i < n$ ).", "the push-forward $Q_n$ of the vector field $Q$ by $ \\Phi _n \\circ \\dots \\circ \\Phi _1$ is of the form: $ Q_{n+1} = Q^{(-1)} + Q_{n+1}^{(n+1)}+ \\cdots $ The sequence is constructed as follows: $Q_0=Q$ and at each step we choose $\\mathpzc {v}_{n+1} = -\\alpha Q_{n}^{(n)} $ , with $\\alpha $ as in Lemma REF .", "It follows from $[Q_n, Q_n]=0 $ that $[Q^{(-1)} , Q_{n}^{(n)}]=0 $ .", "As a consequence, the push-forward vector of $Q_n$ by $e^{ \\mathpzc {v}_{n}}$ , i.e.", "the derivation: $ e^{-\\mathpzc {v}_{n}} Q_n e^{ \\mathpzc {v}_{n} } =\\sum _{k=0}^\\infty \\frac{1}{k!}", "{\\mathrm {ad}}_{\\mathpzc {v}_{n}}^k Q \\hbox{ (all sums are finite for a given negative degree)} $ is given (up to components of negative degree $\\ge n+1 $ ) by $ Q_n + [Q_n,\\mathpzc {v}_{n}] = Q^{(-1)} + Q_n^{(n)} - [Q^{(-1)}, \\alpha Q_n^{(n)}] = Q^{(-1)} + Q_n^{(n)} - Q_n^{(n)} = Q^{(-1)} .", "$ The henceforth constructed sequence satisfies the required assumption.", "We then apply Proposition REF to construct the infinite composition $\\Psi := \\bigcirc _{i\\uparrow \\ge 1} e^{\\mathpzc {v}_i} $ .", "By construction, the push-forward of $Q$ through $\\Psi $ is $Q_-$ , which completes the proof.", "$\\blacksquare $ Lemma 2.7 There exists a splitting $ {\\mathcal {O}}(U) = \\Gamma (\\tilde{S}(\\oplus _{i\\ne 0} E^*_i)) $ such that $Q^{(-1)}= \\mathfrak {i}_\\kappa $ .", "Proof.", "The proof consists in repeating the steps of the proof of Lemma REF , by using now the polynomial degree, which is well-defined in the negative part.", "We write $ Q^{(-1)}= \\mathfrak {i}_\\kappa + Q^{[0]}+Q^{[1]} + \\cdots , $ where $[i]$ now stands for the polynomial degree.", "We then transport $Q^{(-1)}$ through $e^{\\alpha Q^{[0]}}$ .", "Since $ [ \\mathfrak {i}_\\kappa , Q^{[0]} ]=0$ , the vector field obtained in such a way is now of the form: $ Q^{(-1)}_1 = \\mathfrak {i}_\\kappa + Q^{[1]}_1 + Q^{[2]}_1+ \\cdots , $ for new $(Q_1 - \\mathfrak {i}_\\kappa )$ of polynomial degree $\\ge 1$ .", "We then construct recursively a collection of isomorphisms of the 
graded manifold $ M$ that satisfy the requirements of Proposition REF : since we only use negative variables at this point, the ideal of elements of polynomial degree $k$ in negative variables is included in $F^k {\\mathcal {O}}$ (see [9] for a more precise statement).", "Their infinite composition intertwines $Q^{(-1)} $ with $\\mathfrak {i}_\\kappa $ .", "$\\blacksquare $ Proof.", "(of Proposition REF ) The statement follows from Lemmas REF and REF above: Lemma REF constructs an isomorphism of graded manifolds under which $Q$ becomes its negative part $Q_{-} $ , and Lemma REF constructs an isomorphism of graded manifolds under which $Q_-$ becomes $\\mathfrak {i}_\\kappa $ .", "$\\blacksquare $ Corollary 2.8 Let $(M_0,{\\mathcal {O}},Q) $ be a $ \\mathbb {Z}^*$ -graded $Q$ -manifold.", "On every open set $ U \\subset M_0$ over which the curvature $\\kappa \\in \\Gamma (E_1) $ is different from zero at every point, the cohomology of $({\\mathcal {O}}(U),Q) $ is zero in every degree.", "Proof.", "The statement follows from the easily checked fact that multiplication by the function $\\alpha \\in \\Gamma (E_{+1}^*) $ defined in Lemma REF is a contracting homotopy for $Q=\\mathfrak {i}_\\kappa $ .", "$\\blacksquare $" ], [ "Geometry of the zero locus of the curvature of a $Q$ -manifold", "Consider a $Q$ -manifold $(M_0,{\\mathcal {O}}, Q) $ , with associated bundle $(E_i)_{i \\in {\\mathbb {Z}}^*} $ and curvature $\\kappa \\in \\Gamma (E_{+1})$ (see Definition REF ).", "Definition 2.9 We call the zero locus ideal of ${\\mathcal {O}}$ the image of $\\mathfrak {i}_\\kappa \\colon \\Gamma (E_{+1}^*) \\rightarrow {\\mathcal {O}}$ and we denote it by $\\langle \\kappa \\rangle $ .", "We call functions on the zero locus the quotient algebra $ {\\mathcal {O}}/ \\langle \\kappa \\rangle $ .", "The space $ \\mathcal {I}_- + {\\mathcal {O}}Q[\\mathcal {I}_-] \\subset {\\mathcal {O}}$ is both an ideal of $ {\\mathcal {O}}$ and stable under $Q$ , so that the latter induces a derivation $Q_+$ of the quotient $ \\mathcal {K}_+ := \\frac{{\\mathcal {O}}}{\\mathcal {I}_- + {\\mathcal {O}}Q[\\mathcal {I}_-]} , $ and $ (\\mathcal {K}_+, Q_+)$ is then a differential graded algebra.", "Definition 2.10 The differential graded algebra $(\\mathcal {K}_+,Q_+) $ is called the zero locus DGA of a $\\mathbb {Z}^* $ -graded $Q$ -manifold $(M_0,{\\mathcal {O}},Q)$ .", "Here is an important result.", "Proposition 2.11 The zero locus DGA $({\\mathcal {K}}_+,Q_+) $ of a $\\mathbb {Z}^* $ -graded $Q$ -manifold $(M_0,{\\mathcal {O}},Q)$ is a positively graded $Q$ -variety over the algebra $C^\\infty (M_0)/\\langle \\kappa \\rangle $ of functions on the zero locus and there is a splitting $\\mathcal {K}_+ \\simeq \\Gamma _{\\langle \\kappa \\rangle } \\left( S(\\oplus _{i \\ge 1} E_{-i}^*)\\right) .$ Here $\\langle \\kappa \\rangle $ is the zero-locus ideal and $\\Gamma _{ \\langle \\kappa \\rangle }(E) = \\Gamma (E) \\otimes _{C^\\infty (M_0)} C^\\infty (M_0)/\\langle \\kappa \\rangle $ for every vector bundle $E \\rightarrow M_0$ .", "We start with a lemma.", "Lemma 2.12 For any $\\mathbb {Z}^* $ -graded $Q$ -manifold $(M_0,{\\mathcal {O}},Q)$ : $ {\\mathcal {O}}Q[\\mathcal {I}_-]+\\mathcal {I}_- = \\langle \\kappa \\rangle {\\mathcal {O}}+ \\mathcal {I}_-, $ where $\\kappa $ stands for the curvature.", "Proof.", "For any $ \\alpha \\in \\Gamma (E_{+1}^*)$ : $ \\langle \\kappa , \\alpha \\rangle = Q[\\alpha ] + \\sum _{i \\ge 1} F_i G_i $ where $F_i,G_i \\in {\\mathcal {O}}$ are functions of degree $-i$ and $+i $ respectively 
(the sum might be infinite).", "This proves the inclusion $ \\langle \\kappa \\rangle \\, {\\mathcal {O}}+ \\mathcal {I}_- \\subset {\\mathcal {O}}Q[\\mathcal {I}_-]+\\mathcal {I}_- .$ The converse inclusion is straightforward.", "$\\blacksquare $ Proof.", "(of Proposition REF ) As a consequence of Lemma REF above, the graded algebra morphism $ \\Gamma \\left( {S}\\left(\\oplus _{i \\ge 1} E_{-i}^*\\right)\\right) \\rightarrow {\\mathcal {K}}_+$ is surjective, so that the following sequence is exact: $ 0 \\rightarrow \\langle \\kappa \\rangle \\, \\, \\Gamma \\left( {S}\\left(\\oplus _{i \\ge 1} E_{-i}^*\\right)\\right) \\rightarrow \\Gamma \\left( {S}\\left(\\oplus _{i \\ge 1} E_{-i}^*\\right)\\right) \\rightarrow {\\mathcal {K}}_+ \\rightarrow 0.", "$ Consequently: the degree of elements in ${\\mathcal {K}}_+$ is non-negative by construction; degree 0 elements can be identified with $ {\\mathcal {O}}(M_0)/ \\langle \\kappa \\rangle $ ; and, for $k\\ge 1$ , degree $+k$ elements are elements of degree $k$ in the symmetric algebra (over ${\\mathcal {O}}(M_0)/ \\langle \\kappa \\rangle $ ) of $ \\oplus _{i \\ge 1} \\Gamma (E_{-i}^*)\\otimes {\\mathcal {O}}(M_0)/ \\langle \\kappa \\rangle $ .", "This yields the isomorphism of projective ${\\mathcal {O}}(M_0)/ \\langle \\kappa \\rangle $ -modules in Equation (REF ).", "$\\blacksquare $ Definition 2.13 Let $(M_0,{\\mathcal {O}},Q) $ be a $\\mathbb {Z}^*$ -graded $Q$ -manifold.", "We call zero locus NQ-variety the $NQ$ -variety with sheaf of functions ${\\mathcal {K}}_+ $ and differential $ Q_+$ .", "Remark 2.14 For $(M_0,{\\mathcal {O}},Q) $ a $\\mathbb {Z}^*$ -graded $Q$ -manifold with a chosen splitting, $ Q$ can be decomposed by the negative degree as an infinite sum: $ Q = q_{-1} + q_0 + \\dots + q_i + \\dots $ with $ q_i$ a degree $+1$ vector field of negative degree $i $ for $ i \\ge -1 $ .", "Then, it is easy to see that $ q_{-1}$ induces the negative part of the $Q$ -manifold and that $ q_0 $ (which commutes with $q_{-1} $ , hence induces a derivation of $\\mathcal {K}_+ $ ) induces the differential $Q_+$ of the zero locus NQ-variety.", "$\\square $ Remark 2.15 As explained in [12], when the ideal $\\langle \\kappa \\rangle $ is the ideal of functions vanishing on a submanifold $X\\subset M_0$ , then the distribution $\\mathcal {D}:= \\rho _1(\\Gamma (E_{-1})) $ is made of vector fields tangent to $X$ and its restriction to $X$ is involutive on $X$ .", "This singular foliation on the submanifold $X$ is the basic singular foliation of the $NQ$ -manifold $(\\mathcal {K}_+,Q_+)$ .", "The same conclusion holds when $X$ is a singular subset, provided that vector fields on $X$ can be defined in an appropriate manner (e.g.", "for an affine variety).", "$\\square $" ], [ "Koszul-Tate resolution and vector fields on the zero locus NQ-variety", "Recall that for any vector bundle $E \\rightarrow M_0$ and any ideal $I \\subset C^\\infty (M_0) $ , we use the following notation: $ \\Gamma _I (E) := \\Gamma (E) \\otimes _{C^\\infty (M_0)} {C^\\infty (M_0)}/{I} .$ If $I$ is the vanishing ideal of a submanifold $X_I \\subset M_0 $ (i.e.", "$X_I$ is the zero locus of $I$ ), then $\\Gamma _I (E)$ is simply the space of sections of the restriction of $E$ to $X_I$ .
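As a toy illustration of the zero-locus ideal $\\langle \\kappa \\rangle $ and of quotients of the form $C^\\infty (M_0)/I$ appearing in $\\Gamma _I(E)$ , the following sympy sketch (ours; polynomials stand in for smooth functions, and the choice $\\kappa =xy$ is arbitrary) takes $M_0={\\mathbb {R}}^2$ , a trivial line bundle $E_{+1}$ with a single degree $-1$ coordinate $\\eta $ , and $Q=\\kappa \\,\\partial /\\partial \\eta $ ; it evaluates $Q$ on degree $-1$ functions and reduces functions modulo $\\langle \\kappa \\rangle $ to represent their classes on the zero locus.

import sympy as sp

x, y = sp.symbols("x y")
kappa = x * y                     # curvature: a chosen section of the trivial bundle E_{+1}

def Q(alpha_coeff):
    """Q applied to the degree -1 function alpha = alpha_coeff * eta, i.e. <kappa, alpha>."""
    return sp.expand(kappa * alpha_coeff)

ideal = [kappa]                   # the zero-locus ideal <kappa> is the image of Q in degree 0

def class_on_zero_locus(f):
    """A representative of the class of f modulo <kappa> (polynomial model of the quotient)."""
    _, remainder = sp.reduced(sp.expand(f), ideal, x, y, order="lex")
    return remainder

print("Q(eta) =", Q(1))                                            # the curvature x*y itself
print("class of x^2*y + x + y^3:", class_on_zero_locus(x**2*y + x + y**3))   # x + y^3
print("x^3*y lies in <kappa>:", class_on_zero_locus(x**3 * y) == 0)          # True

In this polynomial model the zero locus is the union of the two coordinate axes, a singular subset in the sense of Remark 2.15, and the quotient computed above plays the role of the degree 0 part of the zero-locus DGA.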
], [ "Koszul-Tate resolutions", "Let $M_0$ be a smooth manifold.", "We recall the usual definition of a Koszul-Tate resolution of an ideal.", "Definition 3.1 A Koszul-Tate resolutionWe use the standard notations from [7].", "of an ideal $I \\subset C^\\infty (M_0)$ is a $\\mathbb {Z}_-$ -graded $Q$ -manifold $(M_0,{\\mathcal {O}}_-, \\delta ) $ which is concentrated in non-positive degree ${\\mathcal {O}}_- = \\oplus _{i \\le 0} \\mathcal {O}_i$ , with ${\\mathcal {O}}_0 = C^\\infty (M_0)$ , and satisfies that the cohomology of the (total) degree $+1 $ vector field $\\delta \\colon \\mathcal {O}_- \\rightarrow \\mathcal {O}_- $ is given by $H^i ({\\mathcal {O}}_- , \\delta )=\\left\\lbrace \\begin{array}{cc}\\mathcal {C}^\\infty (M_0)/I, & i=0\\\\0, & i<0\\end{array}\\right.$ Example 3.2 For $M_0=\\mathbb {R}^n$ and $I$ the ideal of functions vanishing at 0, the graded algebra exterior form $\\Omega (M_0) $ equipped with $\\delta = \\mathfrak {i}_E$ the contraction with the Euler vector field is a Koszul-Tate resolution of $I$ .", "In that case, only $E_{+1}=TM_0$ is non-zero and the curvature is the Euler vector field.", "We start with a few remarks that may help to understand the notion.", "Remark 3.3 Item (1) in Definition REF implies that the associated canonical graded vector bundle $E_\\bullet $ of a Koszul-Tate resolution is concentrated in positive degrees $E_\\bullet = \\oplus _{i \\ge 1} E_{+i} $ .", "$\\square $ Remark 3.4 Let $\\kappa \\in \\Gamma (E_{+1})$ be the curvature of a Koszul-Tate resolution.", "The condition on $H^0({\\mathcal {O}},\\delta )$ in Definition REF implies that the curvature ideal $\\langle \\kappa \\rangle $ of $\\kappa \\in \\Gamma (E_{+1})$ coincides with $I$ , i.e.", "a function $ F \\in C^\\infty (M_0)$ belongs to $I$ if and only if there exists a section $\\alpha \\in \\Gamma (E_{+1}^*) $ such that $ F = \\langle \\kappa , \\alpha \\rangle = \\delta (\\alpha )$ .", "$\\square $ We will need a variation of Definition REF .", "Definition 3.5 Let $I \\subset C^\\infty (M_0)$ be an ideal, and consider a positively-graded variety $\\mathcal {K}_+ $ on $C^\\infty (M_0)/I $ .", "A Koszul-Tate resolution of $\\mathcal {K}_+ $ is a pair made of a splitting of $\\mathcal {K}_+$ , i.e.", "$ \\mathcal {K}_+ \\simeq \\Gamma _I \\left( S\\left( \\oplus _{i \\ge 1} E_{-i}^* \\right) \\right), $ a Koszul-Tate resolution of $I$ with splitting $ \\left( \\Gamma \\left( S\\left( \\oplus _{i \\ge 1} E_{i}^* \\right)\\right), \\delta \\right) ,$ assembled into a $Q$ -manifold $(M_0,{\\mathcal {O}},\\tilde{\\delta })$ with splitting ${\\mathcal {O}}\\simeq \\tilde{S} \\left( \\Gamma \\left( \\oplus _{i \\ne 0} E_i^* \\right)\\right) $ where $ \\tilde{\\delta }$ is the extension of $\\delta $ which is identically 0 on $\\oplus _{i \\ge 1} \\Gamma \\left(E_{-i}^*\\right) $ .", "Here are a few comments about Definition REF .", "Lemma 3.6 Let $(M_0,{\\mathcal {O}},\\tilde{\\delta })$ be a Koszul-Tate resolution of $\\mathcal {K}_+ $ as in Definition REF .", "Its zero locus DGA $\\frac{\\displaystyle {\\mathcal {O}}}{\\displaystyle {\\mathcal {O}}\\tilde{\\delta }(\\mathcal {I}_-) + \\mathcal {I}_- } $ is $\\mathcal {K}_+$ .", "The cohomology of the complex $({\\mathcal {O}}, \\tilde{\\delta })$ is given by: $H^i ({\\mathcal {O}}, \\tilde{\\delta })=\\left\\lbrace \\begin{array}{cc}{\\mathcal {K}}_+,& i = 0\\\\0, & i>0\\end{array}\\right.$ Here the degree considered is the negative degree $ {\\mathrm {deg_-}}$ (compare with Equation (REF ) where the index was running over the 
total degree, i.e.", "the total degree of $\\tilde{\\delta }$ is still $+1$ ).", "Proof.", "The first item is a direct consequence of the identification: ${\\mathcal {O}}\\tilde{\\delta }(\\mathcal {I}_-)+ \\mathcal {I}_-=I {\\mathcal {O}}+ \\mathcal {I}_- = \\langle I + \\Gamma \\left( \\oplus _{i\\ge 1} E_i^* \\right) \\rangle .$ Let us prove the second item.", "The cohomology of the complex $\\left( \\Gamma (S( \\oplus _{i \\ge 1} E_{-i}^*)) \\otimes _{C^\\infty (M_0)} \\Gamma (S( \\oplus _{i \\ge 1} E_{+i}^*)) , {\\mathrm {id}} \\otimes \\delta \\right)$ is $ \\Gamma (S \\oplus _{i \\ge 1} E_{-i}^*) \\otimes _{C^\\infty (M_0)} C^\\infty (M_0)/I \\simeq {\\mathcal {K}}_+$ .", "Now, $({\\mathcal {O}}, \\tilde{\\delta }) $ is the completion of the complex (REF ) with respect to the negative degree, but completion does not affect cohomology, and the result follows.", "$\\blacksquare $ We conclude the section with an important definition.", "To any $Q$ -manifold $(M_0,{\\mathcal {O}},Q) $ was associated in Definition REF another $Q$ -manifold, called its negative part $(M_0, \\mathcal {O}/\\mathcal {I}_+, Q_{-}) $ .", "Definition 3.7 We say that a $\\mathbb {Z}^*$ -graded $Q$ -manifold $(M_0,{\\mathcal {O}},Q)$ with curvature $ \\kappa $ has a Koszul-Tate negative part if its negative part is a Koszul-Tate resolution of the curvature ideal $\\langle \\kappa \\rangle $ ." ], [ "Vector fields on Koszul-Tate resolutions I: the cohomology", "The space ${\\mathfrak {X}}({\\mathcal {O}}_-)$ of vector fields on a Koszul-Tate resolution $(M_0,{\\mathcal {O}}_-,\\delta ) $ of an ideal $I \\subset C^\\infty (M_0) $ form a DGLA when equipped with the graded commutator and the differential ${\\mathrm {ad}}_\\delta $ .", "In particular, $\\left({\\mathfrak {X}}({\\mathcal {O}}_-), {\\mathrm {ad}}_\\delta \\right) $ is a complex, whose cohomology we now compute.", "To start with, let us notice that $ (({\\mathcal {O}}_- \\delta (\\mathcal {I}_-)+ \\mathcal {I}_-) {\\mathfrak {X}}({\\mathcal {O}}_-), {\\mathrm {ad}}_\\delta ) $ is a subcomplex of $\\left({\\mathfrak {X}}({\\mathcal {O}}_-), {\\mathrm {ad}}_\\delta \\right) $ .", "The quotient complex is canonically isomorphic to a complex of the form $\\Gamma _I(TM) \\mapsto \\Gamma _I (E_{+1}) \\mapsto \\Gamma _I (E_{+2}) \\mapsto \\cdots ,$ recall Equation (REF ) for the notation $\\Gamma _I$ .", "Definition 3.8 Let $(M_0,{\\mathcal {O}},\\delta ) $ be a Koszul-Tate resolution of an ideal $I \\subset C^\\infty (M_0)$ .", "We call linearization of Koszul-Tate differential $\\delta $ at the zero locus the complex (REF ), and denote it by $({\\mathfrak {X}}_{lin}, \\delta _{lin}) $ .", "Remark 3.9 The complex (REF ) can be understood as follows when $I$ is the vanishing ideal of a subset $X_I \\subset M_0 $ : the differential of the curvature $\\kappa : M_0 \\rightarrow E_{+1} $ is a vector bundle morphism: $ T\\kappa \\colon TM_0 \\rightarrow TE_{+1}$ over $\\kappa :M_0 \\rightarrow E_{+1}$ Now, for any $m \\in X_I$ , since $\\kappa (m)=0_m$ , there is a canonical decomposition $ T_{0_m} E_{+1} = T_{m}M + E_{+1}|_m$ , $ pr_2 \\circ T_m\\kappa $ can be seen as a linear map $ T_m M \\rightarrow E_{+1}|_m $ , where $pr_2 $ being the projection onto the second component.", "This map easily checked to coincide with the first bundle morphism in (REF ).", "All remaining morphisms in (REF ) are simply the restriction to $X_I$ of the component of polynomial degree 0 of $\\delta $ (which is by construction a degree $ +1$ vector bundle endomorphism of $E_\\bullet $ 
).", "$\\square $ Let us now state the main result of this section.", "For a Koszul-Tate resolution, recall that the negative degree is simply the opposite of the total degree.", "Also, ${\\mathrm {ad}}_\\delta $ is a negative degree $-1$ operator.", "Also, the following remark helps to understand the statement.", "Remark 3.10 A ${\\mathrm {ad}}_\\delta $ -cocycle $q$ of degree 0 induces a vector field $\\underline{q} \\in \\mathfrak {X}(M_0)$ which satisfies $\\underline{q} [I] \\subset I$ , and therefore induces a derivation $q^I $ of $C^\\infty (M_0) /I$ .", "If $q$ is an ${\\mathrm {ad}}_\\delta $ -coboundary, then $q^I=0$ .", "$\\square $ Theorem 3.11 Let $(M_0,{\\mathcal {O}}_-,\\delta )$ be a Koszul-Tate resolution of an ideal $I \\in C^\\infty (M_0) $ .", "With respect to the negative degree, we have: $H^{-i} (\\mathfrak {X}({\\mathcal {O}}_-), {\\mathrm {ad}}_{{\\delta }})=\\left\\lbrace \\begin{array}{cc}H^{-i}(\\mathfrak {X}_{lin}, \\delta _{lin} ), & i \\le 0\\\\0, & i > 0\\end{array}\\right.$ In particular, an ${\\mathrm {ad}}_{{\\delta }}$ -cocycle $q$ of degree 0 is a coboundary if and only if its induced derivation $q^I $ of $ C^\\infty (M_0)/I$ is zero.", "Proof.", "Let us chose a splitting ${\\mathcal {O}}_- \\simeq \\Gamma \\left(S(\\oplus _{i \\ge 1} E_i^*) \\right) $ , and a family of affine connections $\\nabla ^k $ on $ E_k^*$ (recall Remark REF for notations).", "Consider the following bigrading (on the “North-West” quarter): $ \\mathfrak {X} ( {\\mathcal {O}}_-)_{a,b} = \\left\\lbrace \\begin{array}{ll} {\\mathcal {O}}_a \\otimes \\mathfrak {X} (M_0) &\\hbox{ for $a \\ge 0$ and $b=0$,} \\\\{\\mathcal {O}}_{a} \\otimes \\Gamma (E_{-b}) & \\hbox{ for $a \\ge 0$ and $b\\le -1$,} \\\\ 0 & \\hbox{ otherwise.}", "\\end{array}\\right.$ We adopt the following conventions: All tensor products are over $C^\\infty (M_0) $ .", "A section $ e \\in \\Gamma (E_b)$ is seen as the vertical vector field given by the derivation $\\mathfrak {i}_e $ of ${\\mathcal {O}}_-$ .", "Notice that for the negative degree, it is of degree $ -b$ .", "A vector field $X \\in \\mathfrak {X} (M_0) $ is extended to a degree 0 derivation of ${\\mathcal {O}}_- $ by $X[\\epsilon _k] = \\nabla ^k_X \\epsilon _k $ for every section $\\epsilon _k \\in \\Gamma (E_k^*)$ .", "It is indeed a bigrading, since: $ \\mathfrak {X} ({\\mathcal {O}}_-)_i = \\oplus _{a\\ge 0} \\mathfrak {X} ({\\mathcal {O}}_-)_{a,i-a} ,$ (infinite sums are allowed, since they converge with respect to the filtration $(F^i {\\mathcal {O}})_{i \\ge 0}$ ).", "With respect to this bi-grading, ${\\mathrm {ad}}_{\\delta } $ decomposes as follows: ${\\mathfrak {X} ( {\\mathcal {O}}_-)_{a+2,b-3} & & & \\\\ & \\mathfrak {X} ( {\\mathcal {O}}_-)_{a+1,b-2} & & \\\\& & \\mathfrak {X} ( {\\mathcal {O}}_-)_{a,b-1} &\\mathfrak {X} ( {\\mathcal {O}}_-)_{a,b}[d]^{ \\delta \\otimes {\\mathrm {id}}}[l]^{ {\\mathrm {id} \\otimes D }}[llu] [llluu]\\\\& & & \\mathfrak {X} ( {\\mathcal {O}}_-)_{a-1,b}}$ We can now use generic diagram chasing arguments: since all vertical lines are acyclic in degree $\\ne 0 $ , the cohomology is concentrated in the 0-th cohomology of the line $a=0 $ , which coincides with $ \\Gamma _I(TM_0) $ for $b=0$ and $ \\Gamma _I(E_b) $ for $b \\le -1$ .", "Equipped with the induced differential, a direct computation shows that it coincides (with opposite signs) with the differential of the complex (REF ).", "This proves (REF ).", "Since all vertical lines are exact, a degree 0 ${\\mathrm {ad}}_\\delta $ -cocycle is exact if and only if 
its bi-degree $(0,0) $ component lies in the image of the vertical lines, i.e.", "belong to $I \\otimes \\mathfrak {X}(M_0) $ .", "Equivalently, this means that this element induces the zero map on $C^\\infty (M_0)/I $ .", "This completes the proof.", "$\\blacksquare $ Here is an immediate consequence of Theorem REF .", "Corollary 3.12 Let $(M_0,{\\mathcal {O}},\\tilde{\\delta })$ be a Koszul-Tate resolution of $\\mathcal {K}_+ $ as in Definition REF .", "With respect to the negative degree: $H^i (\\mathfrak {X}({\\mathcal {O}}), {\\mathrm {ad}}_{\\tilde{\\delta }})=\\left\\lbrace \\begin{array}{cc}\\mathcal {K}_+\\otimes _{C^\\infty (M_0)/I} H^{-i}(\\mathfrak {X}_{lin}, \\delta _{lin} ), & i < 0\\\\ \\mathcal {K}_+\\otimes _{C^\\infty (M_0)/I}\\left( \\oplus _{i \\ge 1}\\Gamma _I( E_{-i}) \\oplus H^{0}(\\mathfrak {X}_{lin}, \\delta _{lin} ) \\right), & i=0\\\\0, & i>0\\end{array}\\right.$ Proof.", "As in the proof of Theorem REF , one can use a family of connections on $(E_i)_{i \\in \\mathbb {Z}^*} $ to decompose the ${\\mathcal {O}}$ -module $\\mathfrak {X} ({\\mathcal {O}}) $ as the sum of two submodules: one is $ {\\mathcal {O}}\\otimes \\left( \\oplus _{i \\ge 1} {E_{i}} \\oplus TM_0 \\right) \\hbox{ and } {\\mathcal {O}}\\otimes \\left( \\oplus _{i \\ge 1} {E_{-i}}\\right) $ Both modules are ${\\mathrm {ad}}_{\\tilde{\\delta }} $ -stable.", "On the second one, ${\\mathrm {ad}}_{\\tilde{\\delta }} = \\tilde{\\delta } \\otimes {\\mathrm {id}}$ , so that the cohomology is concentrated in negative degree 0 and coincides with $ \\frac{\\Gamma (S(\\oplus _{i \\ge 1} E_{-i}^* ))}{I} \\otimes _{C^\\infty (M_0)} \\Gamma (\\oplus _{i \\ge 1} E_{-i}) = \\mathcal {K}_+ \\otimes _{C^\\infty (M_0)/I} \\Gamma _I(\\oplus _{i \\ge 1} E_{-i}).$ The first one is the completion of the tensor product of $ \\Gamma (S(\\oplus _{i \\ge 1} E_{-i}^* )) $ with the module $\\mathfrak {X}({\\mathcal {O}}_-) $ of vector fields on a Koszul-Tate resolution of $I$ , whose cohomology is given in Theorem REF .", "The differential being given by $ {\\mathrm {id}} \\otimes {\\mathrm {ad}}_{\\tilde{\\delta }}$ , the result then follows from Theorem REF and the fact that $\\mathcal {K}_+$ is a $C^\\infty (M_0)/I$ -projective module, so that tensoring with $\\mathcal {K}_+ $ preserves cohomology.", "$\\blacksquare $" ], [ "Vector fields on Koszul-Tate resolutions II: the extension", "We now consider another problem.", "As stated in Remark REF , for a Koszul-Tate resolution $(M_0,{\\mathcal {O}}_-,\\delta ) $ of an ideal $I$ , an ${\\mathrm {ad}}_{\\delta } $ -cocycle $q \\in \\mathfrak {X}({\\mathcal {O}}_-)$ induces a derivation $q^I$ of $C^\\infty (M_0)/I$ , and an ${\\mathrm {ad}}_{\\delta } $ -coboundary induces a derivation equal to zero.", "In particular, there is a Lie algebra morphism $ H^0 ( \\mathfrak {X} ({\\mathcal {O}}_-) , {\\mathrm {ad}}_{\\delta } ) \\longrightarrow {\\mathrm {Der}}( C^\\infty (M_0)/I) .$ The second part of Theorem REF implies that this morphism is injective.", "The following statement shows that it is surjective.", "Proposition 3.13 Let $(M_0,{\\mathcal {O}}_-,\\delta ) $ be a Koszul-Tate resolution of an ideal $I \\subset C^\\infty (M_0)$ .", "Then the natural Lie algebra morphism REF is an isomorphism $H^0 ( \\mathfrak {X} ({\\mathcal {O}}_-) , {\\mathrm {ad}}_{\\delta } ) \\, \\simeq \\, {\\mathrm {Der}}( C^\\infty (M_0)/I).", "$ In particular, every derivation $q^I$ of $C^\\infty (M_0) /I$ is induced by a degree 0 vector field $q \\in \\mathfrak {X}({\\mathcal {O}}_-)$ such that 
$[\delta , q]=0 $ .", "Proof.", "Denote the projection $C^\infty (M_0) \rightarrow C^\infty (M_0)/I $ by $F \mapsto \overline{F}$ .", "Also, let us choose a partition of unity $(U_k,\chi _k)_{k \in K} $ of the manifold $M_0$ for which each $U_k $ is a coordinate neighborhood on which each one of the vector bundles $E_k$ admits a trivialization.", "Let $q^I $ be a derivation of $C^\infty (M_0)/I $ .", "Let $x_1, \dots , x_n $ be local coordinates on the open subset $U_k$ for some $k \in K$ .", "Consider any functions $F_1, \dots , F_n \in C^\infty (U_k) $ such that $q^I(\overline{x_i}) = \overline{F_i} $ .", "The vector field $ \nu _k^0 := \sum _{i=1}^n F_i \frac{\partial }{\partial x_i} $ satisfies by construction that $\nu _k^0 (I) \subset I $ , since it induces the derivation of $C^\infty (U_k)/I $ which coincides with the restriction of $q^I $ to $U_k$ .", "These local vector fields $\nu ^0_k$ can be glued to a vector field $\nu ^0$ on $M_0$ : $ \nu ^0 = \sum _{k \in K} \chi _k \nu ^0_{k}.", "$ This vector field still satisfies $ \nu ^0 [I] \subset I$ by construction.", "Since $ I = \delta (\Gamma (E_{+1}^*))$ , for any local trivialization $\eta _1, \dots , \eta _r $ of $E_{+1}^* $ , defined on the open subset $U_k$ , the collection of functions $\kappa _i:=\delta (\eta _i)$ , $i=1, \ldots , r$ , locally generates the vanishing ideal of the zero locus, and there exist functions $ (\phi _{i,j})_{i,j=1}^r$ in $ C^\infty (U_k)$ such that $ \nu ^0 \delta (\eta _i)=\nu ^0 (\kappa _i) = \sum _{j=1}^r \phi _{i,j} \kappa _j=\sum _{j=1}^r \phi _{i,j} \delta (\eta _j) $ Consider the vector field $ \nu ^1_k := \sum _{i,j=1}^r \phi _{j,i}\eta _i \frac{\partial }{\partial \eta _j} .$ By construction, it satisfies $ \nu ^0 \circ \delta = \delta \circ \nu ^1_{k}.$ Since $\delta $ is $ C^\infty (M_0)$ -linear, the vector field $ \nu ^1 := \sum _{k \in K} \chi _k\nu ^1_{k} $ also satisfies: $\nu ^0 \circ \delta = \delta \circ \nu ^1.$ Now, $\nu ^0,\nu ^1 $ extend to vector fields on ${\mathcal {O}}_- $ , that we will denote by the same symbols.", "The proof then consists in constructing recursively $\nu ^j \in \bigoplus _{b\le -j}\mathfrak {X} ( {\mathcal {O}}_-)_{a,b}$ such that $ \nu ^j \circ \delta = \delta \circ \nu ^{j+1} .$ Provided that $\nu ^0, \dots , \nu ^j $ are constructed, the existence of $\nu ^{j+1} $ follows from the fact that $\nu ^j \circ \delta $ is valued in the kernel of $\delta $ : $ \delta \circ \nu ^j \circ \delta = \nu ^{j-1} \circ \delta ^2= 0 .$ Since the cohomology of $ ({\mathcal {O}}_-,\delta )$ is zero, $\nu ^j \circ \delta $ is therefore valued in the image of $\delta $ , and since ${\mathcal {O}}_-$ is a projective $C^\infty (M_0)$ -module, the existence of the vector field $\nu ^{j+1}$ is guaranteed.", "Moreover, with respect to the bi-grading above (see Equation (REF )), $\nu ^{j+1 }\in \bigoplus _{b\le -(j+1)}\mathfrak {X} ( {\mathcal {O}}_-)_{a,b}$ .", "As a consequence, the sequence $q^k := \sum _{j=0}^k \nu ^j$ converges and the limit is the desired vector field $q$ .", "$\blacksquare $ Consider now a Koszul-Tate resolution of a graded variety given by $\mathcal {K}_+$ as in Definition REF .", "Again, notice that a total degree $k$ and negative degree 0 vector field $q_0$ such that $[\tilde{\delta },q_0]=0 $ induces a degree $k$ derivation of $\mathcal {K}_+ $ .", "If $q_0$ is an ${\mathrm {ad}}_{\tilde{\delta }}$ -coboundary, that
derivation is zero.", "Proposition REF extends easily to give the following result.", "Corollary 3.14 Let $(M_0,{\mathcal {O}},\tilde{\delta })$ be a Koszul-Tate resolution of a graded variety $\mathcal {K}_+ $ as in Definition REF .", "There is a natural isomorphism: $ H^{(0,k)} \left(\mathfrak {X}({\mathcal {O}}), {\mathrm {ad}}_{\tilde{\delta }}\right) = {\mathrm {Der}}^k(\mathcal {K}_+) $ where $H^{(0,k)}$ stands for the cohomology in negative degree 0 and total degree $k$ and ${\mathrm {Der}}^k$ stands for derivations of degree $k$ of $\mathcal {K}_+ $ .", "In particular, for any degree $k$ derivation $Q_+$ of $\mathcal {K}_+ $ , there exists $q_0 \in \mathfrak {X} (\mathcal {O})$ of negative degree 0 and total degree $k$ satisfying $[\tilde{\delta },q_0]=0 $ and inducing the derivation $Q_+$ on $\mathcal {K}_+$ .", "Remark 3.15 (Extension of derivations in the affine case) At the beginning of the proof of Proposition REF it was shown that in the smooth category any derivation of the quotient algebra ${\mathcal {O}}/I$ , considered as functions on the zero locus, can be extended to a derivation of the entire algebra of functions ${\mathcal {O}}$ .", "This is also true in the affine case.", "Let $M_0$ be an affine $n$ -dimensional space over a field $k$ of characteristic 0 (we think of it as ${\mathbb {R}}$ or ${\mathbb {C}}$ ) with affine coordinates $(z_i)_{i=1}^n$ , and let $I\subset {\mathcal {O}}(M_0)=k[z_1, \ldots , z_n]$ be an ideal; then every derivation of ${\mathcal {K}}={\mathcal {O}}(M_0)/I$ admits an extension to a derivation of ${\mathcal {O}}(M_0)$ .", "Indeed, let $q^I$ be a derivation of ${\mathcal {K}}$ .", "Define $q$ such that $q(z_i)$ equals a preimage of $q^I[z_i]\in {\mathcal {K}}$ under the projection map ${\mathcal {O}}(M_0)\rightarrow {\mathcal {K}}$ , where $[z_i]=z_i \bmod I$ .", "Let us extend $q$ to the whole algebra of functions ${\mathcal {O}}(M_0)$ by the Leibniz rule.", "It is easy to see that $q(I)\subset I$ and $q \bmod I=q^I$ .", "Proposition REF remains therefore valid in the context of affine varieties in algebraic geometry.", "$\square $ Remark 3.16 The above statements (Theorem REF and Proposition REF ) can be proved in the following alternative way.", "Let ${\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}}={\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}}({\mathcal {O}}_-)$ be the Lie super subalgebra of derivations $\mathpzc {v}$ of ${\mathcal {O}}_-$ , satisfying $\mathpzc {v}({\mathcal {O}}_-)\subset I_-$ , where $ I_-=I\oplus \bigoplus _{i<0}{\mathcal {O}}_i$ .", "In particular, ${\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}}$ contains all derivations whose negative degree is positive.", "Since ${\mathcal {K}}={\mathcal {O}}_0/I\simeq {\mathcal {O}}_-/I_-$ , ${\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}}$ can be identified with vector fields on the whole non-positively graded manifold $(M_0,{\mathcal {O}}_-)$ that vanish at the zero locus; therefore ${\mathfrak {X}}({\mathcal {O}}_-)/{\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}}={\mathfrak {X}}_I$ , where ${\mathfrak {X}}_I = {\mathfrak {X}}({\mathcal {O}}_-)\otimes _{{\mathcal {O}}_-} {\mathcal {K}}$ .", "Notice that, since $\delta \in {\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}}$ , the subalgebra ${\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}} $ is closed under the adjoint action of $\delta $ .", "Thus we have a short exact sequence of complexes $\nonumber 0 \rightarrow \big ({\mathfrak {X}}_{\scriptscriptstyle \mathrm {null}} , \mathrm {ad}_\delta \big ) \rightarrow \big ({\mathfrak {X}}({\mathcal {O}}_-), \mathrm {ad}_\delta \big ) \rightarrow \big ({\mathfrak
{X}}_I, \\mathrm {ad}_\\delta \\big ) \\rightarrow 0$ which leads to the long exact sequence in cohomology $\\nonumber \\ldots \\rightarrow H^i \\big ({\\mathfrak {X}}_{\\scriptscriptstyle \\mathrm {null}}, \\mathrm {ad}_\\delta \\big ) \\rightarrow H^i \\big ({\\mathfrak {X}}({\\mathcal {O}}_-), \\mathrm {ad}_\\delta \\big ) \\rightarrow H^i \\big ({\\mathfrak {X}}_I, \\mathrm {ad}_\\delta \\big ) \\rightarrow H^{i+1} \\big ({\\mathfrak {X}}_{\\scriptscriptstyle \\mathrm {null}}, \\mathrm {ad}_\\delta \\big ) \\rightarrow \\ldots $ Notice that $\\big ({\\mathfrak {X}}_I, \\mathrm {ad}_\\delta \\big )$ coincides with (REF ).", "First we prove that the complex $\\big ({\\mathfrak {X}}_{\\scriptscriptstyle \\mathrm {null}} , \\mathrm {ad}_\\delta \\big )$ is acyclic.", "Combining this statement with the fact that any derivation of ${\\mathcal {K}}$ extends to a derivation of ${\\mathcal {O}}_-$ (see the beginning of the Proposition REF ), we prove Theorem REF and the remaining part of Proposition REF altogether.", "$\\square $ Remark 3.17 In fact, Theorem REF computed the cohomology of $({\\mathfrak {X}}({\\mathcal {O}}), \\delta )$ by using the spectral sequence associated to the following filtration of ${\\mathfrak {X}}({\\mathcal {O}}_-)$ : $F^p{\\mathfrak {X}}({\\mathcal {O}}_-)$ is the Lie subalgebra of vector fields on $M$ which annihilate the subalgebra of functions generated by all elements of negative degree $0, \\ldots , p-1$ .", "This spectral sequence converges in the second term.", "Its zero term gives sections of the graded vector bundle $\\pi ^*_-\\big ( TM_0\\oplus E_+\\big )$ , where $E_+=\\bigoplus _{k>0} E_k$ and $\\pi _-$ is the projection of $(M_0,{\\mathcal {O}})$ onto $M_0$ , while the first term of the spectral sequence - sections of the restriction of $TM_0\\oplus E_+$ on $X$ together with the differential determined by the normal linearization of $\\delta $ along $X$ .", "The “restriction” of vector fields to the zero locus ${\\mathfrak {X}}_I(\\mathcal {O})$ is an ${\\mathbb {N}}-$ graded $C^\\infty (M_0)/I$ -module, the homogeneous components of which of negative degree $i = 0$ and $i<0$ are canonically isomorphic to ${\\mathfrak {X}}_I (\\mathcal {O}_-)$ and ${\\mathcal {K}}\\otimes _{{\\mathcal {O}}_-}\\Gamma (E_{-i})$ , respectively.", "$\\oplus _{i} H^i ({\\mathfrak {X}}_{lin}, \\delta _{lin})$ is a graded Lie-Rinehart $C^\\infty (M_0)/I$ -algebra, which extends ${\\mathfrak {X}}_I ({\\mathcal {O}}_-)$ ; furthermore, the latter is embedded into the former as a Lie-Rinehart $C^\\infty (M_0)/I$ -subalgebra consisting of all elements of negative degree.", "$\\square $ It follows from Theorem REF that the ${\\mathrm {ad}}_\\delta $ -cohomology of vector fields on a Koszul-Tate resolution $(M_0,\\mathcal {O}_-,\\delta ) $ of $I$ are zero near any point outside the zero locus of $I$ .", "We also claim that non-trivial cohomologies of non-zero degree only appear on the singular part of the zero locus of $I$ .", "The following examples illustrate this phenomenon.", "Example REF is provided to show that the positive degree part in cohomology of $\\big ({\\mathfrak {X}}({\\mathcal {O}}_-), \\mathrm {ad}_\\delta \\big )$ , where $\\big ({\\mathcal {O}}_0, \\delta \\big )$ is the Koszul-Tate resolution of an ideal $I\\subset {\\mathcal {O}}(M_0)$ , is related to singularities of the zero locus of this ideal.", "Example REF tells us that, even in the complete intersection case, the degree 1 cohomology of vector fields on a Koszul-Tate resolution can be non-trivial.", 
"Example 3.18 Assume that $X\\subset M_0$ is a smooth submanifold, $I$ is the ideal of functions vanishing on $X$ , $\\delta $ is a Koszul-Tate differential which resolves $I$ , such that $X$ is the zero locus of $\\delta $ regarded as a homological vector field on a non-positively graded $M$ .", "It is possible to cover $M$ by graded coordinate charts such that either such a chart does not intersect $X$ , then the corresponding $\\mathfrak {i}_\\kappa $ is non-vanishing at all points, so we can use Lemma REF and technique from Corollary REF to show that the ${\\mathrm {ad}}_\\delta -$ cohomology of vector fields over this chart are vanishing, or there are adapted coordinates $(x^i, y^a, \\eta ^a, \\xi ^\\alpha , \\zeta ^a)$ such thatIn mathematical physics $(y^a, \\eta ^a, \\xi ^\\alpha , \\zeta ^a)$ would be called contractible pairs.", "$\\nonumber \\delta =\\sum _a y^a \\frac{\\partial }{\\partial \\eta ^a} + \\sum _\\alpha \\zeta ^\\alpha \\frac{\\partial }{\\partial \\xi ^\\alpha }\\,.$ In such case the intersection of the above coordinate chart with $X$ is given by equations $y^a=\\eta ^a=\\xi ^\\alpha =\\zeta ^\\alpha =0$ , therefore all sections of the restriction of $TM$ onto $X$ are of the form $\\nonumber \\mathpzc {v}(x, \\frac{\\partial }{\\partial x})+ \\sum _a \\left( f^a (x) \\frac{\\partial }{\\partial y^a} + h^a (x) \\frac{\\partial }{\\partial \\eta ^a}\\right)+\\sum _\\alpha \\left( \\lambda ^a (x) \\frac{\\partial }{\\partial \\zeta ^a} + \\mu ^\\alpha (x) \\frac{\\partial }{\\partial \\xi ^\\alpha }\\right)$ It is easy to see that $\\left\\lbrace \\frac{\\partial }{\\partial y^a}, \\frac{\\partial }{\\partial \\eta ^a}, \\frac{\\partial }{\\partial \\zeta ^a}, \\frac{\\partial }{\\partial \\xi ^\\alpha }\\right\\rbrace $ generate an acyclic complex w.r.t.", "$\\delta $ , therefore the cohomology of positive degree sections of $TM_{|X}$ are zero over this coordinate chart and thus on the whole $X$ as $\\delta $ is linear under the multiplication on functions on $M_0$ which allows us to apply the partition of unity technique.", "Example 3.19 On $M_0=\\mathbb {R}^2$ , equipped with affine coordinates $(x,y)$ , let $I$ be the ideal generated by $xy$ , and $X$ be an affine variety given by the equation $xy=0$ .", "A Koszul-tate resolution of $I$ is determined by a homological vector field $\\delta =xy\\frac{\\partial }{\\partial \\xi }$ on the non-positively graded affine manifold $(M_0,{\\mathcal {O}}_-)$ with graded coordinates $(x,y,\\xi )$ , where $\\xi $ has total degree $-1$ .", "It is routine to check directly that, as stated in Proposition REF : $\\nonumber H^0 ({\\mathfrak {X}}({\\mathcal {O}}_-), \\mathrm {ad}_\\delta ) \\simeq \\left\\lbrace x f(x)\\frac{\\partial }{\\partial x}+y g(y)\\frac{\\partial }{\\partial y} \\, | \\, f,g \\in C^\\infty (\\mathbb {R}) \\right\\rbrace .$ Since a degree 1 vector field is a ${\\mathrm {ad}}_\\delta $ -cocycle, since $[\\delta , \\frac{\\partial }{\\partial x} ] = y \\frac{\\partial }{\\partial \\xi } $ , $[\\delta , \\frac{\\partial }{\\partial y} ] = x \\frac{\\partial }{\\partial \\xi } $ , and $ [\\delta , \\xi \\frac{\\partial }{\\partial \\xi } ]= xy \\frac{\\partial }{\\partial \\xi } $ , and since the quotient of $C^\\infty (M_0) $ by the ideal generated by $x,y,xy$ is $ \\mathbb {R}$ , we also have $\\nonumber H^1 ({\\mathfrak {X}}({\\mathcal {O}}_-), \\mathrm {ad}_\\delta )=\\left\\lbrace \\lambda \\frac{\\partial }{\\partial \\xi }\\, | \\, \\lambda \\in \\mathbb {R} \\right\\rbrace \\,.$ In particular, the 
degree 1 cohomology is different from zero." ], [ "Koszul-Tate resolutions and singular locus $NQ$ -variety.", "Here is the main result of this section.", "Theorem 3.20 Let $M_0$ be a manifold and $ I \\subset C^\\infty (M_0) $ an ideal.", "Given a Koszul-Tate resolution $ (M_0,{\\mathcal {O}}_-, \\delta ) $ of $ I$ with a splitting ${\\mathcal {O}}_-=\\Gamma (S(\\oplus _{i \\ge 1} E_i^*)) $ a positively graded $Q$ -variety $(\\mathcal {K}_+,Q_+) $ on $C^\\infty (M_0)/I $ with a splitting $ \\mathcal {K}_+ = \\Gamma _I( S(\\oplus _{i \\ge 1} E_{-i}^*)) ,$ there exists a $Q$ -manifold $(M_0,{\\mathcal {O}},Q) $ with a splitting: ${\\mathcal {O}}\\simeq \\Gamma (\\tilde{S} (\\oplus _{i \\in \\mathbb {Z}^*} E_i^* ) ) $ whose negative part is the Koszul-Tate resolution $(M_0,{\\mathcal {O}}_-,\\delta ) $ , and whose positively graded $NQ$ -variety is $(\\mathcal {K}_+,Q_+)$ .", "Two such $Q$ -vector fields are diffeomorphic through a diffeomorphism which is the composition of flows of degree zero vector fields as in Proposition REF , and that induce the identity maps of the base manifold $M_0$ , of the negative part $ {\\mathcal {O}}_- $ , and of the positively graded NQ-variety $\\mathcal {K}_+ $ .", "Proof.", "The idea of the proof consists in applying perturbation theory techniques, and construct $Q$ (and the degree 0 vector fields defining $\\Psi $ ) through a recursion by showing that the obstructions for the next step are cohomology classes that vanish.", "Let $ (M_0,{\\mathcal {O}},\\tilde{\\delta })$ be as in Definition REF .", "The first step consists in applying Corollary REF : there exists a vector field $q_0$ of negative degree 0 and total degree $+1$ , such that $ [q_0, \\tilde{\\delta }]=0 $ (which implies that $ Q_0$ induces a derivation of $\\mathcal {K}_+ $ ) whose induced derivation of $\\mathcal {K}_+ $ is $Q_+$ .", "Now, the proof of the existence of the vector field $Q$ consists in constructing recursively a family $ (q_i)_{i \\ge 1}$ such that Each $q_i$ is of negative degree $i $ and total degree $ +1$ $Q_i = \\tilde{\\delta } + q_0 + \\dots + q_i $ satisfies $ [Q_i,Q_i]=0$ up to vector fields of negative degree $ \\ge i $ .", "For instance $Q_0= \\tilde{\\delta }+q_0 $ satisfies the recursion for $i=0$ since: $ [\\tilde{\\delta }+Q_0, \\tilde{\\delta }+Q_0] = [Q_0,Q_0]$ and since $[Q_0,Q_0] $ is of negative degree 0.", "Now, by the graded Jacobi identity of the graded Lie algebra of vector fields $\\mathfrak {X}({\\mathcal {O}}) $ , $ \\left[\\tilde{\\delta }+Q_0 ,[\\tilde{\\delta }+Q_0, \\tilde{\\delta }+Q_0]\\right]=0,$ so that $ {\\mathrm {ad}}_{\\tilde{\\delta }} \\left( [ Q_0,Q_0] \\right) =0 $ is an ${\\mathrm {ad}}_{\\tilde{\\delta }} $ -cocycle of negative degree 0.", "Now, since $Q_0 $ induces $Q_+ $ on $\\mathcal {K}_+ $ and since $Q_+^2=0 $ , the class of $[Q_0,Q_0]$ in $ \\mathcal {K}_+ \\otimes H^0(\\mathfrak {X}_{lin}, \\delta _{lin}) $ is zero, so that there exists a vector field $q_1$ of total degree $+1$ and negative degree $+1$ such that $ [\\tilde{\\delta }, q_1]=[q_0,q_0]$ .", "As a consequence $ Q_1 = \\tilde{\\delta } + q_0+q_1$ satisfies the recursion condition for $i=1$ .", "The proof then continues easily by noticing that if $Q_i:=\\tilde{\\delta } + \\sum _{k=1}^i q_k $ satisfies the recursion assumption at order $i$ , then $ [Q_i,Q_i] = \\sum _{k=0}^i [q_k,q_{i-k}] + R_i$ with $ R_i$ of negative degree $\\ge i+2 $ and $\\sum _{k=0}^i [q_k,q_{i-k}]$ being the component of negative degree $ i+1$ .", "By the graded Jacobi identity, this implies 
that $\\sum _{k=0}^i [q_k,q_{i-k}]$ is an ${\\mathrm {ad}}_{\\tilde{\\delta }} $ -cocycle of negative degree $i+1 $ .", "Since cohomology is zero in that degree by Corollary REF , there exists a vector field $ q_{i+1}$ of total degree 1 and negative degree $i+1 $ such that $-\\sum _{k=0}^i [q_k,q_{i-k}] = {\\mathrm {ad}}_{\\tilde{\\delta }} q_{i+1}$ which in turn implies that $ Q_{i+1} = \\tilde{\\delta }+\\sum _{k=0}^{i+1} q_k $ satisfies the recursion relation for $i+1$ .", "Now, the series $ \\tilde{\\delta } + \\sum _{i=0}^\\infty q_i $ converges with respect to the negative degree filtration $(F^i{\\mathcal {O}})_{i \\ge 0} $ .", "We denote by $Q$ its limit.", "By construction, $ [Q,Q] =0 $ (since $[Q,Q]$ is a derivation that takes values in $\\cap _{i \\ge 0} F^i {\\mathcal {O}}= \\lbrace 0\\rbrace $ ), and $Q$ has total degree $+1 $ , so that $ (M_0,{\\mathcal {O}},Q)$ is a $\\mathbb {Z}^*$ -graded $Q$ -manifold.", "By Remark REF , $Q$ satisfies both requirements in the Theorem REF .", "Now, let us show that any two such vector fields can be intertwined by a diffeomorphism for the desired form.", "Let $Q$ and $Q^{\\prime }$ be two vector fields as in Theorem REF .", "We will construct a family $\\mathpzc {u}_1, \\mathpzc {u}_2, \\mathpzc {u}_3, \\dots $ of total degree 0 and of respective negative degrees $1,2,3, \\dots $ such that the sequence of degree $+1 $ vector fields defined by the recursion relation $Q_{0}=Q $ and $Q_{i+1} = e^{{\\mathrm {ad}}_{\\mathpzc {u}_{i+1}}} Q_i$ (which is well-defined, see Section REF ) satisfies that $Q_i$ coincides with $ Q^{\\prime }$ in negative degrees $-1, \\dots , i-1 $ .", "Proposition REF implies then that the infinite composition of the exponentials of the vector fields $\\mathpzc {u}_i $ intertwines $Q$ and $Q^{\\prime }$ through a diffeomorphism $\\Psi $ which is by construction of the desired form.", "Let us first construct $\\mathpzc {u}_1 $ .", "We have: $ \\nonumber Q &=& \\tilde{\\delta } + q_0 + \\sum _{i\\ge 1} q_{i} \\\\ \\nonumber Q^{\\prime } &=& \\tilde{\\delta } + q^{\\prime }_0 + \\sum _{i\\ge 1} q^{\\prime }_{i}$ where $ q_i,q_i^{\\prime }$ are of negative degree $i$ .", "Now, since both $ q_0,q_0^{\\prime }$ are ${\\mathrm {ad}}_{\\tilde{\\delta }}$ -cocycles, so is $ q_0-q_0^{\\prime } $ .", "Since by construction, both $q_0 $ and $q_0^{\\prime } $ induce the same derivation $Q_+$ on $\\mathcal {K}_+ $ , their difference induce the trivial derivation of $\\mathcal {K}_+ $ .", "This implies that $q_0-q_0^{\\prime } $ is a ${\\mathrm {ad}}_{\\tilde{\\delta }}$ -coboundary by Corollary REF , and there exists a vector field $\\mathpzc {u}_{-1} $ of negative degree $ +1$ and total degree 0 such that $ q_0-q_0^{\\prime } = [\\tilde{\\delta }, \\mathpzc {u}_{-1}] .$ By construction, $\\nonumber \\nonumber Q_1 := e^{{\\mathrm {ad}}_{\\mathpzc {u}_{-1}}}(Q)=Q+\\sum _{m=1}^\\infty \\frac{1}{m!}", "ad^m_{u_{-1}}(Q)$ is well-defined, squares to zero, and satisfies again the requirements of Theorem REF .", "Also, it coincides with $Q^{\\prime }$ in negative degree $-1 $ and 0.", "Now, assume that $\\mathpzc {u}_1, \\dots , \\mathpzc {u}_{i} $ are constructed.", "Consider the decompositions according to negative degrees: $\\nonumber \\nonumber Q^{\\prime }= \\tilde{\\delta } + q_0^{\\prime } + \\dots + q_i^{\\prime } + q_{i+1}^{\\prime } + \\cdots \\\\\\nonumber Q_i = \\tilde{\\delta } + q_0^{\\prime } + \\dots + q_i^{\\prime } + q_{i+1} + \\cdots $ It follows from $[Q_i,Q_i]=0 $ and $[Q^{\\prime },Q^{\\prime }]=0 $ that ${\\mathrm 
{ad}}_{\\tilde{\\delta }} q_{i+1} =- \\sum _{k=0}^{i+1} [q_k,q_{i+1-k}] \\hbox{ and } {\\mathrm {ad}}_{\\tilde{\\delta }} q_{i+1}^{\\prime } = -\\sum _{k=0}^{i+1} [q_k,q_{i+1-k}] $ The difference $ q_{i+1}-q_{i+1}^{\\prime } $ is therefore an ${\\mathrm {ad}}_{\\tilde{\\delta }} $ -cocycle.", "Since by Corollary REF , the cohomology is zero in degree $i+1 $ , there exists a vector field $\\mathpzc {u}_{i+1} $ (of negative degree $i+1$ and total degree 0) such that $q_{i+1}^{\\prime } =q_{i+1} + {\\mathrm {ad}}_{\\tilde{\\delta }} \\mathpzc {u}_{i+1} $ .", "The vector field $Q_{i+1}$ defined as in (REF ) satisfies the recursion relation for $i+1$ .", "By Proposition REF , the infinite ordered product of automorphisms $\\Psi =\\lim \\limits _{k\\rightarrow \\infty } e^{\\mathpzc {u}_{-k}}\\circ \\ldots \\circ e^{{\\mathpzc {u}_{-1}}}$ exists and induces a diffeomorphism $\\Psi $ of the graded manifold $M$ .", "Furthermore, one has $\\Psi Q\\Psi ^{-1}=\\lim \\limits _{k\\rightarrow \\infty } e^{{\\mathrm {ad}}_{\\mathpzc {u}_{-k}}}\\circ \\ldots \\circ e^{{\\mathrm {ad}}_{\\mathpzc {u}_{-1}}} (Q)$ and $\\nonumber \\Psi Q\\Psi ^{-1} - Q^{\\prime } \\in \\bigcap _{j\\ge 0} F^j {\\mathcal {O}}(M)=\\lbrace 0\\rbrace \\,,$ therefore $\\Psi Q\\Psi ^{-1} - Q^{\\prime }=0$ .", "It is also clear that, since the total degree of each $\\mathpzc {u}_i$ is zero, $deg_+ (\\mathpzc {u}_{-i})=deg_- (\\mathpzc {u}_{-i})=i$ for each $i \\ge 1$ , so that the positive degree of $\\mathpzc {u}_1, \\mathpzc {u}_2, \\mathpzc {u}_3, \\dots $ is $1,2,3, \\dots $ respectively.", "This implies that $ \\Psi (F) -F \\in {\\mathcal {I}}_+ $ for every $F \\in {\\mathcal {O}}$ , and therefore that $ \\Psi $ induces the identity on the negative part ${\\mathcal {O}}_-={\\mathcal {O}}/{\\mathcal {I}}_+ $ .", "These degree relations also imply that $ \\Psi (F) -F \\in {\\mathcal {I}}_- $ , so that $\\Psi $ induces the identity of $ S(\\oplus _{i \\ge 1}\\Gamma (E_{-i}^*))$ , and therefore of its quotient $\\mathcal {K}_+ $ .", "$\\blacksquare $ Here is an immediate consequence of the second part of Theorem REF .", "Corollary 3.21 Any two $\\mathbb {Z}^*$ -graded manifolds $(M_0,{\\mathcal {O}},Q)$ and $ (M_0,{\\mathcal {O}}, Q^{\\prime })$ over the same graded manifold $(M_0,{\\mathcal {O}}) $ , whose negative parts coincide and are Koszul-Tate resolutions, and whose negatively graded NQ-varieties coincide, are diffeomorphic through a diffeomorphism as in Theorem REF ." 
], [ "$Q$ -structure in local coordinates", "In the following, we consider local coordinates of a graded manifold $(M_0,{\mathcal {O}})$ of the form $(y_1, \dots , y_r,(x_i)_{i \in I},\theta _1, \dots , \theta _r,(\eta _j)_{j \in J}), $ where the Latin letters $x_i,y_k$ will be used for degree 0 variables, while the Greek letters $\theta _k$ (resp. $\eta _j$ ) will be used for variables of degree $+1$ (resp. of non-zero degree).", "We will also assume that the variables $y_k$ and $\theta _k$ go “in pairs” and that there is the same number $r$ of them.", "Finally, an expression like $R\left(x,\eta , \frac{\partial }{\partial x_\bullet }, \frac{\partial }{\partial \eta _\bullet }\right)$ stands for any local vector field of the form: $ \sum _{i \in I } A_i(x,\eta ) \frac{\partial }{\partial x_i} +\sum _{j \in J } B_j(x,\eta ) \frac{\partial }{\partial \eta _j} $ where $A_i(x,\eta ), B_j(x,\eta ) $ are functions that depend on the variables $(x_i)_{i \in I},(\eta _j)_{j \in J} $ only.", "For any $Q$ -manifold $(M_0,{\mathcal {O}},Q)$ , equipped with a splitting $ \Phi \colon {\mathcal {O}}\simeq \Gamma \left( \hat{S} \oplus _{i \in \mathbb {Z}^*} E_i^* \right) , $ the anchor map $\rho \colon E_{-1} \rightarrow T M_0 $ is the vector bundle morphism defined by: $ \langle Q[f]^{(1)} , u \rangle = \rho (u) [f] ,$ for every $u \in \Gamma (E_{-1})$ and $f \in C^\infty (M_0)$ .", "Above, $Q[f]^{(1)}$ stands for the component of polynomial degree 1 of $ Q[f]$ : since $ Q[f]$ is of degree 1, $Q[f]^{(1)}$ is a section of $E_{-1}^*$ , so that the previous definition makes sense.", "Remark 4.1 By construction, the anchor map of a $Q$ -manifold is a vector bundle morphism $\rho \colon E_{-1} \rightarrow TM_0 $ that depends on the choice of the splitting, although the vector bundles $E_{-1}$ and $TM_0$ do not.", "For instance, in a splitting as in Proposition REF for which $Q = \mathfrak {i}_\kappa $ , the anchor map is the zero map.", "But it may be non-zero in some other splitting.", "However, at every $m $ that belongs to the zero locus of the curvature $\kappa \in \Gamma (E_{+1}) $ , the anchor map $ \rho \colon E_{-1} \rightarrow TM_0$ does not depend on the choice of a splitting, and is therefore canonical.", "$\square $ Remark REF implies that the following theorem only makes sense when the point $ m $ belongs to the zero locus of the curvature $\kappa $ .", "Theorem 4.2 Let $ (M_0,{\mathcal {O}},Q)$ be a $Q$ -manifold.", "Let $\rho \colon E_{-1} \rightarrow TM_0 $ be the anchor map corresponding to some splitting.", "Every point $m \in M_0$ on the zero locus of the curvature $\kappa $ admits a coordinate neighborhood with variables $(y_1, \dots , y_r,(x_i)_{i \in I},\theta _1, \dots , \theta _r, (\eta _j)_{j \in J}) $ on which $Q$ reads: $ Q = \sum _{k=1}^{r} {\theta _k}\frac{\partial }{\partial y_k} + R\left(x,\eta , \frac{\partial }{\partial x_\bullet }, \frac{\partial }{\partial \eta _\bullet }\right) $ where $r$ is the rank of the anchor map $\rho \colon E_{-1} \rightarrow TM_0 $ at $m$ .", "We start with a remark and two lemmas, before proving a proposition crucial for the proof of the above theorem.", "Remark 4.3 Any degree 0 vector field $\mathpzc {v}$ on a ${\mathbb {Z}}^*$ -graded manifold $(M_0,{\mathcal {O}})$ induces a vector field $\underline{\mathpzc {v}} $ on $M_0$ : a degree 0 vector field being, by definition, a degree 0 derivation of ${\mathcal {O}}$ , it preserves both
negative and positive functions, so it preserves the maximal ideal $\\mathcal {I}$ , and induces a derivation of the quotient ${\\mathcal {O}}/\\mathcal {I}$ , which is isomorphic to $ C^\\infty (M_0)$ .", "This induced derivation is a vector field on $M_0$ .", "In coordinates, this assignment reads: $\\begin{array}{rcl} {\\mathfrak {X}}({\\mathcal {O}})_0 &\\rightarrow & \\mathfrak {X}(M_0) \\\\ \\sum _{i} f_i(z,\\zeta ) \\frac{\\partial }{\\partial z_i} + \\sum _{j} g_j(z, \\zeta ) \\frac{\\partial }{\\partial \\zeta _j} &\\mapsto & \\sum _{i} f_i(z,0) \\frac{\\partial }{\\partial z_i} \\end{array}$ where $(z,\\zeta )$ are local coordinates of degree 0 and different from 0 respectively.", "$\\square $ Lemma REF extends to graded manifolds the well-known straightening theorem, also known as Hadamard Lemma.", "Lemma 4.4 Let $\\mathpzc {v}$ be a vector field of degree 0 on a graded manifold $ (M_0,{\\mathcal {O}})$ with $M_0$ .", "Every point of the base manifold $M_0$ where the induced vector field $\\underline{\\mathpzc {v}}$ is different from zero admits a coordinate neighborhood $( y,(x_i)_{i \\in I}, (\\eta _j)_{j \\in J})$ on which $\\mathpzc {v}= \\frac{\\partial }{\\partial y} $ .", "Proof.", "The proof is rather straightforward: use the general form of the coordinate changes on graded manifolds (cf.", "[9]).", "$\\blacksquare $ The following lemma is the result of an obvious computation.", "Lemma 4.5 Every vector field $Q$ , defined on a coordinate neighborhood $(y,x_\\bullet ,\\eta _\\bullet )$ , that satisfies $\\left[Q,\\frac{\\partial }{\\partial y}\\right]=0 $ is of the form: $ Q = \\tau (x_\\bullet ,\\eta _\\bullet ) \\frac{\\partial }{\\partial y}+ R\\left(x_\\bullet ,\\eta _\\bullet , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right).$ We can now prove the following statement: Proposition 4.6 Let $ (M_0,{\\mathcal {O}},Q)$ be a $Q$ -manifold equipped with a splitting.", "Let $\\rho \\colon E_{-1} \\rightarrow TM_0 $ be the corresponding anchor map.", "Every point $m \\in M_0$ in the zero locus of the curvature $\\kappa $ such that $\\rho _m : E_{-1} \\rightarrow TM_0 $ is not the zero map admits a coordinate neighborhood with variables $((y,x_\\bullet ), (\\theta , \\eta _\\bullet )) $ on which $Q$ reads: $ Q = {\\theta }\\frac{\\partial }{\\partial y} + R\\left(x,\\eta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet }\\right) .$ Proof.", "The map $\\rho : E_{-1} \\rightarrow TM_0 $ is different from zero at $m \\in M_0$ if and only if there exists a section $e$ in $\\Gamma (E_{-1}) $ such that the degree 0 vector field $ \\mathpzc {v}:= [ Q, \\mathfrak {i}_e] $ (which is of degree 0) has a basic vector field $\\underline{\\mathpzc {v}} $ (see Remark REF ) different from 0 at $m$ .", "By Lemma REF , there exists a coordinate neighborhood $(y,x_\\bullet , \\eta _\\bullet )$ on which $ \\mathpzc {v}= [ Q, \\mathfrak {i}_e] = \\frac{\\partial }{\\partial y} $ .", "Since $[\\mathpzc {v}, Q]=0$ , Lemma REF implies that in these coordinates: $ Q= \\tau (x,\\eta ) \\frac{\\partial }{\\partial y}+ R\\left(x,\\eta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right).$ Now, $\\tau (x,\\eta )= Q(y) $ is a degree $ +1$ function whose component in $\\Gamma ((E_{-1})^*) $ cannot be zero in view of $\\mathfrak {i}_e \\tau (x,\\eta ) = \\mathfrak {i}_e Q(y)=[\\mathfrak {i}_e , Q] (y) + Q (\\mathfrak {i}_e[y])= \\frac{\\partial }{\\partial y} (y) + Q 
(\\mathfrak {i}_e[y])=1 + Q (\\mathfrak {i}_e[y])$ and the fact that the projection of the degree 0 function $Q (\\mathfrak {i}_e[y]) $ on $C^\\infty (M_0)$ has to be an element of the zero locus ideal for degree reasons.", "We can therefore replace one of the degree $-1 $ variables in the coordinates $\\eta _\\bullet $ by $\\tau (x,\\eta ) $ : we denote by $\\theta $ this new variable.", "Since $ \\theta =\\tau (x,\\eta )$ does not depend on the variable $y$ , this change of coordinates does not affect $\\theta \\frac{\\partial }{\\partial y} $ and changes $R$ in a vector field that again does not depend on $y$ nor contains $\\frac{\\partial }{\\partial y} $ .", "But it may contain a component in $\\frac{\\partial }{\\partial \\theta } $ .", "In conclusion: $ Q= \\theta \\frac{\\partial }{\\partial y} + \\tilde{R}\\left(x,\\eta , \\theta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right) + S(x_\\bullet , \\eta ) \\frac{\\partial }{\\partial \\theta }.$ Since $Q^2(y)=Q(\\theta )=0 $ , we have $S(x_\\bullet , \\eta _\\bullet ) =0$ and therefore: $ Q= \\theta \\frac{\\partial }{\\partial y} + \\tilde{R}\\left(x,\\eta , \\theta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right)$ Since $\\theta ^2=0 $ , we have: $ \\tilde{R}\\left(x,\\eta , \\theta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right)=A\\left(x,\\eta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right) + \\theta B\\left(x,\\eta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right)$ so that $ Q = \\theta \\left( \\frac{\\partial }{\\partial y} +B\\left(x,\\eta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet }\\right)\\right)+ A\\left(x,\\eta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet } \\right) $ There exists local coordinates $(y^{\\prime },x_\\bullet ^{\\prime }, \\eta ^{\\prime }) $ leaving $\\theta $ untouched, where $ \\frac{\\partial }{\\partial y} +B\\left(x,\\eta , \\frac{\\partial }{\\partial x_\\bullet }, \\frac{\\partial }{\\partial \\eta _\\bullet }\\right)= \\frac{\\partial }{\\partial y^{\\prime }}.", "$ We have in these coordinates: $ Q= \\theta \\frac{\\partial }{\\partial y^{\\prime }} + A^{\\prime }\\left(x^{\\prime },\\eta ^{\\prime },y^{\\prime }, \\frac{\\partial }{\\partial x_\\bullet ^{\\prime }}, \\frac{\\partial }{\\partial \\eta _\\bullet ^{\\prime }}, \\frac{\\partial }{\\partial y^{\\prime }} \\right).$ Since $Q^2=0$ , $A^{\\prime }$ does not depend on $y^{\\prime }$ , and: $ Q= \\theta \\frac{\\partial }{\\partial y^{\\prime }} + A^{\\prime \\prime }\\left(x^{\\prime },\\eta ^{\\prime }, \\frac{\\partial }{\\partial x_\\bullet ^{\\prime }}, \\frac{\\partial }{\\partial \\eta _\\bullet ^{\\prime }}, \\right)+ T(x^{\\prime },\\eta ^{\\prime })\\frac{\\partial }{\\partial y^{\\prime }} .$ We now replace $\\theta $ by $\\theta ^{\\prime } = \\theta + T(x^{\\prime },\\eta ^{\\prime }) $ .", "Since $ (\\theta +T(x^{\\prime },\\eta ^{\\prime })) = Q(y^{\\prime })= \\theta ^{\\prime }$ , we have $A^{\\prime \\prime }\\left(x^{\\prime },\\eta ^{\\prime }, \\frac{\\partial }{\\partial x_\\bullet ^{\\prime }}, \\frac{\\partial }{\\partial \\eta _\\bullet ^{\\prime }}, \\frac{\\partial }{\\partial \\theta _\\bullet ^{\\prime }} \\right)\\theta ^{\\prime }=0$ , so that $A^{\\prime 
\prime }$ has no component in $\frac{\partial }{\partial \theta _\bullet ^{\prime }}$ and the vector field $Q$ has the desired form in these coordinates.", "This completes the proof.", "$\blacksquare $ Proof of Theorem REF .", "The theorem is now an immediate consequence of Proposition REF , upon making a finite recursion until the corresponding anchor map vanishes.", "$\blacksquare $" ], [ "Examples and non-examples", "Example 4.7 Theorem REF , when applied to Lie algebroids, gives back a classical result [5], which itself is similar to the Weinstein splitting theorem for Poisson manifolds [19].", "For Lie $\infty $ -algebroids, Theorem REF gives back a similar statement from [4].", "Example 4.8 For a Koszul-Tate resolution, Theorem REF does not give any interesting result, since the anchor is zero at every point of the zero locus.", "Example 4.9 For a positively graded $Q$ -manifold over a manifold $M_0$ , the image of the anchor map $ \rho \colon \Gamma (E_{-1}) \longrightarrow {\mathfrak {X}}(M_0) $ is a singular foliation in the sense of [1], i.e. a locally finitely generated $C^\infty (M_0)$ -sub-module of ${\mathfrak {X}}(M_0) $ closed under Lie bracket.", "For a $\mathbb {Z}^*$ -graded $Q$ -manifold with splitting, whose dual Lie $\infty $ -algebroid has anchor maps $(\rho _n)_{n \ge 1} $ , it is natural to ask if $ \bigoplus _{n \ge 1} \rho _n(\Gamma (S^n \oplus _{i \in \mathbb {Z}} E_i)_{-1}) $ is still a singular foliation.", "The answer is no: it is certainly a $C^\infty (M_0)$ -sub-module of ${\mathfrak {X}}(M_0)$ , but, even when it is locally finitely generated, it may not be stable under Lie bracket.", "Here is a class of counter-examples: let $M_0$ be a manifold, let $X_1 , X_2$ be vector fields such that $ [X_1,X_2]$ is not in the $C^\infty (M_0)$ -module generated by $X_1,X_2$ , let $\theta _1,\theta _2, \eta $ be additional variables of respective degrees 2, 2 and $-1 $ , and consider $ Q= \eta \theta _1 X_1 + \eta \theta _2 X_2 .$ It is straightforward to check that $Q$ is a degree $+1 $ vector field squaring to zero.", "The 2-ary anchor map is not zero and its image is the $C^\infty (M_0)$ -module generated by $X_1,X_2$ , which, by assumption, is not stable under Lie bracket.", "Example 4.10 Here is an example of a $Q$ -manifold with a splitting, whose 2-ary anchor is not valued in vector fields tangent to the zero locus: $ Q = (x - \epsilon \zeta ) \frac{\partial }{ \partial \eta } + \zeta \xi \frac{\partial }{\partial x}+ \xi \frac{\partial }{ \partial \epsilon }, $ where $x$ is a degree 0 variable and $\eta , \zeta ,\xi ,\epsilon $ are variables of respective degrees $-1,3,-2,-3$ ."
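, "As a quick sanity check, not spelled out in the original examples and using the standard Koszul sign rule $Q(ab)=Q(a)b+(-1)^{|a|}aQ(b)$ for the degree $+1$ derivation $Q$ , one may verify directly that the vector field of Example 4.10 squares to zero.", "On the generators one has $Q(x)=\zeta \xi $ , $Q(\eta )=x-\epsilon \zeta $ , $Q(\epsilon )=\xi $ and $Q(\zeta )=Q(\xi )=0$ , hence $Q^2(x)=Q(\zeta )\xi -\zeta Q(\xi )=0$ , $Q^2(\eta )=Q(x)-Q(\epsilon )\zeta +\epsilon Q(\zeta )=\zeta \xi -\xi \zeta =0$ (because $\zeta \xi =(-1)^{3\cdot (-2)}\xi \zeta =\xi \zeta $ ), and $Q^2(\epsilon )=Q(\xi )=0$ ; since $Q^2=\frac{1}{2}[Q,Q]$ is again a derivation, it therefore vanishes identically.", "Each of the three terms of $Q$ is indeed of total degree $+1$ ; for instance $\xi \frac{\partial }{\partial \epsilon }$ has degree $-2+3=+1$ .", "Similarly, in Example 4.9, if the additional variables $\theta _1,\theta _2,\eta $ are understood as fiber coordinates of trivial line bundles (so that the extensions of $X_1,X_2$ annihilate them), then every term of $Q\circ Q$ contains the factor $\eta ^2$ , which vanishes since $\eta $ is of odd degree $-1$ ; this gives the announced square-zero property."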
], [ "Conclusion / perspectives", "As mentioned in the introduction, the results of [9] on ${\mathbb {Z}}$ -graded manifolds and the technique of filtrations of functional spaces open a way to understanding the form of various geometric and algebraic structures on them.", "This made it possible, for example, to extend the results of [8] to the honest ${\mathbb {Z}}$ -graded case and to develop them in [10].", "In the current paper we have added an important ingredient to the picture, namely a $Q$ -structure, thus describing the normal form of differential ${\mathbb {Z}}^*$ -graded manifolds.", "Our common thread is that “for a $\mathbb {Z}^*$ -graded $Q$ -manifold, only the zero locus of the curvature matters”: Proposition REF should be understood as meaning that outside the zero locus of their curvatures, $\mathbb {Z}^*$ -graded $Q$ -manifolds have a very trivial structure; then Theorem REF makes this general idea more precise, by stating that the positive part of a $Q$ -manifold over its zero locus is the only piece that matters when its negative part is a Koszul-Tate resolution; and lastly, Theorem REF adds another layer to the same general idea, by stating that, at a point in the zero locus, the anchor map and its transverse $Q$ -manifold are the only two non-trivial pieces of information.", "On top of the pure mathematical significance of the above results, we expect them to have straightforward consequences for gauge theories.", "According to [6], under rather natural assumptions one can read off a $Q$ -structure from the equations governing the theory.", "This language is also widely used for various quantization problems.", "Then, as explained in [11], a lot of information can be encoded in the language of mappings between $Q$ -manifolds: the equations of motion (i.e. extrema of the functional describing the model) correspond to $Q$ -morphisms, and gauge transformations (symmetries) to $Q$ -homotopies.", "In this setting, reducing a $Q$ -structure to a (simple) canonical form by a homotopy would mean gauge fixing in an intelligent way." ], [ "Projective systems of algebras", "We call a projective system of algebras a pair made of a sequence $(A^{i})_{i \in {\mathbb {N}}}$ of algebras, and a family of algebra morphisms $\pi ^{[i \rightarrow j]} \colon A^{i} \rightarrow A^{j} $ , defined for all integers $i \ge j $ , subject to the following two conditions: $\pi ^{[i\rightarrow i]} ={\mathrm {id}}_{A^{i}} $ and $ \pi ^{[j\rightarrow k]} \circ \pi ^{[i \rightarrow j]} = \pi ^{[i \rightarrow k]}, \; \; \forall i \ge j \ge k.
$ A endomorphism of projective algebras is a family $(\\phi ^{[i]})_{i \\in {\\mathbb {N}}} $ of algebra endomorphisms $ \\phi ^{[i]}: A^{i} \\rightarrow A^{i}$ , defined for all $i \\in {\\mathbb {N}}$ such that $\\phi ^{[j]} \\circ \\pi ^{[i\\rightarrow j]}= \\pi ^{[i\\rightarrow j]} \\circ \\phi ^{[i]}$ for all $i \\ge j $ .", "The following diagram recapitulates the above commutativity properties for all $i \\ge j \\ge k$ : ${A^{i} [rrd]|-{\\pi ^{[i\\rightarrow j]}}[rrrr]|-(.7){\\pi ^{[i\\rightarrow k]}} && && A^{k} \\\\&&A^{j}[rru]|-{\\pi ^{[j \\rightarrow k]}} &&\\\\A^{i}[rrrr]|(0.5)|-(.7){\\pi ^{[i\\rightarrow k]}} [uu]|-(.6){\\phi ^{[i]}} [rrd]|-{\\pi ^{[i\\rightarrow j]}} && && A^{k} [uu]|-(.6){\\phi ^{[k]}}\\\\&&A^{j} [uu]|-(.7){\\phi ^{[j]}} [rru]|-{\\pi ^{[j\\rightarrow k]}}&& }$ We define the projective limit $A^{\\infty } $ of a projective system of algebras to be the algebra of collections $i \\mapsto a^{i} \\in A^{i} $ such that $\\pi ^{[i\\rightarrow j]}(a^{i})=a^{j} $ for all $i\\ge j $ .", "By assigning to such a collection its $i$ -th component, one defines, for all $i \\in {\\mathbb {N}}$ , algebra morphisms $\\pi ^{[\\infty \\rightarrow i]}\\colon A^{\\infty } \\rightarrow A^{[i]}$ that satisfy: $ \\pi ^{[j\\rightarrow k]} \\circ \\pi ^{[\\infty \\rightarrow j]} = \\pi ^{[\\infty \\rightarrow k]}, \\; \\; \\forall j \\ge k. $ For any morphism of projective algebras $(\\phi ^{[i]})_{i \\in {\\mathbb {N}}} $ , there exists a unique algebra endomorphism $\\phi ^{[\\infty ]} \\colon A^{\\infty } \\rightarrow A^{\\infty } $ such that $\\phi ^{[i]} \\circ \\pi ^{[\\infty \\rightarrow i]} = \\pi ^{[\\infty \\rightarrow i]} \\circ \\phi ^{[\\infty ]} $ .", "${A^{\\infty } [rrr]|-{\\pi ^{[\\infty \\rightarrow i]}} &&& A^{i} \\\\A^{\\infty } [u]^{\\phi ^{[\\infty ]}} [rrr]|-{\\pi ^{[\\infty \\rightarrow i]}} &&& A^{i} [u]^{\\phi ^{[i]}}}$ We call $\\phi ^{[\\infty ]}\\colon A^{\\infty } \\rightarrow A^{\\infty } $ the projective limit of $(\\phi ^{[i]})_{i \\in {\\mathbb {N}}} $.", "Proposition 1.1 Let $\\left(A^{i},\\pi ^{[i\\rightarrow j]}\\right)$ be a projective system of algebras.", "For any family $(\\phi _N)_{N \\in {\\mathbb {N}}} $ of endomorphisms of the latter such that $ \\phi _{N}^{[i]} = {\\mathrm {id}}_{A^{i}} $ for all $N \\ge i $ , the sequence of algebra endomorphisms defined for all $i \\in {\\mathbb {N}}$ by $\\begin{array}{rrcl} \\psi ^{[i]} : & A^i& \\rightarrow & A^i\\\\ &a &\\mapsto & \\cdots \\circ \\phi _{3}^{[i]}\\circ \\phi _{2}^{[i]} \\circ \\phi _{1}^{[i]}(a) \\\\ & & & = \\phi _{i}^{[i]} \\circ \\dots \\circ \\phi _{1}^{[i]}(a) \\; \\hbox{ (by assumption)}\\end{array}$ is an endomorphism of projective systems of algebras.", "The projective limit $ \\psi ^{[\\infty ]} \\colon A^{\\infty } \\rightarrow A^{\\infty }$ must be understood as the infinite composition of all the $(\\phi _i)_{i \\in {\\mathbb {N}}} $ , it will therefore be denoted by $\\bigcirc _{i\\uparrow \\in {\\mathbb {N}}} \\, \\phi _i$ or $\\prod _{i\\uparrow \\in {\\mathbb {N}}} \\phi _i$ , where by “$i \\uparrow \\in {\\mathbb {N}}$ ” we mean the ordered index $i$ .", "Acknowledgments.", "We are thankful to the “Research in Paris” program of the Institut Henri Poincaré, which hosted us in the beginning of our work on this paper.", "The work was also partially supported by the CNRS MITI Project “GraNum” and PHC Procope “GraNum 2.0”.", "A.K.", "also appreciates the support of the Faculty of Science of the University of Hradec Králové." 
] ]
2212.05579
[ [ "A Bombieri-Vinogradov-type theorem for moduli with small radical" ], [ "Abstract In this article, we extend our recent work on a Bombieri-Vinogradov-type theorem for sparse sets of prime powers $p^N\\le x^{1/4-\\varepsilon}$ with $p\\le (\\log x)^C$ to sparse sets of moduli $s\\le x^{1/4-\\varepsilon}$ with radical rad$(s)\\le x^{9/40}$.", "To derive our result, we combine our previous method with a Bombieri-Vinogradov-type theorem for general moduli $s\\le x^{9/40}$ obtained by Roger Baker." ], [ "Introduction and main results", "Let $\\Lambda (n)$ be the von Mangoldt function, $q$ be a positive integer and $a$ be an integer coprime to $q$ .", "For $x\\geqslant 2$ let $\\pi (x;q,a):=\\sharp \\lbrace p\\leqslant x : p \\mbox{ prime and } p\\equiv a \\bmod {q}\\rbrace $ and set $F(x;q,a):=\\pi (x;q,a)- \\frac{1}{\\varphi (q)} \\int \\limits _2^x \\frac{dt}{\\log t},$ which is the error term in the prime number theorem for the arithmetic progression $a \\bmod {q}$ .", "Further, define $F(x,q):=\\max \\limits _{\\begin{array}{c}a\\\\ (a,q)=1\\end{array}} |F(x;q,a)| \\quad \\mbox{and} \\quad F^{\\ast }(x,q):=\\max \\limits _{y\\leqslant x} |F(y,q)|.$ Set $\\mathcal {L}:=\\log x$ throughout the sequel.", "The Bombieri-Vinogradov theorem implies that $F^{\\ast }(x,q)\\leqslant \\frac{\\pi (x)}{\\varphi (q)\\mathcal {L}^A}$ for all integers $q\\in (Q,2Q]$ with at most $O(Q\\mathcal {L}^{-A})$ exceptions, provided that $Q\\leqslant x^{1/2}\\mathcal {L}^{-2A-6}$ .", "Under GRH, the above inequality would hold for all $q\\leqslant x^{1/2-\\varepsilon }$ .", "In slightly modified form (for $\\psi (x;q,a)$ in place of $\\pi (x;q,a)$ ), Roger Baker [2] proved the following result, significantly restricting the set of exceptional moduli.", "The cost is that $Q$ is restricted to a smaller interval as well.", "Theorem 1 (Baker) Let $Q\\leqslant x^{9/40}$ .", "Let $\\mathcal {S}$ be a set of pairwise relatively prime integers in $(Q,2Q]$ .", "Then the number of $q$ in $\\mathcal {S}$ for which $F^{\\ast }(x,q)>\\frac{\\pi (x)}{\\varphi (q)\\mathcal {L}^A}$ is $O(\\mathcal {L}^{34+A})$ .", "(The above result follows easily using partial summation from Baker's original result.)", "In [1], we extended the above range to $Q\\leqslant x^{1/4-\\varepsilon }$ for the case when $\\mathcal {S}$ consists of powers of primes $p$ such that $p\\leqslant \\mathcal {L}^C$ for some fixed but arbitrary constant $C>0$ .", "In slightly modified form (with $x$ in place of $\\pi (x)$ ), we established the following result.", "Theorem 2 (Baier-Pujahari, 2022) Fix $C\\geqslant 6$ and $\\varepsilon >0$ .", "Assume that $x^{\\varepsilon }\\leqslant Q\\leqslant x^{1/4-\\varepsilon }$ .", "Let $\\mathcal {S}\\subseteq (Q,2Q]\\cap \\mathbb {N}$ be a set of powers of distinct primes $p\\leqslant \\mathcal {L}^C$ .", "Then the number of $q$ in $\\mathcal {S}$ for which $F^{\\ast }(x,q)>\\frac{\\pi (x)}{\\varphi (q)\\mathcal {L}^A}$ is $O_{C,\\varepsilon }(\\mathcal {L}^{14+2A})$ .", "(Here we decided to put $\\pi (x)$ in place of $x$ in the numerator for an aesthetic reason: The main term in the prime number theorem for arithmetic progressions with modulus $q$ is approximately of size $\\pi (x)/\\varphi (q)$ .", "Moreover, the inequalities in Theorems REF and REF above coincide.)", "In the present article, we substantially extend the family of admissible sets $\\mathcal {S}$ in Theorem REF , but keep their cardinalities small.", "We prove the following.", "Theorem 3 Fix $\\varepsilon >0$ .", "Assume that $x\\geqslant 3$ and 
$x^{\\varepsilon }\\leqslant Q\\leqslant x^{1/4-\\varepsilon }$ .", "Let $\\mathcal {S}\\subseteq (Q,2Q]\\cap \\mathbb {N}$ be a set of pairwise relatively prime integers $s$ with radicals $\\mbox{\\rm rad}(s)\\leqslant x^{9/40}$ .", "Suppose that $\\sharp \\mathcal {S}=\\mathcal {L}^C\\leqslant x^{1/4-\\varepsilon }Q^{-1}$ with $C>35$ and $A<\\min \\lbrace C/2-9,C-35\\rbrace $ .", "Then the number of integers $s$ in $\\mathcal {S}$ for which $F^{\\ast }(x,s)>\\frac{\\pi (x)}{\\varphi (s)\\mathcal {L}^A}$ is $O_{\\varepsilon }\\left(\\mathcal {L}^{18+2A}+\\mathcal {L}^{(70+2A+C)/3}\\right)$ .", "Here we define the radical $\\mbox{rad}(s)$ of an integer $s$ as the product of its prime divisors, i.e.", "$\\mbox{rad}(s):=\\prod \\limits _{p|s} p.$ (We reserve the symbol $p$ for primes throughout this article.)", "The idea behind the proof of the above Theorem REF in [1] was to use Harman's sieve to compare the number of primes in the sets $\\mathcal {A}:=\\lbrace n\\leqslant y \\ :\\ n\\equiv e \\bmod {p^N}\\rbrace $ and $\\mathcal {B}:=\\lbrace n\\leqslant y \\ :\\ n\\equiv d \\bmod {p}\\rbrace ,$ where we assumed $2\\leqslant y\\leqslant x$ , $p\\leqslant \\mathcal {L}^C$ and $e\\equiv d \\bmod {p}$ so that $\\mathcal {A} \\subseteq \\mathcal {B}$ .", "In this article, we extend this idea.", "Here we consider residue classes $\\mathcal {A}^{\\prime }:=\\lbrace n\\leqslant y \\ :\\ n\\equiv e \\bmod {s}\\rbrace $ and $\\mathcal {B}^{\\prime }:=\\lbrace n\\leqslant y \\ :\\ n\\equiv d \\bmod {q}\\rbrace ,$ where we assume $2\\leqslant y\\leqslant x$ , $q=\\mbox{rad}(s)\\leqslant x^{9/40}$ and $e\\equiv d\\bmod {q}$ so that $\\mathcal {A}^{\\prime }\\subseteq \\mathcal {B}^{\\prime }$ .", "In [1], we controlled the cardinality of primes in $\\mathcal {B}$ by the Siegel-Walfisz theorem.", "This forced us to choose $p$ rather small, namely $p\\leqslant \\mathcal {L}^C$ .", "Here we use Baker's Theorem REF above to control the cardinality of primes in $\\mathcal {B}^{\\prime }$ .", "By the said theorem, this cardinality satisfies the predicted asymptotic for all $q$ in $\\lbrace \\mbox{rad}(s): s\\in \\mathcal {S}\\rbrace $ with a small number of exceptions, which we discard.", "Our method is essentially the same as in [1], where $p$ is replaced by $q$ and $p^N$ by $s$ .", "Even though we repeat arguments from [1], we shall present our proofs in full detail to keep the paper self-contained, except for the proof of Proposition REF below."
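, "To make the objects of Theorem REF concrete, the following small Python sketch (our illustration only; the numerical values of $x$ and $Q$ are toy choices far below the ranges relevant for the theorem) computes radicals and greedily collects pairwise coprime moduli $s\\in (Q,2Q]$ with $\\mbox{rad}(s)\\leqslant x^{9/40}$ : \\begin{verbatim}
from math import gcd

def rad(s):
    # radical of s: product of the distinct primes dividing s
    r, d = 1, 2
    while d * d <= s:
        if s % d == 0:
            r *= d
            while s % d == 0:
                s //= d
        d += 1
    return r * (s if s > 1 else 1)

x = 10 ** 8                 # toy value (assumption), only so the example runs instantly
Q = int(x ** 0.20)          # some Q with x^eps <= Q <= x^(1/4 - eps)
bound = x ** (9 / 40)

S = []                      # greedy choice of an admissible family S
for s in range(Q + 1, 2 * Q + 1):
    if rad(s) <= bound and all(gcd(s, t) == 1 for t in S):
        S.append(s)

print(Q, S)
# For realistic sizes of x one would rather take, e.g., high prime powers p^N,
# whose radical p is automatically very small, as in the earlier paper [1].
\\end{verbatim}"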
], [ "Notations and preliminaries", "We shall apply Theorem REF to generate primes in a residue class $d_q \\bmod {q}$ , where we assume that $q$ is square-free and satisfies $q\\leqslant x^{9/40}$ .", "Then for $s$ satisfying $\\mbox{rad}(s)=q$ , we sieve for primes in a residue class $e_q \\bmod {s}$ contained in the residue class $d_q \\bmod {q}$ , i.e.", "with $e_q\\equiv d_q\\bmod {q}$ .", "Here we assume that $s\\in (Q,2Q]$ with $x^{\\varepsilon }\\leqslant Q\\leqslant x^{1/4-\\varepsilon }$ and $(d_q,q)=1$ (and hence $(e_q,s)=1$ ).", "For $y_q\\leqslant x$ , we set $\\mathcal {A}_q:=\\left\\lbrace n\\leqslant y_q: n \\equiv e_q \\bmod {s}\\right\\rbrace ,$ $\\mathcal {B}_q:=\\left\\lbrace n\\leqslant y_q: n \\equiv d_q \\bmod {q}\\right\\rbrace .$ If $\\mathcal {M}$ is a finite set of integers and $z\\geqslant 1$ , we use the notation $S(\\mathcal {M},z):=\\sharp \\lbrace n\\in \\mathcal {M} : p|n \\mbox{ prime } \\Rightarrow p\\geqslant z\\rbrace ,$ which is common in sieve theory.", "We note that $ \\pi (y_q;s,e_q)=S(\\mathcal {A}_q,x^{1/2})+O\\left(\\frac{x^{1/2}}{s}+1\\right)$ and $ \\pi (y_q;q,d_q)=S(\\mathcal {B}_q,x^{1/2})+O\\left(\\frac{x^{1/2}}{q}+1\\right).$ To deduce information on $S(\\mathcal {A}_q,x^{1/2})$ from information on $S(\\mathcal {B}_q,x^{1/2})$ , we use Harman's sieve with averaging of $q$ over a subset $\\mathcal {Q}$ of the set $\\lbrace \\mbox{rad}(s):s\\in \\mathcal {S}\\rbrace $ .", "We recall that this set consists of square-free numbers $q\\leqslant x^{9/40}$ .", "We apply the following extension of [1].", "Proposition 4 (Version of Harman's sieve for residue classes) Suppose that for all sequences $(a_m)_{m\\in \\mathbb {N}}$ and $(b_n)_{n\\in \\mathbb {N}}$ with $|a_m|\\leqslant \\tau (m)$ and $|b_n|\\leqslant \\tau (n)$ , we have $ \\sum \\limits _{q\\in \\mathcal {Q}} \\left| \\sum \\limits _{\\begin{array}{c}mn\\in \\mathcal {A}_q\\\\ m\\leqslant M\\end{array}} a_m -\\lambda \\sum \\limits _{\\begin{array}{c}mn\\in \\mathcal {B}_q\\\\ m\\leqslant M\\end{array}} a_m \\right|^2 \\leqslant Y$ and $ \\sum \\limits _{q\\in \\mathcal {Q}} \\left| \\sum \\limits _{\\begin{array}{c}mn\\in \\mathcal {A}_q\\\\ x^{\\alpha }< m\\leqslant x^{\\alpha +\\beta }\\end{array}} a_mb_n- \\lambda \\sum \\limits _{\\begin{array}{c}mn\\in \\mathcal {B}_q\\\\ x^{\\alpha }< m\\leqslant x^{\\alpha +\\beta }\\end{array}} a_mb_n \\right|^2 \\leqslant Y$ for some fixed $\\lambda >0$ , $0<\\alpha <1$ , $\\beta \\leqslant 1/2$ , $M>x^{\\alpha }$ and $Y\\geqslant 1$ .", "Then $ \\sum \\limits _{q\\in \\mathcal {Q}} \\left| S\\left(\\mathcal {A}_q,x^{\\beta }\\right)-\\lambda S\\left(\\mathcal {B}_q,x^{\\beta }\\right)\\right|^2 = O\\left(Y\\mathcal {L}^6\\right).$ The proof of the above Proposition REF is literally the same as that of [1] in the appendix of [1], except that $p$ is replaced by $q$ and $\\mathcal {P}$ by $\\mathcal {Q}$ .", "We therefore skip this proof here.", "Following usual custom, we call the bilinear sums in (REF ) type I sums and the bilinear sums in (REF ) type II sums.", "As in [1], we shall obtain a satisfactory type I estimate by elementary means and a satisfactory type II estimate by using a dispersion argument, followed by an application of the large sieve after detecting the implicit congruence relations using Dirichlet characters.", "Below is a version of the large sieve for Dirichlet characters (see [3], for example).", "Proposition 5 (Large Sieve) Let $Q$ and $N$ be positive integers and $M$ be an integer.", "Then, we have $\\sum \\limits 
_{q\\leqslant Q} \\frac{q}{\\varphi (q)} \\mathop {\\mmlmultiscripts{\\sum {\\mmlnone }{\\ast }}}\\limits \\limits _{\\chi \\bmod q} \\left| \\sum \\limits _{M<n\\leqslant M+N} a_n\\chi (n)\\right|^2 \\leqslant \\left(Q^2+N-1\\right) \\sum \\limits _{M<n\\leqslant M+N} |a_n|^2,$ where the asterisk indicates that the sum is restricted to primitive characters.", "Another technical devise which we shall use to make ranges of variables independent is the following approximate version of Perron's formula (see [3]).", "Proposition 6 (Perron's formula) Let $c>0$ , $N\\geqslant 2$ and $T\\geqslant 2$ .", "Let $(c_n)_{n\\in \\mathbb {N}}$ be a sequence of complex numbers and assume that the corresponding Dirichlet series $\\sum \\limits _{n=1}^{\\infty } c_nn^{-z}$ converges absolutely for $z=c$ .", "Then $\\sum \\limits _{n\\leqslant N} c_n=\\frac{1}{2\\pi i} \\int \\limits _{c-iT}^{c+iT} \\left(\\sum \\limits _{n=1}^{\\infty } c_nn^{-z}\\right) N^z \\frac{dz}{z}+O\\left(\\frac{N^c}{T} \\sum \\limits _{n=1}^{\\infty } |c_n|n^{-c} + C_N\\left(1+\\frac{N\\log N}{T}\\right)\\right),$ where $C_N:=\\max \\limits _{3N/4\\leqslant n\\leqslant 5N/4} |c_n|.$" ], [ "Proof of Theorem ", "We take $\\mathcal {Q}:=\\left\\lbrace \\mbox{rad}(s) : s\\in \\mathcal {S}, \\ F^{\\ast }(x,q)\\leqslant \\frac{\\pi (x)}{\\varphi (q)\\mathcal {L}^{B+1}}\\right\\rbrace $ for a suitable $B$ to be fixed later depending on $A$ and $C$ .", "Using Theorem REF , we have $ \\sharp (\\lbrace \\mbox{rad}(s) : s\\in \\mathcal {S}\\rbrace \\setminus \\mathcal {Q}) = O\\left(\\mathcal {L}^{35+B}\\right).$ To prove Theorem REF , we apply Proposition REF with $\\beta =1/2$ and $M=x^{\\alpha }+1$ , where the parameter $\\alpha $ will later be optimized.", "We shall establish (REF ) and (REF ) with $ \\lambda :=\\frac{q}{s} \\quad \\mbox{and} \\quad Y:=\\frac{\\pi (x)^2\\mathcal {L}^{12}}{Q^2},$ where we recall that $s\\in (Q,2Q]$ for all $q\\in \\mathcal {Q}$ .", "Using the Cauchy-Schwarz inequality and the definitions of $F(y_q;q,d_q)$ , $F(y_q;s,e_q)$ and $\\mathcal {Q}$ , we obtain $\\begin{split}\\sum \\limits _{q\\in \\mathcal {Q}} \\left|F(y_q;s,e_q)\\right|^2\\leqslant &\\sum \\limits _{q\\in \\mathcal {Q}} \\left|F(y_q;s,e_q)-\\frac{q}{s}F(y_q;q,d_q)\\right|^2+\\sum \\limits _{q\\in \\mathcal {Q}} \\left|\\frac{q}{s}F(y_q;q,d_q)\\right|^2\\\\= & \\sum \\limits _{q\\in \\mathcal {Q}} \\left|\\pi (y_q;s,e_q)-\\frac{q}{s} \\pi (y_q;q,d_q)\\right|^2+O\\left(\\frac{\\pi (x)^2\\sharp \\mathcal {Q}}{Q^2\\mathcal {L}^{2B}}\\right),\\end{split}$ where we note that $\\frac{\\varphi (q)}{\\varphi (s)}=\\frac{q}{s}.$ Further, using (REF ) together with (REF ) and (REF ), we have $\\sum \\limits _{q\\in \\mathcal {Q}} \\left|\\pi (y_q;s,e_q)-\\frac{q}{s} \\pi (y_q;q,d_q)\\right|^2=O\\left(\\frac{\\pi (x)^2\\mathcal {L}^{18}}{Q^2}\\right)$ for $Q$ as in Theorem REF .", "Combining the above inequalities, we get $\\sum \\limits _{q\\in \\mathcal {Q}} \\left|F(y_q;s,e_q)\\right|^2=O\\left(\\frac{\\pi (x)^2\\mathcal {L}^{18}}{Q^2}+\\frac{\\pi (x)^2\\sharp \\mathcal {Q}}{Q^2\\mathcal {L}^{2B}}\\right).$ Hence, the number of square-free integers $q$ in $\\mathcal {Q}$ for which $|F(y_q;s,e_q)|> \\frac{\\pi (x)}{Q\\mathcal {L}^A}$ is bounded by $O\\left(\\mathcal {L}^{18+2A}+\\sharp \\mathcal {Q}\\mathcal {L}^{2(A-B)}\\right)$ .", "Recalling that $\\sharp \\mathcal {Q}=\\sharp \\mathcal {S}=\\mathcal {L}^C$ and choosing $B:= (2A+C-35)/3$ , the statement of Theorem REF follows upon choosing $e_q$ and $y_q$ in such a way that $|F(y_q;s,e_q)|=F^{\\ast 
}(x,s)$ and discarding the elements $s$ of $\\mathcal {S}$ for which $\\mbox{rad}(s)\\notin \\mathcal {Q}$ .", "In view of (REF ) and our above choice of $B$ , the number of these exceptional $s$ is bounded by $O\\left(\\mathcal {L}^{(70+2A+C)/3}\\right)$ .", "What remains to prove are the bounds (REF ) and (REF ) with $Y$ defined in (REF ).", "This will be carried out in the remainder of this article." ], [ "Treatment of type I sums", "In the following, we suppress the index $q$ at $\\mathcal {A}_q$ , $\\mathcal {B}_q$ , $d_q$ , $e_q$ and $y_q$ for simplicity.", "We shall treat the type I sums similarly as in [1].", "The difference of double sums over $m$ and $n$ in (REF ) equals $\\Sigma _{I}=\\sum \\limits _{\\begin{array}{c}m\\leqslant M\\\\ (m,q)=1\\end{array}} a_m\\left(\\sum \\limits _{\\begin{array}{c}n\\\\ mn\\in A\\end{array}} 1 - \\lambda \\sum \\limits _{\\begin{array}{c}n\\\\ mn\\in B\\end{array}} 1 \\right)$ which in our setting (recall the choice of $\\lambda $ in (REF )) takes the form $\\Sigma _{I}=\\sum \\limits _{\\begin{array}{c}m\\leqslant M\\\\ (m,q)=1\\end{array}} a_m\\left(\\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n \\equiv e\\overline{m} \\bmod {s}\\end{array}} 1 - \\frac{q}{s} \\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n \\equiv d\\overline{m} \\bmod {q}\\end{array}} 1 \\right),$ where $\\overline{m}$ denotes a multiplicative inverse of $m$ modulo $s$ .", "We observe that the difference contained in the sum on the right-hand side above is $O(1)$ , and hence we have $\\Sigma _{I}\\ll \\sum \\limits _{m\\leqslant M} a_m \\ll \\sum \\limits _{m\\leqslant M} \\tau (m) \\ll M\\mathcal {L},$ which gives a bound of $O(M^2\\mathcal {L}^2\\sharp \\mathcal {Q})$ for the left-hand side of (REF ).", "Hence, (REF ) holds with $Y$ as defined in (REF ) if $M\\ll \\frac{\\pi (x)\\mathcal {L}^{5}}{Q(\\sharp \\mathcal {Q})^{1/2}}.$ Recalling our choice $M=x^{\\alpha }+1$ and $\\sharp \\mathcal {Q}=\\sharp \\mathcal {S}=\\mathcal {L}^C$ , we conclude that (REF ) holds under the condition $ \\boxed{Q\\leqslant x^{1-\\alpha }\\mathcal {L}^{5-C/2}.", "}$" ], [ "Treatment of type II sums.", "Our treatment of type II sums is similar as in [1].", "Suppressing again the index $q$ , the difference of double sums in (REF ) equals $ \\Sigma _{II}:=\\sum \\limits _{\\begin{array}{c}x^{\\alpha }<m\\leqslant x^{\\alpha +\\beta }\\end{array}} a_m \\left(\\sum \\limits _{\\begin{array}{c}n\\\\ mn\\in \\mathcal {A}\\end{array}} b_n - \\lambda \\sum \\limits _{\\begin{array}{c}n\\\\ mn\\in \\mathcal {B}\\end{array}} b_n\\right).$ In our setting, $ \\Sigma _{II}=\\sum \\limits _{\\begin{array}{c}x^{\\alpha }<m\\leqslant x^{\\alpha +\\beta }\\\\ (m,q)=1\\end{array}} a_m \\left(\\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n\\equiv e\\overline{m} \\bmod {s}\\end{array}} b_n - \\frac{q}{s}\\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n\\equiv d\\overline{m} \\bmod {q}\\end{array}} b_n\\right).$ We split $\\Sigma _{II}$ into $O(\\mathcal {L})$ sub-sums of the form $ \\Sigma (K):=\\sum \\limits _{\\begin{array}{c}K<m\\leqslant K^{\\prime }\\\\ (m,q)=1\\end{array}} a_m \\left(\\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n\\equiv e\\overline{m} \\bmod {s}\\end{array}} b_n - \\frac{q}{s}\\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n\\equiv d\\overline{m} \\bmod {q}\\end{array}} b_n\\right)$ with $x^{\\alpha }\\leqslant K<K^{\\prime }\\leqslant 2K\\leqslant x^{\\alpha +\\beta }$ .", "Throughout the following, let $L:=\\frac{x}{K}.$ To disentangle the 
summation variables, we apply Perron's formula, Proposition REF , with $N:=\\frac{y}{m}, \\quad c:=\\frac{1}{\\log L}, \\quad T:=L\\log L$ and $c_n:= {\\left\\lbrace \\begin{array}{ll} b_n & \\mbox{ if } n\\leqslant L \\mbox{ and } n \\equiv e \\overline{m} \\bmod {s}\\mbox{ (or } n \\equiv d\\overline{m} \\bmod {q})\\\\ 0 & \\mbox{ otherwise} \\end{array}\\right.", "}$ to the inner sums over $n$ on the right-hand side of (REF ).", "This gives $\\begin{split}& \\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n\\equiv e\\overline{m} \\bmod {s}\\end{array}} b_n - \\frac{q}{s}\\sum \\limits _{\\begin{array}{c}n\\leqslant y/m\\\\ n\\equiv d\\overline{m} \\bmod {q}\\end{array}} b_n\\\\= & \\frac{1}{2\\pi i} \\int \\limits _{c-iT}^{c+iT} \\left(\\sum \\limits _{\\begin{array}{c}n\\leqslant L\\\\ n\\equiv e\\overline{m} \\bmod {s}\\end{array}} b_n n^{-z} - \\frac{q}{s}\\sum \\limits _{\\begin{array}{c}n\\leqslant L\\\\ n\\equiv d\\overline{m} \\bmod {q}\\end{array}} b_n n^{-z} \\right) \\left(\\frac{y}{m}\\right)^{z} \\frac{dz}{z} + O\\left(x^{\\varepsilon }\\right).\\end{split}$ Hence, we have $\\Sigma (K)=\\frac{1}{2\\pi i} \\int \\limits _{c-iT}^{c+iT} \\Sigma (K,z)\\frac{dz}{z} + O(Kx^{\\varepsilon }),$ where $\\Sigma (K,z):=\\sum \\limits _{\\begin{array}{c}K<m\\leqslant K^{\\prime }\\\\ (m,q)=1\\end{array}} a_m(z) \\left(\\sum \\limits _{\\begin{array}{c}n\\leqslant L\\\\ n\\equiv e\\overline{m} \\bmod {s}\\end{array}} b_n(z) - \\frac{q}{s}\\sum \\limits _{\\begin{array}{c}n\\leqslant L\\\\ n\\equiv d\\overline{m} \\bmod {q}\\end{array}} b_n(z)\\right)$ with $a_m(z):=a_m\\cdot \\left(\\frac{y}{m}\\right)^z, \\quad b_n(z):=b_nn^{-z}.$ We note that $|a_m(z)|\\ll |a_m|\\leqslant \\tau (m)$ if $K< m\\leqslant K^{\\prime }$ and $|b_n(z)|\\ll |b_n|\\leqslant \\tau (n)$ if $n\\leqslant L$ .", "By the Cauchy-Schwarz inequality for integrals, we deduce that $ \\begin{split}|\\Sigma (K)|^2 \\ll & \\left(\\int \\limits _{-T}^{T} \\frac{dt}{|c+it|}\\right) \\cdot \\int \\limits _{-T}^T \\frac{|\\Sigma (K,c+it)|^2}{|c+it|}dt + K^2x^{2\\varepsilon } \\\\\\ll &\\mathcal {L} \\cdot \\int \\limits _{-T}^T \\frac{|\\Sigma (K,c+it)|^2}{|c+it|}dt + K^2x^{2\\varepsilon }.\\end{split}$ Set $z:=c+it$ .", "An application of the Cauchy-Schwarz inequality for sums gives $ |\\Sigma (K,z)|^2\\leqslant \\left(\\sum \\limits _{\\begin{array}{c}K<m\\leqslant K^{\\prime }\\\\ (m,q)=1\\end{array}} |a_m(z)|^2\\right) \\cdot \\Sigma ^{\\prime }(K,z)\\leqslant K\\mathcal {L}^3 \\cdot \\Sigma ^{\\prime }(K,z),$ where $\\Sigma ^{\\prime }(K,z):=\\sum \\limits _{\\begin{array}{c}K<m\\leqslant 2K\\\\ (m,q)=1\\end{array}} \\left|\\sum \\limits _{\\begin{array}{c}n\\leqslant L\\\\ n\\equiv e\\overline{m} \\bmod {s}\\end{array}} b_n(z) - \\frac{q}{s}\\sum \\limits _{\\begin{array}{c}n\\leqslant L\\\\ n\\equiv d\\overline{m} \\bmod {q}\\end{array}} b_n(z)\\right|^2.$ Now we use a dispersion argument.", "We multiply out the modulus square and re-arrange summations to get $\\Sigma ^{\\prime }(K,z)=\\Sigma _1(K,z)-\\Sigma _2(K,z)-\\Sigma _3(K,z)+\\Sigma _4(K,z),$ where $\\Sigma _1(K,z):=\\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {s}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)}\\sum \\limits _{\\begin{array}{c}K<m\\leqslant 2K\\\\ m \\equiv e\\overline{n_1} \\bmod {s}\\end{array}} 1,$ $\\Sigma _2(K,z):=\\frac{q}{s} \\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {q}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)}\\sum \\limits 
_{\\begin{array}{c}K<m\\leqslant 2K\\\\ m \\equiv e\\overline{n_1} \\bmod {s}\\end{array}} 1,$ $\\Sigma _3(K,z):=\\frac{q}{s} \\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {q}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)}\\sum \\limits _{\\begin{array}{c}K<m\\leqslant 2K\\\\ m \\equiv e\\overline{n_2} \\bmod {s}\\end{array}} 1,$ and $\\Sigma _4(K,z):=\\left(\\frac{q}{s}\\right)^2 \\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {q}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)}\\sum \\limits _{\\begin{array}{c}K<m\\leqslant 2K\\\\ m \\equiv d\\overline{n_1} \\bmod {q}\\end{array}} 1.$ We see immediately that $\\Sigma _1(K,z)=\\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {s}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)} \\left(\\frac{K}{s}+O(1)\\right),$ $\\Sigma _2(K,z)=\\frac{q}{s} \\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {q}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)} \\left(\\frac{K}{s}+O(1)\\right),$ $\\Sigma _3(K,z)=\\frac{q}{s} \\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {q}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)}\\left(\\frac{K}{s}+O(1)\\right),$ and $\\Sigma _4(K,z)=\\left(\\frac{q}{s}\\right)^2 \\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {q}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)}\\left(\\frac{K}{q}+O(1)\\right)$ so that $\\begin{split}\\Sigma ^{\\prime }(K,z)=& \\frac{K}{s}\\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {s}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)} -\\frac{Kq}{s^2}\\sum \\limits _{\\begin{array}{c}n_1,n_2\\leqslant L\\\\ n_1 \\equiv n_2 \\bmod {q}\\\\ (n_1n_2,q)=1\\end{array}} b_{n_1}(z)\\overline{b_{n_2}(z)}\\\\& +O\\left(\\frac{L^2x^{\\varepsilon }}{s}\\right),\\end{split}$ provided that $s\\ll L$ for all $L$ in question (i.e.", "for $L= x/K\\geqslant x^{1-(\\alpha +\\beta )}$ ), which is the case if $\\boxed{Q \\leqslant x^{1-(\\alpha +\\beta )}.", "}$ Here we recall that $s\\in (Q,2Q]$ .", "Now we use Dirichlet characters to detect the congruence relations in the above sums.", "We thus get $\\begin{split}\\Sigma ^{\\prime }(K,z) = & \\frac{K}{s} \\cdot \\frac{1}{\\varphi (s)}\\sum \\limits _{\\chi \\bmod s} \\sum \\limits _{n_1,n_2\\leqslant L} b_{n_1}(z)\\overline{b_{n_2}(z)} \\chi (n_1)\\overline{\\chi }(n_2)-\\\\&\\frac{Kq}{s^2}\\cdot \\frac{1}{\\varphi (q)} \\sum \\limits _{\\chi ^{\\prime } \\bmod {q}}\\ \\sum \\limits _{n_1,n_2\\leqslant L} b_{n_1}(z)\\overline{b_{n_2}(z)} \\chi ^{\\prime }(n_1)\\overline{\\chi ^{\\prime }}(n_2)+O\\left(\\frac{L^2x^{\\varepsilon }}{s}\\right).\\end{split}$ Note that $\\frac{K}{s}\\cdot \\frac{1}{\\varphi (s)}=\\frac{Kq}{s^2}\\cdot \\frac{1}{\\varphi (q)}=\\frac{K}{\\varphi (s^2)}$ and hence $\\begin{split}\\Sigma ^{\\prime }(K,z) = & \\frac{K}{\\varphi (s^2)} \\sum \\limits _{\\chi \\in \\mathcal {X}(s)} \\sum \\limits _{n_1,n_2\\leqslant L} b_{n_1}(z)\\overline{b_{n_2}(z)}\\chi (n_1)\\overline{\\chi }(n_2) +O\\left(\\frac{L^2x^{\\varepsilon }}{s}\\right)\\\\= & \\frac{K}{\\varphi (s^2)}\\sum \\limits _{\\chi \\in \\mathcal {X}(s)} \\left|\\sum \\limits _{n\\leqslant L} b_n(z) \\chi (n)\\right|^2+ O\\left(\\frac{L^2x^{\\varepsilon }}{s}\\right),\\end{split}$ where $\\mathcal {X}(s)$ is the set of all Dirichlet characters modulo $s$ which are 
not induced by a Dirichlet character modulo $q$ (in particular, $\\mathcal {X}(s)$ does not contain the principal character).", "We may write the above as $ \\Sigma ^{\\prime }(K,z) = \\frac{K}{\\varphi (s^2)} \\sum \\limits _{\\begin{array}{c}r>q\\\\ q|r|s\\end{array}}\\ \\sum \\limits ^{\\ast }_{\\chi \\bmod {r}} \\left| \\sum \\limits _{n\\leqslant L} b_n(z) \\chi (n)\\right|^2+ O\\left(\\frac{L^2x^{\\varepsilon }}{s}\\right),$ where the asterisk indicates that $\\chi $ ranges over all primitive characters modulo $r$ .", "Now we apply the large sieve, Proposition REF , after re-introducing the indices $q$ and summing over $q\\in \\mathcal {Q}$ .", "This gives us $ \\begin{split}\\sum \\limits _{q\\in \\mathcal {Q}} \\sum \\limits _{\\begin{array}{c}r>q\\\\ q|r|s\\end{array}}\\frac{r}{\\varphi (r)}\\ \\sum \\limits ^{\\ast }_{\\chi \\bmod {r}}\\left| \\sum \\limits _{n\\leqslant L} b_n(z) \\chi (n)\\right|^2 \\leqslant & \\sum \\limits _{r\\leqslant 2Q} \\frac{r}{\\varphi (r)}\\ \\sum \\limits ^{\\ast }_{\\chi \\bmod {r}}\\left| \\sum \\limits _{n\\leqslant L} b_n(z) \\chi (n)\\right|^2\\\\\\ll & \\left(Q^2+L\\right)\\sum \\limits _{n\\leqslant L}|b_n|^2\\\\\\ll & \\left(Q^2+L\\right)L\\mathcal {L}^3,\\end{split}$ where we recall that $s\\in (Q,2Q]$ .", "Noting that $\\frac{K}{\\varphi (s^2)}=\\frac{K}{s^2}\\cdot \\frac{r}{\\varphi (r)}$ for $q|r|s$ , and recalling that $KL=x$ , we deduce that $\\sum \\limits _{q\\in \\mathcal {Q}} |\\Sigma _q^{\\prime }(K,s)|\\ll Q^{-2}K^{-1}x^2\\mathcal {L}^3 + x\\mathcal {L}^3+Q^{-1}K^{-2}x^{2+\\varepsilon }\\sharp \\mathcal {Q}.$ Combining this with (REF ) and (REF ) and recalling $\\sharp \\mathcal {Q}=\\sharp \\mathcal {S}=\\mathcal {L}^C$ , we obtain $ \\sum \\limits _{q\\in \\mathcal {Q}} |\\Sigma _q(K)|^2 \\ll Q^{-2}x^2\\mathcal {L}^8+Kx\\mathcal {L}^8 +Q^{-1}K^{-1}x^{2+2\\varepsilon }\\mathcal {L}^C + K^2x^{2\\varepsilon }\\mathcal {L}^C.$ We get a second bound for the left-hand side by reversal of roles of the variables $m$ and $n$ in the above process, where $K$ on the right-hand side is replaced by $x/K$ , i.e.", "$ \\sum \\limits _{q\\in \\mathcal {Q}} |\\Sigma _q(K)|^2 \\ll Q^{-2}x^2\\mathcal {L}^{8} + K^{-1}x^2\\mathcal {L}^{8}+ Q^{-1}Kx^{1+2\\varepsilon }\\mathcal {L}^C+K^{-2}x^{2+2\\varepsilon }\\mathcal {L}^C.$ For this to hold, the condition (REF ) needs to be replaced by $\\boxed{Q \\leqslant x^{\\alpha }.", "}$ Using (REF ) if $K\\leqslant x^{1/2}$ and (REF ) if $K\\geqslant x^{1/2}$ , and recalling that $x^{\\alpha }\\leqslant K\\leqslant x^{\\alpha +\\beta }$ , we deduce that $\\sum \\limits _{q\\in \\mathcal {Q}} |\\Sigma _q(K)|^2 \\ll Q^{-2}x^2\\mathcal {L}^{8} + x^{3/2}\\mathcal {L}^{8}+ Q^{-1}x^{2-\\alpha +2\\varepsilon }\\mathcal {L}^C\\\\+Q^{-1}x^{1+\\alpha +\\beta +2\\varepsilon }\\mathcal {L}^C+x^{1+2\\varepsilon }\\mathcal {L}^C.$ With the choice $\\alpha =\\frac{1}{4}\\quad \\mbox{and} \\quad \\beta =\\frac{1}{2},$ and writing $\\Sigma _q=\\Sigma _{II}$ with $\\Sigma _{II}$ as in (REF ), we therefore obtain $\\begin{split}\\sum \\limits _{q\\in \\mathcal {Q}} |\\Sigma _q|^2\\ll & Q^{-2}x^2\\mathcal {L}^{10} + x^{3/2}\\mathcal {L}^{10} + Q^{-1}x^{7/4+3\\varepsilon }\\mathcal {L}^C+x^{1+3\\varepsilon }\\mathcal {L}^C\\end{split}$ using the Cauchy-Schwarz inequality again.", "The right-hand side is bounded by $Y$ defined in (REF ) if $ \\boxed{ Q\\leqslant x^{1/4-3\\varepsilon 
}\\mathcal {L}^{10-C}.", "}$" ], [ "Completion of the proof", "In view of our conditions (REF ), (REF ), (REF ) and (REF ), Theorem REF follows upon adjusting $\\varepsilon $ ." ] ]
2212.05576
[ [ "Dirac gauge theory for topological spinors in 3+1 dimensional networks" ], [ "Abstract Gauge theories on graphs and networks are attracting increasing attention not only as approaches to quantum gravity but also as models for performing quantum computation.", "We propose a Dirac gauge theory for topological spinors in $3+1$ dimensional networks associated to an arbitrary metric.", "Topological spinors are the direct sum of $0$-cochains and $1$-cochains defined on a network and describe a matter field defined on both nodes and links of a network.", "Recently it has been shown that topological spinors obey the topological Dirac equation driven by the discrete Dirac operator.", "Here these results are extended by formulating the Dirac equation on weighted and directed $3+1$ dimensional networks which allow for the treatment a local theory.", "The commutators and anti-commutators of the Dirac operators are non vanishing an they define the curvature tensor and magnetic field of our theory respectively.", "This interpretation is confirmed by the non-relativistic limit of the proposed Dirac equation.", "In the non-relativistic limit of the proposed Dirac equation the sector of the spinor defined on links follows the Schr\\\"odinger equation with the correct giromagnetic moment, while the sector of the spinor defined on nodes follows the Klein-Gordon equation and is not negligible.", "The action associated to the proposed field theory comprises of a Dirac action and a metric action.", "The Dirac action involves the topological spinor, the metric action is obtained from the contraction of the curvature tensor and only involves the metric degrees of freedom of the network.", "We describe the gauge invariance of the action under both Abelian and non-Abelian transformations and we propose the equation of motion of the field theory of both Dirac and metric fields.", "This theory can be interpreted as a limiting case of a more general gauge theory valid on any arbitrary network in the limit of almost flat spaces." 
], [ "Introduction", "In this work we formulate a Dirac gauge theory on networks in which the matter field is defined on both nodes of links of the network extending previous work on the topological Dirac equation [1].", "Recently there is a growing attention on lattice gauge theories and their implementation using quantum information frameworks [2], [3], [4], [5].", "While in lattice gauge theories gauge fields are often associated to the links of a network, matter fields are traditionally only associated to the nodes of the network [6].", "Interestingly however, models in which quantum states are defined on links and on higher-dimensional simplices or cells of simplicial or cell complexes are receiving growing interest and they include for instance the very influential Kitaev model [7].", "Also in the context of classical dynamics on networks it is becoming clear that considering dynamical variables not only associated to the nodes but also to the links of networks and to the higher-dimensional simplices of simplicial complexes can reveal the important interplay between network topology and dynamics [8], [9], [10], [11].", "In this context it is emerging that the discrete Dirac operator[1], [12], [13], [14], [15], [16] is key to investigate the properties of coupled topological signals of different dimension defined on graphs and networks [17], [18], [19].", "The discrete Dirac operator on networks is a direct extension of the Dirac operator used in the continuum for example in supersymmetric theories [20] and in non-commutative geometry [21], [22], [23], [24], [25], [26] where to our knowledge it has been first formulated [13].", "The Dirac operator can be defined on networks also without relying on non-commutative geometry [12], [16], [15], [14].", "However most of the works that use the discrete Dirac operator outside the field of noncommutative geometry define the discrete Dirac operator on a network as the sum of the exterior derivative and its dual $\\partial =d+d^{\\star }$ , i.e.", "they do not associate an algebra to these operators.", "The Dirac equation proposed in Ref.", "[1] is based on a topological description of the wave function formed by a topological spinor defined on nodes and links of a network.", "In particular Ref.", "[1] considers the topology of finite dimensional $1+1,2+1 $ and $3+1$ square lattices and associates an algebra to the Dirac operators defined on space-like and time-like directions as well.", "In this work we build on these results to formulate a gauge theory for a Dirac field defined on both nodes and links of a network and its associated metric degrees of freedom.", "The formulated gauge theory framework can be considered as a gauge theory which could be potentially explored for synthetic realization in condensed matter systems.", "Alternatively the proposed gauge theory can be considered in the framework of the current scientific debate on the topic in the context of quantum gravity [27], [28], [29], [30].", "From a mathematical perspective quantum gravity is about reconciling quantum mechanics, a theory that does not have a fundamental geometrical foundation but is instead characterized by its probabilistic interpretation with general relativity, that is a deeply rooted in a geometrical description of space-time.", "Many approaches [20], [31], [32], [33], [34], [35], [36], [37] to quantum gravity aim at turning general relativity into a quantum discipline, however one different strategy is to try to turn quantum mechanics into a geometrical and 
topological theory.", "In this respect, in the last years we have seen the flourishing of a few approaches combining graph and network theory with quantum gravity and, vice versa, combining quantum gravity approaches with networks [38], [39], [40], [41], [42], [8].", "In this perspective it seems natural to rely on algebraic topology, exterior calculus, and discrete geometry [43], [33], [44], [45], [46], [47] and to start by interpreting the relativistic Dirac equation [48] geometrically and in the discrete network setting.", "Driven by this abstract line of thought, the topological Dirac equation [1] has been recently proposed.", "The Dirac equation defined in Ref.", "[1] acts on a topological spinor defined on nodes and links, in which the wave function is slightly non-local as on each link the wave function takes the same value.", "Here we show that the Dirac equation of [1] can also be defined for a local wave function defined on the nodes and on the directed links departing from every node of the network, the links departing from node $i$ acting as the fiber bundle at node $i$ .", "Therefore the topological spinor can be interpreted as encoding both the component of the wave-function defined on each node of the network and a \"flux\" of the wave function from a given node toward any neighbour node.", "Therefore the wave-function has an interpretation that is similar to the description of a dynamical state of a classical particle requiring both position and velocity of the particle.", "Here we not only turn the topological Dirac equation into a local theory, but we also introduce metric matrices on nodes and links.", "The metric matrices inducing deformations of the lattice are here treated as the geometric degrees of freedom of the network while the topology remains the $3+1$ dimensional one.", "The metric degrees of freedom are embodied in a metric field that evolves together with the topological spinor according to the equations of motion.", "Building on the fact that the directional Dirac operators along different directions of the lattice do not commute or anticommute, we define a curvature tensor and the magnetic field of the network, from the commutators and anticommutators of the directional Dirac operators.", "The non-relativistic limit of the proposed Dirac equation confirms our interpretation of the anti-commutators of the spatial Dirac operators as the magnetic field of the network.", "Therefore in this context the electromagnetic field of this theory is associated to the metric of the network.", "Moreover, from the non-relativistic limit of the Dirac equation we learn the following results.", "First, the Schrödinger equation with the correct gyromagnetic factor is recovered for the sector of the spinor associated to the links (or the fiber bundle).", "Secondly, the node sector of the topological spinor follows the Klein-Gordon equation.", "Thirdly, the node sector of the topological spinor is non-negligible.", "It is interesting to consider whether these results can lead to testable predictions different from the standard Dirac equation.", "More theoretically, these results open new perspectives for connections to supersymmetry.", "Finally we define the Abelian and non-Abelian transformations acting simultaneously on the topological spinors and the network metric, and we derive the equations of motion from our action which relate the metric degrees of freedom to the topological spinor.", "The proposed gauge field theory on $3+1$ dimensional networks can be interpreted as a limiting case of a more general theory valid 
on arbitrary networks, which applies when the discrete space-time is almost flat.", "The extension to arbitrary network topologies is not excluded but it would require changing the algebra adopted for the directional Dirac operators." ], [ "Dirac equation for topological spinors", "We consider a lattice in $3+1$ dimensions with $N$ nodes and $L=8N$ directed links and periodic boundary conditions endowed with an Euclidean metric.", "The choice of an Euclidean metric is dictated by the assumption (already formulated in Ref.", "[1]) that the Lorentzian nature of space-time might be fully accounted for by the operators (such as the Dirac operators) acting on this space rather than being an intrinsic property of the discrete network describing space-time.", "This allows us to avoid the challenge of defining space-like and time-like links.", "In our approach the only difference between space-like and time-like links will depend on the choice of the Dirac operator that acts on them.", "As a consequence of this, throughout the paper we will adopt the Einstein convention of repeated indices with $a_{\\mu }{b}^{\\mu }$ indicating $\\sum _{\\mu =1}^4a_{\\mu }b_{\\mu }$ .", "As in classical dynamics, where the configuration space is formed by the positions and the velocities of the particles, we assume that the topological wave function ${\\Psi }$ is the direct sum of two 0-cochains and two 1-cochains defined on the directed links of the network.", "Let us indicate with $C_0$ the set of 0-cochains and with $C_1$ the set of directed 1-cochains [45].", "With this notation we have $\\Psi \\in C_0\\oplus C_0\\oplus C_1\\oplus C_1$ .", "Hence $\\Psi =\\chi \\oplus \\psi $ or alternatively ${\\Psi }=\\left(\\begin{array}{c}\\chi \\\\\\psi \\end{array}\\right).$ The first component $\\chi $ is formed by two 0-cochains $\\chi \\in C_0\\oplus C_0$ and the second component $\\psi $ is formed by two directed 1-cochains $\\psi \\in C_1\\oplus C_1$ $\\chi =\\left(\\begin{array}{c}\\chi _{+}\\\\\\chi _{-}\\end{array}\\right),\\quad \\psi =\\left(\\begin{array}{c} {\\psi _{+}}\\\\\\psi _{-}\\end{array}\\right).$ Therefore $\\chi _{\\pm }$ are 0-cochains formed by elements defined on the $N$ nodes of the network, while $\\psi _{\\pm }$ are directed 1-cochains formed by elements defined on the $L$ directed links $\\chi _{\\pm }=\\left(\\begin{array}{c}\\chi _{1,\\pm }\\\\\\chi _{2,\\pm }\\\\\\vdots \\\\ \\chi _{N,\\pm }\\end{array}\\right),\\quad \\psi _{\\pm }=\\left(\\begin{array}{c}\\psi _{\\ell _1,\\pm }\\\\ \\psi _{\\ell _2,\\pm }\\\\\\vdots \\\\\\psi _{\\ell _{L},\\pm }\\end{array}\\right).$ Note that here we distinguish between $\\psi _{[i,j],\\pm }$ and $\\psi _{[j,i],\\pm }$ , in order to interpret $\\psi _{{[i,j]},\\pm }$ as a \"flux going from $i$ to $j$ \" , thus indicating a variable localized on the node $i$ , while $\\psi _{{[j,i]},\\pm }$ indicates a \"flux going from $j$ to $i$ \", which is instead a variable localized on node $j$ .", "Note that in our directed setting values of $\\psi _{[i,j]\\pm }$ with $\\psi _{[i,j]\\pm }\\ne -\\psi _{[j,i]\\pm }$ are allowed.", "This framework can treat a local field theory in which the topological spinor is defined on both nodes and links of the network.", "The part of the topological spinor localized on the node $i$ is given by ${\\hat{ \\Psi }}_i=\\left(\\begin{array}{c}\\chi _i\\\\\\psi _i\\end{array}\\right),$ where $\\chi _i$ is formed by the elements of $\\chi $ localized on node $i$ and $\\psi _{i}=\\left(\\ldots \\psi _{[i,j]}\\ldots 
\\right)$ is formed by the elements of $\\psi $ indicating the fluxes from node $i$ to its neighbour nodes.", "Note that $\\psi _{i}$ is strictly speaking a field associated to the fiber boundle defined at node $i$ (see for instance discussion of \"vertex spaces\" in Ref.[12]).", "In a ($3+1$ )-dimensional lattice we classify links in 4 classes: $x$ -type, $y$ -type, $z$ -type and $t$ -type links.", "We consider the $ L\\times N$ coboundary matrix $\\bar{\\bf B}_{\\mu }$ of type $\\mu \\in \\lbrace t,x,y,z\\rbrace $ links which is given by $\\bar{\\bf B}_{\\mu }={\\bf G}_{[1]}^{-1/2}\\bar{\\bf B}_{\\mu }^{(U)}{\\bf G}_{[0]}^{1/2}$ where $\\bar{\\bf B}_{\\mu }^{(U)}$ is the $L\\times N$ unweighted coboundary matrix of elements $[{\\bar{B}_{\\mu }^{(U)}}]_{\\ell i}=\\left\\lbrace \\begin{array}{cclcl}-1 &\\mbox{if}& \\ell =[i,j] & \\& &\\ell \\ \\mbox{is a }\\mu -\\mbox{link} \\\\1&\\mbox{if}& \\ell =[j,i] & \\& & \\ell \\ \\mbox{is a } \\mu -\\mbox{link},\\end{array}\\right.$ and where ${\\bf G}_{[0]}$ and ${\\bf G}_{[1]}$ are the $N\\times N$ and $L\\times L$ metric matrices defined among nodes and among links respectively.", "The metric matrices ${\\bf G}_{[0]}$ and ${\\bf G}_{[1]}$ describe the geometry of the network and will be interpreted as metric degree of freedom which are interacting with the matter field $\\Psi $ .", "Note that while in topology and exterior calculus [44] typically these metric matrices are taken to be diagonal, with the diagonal elements of ${\\bf G}_{[1]}^{-1}$ indicating the weights of the links and the diagonal elements of ${\\bf G}_{[0]}^{-1}$ indicating the weights associated to the nodes, here we allow the metric matrices to be non diagonal and we allow their matrix elements to be complex.", "In particular here and in the following we will take ${\\bf G}_{[0]}=e^{\\bf A^{(0)}},\\quad {\\bf G}_{[1]}=e^{\\bf A},$ where the matrices ${\\bf A}^{(0)}$ and $\\bf A$ describe the metric fields of our gauge theory.", "The matrices ${\\bar{\\bf B}}^{\\star }_{\\mu }$ are the Hodge-star of the co-boundary matrices ${\\bar{\\bf B}}_{\\mu }$ with respect to the standard $L^2$ norm and they are given by [45], [49], [44], [46] ${\\bf \\bar{B}}^{\\star }_{\\mu }={\\bf \\bar{B}}^{\\dag }_{\\mu }.$ Note that the weighted graph Laplacian operator [50] in the direction $\\mu $ [45], [44], [46], [8] describing diffusion from nodes to nodes through links of type $\\mu $ is given by ${\\bf L}_{\\mu }={\\bf \\bar{B}}^{\\star }_{\\mu }{\\bf \\bar{B}}_{\\mu }$ .", "In order to define the Dirac operators let us first introduce the Pauli matrices $\\sigma _{\\mu }({\\bf F})$ with $\\mu \\in \\lbrace t,x,y,z\\rbrace $ defined over matrices ${\\bf F}$ as the matrices having block structure given by $\\hspace*{-22.76219pt}&&\\sigma _t({\\bf F})=\\sigma _0({\\bf F})=\\left(\\begin{array}{cc}{\\bf F}&{\\bf 0} \\\\ {\\bf 0}&{\\bf F}\\end{array}\\right),\\ \\ \\sigma _x({\\bf F})=\\left(\\begin{array}{cc}{\\bf 0}&{\\bf F} \\\\ {\\bf F}&0\\end{array}\\right),\\ \\ \\nonumber \\\\ &&\\sigma _y({\\bf F})=\\left(\\begin{array}{cc}{\\bf 0}&-\\mathrm {i}{\\bf F} \\\\ \\mathrm {i}{\\bf F} &{\\bf 0}\\end{array}\\right),\\ \\ \\sigma _z({\\bf F})=\\left(\\begin{array}{cc}{\\bf F}& {\\bf 0}\\\\{\\bf 0}& -{\\bf F}\\end{array}\\right).$ The exterior derivative $d_{\\mu }$ in the direction $\\mu $ and its adjoint $d^{\\star }_{\\mu }$ are given by $d_{\\mu }=\\left(\\begin{array}{cc}0&0\\\\\\sigma _0({\\bf \\bar{B}}_{\\mu })&0\\end{array}\\right),\\quad d^{\\star }_{\\mu 
}=\\left(\\begin{array}{cc}0&\\sigma _0({\\bf \\bar{B}}_{\\mu }^{\\star })\\\\0&0\\end{array}\\right),$ for $\\mu \\in \\lbrace t,x,y,z\\rbrace $ .", "We define the directional Dirac operators $\\partial _{\\mu }$ as $\\partial _{\\mu }=\\gamma _{\\mu }(d_{\\mu }^{\\star }+d_{\\mu }),$ (note that here the indices are not contracted), where the matrices $\\gamma _t$ and $\\gamma _{\\mu }$ with $\\mu \\in \\lbrace x,y,z\\rbrace $ are given by $\\gamma _{t}=\\gamma _{0}= \\left(\\begin{array}{cc}\\sigma _0({\\bf I}_{N})&0\\\\0& -\\sigma _0({\\bf I}_{L})\\end{array}\\right),\\quad \\gamma _{\\mu }=-\\mathrm {i}\\left(\\begin{array}{cc}\\sigma _\\mu ({\\bf I}_{N})&0\\\\0& -\\sigma _\\mu ({\\bf I}_{L})\\end{array}\\right),$ with ${\\bf I}_X$ indicating the identity matrix of dimension $X\\times X$ .", "We observe that given the definition of $d_{\\mu }$ and $d_{\\mu }^{\\star }$ given by Eq.", "(REF ) and the definition of the matrices $\\gamma _{\\mu }$ given by Eq.", "(REF ) we obtain the anticommutator relations $\\lbrace \\gamma _{\\mu },(d_{\\mu }+d_{\\mu }^{\\star })\\rbrace =0,$ valid for $\\mu \\in \\lbrace t,x,y,z\\rbrace $ .", "Finally we use $\\partial \\hspace{-5.69054pt}\\slash $ to indicate $\\partial \\hspace{-5.69054pt}\\slash =\\gamma ^{\\mu }(d_{\\mu }+d^{\\star }_{\\mu }),$ or alternatively $\\partial \\hspace{-5.69054pt}\\slash =\\sum _{\\mu }\\partial _{\\mu }$ .", "Note that from the above definition it follows that $\\partial _t$ and $\\partial _{\\mu }$ for $\\mu \\in \\lbrace x,y,z\\rbrace $ have the block structure $\\partial _t=\\left(\\begin{array}{cc}0&\\bar{\\mathcal {B}}^{\\star }_{t}\\\\-\\bar{\\mathcal {B}}_{t}&0\\end{array}\\right),\\quad \\partial _\\mu =\\left(\\begin{array}{cc}0&-\\textrm {i}\\bar{\\mathcal {B}}^{\\star }_{\\mu }\\\\\\textrm {i}\\bar{\\mathcal {B}}_{\\mu }&0\\end{array}\\right).$", "Here $\\bar{\\mathcal {B}}_{\\mu }$ and $\\bar{\\mathcal {B}}^{\\star }_{\\mu }$ are given by \\begin{eqnarray}\\bar{\\mathcal {B}}_{t}=\\sigma _{t}({\\bf \\bar{B}}_t),\\ \\ \\bar{\\mathcal {B}}_{x}=\\sigma _{x}({\\bf \\bar{B}}_x),\\ \\ \\bar{\\mathcal {B}}_{y}=\\sigma _{y}({\\bf \\bar{B}}_y),\\ \\ \\bar{\\mathcal {B}}_{z}=\\sigma _{z}({\\bf \\bar{B}}_z),\\nonumber \\\\\\bar{\\mathcal {B}}^{\\star }_{t}=\\sigma _{t}({\\bf \\bar{B}}^{\\star }_t),\\ \\ \\bar{\\mathcal {B}}^{\\star }_{x}=\\sigma _{x}({\\bf \\bar{B}}^{\\star }_x),\\ \\ \\bar{\\mathcal {B}}^{\\star }_{y}=\\sigma _{y}({\\bf \\bar{B}}^{\\star }_y),\\ \\ \\bar{\\mathcal {B}}^{\\star }_{z}=\\sigma _{z}({\\bf \\bar{B}}^{\\star }_z).\\end{eqnarray}", "This definition is such that $\\textrm {i}\\gamma _0\\partial _{\\mu }$ is Hermitian for $\\mu \\in \\lbrace x,y,z\\rbrace $ and anti-Hermitian for $\\mu =t$ .", "In particular we obtain \\begin{eqnarray}\\textrm {i}\\gamma _0\\partial _\\mu =\\left(\\begin{array}{cc}0&\\bar{\\mathcal {B}}^{\\star }_{\\mu }\\\\\\bar{\\mathcal {B}}_{\\mu }&0\\end{array}\\right),\\quad \\textrm {i}\\gamma _0\\partial _t=\\left(\\begin{array}{cc}0&\\textrm {i}\\bar{\\mathcal {B}}^{\\star }_{t}\\\\\\textrm {i}\\bar{\\mathcal {B}}_{t}&0\\end{array}\\right).\\end{eqnarray}", "The Dirac action $\\mathcal {S}_D$ is defined as \\begin{eqnarray}\\mathcal {S}_{D}=\\bar{\\Psi }(\\textrm {i}\\partial \\hspace{-5.69054pt}\\slash -m){\\Psi }.\\end{eqnarray} where $\\bar{\\Psi }=\\Psi ^{\\dag }\\gamma _0$ .", "The dynamical equations associated to this action are the Dirac equation and its adjoint.", "The Dirac equation is obtained by deriving the action with respect to each element $\\bar{\\Psi }_r$ of the topological spinor $\\bar{\\Psi }$ , getting\\begin{eqnarray}(\\textrm {i}\\partial \\hspace{-5.69054pt}\\slash -m) 
{\\Psi }=0,\\end{eqnarray}and its adjoint is obtained by deriving the action with respect to each element \\Psi _r of the topological spinor \\Psi is given\\begin{eqnarray}(-\\textrm {i}\\partial \\hspace{-5.69054pt}\\slash -m) {\\bar{\\Psi }}=0.\\end{eqnarray}The spectrum of the Dirac operator is\\begin{eqnarray}\\mathcal {E}^2=m^2\\end{eqnarray}where \\mathcal {E}^2 is the eigenvalue of the D^{\\prime }Alabertian \\square ={\\bf L}_t-\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }{\\bf L}_{\\mu }.If [{\\bf L}_{t},{\\bf L}_{x}+{\\bf L}_y+{\\bf L}_z]=0 we recover the relativistic dispersion\\begin{eqnarray}E^2=m^2+{p}^2\\end{eqnarray}where E^2 is the eigenvalue of the operator {\\bf L}_t and p^2 is the eigenvalue of {\\bf L}_{x}+{\\bf L}_y+{\\bf L}_z.", "In the case in which [{\\bf L}_\\mu ,{\\bf L}_\\nu ]=0 for any \\mu \\in \\lbrace t, x,y,z\\rbrace and \\nu \\in \\lbrace t,x,y,z\\rbrace we have\\begin{eqnarray}E^2=m^2+|\\lambda |^2\\end{eqnarray}where \\lambda =(\\lambda _x,\\lambda _y,\\lambda _z) indicates the eigenvalue of the directional Dirac operator {\\partial }_{\\mu } with \\mu \\in \\lbrace x,y,z\\rbrace .\\section {Non-commutative Dirac operators}The square of the spatial Dirac operators \\partial _{\\mu } with \\mu \\in \\lbrace x,y,z\\rbrace are given by the Laplacian matrices \\mathcal {L}_{\\mu }\\begin{eqnarray}\\partial _{\\mu } \\partial _{\\mu }={\\mathcal {L}}_{\\mu }=\\left(\\begin{array}{cc}\\sigma _0({\\bar{\\bf B}}_{\\mu }^{\\star }{\\bar{\\bf B}_{\\mu }})&0\\\\0&\\sigma _0({\\bar{\\bf B}}_{\\mu }{\\bar{\\bf B}}_{\\mu }^{\\star })\\end{array}\\right)\\end{eqnarray}while for the square of the temporal Dirac operator is given by -\\mathcal {L}_t, i.e.\\begin{eqnarray}\\partial _{t} \\partial _{t}=-{\\mathcal {L}}_{t}=-\\left(\\begin{array}{cc}\\sigma _0({\\bar{\\bf B}}_{t}^{\\star }{\\bar{\\bf B}_{t}})&0\\\\0&\\sigma _0({\\bar{\\bf B}}_{t}{\\bar{\\bf B}}_{t}^{\\star })\\end{array}\\right).\\end{eqnarray}The anticommutators of the Dirac operators are non zero in general.", "In particular for \\mu \\in \\lbrace x,y,z\\rbrace , \\nu \\in \\lbrace x,y,z\\rbrace with \\mu \\ne \\nu \\begin{eqnarray}\\lbrace \\partial _{\\mu }, \\partial _{\\nu }\\rbrace = -\\left(\\begin{array}{cc}0&0\\\\0&\\epsilon _{\\mu \\nu \\theta }\\sigma ^{\\theta }\\Big ([{\\bf B}^{(M)}]^{\\theta }\\Big )\\end{array}\\right).\\end{eqnarray}where the magnetic field {\\bf B}_{\\theta }^{\\bf (M)} is given by the L\\times L matrix given by\\begin{eqnarray}{\\bf B}_x^{(M)}=-\\mathrm {i}(\\bar{\\bf B}_{y}\\bar{\\bf B}_{z}^{\\star }-\\bar{\\bf B}_{z}\\bar{\\bf B}_{x}^{\\star }),\\nonumber \\\\{\\bf B}_y^{(M)}=-\\mathrm {i}(\\bar{\\bf B}_{z}\\bar{\\bf B}_{x}^{\\star }-\\bar{\\bf B}_{x}\\bar{\\bf B}_{z}^{\\star }),\\nonumber \\\\{\\bf B}_z^{(M)}=-\\mathrm {i}(\\bar{\\bf B}_{x}\\bar{\\bf B}_{y}^{\\star }-\\bar{\\bf B}_{y}\\bar{\\bf B}_{x}^{\\star }).\\end{eqnarray}It follows that the magnetic field {\\bf B}^{(M)}_{\\theta } is non vanishing in general.\\end{array}\\right.For \\mu =t and \\mu \\in \\lbrace x,y,z\\rbrace we have instead\\begin{eqnarray}\\lbrace \\partial _{t},\\partial _{\\mu }\\rbrace =\\left(\\begin{array}{cc}0&0\\\\0&\\mathrm {i}\\sigma _\\mu (\\bar{\\bf B}_{t}{\\bar{\\bf B}}_{\\mu }^{\\star }+{\\bar{\\bf B}}_{\\mu }{\\bar{\\bf B}}_{t}^{\\star }).\\end{array}\\right)\\end{eqnarray}The commutator of the Dirac operators in the spatial directions \\mu ,\\nu \\in \\lbrace x,y,z\\rbrace give\\begin{eqnarray}[\\partial _{\\mu },\\partial _{\\nu }]=-\\mathrm {i}\\left(\\begin{array}{cc}0&0\\\\0& 
\\epsilon _{\\mu \\nu \\theta }\\sigma ^{\\theta }(\\bar{\\bf B}_{\\mu }{\\bar{\\bf B}}_{\\nu }^{\\star }+{\\bar{\\bf B}}_{\\nu }{\\bar{\\bf B}}_{\\mu }^{\\star }).\\end{array}\\right)\\end{eqnarray}The commutator of the Dirac operators in the temporal direction t and in the spatial directions \\mu \\in \\lbrace x,y,z\\rbrace gives\\begin{eqnarray}[\\partial _{t},\\partial _{\\mu }]=\\mathrm {i}\\left(\\begin{array}{cc}0&0\\\\0&\\sigma _\\mu (\\bar{\\bf B}_{t}{\\bar{\\bf B}}_{\\mu }^{\\star }-{\\bar{\\bf B}}_{\\mu }{\\bar{\\bf B}}_{t}^{\\star })\\end{array}\\right).\\end{eqnarray}The commutators [\\partial _{\\mu },\\partial _{\\nu }] with \\mu ,\\nu \\in \\lbrace t,x,y,z\\rbrace will be interpreted as the curvature tensor of our theory and their contraction will be used in Sec.", "\\ref {Sec:metric} to formulate the action for the metric degrees of freedom.Note that while the commutators of the Dirac operators do not vanish, we have however\\begin{eqnarray}\\partial _\\rho [\\partial _\\mu ,\\partial _\\nu ]=0,\\end{eqnarray}for \\rho \\ne \\mu ,\\rho \\ne \\nu .", "}\\section {Non-relativistic limit of the Dirac equation}\\end{array}In this section our goal is to carry out the non-relativistic limit \\cite {ryder1996quantum} for the Dirac equation for \\right.m>0.", "The main difference with the textbook calculation is that the topological spinor has now a geometrical interpretation.", "As we will see this difference will carry notable consequences for the non-relativist limit of our equation.", "What we will see is that in the non-relativistic limit the wave function defined on the spatial section of the fiber boundle obeys the Schrödinger equation while the wave function defined on the nodes obeys the Klein-Gordon equation and is not negligible.In order to discuss the non-relativistic limit of our Dirac equation, let us consider the case in which {\\bf L}_t commutes with {\\bf L}_{x}+{\\bf L}_{y}+{\\bf L}_z but the spatial graph Laplacians do not commute with each other, due to the non-trivial metric matrices, i.e.", "[{\\bf L}_{\\mu },{\\bf L}_{\\nu }]\\ne 0 for \\mu ,\\nu \\in \\lbrace x,y,z\\rbrace .", "We consider therefore a Dirac wave-function associated with the eigenvalues E of the operator \\mathrm {i}\\partial _t with E\\sim m and we characterize the eigenvectors of the Dirac equation.The Dirac equation (\\ref {dirac}) can be expressed as and equation for \\chi and \\psi with \\Psi =(\\chi ,\\psi )^{\\top } asis given by\\begin{eqnarray}&&\\sum _{\\mu \\lbrace x,y,z\\rbrace }\\sigma _{\\mu }(\\bar{\\bf B}_{\\mu }^{\\star })\\psi +\\mathrm {i}\\sigma _0(\\bar{\\bf B}_{t}^{\\star })\\psi -m\\chi =0,\\\\&&-\\mathrm {i}\\sigma _{0}(\\bar{\\bf B}_{t})\\chi -m\\psi _t=0,\\\\&&-\\sigma _{\\mu }(\\bar{\\bf B}_{\\mu })\\chi -m\\psi _{\\mu }=0,\\end{eqnarray}where \\psi _\\mu indicates the vector constructed from \\psi by retaining only the elements defined on links of direction \\mu for any possible choice of \\mu \\in \\lbrace t,x,y,z\\rbrace .Following an argument similar to the one provided in \\cite {bianconi2021topological}, i.e.", "substituting Eq.", "(\\ref {d_psi_mu}) and Eq.", "(\\ref {d_psi_t}) into Eq.", "(\\ref {d_chi}) it is easy to show that \\chi satisfies the Klein-Gordon equation\\begin{eqnarray}\\sigma _{0}(\\square )\\chi =m^2\\chi .\\end{eqnarray}where \\square ={\\bf L}_t-\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }{\\bf L}_{\\mu }Therefore \\chi is an eigenvector of \\sigma _0(\\square ).", "Since the d^{\\prime }Alabertian commutes with {\\bf L}_t, \\chi can also 
be chosen to be an eigenvector of \\sigma _0({\\bf L}_t) with eigenvalue E^2. If we now consider Eq.", "(\\ref {d_psi_t}) we obtain\\begin{eqnarray}\\psi _{t}=-\\frac{1}{m}\\mathrm {i}\\sigma _{0}(\\bar{\\bf B}_{t})\\chi \\end{eqnarray}Hence, using this relation we obtain\\begin{eqnarray}\\mathrm {i}\\sigma _0(\\bar{\\bf B}_{t}^{\\star })\\psi =\\frac{1}{m}\\sigma _0({\\bf L}_t)\\chi =\\frac{E^2}{m}\\chi .\\end{eqnarray}Inserting this equation in Eq.", "(\\ref {d_chi}) we obtain\\begin{eqnarray}-\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }\\sigma _{\\mu }(\\bar{\\bf B}_{\\mu }^{\\star })\\psi =\\frac{E^2-m^2}{m}\\chi ,\\end{eqnarray}This is the relation between the components \\chi and \\psi ; for E\\sim m, \\chi =O(1/(E-m)).", "For E\\sim m we have E^2-m^2\\simeq 2m(E-m) and hence\\begin{eqnarray}\\chi \\simeq -\\frac{1}{2(E-m)}\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }\\sigma _{\\mu }(\\bar{\\bf B}_{\\mu }^{\\star })\\psi .\\end{eqnarray}It follows that, inserting this expression into Eq.", "(\\ref {d_psi_mu}), we obtain that the spatial sector of the wave function on the fiber bundle \\psi _s=\\psi _x+\\psi _y+\\psi _z obeys\\begin{eqnarray}(E-m)\\psi _s=\\frac{1}{2m}\\sum _{\\mu ,\\nu \\in \\lbrace x,y,z\\rbrace }\\sigma _{\\mu }(\\bar{\\bf B}_{\\mu })\\sigma _{\\nu }(\\bar{\\bf B}_{\\nu }^{\\star })\\psi _s.\\end{eqnarray}Therefore we obtain that \\psi follows the Schrödinger equation\\begin{eqnarray}(E-m)\\psi _s=\\frac{1}{2m}\\left[\\sum _{\\mu }\\sigma _0({\\bf L}_\\mu )-\\sum _{\\theta }\\sigma _{\\theta }({\\bf B}_\\theta ^{\\bf (M)})\\right]\\psi _s\\end{eqnarray}Therefore, in the non-relativistic limit, the component on the fiber bundle follows the Schrödinger equation with gyromagnetic constant 2 while the component on the nodes obeys the Klein-Gordon equation.", "Note that the node component \\chi and the component on the fiber bundle \\psi are related by Eq.", "(\\ref {nodes_boundle}).", "Therefore \\chi is not negligible with respect to \\psi . We believe that these results might be related to the supersymmetric interpretation of the Dirac operator \\cite {post2009first}.\\section {Weyl equation for topological spinors}The Weyl equation for topological spinors is obtained when the mass is zero, i.e.", "m=0. In this case the topological Weyl equation reads\\begin{eqnarray}&&\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }{\\sigma }_{\\mu }(\\bar{\\bf B}_{\\mu }^{\\star })\\psi +\\mathrm {i}{\\sigma }_0(\\bar{\\bf B}_t^{\\star })\\psi =0 \\nonumber \\\\&&-\\sigma _{\\mu }(\\bar{\\bf B}_{\\mu })\\chi =0, \\\\&&-\\mathrm {i}\\sigma _{0}(\\bar{\\bf B}_{t})\\chi =0.\\end{eqnarray}This equation implies that \\psi satisfies Eq.", "(\\ref {weyl_psi}) and that \\chi belongs to the intersection of the kernels of \\sigma _{\\mu }(\\bar{\\bf B}_{\\mu }), i.e.\\begin{eqnarray}\\chi \\in \\cap _{\\mu \\in \\lbrace t,x,y,z\\rbrace }\\ \\mbox{ker}\\sigma _{\\mu }(\\bar{\\bf B}_{\\mu })\\end{eqnarray}\\section {Global transformations}We observe that the Dirac action \\mathcal {S}_{D} is invariant under a global phase transformation\\begin{eqnarray}\\Psi \\rightarrow e^{-\\mathrm {i}e \\Lambda {\\bf I}_{M}} \\Psi ,\\end{eqnarray}where \\Lambda is taken to be an arbitrary real constant and M=2N+2L. In the first order approximation for \\Lambda \\ll 1 we have \\Psi \\rightarrow \\Psi +\\delta \\Psi with\\begin{eqnarray}\\delta \\Psi =-\\mathrm {i}e \\Lambda \\Psi \\end{eqnarray}We have that the Dirac action \\mathcal {S}_D is invariant under this transformation, i.e.\\begin{eqnarray}\\delta \\mathcal 
{S}_D=-\\mathrm {i}e\\bar{\\Psi }(\\mathrm {i}\\partial \\hspace{-5.69054pt}\\slash -m) \\Lambda \\Psi +\\mathrm {i}e\\bar{\\Psi }\\Lambda (\\mathrm {i}\\partial \\hspace{-5.69054pt}\\slash -m)\\Psi =0\\end{eqnarray}Now \\partial \\hspace{-5.69054pt}\\slash =\\gamma ^{\\mu }(d_{\\mu }+d_{\\mu }^{\\star }) with d_{\\mu }+d_{\\mu }^{\\star } obeying the anti-commutator relations with the \\gamma _{\\mu } matrix given by Eq.", "(\\ref {anti_g_d}).", "Therefore we have\\begin{eqnarray}\\delta \\mathcal {S}_D=-e\\Lambda \\left[\\left\\langle {\\partial \\hspace{-5.69054pt}\\slash \\bar{\\Psi }^{\\top }, \\Psi }\\right\\rangle +\\left\\langle {\\bar{\\Psi }^{\\top }, \\partial \\hspace{-5.69054pt}\\slash \\Psi }\\right\\rangle \\right]=0,\\end{eqnarray}where the scalar product \\left\\langle {\\cdot ,\\cdot }\\right\\rangle between two topological spinors is taken to be the standard L^2 norm. Hence we obtain the following relation\\begin{eqnarray}\\left\\langle {\\partial \\hspace{-5.69054pt}\\slash \\bar{\\Psi }, \\Psi }\\right\\rangle +\\left\\langle {\\bar{\\Psi } , \\partial \\hspace{-5.69054pt}\\slash \\Psi }\\right\\rangle =0.\\end{eqnarray}In the next section we will consider gauge transformations which make the Dirac action also invariant under local transformations of the spinor.\\section {Gauge transformations}In this paragraph we will construct a topological (Abelian) transformation in order to guarantee that the action \\mathcal {S}_D is invariant under the local U(1) symmetry\\begin{eqnarray}\\Psi \\rightarrow e^{-\\mathrm {i}e\\Lambda } \\Psi .\\end{eqnarray}where \\Lambda is an arbitrary diagonal M\\times M matrix with M=2N+2L.", "We will also consider non-Abelian transformations acting on each local spinor {\\psi }_{i,\\pm }.", "Let us indicate with {\\bf \\hat{J}}_{i}^{\\alpha \\beta } the generator of the Lorentz group SL(2,C) acting on the axis ({\\alpha , \\beta }) at node i, which can be interpreted as a transformation of the fiber bundle defined at node i. We will consider the set of transformations in which the local spinors obey\\begin{eqnarray}{\\psi }_{i\\pm }\\rightarrow e^{-\\mathrm {i}g {\\bf \\hat{J}}_i^{{\\alpha \\beta }}\\Theta _{i\\alpha \\beta }} {\\psi }_{i\\pm }\\end{eqnarray}where for each value of i, \\Theta _{{i\\alpha \\beta }} are a set of six 0-cochains associated to the node i of the network. In this paragraph we will derive the gauge transformations on the metric matrices {\\bf G}_{[0]} and {\\bf G}_{[1]} that will ensure the invariance of the action \\mathcal {S}_D under these transformations.", "\\subsection {Abelian Transformations}Let us study the action \\mathcal {S}_D under the Abelian transformation\\begin{eqnarray}\\Psi \\rightarrow e^{-ie\\Lambda }\\Psi ,\\end{eqnarray}where we assume that \\Lambda is an arbitrary diagonal M\\times M matrix with block structure\\begin{eqnarray}e^{-\\mathrm {i}e\\Lambda }=\\left(\\begin{array}{cccc}e^{-\\mathrm {i}e\\Lambda _N}&0&0&0\\\\0&e^{-\\mathrm {i}e\\Lambda _N}&0&0\\\\0&0&e^{-\\mathrm {i}e\\Lambda _L}&0\\\\0&0&0&e^{-\\mathrm {i}e\\Lambda _L}\\end{array}\\right)\\end{eqnarray}where \\Lambda _N and \\Lambda _L are diagonal matrices of size N\\times N and L\\times L respectively.", "By considering the action \\mathcal {S}_D defined in Eq.", "(\\ref {L_dirac}) we observe that the action is clearly invariant under the Abelian transformation, provided the Dirac operator transforms according to\\begin{eqnarray}\\partial \\hspace{-5.69054pt}\\slash \\rightarrow e^{-\\mathrm {i}e\\Lambda 
}\\partial \\hspace{-5.69054pt}\\slash e^{\\mathrm {i}e\\Lambda },\\end{eqnarray}which for small \\Lambda implies the gauge transformation\\begin{eqnarray}\\partial \\hspace{-5.69054pt}\\slash \\rightarrow \\partial \\hspace{-5.69054pt}\\slash -\\mathrm {i}eA \\hspace{-5.69054pt}\\slash ,\\end{eqnarray}with\\begin{eqnarray}A \\hspace{-5.69054pt}\\slash =[\\partial \\hspace{-5.69054pt}\\slash ,\\Lambda ].\\end{eqnarray}This implies that the coboundary operator and the metric matrices are transformed as\\begin{eqnarray}\\bar{\\bf B}_{\\mu }&\\rightarrow & e^{-ie\\Lambda _L}\\bar{\\bf B}_{\\mu }e^{\\mathrm {i}e\\Lambda _N},\\nonumber \\\\{\\bf G}_{[0]}&\\rightarrow & {\\bf G}_{[0]}e^{2\\mathrm {i}e\\Lambda _N},\\nonumber \\\\{\\bf G}_{[1]}&\\rightarrow &{\\bf G}_{[1]}e^{2\\mathrm {i}e\\Lambda _L},\\end{eqnarray}while before and after the transformation we have\\begin{eqnarray}\\bar{\\bf B}^{\\star }_{\\mu }=\\bar{\\bf B}_{\\mu }^{\\dag }.\\end{eqnarray}This invariance implies that the graph Laplacian matrices can be expressed as {\\bf L}_{\\mu }={\\bf \\bar{B}}_{\\mu }^{\\star }{\\bf \\bar{B}}_{\\mu } both before and after the transformation.", "Moreover the trace of the product of graph Laplacians (including also graph Laplacians of different directions) is invariant under these transformations.", "\\subsection {Non-Abelian transformations}Let us study the action \\mathcal {S}_D under the non-Abelian transformation\\begin{eqnarray}\\Psi \\rightarrow {S}(\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\Psi ,\\end{eqnarray}where we assume that the M\\times M matrix {S}(\\lbrace \\Theta _{\\alpha \\beta }\\rbrace ) has block structure\\begin{eqnarray}{S}(\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )=\\left(\\begin{array}{cccc}1&0&0&0\\\\0&1&0&0\\\\0&0&e^{-\\mathrm {i}gJ^{\\alpha \\beta }_i\\Theta _{\\alpha \\beta }^i}&0\\\\0&0&0&e^{-\\mathrm {i}gJ^{\\alpha \\beta }_i\\Theta _{\\beta \\alpha }^i}\\end{array}\\right)\\end{eqnarray}where for a given pair of \\alpha ,\\beta \\in \\lbrace t,x,y,z\\rbrace with \\alpha \\ne \\beta , J_{i}^{\\alpha \\beta } are L\\times L matrices that act on \\psi _{\\pm }.", "These matrices J_{i}^{\\alpha \\beta } are obtained from the 4\\times 4 generators { \\hat{J}}_i^{\\alpha \\beta } of the Lorentz transformations of \\psi _{i\\pm } by acting trivially on all the other components of \\psi _{\\pm } that are not localized on node i, and \\Theta _{\\alpha \\beta }=(\\Theta _{\\alpha \\beta }^1,\\Theta _{\\alpha \\beta }^2,\\ldots , \\Theta _{\\alpha \\beta }^N) determine the gauge, with \\Theta ^i_{\\alpha \\beta }\\in \\mathbb {R}. We observe that the action \\mathcal {S}_D is clearly invariant under the non-Abelian transformations, provided the Dirac operator transforms according to\\begin{eqnarray}\\partial \\hspace{-5.69054pt}\\slash \\rightarrow {S}(\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\partial \\hspace{-5.69054pt}\\slash {S}^{-1}(\\lbrace \\Theta _{\\alpha \\beta }\\rbrace ),\\end{eqnarray}which for small \\Theta implies the gauge transformation\\begin{eqnarray}\\partial \\hspace{-5.69054pt}\\slash \\rightarrow \\partial \\hspace{-5.69054pt}\\slash -\\mathrm {i}gB \\hspace{-5.69054pt}\\slash ,\\end{eqnarray}with\\begin{eqnarray}B \\hspace{-5.69054pt}\\slash =[\\partial \\hspace{-5.69054pt}\\slash ,\\mathcal {J}^{\\alpha \\beta }_i\\Theta _{\\alpha \\beta }^i],\\end{eqnarray}where \\mathcal {J}_i^{\\alpha \\beta } is an M\\times M matrix of block structure\\begin{eqnarray}\\mathcal {J}_i^{\\alpha \\beta 
}=\\left(\\begin{array}{cccc}1&0&0&0\\\\0&1&0&0\\\\0&0&J_i^{\\alpha \\beta }&0\\\\0&0&0&J_i^{\\alpha \\beta }\\end{array}\\right).\\end{eqnarray}This implies that the coboundary operator and the metric matrices are transformed as\\begin{eqnarray}\\bar{\\bf B}_{\\mu }&\\rightarrow & e^{-\\mathrm {i}gJ^{\\alpha \\beta }_i\\Theta _{\\alpha \\beta }^i}\\bar{\\bf B}_{\\mu },\\nonumber \\\\{\\bf G}_{[0]}&\\rightarrow & {\\bf G}_{[0]},\\nonumber \\\\{\\bf G}_{[1]}&\\rightarrow &{\\bf G}_{[1]}e^{2\\mathrm {i}gJ^{\\alpha \\beta }_i\\Theta _{\\alpha \\beta }^i},\\end{eqnarray}while we have as before the condition that the transformation does not modify Eq.", "(\\ref {HS}) and hence the graph Laplacians are always expressed as {\\bf L}_{\\mu }={\\bf \\bar{B}}_{\\mu }^{\\star }{\\bf \\bar{B}}_{\\mu }.", "Note that the trace of products of graph Laplacians is invariant under these transformations.", "\\subsection {Combining Abelian and Non-Abelian transformations}Let us now consider the following combined transformation\\begin{eqnarray}\\Psi \\rightarrow \\mathcal {C}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\Psi ,\\end{eqnarray}where, using the same notation as in the previous two paragraphs, we assume that the M\\times M matrix \\mathcal {C}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace ) has block structure\\begin{eqnarray}\\hspace*{-71.13188pt}\\mathcal {C}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )=\\left(\\begin{array}{cccc}e^{-\\mathrm {i}e\\Lambda _N}&0&0&0\\\\0&e^{-\\mathrm {i}e\\Lambda _N}&0&0\\\\0&0&e^{-\\mathrm {i}(e\\Lambda _L+gJ^{\\alpha \\beta }_i\\Theta _{\\alpha \\beta }^i)}&0\\\\0&0&0&e^{-\\mathrm {i}(e\\Lambda _L+gJ^{\\alpha \\beta }_i\\Theta _{\\beta \\alpha }^i)}\\end{array}\\right).\\end{eqnarray}We observe that the action \\mathcal {S}_D is invariant under the transformation given by Eq.", "(\\ref {dirac_trans_c}), provided the Dirac operator transforms according to\\begin{eqnarray}\\partial \\hspace{-5.69054pt}\\slash \\rightarrow \\mathcal {C}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\ \\partial \\hspace{-5.69054pt}\\slash \\ \\mathcal {C}^{-1}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace ),\\end{eqnarray}which for infinitesimal transformations implies the gauge transformation\\begin{eqnarray}\\partial \\hspace{-5.69054pt}\\slash \\rightarrow \\partial \\hspace{-5.69054pt}\\slash -\\mathrm {i}eA \\hspace{-5.69054pt}\\slash -\\mathrm {i}gB \\hspace{-5.69054pt}\\slash ,\\end{eqnarray}with A \\hspace{-5.69054pt}\\slash and B \\hspace{-5.69054pt}\\slash defined above.", "The invariance of the action \\mathcal {S}_D under the gauge transformation Eq.", "(\\ref {dirac_transf3}) implies that the coboundary operator and the metric matrices are transformed as\\begin{eqnarray}\\bar{\\bf B}_{\\mu }&\\rightarrow & e^{-\\mathrm {i}e\\Lambda _L-\\mathrm {i}gJ^{\\alpha \\beta }_i\\Theta _{\\alpha \\beta }^i}\\bar{\\bf B}_{\\mu }e^{\\mathrm {i}e\\Lambda _N},\\nonumber \\\\{\\bf G}_{[0]}&\\rightarrow &{\\bf G}_{[0]}e^{2\\mathrm {i}e\\Lambda _N},\\nonumber \\\\{\\bf G}_{[1]}&\\rightarrow &{\\bf G}_{[1]}e^{\\mathrm {i}2(e\\Lambda _L+gJ^{\\alpha \\beta }_i\\Theta _{\\alpha \\beta }^i)}.\\end{eqnarray}Note however that in general \\mathcal {C}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\ne \\mathcal {K}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )=e^{i\\Lambda }\\mathcal {S}(\\lbrace \\Theta _{\\alpha \\beta }\\rbrace ).", "Therefore ensuring invariance of the action under the transformation\\begin{eqnarray}\\Psi 
\\rightarrow \\mathcal {C}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\Psi \\end{eqnarray}is not equivalent to ensuring invariance under the transformations\\begin{eqnarray}\\Psi \\rightarrow \\mathcal {K}(\\Lambda ,\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\Psi =e^{i\\Lambda }\\mathcal {S}(\\lbrace \\Theta _{\\alpha \\beta }\\rbrace )\\Psi \\end{eqnarray}since \\Lambda and \\mathcal {J}_i^{\\alpha \\beta } do not commute in general.\\section {Schwinger-Dyson equations}In this section we aim at deriving the Schwinger-Dyson equations\\cite {peskin2018introduction} for our gauge theory.Hence we consider first the case of Abelian and then the case of non-Abelian transformations.\\subsection {Abelian transformations}Let us consider the local Abelian transformation\\begin{eqnarray}\\Psi \\rightarrow e^{-\\mathrm {i}e\\Lambda } \\Psi .\\end{eqnarray}In the first order approximations these transformations read\\begin{eqnarray}\\Psi \\rightarrow \\Psi -\\mathrm {i}e\\Lambda \\Psi .\\end{eqnarray}Under these transformations the Dirac action \\mathcal {S}_D will change according to\\begin{eqnarray}\\mathcal {S}_D\\rightarrow \\mathcal {S}_D+\\delta \\mathcal {S}_D\\end{eqnarray}where \\delta \\mathcal {S}_D is given by \\begin{eqnarray}\\delta \\mathcal {S}_D&=&- e\\left[\\left\\langle {\\partial \\hspace{-5.69054pt}\\slash {\\bar{\\Psi }}^{\\top },\\Lambda \\Psi }\\right\\rangle +\\left\\langle {\\Lambda {\\bar{\\Psi }}^{\\top },\\partial \\hspace{-5.69054pt}\\slash \\Psi }\\right\\rangle \\right].\\end{eqnarray}Now, indicating with s the generic simplex of the network (being either a node of a link), and taking into consideration that the matrix \\Lambda is diagonal with diagonal elements [\\Lambda ]_{ss}=\\Lambda _s, we obtain\\begin{eqnarray}\\Lambda \\Psi =\\sum _s \\Lambda _s \\Psi _s,\\end{eqnarray}where \\Psi _s is the spinor obtained by \\Psi in which only the element s is retained while all the other elements are zero.Using this expression for \\Lambda \\Psi we can decompose \\delta \\mathcal {S}_D as\\begin{eqnarray}\\delta \\mathcal {S}_D&=&- e \\sum _{s}\\Lambda _s\\left[\\left\\langle {\\partial \\hspace{-5.69054pt}\\slash {\\bar{\\Psi }}^{\\top },\\Psi _s}\\right\\rangle +\\left\\langle {{\\bar{\\Psi }}^{\\top }_s,\\partial \\hspace{-5.69054pt}\\slash \\Psi }\\right\\rangle \\right].\\ \\end{eqnarray}Indicating with r and r^{\\prime } two generic simplices of the network, and with \\Psi _r,\\bar{\\Psi }_{r} the r element of the spinors \\Psi and \\bar{\\Psi } respectively, we observe that the functional integral\\begin{eqnarray}\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)} e^{\\mathrm {i}\\mathcal {S}_D} \\Psi _{r}\\bar{\\Psi }_{r^{\\prime }}^{\\top }\\end{eqnarray}is invariant under the transformation given by Eq.", "(\\ref {trs}).Therefore we have\\begin{eqnarray}0=\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)}e^{\\mathrm {i}\\mathcal {S}_D} \\left\\lbrace \\mathrm {i}\\delta \\mathcal {S}_D\\Psi _{r}\\bar{\\Psi }_{r^{\\prime }}- \\mathrm {i}e\\Lambda _{r}\\Psi _{r}\\bar{\\Psi }_{r^{\\prime }}+\\mathrm {i}e\\Psi _{r}\\Lambda _{r^{\\prime }}\\bar{\\Psi }_{r^{\\prime }}\\right\\rbrace .\\end{eqnarray}Since \\Lambda is arbitrary, we have obtained the Schwinger-Dyson type equations\\begin{eqnarray}\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)}e^{\\mathrm {i}\\mathcal {S}_D}\\left[\\left\\langle 
{\\partial \\hspace{-5.69054pt}\\slash {\\bar{\\Psi }}^{\\top }, \\Psi _{s}}\\right\\rangle +\\left\\langle {{\\bar{\\Psi }}_s^{\\top },\\partial \\hspace{-5.69054pt}\\slash \\Psi }\\right\\rangle \\right]\\bar{\\Psi }_{r^{\\prime }} \\Psi _{r}\\nonumber \\\\=\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)}e^{\\mathrm {i}\\mathcal {S}_D}[-\\delta _{s,r}\\bar{\\Psi }_{r^{\\prime }} \\Psi _{r}+\\delta _{s,r^{\\prime }}\\bar{\\Psi }_{r^{\\prime }} \\Psi _{r}].\\end{eqnarray}\\subsection {Non-Abelian transformations}Let us consider the local non-Abelian transformation\\begin{eqnarray}\\Psi \\rightarrow e^{-\\mathrm {i}g{\\mathcal {J}}_i^{\\alpha \\beta }\\Theta ^i_{\\alpha \\beta }} \\Psi .\\end{eqnarray}In the first order approximation these transformations read\\begin{eqnarray}\\Psi \\rightarrow \\Psi +\\delta \\Psi ,\\end{eqnarray}with\\begin{eqnarray}\\delta \\Psi =-\\mathrm {i}g{\\mathcal {J}}_i^{\\alpha \\beta }\\Theta ^i_{\\alpha \\beta } \\Psi .\\end{eqnarray}Under these transformations the Dirac action \\mathcal {S}_D transforms according to\\begin{eqnarray}\\mathcal {S}_D\\rightarrow \\mathcal {S}_D+\\delta \\mathcal {S}_D\\end{eqnarray}where \\delta \\mathcal {S}_D is given by \\begin{eqnarray}\\delta \\mathcal {S}_D&=&-g \\Theta ^{i}_{\\alpha \\beta } \\left[\\left\\langle {\\partial \\hspace{-5.69054pt}\\slash {\\bar{\\Psi }}^{\\top },\\mathcal {J}_i^{\\alpha \\beta } \\Psi }\\right\\rangle +\\left\\langle {\\mathcal {J}_i^{\\alpha \\beta }{\\bar{\\Psi }}^{\\top },\\partial \\hspace{-5.69054pt}\\slash \\Psi }\\right\\rangle \\right].\\end{eqnarray}Indicating with r=[j,k] and r^{\\prime }=[j^{\\prime },k^{\\prime }] two generic links of the network, and by \\bar{\\Psi }_{r^{\\prime }} and \\Psi _r the corresponding elements of the spinors \\bar{\\Psi } and \\Psi , we observe that the functional integral\\begin{eqnarray}\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)} e^{\\mathrm {i}\\mathcal {S}_D} \\bar{\\Psi }_{r^{\\prime }} \\Psi _{r}\\end{eqnarray}is invariant under the transformation given by Eq.", "(\\ref {trs2}), therefore we have\\begin{eqnarray}\\hspace*{-65.44133pt}0=\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)}e^{\\mathrm {i}\\mathcal {S}_D} \\left\\lbrace \\mathrm {i}\\delta \\mathcal {S}_D\\bar{\\Psi }_{r^{\\prime }} \\Psi _{r}+ \\delta \\bar{\\Psi }_{r^{\\prime }} \\Psi _{r}+\\bar{\\Psi }_{r^{\\prime }} \\delta \\Psi _{r}\\right\\rbrace .\\end{eqnarray}Since \\Theta is arbitrary we obtain the Schwinger-Dyson type equations\\begin{eqnarray}\\hspace*{-62.59605pt}\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)}e^{\\mathrm {i}\\mathcal {S}_D} \\left[\\left\\langle {\\partial \\hspace{-5.69054pt}\\slash {\\bar{\\Psi }}^{\\top },\\mathcal {J}_i^{\\alpha \\beta } \\Psi }\\right\\rangle +\\left\\langle {\\mathcal {J}_i^{\\alpha \\beta }{\\bar{\\Psi }}^{\\top },\\partial \\hspace{-5.69054pt}\\slash \\Psi }\\right\\rangle \\right]\\bar{\\Psi }_{r^{\\prime }} \\Psi _{r}=\\nonumber \\\\\\hspace*{-62.59605pt}=\\int \\mathcal {D}{\\bar{\\Psi }}\\mathcal {D}{\\Psi }\\mathcal {D}{\\bf A}\\mathcal {D}{\\bf A}^{(0)}e^{\\mathrm {i}\\mathcal {S}_D}[\\delta _{ij^{\\prime }}[\\bar{\\Psi }{\\mathcal {J}}_{j^{\\prime }}^{\\alpha \\beta }]_{r^{\\prime }=[j^{\\prime },k]} \\Psi _{r}-\\delta _{ij} \\bar{\\Psi }_{r^{\\prime }}[{\\mathcal {J}}_j^{\\alpha \\beta 
}\\Psi ]_{r=[j,k]}].\\end{eqnarray}\\section {The action for the metric degrees of freedom}\\subsection {General framework}We can interpret the Dirac operators \\partial _{\\mu } as a topological version of a partial derivative in the direction \\mu coupled with a local rotation of the spinor. However, there is a major difference between the Dirac operator and a partial derivative: the Dirac operators associated to different directions do not commute (see discussion in Sec.", "\\ref {Sec:comm} and in Ref.", "\\cite {bianconi2021topological}). Here we define the curvature tensor F_{\\mu \\nu } as\\begin{eqnarray}[\\partial _{\\mu },\\partial _{\\nu }]=\\mathrm {i}F_{\\mu \\nu }.\\end{eqnarray}This tensor depends crucially on the metric matrices {\\bf G}_{[1]}=\\exp ({\\bf A}) and {\\bf G}_{[0]}=\\exp ({\\bf A}^{(0)}), i.e.", "it is determined by the metric fields {\\bf A} and {\\bf A}^{(0)}, where for the moment we consider these two as independent fields (but actually the geometry might impose that they are related). The Bianchi identities are automatically satisfied\\begin{eqnarray}\\partial _{\\rho }F_{\\mu \\nu }+\\partial _{\\mu }F_{\\nu \\rho }+\\partial _{\\nu }F_{\\rho \\mu }=0.\\end{eqnarray}Here we consider the action \\mathcal {S}_G associated with the metric field, given by\\begin{eqnarray}\\mathcal {S}_G= -\\frac{1}{2}\\mbox{Tr} \\left(F_{\\mu \\nu }F^{\\mu \\nu }\\right)&=&-\\mbox{Tr}\\left(\\sum _{\\mu \\ne \\nu }(-1)^{\\delta _{\\mu ,t}+\\delta _{\\nu ,t}}{\\bf L}_{\\mu }{\\bf L}_{\\nu }\\right).\\end{eqnarray}Note that all the transformations considered in the previous section leave invariant the action \\mathcal {S}_G. Additionally, it is also possible to consider the action \\mathcal {S}_G^{\\prime } including higher-order terms in the curvature\\begin{eqnarray}\\mathcal {S}_{G}^{\\prime }&=&\\frac{\\Gamma }{2} \\mbox{Tr}\\sum _{\\mu ,\\nu } (F_{\\mu \\nu })^4=\\Gamma \\sum _{\\mu \\ne \\nu }\\mbox{Tr}\\left({\\bf L}_{\\mu }{\\bf L}_{\\nu }{\\bf L}_{\\mu }{\\bf L}_{\\nu }\\right),\\end{eqnarray}where \\Gamma is a constant. The action \\mathcal {S}_G^{\\prime } is also invariant under all the gauge transformations considered in the previous sections and describes the contribution to the action given by the metric fields around each of the squares of the lattice. Now consider the action \\mathcal {S}=\\mathcal {S}_D+\\mathcal {S}_G+\\mathcal {S}_G^{\\prime } given by\\begin{eqnarray}{\\mathcal {S}}=\\bar{\\Psi } (\\textrm {i}\\partial \\hspace{-5.69054pt}\\slash -m) \\Psi - \\frac{1}{2}\\sum _{\\mu \\ne \\nu }\\mbox{Tr} \\left[(F_{\\mu \\nu })^2-\\Gamma (F_{\\mu \\nu })^4\\right].\\end{eqnarray}The equations of motion are obtained by differentiating \\mathcal {S} with respect to {\\bf A} and {\\bf A}^{(0)}, i.e.", "denoting with A_{\\gamma } the generic element of either {\\bf A} or {\\bf A}^{(0)}.", "Under the assumption that {\\bf A} or {\\bf A}^{(0)} are unconstrained, the equations of motion read\\begin{eqnarray}\\frac{\\partial \\mathcal {S}_D}{\\partial A_{\\gamma }}+\\frac{\\partial (\\mathcal {S}_G+\\mathcal {S}_G^{\\prime })}{\\partial A_{\\gamma }}=0.\\end{eqnarray}Here the first term depends on both the geometric degrees of freedom and the Dirac fields while the second term depends exclusively on the metric degrees of freedom (the A_{\\gamma }). These equations, together with Eq.", "(\\ref {dirac}) and the gauge transformations, define the dynamics of this Dirac gauge theory of the network. Note that Eqs.", "(\\ref {motion}) might be modified to take 
into account geometric constraints between the metric fields {\\bf A} and {\\bf A}^{(0)}.\\subsection {Simple example}The study of the consequences of the equations of motion given by Eq.", "(\\ref {motion}) is beyond the scope of this paper, but in this section their explicit expression is derived in the simple setting in which the metric matrices are diagonal. To this end we assume that the metric matrix {\\bf G}_{[1]}^{-1} is diagonal and real valued, with the diagonal element associated to link [i,j] independent of the orientation of the link, i.e.", "the matrix element [i,j],[i,j] is the same as the matrix element [j,i][j,i].", "If the link [i,j] is in the direction \\mu , we indicate the matrix elements as\\begin{eqnarray}{\\bf G}_{[1]}^{-1}([i,j],[i,j])={\\bf G}_{[1]}^{-1}([j,i],[j,i])=w_{[i,j]}^{\\mu }=e^{A^{\\mu }_{[i,j]}}.\\end{eqnarray}Moreover we assume that {\\bf G}_{[0]} is diagonal with diagonal element associated to node i given by the real number \\begin{eqnarray}{\\bf G}_{[0]}([i][i])=\\frac{1}{w^{(0)}_i}=e^{-A^{(0)}_i}.\\end{eqnarray}For simplicity we assume that {\\bf G}_{[0]} is not constrained by {\\bf G}_{[1]}, but it is expected that the geometry will constrain the choices of the possible matrices {\\bf G}_{[0]} and {\\bf G}_{[1]}, inducing other constraints in the equations of motion (Eq.", "\\ref {motion}). In this setting the graph Laplacian {\\bf L}_{\\mu } has diagonal elements given by\\begin{eqnarray}[{\\bf L}_{\\mu }]_{ii}=2\\frac{w^{\\mu }_{[i,j^{\\prime }]}+w^{\\mu }_{[j^{\\prime \\prime },i]}}{w^{(0)}_i}\\end{eqnarray}where if \\mu =x the node j^{\\prime } has x-coordinate x_{j^{\\prime }}=x_i+1 and all the other coordinates equal to the ones of node i.", "Similarly, node j^{\\prime \\prime } has x-coordinate x_{j^{\\prime \\prime }}=x_i-1 and all the other coordinates equal to the ones of node i.", "Note that here we take periodic boundary conditions on the lattice.", "The generalization of the above expression for general direction \\mu is straightforward. The non-diagonal elements of {\\bf L}_{\\mu } are given instead by\\begin{eqnarray}[{\\bf L}_{\\mu }]_{ij}=-2\\frac{w^{\\mu }_{[i,j]}}{\\sqrt{w^{(0)}_iw^{(0)}_j}},\\end{eqnarray}as long as node j is connected to node i by links of type \\mu . We are interested now in deriving the explicit expression for the terms present in the equation of motion (Eq. ()).", "In the absence of matter fields, i.e.", "when $\\Psi ={\\bf 0}$ , the equation of motion only affects the metric degrees of freedom, and in this limiting case Eq.", "() reduces to $\\frac{\\partial \\left(\\mathcal {S}_G+\\mathcal {S}_{G}^{\\prime }\\right)}{\\partial A_{\\gamma }}=0.$ Let us then calculate in this scenario the partial derivatives $\\frac{\\partial \\mathcal {S}_G}{\\partial A_{\\ell }^{\\mu }}$ and $\\frac{\\partial \\mathcal {S}_G}{\\partial A_{i}^{(0)}}$ .", "Let us consider the explicit expression of $\\mathcal {S}_G$ (Eq. ()) in terms of the elements of the graph Laplacians, i.e.", "$\\hspace{-28.45274pt}\\mathcal {S}_G=-\\sum _{\\mu \\ne \\nu }(-1)^{\\delta _{\\mu ,t}+\\delta _{\\nu ,t}}\\mbox{Tr}\\left({\\bf L}_{\\mu }{\\bf L}_{\\nu }\\right)=-\\sum _{\\mu \\ne \\nu }(-1)^{\\delta _{\\mu ,t}+\\delta _{\\nu ,t}}\\sum _{i=1}^N[{\\bf L}_{\\mu }]_{ii}[{\\bf L}_{\\nu }]_{ii}.$ Using the explicit expression for the matrix elements $[{\\bf L}_{\\mu }]_{ii}$ given by Eq.", "() and considering the link $\\ell =[i,j]$ in the direction $\\mu $ , we obtain $\\frac{1}{4}\\frac{\\partial \\mathcal {S}_G}{\\partial A_{\\ell }^{t}}=-{[\\square -{\\bf L}_{t}]_{ii}} 
e^{A^{t}_{\\ell }+A_i^{(0)}}-{[\\square -{\\bf L}_{t}]_{jj}}e^{A^{t}_{\\ell }-A_j^{(0)}},$ and for $\\mu \\in \\lbrace x,y,z\\rbrace $ , $\\frac{1}{4}\\frac{\\partial \\mathcal {S}_G}{\\partial A_{\\ell }^{\\mu }}={[\\square +{\\bf L}_{\\mu }]_{ii}} e^{A^{\\mu }_{\\ell }-A^{(0)}_i}+{[\\square +{\\bf L}_{\\mu }]_{jj}} e^{A^{\\mu }_{\\ell }-A_j^{(0)}}.$ Moreover the partial derivative of $\\mathcal {S}_G$ with respect to $A^{(0)}_i$ is instead given by $\\frac{1}{2}\\frac{\\partial \\mathcal {S}_G}{\\partial A_{i}^{(0)}}=\\sum _{\\mu \\ne \\nu }(-1)^{\\delta _{\\mu ,t}+\\delta _{\\nu ,t}}[{\\bf L}_{\\mu }]_{ii}[{\\bf L}_{\\nu }]_{ii}.$ If $\\Gamma \\ne 0$ we need to consider the also the action $\\mathcal {S}_{G}^{\\prime }$ given by Eq.", "() which involves terms of the type $\\mbox{Tr}({\\bf L}_{\\mu }{\\bf L}_{\\nu }{\\bf L}_{\\mu }{\\bf L}_{\\nu })$ .", "The explicit expression of this trace in terms of the elements of the graph Laplacian matrices is given by $\\hspace*{-42.67912pt}\\mbox{Tr}({\\bf L}_{\\mu }{\\bf L}_{\\nu }{\\bf L}_{\\mu }{\\bf L}_{\\nu })&=&\\sum _{i,j,i^{\\prime }j^{\\prime }\\in SQ^{\\mu \\nu }_i}[{\\bf L}_{\\mu }]_{ij}[{\\bf L}_{\\nu }]_{ji^{\\prime }}[{\\bf L}_{\\mu }]_{i^{\\prime }j^{\\prime }}[{\\bf L}_{\\nu }]_{j^{\\prime }i}+\\sum _{i,j|i\\ne j}[{\\bf L}_{\\mu }]_{ij}[{\\bf L}_{\\nu }]_{jj}[{\\bf L}_{\\mu }]_{ji}[{\\bf L}_{\\nu }]_{ii}\\nonumber \\\\&&+\\sum _{i,j|i\\ne j}[{\\bf L}_{\\mu }]_{ii}[{\\bf L}_{\\nu }]_{ij}[{\\bf L}_{\\mu }]_{jj}[{\\bf L}_{\\nu }]_{ji}+\\sum _{i}([{\\bf L}_{\\mu }]_{ii}[{\\bf L}_{\\nu }]_{ii})^2,$ where the first contribution is a contribution around the squares $SQ^{\\mu \\nu }_i$ passing through node $i$ and with links in direction $\\mu $ and $\\nu $ with $\\mu \\ne \\nu $ .", "From this equation and from the explicit expression of the matrix element of the graph Laplacian in terms of $A^{\\mu }_{\\ell }$ and $A^{(0)}_i$ (Eq.", "( and Eq.", "()) we can calculate the expression for the partial derivatives $\\frac{\\partial \\mathcal {S}_G^{\\prime }}{\\partial A^{\\mu }_{\\ell }}$ and $\\frac{\\partial \\mathcal {S}_G^{\\prime }}{\\partial A^{(0)}_{i}}.$ Considering the link $\\ell =[i,j]$ in direction $\\mu $ we obtain $\\hspace*{-42.67912pt}\\frac{1}{\\Gamma }\\frac{\\partial \\mathcal {S}_G^{\\prime }}{\\partial A^{\\mu }_{\\ell }}&=&\\sum _{\\nu \\ne \\mu }\\left\\lbrace 2\\sum _{i^{\\prime }j^{\\prime }\\in SQ^{\\mu \\nu }_{ij}}[{\\bf L}_{\\mu }]_{ij}[{\\bf L}_{\\nu }]_{ji^{\\prime }}[{\\bf L}_{\\mu }]_{i^{\\prime }j^{\\prime }}[{\\bf L}_{\\nu }]_{j^{\\prime }i}+2[{\\bf L}_{\\mu }]_{ij}[{\\bf L}_{\\nu }]_{jj}[{\\bf L}_{\\mu }]_{ji}[{\\bf L}_{\\nu }]_{ii}\\right.\\nonumber \\\\&&+4\\sum _{j^{\\prime }\\ne i}e^{A_{ij}^{\\mu }-A_i^{(0)}}[{\\bf L}_{\\nu }]_{ij^{\\prime }}[{\\bf L}_{\\mu }]_{j^{\\prime }j^{\\prime }}[{\\bf L}_{\\nu }]_{j^{\\prime }i}+4\\sum _{j^{\\prime }\\ne j}e^{A_{ij}^{\\mu }-A_j^{(0)}}[{\\bf L}_{\\nu }]_{jj^{\\prime }}[{\\bf L}_{\\mu }]_{j^{\\prime }j^{\\prime }}[{\\bf L}_{\\nu }]_{j^{\\prime }j}\\nonumber \\\\&&\\left.+4[{\\bf L}_{\\mu }]_{ii}([{\\bf L}_{\\nu }]_{ii})^2e^{A_{[ij]}^{\\mu }-A^{(0)}_i}+4[{\\bf L}_{\\mu }]_{jj}([{\\bf L}_{\\nu }]_{jj})^2e^{A_{[ij]}^{\\mu }-A^{(0)}_j}\\right\\rbrace .$ Furthermore we obtain $&&\\hspace*{-71.13188pt}\\frac{1}{\\Gamma }\\frac{\\partial \\mathcal {S}_G^{\\prime }}{\\partial A^{(0)}_{i}}=-\\sum _{\\nu \\ne \\mu }\\left\\lbrace 2\\sum _{j,i^{\\prime },j^{\\prime }\\in SQ^{\\mu \\nu }_{i}}\\Big ([{\\bf L}_{\\mu }]_{ij}[{\\bf L}_{\\nu }]_{ji^{\\prime }}[{\\bf L}_{\\mu 
}]_{i^{\\prime }j^{\\prime }}[{\\bf L}_{\\nu }]_{j^{\\prime }i}+[{\\bf L}_{\\nu }]_{ij}[{\\bf L}_{\\mu }]_{ji^{\\prime }}[{\\bf L}_{\\nu }]_{i^{\\prime }j^{\\prime }}[{\\bf L}_{\\mu }]_{j^{\\prime }i}\\Big )\\right.\\nonumber \\\\&&\\hspace*{-56.9055pt}\\left.4\\sum _{j\\ne i}[{\\bf L}_{\\mu }]_{ij}[{\\bf L}_{\\nu }]_{jj}[{\\bf L}_{\\mu }]_{ji}[{\\bf L}_{\\nu }]_{ii}+4\\sum _{j\\ne i}[{\\bf L}_{\\mu }]_{ii}[{\\bf L}_{\\nu }]_{ij}[{\\bf L}_{\\mu }]_{jj}[{\\bf L}_{\\nu }]_{ji}+4([{\\bf L}_{\\mu }]_{ii}[{\\bf L}_{\\nu }]_{ii})^2\\right\\rbrace ,$ where $SQ^{\\mu \\nu }_{ij}$ indicates any of the two plaquettes of direction $\\mu \\nu $ having link $[i,j]$ as one of their links.", "Finally if the matter field $\\Psi $ is non-zero we will need to consider the equation of motion given by Eq.", "() which includes also the the partial derivatives of the Dirac action $\\frac{\\partial \\mathcal {S}_D}{\\partial A^{\\mu }_{\\ell }}$ and $\\frac{\\partial \\mathcal {S}_D}{\\partial A^{(0)}_{i}}.$ Let us notice that the Dirac action $\\mathcal {S}_{D}$ can be expressed as $\\hspace*{-56.9055pt}\\mathcal {S}_{D}&=&\\chi ^{\\dag }{\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }\\bar{\\mathcal {B}}_{\\mu }^{\\star }\\psi }+\\psi ^{\\dag }\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }\\bar{\\mathcal {B}}_{\\mu }\\chi +\\textrm {i}\\chi ^{\\dag }\\bar{\\mathcal {B}}_{t}^{\\star }\\psi +\\textrm {i}\\psi ^{\\dag }{\\bar{\\mathcal {B}}_{t}\\chi }-m({\\chi }^{\\dag }\\chi -\\psi ^{\\dag }\\psi ).$ Therefore $2\\frac{\\partial \\mathcal {S}_{D}}{\\partial A_{\\ell }^{\\mu }}=\\chi ^{\\dag }\\bar{\\mathcal {B}}_{\\mu }^{\\star }\\psi _{\\ell }+\\psi ^{\\dag }_{\\ell }\\bar{\\mathcal {B}}_{\\mu }\\chi ,$ where $\\psi _{\\ell }$ is the $2L$ dimensional vector obtained by $\\psi $ only keeping the elements $\\psi _{[i,j]\\pm }$ and $\\psi _{[j,i]\\pm }$ .", "Finally, the derivative of $\\mathcal {S}_D$ with respect to $A_i^{(0)}$ is given by $\\hspace{-42.67912pt}-2\\frac{\\partial \\mathcal {S}_{D}}{\\partial A_{i}^{(0)}}=\\chi ^{\\dag }_i\\left[\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }\\bar{\\mathcal {B}}_{\\mu }^{\\star }+\\textrm {i}\\bar{\\mathcal {B}}_{t}^{\\star }\\right]\\psi +\\psi ^{\\dag }\\left[\\sum _{\\mu \\in \\lbrace x,y,z\\rbrace }\\bar{\\mathcal {B}}_{\\mu }+\\textrm {i}\\bar{\\mathcal {B}}_{t}\\right]\\chi _i,$ where $\\chi _i$ is the $2N$ dimensional vector obtained from $\\chi $ by considering only the elements corresponding to node $i$ ." 
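As an illustration of the diagonal-metric construction above, the following short Python sketch (an illustrative example of ours, not part of the original derivation) builds the weighted graph Laplacians ${\\bf L}_{\\mu }$ on a small periodic lattice from node weights $w^{(0)}_i$ and link weights $w^{\\mu }_{[i,j]}$ , and checks numerically that links of different directions connect disjoint node pairs, so that $\\mbox{Tr}({\\bf L}_{x}{\\bf L}_{y})$ reduces to the sum of products of diagonal elements used for $\\mathcal {S}_G$ above. The lattice size, the random weights, and the convention of storing the weight of the forward link leaving node i in wx[i] and wy[i] are assumptions made only for this sketch.

import numpy as np

rng = np.random.default_rng(0)
Nx, Ny = 4, 4
N = Nx * Ny
idx = lambda ix, iy: (ix % Nx) * Ny + (iy % Ny)   # periodic boundary conditions

w0 = np.exp(rng.normal(size=N))   # node weights w^(0)_i
wx = np.exp(rng.normal(size=N))   # weight of the x-link leaving node i
wy = np.exp(rng.normal(size=N))   # weight of the y-link leaving node i

def laplacian(w_link, direction):
    L = np.zeros((N, N))
    for ix in range(Nx):
        for iy in range(Ny):
            i = idx(ix, iy)
            jf = idx(ix + 1, iy) if direction == 'x' else idx(ix, iy + 1)   # forward neighbour j'
            jb = idx(ix - 1, iy) if direction == 'x' else idx(ix, iy - 1)   # backward neighbour j''
            # diagonal element: [L_mu]_ii = 2 (w^mu_[i,j'] + w^mu_[j'',i]) / w^(0)_i
            L[i, i] = 2.0 * (w_link[i] + w_link[jb]) / w0[i]
            # off-diagonal elements: [L_mu]_ij = -2 w^mu_[i,j] / sqrt(w^(0)_i w^(0)_j)
            L[i, jf] = -2.0 * w_link[i] / np.sqrt(w0[i] * w0[jf])
            L[i, jb] = -2.0 * w_link[jb] / np.sqrt(w0[i] * w0[jb])
    return L

Lx, Ly = laplacian(wx, 'x'), laplacian(wy, 'y')
# x-links and y-links connect different node pairs, hence the trace of the
# product only picks up the diagonal elements, as used for S_G above.
print(np.trace(Lx @ Ly), np.sum(np.diag(Lx) * np.diag(Ly)))

The two printed numbers agree up to floating-point error, which supports the reduction of $\\mbox{Tr}({\\bf L}_{\\mu }{\\bf L}_{\\nu })$ to a sum over diagonal elements used in the derivation of the equations of motion.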
], [ "Conclusions", "In conclusion in this work we are proposing a gauge theory where the matter fields are defined on both the nodes and the links of the network, similar to the classical situation in which the state of the particle is specified by both position and velocity.", "The matter field can be treated by an action making use of the discrete Dirac operator.", "This operator, defined in Ref.", "[1] for $3+1$ dimensional lattices is here extended to treat directed and weighted links.", "In particular we consider the case in which the network is associated with metric matrices ${\\bf G}_{[0]}$ and ${\\bf G}_{[1]}$ as it is usual in algebraic topology.", "However while in applied topology these matrices are typically considered diagonal, here we assume that in general they are not diagonal and that their matrix elements constitute the geometrical degree of freedom associated to the network.", "Our work shows that the these metric matrices are determining the electromagnetic field associated to this theory.", "Indeed the non-relativistic limit of the Dirac equation confirms our interpretation of the anticommutator of the spatial directional Dirac operators as the magnetic field.", "Moreover the non-relativistic limit of the Dirac equation allows us to draw the following important additional conclusions: in this limit, the wave function of the electron defined on the links (vector boundle) follows the Schrödinger equation of the electron, with the correct giromagnetic moment; the wave function defined on the nodes, instead follows the Klein-Gordon equation and it is not negligible.", "These findings might lead to observable differences between the non-relativistic limit of the traditional Dirac equation and possibly might be related to super-symmetric models.", "We then consider the gauge transformation acting on the Dirac spinor and the metric matrices.", "We then consider the specific case in which the Dirac topological fields interacts with the underlying degrees of freedom of the network geometry.", "The action associated to the metric field can be constructed contracting the curvature tensor of our theory, which is defined as the non-vanishing commutator of the directional Dirac operators associated to different directions.", "The resulting gauge transformations of metric and matter fields are discussed and the equations of motion are derived in the specific example in which the metric fields are real and diagonal.", "This works can be extended in different directions.", "First of all our approach has focused only on networks, i.e.", "the topological spinor considered in this work is only defined on nodes and links.", "However it would be interesting to explore in the future whether this framework can be extended also to topological spinors defined on the higher-order cells of the lattice such as the squares, and the cubes and if this extension will bring new physical results.", "Secondly we have restricted our analysis to an underlying network topology formed by a $3+1$ dimensional lattice but this framework can be applied easily also to lattices of dimension $1+1$ and $2+1$ .", "Thirdly, the choice of a $3+1$ square lattice can be considered as a first approximation of a general network topology valid for almost flat spaces.", "The underlying simple lattice topology allows us to adopt the Dirac operator enriched by the Lie algebra of Pauli matrices for distinguishing between exterior derivatives in different directions as proposed in Ref.[1].", "However on more general topologies 
other Lie algebras should be adopted.", "Finally the approach here developed for the Dirac equation can be potentially extended to treat Majorana fermions [53], [54].", "We hope that this work will stimulate further discussion on these open questions and more in general in the emergent field of gauge theories on networks which is both relevant for different approaches to quantum gravity and for realization of artificial gauge theories in condensed matter with applications in quantum computing." ], [ "Acknowledgments", "G. Bianconi acknowledges interesting discussions with Marcus Reitz, and Shahn Majid and 2021 conversations with Juergen Jost on a previous version of this manuscript." ] ]
2212.05621
[ [ "Exact results for sheared polar active suspensions with variable liquid\n crystalline order" ], [ "Abstract We consider a confined sheared active polar liquid crystal with a uniform orientation and study the effect of variations in the magnitude of polarization.", "Restricting our analysis to one-dimensional geometries, we demonstrate that with asymmetric boundary conditions, this system is characterized, macroscopically, by a linear shear stress vs. shear strain relationship that does not pass through the origin: At a zero strain rate, the fluid sustains a non-zero stress.", "Analytic solutions for the polarization, density, and velocity fields are derived for asymptotically large or small systems and are shown by comparison with precise numerical solutions to be good approximations for finite-size systems." ], [ "Introduction", "Swarms of bacteria [1], [2], [3], [4], mixtures of cytoskeletal filaments and motor proteins [5], [6], [7], and self-propelled colloids [8], [9] are all example of active suspensions [10], [11], [12], [13] consisting of anisotropic self-driven particles dispersed in a passive liquid.", "Due to the orientable nature of their constituents, active suspensions can exhibit long-range orientational order and are often referred to as active liquid crystals (LCs) [10], [11], [12], [14].", "While active LCs can exist in ordered phases typical of liquid crystals [15], they fundamentally differ from their passive counterparts in that each active particle transduces free energy into systematic movement, maintaining the system out of equilibrium.", "As active particles interact with each other and with their surrounding environment, they are able to collectively generate motion and mechanical stresses at scales much larger than their individual size, endowing active materials with unusual mechanical properties.", "An example is the reduction of the apparent viscosity of bacterial suspensions under shear [16], [17], [18], [19], [20], [21].", "Remarkably, upon increasing activity, the apparent viscosity can decrease until a value of zero is achieved, giving rise to superfluid-like behaviour [20], [22].", "In the last decade, rheological measurements [16], [17], [18], [19], [20] have shown qualitative agreement with earlier theoretical predictions [23], [24], [25], [26], [27], [28] for the macroscopic mechanical properties of active suspensions.", "Yet a more thorough understanding of the underlying mechanisms driving these systems requires a more quantitative comparison of theoretical models with experiments [29].", "Such a comparison is becoming possible as more detailed information, such as transient rheological behaviour [20] and velocity profiles [22], becomes accessible experimentally.", "Many active materials, whether biological or synthetic, involve head-tail asymmetric particles and can exist in a polar phase.", "For such system, the broken symmetry variable is the polarization vector which represents the local coarse-grained orientation of the particles.", "When an active polar LC is sheared, distortions in the orientation of the polarization field induces active stresses which in turn generate an extra flow (needed to maintain the stress balance).", "This mechanism allows the apparent viscosity (defined as the macroscopic viscosity at the scale of the system, as would be measured by, e.g., a rheometer) of active LCs to vanish or even become negative [29].", "While previous theoretical analyses [30], [31], [29] have focused on how gradients in the orientation can 
induce the active stresses that lead to unconventional mechanical behaviour, here our focus is on how variations in the magnitude of LC order affect the mechanical properties of active LCs.", "In this paper we show that a gradient in the magnitude of polarization of active LCs induces, even in the absence of variations in orientation, flows that give rise to anomalous mechanics.", "Specifically, we examine the effect of a varying polarization magnitude on a one-dimensional confined active polar LC subjected to shear.", "We derive the analytical relation between the stress and the macroscopic strain rate, which shows in particular that this system experiences a non-zero shear stress at zero shear strain rate (and conversely, a non-zero strain rate is required to maintain a zero stress).", "Despite the nonlinearities in the equations, because of a decoupling of the equations we are also able to obtain exact analytical results for the velocity and density fields as a function of the polarization field, the latter being shown to be well approximated by asymptotic solutions in the limit of large or small systems.", "This is interesting because hydrodynamic equations for active LCs are highly nonlinear, hence analytical solutions beyond linearization approximations are rare even in the simplest geometries.", "As a result studies of sheared active LCs tend to be numerical [32], [33], [30], [34], [35], [31], [29]." ], [ "Model ", "We consider an active LC with the possibility of polar orientational order.", "At the continuum scale, its dynamics are described by a set of long-wavelength, long-time scale equations forming the now well-accepted hydrodynamic theory of active matter [11], [14].", "The relevant hydrodynamic variables are the polarization vector $p$ as well as the conserved fields, here the particle number density $\\rho $ (for simplicity $\\rho $ is normalized by its equilibrium value) and the momentum $\\rho _m u$ where $\\rho _m$ is the fluid mass density and $u$ is the fluid velocity.", "The passive contributions to the equations of motion are customarily described as arising from the nonequilibrium analog of the free energy for a passive polar LC.", "This free energy is given by [30], [11] $F = \\int _{r} (f_n + f_p) \\; \\mathrm {d}r$ $f_n = \\frac{a_2}{2} \\vert p \\vert ^2 + \\frac{a_4}{4} \\vert p \\vert ^4 + \\frac{K}{2} \\vert \\nabla p \\vert ^2 + \\frac{C}{2} (\\rho -1)^2$ $f_p = B_1 (\\rho -1) \\nabla \\cdot p + B_2 \\vert p \\vert ^2 \\nabla \\cdot p + B_3 \\vert p \\vert ^2 p \\cdot \\nabla \\rho .$ The contribution $f_n$ is the free energy density of a nematic LC.", "The first two terms control the isotropic-polar transition: they favor a polar phase ($\\vert p \\vert ^2 = -a_2/a_4$ ) when $a_2<0$ and an isotropic phase ($\\vert p \\vert ^2 = 0$ ) when $a_2>0$ .", "The third term describes the energy cost of deformation ($K$ is the analog of the Frank constant for passive LCs), and the last term penalizes density variations ($C$ is the compression modulus).", "The contribution $f_p$ contains additional terms that break the $p \\rightarrow - p$ symmetry and are allowed in a polar fluid [36].", "Figure: A thin film of active polar LC sheared between two moving no-slip walls.", "The polarization field is uniformly aligned with the walls, and only its magnitude pp is allowed to vary.The polarization and density dynamics are governed by $D_t p = - \\beta _p p \\cdot \\nabla p + \\lambda E \\cdot p - \\Omega \\cdot p - \\Gamma _{pp} h - \\Gamma _{cp} g$ and $D_t \\rho = 
\\nabla \\cdot \\left[ -\\rho \\beta _c p + \\Gamma _{cp} h + \\Gamma _{cc} g \\right],$ where $D_t = \\partial _t + u \\cdot \\nabla $ , $E_{ij}=(\\partial _i u_j + \\partial _j u_i)/2$ , $\\Omega _{ij}=(\\partial _i u_j - \\partial _j u_i)/2$ , $h = \\delta F/\\delta p$ and $g = \\nabla (\\delta F/\\delta \\rho )$ .", "The flow is assumed incompressible ($\\nabla \\cdot u = 0$ ) and the flow field satisfies $\\rho _m (\\partial _t + u_i \\partial _i) u_j = \\partial _i \\sigma _{ij}.$ The stress tensor is given by $\\sigma _{ij} = 2 \\eta E_{ij} + \\sigma _{ij}^r + \\sigma _{ij}^a,$ where the first term is the dissipative contribution ($\\eta $ is the fluid viscosity), $\\sigma _{ij}^r$ is reversible contribution (as in passive LCs), and $\\sigma _{ij}^a$ is the active contribution.", "The reversible stress is given by $\\sigma _{ij}^r = - \\Pi \\delta _{ij} + \\frac{\\lambda }{2} (p_i h_j + p_j h_i ) \\\\ + \\frac{1}{2} (p_i h_j - p_j h_i),$ where $\\Pi $ is the pressure.", "The active stress is $\\sigma _{ij}^a = \\alpha \\rho p_i p_j + \\beta _\\sigma \\rho (\\partial _i p_j + \\partial _j p_i),$ where the lowest order term $\\sim \\alpha $ has nematic symmetry while the higher order term $\\sim \\beta _\\sigma $ is present only in systems with polar symmetry.", "In the above equations, $\\beta _{c,p}$ and $\\beta _\\sigma \\Gamma _{pp}$ have the dimension of a velocity and associated terms arise in polar systems from the self-advection of active elements along $p$ .", "Our geometry is similar to that used in prior work [30], [29] and is depicted in fig:schemeBCPfield: a two-dimensional thin film of active LC of thickness $L$ is sheared between two parallel walls moving in opposite directions with velocity magnitude $V$ .", "We allow gradients only in the direction normal to the walls.", "Due to incompressibility and wall impermeability the flow must be parallel to the wall: $u = (u(z), 0)$ , and we use no-slip boundary conditions: $u(L)=-u(0)=V$ .", "The fluid is therefore subjected to a macroscopic shear strain rate $\\dot{\\gamma } = \\int _0^L \\partial _z u \\, \\mathrm {d}z = 2 V / L$ .", "To pick out the effects of variations in the amount of LC order only, we further assume that the polarization field is parallel to the walls and only its (signed) magnitude is allowed to vary: $p = (p(z), 0)$ .", "Previous work has focused on the role of varying orientation with fixed magnitude, here in contrast we fix the orientation and focus on the role of varying magnitude of $p$ only.", "Our choice of orientation is consistent with expected boundary conditions on the walls.", "A nice feature of this approximation is that it makes the problem tractable analytically due to a decoupling of the equations.", "Such an aligned state is physically relevant to situations where strong parallel anchoring is precribed at the walls, provided that the coupling between the polarization orientation and the local shear is negligible, that is, for systems which satisfy $\\Gamma _{pp} K \\gg U L$ (with $U$ a characteristic velocity scale for the flow).", "The governing equations reduce to $\\partial _t p = - \\Gamma _{pp} ( a_2 + a_4 p^2 ) p + \\Gamma _{pp} K \\partial _z^2 p,$ $\\partial _t \\rho = \\partial _z \\big [ - 2 b_2 p \\partial _z p + (d + b_3 p^2) \\partial _z \\rho \\big ],$ $\\rho _m \\partial _t u = \\partial _z \\sigma ,$ with $\\sigma = \\eta \\partial _z u + (\\beta _\\sigma \\rho + m b_2 p^2) \\partial _z p + \\frac{m}{2} (b_1 - b_3 p^2) p \\partial _z \\rho ,$ where $\\sigma _{zx}$ is 
now simply denoted $\\sigma $ , and where we have introduced $d=\\Gamma _{cc} C - \\Gamma _{cp} B_1$ , $b_{1,2,3} = \\Gamma _{cp} B_{1,2,3}$ , and $m=(1-\\lambda )/\\Gamma _{cp}$ .", "At the boundaries the flux of $\\rho $ across the walls must be zero, and we require that the polarization vectors at the walls are antiparallel: $p(L)=-p(0)=p_{\\mathrm {eq}}=\\sqrt{-a_2/a_4}$ (here we assume $a_2 < 0$ ).", "It is interesting to note that the coupling terms in the governing equations are those which break the $p \\rightarrow - p$ symmetry.", "For a nematic fluid ($b_{1,2,3}=0$ , $\\beta _\\sigma =0$ ), the equation for $\\rho $ reduces to a diffusion equation, and the expression of the stress only contains the viscous term.", "Therefore, in the simple configuration we consider here, a nematic active fluid would behave as an isotropic passive one." ], [ "Results ", "Let us consider a continuous steady solution $p(z)$ for the polarization field.", "Then the steady solution to eq:density1D is $\\rho = \\frac{b_2}{b_3} \\ln \\left( d + b_3 p^2 \\right) + \\rho _0,$ where $d + b_3 p^2 > 0$ is assumed otherwise this would be equivalent to a negative diffusion coefficient in the density equation, meaning that fluctuations around the equilibrium density would not remain small and that stabilizing higher order terms would need to be included in our model and where $\\rho _0$ is a constant determined by the condition $L^{-1} \\int _0^L \\rho \\; \\mathrm {d}z= 1$ .", "At steady state the shear stress is uniform across the gap ($\\partial _z \\sigma = 0$ ).", "One can then integrate eq:sigma1D for a constant (unknown) $\\sigma $ and obtain the velocity field $u = \\frac{\\sigma }{2 \\eta } (2z-L) - \\xi p + \\sqrt{\\frac{d}{b_3}} \\xi \\arctan (\\sqrt{\\frac{b_3}{d}} p) - \\frac{\\beta _\\sigma }{\\eta } p \\rho $ with $\\xi = \\frac{b_2}{\\eta b_3} (m b_1 + m d - 2 \\beta _\\sigma ).$ The (macroscopic) steady-state flow curve $\\sigma =f(\\dot{\\gamma })$ , as would be measured by a rheometer, is then obtained from eq:usolution by satisfying the no-slip boundary conditions at the moving walls.", "One finds $\\sigma = \\eta \\dot{\\gamma } + \\sigma _{0}$ with $\\sigma _{0} = \\frac{2 \\eta }{L} \\Bigg \\lbrace \\xi p_{\\mathrm {eq}} - \\sqrt{\\frac{d}{b_3}} \\xi \\arctan (\\sqrt{\\frac{b_3}{d}} p_{\\mathrm {eq}}) \\\\+ \\frac{\\beta _\\sigma }{\\eta } p_{\\mathrm {eq}} \\left[ \\frac{b_2}{b_3} \\ln \\left( d + b_3 p_{\\mathrm {eq}}^2 \\right) + \\rho _0 \\right] \\Bigg \\rbrace .$ The slope $\\mathrm {d}\\sigma / \\mathrm {d}\\dot{\\gamma }$ is simply the fluid viscosity $\\eta $ , as for an isotropic passive fluid, however the stress at zero strain rate, $\\sigma _0$ , is not zero: the active LC has a yield stress and effectively behaves, from a rheological point of view, in a similar way to a Bingham fluid [38].", "Indeed gradients in $p$ induce reversible and active contributions to the stress which exist independently of the applied strain rate.", "The converse is also true: a non-zero macroscopic strain rate is required in order to maintain a zero stress.", "Moreover $\\sigma _0$ can be of either sign, meaning that the apparent viscosity, defined as $\\sigma /\\dot{\\gamma }$ , can be negative (while passive contributions can only add to $\\sigma $ , active ones $\\sim \\beta _\\sigma $ can either add or subtract to $\\sigma $ ).", "Figure: The effect of system size: (a) polarization magnitude field, (b) density field, (c) velocity field, (d) deviation from asymptotic results in the stress at 
zero strain rate (inset: stress).", "Analytical profiles and stress were obtained using eq:solptanh for L≫1L \\gg 1 and eq:solplinear for L≪1L \\ll 1 (here we set for simplicity -a 2 /K=1-a_2/K=1).", "Parameters are: a 2 =-1a_2=-1, a 4 =1a_4=1, K=1K=1, η=1\\eta =1, b 1,2,3 =0.1b_{1,2,3}=0.1, m=-1m = -1, d=1d=1, β σ =-1\\beta _\\sigma =-1, γ ˙=0\\dot{\\gamma }=0 (arbitrary units).There exists, as far as we know, two limiting cases where an explicit steady analytical solution to eq:polarization1D can be written.", "In the limit $-a_2 L^2 / K \\gg 1$ , a good approximation for $p$ is the solution for an infinite system [39] $p = p_{\\mathrm {eq}} \\tanh \\left[\\sqrt{\\frac{-a_2}{2 K}} (z-z_i) \\right]$ where $z_i$ is the (undetermined) location of the interface (which thickness decreases with increasing $-a_2/K$ ) between two polar phases pointing in opposite directions.", "This profile results in a depletion (or accumulation, depending on the sign of $b_2$ ) of $\\rho $ localized at the interface and in a non-uniform, non-monotonic velocity profile.", "In the opposite limit $-a_2 L^2 / K \\ll 1$ , the solution for the polarization magnitude can be approximated by a linear profile $p = - p_{\\mathrm {eq}} + \\frac{2 p_{\\mathrm {eq}}}{L} z.$ Note however that this latter limit may require that the width $L$ be comparable to the active particle size for which the validity of the hydrodynamic equations are in question and is mostly of interest here as a bound.", "In addition to these limiting cases, eq:polarization1D,eq:density1D,eq:sigma1D were also solved numerically.", "Our algorithm is based on second-order implicit finite difference schemes (Crank-Nicolson scheme for time integration and centered schemes for spatial discretization) with adaptive time-stepping.", "Time-dependent equations were solved to steady-state, starting from a linear profile for $p$ and a uniform $\\rho $ .", "A comparison between the asymptotic solutions and the numerical ones is shown in fig:sizeeffectfields, together with additional results for an intermediate value of $-a_2 L^2 / K$ .", "The estimate of $\\sigma _0$ obtained from asymptotic expressions is remarkably accurate (deviation not greater than 1 %) even for $-a_2 L^2 / K = O(1)$ ." ], [ "Discussion and conclusions ", "We considered the minimal problem of a one-dimensional sheared active LC under confinement, with a uniform orientation of the polarization field, focusing on the effect of varying its signed magnitude $p$ .", "Our analysis thereby complements prior studies of the same system which allowed gradients in the orientation field while keeping the magnitude of liquid crystalline order constant [40], [30], [29].", "As the dynamics of $p$ is not coupled to that of the density or the velocity, the uniform equilibrium solution is always stable and gradients in $p$ must be generated through boundary conditions.", "Here we imposed $p$ to be of equal magnitude and of opposite signs at the walls.", "Such asymmetric polarization at the boundaries could be realized experimentally through manipulation of the surface chemistry or architecture [41], [42], [43], [44], [45].", "The case of variable orientation leads to a rich phenomenology, including a spontaneous transition to a flowing state in the absence of external driving [40], and the existence of non-monotonic stress vs. 
strain rate flow curves [30], [29].", "In contrast, the case of variable polarization studied here does not yield such unusual mechanical properties: the relationship between the stress and the macroscopic strain rate is linear, and, for a nematic active LC, would not differ from that for an isotropic fluid.", "For a polar active LC though, there exist elastic and active contributions to the total stress in addition to the viscous one, and the flow curve does not pass through the origin.", "This indicates that macroscopic stresses are present in the uniformly aligned polar active LC even in the absence of external driving.", "One of the advantages of the simple configuration considered here lies in the fact that analytical solutions can be explicitly obtained.", "We hope that these solutions will provide insight into the role played by gradients of liquid crystalline order, and could be used as a starting point and benchmark reference for numerical work on sheared active polar LCs, where both the magnitude and direction of the LC order parameter vary [32], [33], [34].", "Part of this work was funded by a Leverhulme Trust Research Project Grant RPG-2016-147.", "APT acknowledges a University of Bristol undergraduate summer bursary.", "TBL acknowledges support of BrisSynBio, a BBSRC/EPSRC Advanced Synthetic Biology Research Centre (grant number BB/L01386X/1)." ] ]
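As a numerical cross-check of the closed-form results above, the following minimal Python sketch (ours, not the authors' code) evaluates the asymptotic tanh polarization profile (eq:solptanh), the corresponding density and velocity fields (eq:usolution), and the zero-strain-rate stress $\\sigma _0$ , using the illustrative parameter values quoted in the caption of fig:sizeeffectfields; the gap width L and the interface position $z_i=L/2$ are assumptions made only for this sketch.

import numpy as np

# parameters from the figure caption (arbitrary units); L chosen so that -a2*L^2/K >> 1
a2, a4, K, eta = -1.0, 1.0, 1.0, 1.0
b1 = b2 = b3 = 0.1
m, d, beta_s, gamma_dot = -1.0, 1.0, -1.0, 0.0
L = 40.0
z = np.linspace(0.0, L, 2001)

p_eq = np.sqrt(-a2 / a4)
p = p_eq * np.tanh(np.sqrt(-a2 / (2 * K)) * (z - L / 2))   # tanh interface at z_i = L/2
rho_tilde = (b2 / b3) * np.log(d + b3 * p**2)
rho0 = 1.0 - rho_tilde.mean()                              # enforce mean density 1
rho = rho_tilde + rho0

xi = b2 / (eta * b3) * (m * b1 + m * d - 2 * beta_s)
sigma0 = (2 * eta / L) * (xi * p_eq
                          - np.sqrt(d / b3) * xi * np.arctan(np.sqrt(b3 / d) * p_eq)
                          + (beta_s / eta) * p_eq * ((b2 / b3) * np.log(d + b3 * p_eq**2) + rho0))
sigma = eta * gamma_dot + sigma0
u = (sigma / (2 * eta)) * (2 * z - L) - xi * p \
    + np.sqrt(d / b3) * xi * np.arctan(np.sqrt(b3 / d) * p) - (beta_s / eta) * p * rho

# sanity check: at zero macroscopic strain rate the no-slip walls impose u(0) = u(L) = 0
print(sigma0, u[0], u[-1])

The wall velocities returned by the check are zero up to the exponentially small tail of the tanh profile, consistent with a non-zero stress $\\sigma _0$ being sustained at zero macroscopic strain rate.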
2212.05534
[ [ "A Bit Stream Feature-Based Energy Estimator for HEVC Software Encoding" ], [ "Abstract The total energy consumption of today's video coding systems is globally significant and emphasizes the need for sustainable video coder applications.", "To develop such sustainable video coders, the knowledge of the energy consumption of state-of-the-art video coders is necessary.", "For that purpose, we need a dedicated setup that measures the energy of the encoding and decoding system.", "However, such measurements are costly and laborious.", "To this end, this paper presents an energy estimator that uses a subset of bit stream features to accurately estimate the energy consumption of the HEVC software encoding process.", "The proposed model reaches a mean estimation error of 4.88% when averaged over presets of the x265 encoder implementation.", "The results from this work help to identify properties of encoding energy-saving bit streams and, in turn, are useful for developing new energy-efficient video coding algorithms." ], [ "Introduction", "The dawn of portable devices, unrestricted Internet, and various video-focused social networking services has increased Internet video traffic drastically [1].", "Such a tremendous increase in video data traffic leads to huge storage costs and increased server-side energy consumption for video content creation.", "The total energy consumption of current coding systems is globally significant as online video contributes to 1% of total global $\\mathrm {CO_2}$ emissions [2].", "Furthermore, mobile devices pose limitations in terms of battery power.", "Notably, video encoding has considerable power requirements, which poses a problem to battery-powered devices [3], where the battery drains fast due to increased power requirements.", "Furthermore, the compression methods used for encoding have evolved considerably in recent years and provide a greater number of compression methods.", "However, their processing complexity has also greatly increased [4], leading to a significant increase in the energy demand.", "Therefore, we need powerful and energy-efficient video codecs and energy-aware video-based services in modern video communication applications.", "A limitation in searching for energy-efficient algorithms is that energy measurements are complex and laborious.", "Hence, we need simple energy estimators to overcome the drawback of complex measurements.", "An extensive body of literature on decoding energy modeling, such as in [5] and [6] exists.", "The decoding energy estimation model based on various characteristics, such as decoding time [7], the sequence characteristics such as the number of frames, resolution, and Quantization Parameter (QP) [8], and the number of instructions [9] were introduced.", "Similarly, the decoding energy estimation model based on encoded bit stream features for various decoder implementations estimates the true decoding energy with less than 8% mean estimation error [10], [11].", "Few recent works have explicitly addressed the encoder's processing energy.", "For example, [12] established a relationship between the quantization, spatial information, and coding energy for the intra-only HEVC encoder but did not consider the presets which quantify the encoding complexity and compression performance.", "Furthermore, Mercat et al.", "measured the energy of a software encoder for many different sequences and encoding configurations [13] but did not present any encoding energy estimation model.", "In the end, [14] introduces a 
lightweight encoding time based encoding energy estimator, which uses the ultrafast presets' encoding time to estimate the encoding energy demand of the other presets and achieves a mean estimation error of 11.35% when averaged over all the presets.", "To this end, this work explores the feasibility of estimating the HEVC software encoders' energy demand using the bit stream features used for decoding energy estimation in [10], [11].", "Last but not least, we introduce a subset of bit stream features that could be used to estimate the encoding energy with higher accuracy than the other encoding energy estimators in the literature.", "Some applications of this work are joint modeling of the energy in the encoder-decoder chain using bit stream features and identifying the features and properties that significantly contribute to energy consumption.", "The rest of this paper is structured as follows: Section presents existing encoding energy estimation models.", "Section introduces the bit stream features and their categories and the bit stream feature count.", "Further, Section introduces the proposed model.", "Then Section introduces the energy measurement setup, sequences and encoding configurations used, and the evaluation method and then discusses the results.", "Lastly, Section concludes this work." ], [ "Literature review of encoding energy estimation models", "This section presents existing models for encoding energy estimation.", "First, [12] introduces a quantization parameter (QP) based model for intra-coding, which is a static model to estimate encoding time based on the QP and then uses this model to estimate the encoding energy demand.", "A generalized version of such a model of video sequence classes [15] reads as follows: $t_{\\mathrm {enc}}= \\kappa \\cdot QP^{3}- \\lambda \\cdot QP^{2} - \\mu \\cdot QP+ T_{0},$ where $\\kappa $ , $\\lambda $ , $\\mu $ are the coefficients of the model, QP corresponds to the quantization parameter, $T_{0}$ is an offset.", "$E_{\\mathrm {enc}}=t_{\\mathrm {enc}} \\cdot P_{\\mathrm {avg}},$ where $t_{\\mathrm {enc}}$ is the estimated processing time from (REF ) and $P_{\\mathrm {avg}}$ is the mean processing power of encoder.", "This model is limited to for all-intra encoding and reported relatively accurate estimations for low and high values of QP, i.e., 0–15 and 40–51, but for the mid-range QP values, the errors increase above 15% [12].", "Notably, the QPs at which the model reports a low error are seldom used.", "In addition, this energy model estimates the energy only for classes B, C, and D of JVET common test conditions [15].", "Furthermore, [14] introduces two models based on the encoder processing time.", "The first model estimates the encoder energy demand by exploiting the linear relation between encoding time and encoding energy, such that the energy can be estimated by $ \\hat{E}_{\\mathrm {enc}} = E_{0} + P \\cdot t_{\\mathrm {enc}}, $ where $t_{\\mathrm {enc}}$ is the sequence-dependent encoder processing time.", "The parameter $P$ (slope) can be interpreted as the linear factor representing the mean encoding power and $E_{0}$ as a constant offset.", "The estimates can only be obtained when the encoding process is executed once on the target device because the encoder processing time needs to be measured.", "Hence, this model is adapted to perform a-priori energy estimation, i.e., energy estimation without the need to execute the encoder.", "Thereby, [14] adapts the model (REF ) to estimate the encoding time for each x265 preset 
using the encoding time of the lightweight encoding process i.e., $\\textit {ultrafast}$ preset, which is less costly to obtain than (REF ), $ \\hat{E}_{\\mathrm {enc}} = E_{0}+ P \\cdot t_{\\mathrm {enc,uf}}, $ where $E_{0}$ is the offset energy, $P$ indicates the slope, and $t_{\\mathrm {enc,uf}}$ denotes the encoding time consumed for a given sequence to be encoded at a given CRF using the ultrafast preset.", "However, these models do not reach a mean estimation errors of less than 15%." ], [ "bit stream Features", "A bit stream is a sequence of bits representing the coded video sequences and associated data and describes the execution of a standardized decoding process [16].", "Its syntax is specified in syntax structures, representing the logical entity of the information coded into the bit stream [17].", "The decoding process takes the syntax elements of the bit stream as input and reconstructs the video sequence according to the semantics of the coded syntax elements [17].", "A bit stream feature can be associated with a subprocess in the decoding process[16].", "While decoding a single bit stream, the decoder executes these sub-processes multiple times.", "Each sub-process consumes specific processing energy upon each execution [10].", "Furthermore, [10] defines a set of features for typical sub-processes, meaning that the complete decoding process can be split into sub-processes associated with the bit stream features.", "Another property of these features is that they can be linked to syntax elements defined in the standard [16].", "Eventually, the occurrence and value of these syntax elements and variables can be used to determine how often the bit stream features occur and hence, how often the corresponding sub-processes are executed.", "By analyzing the occurrence and value of HEVC syntax elements or variables, it is possible to determine how often a bit stream feature occurs.", "Hence, for a given HEVC-coded bit stream, it is counted how often a sub-process is executed.", "Therefore, by counting the number of executed sub-processes and determining the mean processing energy, we can estimate the total decoding energy related to that sub-process [10].", "The set of bit stream features as used for decoding energy estimation [10] is divided into five categories.", "The general features $E_0$ and the number of frames (Islice, PBslice) comprise sub-processes associated with global coding processes such as initialization of the decoding process and the slices.", "The former corresponds to the offset energy required for starting and ending the decoding process.", "The latter, the number of frames, represents the processing energy used to initialize a frame.", "The intraframe coding features relate to intra-prediction sub-processes of a block and corresponding flags.", "The interframe coding features describe the inter-prediction subprocess of a block, parsing, and fractional pel filtering, where the number of pels that need to be predicted and filtered is counted, which is counted twice in bi-prediction.", "The residual coding features represent the coefficient parsing of residual coefficients and transformation block size.", "Finally, the in-loop filtering features describe deblocking and sample-adaptive offset (SAO) filters in the in-loop filtering processes." 
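Before turning to the proposed model, the time-based estimators reviewed above can be made concrete with a minimal Python sketch; all coefficient values below are placeholder assumptions, not fitted values from the cited works.

```python
# Minimal sketch (placeholder coefficients) of the literature estimators
# reviewed above: a cubic QP-based encoding-time model, a linear
# time-to-energy mapping, and the a-priori variant that reuses the
# measured encoding time of the ultrafast preset.

def encoding_time_qp(qp, kappa=1e-3, lam=1e-2, mu=0.5, t0=30.0):
    """t_enc = kappa*QP^3 - lambda*QP^2 - mu*QP + T0 (assumed coefficients)."""
    return kappa * qp**3 - lam * qp**2 - mu * qp + t0

def energy_from_time(t_enc, p_avg=20.0):
    """E_enc = t_enc * P_avg, with an assumed mean encoder power in watts."""
    return t_enc * p_avg

def energy_apriori(t_enc_ultrafast, e0=5.0, p=40.0):
    """E_enc = E0 + P * t_enc,uf: estimate a slower preset's energy from the
    ultrafast encoding time (E0 and P would be fitted per preset)."""
    return e0 + p * t_enc_ultrafast

print(energy_from_time(encoding_time_qp(28)))
print(energy_apriori(t_enc_ultrafast=2.5))
```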
], [ "Proposed Encoding Energy Model", "In [10], for any given HEVC-coded bit stream, it is counted how often a sub-process is executed.", "The product of the number of sub-process executions (feature numbers) with its corresponding energy demand yields the required decoding energy related to this sub-process.", "In the end, the sum of all such energies for all bit stream features yields the estimated energy demand of the decoding process [10]: $\\hat{E}_\\mathrm {dec} = \\sum _{\\forall i} n_i \\cdot e_i,$ where the index $i$ denotes the bit stream feature index, $n_i$ number of occurrences of the bit stream feature (feature number), and $e_i$ the corresponding feature-specific decoding energy [10].", "Similar to the bit stream feature based model proposed in [10], we here propose a bit stream feature based model for the HEVC software encoding process.", "For an HEVC-coded bit stream, the encoding energy estimation with the help of the feature numbers and its associated energy can be obtained as follow: $\\hat{E}_\\mathrm {enc} = \\sum _{\\forall i} n_i \\cdot e_i,$ where the index $i$ denotes the bit stream feature index, $n_i$ feature number, and $e_i$ the feature-specific encoding energy when one such feature occurs in decoding bit stream.", "In the case of the feature-based decoding energy estimator, the $n_i$ denotes the number of times a sub-process $i$ is performed, and $e_i$ refers to the energy consumed by the corresponding sub-process $i$ .", "However, for encoding energy estimation using the bit stream features, $n_i$ feature number denotes the number of times a sub-process is performed while decoding, and the $e_i$ refers to the encoding energy consumed by the corresponding sub-process when that sub-process occurs once in decoding.", "The feature number derivation is explained in [11], and [10], where the feature counters implemented at positions corresponding to the standard's subclauses [16] were used.", "Using the syntax elements and variables used in the subclause [16], [10] defines a condition that must be maintained to increment the corresponding feature number and, in the end, implement the counters into any HEVC-conform decoder solution.", "[10] introduces such a counter tool called HM Analyzer [18] based on the HEVC-Test Model (HM) reference software [19].", "The HM Analyzer [18] yields very extensive bit stream features.", "As the complete set of bit stream features is rather elaborate and complex, we propose two models.", "Notably, we use a different set of features than the one used in [10], such that the features considered makes much sense for encoding.", "In addition, we select the features so that the difference between the measured and estimated energy is minimum.", "The first model is the elaborate model (EM), similar to the accurate model in [10] which considers 100 bit stream features.", "The second one is a simple model (SM) that considers a reduced set of 50 features.", "Table REF presents the features that are used in both models along with information on the features label, feature id, depth information, feature ID including depth information, and the tick on the columns SM and EM denotes the features using in SM and EM models respectively.", "Table: Bit stream features used to estimate the energy consumption of a video encoder.The tick in the columns EM and SM shows if the feature is used in the elaborate and simple feature based model, respectively." 
], [ "Encoding Energy Measurement Setup", "In this work, the energy demand of the encoding process is determined by two consecutive measurements, as explained in [11].", "First, the total energy consumed during the encoding process is measured.", "Then, the energy consumed in the idle state over the same duration of encoding is measured later.", "At last, the encoding energy $E_{\\mathrm {enc}}$ is the difference between these two measurements.", "$ E_{\\mathrm {enc}} =\\int _{t=0}^{T} P_{\\mathrm {total}}(t)dt- \\int _{t=0}^{T} P_{\\mathrm {idle}}(t)dt, $ where $T$ is the duration of the encoding process, $P_{\\mathrm {total}}(t)$ is the total power consumed while encoding, $P_{\\mathrm {idle}}(t)$ is the power consumed by the device in idle mode, and t is the time.", "The confidence interval test for $m$ measurements with a standard deviation of $\\sigma $ is defined as follows: $ 2\\cdot \\frac{\\sigma }{\\sqrt{m}}\\cdot t_\\alpha (m-1) < \\beta \\cdot E_{\\mathrm {enc}}, $ where $\\beta $ is the maximum encoding energy deviation, $\\alpha $ is the probability with which (REF ) is fulfilled and $t_\\alpha $ denotes the critical t-value of Students's t-distribution.", "We chose $\\alpha =0.99$ and $\\beta =0.02$ and we repeat the measurements until the condition (REF ) is satisfied.", "Hence, we ensure that $ E_{\\mathrm {enc}}$ has a maximum energy deviation of 2% from the actual energy consumed." ], [ "Evaluation Sequences and Encoding Configurations", "In this work, we perform multi-core encoding with the x265 encoder implementation [20].", "We consider 22 sequences from the JVET common test conditions [15] with various sequence characteristics such as frame rate, resolution, and content.", "Table REF lists the sequences used in this work [15].", "In addition, we encode the first 64 frames of the sequences at different x265 presets, which are $\\textit {ultrafast}$ , $\\textit {superfast}$ , $\\textit {veryfast}$ , $\\textit {faster}$ , $\\textit {fast}$ , $\\textit {medium}$ , $\\textit {slow}$ , $\\textit {slower}$ , $\\textit {veryslow}$ , and various Constant Rate Factor (CRF) values, 18, 23, 28, 33.", "We generated 792 bit streams and further used them to train and validate the model.", "Table: Summary of sequences with its properties" ], [ "Model Evaluation", "We use mean relative estimation error for evaluation.", "By doing so, we get more significant results than using the absolute error as we strive to estimate the encoding energy accurately independent of the absolute energy, which can vary by several orders of magnitude.", "Thus, we show the relative estimation error of the measured encoding energy with respect to the estimated encoding energy for a single bit stream $n$ i.e., each bit stream $n$ corresponds to a single input sequence coded at a specific CRF, and for each preset $X$ as: $ \\epsilon _{X,n}=\\frac{\\hat{E}_{\\mathrm {enc}}- E_{\\mathrm {enc}}}{ E_{\\mathrm {enc}}} \\Bigg \\vert _{X,n} $ where $\\hat{E}_{\\mathrm {enc}}$ is the estimated and $ E_{\\mathrm {enc}}$ the measured encoding energy from (6).", "Then, we calculate the mean estimation error for each preset $X$ over each bit stream $n$ to obtain the overall estimation error for each preset: $ \\overline{\\epsilon }_{X}=\\frac{1}{n}\\sum _{n=1}^{n}\\vert \\epsilon _{X,n}\\vert .", "$ In order to determine the model parameter values for each preset, we perform a least-squares fit using a trust-region-reflective algorithm as presented in [21].", "We initially use the measured energies for a subset of the 
sequences referred to as the training set and their corresponding input variables, which are the bit stream features.", "As a result, we obtain the least-squares optimal parameters for the input training set, where we train the parameters such that the mean relative error is minimized.", "Ultimately, these model parameters are used to validate the model's accuracy on the remaining validation sequences.", "The training and validation data sets are determined using a ten-fold cross-validation as proposed in [22].", "Using this technique, we randomly divide the complete set of measured energies into ten approximately equal-sized subsets.", "Then, for each subset, we use the other nine subsets to train the model, and the trained parameters are then used to validate the remaining subset by calculating the relative estimation error for all the sequences." ], [ "Results and Discussion", "The mean estimation errors of the QP-based model [12], the encoding time-based model [14], and the lightweight encoding time-based model [14], along with those of the bit stream feature-based models EM and SM, are summarized in Table REF .", "The results show that the models EM and SM perform best, with average errors smaller than 10% for most presets.", "From the last two columns of Table REF , we can see that, when all presets are considered, i.e., when training and validation are performed over bit streams of all presets, both the EM and the SM perform better than the other models.", "Notably, when considering each preset separately, the SM with 50 features performs better than the EM.", "The reason is that the EM results in overfitting, as seen in Table REF , suggesting that when the preset is known, the EM is not needed and the SM suffices to obtain accurate estimates.", "However, when considering all presets, overfitting is reduced to a great extent, such that the EM outperforms the SM.", "In conclusion, the EM should be used when the encoding preset is unknown, and the SM can be used when the preset is known.", "For example, Figure REF shows the measured energy and the estimated energy for all presets obtained from the SM for the Class B sequence \"Cactus\".", "The left bars correspond to the measured energy $E_\mathrm {enc}$ , the right bars to the estimated energy $\hat{E}_\mathrm {enc}$ .", "Also, this figure shows the impact of the various x265 presets on the encoding energy.", "Table: Relative mean estimation error for the QP-based model (QP), the encoding time-based model (T), the lightweight (ultrafast) encoding time model (UF), and the proposed bit stream feature-based models EM and SM for all the bit streams.", "The lowest relative mean estimation error across the different criteria is highlighted.", "Figure: The measured energy $E_\mathrm {enc}$ and the estimated energy $\hat{E}_\mathrm {enc}$ from the SM for the presets 0-ultrafast, 1-superfast, 2-veryfast, 3-faster, 4-fast, 5-medium, 6-slow, 7-slower, 8-veryslow for the sequence \"Cactus\" of Class B." 
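The evaluation protocol described above can be sketched as follows; a plain least-squares solver again stands in for the trust-region-reflective fit, and the fold count, random seed, and variable names are assumptions made for illustration.

```python
import numpy as np

# Sketch of ten-fold cross-validation: fit the feature-specific energies on
# nine folds and compute relative estimation errors on the held-out fold.
def cross_validate(N, E, n_folds=10, seed=0):
    """N: (num_bitstreams, num_features) feature counts; E: measured energies."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(E))
    folds = np.array_split(idx, n_folds)
    errors = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        e, *_ = np.linalg.lstsq(N[train], E[train], rcond=None)
        E_hat = N[test] @ e
        errors.extend(np.abs((E_hat - E[test]) / E[test]))  # |(E_hat - E)/E|
    return float(np.mean(errors))  # mean relative estimation error
```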
], [ "Conclusion", "Studying the energy demand of encoders is crucial for energy-efficient algorithms, but the energy measurements are time-consuming.", "Therefore, we need valid and simple energy estimators to overcome the drawback of such laborious measurements.", "With this respect, in this paper, we have shown that a bit stream feature-based approach to modeling the energy consumption of the HEVC software encoding process is highly suitable for estimating the encoding energy accurately.", "Moreover, the proposed model estimates the encoding energy with a mean estimation error of 4.88% (averaged over presets), which outperforms models from the literature.", "The proposed model, coupled with the bit stream feature-based model for decoding, is useful in analyzing the energy efficiency of HEVC-coded bit streams, which is useful for identifying energy-efficient configurations.", "In future work, for an extended set of bit streams, we plan to count the sub-processes in encoding and obtain features-specific energies, address the energy demand of different categories of encoding sub-processes, and obtain the energy distribution of the encoding process." ] ]
2212.05609
[ [ "Generation of nearly pure and highly directional magnetic light in\n fluorescence of rare earth ions" ], [ "Abstract We discover regimes for promoting decay rates of magnetic light due to magnetic dipole (MD) transitions of trivalent rare earth ions located inside or near dielectric homogeneous spheres by as much as three orders of magnitude.", "A number of configurations involving sphere parameters and a rare earth emitter radial position are determined at which the branching ratio of MD transition is approaching its limit value of unity, signifying that transitions from a given parent level (e.g., $^5$D$_0$ level of Eu$^{3+}$) are entirely dominated by the MD transition.", "Dimensionless directivity of MD emission, radiative decay rates, and fluorescence of magnetic light can be enhanced by the the factor exceeding $25$, $10^3$, and $10^4$, respectively." ], [ "$E\\leftrightarrow M$ symmetry in SI units", "For an electric dipole (see eqs 9.4, 9.5 and 9.18 in ref Jackson1999) $\\begin{split}{\\bf H}_{\\rm ed} =& \\frac{ck^2}{4\\pi } ({\\bf n}\\times {\\bf p}) \\, \\frac{e^{ikr}}{r} \\left(1- \\frac{1}{ikr}\\right) \\ , \\\\{\\bf E}_{\\rm ed} =& \\frac{1}{4\\pi \\varepsilon } \\left\\lbrace k^2 ({\\bf n}\\times {\\bf p})\\times {\\bf n}\\, \\frac{e^{ikr}}{r} \\right.", "\\left.+ \\left[3{\\bf n}({\\bf n}\\cdot {\\bf p})-{\\bf p}\\right] \\left(\\frac{1}{r^3}- \\frac{ik}{r^2} \\right) \\right\\rbrace \\frac{e^{ikr}}{r}\\cdot \\end{split}$ Given the Maxwell's equations in material medium (see eqs 7.11 and 7.11' in ref Jackson1999) (cf.", "vacuum case of ref Jackson1999, see eqs 9.4 and 9.5), $\\begin{split}{\\bf H}= & \\frac{1}{\\mu }\\, \\mbox{$\\nabla $}\\times {\\bf A}= - \\frac{i}{Zk}\\,\\mbox{$\\nabla $}\\times {\\bf E}, \\\\{\\bf E}= & \\frac{iZ}{k}\\, \\mbox{$\\nabla $}\\times {\\bf H},\\end{split}$ after the ${\\bf p}\\rightarrow {\\bf m}/c$ substitution, ${\\bf H}_{\\rm md}$ (${\\bf E}_{\\rm md}$ ) for a magnetic dipole source will be ${\\bf E}_{\\rm ed}/Z$ ($-Z {\\bf H}_{\\rm ed}$ ).", "Indeed, for a magnetic dipole (see eq 9.33 in ref Jackson1999) ${\\bf A}({\\bf r}) =\\frac{ik\\mu }{4\\pi }\\,({\\bf n}\\times {\\bf m})\\,\\frac{e^{ikr}}{r}\\,\\left(1- \\frac{1}{ikr}\\right).$ On comparing with the first eqs (SI.REF ), ${\\bf A}$ for the magnetic dipole is thus $(i\\mu /k) {\\bf H}_{\\rm ed}$ .", "On making use of the latter in the first of eqs (SI.REF ), one finds with the help of Maxwell's equations in material medium, $\\begin{split}{\\bf H}_{\\rm md} =& \\frac{i\\mu }{\\mu k}\\, \\mbox{$\\nabla $}\\times {\\bf H}_{\\rm ed}= \\frac{i}{k}\\frac{k}{iZ}\\, {\\bf E}_{\\rm ed} = \\frac{1}{Z}\\, {\\bf E}_{\\rm ed} \\ , \\\\{\\bf E}_{\\rm md} =& \\frac{iZ}{k}\\, \\mbox{$\\nabla $}\\times {\\bf H}_{\\rm md} = \\frac{iZ}{k}\\, \\frac{1}{Z}\\, \\mbox{$\\nabla $}\\times {\\bf E}_{\\rm ed} = \\frac{i}{k}\\, \\mbox{$\\nabla $}\\times {\\bf E}_{\\rm ed} = -Z {\\bf H}_{\\rm ed}.\\end{split}$ The above expressions reflect a duality symmetry of the Maxwell's equations in the dipole case.", "A straightforward consequence is that $({\\bf E}_{\\rm md} \\times {\\bf H}_{\\rm md}) = ({\\bf E}_{\\rm ed} \\times {\\bf H}_{\\rm ed}),$ i.e., formally, neither the magnitude nor the orientation of the Poynting vector is changed (up to the substitution ${\\bf p}\\rightarrow {\\bf m}/c$ ).", "Therefore, the decay rates (normalized to those of a free dipole) will remain the same as when calculated with $\\tilde{{\\bf E}}_{\\rm d}$ and $\\tilde{{\\bf H}}_{\\rm d}$ by simply interchanging the $E$ and $M$ 
mode labels ($E\\leftrightarrow M$ ) in the decay rates formulas of an electric dipole case.", "There is no other way to interpret a MD than as a circulating current.", "Therefore, the $\\rm {ED}\\leftrightarrow \\rm {MD}$ duality is a special duality case not covered by the conventional duality of the vacuum and the source Maxwell's equations, the latter assuming the presence of magnetic monopoles." ], [ "Radiative and nonradiative decay rates of electric and magnetic dipoles in a presence of a homogeneous sphere", "The rates for electric and magnetic dipole emitters located inside or outside a sphere at $r_d$ distance from a center of a sphere (normalized with respect to $\\Gamma _{\\rm rad;ed(md);0}$ , the intrinsic radiative decay rate in the absence of a sphere) are obtained as: $\\begin{split}\\tilde{\\Gamma }^{\\perp }_{\\rm rad;ed} =\\dfrac{\\Gamma ^{\\perp }_{\\rm rad;ed}}{\\Gamma _{\\rm rad;ed;0}}& = \\dfrac{3}{2 x_d^4} {\\cal N}_{\\rm rad} \\sum _{l=1}^{\\infty }l(l+1)(2l+1) \\left| {\\cal F}_{El}(x_d) \\right|^2 \\ , \\\\\\tilde{\\Gamma }^{\\parallel }_{\\rm rad;ed} =\\dfrac{\\Gamma ^{\\parallel }_{\\rm rad;ed}}{\\Gamma _{\\rm rad;ed;0}}& = \\dfrac{3}{4 x_d^2} {\\cal N}_{\\rm rad} \\sum _{l=1}^{\\infty }(2l+1) \\left[ \\left| {\\cal F}_{Ml}(x_d) \\right|^2 + \\left| {\\cal F}^{\\;\\prime }_{El}(x_d) \\right|^2 \\right] \\ , \\\\\\tilde{\\Gamma }^{\\perp }_{\\rm nrad;ed} =\\dfrac{\\Gamma ^{\\perp }_{\\rm nrad;ed}}{\\Gamma _{\\rm rad;ed;0}}& = \\mbox{Im} (\\varepsilon )\\,\\dfrac{3k^3_d}{2x_d^4} {\\cal N}_{\\rm nrad} \\sum _{l=1}^{\\infty } l(l+1)(2l+1) I_{El} \\left| \\zeta _l \\left(x_d\\right) \\right|^2 \\ , \\\\\\tilde{\\Gamma }^{\\parallel }_{\\rm nrad;ed} =\\dfrac{\\Gamma ^{\\parallel }_{\\rm nrad;ed}}{\\Gamma _{\\rm rad;ed;0}}& = \\mbox{Im} (\\varepsilon )\\, \\dfrac{3k^3_d}{4x_d^2} {\\cal N}_{\\rm nrad} \\sum _{l=1}^{\\infty } (2l+1) \\left[ I_{Ml} \\left| \\zeta _l \\left(x_d\\right) \\right|^2 + I_{El} \\left| \\zeta _l \\left(x_d\\right) \\right|^2 \\right] \\ ,\\end{split}$ $\\begin{split}\\tilde{\\Gamma }^{\\perp }_{\\rm rad;md} =\\dfrac{\\Gamma ^{\\perp }_{\\rm rad;md}}{\\Gamma _{\\rm rad;md;0}}& = \\dfrac{3}{2 x_d^4} {\\cal N}_{\\rm rad} \\sum _{l=1}^{\\infty }l(l+1)(2l+1) \\left| {\\cal F}_{Ml}(x_d) \\right|^2 \\ , \\\\\\tilde{\\Gamma }^{\\parallel }_{\\rm rad;md} =\\dfrac{\\Gamma ^{\\parallel }_{\\rm rad;md}}{\\Gamma _{\\rm rad;md;0}}& = \\dfrac{3}{4 x_d^2} {\\cal N}_{\\rm rad} \\sum _{l=1}^{\\infty }(2l+1) \\left[ \\left| {\\cal F}_{El}(x_d) \\right|^2 + \\left| {\\cal F}^{\\;\\prime }_{Ml}(x_d) \\right|^2 \\right] \\ , \\\\\\tilde{\\Gamma }^{\\perp }_{\\rm nrad;md} =\\dfrac{\\Gamma ^{\\perp }_{\\rm nrad;md}}{\\Gamma _{\\rm rad;md;0}}& = \\mbox{Im} (\\varepsilon )\\, \\dfrac{3k^3_d}{2x_d^4} {\\cal N}_{\\rm nrad} \\sum _{l=1}^{\\infty } l(l+1)(2l+1) I_{Ml} \\left| \\zeta _l \\left(x_d\\right) \\right|^2 \\ , \\\\\\tilde{\\Gamma }^{\\parallel }_{\\rm nrad;md} =\\dfrac{\\Gamma ^{\\parallel }_{\\rm nrad;md}}{\\Gamma _{\\rm rad;md;0}}& = \\mbox{Im} (\\varepsilon )\\, \\dfrac{3k^3_d}{4x_d^2} {\\cal N}_{\\rm nrad} \\sum _{l=1}^{\\infty } (2l+1) \\left[ I_{El} \\left| \\zeta _l \\left(x_d\\right) \\right|^2 + I_{Ml} \\left| \\zeta _l \\left(x_d\\right) \\right|^2 \\right] \\ ,\\end{split}$ where $x_d = k_d r_d $ and $k_d = 2\\pi n_d/\\lambda $ .", "The coefficients ${\\cal N}_{\\rm rad}$ and ${\\cal N}_{\\rm nrad}$ depend on whether the decay rates were normalized with respect to the radiative decay rates in infinite homogeneous medium having the refractive index of 
(i) the host or (ii) the medium where the dipole is located, $\\varepsilon _d$ , whether it is the host or the sphere: $\\begin{split}{\\cal N}^{\\rm host}_{\\rm rad} & = \\dfrac{n_d^3}{\\varepsilon _d} \\dfrac{\\varepsilon _h}{n^3_h} \\ , \\quad {\\cal N}^{\\rm dip}_{\\rm rad} = \\left(\\dfrac{n_d}{n_h}\\right)^6 \\left( \\dfrac{\\varepsilon _h}{\\varepsilon _d} \\right)^2 \\ , \\\\{\\cal N}^{\\rm host}_{\\rm nrad} & = \\dfrac{n_d^3}{n^3_h} \\dfrac{\\varepsilon _h}{\\varepsilon _d^2} \\ , \\quad {\\cal N}^{\\rm dip}_{\\rm nrad} = \\dfrac{1}{\\varepsilon _d} \\cdot \\end{split}$ The functions ${\\cal F}_{pl}(x_d)$ and ${\\cal D}_{pl;a}(x_d)$ in eqs (SI.REF ) and (SI.REF ) depend on the relative position of the emitter with respect to the sphere.", "In terms of the Riccati-Bessel functions $\\psi _l$ and $\\zeta _l$ , ${\\cal F}_{pl}(x_d) ={\\left\\lbrace \\begin{array}{ll}\\dfrac{\\psi _l(x_d)}{T_{21;pl}^-} \\ , & {\\rm inside} \\ , \\\\[10pt]\\psi _l (x_d)+\\dfrac{T_{21;pl}^+}{T_{11;pl}^+} \\, \\zeta _l(x_d),& {\\rm outside}.\\end{array}\\right.", "}$ The respective radial integrals $I_{El}$ and $I_{Ml}$ in eqs (SI.REF ) and (SI.REF ) are $\\begin{split}I_{Ml} = & \\dfrac{1}{|k|^2} \\int \\left| {\\cal A}_{Ml} \\psi _l(k r)\\right|^2 {\\rm d}r \\ , \\\\I_{El} = & \\dfrac{l(l+1)}{|k|^4} \\int \\left| {\\cal A}_{El} \\psi _l(k r)\\right|^2 \\dfrac{{\\rm d}r}{r^2}+\\dfrac{1}{|k|^2} \\int _a \\left| {\\cal A}_{El} \\psi ^{\\prime }_{l}(k r)\\right|^2 {\\rm d}r \\ .\\end{split}$ Here $k = 2\\pi n/\\lambda $ is the wave vector in the sphere medium, and coefficients ${\\cal A}_{pl}$ are ${\\cal A}_{pl} =T_{11;pl}^- + T_{12;pl}^- \\dfrac{T_{21;pl}^+}{T_{11;pl}^+} \\cdot $ Finally, the respective backward and forward transfer matrices for the electric and magnetic modes: $T^-_{Ml} =- i \\begin{pmatrix}\\tilde{n} \\zeta _l^{\\prime }(x) \\psi _l(\\tilde{x}) - \\tilde{\\mu } \\zeta _l(x)\\psi _l^{\\prime }(\\tilde{x})& \\tilde{n} \\zeta _l^{\\prime }(x) \\zeta _l(\\tilde{x}) - \\tilde{\\mu } \\zeta _l(x) \\zeta _l^{\\prime }(\\tilde{x}) \\\\- \\tilde{n} \\psi _l^{\\prime }(x)\\psi _l(\\tilde{x}) + \\tilde{\\mu } \\psi _l(x)\\psi _l^{\\prime }(\\tilde{x})& - \\tilde{n} \\psi _l^{\\prime }(x) \\zeta _l(\\tilde{x}) + \\tilde{\\mu } \\psi _l(x)\\zeta _l^{\\prime }(\\tilde{x})\\end{pmatrix} \\ ,$ $T^-_{El} =- i \\begin{pmatrix}\\tilde{\\mu } \\zeta _l^{\\prime }(x)\\psi _l(\\tilde{x}) - \\tilde{n} \\zeta _l(x)\\psi _l^{\\prime }(\\tilde{x})& \\tilde{\\mu } \\zeta _l^{\\prime }(x) \\zeta _l(\\tilde{x}) - \\tilde{n} \\zeta _l(x) \\zeta _l^{\\prime }(\\tilde{x}) \\\\- \\tilde{\\mu } \\psi _l^{\\prime }(x)\\psi _l(\\tilde{x}) + \\tilde{n} \\psi _l(x)\\psi _l^{\\prime }(\\tilde{x})& - \\tilde{\\mu } \\psi _l^{\\prime }(x) \\zeta _l(\\tilde{x}) + \\tilde{n} \\psi _l(x)\\zeta _l^{\\prime }(\\tilde{x})\\end{pmatrix} \\ ,$ $T^+_{Ml} =- i \\begin{pmatrix}\\zeta _l^{\\prime }(\\tilde{x}) \\psi _l(x)/\\tilde{n} - \\zeta _l(\\tilde{x})\\psi _l^{\\prime }(x)/\\tilde{\\mu }& \\zeta _l^{\\prime }(\\tilde{x}) \\zeta _l(x)/\\tilde{n} - \\zeta _l(\\tilde{x})\\zeta _l^{\\prime }(x)/\\tilde{\\mu } \\\\- \\psi _l^{\\prime }(\\tilde{x})\\psi _l(x)/\\tilde{n} + \\psi _l(\\tilde{x})\\psi _l^{\\prime }(x) /\\tilde{\\mu }& - \\psi _l^{\\prime }(\\tilde{x}) \\zeta _l(x)/\\tilde{n} + \\psi _l(\\tilde{x})\\zeta _l^{\\prime }(x)/\\tilde{\\mu }\\end{pmatrix} \\ ,$ $T^+_{El} =- i \\begin{pmatrix}\\zeta _l^{\\prime }(\\tilde{x})\\psi _l(x)/\\tilde{\\mu } - \\zeta _l(\\tilde{x})\\psi _l^{\\prime }(x) /\\tilde{n}& \\zeta _l^{\\prime 
}(\\tilde{x}) \\zeta _l(x)/\\tilde{\\mu } - \\zeta _l(\\tilde{x}) \\zeta _l^{\\prime }(x) /\\tilde{n} \\\\- \\psi _l^{\\prime }(\\tilde{x})\\psi _l(x)/\\tilde{\\mu } + \\psi _l(\\tilde{x})\\psi _l^{\\prime }(x) /\\tilde{n}& - \\psi _l^{\\prime }(\\tilde{x}) \\zeta _l(x)/\\tilde{\\mu } + \\psi _l(\\tilde{x})\\zeta _l^{\\prime }(x) /\\tilde{n}\\end{pmatrix} \\ .$" ], [ "Fluorescence quantum efficiency in the presence of nonradiative losses", "Eq 7 for the fluorescence quantum efficiency of the main text, $q_j = \\frac{\\Gamma _{{\\rm rad};j}}{\\Gamma _{\\rm tot}},\\nonumber $ contains absolute decay rates.", "Provided there is some nonradiative decay rate $\\Gamma _{{\\rm nrad}}$ involved, $\\Gamma _{\\rm tot}$ in (SI.REF ) is $\\Gamma _{\\rm tot}=\\Gamma _{{\\rm nrad}} + \\sum _k \\Gamma _{{\\rm rad};k}\\cdot $ In order to make use of eqs (SI.REF ) and (SI.REF ), eq 7 has to be recast in a more convenient form.", "Divide both the numerator and denominator of eq 7 with some arbitrary predetermined radiative rate $\\Gamma _R$ , $q_j = \\frac{\\Gamma _{{\\rm rad};j}/\\Gamma _R}{\\Gamma _{\\rm tot}/\\Gamma _R}\\cdot $ With the knowledge of that the time-averaged total radiated power of a free dipole of dipole moment ${\\bf p}$ is [1] $P_{tot}=\\frac{ck_h^4 |{\\bf p}|^2 }{3 \\epsilon _h n_h},$ $\\Gamma _R$ can be taken as the radiative decay rate of a free dipole of a unit dipole moment, $\\Gamma _R=\\frac{P_{tot}}{\\hbar \\omega }=\\frac{k_h^3}{3\\hbar \\epsilon _h},$ where we have used that $k_h/\\omega =n_h/c$ .", "In the expressions above, $k_h=2\\pi /\\lambda _h$ , $\\epsilon _h$ , and $n_h$ are the wave vector, the dielectric constant, and refractive index in the host medium.", "In principle, one can use for $\\Gamma _R$ any fixed decay rate.", "With single radiative channel, one can choose $\\Gamma _R=\\Gamma _{{\\rm rad}}^0$ , whereby one recovers the usual expressions.", "Obviously, one can recast the ratio $\\Gamma _{\\rm tot}/\\Gamma _R$ as $\\Gamma _{\\rm tot}/\\Gamma _R=\\Gamma _{{\\rm nrad}}/\\Gamma _R +\\sum _k (\\Gamma _{{\\rm rad};k}/\\Gamma _{{\\rm rad};k}^0)\\cdot (\\Gamma _{{\\rm rad};k}^0/\\Gamma _R).$ The first parenthesis in the sum is provided by Stratify.", "The second parenthesis in the sum can be determined from Table 2.", "With the numerator of (SI.REF ) recast analogously as $(\\Gamma _{{\\rm rad};j}/\\Gamma _{{\\rm rad};j}^0)\\cdot (\\Gamma _{{\\rm rad};j}^0/\\Gamma _R)$ , such a modified (SI.REF ) has been used in our calculations." ], [ "Coupling constant for multiple Eu$^{3+}$ emitters", "Coupling constant, $C_{\\text{E-E}}$ , entering the expression for nonradiative decay rates due to concentration quenching, ${\\Gamma }_{\\rm nrad} = 8\\pi C_{\\rm Eu-Eu}[{\\rm Eu}^{3+}][Q] \\ ,$ can be estimated from the experimental measurements for $^5$ D$_0$ level lifetime as a function of Eu$^{3+}$ doping concentration (see Figure 4 in ref Dordevic2013).", "Excited state lifetime is known to be inversely proportional to the total decay rate: $\\tau \\propto \\frac{1}{\\Gamma _{\\rm tot}} \\propto \\frac{1}{\\Gamma _R + \\Gamma _{\\rm nrad} } \\cdot $ With known $\\Gamma _R = 891{\\rm s}^{-1}$  [3] and fixed quencher concentration [$Q$ ], one can approximate experimental data for $\\tau $ with a linear fit.", "For the data presented in ref Dordevic2013, we have found $ 8\\pi C_{\\rm Eu-Eu}[Q]\\approx 250\\frac{1}{{\\rm at.", "}\\% \\: {\\rm s}}$ , which is the value used in our simulations." 
], [ "Directivity", "By definition, the directivity $\\mathcal {D}_{\\rm md}(\\theta _0)$ is a relation of the power emitted into a certain direction to the solid angle averaged emitted power.", "In case of a dipole near a spherical particle, when both the dipole and the sphere center are located on axis $z$ , and when $\\theta _0=0$ , a general formula reduces to $\\mathcal {D}_{\\rm md} = \\frac{\\left| \\sum _{l} (-i)^l \\sqrt{(2l+1)} \\left( a^d_{Ml} + a^d_{El} \\right) \\right|^2}{\\sum _{l}\\left(|a^d_{Ml}|^2 + |a^d_{El}|^2\\right)} \\cdot $" ] ]
2212.05552
[ [ "Efficient algorithms to solve atom reconfiguration problems. II. The\n assignment-rerouting-ordering (aro) algorithm" ], [ "Abstract Programmable arrays of optical traps enable the assembly of configurations of single atoms to perform controlled experiments on quantum many-body systems.", "Finding the sequence of control operations to transform an arbitrary configuration of atoms into a predetermined one requires solving an atom reconfiguration problem quickly and efficiently.", "A typical approach to solve atom reconfiguration problems is to use an assignment algorithm to determine which atoms to move to which traps.", "This approach results in control protocols that exactly minimize the number of displacement operations; however, this approach does not optimize for the number of displaced atoms nor the number of times each atom is displaced, resulting in unnecessary control operations that increase the execution time and failure rate of the control protocol.", "In this work, we propose the assignment-rerouting-ordering (aro) algorithm to improve the performance of assignment-based algorithms in solving atom reconfiguration problems.", "The aro algorithm uses an assignment subroutine to minimize the total distance traveled by all atoms, a rerouting subroutine to reduce the number of displaced atoms, and an ordering subroutine to guarantee that each atom is displaced at most once.", "The ordering subroutine relies on the existence of a partial ordering of moves that can be obtained using a polynomial-time algorithm that we introduce within the formal framework of graph theory.", "We numerically quantify the performance of the aro algorithm in the presence and in the absence of loss, and show that it outperforms the exact, approximation, and heuristic algorithms that we use as benchmarks.", "Our results are useful for assembling large configurations of atoms with high success probability and fast preparation time, as well as for designing and benchmarking novel atom reconfiguration algorithms." 
], [ "Introduction", "Programmable arrays of optical traps [1], [2], [3], [4] have recently emerged as effective tools for assembling configurations of single atoms and molecules with arbitrary spatial geometries [5], [6], [7], [8], [9], [10], [11].", "Supplemented with strong and tunable interactions like Rydberg-Rydberg interactions [12], [13], these configurations realize large, coherent quantum many-body systems that act as versatile testbeds for quantum science and technology [14], [15], [16], [17].", "An ongoing challenge is to assemble configurations of thousands of atoms with high success probability and fast preparation time.", "Addressing this challenge requires the design and implementation of improved algorithms to solve atom reconfiguration problems [18], [19], [20], which are hard combinatorial optimization problems that seek a sequence of control operations to prepare a given configuration of atoms from an arbitrary one.", "In the absence of atom loss, finding a control protocol that exactly minimizes the total number of displacement operations, without concern for any other performance metrics, can be done in polynomial time using assignment algorithms, such as those based on the Hungarian algorithm [21], [22], [23].", "Relying on assignment algorithms to solve atom reconfiguration problems [18], [19], [20], however, suffers from two drawbacks.", "The first drawback is that these assignment-based algorithms do not optimize for the number of displaced atoms; actually, minimizing the total number of displaced atoms is an NP-complete problem (even on simple geometries such as grids) for which an approximate solution can be obtained in polynomial time using the Steiner tree 3-approximation algorithm (3-approx) [24].", "If single atoms are displaced sequentially, then increasing the number of displaced atoms increases the number of transfer operations required to extract and implant the atoms from and into the array of optical traps, and thus increases the probability of losing them.", "For example, an algorithm might choose to displace $N$ atoms once instead of one atom $N$ times, which, although equivalent in terms of the total number of displacement operations, results in greater uncertainty about which atoms will be lost, complicating the problem of efficiently allocating surplus atoms to replace lost ones.", "The second drawback is that the moves are executed in an arbitrary order, without taking into account the possibility of early moves obstructing later moves.", "In this case, the same atom might be displaced multiple times, further increasing the probability of losing it.", "Figure: The assignment-rerouting-ordering (aro) algorithm.The atom reconfiguration problem consists of finding a sequence of moves to transform an arbitrary configuration of atoms (black dots) contained in a static array of optical traps (black circles) into a target configuration of atoms (shaded green disks).", "First, the aro algorithm uses the assignment subroutine to find the sequence of moves that minimizes the total distance traveled by all atoms, or, equivalently, the total number of displacement operations.", "Second, the rerouting subroutine seeks to update each move to reduce the number of atoms displaced without increasing the total displacement distance, here choosing the path P 1 ' P_1^{\\prime } over the path P 1 P_1.", "Third, the ordering subroutine finds a sequence of moves that prevents an atom from obstructing the path of another atom, here choosing to execute the move 
associated with P 2 P_2 before executing the move associated with P 3 P_3.In this paper, we propose the assignment-rerouting-ordering (aro) algorithm to overcome the aforementioned drawbacks of typical assignment-based reconfiguration algorithms.", "The aro algorithm uses an assignment algorithm to determine which atoms to move to which traps and then updates the resulting sequence of moves using a rerouting subroutine and an ordering subroutine.", "The rerouting subroutine attempts to reroute the path of each move to reduce the number of displaced atoms, whereas the ordering subroutine orders the sequence of moves to guarantee that each atom is displaced at most once (Fig.", "REF ).", "Both subroutines might thus modify the sequence of moves without increasing the total number of displacement operations.", "Hence, besides possibly achieving a reduction in the number of transfer operations, the aro algorithm exactly minimizes both the number of displacement operations and the number of transfer operations per displaced atom.", "The resulting reduction in the total number of control operations results in the aro algorithm outperforming both a typical assignment-based algorithm and the 3-approx algorithm, as well as our recently introduced redistribution-reconfiguration (red-rec) algorithm [20], at solving atom reconfiguration problems both in the absence and in the presence of loss.", "More generally, we are motivated by the goal of improving the performance of atom reconfiguration algorithms “from the ground up” by building upon exact and approximation algorithms for which provable analytical guarantees exist, e.g., within the framework of combinatorial optimization and graph theory.", "This approach differs from operationally-driven approaches that build upon intuition and operational constraints to formulate heuristic algorithms [9], [10], [18], [25], [26], [27], [28], [29], [20].", "Exact and approximation algorithms can be used to improve operational performance as standalone algorithms or as subroutines within other heuristic algorithms, and they can also be used to provide near-optimal performance bounds to benchmark new algorithms and identify ways to further improve them.", "Moreover, these algorithms and the formal results that underpin them can support applications in other areas, such as robot motion planning [30], [31], [32], [33].", "For the purpose of this paper, we refer to “assignment algorithms” as algorithms that solve assignment problems and “assignment-based reconfiguration algorithms” or “assignment-based algorithms” as reconfiguration algorithms that solve atom reconfiguration problems using an assignment algorithm as a subroutine.", "When there is no risk of confusion, the term “assignment” or “assignment algorithm” might be used to denote a typical assignment-based reconfiguration algorithm that does not exploit the rerouting or the ordering subroutines.", "The rest of the paper is organized as follows.", "After reviewing atom reconfiguration problems (Sec.", "), we describe our baseline assignment-based reconfiguration algorithm (Sec. )", "and introduce the aro algorithm (Sec.", "), describing in detail the rerouting subroutine (Sec.", "REF ) and the ordering subroutine (Sec.", "REF ).", "We then numerically benchmark the performance of the aro algorithm against exact, approximation, and heuristic algorithms, first, in the absence of loss (Sec.", "), and then, in the presence of loss (Sec. 
).", "We provide supporting proofs and technical details about the various subroutines, including running time analysis and proofs of correctness, in the appendices (App.", "– )." ], [ "Atom reconfiguration problems", "An atom reconfiguration problem [18], [19], [20] seeks a control protocol, $\\mathcal {P}:\\mathcal {C}_0\\mapsto \\mathcal {C_T}$ , to transform an arbitrary initial configuration of $N_a^0$ atoms, $\\mathcal {C}_0$ , into a given target configuration of $N_a^T$ atoms, $\\mathcal {C}_T$ .", "A configuration of atoms is contained in an array of optical traps, $\\mathcal {A}(V)$ , defined by its spatial arrangement or geometry, $V = \\lbrace \\vec{v}_j~|~\\vec{v}_j = (v_{j_x}, v_{j_y}) \\in \\mathbb {R}^2, 1 \\le j \\le N_t\\rbrace $ , where $N_t$ is the number of traps in the optical trap array.", "We later choose the geometry to be a square lattice of $N_t=N_t^x\\times N_t^y$ traps in the plane (a grid) where $v_{j_x}=x_0+j_x\\delta x$ , $v_{j_y}=y_0+j_y\\delta y$ for $1\\le j_x, j_y \\le N_t^\\mu $ , $(x_0, y_0)$ is the origin of the array, and $\\delta x,~\\delta y$ are the lattice spacing constants.", "The control protocol is composed of a sequence of extraction-displacement-implantation (EDI) cycles that extract, displace, and implant a single atom or multiple atoms simultaneously from one static trap to another using a secondary array of dynamic traps.", "These EDI cycles are composed of a sequence of elementary control operations that include elementary transfer operations, $T_\\alpha ^{\\pm }$ , which extract (implant) an atom from (into) a static trap into (from) a dynamic trap, and elementary displacement operations, $T_{{\\nu }_\\mu }^\\pm $ , which displace a dynamic trap containing an atom from one static trap to another by an elementary displacement step $\\delta x$ or $\\delta y$ .", "These elementary control operations are supplemented by no-op operations, $T_{\\alpha ,\\nu }^{0}$ , to account for the probability of losing atoms in idle traps while control operations are performed on other traps.", "In an operational setting, the initial configuration of atoms is obtained by randomly loading a single atom into every trap of the trap array with a probability given by the loading efficiency $\\epsilon $ .", "Given the initial and target configurations of atoms, the reconfiguration problem is then solved, the control protocol is executed, and a measurement is performed to check whether the updated configuration of atoms contains the target configuration or not.", "In the presence of loss, the atom reconfiguration problem might have to be solved multiple times through multiple reconfiguration cycles until the target configuration is reached (success) or is no longer reachable (failure).", "The same algorithm is used independently of the initial configuration, that is, we do not consider adaptive algorithms that are updated based on the measured configuration, nor do we consider protocols that rely on mid-cycle measurements." 
], [ "Atom reconfiguration problems on graphs", "Atom reconfiguration problems can be viewed as reconfiguration problems on graphs [31], [34], [35], [24], [36], [19].", "A configuration of indistinguishable atoms trapped in an array of optical traps is represented as a collection of tokens placed on a subset of the vertices of a graph, $G=(V,E)$ , where $V(G)$ and $E(G)$ are the vertex set and edge set of $G$ , respectively, with $|V(G)| = n$ and $|E(G)| = m$ .", "We assume that each graph is finite, simple, connected, undirected, and edge-weighted (see Diestel's textbook [37] for standard graph terminology).", "We use $w: E(G) \\rightarrow \\mathbb {N}^{+}$ to denote the edge-weight function which entails that $w(e)$ is positive for all $e = \\lbrace u,v\\rbrace \\in E(G)$ .", "Although most of our results are valid for arbitrary (edge-weighted) graphs, we focus on (unweighted) grid graphs.", "Specifically, we focus on the $(p \\times q)$ -grid graph, which is a graph of $pq$ vertices with vertex set $\\lbrace (x, y) \\mid x \\in \\lbrace 0, 1, \\cdots , p - 1\\rbrace , y \\in \\lbrace 0, 1, \\cdots , q - 1 \\rbrace \\rbrace $ for $p,q\\in \\mathbb {N}^{+}$ .", "We denote the width $p$ of a grid graph $G$ by $W_G$ , and its height $q$ by $H_G$ (we drop the subscript $G$ when the graph we are referring to is clear from the context).", "Two vertices $v=(x, y)$ and $v^{\\prime }=(x^{\\prime }, y^{\\prime })$ ($v \\ne v^{\\prime }$ ) are adjacent, and thus connected by an edge, if and only if $|x - x^{\\prime }| + |y - y^{\\prime }| \\le 1$ .", "Note that $m = \\mathcal {O}(n)$ whenever $G$ is a planar graph, as is the case for grid graphs.", "In addition to the graph $G$ , the atom reconfiguration problem requires a definition of the initial (source) and desired (target) configurations of atoms.", "The traps containing the atoms in the source and target configurations are identified as subsets of vertices $S \\subseteq V(G)$ and $T \\subseteq V(G)$ , respectively.", "We assume that $|S| \\ge |T|$ (note that $S$ and $T$ need not be disjoint), otherwise, the problem does not have a solution.", "Each vertex in $S$ has a token on it and the problem is to move the tokens on $S^\\star \\subseteq S$ such that all vertices of $T$ eventually contain tokens.", "Here, a move of token $t$ from vertex $u$ to vertex $v$ , which is equivalent to a sequence of elementary displacement operations, is allowed or unobstructed whenever $t$ is on $u$ and there exists a path $P$ (defined below) from $u$ to $v$ in $G$ that is free of tokens (except for $t$ ); otherwise, we say that the move is obstructed and call any token $t^{\\prime } \\ne t$ on $P$ an obstructing token.", "If we attempt to move a token along a path that is not free of tokens, then we say that this move causes a collision; because a collision induces the loss of the colliding atoms, moves that cause collisions are replaced by sequences of moves that do not cause collisions.", "Indeed, if the move of token $t$ from $u$ to $v$ is obstructed, then, assuming $v$ is free of tokens, we can always reduce this move to a sequence of unobstructed moves by replacing the move by a sequence of moves involving the obstructing tokens, i.e., solving the obstruction problem (which we solve using a slightly different procedure described in Sec.", "REF ).", "A solution to an atom reconfiguration problem is thus a sequence of unobstructed moves, each of which displaces a token from a vertex with a token to a vertex without a token along a path that is free of 
obstructing tokens." ], [ "Path systems as solutions to atom reconfiguration problems on graphs", "Our reconfiguration algorithms proceed by constructing a valid path system, which can be understood at a high level as a collection of moves, and, after potentially updating the path system, finding the (ordered) sequence of unobstructed moves to execute along every path in the path system.", "We define a path in a graph $G$ as a walk whose sequence of vertices comprises distinct vertices.", "We define a walk (of length $\\ell $ ) in $G$ as a sequence of vertices in $V(G)$ , $(v_0, \\ldots , v_\\ell )$ , such that $\\lbrace v_i,v_{i + 1}\\rbrace \\in E(G)$ for all $i \\in \\lbrace 0, \\ldots , \\ell -1\\rbrace $ , where $\\lbrace v_1, v_2, \\ldots , v_{\\ell -1}\\rbrace $ are the internal vertices of the walk.", "We define a cycle in $G$ as a walk of length $\\ell \\ge 3$ that starts and ends on the same vertex, $v_0 = v_\\ell $ , and whose internal vertices form a path.", "The weight of a path is given by the distance between its first and last vertex, $d_G(v_0,v_\\ell )$ , where the distance between $u$ and $v$ in $G$ is the weight of a shortest path between $u$ and $v$ , computed as the sum of the weights of the edges connecting the vertices of a shortest path $P$ , $w(P) = \\sum _{e \\in P}{w(e)}$ .", "When the graph is unweighted, or each of its edges has a weight of one, in which case the graph is said to be uniformly-weighted, then the distance between $u$ and $v$ corresponds to the minimum number of edges required to get from $u$ to $v$ in $G$ .", "A path system $\\mathcal {P}$ in $G$ is a collection of paths, $\\mathcal {P} = \\lbrace P_1, P_2, \\ldots , P_k\\rbrace $ , in which each path $P_i \\in \\mathcal {P}$ for $i \\in \\mathbb {N}^{+}([1,k])$ is a path from $v_{s_i}$ (source vertex) to $v_{t_i}$ (target vertex), which we denote by $\\lbrace v_{s_i}, v_1, v_2, \\ldots , v_{t_i}\\rbrace $ (single-vertex paths with $v_{s_i} = v_{t_i}$ are also allowed).", "We define a doubly-labeled vertex as a vertex that is both a source vertex and a target vertex, whether within the same path or in two different paths.", "The weight of a path system is given by the sum of the weights of its paths, $w(\\mathcal {P}) = \\sum _{P \\in \\mathcal {P}}{w(P)}$ .", "Each source vertex $v_{s_i} \\in V(P_i)$ associated with a path $P_i\\in \\mathcal {P}$ contains a token, i.e., $v_{s_i} \\in S$ ; the other vertices in $P_i$ may or may not contain tokens.", "A token $t$ is said to be isolated in a path system $\\mathcal {P}$ whenever there exists a single-vertex path $P = \\lbrace v\\rbrace \\in \\mathcal {P}$ such that the token $t$ is on the vertex $v$ and no other path in $\\mathcal {P}$ contains vertex $v$ .", "We say that a move (of a non-isolated token) associated with path $P_i \\in \\mathcal {P}$ is executable whenever the target vertex $v_{t_i}$ does not contain a token; an unobstructed move is trivially executable, whereas an obstructed move can always be reduced to a sequence of unobstructed moves, assuming $v_{t_i}$ contains no token.", "A path system $\\mathcal {P}$ is said to be valid (for $T$ ) or $T$ -valid whenever there exists some ordering of the moves that makes all the paths executable, and executing all the moves associated with $\\mathcal {P}$ results in each vertex in $T$ having a token on it (with the exception of isolated tokens which need not move).", "Clearly, in a valid path system, all source vertices are distinct and all target vertices are distinct, although some source 
vertices can be the same as some target vertices.", "We note that for any valid path system, we can always find an ordering in which to execute the moves, i.e., we can always find an executable move, unless the problem has already been solved with all vertices in $T$ occupied by tokens.", "We also note that, whenever we have a token on some target or internal vertex, then there must exist a path for which this token is on the source vertex.", "As described in the next section (Sec.", "), a typical assignment-based reconfiguration algorithm solves an assignment problem to compute a valid path system of minimum weight, i.e., in which each path is one of the many possible shortest paths between its source vertex and its target vertex, chosen arbitrarily among the set of all possible shortest paths.", "The obstruction problem is then solved to find a sequence of unobstructed moves on each path.", "Our proposed aro algorithm (Sec. )", "solves an assignment problem to compute a valid path system, runs the rerouting subroutine, and then runs the ordering subroutine, which yields an ordering of the paths such that the resulting sequence of associated moves guarantees that each move is unobstructed at the time of its execution." ], [ "The baseline reconfiguration algorithm", "A typical approach to solve an atom reconfiguration problem is to map it onto an assignment problem, which can be solved in polynomial time using an assignment algorithm, such as one based on the Hungarian algorithm.", "A typical assignment-based reconfiguration algorithm computes a (valid) distance-minimizing path system, which is a valid path system, $\\mathcal {P}$ , whose weight $w(\\mathcal {P})$ is minimized, i.e., the resulting control protocol exactly minimizes the total number of displacement operations performed on all atoms.", "[H] – The baseline reconfiguration algorithm [1] A static trap array, $A$ , represented as a positive edge-weighted graph $G=(V,E)$ ; an initial configuration of atoms, $C_0$ , represented as a set of source vertices, $S \\subseteq V(G)$ ; and a target configuration of atoms, $C_T$ , represented as a set of target vertices, $T \\subseteq V(G)$ .", "Compute the distance and a shortest path between all vertices of $S$ and $T$ by solving the all-pairs shortest path (APSP) problem ($\\mathcal {O}(n^2)$ on uniformly-weighted grids, $\\mathcal {O}(n^3)$ on arbitrary weighted graphs).", "Compute a distance-minimizing path system, forming a shortest path between each vertex of $T$ and a vertex of $S$ , by solving the assignment problem ($\\mathcal {O}(n^3)$ ).", "(optional) Using the isolation subroutine, modify the path system to (locally) isolate tokens that do not need to be displaced ($\\mathcal {O}(n^5)$ ).", "Using the obstruction solver subroutine, find a sequence of unobstructed moves associated with the ordered sequence of paths ($\\mathcal {O}(n^2)$ ).", "(optional) Using the batching subroutine, batch moves to perform control operations on multiple atoms in parallel ($\\mathcal {O}(n^3)$ ).", "To benchmark the performance of our proposed aro algorithm, we use the baseline reconfiguration algorithm (Alg.", "), which is a slightly modified version of a typical assignment-based algorithm that relies primarily on solving an assignment problem.", "This baseline algorithm solves atom reconfiguration problems in five steps, two of which are optional.", "In the first and second steps, the assignment subroutine (Sec.", "REF ) computes a valid distance-minimizing path system by solving the all-pairs 
shortest path (APSP) problem, followed by the assignment problem.", "In the third (optional) step, the isolation subroutine (Sec.", "REF ) isolates a maximal subset of tokens found on doubly-labeled vertices; we find a maximal subset (not contained in a larger subset) without the guarantee that it is a maximum subset (the largest subset in the whole graph), because finding a maximum subset is equivalent to the NP-complete problem of minimizing the number of tokens that move (which remains NP-complete even on grids [24]).", "In the fourth step, the obstruction solver subroutine (Sec.", "REF ) computes the sequence of unobstructed moves associated with the path system.", "In the fifth (optional) step, the batching subroutine (Sec.", "REF ) combines some of the moves to simultaneously displace multiple atoms in parallel.", "We now describe each of these subroutines in more detail." ], [ "Assignment subroutine", "The assignment subroutine first solves the all-pairs shortest path (APSP) problem in order to compute the shortest pairwise distances between occupied traps in the initial configuration (represented as the set of source vertices $S$ ) and occupied traps in the target configuration (represented as the set of target vertices $T$ ).", "More generally, the algorithm consists of computing shortest paths between every vertex in $S$ and every vertex in $T$ .", "On uniformly-weighted grids, the APSP problem can be solved in a time that is quadratic in the number of vertices ($\mathcal {O}(n^2)$ ), as the shortest distance between any two traps in the grid is equal to the Manhattan distance between them (we arbitrarily select the shortest path that consists of at most two straight lines).", "In the general edge-weighted case, this problem can be solved in $\mathcal {O}(n^3)$ time using the Floyd-Warshall algorithm [38].", "Alternatively, because it is more easily amenable to parallel implementation, e.g., on a GPU, Dijkstra's algorithm (run from each source vertex) could replace the Floyd-Warshall algorithm to find shortest paths between pairs of vertices in positive edge-weighted graphs.", "The assignment subroutine then solves the assignment problem to find one of possibly many sets of pairs of source and target traps that exactly minimize the total distance traveled by all atoms, and thus exactly minimize the total number of displacement operations.", "The problem of computing a distance-minimizing path system can therefore be reduced to solving an assignment problem.", "A solution to the assignment problem consists of finding a bijection $f: A \rightarrow B$ that minimizes the total cost $\sum _{a \in A} C(a, f(a))$ , where $A$ and $B$ are two sets of equal cardinality and $C: A \times B \rightarrow \mathbb {R}$ is a positive cost function.", "In the absence of a surplus of atoms, i.e., when $|S| = |T|$ , we set $A = S$ , $B = T$ , and for any pair $a \in A$ , $b \in B$ , we choose the cost function $C(a, b)$ to be the distance between vertex $a$ and vertex $b$ (see App.", "for the general definition of the cost function in the presence of surplus atoms).", "This assignment problem has a polynomial-time solution; the first documented solution, attributed to Kuhn and known as the Hungarian algorithm [21], has an asymptotic running time of $\mathcal {O}(|A|^4)$ that was later improved to $\mathcal {O}(|A|^3)$  [22].", "Clearly, the matching obtained from running the Hungarian algorithm minimizes the total displacement distance.", "Our current implementation
of the assignment subroutine uses the Floyd-Warshall algorithm [38], which we modified to store one of the shortest paths between each pair of source and target vertices, to solve the APSP problem, and the Hungarian algorithm to find one of the many possible assignments of source and target traps that minimize the total displacement distance." ], [ "Obstruction solver subroutine", "The obstruction solver subroutine seeks a sequence of moves associated with a valid path system.", "Processing every path in the path system in an arbitrary order, the subroutine attempts to move each token from its source vertex to its target vertex.", "If an obstructing token is present on the path, the subroutine switches the target of the token that it is attempting to move with the target of the obstructing token, updates the path system with the previously-computed shortest paths between the newly updated pairs of source and target vertices, and then attempts to move the obstructing token.", "The recursive procedure terminates when all vertices in $T$ are occupied.", "Formally, suppose that the token $t_i$ is on the source vertex $v_{s_i}$ of a path $P_i\in \mathcal {P}$ aiming towards its target vertex $v_{t_i}$ .", "If the move associated with $P_i$ is not obstructed, i.e., there is no other token on the path between $v_{s_i}$ and $v_{t_i}$ , then $t_i$ is moved to $v_{t_i}$ .", "Otherwise, there is some obstructing token, say $t_j$ , on the path $P_i$ .", "The obstruction solver subroutine finds the target vertex $v_{t_j}$ associated with token $t_j$ and then switches the target of $t_i$ with that of $t_j$ .", "The solver updates the path system by choosing the shortest path (previously computed during the APSP subroutine) for the updated pairings of source and target vertices, and recursively attempts to move the obstructing token.", "Because it started with a valid path system, the solver is guaranteed to return a valid sequence of unobstructed moves in polynomial time ($\mathcal {O}(n^2)$ time in the worst case).", "Figure: Examples of atom reconfiguration problems on graphs.", "(a) Example problem for which ordering the paths of a path system reduces the number of transfer operations.", "Executing the move associated with either $P_2$ or $P_3$ (or both) before the move associated with $P_1$ would force $t_2$ or $t_3$ (or both) to move twice.", "(b) Example problem for which a token (here, $t_3$ ) can be isolated without increasing the weight of the path system.", "(c) Example problem for which the displacement distance and the number of displaced tokens cannot be simultaneously minimized.", "Two tokens need to move to minimize the displacement distance, whereas only one token needs to move if the displacement distance is not required to be minimized.", "In the absence of loss, the performance of the obstruction solver subroutine depends on the ordering of the paths on which moves are executed.", "Indeed, the ordering affects the number of displaced atoms, as well as the total number of transfer operations.", "For example, Figure REF a shows an instance in which the execution of the ordering $(P_1, P_2, P_3)$ displaces every token once for a total of three moves, which is optimal, whereas the execution of the ordering $(P_2, P_3, P_1)$ requires five moves (given that the tokens are moved sequentially, one after the other), with tokens $t_2$ and $t_3$ being displaced twice.", "To avoid an unnecessary increase in the number of control operations resulting from an arbitrary ordering of the paths in the path system,
the ordering subroutine (Sec.", "REF ) in the aro algorithm seeks to compute a path system that admits an ordering of its paths, so that executing the moves specified by the ordered path system guarantees that each token is displaced at most once.", "The ordering of the paths in the execution of the obstruction solver subroutine also affects the number of atoms that are displaced.", "For example, Figure REF b shows an instance in which three atoms are displaced for one ordering $(P_3, P_2, P_1)$ , whereas two atoms are displaced for another ordering $(P_2, P_1, P_3)$ .", "Indeed, in the latter ordering, $P_2$ is not obstructed, so it can be directly executed.", "Then, because $P_1$ is obstructed by token $t_3$ , the target of $t_1$ becomes $v_{t_3}$ and the target of $t_3$ becomes $v_{t_1}=v_{s_3}$ .", "The obstruction solver subroutine first attempts to move $t_3$ , and, because it already occupies its target, there is nothing to be done.", "The solver then attempts to move $t_1$ to $v_{t_3}$ .", "If the shortest path between $v_{s_1}$ and $v_{t_3}$ goes through $P_2$ , then, because a move on this path is obstructed by $t_2$ , the solver first moves $t_2$ to $v_{t_3}$ and then $t_1$ to $v_{t_2}$ ; two tokens have thus been displaced instead of three.", "Because token $t_3$ can be discarded from the path system, we say that token $t_3$ can be isolated; the isolation subroutine (Sec.", "REF ) seeks to find the tokens that can be isolated and removes them from the path system to reduce unnecessary displacement operations." ], [ "Isolation subroutine", "A common issue associated with the assignment subroutine is that it might label a vertex as both a source and a target vertex, either labeling a target vertex as its own source or labeling a vertex as the target of one path and the source of another.", "Such double labeling might result in unnecessary transfer operations, possibly displacing an atom that could otherwise have remained idle.", "To eliminate the issue associated with double labeling, we implement an isolation subroutine, which we run before computing the moves associated with the path system, to remove doubly-labeled vertices whenever possible.", "The removal of a doubly-labeled vertex is possible whenever the recomputed path system (obtained after excluding the vertex and its incident edges from the graph and updating $S$ and $T$ accordingly) remains valid and has a total weight that is less than or equal to the total weight of the original path system.", "The isolation subroutine guarantees that every token in the resulting path system has to be displaced at least once (App. ).", "It also guarantees that the baseline reconfiguration algorithm does not serendipitously displace fewer tokens than the aro algorithm; however, because implementing the subroutine is computationally costly and these serendipitous instances are rare, this subroutine can be safely ignored in an operational setting." 
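To make the isolation check just described concrete, the following is a minimal sketch of a greedy isolation pass layered on top of the assignment step (APSP followed by a Hungarian-type solver). It assumes the networkx and scipy libraries; the function names, the grid-graph example, and the large penalty `big` used to flag unreachable pairings are illustrative assumptions, not the implementation used in this work.

```python
# Minimal sketch of the isolation heuristic described above, assuming networkx
# and scipy; function names and the penalty constant are illustrative choices.
import networkx as nx
import numpy as np
from scipy.optimize import linear_sum_assignment

def path_system_weight(graph, sources, targets, big):
    """Weight of a distance-minimizing path system (APSP + Hungarian-type
    assignment), or None if some target cannot be reached."""
    dist = {s: nx.single_source_dijkstra_path_length(graph, s) for s in sources}
    cost = np.array([[dist[s].get(t, big) for t in targets] for s in sources],
                    dtype=float)
    rows, cols = linear_sum_assignment(cost)  # matches every target to a source
    total = cost[rows, cols].sum()
    return None if total >= big else total

def isolate_doubly_labeled(graph, sources, targets):
    """Greedily remove doubly-labeled vertices whenever the recomputed path
    system remains valid and is no heavier than the current one."""
    S, T, G = list(sources), list(targets), graph.copy()
    big = graph.number_of_nodes() ** 3  # larger than any valid total weight
    current = path_system_weight(G, S, T, big)
    idle = []
    for v in [u for u in sources if u in targets]:  # doubly-labeled vertices
        S2 = [u for u in S if u != v]
        T2 = [u for u in T if u != v]
        G2 = G.copy()
        G2.remove_node(v)  # exclude the vertex and its incident edges
        w = path_system_weight(G2, S2, T2, big)
        if w is not None and w <= current:
            S, T, G, current = S2, T2, G2, w
            idle.append(v)  # the token on v never has to move
    return idle

# Example: on a 3x3 grid, the atom already sitting on the central target trap
# can be isolated instead of being shuffled around.
G = nx.grid_2d_graph(3, 3)
print(isolate_doubly_labeled(G, sources=[(0, 0), (1, 1), (2, 2)],
                             targets=[(1, 0), (1, 1), (1, 2)]))  # [(1, 1)]
```

A removal is kept only when the criterion stated above holds (the recomputed path system is valid and no heavier), so the sketch mirrors the greedy, locally maximal behavior of the isolation subroutine rather than searching for a maximum set of idle tokens.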
], [ "Batching subroutine", "Because typical assignment-based reconfiguration algorithms do not take into account the number of transfer operations, the resulting control protocols might perform as many extraction and implantation operations as displacement operations, i.e., an EDI cycle for each elementary displacement operation.", "To reduce the number of transfer operations, we implement a batching subroutine that seeks to simultaneously displace multiple atoms located on the same column of the grid graph within a single EDI cycle.", "The running time of the batching subroutine is no more than $\mathcal {O}(n^3)$ .", "Although the resulting performance of our baseline algorithm is improved over a typical assignment-based algorithm that does not rely on the isolation and batching subroutines, our benchmarking analysis shows that a larger gain in operational performance is achieved by the aro algorithm, which further includes a rerouting subroutine and an ordering subroutine." ], [ "The aro reconfiguration algorithm", "To improve the performance of assignment-based reconfiguration algorithms, we propose the assignment-rerouting-ordering (aro) algorithm, which exploits a rerouting subroutine (Sec.", "REF ) and an ordering subroutine (Sec.", "REF ).", "The aro algorithm performs fewer transfer operations than the baseline reconfiguration algorithm while still minimizing the total number of displacement operations, thereby strictly improving overall performance in the absence and in the presence of loss.", "The aro algorithm (Alg. )", "solves atom reconfiguration problems in seven steps, five of which are imported from the baseline algorithm.", "In the first three steps, similarly to the baseline algorithm, the assignment subroutine (Sec.", "REF ) computes a valid distance-minimizing path system by solving the all-pairs shortest path (APSP) problem and the assignment problem, and optionally isolates a maximal subset of tokens located on doubly-labeled vertices by using the isolation subroutine (Sec.", "REF ).", "Next, instead of directly computing the sequence of moves to execute as in the baseline algorithm, the aro algorithm seeks to further update the path system.", "In the fourth step, the rerouting subroutine (Sec.", "REF ) seeks to reroute each path in the path system in an attempt to reduce the number of displaced atoms.", "In the fifth step, the ordering subroutine (Sec.", "REF ) constructs an ordered path system that admits an ordering of its paths, guaranteeing that each atom moves at most once.", "In the sixth step, similarly to the baseline algorithm, the obstruction solver subroutine (Sec.", "REF ) computes a sequence of unobstructed moves associated with the path system; because the paths are ordered, the move associated with each path is unobstructed, and solving the obstruction problem is trivial.", "In the seventh (optional) step, the batching subroutine (Sec.", "REF ) combines some of the moves to simultaneously displace multiple atoms in parallel.", "[H] – The aro algorithm [1] A static trap array, $A$ , represented as a positive uniformly-weighted grid graph $G=(V,E)$ with $|V(G)| = n$ , $|E(G)| = m = \mathcal {O}(n)$ , and $\sum _{e \in E(G)}{w(e)} = \mathcal {O}(n)$ , i.e., $c = 1$ ; an initial configuration of atoms, $C_0$ , represented as a set of source vertices, $S \subseteq V(G)$ ; and a target configuration of atoms, $C_T$ , represented as a set of target vertices, $T \subseteq V(G)$ .", "Compute the distance and a shortest path between all pairs of vertices of $S$ and $T$ by solving the
all-pairs shortest path (APSP) problem ($\\mathcal {O}(n^2)$ on unweighted grids, $\\mathcal {O}(n^3)$ on arbitrary weighted graphs).", "Compute a distance-minimizing path system, forming a shortest path between each vertex of $T$ and a vertex of $S$ , by solving the assignment problem ($\\mathcal {O}(n^3)$ ).", "(optional) Using the isolation subroutine, modify the path system to (locally) isolate tokens that do not need to be displaced ($\\mathcal {O}(n^5)$ ).", "Using the rerouting subroutine, reroute the paths in the path system to locally minimize collisions in the path system ($\\mathcal {O}(n^3)$ ).", "Using the ordering subroutine, order the paths in the path system, which breaks cycles if they exist ($\\mathcal {O}(n^8)$ ).", "Using the obstruction solver subroutine, find a sequence of unobstructed moves associated with the ordered sequence of paths ($\\mathcal {O}(n^2)$ ).", "(optional) Using the batching subroutine, batch moves to perform control operations on multiple atoms in parallel ($\\mathcal {O}(n^3)$ ).", "Our current implementation of the aro algorithm has been designed to work with general edge-weighted graphs and runs in time $\\mathcal {O}(n^{8})$ on uniformly-weighted grid graphs.", "The correctness of the algorithm follows from Theorem REF (App.", "REF ), Lemma REF (App.", "), and Lemma REF (App. ).", "Theorem REF summarizes all the aforementioned results.", "We note that our time estimates for uniformly-weighted grid graphs are not optimized; we believe that we could potentially reduce the worst-case asymptotic running time to roughly $\\mathcal {O}(n^{4})$ without isolation or $\\mathcal {O}(n^{5})$ with isolation." ], [ "Rerouting subroutine", "The baseline reconfiguration algorithm returns a path system that minimizes the total number of displacement operations, without considering the total number of displaced atoms nor the total number of transfer operations.", "Because the problem of minimizing the number of transfer operations is an NP-complete problem [24] (even on grids), and finding a control protocol that simultaneously minimizes both displacement and transfer operations is impossible for some instances (Fig.", "REF c), we must resort to using heuristics that seek to reduce the number of atoms that are displaced.", "To reduce the number of displaced atoms while preserving the number of displacement operations, we rely on the distance-preserving rerouting subroutine, which attempts to substitute each path in the path system with another path of the same weight that contains fewer vertices occupied by atoms.", "The intent behind the usage of the rerouting subroutine is to attempt to increase the number of isolated tokens, i.e., tokens that do not have to move.", "We refer to rerouting a path as updating its sequence of internal vertices while preserving its source and target vertices.", "This version of rerouting was designed specifically for uniformly-weighted grid graphs (but can easily be generalized).", "The subroutine proceeds by looping over every path in the path system, and, for every path, attempting to reroute it, while fixing the rest of the path system, in a way that maximizes token isolation while preserving the weight of the path.", "If the original path is a straight line, then there is nothing to do, as the path cannot be rerouted without increasing its weight.", "Otherwise, suppose that the source vertex $v_s$ of the path is $(x_1, y_1)$ and the target vertex $v_t$ of the same path is $(x_2, y_2)$ , and that, without loss of generality, 
$x_1 < x_2$ and $y_1 < y_2$ .", "Given $W = |x_1 - x_2|$ and $H = |y_1 - y_2|$ , there are a total of ${(W + H)!}/{(H!\,W!)}$ shortest paths between vertex $(x_1, y_1)$ and vertex $(x_2, y_2)$ .", "Using a brute-force approach for every path is inefficient, as the number of rerouted paths to consider for every path is exponential in the Manhattan distance between the source vertex and the target vertex of the path.", "To avoid an exhaustive search and speed up computation, we exploit dynamic programming (App.", "); searching for a rerouted path that maximizes token isolation can then be performed in $\mathcal {O}(n^{3})$ time (Lemma REF ).", "A possible extension of the rerouting subroutine that we have developed, but whose performance we have not quantified, is to search for paths that might not necessarily preserve the minimum total displacement distance or minimum number of displacement operations.", "The distance-increasing rerouting subroutine (see App.", "for a detailed presentation) trades off an increase in displacement operations for a decrease in transfer operations.", "This subroutine runs in $\mathcal {O}(n^{7})$ time on uniformly-weighted grids (Lemma REF ).", "We note that the aro algorithm would still work as expected if we were to replace the distance-preserving rerouting subroutine with the distance-increasing rerouting subroutine." ], [ "Ordering subroutine", "The ordering subroutine constructs an ordered path system that admits a (partial) ordering of its paths, so that the moves associated with each path are unobstructed, i.e., atoms displaced in preceding moves do not obstruct the displacement of atoms in succeeding moves and atoms obstructing certain paths are displaced before the atoms on the paths that they obstruct.", "This ordering of moves effectively guarantees that each displaced atom undergoes exactly one EDI cycle, thereby restricting the number of transfer operations per displaced atom to its strict minimum of two (one extraction operation and one implantation operation per EDI cycle).", "The existence of a polynomial-time procedure to transform any path system into a (valid) ordered path system, in which the paths are ordered such that executing the moves associated with each path displaces every atom at most once, is guaranteed by Theorem REF .", "Theorem 1 A valid path system $\mathcal {P}$ in a positive edge-weighted graph $G$ can always be transformed in polynomial time into a valid cycle-free path system $\mathcal {P^{\prime }}$ such that $w(\mathcal {P^{\prime }})\le w(\mathcal {P})$ , where a cycle-free path system is a path system that includes no cycles, i.e., it induces a cycle-free graph, or a forest.", "Moreover, the dependency graph associated with the path system $\mathcal {P^{\prime }}$ is a directed acyclic graph (DAG), which admits a partial ordering of its vertices, implying a partial ordering of the corresponding moves.", "In simple terms, Theorem REF states that a path system can always be transformed (in polynomial time) into an ordered path system that admits an ordering of its paths resulting in no obstructions; executing the moves associated with each path is then guaranteed not to cause any collisions.", "The ordering is obtained by finding the partial ordering of the vertices of the dependency graph, which is a graph where each path is represented by a vertex, and where each dependency of a path $P_i$ on a path $P_j$ is represented by a directed edge from the vertex representing path $P_j$ to the vertex representing path
$P_i$ ; $P_i$ is said to depend on $P_j$ if $v_{s_j}$ is an internal vertex in $P_i$ , or if $v_{t_i}$ is an internal vertex in $P_j$ .", "This theorem is valid for any arbitrary path system defined over any arbitrary (edge-weighted) graph.", "[H] – The ordering subroutine [1] A valid path system, $\mathcal {P}$ , defined on a uniformly-weighted grid graph $G$ with $|V(G)| = n$ , $|E(G)| = m = \mathcal {O}(n)$ , and $\sum _{e \in E(G)}{w(e)} = \mathcal {O}(n)$ , i.e., $c = 1$ .", "Merge path system ($\mathcal {O}(n^{c+4}m^2) = \mathcal {O}(n^7)$ ).", "Unwrap path system ($\mathcal {O}(n^2m) = \mathcal {O}(n^{3})$ ).", "Detect/break cycles in path system ($\mathcal {O}(n^{c + 6}m) = \mathcal {O}(n^{8})$ ).", "Order path system ($\mathcal {O}(n^3)$ ).", "Theorem REF applies to general (positive) edge-weighted graphs, even when the path system is not distance-minimizing, e.g., is obtained from implementing the distance-increasing rerouting subroutine.", "Its proof directly results from the existence of the ordering subroutine (see Alg.", "REF ), which efficiently constructs a cycle-free path system and finds the ordering of the paths within it.", "We now describe the four steps of the ordering subroutine, providing formal proofs of supporting lemmas and theorems in App. .", "First, the merging step converts a path system into a (non-unique) merged path system (MPS).", "A merged path system is a path system such that no two paths intersect more than once, with the intersecting sections of the two paths possibly involving more than one vertex (all vertices in the intersection being consecutive).", "The merging operation does not increase the total path system weight, but it can decrease it; if the path system comes from the Hungarian algorithm, then the total weight of the path system is already minimized.", "A valid merged path system can be computed in time $\mathcal {O}(n^{c+4}m^2)$ for a graph $G$ where $|V(G)| = n$ , $|E(G)| = m$ , and $\sum _{e \in E(G)}{w(e)} = \mathcal {O}(n^{c})$ for some positive integer $c$  (see Lemma REF in App.", "REF ).", "Second, the unwrapping step converts an MPS into an unwrapped path system (UPS) by reconstructing tangled paths.", "An unwrapped path system is an MPS such that no two paths within it are tangled.", "Two paths are tangled if $P_i$ wraps or is wrapped by $P_j$ , where a path $P_i$ is said to be wrapped in another path $P_j$ if it is entirely contained in it; if $P_i$ is wrapped by $P_j$ , then $P_j$ wraps $P_i$ .", "Unwrapping the paths that a path $P_j$ wraps is performed by sorting their respective source traps and target traps separately based on their order of occurrence within $P_j$ , and assigning every source trap to the target trap of the same order.", "A valid UPS can be obtained from a valid MPS in time $\mathcal {O}(n^2m)$  (see Lemma REF in App.", "REF ).", "Third, the cycle-breaking step converts a UPS into a cycle-free path system (CPS).", "The cycle-breaking step modifies the path system such that the graph induced on the modified path system is a cycle-free graph (a forest).", "Even though a graph can have exponentially many cycles, we prove that the cycle-breaking step can be executed in polynomial time, i.e., a valid CPS can be obtained from a valid UPS in time $\mathcal {O}(n^{c + 6}m)$  (see Theorem REF in App.", "REF ).", "This result relies on the existence of “special cycles” in a graph with cycles (App.", "REF ), which can be found using a procedure that we provide in App.", "REF .", "Once a special
cycle has been found, the set of paths that induce it can be found and updated to break the special cycle (see App.", "REF ) using a polynomial-time algorithm (see Lemma REF in App.", "REF ).", "Figure: Justification for the cycle-breaking procedure.", "Example of a path system defined on a weighted graph that induces a cycle that cannot be broken by computing a minimum spanning tree (MST) without increasing the total weight of the path system.", "The initial path system has a total weight equal to 46.", "The MST of the graph induced on the path system includes all the edges of the graph except for the edge of weight 10.", "The path system generated by computing all-pairs shortest path on the MST and then computing a source-target matching has a total weight of 52, irrespective of the specific choice of the matching (among $4! = 24$ possibilities), as the edges of weight 5 will each be in two paths and the edge of weight 6 will be in 4 paths.", "A similar example can be constructed for the case of uniformly-weighted graphs, where deleting some edge might increase the weight of the path system.", "The reason for using the (time-consuming) cycle-breaking procedure to break cycles instead of, e.g., computing minimum spanning trees (MSTs) using Theorem 2.1 of Călinescu et al.", "[24], is that computing an MST to break cycles might in fact increase the total weight of the path system (see Fig.", "REF for an example on a weighted graph).", "However, we note that our current implementation largely follows the proof of existence of cycle-free path systems, and more efficient implementations $(\mathcal {O}(n^4))$ are possible.", "For instance, a different algorithm could iterate over edges of cycles formed by the paths in the path system and attempt to delete them one by one; the deletion of an edge is made permanent whenever we can still find a valid path system of the appropriate weight in the graph minus the edge.", "The procedure is then repeated on the new graph, i.e., the graph minus the edge, until we obtain a cycle-free path system.", "Proving the correctness of such an algorithm requires proving Theorem REF , and it is therefore important to note that our presentation is oriented towards simplifying the proofs of correctness rather than optimizing the worst-case asymptotic running time of the aro algorithm.", "Fourth, the ordering step constructs an ordered path system (OPS) by ordering the moves associated with a CPS, e.g., by constructing the DAG associated with the path system.", "This step can be performed in time $\mathcal {O}(n^{3})$  (see Theorem REF in App.", "REF ).", "The moves associated with the OPS can be trivially computed using the obstruction solver subroutine (see Sec.", "REF ).", "Having described in detail the baseline reconfiguration algorithm and the aro algorithm, we now proceed to numerically quantify their operational performance in the absence of loss (see Sec. )", "and in the presence of loss (see Sec.", ")."
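Before turning to these benchmarks, the final ordering step can be illustrated with a short sketch. Assuming Python 3.9+ (for the standard-library graphlib module) and a cycle-free path system represented as plain lists of vertices, the snippet below builds the dependency graph exactly as defined above and topologically sorts it; the input format and the function name are illustrative, not the interface of our implementation.

```python
# Minimal sketch of the ordering step on a cycle-free path system; each path is
# a list of vertices from its source (first entry) to its target (last entry).
from graphlib import TopologicalSorter

def order_paths(paths):
    """Topologically sort the dependency graph: P_i depends on P_j if the
    source of P_j is an internal vertex of P_i, or if the target of P_i is an
    internal vertex of P_j, so P_j must be executed before P_i."""
    internal = [set(p[1:-1]) for p in paths]
    deps = {i: set() for i in range(len(paths))}  # i -> indices executed first
    for i, p_i in enumerate(paths):
        for j, p_j in enumerate(paths):
            if i != j and (p_j[0] in internal[i] or p_i[-1] in internal[j]):
                deps[i].add(j)
    return list(TopologicalSorter(deps).static_order())

# Example: the token sitting on (1, 0) obstructs P_0 and must be moved first.
paths = [[(0, 0), (1, 0), (2, 0)],  # P_0: (0, 0) -> (2, 0), passes through (1, 0)
         [(1, 0), (1, 1)]]          # P_1: (1, 0) -> (1, 1)
print(order_paths(paths))           # [1, 0]: execute P_1, then P_0
```

TopologicalSorter raises a CycleError whenever the dependency graph is not a DAG, which is precisely why the merging, unwrapping, and cycle-breaking steps must be run before the ordering step.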
], [ "Quantifying performance in the absence of loss", "We numerically quantify the performance of the aro algorithm in the absence of loss.", "We choose the performance metrics to be the total number of displacement operations, $N_\nu $ , the total number of transfer operations, $N_\alpha $ , and the total number of control operations, $N_T=N_\alpha +N_\nu $ .", "The values computed for these performance metrics can be compared to the values obtained using the baseline reconfiguration algorithm and the 3-approx algorithm; the total number of displacement operations is minimized by using the baseline reconfiguration algorithm, whereas the total number of transfer operations is bounded by at most 3 times its optimal value by using the 3-approx algorithm.", "These performance metrics directly correlate with the operational performance obtained in the presence of loss (see Sec.", "), quantified in terms of the mean success probability; reducing the total number of displacement and transfer operations results in fewer atoms lost, whereas reducing the number of displaced atoms concentrates the loss probability in as few atoms as possible, simplifying the problem of filling up empty target traps in subsequent reconfiguration cycles.", "Although our results are valid for any arbitrary geometry, and more generally for atom reconfiguration problems defined on arbitrary graphs, we focus on the problem of preparing compact-centered configurations of $N_a^T=\sqrt{N_a^T}\times \sqrt{N_a^T}$ atoms in rectangular-shaped square-lattice arrays of $N_{t}=\sqrt{N_a^T}\times \eta \sqrt{N_a^T}=\eta N_a^T$ static traps, where $\eta =N_{t}/N_a^T$ is the trap overhead factor, which quantifies the overhead in the number of optical traps needed to achieve a desired configuration size.", "In the presence of loss, the overhead factor is typically a non-linear function of the configuration size that is chosen based on the desired mean success probability [20].", "In the absence of loss, the overhead factor is typically chosen based on the desired baseline success probability, which depends on the probability of loading at least as many atoms as needed to satisfy the target configuration, and thus on the loading efficiency, $\epsilon $ .", "Figure: Reducing the fraction of displaced atoms using the rerouting subroutine in the absence of loss.", "(a) Mean fraction of displaced atoms, $\bar{f}_\nu =\langle N_a^\nu /N_a^0\rangle $ , computed for various configuration sizes for the baseline (yellow circle), assignment-rerouting (orange square), and 3-approx (purple inverted triangles) algorithms.", "(b) Distribution of the relative fraction of displaced atoms and its mean value for various configuration sizes computed as the ratio of the fraction of displaced atoms for the assignment-rerouting algorithm and the baseline reconfiguration algorithm.", "For each target configuration size, we sample over a thousand initial configurations of atoms by distributing $N_a^0$ atoms at random over $N_{t}$ traps.", "We then count the number of transfer and displacement operations for each displaced atom within each realization and compute the ensemble average over the distribution of initial configurations.", "The number of atoms in the initial configuration satisfies a binomial distribution, $\tilde{N}_a^0\sim \text{Binomial}(N_{t},\epsilon )$ , where the loading efficiency is conservatively chosen to be $\epsilon =0.5$ and the trap overhead factor is chosen to be $\eta =1/\epsilon =2$ .", "As
computed from the cumulative distribution function of the binomial distribution, the baseline success probability is thus $\bar{p}=0.5$ , i.e., half the initial configurations contain enough atoms to satisfy the target configuration; however, we restrict our analysis to successful reconfiguration protocols with $N_a^0\ge N_a^T$ .", "We first compute the reduction in the number of displaced atoms, $N_a^\nu $ , or, equivalently, the fraction of displaced atoms, $f_\nu =N_a^\nu /N_a^0$ , achieved by supplementing our baseline reconfiguration algorithm with the rerouting subroutine.", "Considering the baseline reconfiguration algorithm, the mean fraction of displaced atoms increases with configuration size (Fig.", "REF a) from $\bar{f}_\nu ^{0}=0.7(2)$ for preparing a configuration of $N_a^T=4\times 4$ atoms to $\bar{f}_\nu ^{0}=0.94(2)$ for preparing a configuration of $N_a^T=32\times 32$ atoms.", "Nearly all atoms are thus displaced for large configuration sizes, in contrast with the 3-approx algorithm, which displaces only slightly more than half of the atoms ($0.55(2)$ for a configuration of $N_a^T=32\times 32$ atoms).", "Compared with the baseline reconfiguration algorithm, the assignment-rerouting algorithm, i.e., the baseline algorithm supplemented with the rerouting subroutine, slightly reduces the fraction of displaced atoms (Fig.", "REF b), achieving a mean relative fraction of displaced atoms of $\langle f_\nu ^{ar}/f_\nu ^{0}\rangle =0.94(^{+6}_{-8})$ for preparing a configuration of $N_a^T=4\times 4$ atoms and $\langle f_\nu ^{ar}/f_\nu ^{0}\rangle =0.983(7)$ for preparing a configuration of $N_a^T=32\times 32$ atoms.", "The mean relative gain in performance is thus larger for smaller configuration sizes, even though a gain in performance is not always possible for small configuration sizes when the baseline reconfiguration algorithm already minimizes the total number of displaced atoms ($f_{\nu }^{ar}/f_{\nu }^{0}=1$ ).", "Figure: Reducing the number of transfer operations using the rerouting and ordering subroutines in the absence of loss for a target configuration of $32\times 32$ atoms.", "(a) Distribution of the number of EDI cycles per displaced atom using the baseline (yellow), aro (red), and 3-approx (red) algorithms.", "(b) Distribution of the number of transfer operations computed relative to the 3-approx for the baseline (yellow) and aro (red) algorithms.", "(c) Mean relative number of transfer operations for the aro algorithm computed relative to the baseline reconfiguration algorithm for various configuration sizes.", "(d) Distribution of the relative number of transfer operations for the aro algorithm computed relative to the baseline reconfiguration algorithm.", "We then compute the reduction in the number of EDI cycles per displaced atom obtained by implementing the ordering subroutine.", "The baseline reconfiguration algorithm executes moves in an arbitrary order, possibly displacing the same atom multiple times, and thus making it undergo multiple EDI cycles, each of which entails unnecessary extraction and implantation operations.", "The ordering subroutine improves on the baseline reconfiguration algorithm by ordering the moves so that each atom undergoes at most one EDI cycle (Fig.", "REF a).", "The number of transfer operations per displaced atom is strictly reduced to two, as is the case for the 3-approx algorithm, given one extraction and one implantation operation per EDI cycle.", "We further compute the reduction in the total number of transfer operations
obtained by both reducing the number of displaced atoms using the rerouting subroutine and reducing the number of transfer operations per displaced atom using the ordering subroutine, i.e., by implementing the full aro algorithm.", "We express the number of transfer operations relative to the 3-approx algorithm, $N_\alpha /N^{3}_\alpha $ , which performs at most three times the minimum number of transfer operations.", "For preparing a configuration of $N_a^T=32\times 32$ atoms, the baseline reconfiguration algorithm performs on average $2.7(2)$ times more transfer operations than the 3-approx algorithm, whereas the aro algorithm performs on average $1.66(4)$ times more transfer operations (Fig.", "REF b).", "The mean relative fraction of transfer operations performed by the aro algorithm over the baseline reconfiguration algorithm decreases with configuration size (Fig.", "REF c), ranging from $0.91(9)$ for preparing a configuration of $N_a^T=4\times 4$ atoms to $0.62(4)$ for preparing a configuration of $N_a^T=32\times 32$ atoms (Fig.", "REF d).", "Figure: Reducing the total number of control operations using the aro algorithm in the absence of loss for a target configuration of $32\times 32$ atoms.", "(a) Distribution of the number of transfer and displacement operations for the baseline (yellow), aro (red), and 3-approx (purple) algorithms.", "(b) Distribution of the number of control operations for the baseline (yellow), aro (red), and 3-approx (purple) algorithms.", "(c-d) Distribution of the relative number of control operations and its mean value for various configuration sizes for the aro algorithm computed relative to the baseline (yellow) and 3-approx algorithms (purple).", "We finally compute the total number of control operations by summing the number of transfer operations and the number of displacement operations for all atoms (Fig.", "REF a).", "The total number of control operations correlates with the mean success probability in the presence of loss when the duration and the efficiency of control operations are comparable for displacement and transfer operations.", "Because both the baseline and the aro algorithms exactly minimize the number of displacement operations, and the aro algorithm performs fewer transfer operations than the baseline algorithm, the aro algorithm performs fewer control operations than the baseline algorithm (Fig.", "REF b).", "In addition, although the 3-approx algorithm performs fewer transfer operations than both the baseline and aro algorithms, it performs significantly more displacement operations, so the 3-approx algorithm performs worse than the baseline and aro algorithms in terms of total number of control operations (Fig.", "REF b).", "The mean relative number of control operations computed for the aro algorithm relative to the baseline (3-approx) algorithm decreases with configuration size, ranging from $0.95(6)$ ($0.93(12)$ ) for a configuration of $N_a^T=4\times 4$ atoms to $0.89(2)$ ($0.36(5)$ ) for a configuration of $N_a^T=32\times 32$ atoms (Fig.", "REF c-d).", "In the presence of loss, this relative reduction in the number of control operations translates into a relative increase in the mean success probability, which we quantify in the next section."
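For reference, the baseline success probability quoted above ($\bar{p}=0.5$ for $\epsilon =0.5$ and $\eta =2$) can be recovered directly from the binomial loading statistics. The short sketch below, which assumes scipy and is not part of the benchmarking code used here, evaluates $P(N_a^0 \ge N_a^T)$ for a few configuration sizes.

```python
# Minimal sketch (not the benchmarking code used in this work): baseline success
# probability P(N_a^0 >= N_a^T) for N_a^0 ~ Binomial(N_t, eps) with eta = 2.
from scipy.stats import binom

eps = 0.5                            # loading efficiency
for side in (4, 8, 16, 32):
    n_target = side * side           # N_a^T
    n_traps = 2 * n_target           # N_t = eta * N_a^T with eta = 1/eps = 2
    p_baseline = binom.sf(n_target - 1, n_traps, eps)  # P(N_a^0 >= N_a^T)
    print(f"{side}x{side}: p_baseline = {p_baseline:.3f}")
```

The computed probability approaches 1/2 from above as the configuration size grows (about 0.51 for a $32\times 32$ configuration), consistent with the quoted baseline success probability of $\bar{p}\approx 0.5$.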
], [ "Quantifying performance in the presence of loss", "Figure: Increasing the mean success probability using the aro algorithm.", "(a,c) Mean success probability, $\bar{p}$ , for preparing a configuration of $N_a^T=N_t^x\times N_t^x$ atoms in an array of $N_t=N_t^x\times N_t^y$ traps for (a) $N_t^x=16$ and (c) $N_t^x=32$ .", "(b,d) Mean success probability for preparing a configuration of $N_a^T=N_t^x\times N_t^x$ atoms computed for (b) $N_t^x=16$ , $N_t^y=36,38$ , and (d) $N_t^x=32$ , $N_t^y=86,~88$ .", "The markers represent the 3-approx (purple inverted triangle), baseline (yellow disk), red-rec (blue triangle), and aro (red square) algorithms.", "We numerically evaluate the performance of the aro algorithm in the presence of loss using realistic physical parameters following the approach outlined in our previous work introducing the redistribution-reconfiguration (red-rec) heuristic algorithm [20].", "We conservatively choose the trapping lifetime to be $60~\text{seconds}$ and the success probability of elementary displacement and transfer operations to be $0.985$ .", "The red-rec algorithm is a heuristic algorithm that seeks to increase operational performance by performing parallel control operations.", "The key idea of the algorithm is, first, to redistribute atoms from columns containing more atoms than needed (donors) to columns containing fewer atoms than needed (receivers) and, then, to reconfigure each column using the exact 1D reconfiguration algorithm.", "In addition to increasing operational performance, it also enables efficient implementation on a low-latency feedback control system with fast computational running time.", "For the current study, we implement a slightly improved version of the red-rec algorithm that is more computationally efficient.", "We choose the performance metric to be the mean success probability obtained by averaging the success probability over the distribution of random initial configurations and loss processes.", "In the presence of loss, larger arrays are required to load enough atoms to replace atoms lost during multiple reconfiguration cycles.", "As the height of the trap array is increased, there is a sharp transition between near-certain success and near-certain failure (Fig.", "REF a-c).", "The relative gain in performance achieved by the aro algorithm over the baseline algorithm is maximized at the inflection point where $\bar{p}=0.5$ .", "In the presence of loss, the aro algorithm outperforms the 3-approx, baseline, and red-rec algorithms (Fig.", "REF b-d).", "Comparing the baseline and aro algorithms, the mean success probability increases from $\bar{p}_a=0.29(3)$ ($0.65(3)$ ) to $\bar{p}_{aro}=0.41(4)$ ($0.74(3)$ ) for a static trap array of $16\times 36$ ($16\times 38$ ) traps, and from $\bar{p}_a=0.41(4)$ ($0.64(3)$ ) to $\bar{p}_{aro}=0.82(3)$ ($0.94(2)$ ) for a static trap array of $32\times 86$ ($32\times 88$ ) traps.", "The relative gain in performance is $\bar{p}_{aro}/\bar{p}_{a}=1.4(3)$  ($1.1(1)$ ) and $\bar{p}_{aro}/\bar{p}_{a}=2.0(2)$ ($1.5(1)$ ) for a static trap array of $16\times 36$ ($16\times 38$ ) and $32\times 86$ ($32\times 88$ ) traps, respectively.", "The relative improvement in performance is offset by a significant increase in computational running time.", "The red-rec algorithm thus maintains an operational advantage when real-time computation is needed.", "Our current implementation of the aro
algorithm has, however, not been optimized for real-time operation, and we foresee opportunities to further improve its running time." ], [ "Conclusion", "In conclusion, we have introduced the assignment-rerouting-ordering (aro) algorithm and shown that it outperforms our baseline reconfiguration algorithm, which is a typical assignment-based algorithm, both in the absence and in the presence of loss.", "The aro algorithm exactly minimizes the total number of displacement operations, while reducing the number of displaced atoms and restricting the number of transfer operations per displaced atom to strictly two (one extraction and one implantation operation per EDI cycle).", "Further gains in performance could possibly be achieved by trading off an increase in displacement operations for a decrease in transfer operations.", "As we have seen, to minimize the total number of displacement operations, the baseline reconfiguration algorithm needs to displace nearly all atoms, including those that are already located in the target region of the trap array.", "Reducing the number of displaced atoms would reduce the number of transfer operations.", "This reduction could be achieved by imposing the constraint that a subset of the atoms located in the target region remain idle.", "This subset could be selected at random, searched over, or identified based on heuristics, e.g., fixing atoms located near the geometric center of the target configuration to minimize their corruption, as they are the most costly to replace.", "Alternatively, the distance-increasing rerouting subroutine, which was designed to express the aforementioned trade-off between displacement operations and transfer operations, can be substituted for the distance-preserving rerouting subroutine.", "In this paper, we have alluded to the reliance on adaptive algorithms as another avenue for improving operational performance.", "The use of adaptive algorithms entails running different reconfiguration algorithms in each reconfiguration cycle, such that the choice of the algorithm to execute is possibly dependent on the measured atom configuration.", "Deploying adaptive algorithms requires quantifying the performance of the algorithms at our disposal in terms of how the atoms are distributed in the initial configuration.", "This will in turn make it possible to come up with a measure of the performance of the algorithms for a given configuration of atoms.", "A current limitation of our implementation is the computational running time, which prevents real-time operation and characterization for large configuration sizes.", "As a result of Theorem REF , we now know of the existence of an $\mathcal {O}(n^4)$ algorithm for the assignment-rerouting-ordering subroutine on arbitrary (positive) edge-weighted graphs, which we briefly described in Sec.", "REF .", "Should the need arise, making the aro algorithm more computationally efficient than that will require further research and/or integrating additional heuristics that forgo optimality and exhaustiveness.", "An encouraging observation is that our improved red-rec heuristic algorithm, which is compatible with real-time implementation, achieves performance comparable to that of the aro algorithm.", "Future work will focus on improving computational performance, quantifying performance on arbitrary graphs, and demonstrating applicability in an operational setting.", "Our results highlight the value to be gained from extending formal results from combinatorial optimization and graph theory to an operational setting.", "Additional
research opportunities exist in developing exact and approximation algorithms for atom reconfiguration problems that simultaneously optimize multiple objective functions, as well as for problems with labeled, distinguishable atoms that encode quantum information.", "These algorithms could be used to design efficient schemes for implementing quantum circuits, quantum protocols, and quantum error correction codes on quantum devices that admit dynamic connectivity graphs, such as those provided by dynamic configuration of atoms.", "We emphasize that our results are general and thus extend beyond the scope of atom reconfiguration problems." ], [ "Acknowledgements", "This work was supported by Industry Canada, the Canada First Research Excellence Fund (CFREF), the Canadian Excellence Research Chairs (CERC 215284) program, the Natural Sciences and Engineering Research Council of Canada (NSERC RGPIN-418579 and RGPIN-2022-02953) Discovery program, the Canadian Institute for Advanced Research (CIFAR), and the Province of Ontario.", "Amer E. Mouawad's work was supported by the Alexander von Humboldt Foundation and partially supported by the PHC Cedre project 2022 “PLR”." ], [ "Assignment subroutine", "Following the presentation of the baseline reconfiguration algorithm in Sec.", "REF , we show that the problem of computing a $T$ -valid distance-minimizing path system can be reduced to the assignment problem.", "Because we would like to solve reconfiguration problems with surplus atoms, we make a small modification in the reduction.", "We define a set $U$ ($|U| = |S| - |T|$ ), which comprises what we call fictitious vertices, and we set $A = S$ and $B = T \cup U$ .", "As for the cost function, we define it for any tuple in $A \times B$ as follows: $ C(a, b) ={\left\lbrace \begin{array}{ll}d_G(a, b) & a \in S,\ b \in T\\W & \text{otherwise} \\\end{array}\right.}$ $W$ is a large number (we set $W$ large enough to ensure that it is larger than $|S|$ times the weight of the heaviest shortest path).", "Running the Hungarian algorithm on the constructed instance will yield an assignment such that each non-fictitious vertex in $B$ is assigned to a source vertex in $A$ ; this subset of the matching can be used to construct a valid distance-minimizing path system $\mathcal {P}$ (where for each pair we pick one of the many possible shortest paths) [24].", "We provide a proof for completeness.", "Lemma 1 Given an $n$ -vertex (positive) edge-weighted graph $G$ and two sets $S,T \subseteq V(G)$ such that $|S| \ge |T|$ , we can compute in time $\mathcal {O}(n^3)$ a $T$ -valid distance-minimizing path system $\mathcal {P}$ .", "Note that $C(a, b)$ is equal to the weight of a shortest path in $G$ connecting $a$ and $b$ whenever $a \in S$ and $b \in T$ , and to $W$ otherwise.", "After we obtain the matching, we can move the tokens matched to non-fictitious vertices as follows.", "Assume we want to move token $t_1$ ; if the path $t_1$ would take to reach its target has another token $t_2$ on it, we switch the targets of the two tokens and we move $t_2$ instead (this is similar to the obstruction solver subroutine).", "One can check that the weight of the edges traversed does not exceed the weight of the path system.", "On the other hand, the optimum solution (in fact, any solution) must move tokens to targets and cannot do better than the total weight of the shortest paths in a minimum-weight assignment.", "The running time follows from the fact that we run the Floyd-Warshall algorithm followed
by the Hungarian algorithm, each of which has a running time of $\\mathcal {O}(n^3)$ ." ], [ "Isolation subroutine", "As described in Sec.", "REF , the isolation subroutine is a heuristic that guarantees that, for any distance-minimizing path system, every token in the path system (after deleting some vertices and tokens from the graph) has to move at least once, regardless of the path system or of the order in which we execute the moves.", "We let $I_{\\mathcal {P}}$ denote the set of vertices which contain tokens that are isolated in $\\mathcal {P}$ , and we let $F_{\\mathcal {P}}$ denote the set of vertices in $S$ that do not appear in $\\mathcal {P}$ (their tokens are said to be fixed by $\\mathcal {P}$).", "We show, in particular, that we can transform $\\mathcal {P}$ into another path system $\\mathcal {P^{\\prime }}$ such that $w(\\mathcal {P^{\\prime }}) = w(\\mathcal {P})$ , $I_{\\mathcal {P^{\\prime }}} \\supseteq I_{\\mathcal {P}}$ , $F_{\\mathcal {P^{\\prime }}} \\supseteq F_{\\mathcal {P}}$ , and there exists no other path system $P^{\\prime \\prime }$ such that $w(\\mathcal {P^{\\prime \\prime }}) = w(\\mathcal {P^{\\prime }})$ , and either $I_{\\mathcal {P^{\\prime \\prime }}} \\supset I_{\\mathcal {P^{\\prime }}}$ or $F_{\\mathcal {P^{\\prime \\prime }}} \\supset F_{\\mathcal {P^{\\prime }}}$ (see Figure REF b for the example of an instance where our rerouting heuristics would fail to isolate an extra token).", "Assume we are given a distance-minimizing path system $\\mathcal {P}$ and let $v \\in S \\subseteq V(G)$ be a vertex containing a token $t$ which is not isolated nor fixed in $\\mathcal {P}$ (not in $I_{\\mathcal {P}} \\cup F_{\\mathcal {P}}$ ).", "Let $G^{\\prime } = G - (I_\\mathcal {P} \\cup F_\\mathcal {P} \\cup \\lbrace v\\rbrace )$ denote the graph obtained from $G$ after deleting the set of vertices $I_\\mathcal {P} \\cup F_\\mathcal {P} \\cup \\lbrace v\\rbrace \\subseteq V(G)$ and the edges incident on all vertices in $I_\\mathcal {P} \\cup F_\\mathcal {P} \\cup \\lbrace v\\rbrace $ .", "We compute an assignment (App. 
)", "in the graph $G^{\\prime } = G - (I_\\mathcal {P} \\cup F_\\mathcal {P} \\cup \\lbrace v\\rbrace )$ with $S^{\\prime } = S \\setminus (I_\\mathcal {P} \\cup F_\\mathcal {P} \\cup \\lbrace v\\rbrace )$ and $T^{\\prime } = T \\setminus (I_\\mathcal {P} \\cup F_\\mathcal {P} \\cup \\lbrace v\\rbrace )$ .", "Let $\\mathcal {P^{\\prime }}$ denote the path system associated with this newly computed assignment.", "If $\\mathcal {P^{\\prime }}$ is $T^{\\prime }$ -valid and $w(\\mathcal {P^{\\prime }}) = w(\\mathcal {P})$ , then we use $\\mathcal {P^{\\prime }}$ instead of $\\mathcal {P}$ and consider token $t$ (on vertex $v$ ) as either an extra isolated token or an extra fixed token, depending on whether $v \\in S \\cap T$ or $v \\in S \\setminus T$ .", "We then add $v$ to either $I_{\\mathcal {P^{\\prime }}}$ or $F_{\\mathcal {P^{\\prime }}}$ , depending on whether the token on it was isolated or fixed.", "This process is repeated as long as we can find new tokens to isolate or fix.", "We show that a distance-minimizing path system in which no tokens can be isolated or fixed implies that all other tokens will have to move at least once.", "We call such a path system an all-moving path system.", "Lemma 2 Given an $n$ -vertex (positive) edge-weighted graph $G$ , two sets $S,T \\subseteq V(G)$ such that $|S| \\ge |T|$ , and a $T$ -valid distance-minimizing path system $\\mathcal {P}$ , we can compute, in time $\\mathcal {O}(n^5)$ , a valid all-moving path system $\\mathcal {P^{\\prime }}$ such that $w(\\mathcal {P^{\\prime }}) = w(\\mathcal {P})$ , $I_{\\mathcal {P^{\\prime }}} \\supseteq I_{\\mathcal {P}}$ , $F_{\\mathcal {P^{\\prime }}} \\supseteq F_{\\mathcal {P}}$ , and there exists no other valid path system $P^{\\prime \\prime }$ such that $w(\\mathcal {P^{\\prime \\prime }}) = w(\\mathcal {P^{\\prime }})$ , and either $I_{\\mathcal {P^{\\prime \\prime }}} \\supset I_{\\mathcal {P^{\\prime }}}$ or $F_{\\mathcal {P^{\\prime \\prime }}} \\supset F_{\\mathcal {P^{\\prime }}}$ .", "Let $\\mathcal {P^{\\prime }}$ denote the path system obtained after exhaustively applying the described procedure.", "Clearly, $\\mathcal {P^{\\prime }}$ is valid and $w(\\mathcal {P^{\\prime }}) = w(\\mathcal {P})$ (by construction).", "It remains to show that $\\mathcal {P^{\\prime }}$ is an all-moving path system.", "In other words, we show that there exists no other valid path system $\\mathcal {P^{\\prime \\prime }}$ such that $w(\\mathcal {P^{\\prime \\prime }}) = w(\\mathcal {P^{\\prime }})$ and $\\mathcal {P^{\\prime \\prime }}$ can isolate or fix a proper superset of $I_{\\mathcal {P^{\\prime }}}$ or $F_{\\mathcal {P^{\\prime }}}$ , i.e., $I_{\\mathcal {P^{\\prime \\prime }}} \\supset I_{\\mathcal {P^{\\prime }}}$ or $F_{\\mathcal {P^{\\prime \\prime }}} \\supset F_{\\mathcal {P^{\\prime }}}$ .", "Assume that $\\mathcal {P^{\\prime \\prime }}$ exists and let $v^\\star \\in I_{\\mathcal {P^{\\prime \\prime }}} \\setminus I_{\\mathcal {P^{\\prime }}}$ or $v^\\star \\in F_{\\mathcal {P^{\\prime \\prime }}} \\setminus F_{\\mathcal {P^{\\prime }}}$ .", "In either case, we know that $v^\\star \\in S$ , which gives us the required contradiction as our procedure would have deleted $v^\\star $ (and either isolated or fixed the token on it).", "For the running time, note that we can iterate over vertices in $S$ in $\\mathcal {O}(n)$ time.", "Once a vertex is deleted, we run the APSP algorithm followed by the Hungarian algorithm, and this requires $\\mathcal {O}(n^3)$ time.", "We can delete at most $n$ vertices, 
and whenever a vertex is successfully deleted (which corresponds to fixing or isolating a token), we repeat the procedure.", "Therefore, the total running time of the isolation procedure is $\mathcal {O}(n^5)$ ." ], [ "Distance-preserving rerouting subroutine", "Following Sec.", "REF , we show that the problem of rerouting a path system to maximize the number of isolated tokens presents a substructure that we can utilize to design a polynomial-time dynamic programming solution.", "We present this subroutine only for unweighted (uniformly-weighted) grid graphs and we note that the distance-preserving rerouting subroutine could potentially be generalized to work on more general (positive) edge-weighted graphs.", "We assume that we are working with a path $P$ with a source vertex $(c_1, r_1)$ and a target vertex $(c_2, r_2)$ such that $r_1 < r_2$ and $c_1 < c_2$ .", "The other three cases entail flipping one or both of these inequalities and can be solved in a similar way.", "For the path in question, we introduce a $(W + 1) \times (H + 1)$ matrix, $dp$ , such that $W = |c_1 - c_2|$ , $H = |r_1 - r_2|$ , and $dp[i][j]$ is the smallest number of isolated tokens on any path between $(c_1, r_1)$ and $(c_1 + i, r_1 + j)$ in the path system that excludes the current path $P$ , i.e., in $\mathcal {P} \setminus P$ .", "The $dp[i][j]$ values are computed as follows: $\begin{split}& dp[i][j] = \\&\begin{matrix}\textsf {I} \\[0.5ex]\textsf {II} \\[0.5ex]\textsf {III} \\[0.5ex]\textsf {IV} \\[0.5ex]\\\end{matrix}\quad {\left\lbrace \begin{array}{ll}0 & (i = 0, j = 0) \\\text{isolated}[i][j] + dp[i-1][j] & (i \ne 0, j = 0) \\\text{isolated}[i][j] + dp[i][j-1] & (i = 0, j \ne 0) \\\text{isolated}[i][j] + \\~~\min (dp[i-1][j], dp[i][j-1]) & \text{(otherwise)}\end{array}\right.}\end{split}$ Here $\text{isolated}[i][j]$ is equal to 1 whenever there is an isolated token on vertex $(c_1 + i, r_1 + j)$ in $\mathcal {P} \setminus P$ and 0 otherwise.", "The value of interest is $dp[W][H]$ , which can be computed in $\mathcal {O}(WH)$ time.", "The proof of correctness of this algorithm follows from the next lemma.", "The statement of the lemma and the proof assume that $r_1 < r_2$ , $c_1 < c_2$ .", "The proof can be altered to work for the other three possible cases.", "Lemma 3 Given a valid path system $\mathcal {P}$ in a uniformly-weighted grid graph $G$ and a path $P \in \mathcal {P}$ with source vertex $(c_1, r_1)$ and target vertex $(c_2, r_2)$ such that $r_1 < r_2$ and $c_1 < c_2$ , $dp[i][j]$ is the smallest number of isolated tokens in the path system that excludes the path $P$ (i.e., $\mathcal {P} \setminus P$ ) on any shortest path between the source vertex and vertex $(c_1 + i, r_1 + j)$ .", "We use induction on the two indices $i$ and $j$ .", "For the path going from the source vertex $(c_1, r_1)$ to itself, there are no tokens in $\mathcal {P} \setminus P$ since the current path is excluded from the path system, so $dp[0][0] = 0$ (case I) is correct.", "Similarly, for the vertices on the same row and the same column as $(c_1, r_1)$ , there is a single distance-minimizing path to each of them, so $\text{isolated}[i][j] + dp[i-1][j]$ and $\text{isolated}[i][j] + dp[i][j - 1]$ , respectively, compute the numbers of isolated tokens on the paths to those vertices correctly (cases II and III).", "Now, assume that $dp[i][j]$ takes on the correct value for $0 \le i \le k$ , $0 \le j \le k^{\prime }$ (excluding the pair ($k, k^{\prime }$ )).",
"We would like to prove that $dp[k][k^{\\prime }]$ takes on the correct value.", "In any path from $(c_1, r_1)$ to $(c_1 + k, r_1 + k^{\\prime })$ , vertex $(c_1 + k, r_1 + k^{\\prime })$ can be reached on a distance-minimizing path from either vertex $(c_1 + k - 1, r_1 + k^{\\prime })$ or vertex $(c_1 + k, r_1 + k^{\\prime } - 1)$ .", "By the inductive hypothesis, we already know the smallest number of isolated tokens from $(c_1, r_1)$ to either one of those two vertices in $\\mathcal {P} \\setminus P$ ; the smallest number of isolated tokens from $(c_1, r_1)$ to $(c_1 + k, r_1 + k^{\\prime })$ will therefore be the minimum of those two values, to which we add 1 in case there is an isolated token on vertex $(c_1 + k, r_1 + k^{\\prime })$ (case IV).", "Once we obtain $dp[W][H]$ , if its value is smaller than the number of isolated tokens along the current path $P$ in the path system that excludes the current path $P$ , then $P$ has to be rerouted (since we can isolate more tokens by rerouting $P$ ); otherwise, $P$ is unchanged.", "Since path reconstruction is required, we need to store the decisions that were made by the dynamic programming procedure, and the easiest way to do that is by introducing a $(W + 1) \\times (H + 1)$ matrix, $prev$ , such that $prev[i][j]$ indicates whether $dp[i][j]$ 's value was obtained by reaching vertex $(c_1 + i, r_1 + j)$ from the bottom ($prev[i][j] = 1$ ) or from the left ($prev[i][j] = 0$ ).", "Using the $prev$ matrix, we can therefore reconstruct the rerouted path and substitute the initial path if needed.", "Lemma 4 Given a valid path system $\\mathcal {P}$ in an $n$ -vertex uniformly-weighted grid graph $G$ , we can, in time $\\mathcal {O}(n^3)$ , exhaustively run the distance-preserving rerouting heuristic.", "The procedure loops over all paths in the path system and attempts to reroute each path.", "If at least one path is rerouted, once all paths have been considered, the process is repeated.", "The process keeps getting repeated until the algorithm goes through all paths without changing any path.", "The algorithm terminates in $\\mathcal {O}(n^3)$ time, assuming at most $n$ paths in the path system, given that we can isolate at most $n$ tokens, that $\\mathcal {O}(WH) = \\mathcal {O}(n)$ , and that every time a token is isolated, the procedure repeats from scratch." 
], [ "Distance-increasing rerouting subroutine", "Following Sec.", "REF , we provide the details of the distance-increasing rerouting subroutine.", "This subroutine is a trade-off heuristic that seeks to trade an increase in displacement distance for a reduction in the number of displaced tokens.", "We present this subroutine only for unweighted (uniformly-weighted) grid graphs and we note that the distance-increasing rerouting subroutine could potentially be generalized to work on more general (positive) edge-weighted graphs.", "We say that a path in a grid is rectilinear if it is horizontal or vertical (assuming an embedding of the grid in the plane).", "Recall that we say that a token is isolated in a path system $\\mathcal {P}$ whenever there exists a single-vertex path $P = \\lbrace v\\rbrace \\in \\mathcal {P}$ such that the token $t$ is on the vertex $v$ and no other path in $\\mathcal {P}$ contains vertex $v$ .", "The purpose of the distance-increasing rerouting subroutine is to introduce a mechanism that allows us to increase token isolation, even if that comes at the cost of increasing displacement distance.", "We also want to make it possible to control how much leeway is given to this subroutine when it comes to deviating from the minimization of overall displacement distance.", "To do so, we introduce the concept of a margin, which we denote by $\\mu $ .", "The margin limits the range of the paths we consider.", "For a margin $\\mu $ , a source vertex $(c_1, r_1)$ , and a target vertex $(c_2, r_2)$ (assuming, without loss of generality, $r_1 < r_2$ and $c_1 < c_2$ ) defined in a path system in a uniformly-weighted grid graph $G$ , the rerouted path can now include any of the vertices that are within the subgrid bounded by the vertices $(max(c_1 - \\mu , 0), max(r_1 - \\mu , 0))$ (bottom left corner) and $(min(c_2 + \\mu , W_G - 1), min(r_2 + \\mu , H_G - 1))$ (top right corner), which we call the extended subgrid.", "As was the case for the analysis of the distance-preserving rerouting subroutine, for the rest of this section, we assume that we are working with a path system $\\mathcal {P}$ and a path $P$ with a source vertex $(c_1, r_1)$ and a target vertex $(c_2, r_2)$ such that $r_1 < r_2$ and $c_1 < c_2$ .", "The other three cases can be handled in a similar way.", "Evidently, as was the case for the distance-preserving rerouting subroutine, there is no point in attempting to enumerate all the paths here either, as their number is exponential in $H + W + \\mu $ .", "In fact, even for $\\mu = 0$ , the possible reroutings are a superset of the possible reroutings in distance-preserving rerouting, as we removed the restriction on maintaining a shortest path within the subgrid.", "Out of the reroutings of $P$ that minimize the number of isolated tokens in $\\mathcal {P} \\setminus P$ , we select the ones that have the shortest path length, and out of those, we select the ones that have the smallest number of changes in direction (horizontal vs. 
vertical).", "We arbitrarily select any one of the remaining paths.", "Again, we exploit dynamic programming to solve the problem.", "Just like in App.", ", we make use of the $\\text{isolated}$ matrix; however, in this case, we have to cover all vertices that are part of the extended subgrid, so the $\\text{isolated}$ matrix is of size $(W + 1 + 2\\mu ) \\times (H + 1 + 2\\mu )$ , and $\\text{isolated}[i][j]$ is equal to 1 whenever $(c_1 + i - \\mu , r_1 + j - \\mu )$ is in the grid and contains an isolated token, and 0 otherwise.", "We introduce a $[(W + 1 + 2\\mu ) \\times (H + 1 + 2\\mu )] \\times (W + 1 + 2\\mu ) \\times (H + 1 + 2\\mu )$ matrix, $dp$ , such that $dp[i][j][k]$ is the smallest number of isolated tokens on any path of length $i$ between $(c_1, r_1)$ and $(c_1 + j - \\mu , r_1 + k - \\mu )$ in $\\mathcal {P} \\setminus P$ .", "The $dp[i][j][k]$ values are computed as follows: $\\begin{split}& dp[i][j][k] = \\\\&\\begin{matrix}\\textsf {I} \\\\[0.5ex]\\textsf {II} \\\\[0.5ex]\\textsf {III} \\\\[0.5ex]\\textsf {IV} \\\\[0.5ex]\\textsf {V} \\\\[0.5ex]\\\\\\\\\\\\\\end{matrix}\\quad {\\left\\lbrace \\begin{array}{ll}0 ~~ (i = 0, j = \\mu , k = \\mu ) \\\\+\\infty ~~ ((c_1 + j - \\mu , r_1 + k - \\mu ) \\text{ not in grid}) \\\\+\\infty ~~ (i = 0, j \\ne \\mu \\text{ or } k \\ne m) \\\\+\\infty ~~ (dp[i-1] = +\\infty \\text{~for all neighbors}) \\\\\\text{isolated}[j][k] +\\\\~~ min(dp[i-1][j-1][k], dp[i-1][j+1][k],\\\\~~dp[i-1][j][k-1], \\\\ ~~dp[i-1][j][k+1])~(\\text{otherwise}) \\\\\\end{array}\\right.", "}\\end{split}$ By neighbors of a pair $(j, k)$ we are referring to the subset of the set $\\lbrace (c_1 + j - \\mu - 1, r_1 + k - \\mu ), (c_1 + j - \\mu , r_1 + k - \\mu - 1), (c_1 + j - \\mu + 1, r_1 + k - \\mu ), (c_1 + j - \\mu , r_1 + k - \\mu + 1)\\rbrace $ that corresponds to row and column coordinates of vertices that are within the grid graph.", "By convention, we set $dp[i][j][k]$ to $+\\infty $ when (1) the shortest distance between $(c_1, r_1)$ and $(c_1 + j - \\mu , r_1 + k - \\mu )$ is greater than $i$ (cases III and IV, which is equivalent to saying that the smallest number of isolated tokens of any path of length $i$ between those two vertices in $\\mathcal {P} \\setminus P$ is infinite, and this makes case V work), (2) when $(c_1 + j - \\mu , r_1 + k - \\mu )$ is not in the grid (case II), or (3) when at least one of $i$ , $j$ or $k$ is out of bounds (not mentioned in the definition of $dp[i][j][k]$ above; this makes case V less tedious to deal with).", "The value of interest is $min(dp[W + H][\\mu + W][\\mu + H], dp[W + H + 1][\\mu + W][\\mu + H], \\ldots , dp[(W + 1 + 2\\mu ) \\times (H + 1 + 2\\mu ) - 1][\\mu + W][\\mu + H])$ .", "That is, we consider the maximum number of tokens we managed to isolate for every path length greater than or equal to $W + H$ (which is the length of the shortest path between ($c_1, r_1$ ) and ($c_2, r_2$ )) and we pick the maximum across all path lengths.", "The proof of correctness of this procedure follows from Lemma REF .", "The statement and the proof assume that $r_1 < r_2$ , $c_1 < c_2$ .", "The proof can be altered to work for the other three possible cases.", "Lemma 5 Given a valid path system $\\mathcal {P}$ in a uniformly-weighted grid graph $G$ and a path $P \\in \\mathcal {P}$ with source vertex $(c_1, r_1)$ and target vertex $(c_2, r_2)$ such that $r_1 < r_2$ and $c_1 < c_2$ , $\\min \\limits _{i \\in \\mathbb {N}^{+}([0,(W+1+2\\mu )\\times (H+1+2\\mu )-1])}dp[i][j][k]$ is the smallest number of isolated 
tokens in $\\mathcal {P} \\setminus P$ on any path between the source vertex and vertex $(c_1 + j - \\mu , r_1 + k - \\mu )$ .", "We start by proving the correctness of the dynamic programming approach, as the correctness of the lemma statement follows directly from that.", "That is, we start by proving that $dp[i][j][k]$ , for pairs $(j, k)$ that correspond to vertices in the grid, is the smallest number of isolated tokens on any path of length $i$ between $(c_1, r_1)$ and $(c_1 + j - \\mu , r_1 + k - \\mu )$ in the path system that excludes the path $P$ , and is equal to $+\\infty $ otherwise.", "We will use induction on the first dimension only (i.e., the path length dimension/number of edges in the path).", "The only path of length 0 starting from $(c_1, r_1)$ reaches $(c_1, r_1)$ .", "Since the current path is excluded from the path system, there are no tokens from $(c_1, r_1)$ to itself, and therefore $dp[0][\\mu ][\\mu ] = 0$ (case I), as needed.", "No other vertex is reachable for $i = 0$ , so $dp[0][i][j]$ ($0 \\le i \\le W + 2\\mu $ , $0 \\le j \\le H + 2\\mu $ ) should equal $+\\infty $ , and case III ensures that $dp[0][i][j]$ takes on the correct value in the specified range.", "Now, assume that $dp[l][j][k]$ takes on the correct values ($0 \\le j \\le W + 2\\mu $ , $0 \\le k \\le H + 2\\mu $ ).", "We would like to prove that $dp[l + 1][j][k]$ takes on the correct values ($0 \\le j \\le W + 2\\mu $ , $0 \\le k \\le H + 2\\mu $ ).", "There are two cases to consider.", "For a fixed value of the pair $(j, k)$ corresponding to a vertex $(c_1 + j - \\mu , r_1 + k)$ in the grid, if none of $(c_1 + j - \\mu - 1, r_1 + k - \\mu )$ , $(c_1 + j - \\mu + 1, r_1 + k - \\mu )$ , $(c_1 + j - \\mu , r_1 + k - \\mu - 1)$ and $(c_1 + j - \\mu , r_1 + k - \\mu + 1)$ is reachable within $l$ displacements, then $(c_1 + j - \\mu , r_1 + k - \\mu )$ should not be reachable within $l + 1$ displacements.", "By the induction hypothesis, we would have $dp[l][j-1][k] = dp[l][j+1][k] = dp[l][j][k-1] = dp[l][j][k+1] = +\\infty $ , which sets the value of $dp[l+1][j][k]$ to $+\\infty $ as well, as needed (case IV).", "The second case is when at least one of the neighbors of $(c_1 + j - \\mu , r_1 + k - \\mu )$ is reachable in $l$ steps.", "By the inductive hypothesis, we have the smallest number of isolated tokens in $\\mathcal {P} \\setminus P$ from $(c_1, r_1)$ to the neighbors of $(c_1 + j - \\mu , r_1 + k - \\mu )$ reached in $l$ steps.", "The vertex $(c_1 + j - \\mu , r_1 + k - \\mu )$ can only be reached in $l + 1$ steps from any of its neighbors that were reached in $l$ steps, so the smallest number of isolated tokens on any path of length $l + 1$ from $(c_1, r_1)$ to $(c_1 + j - \\mu , r_1 + k - \\mu )$ in $\\mathcal {P} \\setminus P$ is equal to the minimum of the values obtained for the neighbors reached in $l$ steps, to which we add 1 in the case in which there is an isolated token on vertex $(c_1 + j - \\mu , r_1 + k - \\mu )$ , which is what the algorithm does (case V).", "We have proven that $dp[i][j][k]$ takes on the correct value, that is, $dp[i][j][k]$ is the smallest number of isolated tokens on any path of length $i$ between $(c_1, r_1)$ and $(c_1 + j - \\mu , r_1 + k - \\mu )$ in $\\mathcal {P} \\setminus P$ .", "It follows that the smallest number of isolated tokens on any path between $(c_1, r_1)$ and $(c_1 + j - \\mu , r_1 + k - \\mu )$ is $\\min \\limits _{i \\in \\mathbb {N}^{+}([0,(W+1+2\\mu )\\times (H+1+2\\mu )-1])}dp[i][j][k]$ .", "Just like the distance-preserving case, the 
distance-increasing case requires keeping track of paths, their lengths, as well as their number of changes in direction; $dp$ can be easily augmented to accommodate that.", "The procedure is run exhaustively, that is, we loop over all paths and attempt to reroute them, and if at least one path is rerouted, once all paths have been considered, the process is repeated.", "It remains to prove that the procedure terminates and runs in polynomial time.", "Lemma 6 Given a valid path system $\mathcal {P}$ in an $n$ -vertex uniformly-weighted grid graph $G$ and a margin $\mu \le n$ , we can, in time $\mathcal {O}(n^7)$ , exhaustively run the distance-increasing rerouting heuristic.", "Computing all the entries in the augmented $dp$ table for a single path can be achieved in time $\mathcal {O}(n^3)$ .", "Now, every time a path is rerouted, we have one of the following three consequences: (1) the isolation of one or more tokens and an indeterminate effect on the overall weight of the path system and the overall number of direction changes in the path system; (2) the decrease of the overall weight of the path system and an indeterminate effect on the overall number of direction changes in the path system (no increase in the number of isolated tokens); (3) the decrease of the overall number of direction changes in the path system (no increase in the overall weight of the path system or decrease in the number of isolated tokens).", "The number of tokens that can be isolated is linear in the number of vertices in the grid, whereas the overall weight of the path system as well as the overall number of direction changes in the path system are quadratic in the number of vertices in the grid.", "On the one hand, the weight of a path can be decreased at most $\mathcal {O}(n)$ times, ignoring the effect that token isolation may have on the weight of the path.", "On the other hand, the number of direction changes in a path can be decreased at most $\mathcal {O}(n)$ times, ignoring the effect that token isolation or a decrease in path weight may have on the number of direction changes.", "Now, since decreasing the weight of a path may affect the number of direction changes within it, in the worst case, every two reroutings that decrease the weight of a path $P$ (of which we have $\mathcal {O}(n)$ ) may be separated by $\mathcal {O}(n)$ reroutings of path $P$ , each of which decreases its number of direction changes.", "Therefore, ignoring token isolation, each path can be rerouted at most $\mathcal {O}(n^2)$ times, for a total of $\mathcal {O}(n^3)$ path reroutings.", "We now incorporate token isolation into our analysis, and we consider how it interacts with the other two consequences of distance-increasing rerouting.", "Since every token isolation may leave the weight of a single path as large as $\mathcal {O}(n)$ again, accounting for the $\mathcal {O}(n)$ token isolations and relying on the reasoning from the previous paragraph means that we may have $\mathcal {O}(n^3)$ extra path reroutings spread over the paths, in addition to the $\mathcal {O}(n^3)$ path reroutings we have already accounted for earlier.", "This implies that the total number of possible reroutings is bounded by $\mathcal {O}(n^3)$ .", "It remains to show how fast we can accomplish each rerouting.", "Recall that computing all the entries in the augmented $dp$ table for a single path can be achieved in time $\mathcal {O}(n^3)$ .", "Moreover, the algorithm reiterates through the paths of the path system from scratch every time a rerouting takes
place, meaning that a single rerouting is completed in time $\mathcal {O}(n^4)$ .", "Given that the total number of possible reroutings is $\mathcal {O}(n^3)$ , we can exhaustively run the distance-increasing rerouting heuristic in time $\mathcal {O}(n^7)$ ." ], [ "Ordering subroutine", "Following Sec.", "REF , the ordering subroutine seeks to order the path system and convert it into a (valid) ordered path system.", "It is the central module of the aro algorithm and consists of finding the order in which to execute the moves so that each token is displaced at most once (Fig.", "REF a)." ], [ "Step 1 – Merge path system", "The first step of the aro subroutine is to compute a merged path system (MPS).", "A merged path system is a path system such that no pair of paths within the system intersects more than once, where an intersection between two paths is a nonempty, maximal sequence of vertices that appear in each path's vertex sequence representation contiguously either in the same order or in reverse order.", "Lemma 7 (Merged path system) Given an $n$ -vertex (positive) edge-weighted graph $G$ with $|E(G)| = m$ and $\sum _{e \in E(G)}{w(e)} = \mathcal {O}(n^{c})$ for some positive integer $c$ , two sets $S,T \subseteq V(G)$ , and a $T$ -valid path system $\mathcal {P}$ , we can compute, in time $\mathcal {O}(n^{c+4}m^2)$ , a valid merged path system $\mathcal {P^{\prime }}$ such that $w(\mathcal {P^{\prime }}) \le w(\mathcal {P})$ .", "Moreover, the number of distinct edges used in $\mathcal {P^{\prime }}$ is at most the number of distinct edges used in $\mathcal {P}$ .", "For any pair of paths $P_i$ , $P_j$ that intersect, let $v_{i,f}$ and $v_{i,l}$ be the first and last vertices of $P_i$ that are also in $P_j$ .", "Similarly, let $v_{j,f}$ and $v_{j,l}$ be the first and last vertices of $P_j$ that are also in $P_i$ .", "We let $P^i_{f,l}$ denote the subpath of $P_i$ that starts at $v_{i,f}$ and ends at $v_{i,l}$ .", "We let $P^j_{f,l}$ denote the subpath of $P_j$ that starts at $v_{j,f}$ and ends at $v_{j,l}$ .", "An edge in a pair of intersecting paths $P_i, P_j$ is said to be exclusive if it belongs to either $P^i_{f,l}$ or $P^j_{f,l}$ but not both.", "With the above in mind, we describe the merging process.", "While there exists an edge in the path system that is exclusive in some pair of intersecting paths, we look for the edge that is exclusive in the smallest number of intersecting path pairs (this number is called the exclusivity frequency).", "If such an edge does not exist, this implies that the path system is merged.", "Once we have selected an edge, call it $e$ , we pick an arbitrary pair of intersecting paths where the edge $e$ is exclusive, and we proceed to merge this pair.", "Let the selected pair be $P_i$ and $P_j$ .", "We attempt to reroute $P^i_{f,l}$ through $P^j_{f,l}$ , and $P^j_{f,l}$ through $P^i_{f,l}$ .", "If one of the reroutings decreases the weight of the path system, the rerouting is preserved.", "Otherwise, we make both paths go through whichever of $P^i_{f,l}$ or $P^j_{f,l}$ maximizes token isolation.", "If rerouting through either of those subpaths isolates the same number of tokens, we reroute both paths through the subpath that does not contain the edge $e$ we selected initially.", "By definition of a merged path system, termination implies correctness.", "Therefore, it remains to show that the algorithm terminates.", "Merging two paths has one of three consequences: (1) the decrease of the overall weight of the path system, a possible
increase in the number of isolated tokens in the path system (because path merging may decrease the number of distinct edges used in the path system), and an indeterminate effect on the exclusivity frequencies of edges in unmerged path pairs; (2) the increase of the overall number of isolated tokens in the path system and an indeterminate effect on the exclusivity frequencies of edges in unmerged path pairs (no increase in the overall weight of the path system); (3) the decrease of the exclusivity frequency of the edge that is exclusive in the smallest number of unmerged path pairs and an indeterminate effect on the exclusivity frequencies of edges in other unmerged path pairs (no increase in the overall weight of the path system or decrease in the number of isolated tokens).", "The first two consequences occur polynomially many times; we can isolate at most $\mathcal {O}(n)$ tokens, and since we assume that $\sum _{e \in E(G)}{w(e)} = \mathcal {O}(n^{c})$ , the first consequence can occur at most $\mathcal {O}(n^{c + 1})$ times (since $\sum _{P \in \mathcal {P}}{w(P)} = \mathcal {O}(n^{c + 1})$ ).", "We need to take into account the interaction between the first two consequences and the third consequence.", "The third consequence can occur $\mathcal {O}(nm)$ times in a row (this is a claim we prove in the next paragraph), as each edge can belong to all paths, of which we have $\mathcal {O}(n)$ .", "Interleaving the third consequence with the first two consequences, both of which have an indeterminate effect on exclusivity frequencies, leads to a total of $\mathcal {O}(nm \cdot (n + n^{c + 1})) = \mathcal {O}(n^{c+2}m)$ path pair merges.", "Each path pair merge is executed in time $\mathcal {O}(m)$ and is preceded by a lookup for the edge that is exclusive in the smallest number of intersecting path pairs.", "This lookup is done in time $\mathcal {O}(n^2m)$ , as it requires checking every path pair and looping over the paths in each path pair.", "The overall running time of the path merging procedure is therefore $\mathcal {O}(n^{c+2}m \cdot (m + n^2m)) = \mathcal {O}(n^{c+4}m^2)$ .", "We still have to show that, once an edge is no longer exclusive in any unmerged path pair, its exclusivity frequency can no longer increase as a result of the third consequence.", "If some merge increases the exclusivity frequency of the edge in question, since merging involves reusing edges that are already part of the path system, this implies that the edge already occurred exclusively in some unmerged path pair involving the path that the merging rerouted through.", "This contradicts the fact that the edge is no longer exclusive in any unmerged path pair, as needed."
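As a rough illustration of a single merge step from the proof above, the sketch below reroutes the section of one of two intersecting paths between their first and last shared vertices through the corresponding section of the other path. It is a simplification under stated assumptions: paths are plain vertex lists in a uniformly-weighted graph, only the "do not increase the weight" rule is applied, and the token-isolation and exclusive-edge tie-breaks are omitted; all identifiers are hypothetical stand-ins rather than the authors' code.

```python
# Simplified sketch of merging one pair of intersecting paths (vertex lists).
# Weight is approximated by the number of edges, i.e., len(path) - 1.
def merge_pair(path_i, path_j):
    in_j = set(path_j)
    shared = [k for k, v in enumerate(path_i) if v in in_j]
    if len(shared) < 2:
        return path_i, path_j            # at most one shared vertex: nothing to merge
    f_i, l_i = shared[0], shared[-1]
    v_f, v_l = path_i[f_i], path_i[l_i]  # v_{i,f} and v_{i,l} from the proof

    f_j, l_j = path_j.index(v_f), path_j.index(v_l)
    lo, hi = min(f_j, l_j), max(f_j, l_j)
    mid_j = path_j[lo:hi + 1]            # P^j_{f,l}
    if f_j > l_j:                        # orient it from v_f to v_l
        mid_j = mid_j[::-1]
    mid_i = path_i[f_i:l_i + 1]          # P^i_{f,l}

    # Reroute P_i's section through P_j's section if that does not add weight;
    # afterwards the two paths intersect in a single maximal section.
    cand_i = path_i[:f_i] + mid_j + path_i[l_i + 1:]
    if len(cand_i) <= len(path_i):
        return cand_i, path_j
    # Otherwise P_i's section is the shorter one: reroute P_j through it.
    mid = mid_i if f_j <= l_j else mid_i[::-1]
    cand_j = path_j[:lo] + mid + path_j[hi + 1:]
    return path_i, cand_j
```

A full implementation would also maintain the exclusivity frequencies, pick the edge that is exclusive in the fewest intersecting path pairs, and apply the isolation-based tie-breaking before falling back to the subpath that avoids the selected edge.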
], [ "Step 2 – Unwrap path system", "The second step of the aro subroutine is to compute an unwrapped path system (UPS).", "An unwrapped path system is an MPS such that no path within it contains another path.", "A path $P_i$ is said to contain a path $P_j$ if the intersection between $P_i$ and $P_j$ is $P_j$ .", "Lemma 8 (Unwrapped path system) Given an $n$ -vertex (positive) edge-weighted graph $G$ with $|E(G)| = m$ , two sets $S,T \\subseteq V(G)$ , and a $T$ -valid merged path system $\\mathcal {P}$ , we can compute, in time $\\mathcal {O}(n^2m)$ , a valid unwrapped path system $\\mathcal {P^{\\prime }}$ such that $w(\\mathcal {P^{\\prime }}) \\le w(\\mathcal {P})$ .", "Moreover, the number of distinct edges used in $\\mathcal {P^{\\prime }}$ is at most the number of distinct edges used in $\\mathcal {P}$ .", "We go through the paths in an arbitrary order, and for every path $P$ , we unwrap all the paths that it wraps.", "To do so, we go through the path system and we detect all paths whose source vertex and target vertex are both contained within the selected path $P$ .", "Let $l$ be the number of such paths.", "We then separately sort the $l$ source vertices and the $l$ target vertices by their order of appearance within the selected path $P$ .", "The final step assigns source $i$ to target $i$ in the ordering, and it does so for all $i$ in $\\mathbb {N}^{+}([1,l])$ ; the path with source $i$ as source vertex is then rerouted to target vertex $i$ via the selected path $P$ .", "For a selected path $P$ , this process destroys all wrappings within it.", "Assume it does not, that is, assume the assignment of sources to targets in order of appearance in $P$ fails to unwrap a wrapping.", "Without loss of generality, we will assume that $P_g$ wraps $P_h$ , and we will work with the assumption that $g$ and $h$ designate the indices of the source and target vertices of $P_g$ and $P_h$ respectively after sorting within $P$ , rather than just being arbitrary indices for the vertices and the tangled paths.", "The concerned vertices $v_{s_g}, v_{s_h}, v_{t_g}, v_{t_h}$ , must have appeared in one of four orders.", "Two of those four orders will be discussed below, as the analysis for the other two is symmetrical.", "If the sequence of vertices in the initial selected path takes on the form $\\ldots , v_{s_g}, \\ldots , v_{s_h}, \\ldots , v_{t_h}, \\ldots , v_{t_g}, \\ldots $ , we have a contradiction; since $v_{s_g}$ appears before $v_{s_h}$ in $P$ , $g < h$ .", "Likewise, since $v_{t_h}$ appears before $v_{t_g}$ in $P$ , $h < g$ .", "The same is true if we have the form $\\ldots , v_{s_g}, \\ldots , v_{t_h}, \\ldots , v_{s_h}, \\ldots , v_{t_g}, \\ldots $ .", "The last two forms, which are identical to the two forms we presented but with the positions of $v_{s_g}$ and $v_{t_g}$ switched, yield the same contradiction.", "We still have to show that path unwrapping does not “unmerge” a path system, and that there are no wrappings left when the algorithm terminates.", "We start by showing that applying path unwrapping to a merged path system does not undo merging.", "It is sufficient to show that unwrapping paths within an arbitrary path in a merged path system does not give rise to a pair of paths that have more than one intersection.", "When unwrapping paths within a path, some paths are shortened and some paths are extended.", "Shortening a path does not create an additional intersection between it and any other path, so we only have to worry about the paths that get extended as a result of the 
unwrapping.", "If the extension of some path $P_x$ makes it intersect some other path $P_y$ more than once, the selected arbitrary path $P_z$ which initially wrapped the now-extended path $P^{\\prime }_x$ intersects $P_y$ more than once, because $P^{\\prime }_x$ is a subpath of $P_z$ , which contradicts the assumption that we started with a merged path system.", "Finally, we show that no unwrapped path remains after path unwrapping terminates.", "Assume that path $P_i$ remains wrapped in path $P_j$ after termination.", "We know that, in its execution, the algorithm should have looped over $P_j$ and all the paths that contain it.", "We proved that unwrapping paths within any of the paths that contain $P_j$ (including $P_j$ itself) would eliminate the wrapping of $P_i$ within $P_j$ .", "Since the wrapping persisted, it has to be the case that it was caused by the unwrapping of paths within another path that does not contain $P_j$ .", "This is not possible, as the only paths that modify $P_i$ and $P_j$ via unwrapping are paths that contain them.", "The algorithm unwraps paths within every path; unwrapping paths within a single path can be executed in time $\\mathcal {O}(nm)$ .", "Finding the wrapped paths can be accomplished in time $\\mathcal {O}(m)$ , whereas reconstructing the wrapped paths is done in time $\\mathcal {O}(nm)$ , which is equal to the sum of their lengths, and since there are $\\mathcal {O}(n)$ paths in total, the running time of the path unwrapping procedure is $\\mathcal {O}(n^2m)$ ." ], [ "Step 3 – Detect and break cycles", "The third step of the aro algorithm detects and breaks cycles in a path system to compute a cycle-free path system (CPS), which is a UPS such that the graph induced on its vertices is a forest.", "We use $G[\\mathcal {P}]$ to denote the graph induced by (the vertices of) a path system $\\mathcal {P}$ , which we also call the path system graph.", "A cycle is either represented by a sequence of vertices $\\langle v_1, v_2, \\ldots , v_k \\rangle $ or a sequence of edges $\\langle e_1, e_2, \\ldots , e_k \\rangle $ .", "Given a path system $\\mathcal {P} = \\lbrace P_1, \\ldots , P_{k}\\rbrace $ that induces a cycle characterized by edge set $\\mathcal {E}$ , we define an edge coloring of the cycle as a function $col : \\mathcal {E} \\mapsto \\lbrace 1, \\ldots , k\\rbrace $ , where color $i$ is associated with $P_i$ .", "If $col(e_i) = j$ , we say that the edge $e_i$ is $j$ -colored, and $e_i$ is $j$ -colorable if and only if it is on the path $P_j$ .", "We say that a path is $j$ -colorable if all its edges are $j$ -colorable.", "Note that even though edges can appear in more than one path (which implies that a cycle can have multiple colorings), we are interested in a special type of edge coloring, the purpose of which will become clearer later.", "A cycle is contiguously colored if any two edges $e_1$ , $e_2$ that have the same color $j$ are separated by a sequence of edges along the cycle that are $j$ -colored.", "A cycle that is not contiguously colored is discontiguously colored.", "A cycle is contiguously colorable if there exists a coloring of its edges that makes it contiguously colored.", "A color in a cycle is discontiguous if there exist two non-consecutive edges $e$ and $e^{\\prime }$ in the cycle such that $col(e) = col(e^{\\prime }) = j$ and neither of the two subpaths that connect them along the cycle are $j$ -colored.", "Theorem 2 Given an $n$ -vertex (positive) edge-weighted graph $G$ with $|E(G)| = m$ and $\\sum _{e \\in E(G)}{w(e)} = 
\mathcal {O}(n^{c})$ for some positive integer $c$ , two sets $S,T \subseteq V(G)$ , and a $T$ -valid unwrapped path system $\mathcal {P}$ , we can compute, in time $\mathcal {O}(n^{c+6}m)$ , a valid cycle-free path system $\mathcal {P^{\prime }}$ such that $w(\mathcal {P^{\prime }}) \le w(\mathcal {P})$ .", "Moreover, the number of distinct edges used in $\mathcal {P^{\prime }}$ is at most the number of distinct edges used in $\mathcal {P}$ .", "The proof of the theorem is quite involved, so we break it into several parts.", "We first provide a proof of the existence of a special cycle (defined later) passing through an arbitrary edge $e$ whenever a cycle in the path system graph passing through $e$ is found.", "We then describe the procedure to find a special cycle passing through $e$ .", "We further describe our approach to break the special cycle, and finally prove termination of the cycle-breaking procedure (Lemma REF ).", "While our results are applicable for arbitrary edges, we apply the algorithms we derive from the lemmas on a specific edge $e^\star $ ; the careful selection of the edge $e^\star $ is what guarantees termination.", "Given a path system, we denote the frequency of an edge as the number of paths containing it.", "Cycle detection consists of finding whether there is a cycle in $G[\mathcal {P}]$ .", "Finding a cycle can be achieved using any graph traversal algorithm; either a breadth-first search (BFS) or a depth-first search (DFS) is sufficient.", "We wish to obtain additional information; if a cycle is found, we look for the edge $e^\star $ , which is the edge with the smallest frequency among all edges contained in cycles.", "To find the edge of interest, i.e., $e^\star $ , we sort the edges with non-zero frequency in non-decreasing order of frequency, and then, in this ordering, we look for the earliest edge that is part of a cycle.", "For cycle detection, let $u$ and $w$ be the endpoints of an arbitrary edge $e$ .", "Then, $e$ is part of a cycle in $G[\mathcal {P}]$ if and only if $w$ is reachable from $u$ in $G[\mathcal {P}] - e$ , where $G[\mathcal {P}] - e$ denotes the graph obtained from $G[\mathcal {P}]$ after deleting the edge $e$ .", "After the edge $e^\star $ is found, we look for paths that induce a special cycle, which is a cycle that contains $e^\star $ and has particular properties.", "Before describing the procedure that allows us to identify the desired set of paths, we provide a few additional definitions relevant for the discussion.", "We define an $e$ -path as a path that contains the edge $e$ .", "The results from Lemmas REF , REF , and REF show that if there is a cycle in the path system graph that passes through an arbitrary edge $e$ , then there must exist a special cycle that passes through edge $e$ .", "A special cycle that passes through edge $e$ is a cycle that (i) is contiguously colorable and (ii) is induced by a set of paths that is inclusion-minimal, contains at most two $e$ -paths, and induces a single cycle, i.e., it induces no cycle other than the special cycle itself." ], [ "Proof of the existence of a special cycle going through an edge $e$ in a graph with a cycle going through edge $e$", "Lemma 9 If there is a cycle $C$ in $G[\mathcal {P}]$ that passes through an arbitrary edge $e$ then there is a contiguously colorable cycle $C^{\prime }$ that passes through edge $e$ .", "Consider any cycle $C$ that passes through an arbitrary edge $e$ .", "If the cycle is contiguously
colored, there is nothing to prove.", "Otherwise, we describe how this cycle can be transformed into a contiguously colored cycle that includes $e$ .", "We call a $j$ -colored segment a maximal contiguous sequence of $j$ -colored edges in the edge representation of the cycle.", "We work with the assumption that the edges that separate $j$ -colored segments on the cycle and that are not $j$ -colored themselves are not all $j$ -colorable, otherwise we can reduce the number of segments in the cycle.", "We first cover notation we will be recurrently using in this proof.", "For any two vertices $u, v \\in V(P_k)$ , where $P_k$ is an arbitrary path, we denote the subpath of $P_k$ going from $u$ to $v$ by $P_{k, u \\rightarrow v}$ .", "For any cycle $C_k$ , any edge $\\eta \\in E(C_k)$ and any two vertices $u, v \\in V(C_k)$ , We use $P_{C_k, u \\rightarrow \\eta \\rightarrow v}$ (resp.", "$P_{C_k,u \\rightarrow v}$ ) to refer to the subpath between $u$ and $v$ along cycle $C_k$ that goes through (resp.", "does not go through) $\\eta $ .", "We describe a procedure that reduces the number of segments in the cycle.", "Let $i$ be the color of the edge $e = \\lbrace u, w\\rbrace $ .", "We first handle making all other colors contiguous.", "Peripheral color merge.", "Peripheral color merging merges multiple $j$ -colored segments into a single $j$ -colored segment, where $j \\ne i$ .", "Consider the vertex representation $v_1, v_2, \\ldots , v_k$ of the cycle (where $v_1 = u$ and $v_k = w$ are the endpoints of $e$ ).", "Let $v_{j_p}$ and $v_{j_q}$ be the earliest and the latest vertices in the vertex representation of the cycle belonging to path $P_j$ ($v_{j_p}$ and $v_{j_q}$ may be $v_1$ and $v_k$ , respectively).", "Clearly, $P_{j, v_{j_p} \\rightarrow v_{j_q}}$ is $j$ -colorable.", "We consider two cases: $P_{j, v_{j_p} \\rightarrow v_{j_q}}$ does not contain $e$ : $P_{j, v_{j_p} \\rightarrow v_{j_q}}$ can replace $P_{C, v_{j_p} \\rightarrow v_{j_q}}$ in the vertex sequence of the cycle, and we set the color of all the edges in $P_{j, v_{j_p} \\rightarrow v_{j_q}}$ to $j$ .", "It should be easy to see that $P_{j, v_{j_p} \\rightarrow v_{j_q}}$ and $P_{C, v_{j_p} \\rightarrow e \\rightarrow v_{j_q}}$ are internally vertex-disjoint, so the new vertex sequence is that of a cycle containing the edge $e$ .", "The original cycle was therefore transformed into a new cycle $C^{\\prime }$ where color $j$ is contiguous, as ensured by the definition of $v_{j_p}$ and $v_{j_q}$ .", "This implies that the cycle transformation reduced the number of segments by at least 1, since there were at least two $j$ -colored segments in $C$ .", "$P_{j, v_{j_p} \\rightarrow v_{j_q}}$ contains $e$ : We look at the vertices in $V(C) \\cap V(P_j)$ , and we partition them into two sets; the set of vertices $V_{j_b}$ that are before $e$ in the vertex representation of $P_j$ , and the set of vertices $V_{j_a}$ that are after $e$ in the vertex representation of $P_j$ .", "It is evident that $V_{j_b} \\cap V_{j_a} = \\emptyset $ and that $|V_{j_b}| \\ne 0$ , $|V_{j_a}| \\ne 0$ , because both $v_{j_p}$ and $v_{j_q}$ belong to one of the two sets, and they belong to different sets.", "Next, we search for two vertices $v_{j_b}$ and $v_{j_a}$ in the vertex representation of the cycle, such that $v_{j_b} \\in V_{j_b}$ , $v_{j_a} \\in V_{j_a}$ , and none of the internal vertices of $P_{C, v_{j_b} \\rightarrow v_{j_a}}$ are in $V_{j_a}$ or $V_{j_b}$ .", "There exist two paths between $v_{j_b}$ and $v_{j_a}$ ; $P_{j, v_{j_b} \\rightarrow 
v_{j_a}}$ and $P_{C, v_{j_b} \\rightarrow v_{j_a}}$ , and those two paths are internally vertex-disjoint because $P_{C, v_{j_b} \\rightarrow v_{j_a}}$ sharing a vertex with $P_{j, v_{j_b} \\rightarrow v_{j_a}}$ that is not $v_{j_b}$ and $v_{j_a}$ would contradict our choice of $v_{j_b}$ and $v_{j_a}$ .", "Those two paths will make up our transformed cycle $C^{\\prime }$ .", "The former of the two paths will be $j$ -colored (meaning that we change the color of $e$ as well), whereas the coloring of the latter of the two paths is unchanged.", "Color $j$ is contiguous in $C^{\\prime }$ , given how we defined $v_{j_b}$ and $v_{j_a}$ , meaning that we have reduced the number of segments by at least 1.", "Notice that this procedure modifies $col(e)$ .", "We exhaustively apply the procedure above, which clearly terminates, since the number of segments is reduced after every step, and we end up with a cycle, which we call $C$ again, where the only discontiguous color, if any, is the color of the edge $e$ , which we call $i$ again.", "We now describe how color $i$ can be made contiguous.", "Figure: Merging the edge color for the case where exactly one of V i b ∖V(s e )V_{i_b} \\setminus V(s_e) and V i a ∖V(s e )V_{i_a} \\setminus V(s_e) is empty.", "(a-b) Starting from cycle CC where the edge color (red) is discontiguous, we construct a cycle C ' C^{\\prime } (blue line) in which the color of edge ee (red) is contiguous via path P i P_i (red solid line on the cycle to highlight the segments, red dashed line otherwise).Edge color merge.", "Edge color merging merges multiple $i$ -colored segments into a single $i$ -colored segment, where $i$ is the color of the edge $e$ .", "In the vertex representation of the cycle, we are interested in segments that are $i$ -colored.", "If there is only one $i$ -colored segment, the cycle is contiguously colored and we are done.", "Otherwise, we have two or more $i$ -colored segments.", "We can assume that the $i$ -colored segments are maximal, that is, none of the $i$ -colored segments can be extended to include more edges.", "We let $s_e$ be the segment containing edge $e$ and we let $V(s_e)$ denote the vertices of $s_e$ .", "We define $V_{i_b}$ , $V_{i_a}$ , $v_{i_b}$ and $v_{i_a}$ analogously to $V_{j_b}$ , $V_{j_a}$ , $v_{j_b}$ and $v_{j_a}$ respectively.", "Since we are working with the assumption that there are two or more $i$ -colored segments, it must be the case that at least one of $V_{i_b} \\setminus V(s_e)$ or $V_{i_a} \\setminus V(s_e)$ is nonempty, otherwise, $(V_{i_b} \\cup V_{i_a}) \\setminus V(s_e) = \\emptyset $ , i.e., $V(P_i) \\cap V(C) = V(s_e)$ , which would imply that we have a single $i$ -colored segment in the cycle.", "The remaining two cases are considered below: $V_{i_b} \\setminus V(s_e)$ and $V_{i_a} \\setminus V(s_e)$ are both nonempty: We apply the algorithm described in the second case of peripheral color merging in order to merge color $i$ (i.e., $P_i$ plays the role of $P_j$ in that case).", "What makes applying said algorithm possible here is the fact that both $V_{i_b} \\setminus V(s_e)$ and $V_{i_a} \\setminus V(s_e)$ are nonempty, which means that $v_{i_b}$ and $v_{i_a}$ both exist, and the construction of the cycle follows from that.", "The resulting cycle $C^{\\prime }$ is a cycle where color $i$ is contiguous, so we are done.", "Exactly one of $V_{i_b} \\setminus V(s_e)$ or $V_{i_a} \\setminus V(s_e)$ is empty: We look at $v^{\\prime }_{i_p}$ and $v^{\\prime }_{i_q}$ , which are the earliest and the latest vertex, 
respectively, in the vertex representation of the cycle belonging to path $P_i$ but not segment $s_e$ .", "We either have $v^{\\prime }_{i_p}, v^{\\prime }_{i_q} \\in V_{i_b} \\setminus V(s_e)$ , or $v^{\\prime }_{i_p}, v^{\\prime }_{i_q} \\in V_{i_a} \\setminus V(s_e)$ .", "In the vertex representation of $P_i$ , either $u$ comes before $w$ , or $w$ comes before $u$ .", "We have four different subcases we split into two different groups: $v^{\\prime }_{i_p}, v^{\\prime }_{i_q} \\in V_{i_b} \\setminus V(s_e)$ and $w$ comes before $u$ in the vertex representation of $P_i$ , or $v^{\\prime }_{i_p}, v^{\\prime }_{i_q} \\in V_{i_a} \\setminus V(s_e)$ and $w$ comes after $u$ in the vertex representation of $P_i$ (Fig.", "REF a): There exist two vertex-disjoint paths between $w$ and $v^{\\prime }_{i_p}$ ; $P_{i, w \\rightarrow v^{\\prime }_{i_p}}$ and the path $w, u,\\ldots ,v^{\\prime }_{i_p}$ along the cycle going through $e$ .", "Those two paths will make up our transformed cycle $C^{\\prime }$ .", "The former of the two paths will be $i$ -colored, whereas the coloring of the latter of the two paths is unchanged.", "Color $i$ is contiguous in $C^{\\prime }$ , and no color was made discontiguous, as needed.", "$v^{\\prime }_{i_p}, v^{\\prime }_{i_q} \\in V_{i_b} \\setminus V(s_e)$ and $u$ comes before $w$ in the vertex representation of $P_i$ , or $v^{\\prime }_{i_p}, v^{\\prime }_{i_q} \\in V_{i_a} \\setminus V(s_e)$ and $u$ comes after $w$ in the vertex representation of $P_i$ (Fig.", "REF b): There exist two vertex-disjoint paths between $u$ and $v^{\\prime }_{i_q}$ ; $P_{i, u \\rightarrow v^{\\prime }_{i_q}}$ and the path $u, w,\\ldots ,v^{\\prime }_{i_q}$ along the cycle (in reverse order) going through $e$ .", "Those two paths will make up our transformed cycle $C^{\\prime }$ .", "The former of the two paths will be $i$ -colored, whereas the coloring of the latter of the two paths is unchanged.", "Color $i$ is contiguous in $C^{\\prime }$ , and no color was made discontiguous, so we are done.", "This completes the proof.", "Lemma 10 Let $C$ be a contiguously colored cycle in $G[\\mathcal {P}]$ that passes through an arbitrary edge $e$ and let $\\lbrace P_1, \\ldots , P_k\\rbrace $ be an inclusion-minimal set of paths from $\\mathcal {P}$ that contains all edges of $C$ .", "If there exist more than two $e$ -paths in $\\lbrace P_1, \\ldots , P_k\\rbrace $ then there exists a contiguously colored cycle $C^{\\prime }$ in $G[\\mathcal {P}]$ that passes through edge $e$ such that an inclusion-minimal set of paths $\\lbrace P^{\\prime }_1, \\ldots , P^{\\prime }_k\\rbrace $ from $\\mathcal {P}$ that contains all edges of $C^{\\prime }$ has at most two $e$ -paths.", "We can assume that the edges of the same color are contiguous along the cycle we are starting with, by Lemma REF .", "We show how to transform this cycle into a cycle that uses two or fewer $e$ -paths.", "If the cycle already has this property, there is nothing to prove.", "Otherwise, the cycle uses three or more $e$ -paths and we describe how this cycle can be transformed.", "Assume we are dealing with a total of $\\ell $ different $e$ -paths in the cycle, having colors 1 to $\\ell $ , with color 1 being the color of the edge $e$ .", "We now describe a process that yields a cycle with the desired properties.", "We loop through the $\\ell $ possible colors for the edge $e$ in order, starting from color 2, and, for every color, we change the color of $e$ accordingly and we merge the edge $e$ with the other segment of the same 
color.", "The first color change may cause two discontiguities; one discontiguity in color 1 if $e$ is not at the extremity of the 1-colored segment, and one discontiguity in color 2 if the 2-colored segment does not share an endpoint of $e$ with the 1-colored segment.", "If the color change makes color 2 discontiguous, we fix the discontiguity in color 2 using the construction from case 2 of the edge color merge subroutine from Lemma REF , which can be applied here, despite the fact that its precondition may not be satisfied, because the vertices in the 2-colored segment that does not contain $e$ are either all in $V_{2_b}$ or $V_{2_a}$ , so the other end of the path whose vertices are not part of a 2-colored segment can be safely ignored.", "This construction restores color 2's contiguity, and eliminates the discontiguity in color 1 (if any) by either discarding or recoloring one of the two 1-colored segments.", "Note that this process may eliminate some colors in the cycle; if an $e$ -path color is eliminated, it is skipped in the color looping for the edge $e$ .", "Next, we color the edge $e$ using color 3.", "Note that this does not cause a discontiguity in color 2 because the edge $e$ is at the extremity of color 2's segment, owing to the construction from the edge color merging subroutine; we merge color 3, we color the edge $e$ using color 4, and so on.", "Any coloring and merging can eliminate a color, but can never cause a discontiguity in any of the other colors.", "We then loop back to color 1.", "At this point, there are no color discontiguities in the cycle, and no color change applied on the edge $e$ can cause color discontiguities.", "We claim that the cycle resulting from this process conforms to the property we are looking for.", "That is, it uses two or fewer $e$ -paths.", "Assume it does not, that is, assume that it uses three or more $e$ -paths.", "If any of those $e$ -paths is not incident to the edge $e$ , the color of the edge $e$ can be changed to cause a discontiguity, which contradicts what we said earlier about changes in the color of $e$ involving an $e$ -path color not being able to cause discontiguities.", "Otherwise, the only case where we have three or more contiguous segments incident to $e$ is when $e$ and the two edges that are adjacent to it along the cycle have three different colors.", "However, in that case, $e$ can be colored using one of the other two colors and the path whose color was previously used to color $e$ can be discarded, contradicting minimality.", "Lemma 11 If a set of paths $\\mathcal {P^{\\prime }} = \\lbrace P_1, P_2, \\ldots , P_k\\rbrace \\subseteq \\mathcal {P}$ ($k \\ge 3$ ) induces a cycle and is inclusion-minimal, then it induces no other cycle.", "Hence, if there exists a cycle $C$ containing some edge $e$ , then there exists an inclusion-minimal set of paths that induces a unique cycle containing $e$ .", "Assume the set of paths $\\lbrace P_1, \\ldots , P_{k}\\rbrace $ induces multiple cycles and is (inclusion-)minimal.", "These cycles need not be contiguously colored.", "By Lemma REF , at least one of the induced cycles can be assumed to be contiguously colored.", "We first introduce a definition we will require later on in the proof.", "Two colors are said to be adjacent if their respective segments share at least one vertex.", "Clearly, if two colors are adjacent, their corresponding paths intersect.", "Now, consider the ordering of the colors in the induced cycles.", "Two color orderings are different if they are not a 
cyclical permutation (possibly reversed) of each other.", "The first two cases we look at assume the existence of two cycles with different color orderings, and elaborate how this assumption makes it possible to construct a color ordering corresponding to that of a cycle induced by fewer than $k$ paths, yielding a contradiction.", "If there exist two cycles that have different color orderings, there exist two cycles $C_r$ and $C_s$ that have different color orderings such that one of the two cycles (without loss of generality, $C_r$ ) is contiguously colored (this is a consequence of Lemma REF ).", "We have all the information we need to look at the first case.", "If some color $i$ is adjacent to at least 3 distinct colors (call them $j$ , $l$ , and $p$ ) across the two color orderings, we are done; one of the colors $j$ , $l$ , and $p$ will not be adjacent to color $i$ in the color ordering of $C_r$ .", "Without loss of generality, assume that color $j$ is not adjacent to color $i$ in the color ordering of $C_r$ .", "Starting from the color ordering of $C_r$ , we can construct a sequence of colors that omits the colors between $i$ and $j$ in the color ordering of $C_r$ because paths $P_i$ and $P_j$ intersect (given that colors $i$ and $j$ are adjacent).", "The constructed sequence of colors corresponds to a set of paths with cardinality strictly less than $k$ that induces a cycle, thus contradicting the fact that the set of paths $\\lbrace P_1, \\ldots , P_{k}\\rbrace $ was assumed to be inclusion-minimal.", "We now handle the second case.", "If no color $i$ is adjacent to at least 3 distinct colors across the two cycles, the color ordering of $C_s$ is an extended cyclical permutation of that of $C_r$ , as this is the only case where every color is adjacent to the same two colors across both cycles; if the color ordering of $C_r$ is $\\langle 1, 2, \\ldots , k \\rangle $ , that of $C_s$ is a cyclical permutation of $\\langle 1, 2, \\ldots , k \\rangle ^+$ , where the $+$ sign as a superscript means that the sequence is repeated one or more times.", "If the color ordering of $C_s$ is of length $2k$ or more and starts with the color 1, without loss of generality, we describe a construction that makes it possible to obtain a sequence of colors ordering that uses fewer than $k$ colors and that corresponds to the color ordering of a cycle.", "Starting from the color ordering of $C_s$ , we remove all colors in the sequence except the first and last occurrences of 1 and $k$ ; the resulting color ordering is the color ordering of a cycle.", "Therefore, paths $P_1$ and $P_k$ induce a cycle, contradicting the minimality assumption.", "We finally look at the case where all the cycles induced by the set of paths have the same ordering of colors.", "By Lemma REF , all the cycles are contiguously colored.", "It is sufficient to prove that two cycles that have the same color ordering are the same cycle.", "Consider two cycles $C_i$ and $C_j$ that have the same color ordering.", "For each of the two cycles considered separately, every path in $\\lbrace P_1, \\ldots P_{k}\\rbrace $ contains at least one edge in the cycle that it does not share with any other path, which we call a private edge; otherwise, a path can be omitted from the set of paths and the remaining paths will still induce the same cycle.", "Without loss of generality, let $\\langle 1, 2, \\ldots , k \\rangle $ be the color ordering that $C_i$ and $C_j$ share.", "Consider the two paths $P_1$ and $P_2$ whose colors are consecutive in the 
color ordering.", "Every cycle has to go through at least one private edge in $P_1$ and one private edge in $P_2$ , and since $P_1$ and $P_2$ 's colors are contiguous in the color ordering, the cycle goes through the private edges in question via the (unique and maximal) intersection of the two paths.", "Since the pair of paths whose colors are consecutive in the color ordering were picked arbitrarily, it follows that both $C_i$ and $C_j$ contain the edges in all the path intersections.", "A similar argument can be used to show that the inclusion of the edges in the path intersections in both cycles implies the inclusion of the edges in the subpaths between the path intersections in both $C_i$ and $C_j$ .", "Since the two cycles have the same edge set, it follows that they are in fact the same cycle, as needed.", "The combination of the three previous lemmas allows us to reduce the problem of finding cycles to the problem of finding special cycles.", "The reason why special cycles are of interest to us is because they make the cycle-breaking algorithm work." ], [ "Procedure to find a special cycle", "We now describe the procedure to find a special cycle containing the edge $e^\\star $ and obtain the set of paths that induce this special cycle.", "Recall that $e^\\star $ is the least frequency edge appearing in a cycle.", "Let $r$ be the number of $e^\\star $ -paths in the path system.", "For each $e^\\star $ -path, we construct $2(r-1) + 1$ different graphs that we call path intersection graphs, for a total of $r(2(r-1) + 1)$ path intersection graphs.", "Every path intersection graph is built out of a selection of one $e^\\star $ -path as a base path, and at most one other $e^\\star $ -path as a support path, where a base path is an $e^\\star $ -path whose color we use for $e^\\star $ , and a support path is an $e^\\star $ -path that is assumed to be needed as part of the special cycle we are looking for.", "In every path intersection graph, we add a vertex $v_{P_i}$ for every path $P_i$ that is not an $e^\\star $ -path, and we add an edge between two such vertices if the corresponding paths intersect.", "Moreover, in every path intersection graph with a path $P_b$ as its base path, we add two vertices $l_{P_b}$ and $r_{P_b}$ .", "The vertex $l_{P_b}$ (resp.", "$r_{P_b}$ ) is associated with all the vertices in the base path that occur to the left (resp.", "to the right) of the edge $e^\\star $ in the vertex sequence of the path (including one of the endpoints of the edge $e^\\star $ in both cases).", "If some path $P_j$ that is not an $e^\\star $ -path intersects $P_b$ , we add an edge from $v_{P_j}$ to either $l_{P_b}$ or $r_{P_b}$ , depending on whether it intersects $P_b$ in a subpath associated with $l_{P_b}$ or $r_{P_b}$ (clearly, a path cannot intersect both subpaths without going through the edge $e^\\star $ because the path system is merged).", "Support paths may either be treated as left-intersecting paths (i.e., the vertex corresponding to the support path is connected to $l_{P_b}$ ), right-intersecting paths (i.e., the vertex corresponding to the support path is connected to $r_{P_b}$ ), or neither, and that explains why we construct $2(r-1) + 1$ different path intersection graphs ($r-1$ path intersection graphs for the $r-1$ different choices of left-intersecting paths, $r-1$ path intersection graphs for the $r-1$ different choices of right-intersecting paths, and one path intersection graph with no support path) for every base path.", "We claim that the problem of obtaining a 
set of paths that induces a special cycle containing edge $e^\\star $ reduces to running $(l_{P_b}, r_{P_b})$ -reachability queries on the constructed $r(2(r-1) + 1)$ path intersection graphs (Lemma REF ).", "More specifically, we use BFS to check whether $r_{P_b}$ is reachable from $l_{P_b}$ , and if we find any yes-instance, we reconstruct the set of paths by using the BFS tree of the instance with the shortest path between $r_{P_b}$ and $l_{P_b}$ ; this set of paths induces a special cycle.", "Lemma 12 Paths $P_1, P_2, \\ldots , P_k$ induce a special cycle containing edge $e^\\star $ $(e^\\star \\in E(P_1))$ if and only if $r_{P_1}$ is reachable from $l_{P_1}$ through vertices $v_{P_2}, v_{P_3}, \\ldots , v_{P_k}$ in one of the generated reachability instances.", "We handle the forward direction first.", "If we have a special cycle, we know that it is contiguously colorable, that there exists a set of paths that induce it and induce no other cycles, and that it uses at most two $e^\\star $ -paths.", "We are able to extract a sequence of paths $P_1, P_2, \\ldots , P_k$ such that the only path intersections that exist within the sequence of paths are between two contiguous paths or the first and the last path in the sequence (if those are not the only path intersections, we end up with more than one induced cycle).", "There are two cases we need to handle.", "In the first case, $P_1$ is the only $e^\\star $ -path in the special cycle; there is a path from $v_{P_2}$ to $v_{P_k}$ in all of the path intersection graphs that have $P_1$ as the base path, since $P_i$ intersects $P_{i + 1}$ $\\forall i \\in \\mathbb {N}^{+}([2,k-1])$ .", "The path intersection graphs of interest to us are the ones with no support paths.", "$P_2$ and $P_k$ each intersect one side of $P_1$ , because if that were not the case, the resulting cycle would not contain the edge $e^\\star $ .", "Therefore, we either have edges from $v_{P_2}$ to $l_{P_1}$ and from $v_{P_k}$ to $r_{P_1}$ , or edges from $v_{P_2}$ to $r_{P_1}$ and from $v_{P_k}$ to $l_{P_1}$ ; $r_{P_1}$ is reachable from $l_{P_1}$ either way.", "In the second case, we are dealing with two $e^\\star $ -paths in the special cycle, meaning that $P_1$ and exactly one of $P_2$ or $P_k$ are $e^\\star $ -paths (note that the second $e^\\star $ -path has to be one of those two paths, otherwise, we contradict the inclusion-minimality property of the set of paths $\\lbrace P_1, P_2, \\ldots , P_k\\rbrace $ ).", "Here, $P_1$ acts as a base path, and exactly one of $P_2$ or $P_k$ acts as a support path.", "We therefore consider the two corresponding $(l_{P_1}, r_{P_1})$ -reachability instances, and using a similar argument as the one we employed for the first case, one of those two reachability instances (possibly both) will be a yes-instance.", "We now handle the backward direction.", "Assume $r_{P_1}$ is reachable from $l_{P_1}$ , and let $l_{P_1}, v_{P_2}, v_{P_3}, \\ldots , v_{P_k}, r_{P_1}$ be the corresponding shortest path that we obtained from the BFS tree.", "We can easily verify that the existence of such a path implies the existence of a subset of paths that induce cycles, this subset of paths being $P_1, P_2, \\ldots , P_k$ .", "We still have to show that they induce a special cycle.", "The easiest criterion to verify is that no more than two $e^\\star $ -paths are involved, as this follows by construction of the path intersection graphs, since all of them involve either one base path and no support path or one base path and one support path.", "Figure: 
Leveraging a source-target corner to reduce the number of paths that induce the special cycle.", "(a) Example of a sequence of three paths that induce a special cycle that contains the edge $e^\star$ (thick blue line).", "(b) Updated sequence of paths after swapping the target vertices of $P_2$ and $P_3$; $P_3$ can then be removed from the sequence of paths, as it no longer contributes any edge to the special cycle. We now show that the path from $l_{P_1}$ to $r_{P_1}$ implies the corresponding paths induce a single cycle.", "In the first case, we assume that a proper subset of the paths induces a cycle, and we pick the smallest such subset.", "Let $P_i$ be the path in said subset such that $v_{P_i}$ is the earliest occurring vertex in the path from $l_{P_1}$ to $r_{P_1}$ .", "There must exist two paths that $P_i$ intersects within the subset such that the vertices associated with those two paths occur later than $v_{P_i}$ in the path from $l_{P_1}$ to $r_{P_1}$ ; let those paths be $P_j$ and $P_l$ .", "Since $v_{P_i}$ is the earliest occurring vertex, BFS visited it earlier than $v_{P_j}$ and $v_{P_l}$ .", "Therefore, in the BFS tree, $v_{P_j}$ and $v_{P_l}$ are children of $v_{P_i}$ ; the path from a leaf ($r_{P_1}$ specifically) to the root therefore cannot include both $v_{P_j}$ and $v_{P_l}$ .", "The second case we have to handle is the case where we assume the set of paths is inclusion-minimal and induces multiple cycles; we know this is not possible due to Lemma REF .", "The color contiguity criterion does not need to be proved, as we have already proven in Lemma REF that any cycle can be turned into a contiguously colored cycle without adding new paths to the set of paths that induces it.", "That said, we will prove something even stronger: all possible colorings of the cycle that the set of paths $P_1, P_2, \ldots , P_k$ induces are contiguous.", "Assume we were able to discontiguously color the cycle.", "Pick the discontiguous color that corresponds to the path whose vertex is the earliest in the path from $l_{P_1}$ to $r_{P_1}$ .", "If the color is adjacent to three or more different colors, then we can use the BFS tree argument to prove the paths induce a single cycle (i.e., some of the colors in the cycle will not be on the path from $l_{P_1}$ to $r_{P_1}$ in the BFS tree).", "Otherwise, if we have a discontiguous color that is adjacent to two or fewer different colors, then we have an unmerged pair of paths.", "This is not possible, because the input of the cycle-breaking procedure is a path system that is unwrapped, and therefore, merged."
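To make the reduction concrete, the following Python sketch shows one way a path intersection graph for a fixed base $e^\star$-path could be assembled and queried with BFS, along the lines described above. It is only an illustration under simplifying assumptions: paths are plain vertex sequences, two paths are taken to intersect when they share a vertex, the helper names (`build_intersection_graph`, `shortest_l_to_r`) are ours rather than the paper's, and the way the optional support path is wired to the remaining vertices reflects our reading of the construction.

```python
from collections import deque
from itertools import combinations


def intersects(p, q):
    # Paths are vertex sequences; here "intersect" simply means sharing a vertex.
    return bool(set(p) & set(q))


def build_intersection_graph(base, others, e_star, support=None, support_side=None):
    """One path intersection graph for a fixed base e*-path.

    Vertices: 'L' and 'R' (the two subpaths of `base` around e*, each keeping one
    endpoint of e*), one vertex per path in `others` (the non-e*-paths), and, if
    given, one vertex for the support e*-path attached to `support_side`.
    """
    u, v = e_star
    i, j = sorted((base.index(u), base.index(v)))
    left, right = set(base[: i + 1]), set(base[j:])

    adj = {'L': set(), 'R': set()}
    for k, p in enumerate(others):
        adj[k] = set()
        # in a merged path system, a non-e*-path meets at most one side of the base path
        if set(p) & left:
            adj['L'].add(k); adj[k].add('L')
        elif set(p) & right:
            adj['R'].add(k); adj[k].add('R')
    for a, b in combinations(range(len(others)), 2):
        if intersects(others[a], others[b]):
            adj[a].add(b); adj[b].add(a)
    if support is not None:
        s = 'support'
        adj[s] = {support_side}
        adj[support_side].add(s)
        for k, p in enumerate(others):      # our reading: the support path also keeps
            if intersects(support, p):      # its intersections with the non-e*-paths
                adj[s].add(k); adj[k].add(s)
    return adj


def shortest_l_to_r(adj):
    """BFS from 'L' to 'R'; returns the inner vertices of a shortest route, or None."""
    parent = {'L': None}
    queue = deque(['L'])
    while queue:
        x = queue.popleft()
        if x == 'R':
            route = []
            while x is not None:
                route.append(x)
                x = parent[x]
            return [w for w in reversed(route) if w not in ('L', 'R')]
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                queue.append(y)
    return None
```

The full procedure described above would enumerate all $r(2(r-1)+1)$ choices of base path and (optional) support path and keep any yes-instance; the indices returned by `shortest_l_to_r`, together with the base path and the support path if one was used, play the role of the selected paths handed to the cycle-breaking procedure.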
], [ "Procedure to break special cycles", "The cycle-breaking procedure takes as input a sequence of paths (the selected paths) that induce a special cycle $C$ containing the edge $e^\\star $ and works on breaking $C$ by modifying the path system.", "As a consequence of the cycle being special, the only intersections that exist within the sequence of paths are between two contiguous paths or the first and the last path in the sequence of paths.", "This procedure aims to reduce the number of paths that induce $C$ by looking for a source-target corner.", "A source-target corner is a tree that is a subgraph of the path system graph and that is induced by two paths, such that updating the paths by swapping their targets and reconstructing them via the tree they induce makes it possible to delete one of the two updated paths without destroying $C$ .", "Every time a source-target corner is found, we eliminate it by deleting one of the two paths and we end up with a smaller set of paths that induces the same cycle.", "For example, in Figure REF a, three paths induce the special cycle.", "A source-target corner exists between $P_2$ and $P_3$ , because swapping targets, reconstructing the paths and removing $P_3$ preserves the cycle (Fig.", "REF b), so $P_3$ is definitively removed from the selected paths, as it no longer contributes any edge to the special cycle.", "This process does not increase the total weight of the path system, since the same edges are used for the reconstruction.", "Note that if the number of remaining selected paths is odd, then there must exist another source-target corner; otherwise, the number of source vertices will not be equal to the number of target vertices in the graph that the selected paths induce.", "Figure: Breaking a cycle in a path system induced on a reduced set of paths without source-target corners.", "(a) Example of path system with a special cycle formed by four paths.", "(b) Even reduced path system and (c) odd reduced path system obtained after breaking the cycle.", "Because both reduced path systems have the same overall weightand the odd path system does not contain the edge e ☆ e^\\star (thick blue line), the odd reduced path system gets picked.Eventually, we end up with a cycle with no source-target corners (and an even number of paths); if the cycle is made up of two paths, we use logic that is similar to the logic we presented for path merging (in the proof of Lemma REF ).", "We attempt to reduce total path weight; if this fails, we attempt to isolate tokens; if this also fails, we reroute through the subpath that does not contain $e^\\star $ .", "If we end up with a cycle formed out of four or more paths (see Fig.", "REF ), we refer to the remaining paths as a reduced set of paths.", "Let $r$ be the cardinality of the reduced set of paths.", "We can generate two path systems induced on the reduced set of paths, which we call reduced path systems, such that they break the cycle.", "Let $\\lbrace P_1, P_2, \\ldots , P_r\\rbrace $ be the reduced set of paths: Even reduced path system: Match $v_{s_i}$ with $v_{t_i}$ if $i$ is even, $v_{s_i}$ with $v_{t_{(i+2) \\mod {r}}}$ if $i$ is odd, $1 \\le i \\le r$ .", "In the latter case, reconstruct path $P_i$ on the graph induced by the reduced set of paths without using its private edges.", "Odd reduced path system: Match $v_{s_i}$ with $v_{t_i}$ if $i$ is odd, $v_{s_i}$ with $v_{t_{(i+2) \\mod {r}}}$ if $i$ is even, $1 \\le i \\le r$ .", "In the latter case, reconstruct path $P_i$ on the graph induced by 
the reduced set of paths without using its private edges.", "We remark that there is a single way to reconstruct the paths within each of the two path systems, as we are forbidding the inclusion of private edges within them, and if there were multiple reconstructions that would work, this would imply that the reduced set of paths induces more than one cycle.", "The fact that both graphs induced by the reduced path systems are cycle-free follows from the fact that the reduced set of paths induces a single cycle, and from the existence of private edges for every path.", "Those private edges are no longer part of any path in each of the reduced path systems, and the number of distinct edges used did not increase, therefore the cycle has been broken.", "We would now like to update the reduced set of paths using one of the two reduced path systems we constructed.", "Observe that the total weight of one of the two generated reduced path systems is less than or equal to that of the path system involving the selected paths.", "The observation follows from the fact that the total weight of the odd reduced path system and that of the even reduced path system add up to twice the total weight of the reduced set of paths.", "If the total weights of the two reduced path systems are different, we pick the reduced path system with the smallest total weight.", "Otherwise, we pick the reduced path system that does not include the edge $e^\\star $ (Figure REF ).", "After the selection is made, we update the paths in the path system accordingly.", "It may be the case that the cycle-breaking procedure gives rise to pairs of paths that are unmerged, or paths that are wrapped; those are two possible consequences of the elimination of source-target corners.", "Therefore, every time a cycle is broken, we run the path merging and the path unwrapping procedures to reinstate the invariant (namely that the path system is a UPS), followed by repopulating the edge frequencies in the updated path system.", "The path system being unwrapped is what ensures the correctness of our work." 
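The re-matching that defines the two reduced path systems, and the rule used to choose between them, can be sketched as follows. This is a schematic outline only: the actual reconstruction of the re-matched paths (on the graph induced by the reduced set, avoiding private edges) is delegated to caller-supplied callbacks, and the 1-based index convention mirrors the text above; the function names are illustrative, not the paper's.

```python
def reduced_matchings(r):
    """Source-to-target matchings defining the even and the odd reduced path system.

    Indices are 1-based as in the text: in the even system, source i keeps target i
    when i is even and is re-matched to target (i+2) mod r when i is odd; the odd
    system swaps the two cases.  An index of 0 produced by the modulus is read as r.
    """
    def matching(kept_parity):
        m = {}
        for i in range(1, r + 1):
            if i % 2 == kept_parity:
                m[i] = i                      # keep the original target
            else:
                j = (i + 2) % r
                m[i] = j if j != 0 else r     # re-match two positions ahead, cyclically
        return m

    return matching(0), matching(1)           # (even system, odd system)


def pick_reduced_system(even_sys, odd_sys, weight, uses_e_star):
    """Selection rule from the text: strictly smaller total weight wins; on a tie,
    prefer the system that avoids e*.  `weight` and `uses_e_star` are caller-supplied
    callbacks, since the path reconstruction itself is not sketched here."""
    w_even, w_odd = weight(even_sys), weight(odd_sys)
    if w_even != w_odd:
        return even_sys if w_even < w_odd else odd_sys
    return odd_sys if uses_e_star(even_sys) else even_sys
```

For instance, with $r=4$, `reduced_matchings(4)` gives $\{1\mapsto 3,\ 2\mapsto 2,\ 3\mapsto 1,\ 4\mapsto 4\}$ for the even system and $\{1\mapsto 1,\ 2\mapsto 4,\ 3\mapsto 3,\ 4\mapsto 2\}$ for the odd one, matching the description above.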
], [ "Proof of termination of the cycle-breaking procedure", "Finally, we show that the cycle-breaking procedure terminates.", "Once the cycle-breaking procedure terminates, the path system graph is a forest, which admits an ordering of moves (as we show in the next section).", "Lemma 13 Given an $n$ -vertex (positive) edge-weighted graph $G$ with $|E(G)| = m$ and $\\sum _{e \\in E(G)}{w(e)} = \\mathcal {O}(n^{c})$ for some positive integer $c$ , two sets $S,T \\subseteq V(G)$ , and a $T$ -valid unwrapped path system $\\mathcal {P}$ , the cycle-breaking procedure terminates in time $\\mathcal {O}(n^{c + 6}m)$ .", "The cycle-breaking procedure was designed in a way that ensures it terminates in polynomial time.", "If we limited it to arbitrarily detecting and breaking cycles, it would have been harder to prove its termination, let alone that it terminates in polynomial time.", "Breaking a single cycle (which takes time $\\mathcal {O}(n^4)$ , with the proof to follow) has one of three consequences: The decrease of the overall weight of the path system, a possible increase in the number of isolated tokens in the path system, and an indeterminate effect on the and edge frequencies The increase of the overall number of isolated tokens in the path system, with an indeterminate effect on edge frequencies (no increase in the overall weight of the path system) The decrease of the frequency of the least frequent edge that is part of a cycle, with an indeterminate effect on the frequency of the other edges (no increase in the overall weight of the path system or decrease in the number of isolated tokens) The consequences and their hierarchy are given by construction of the algorithm.", "The first consequence can occur $\\mathcal {O}(n^{c+1})$ times; it may also increase the number of isolated tokens (but not decrease it, as cycle-breaking reroutes paths through edges that are already in the path system).", "Irrespective of the first consequence, the second consequence can occur $\\mathcal {O}(n)$ times, because there are $\\mathcal {O}(n)$ tokens, and the third consequence can occur $\\mathcal {O}(nm)$ times in a row, since the frequency of a single edge is $\\mathcal {O}(n)$ , and edges that are taken out of cycles are not brought back into cycles because of how cycle-breaking is designed.", "One can observe that changes in path system weight or changes in the number of isolated tokens affect edge frequencies, in the sense that there can be $\\mathcal {O}(nm)$ occurrences of the third consequence in a row after each occurrence of either of the first two consequences.", "Analogously to the reasoning we employed for path merging, this means that the cycle-breaking procedure involves breaking $\\mathcal {O}((n^{c+1} + n)nm) = \\mathcal {O}(n^{c+2}m)$ cycles.", "It remains to show that breaking a single cycle can be done in time $\\mathcal {O}(n^4)$ .", "In the worst case, there are $\\mathcal {O}(n)$ paths that induce the cycle in question.", "In this case, constructing the reachability instances is done in time $\\mathcal {O}(n^4)$ , as we end up with $\\mathcal {O}(n^2)$ instances overall, each of which consists of $\\mathcal {O}(n)$ vertices, such that constructing the edges for each instance takes time $\\mathcal {O}(n^2)$ (provided that we precompute which pairs of paths intersect, which is done once and in time $\\mathcal {O}(n^2m)$ prior to the construction of the reachability instances).", "Every single BFS call on each reachability instance takes time $\\mathcal {O}(n^2)$ , which is the number 
of edges in a single path intersection graph, so all BFS calls combined take time $\\mathcal {O}(n^4)$ .", "We now account for the running time of source-target corner eliminations.", "In the worst case, we have $\\mathcal {O}(n)$ source-target corner eliminations, each of which is executed in time $\\mathcal {O}(n + m)$ (via BFS).", "Finally, it should be easy to see that the base cases run in time $\\mathcal {O}(nm)$ , as they either involve merging a pair of paths ($\\mathcal {O}(m)$ ), or constructing two path systems and computing their total weight ($\\mathcal {O}(nm)$ ).", "Given all the above, it follows that the cycle-breaking procedure terminates in time $\\mathcal {O}(n^{c + 6}m)$ .", "The fourth step of the aro algorithm is to compute an ordered path system (OPS) whose moves can be executed with the guarantee that each token moves at most once.", "Theorem 3 (Order path system) Given an $n$ -vertex (positive) edge-weighted graph $G$ with $|E(G)| = m$ and $\\sum _{e \\in E(G)}{w(e)} = \\mathcal {O}(n^{c})$ for some positive integer $c$ , two sets $S,T \\subseteq V(G)$ , and a $T$ -valid path system $\\mathcal {P}$ , we can compute, in time $\\mathcal {O}(n^{c + 6}m + n^3)$ , a valid ordered path system $\\mathcal {P^{\\prime }}$ such that $w(\\mathcal {P^{\\prime }}) \\le w(\\mathcal {P})$ .", "Moreover, the number of distinct edges used in $\\mathcal {P^{\\prime }}$ is at most the number of distinct edges used in $\\mathcal {P}$ .", "Given $\\mathcal {P}$ , we apply Theorem REF to compute a cycle-free path system $\\mathcal {P^{\\prime \\prime }}$ in time $\\mathcal {O}(n^{c + 6}m)$ .", "Recall that $\\mathcal {P^{\\prime \\prime }}$ is both merged and unwrapped and $G[\\mathcal {P^{\\prime \\prime }}]$ is a forest.", "Hence, we can apply Theorem 2.1 of [24] on each tree of the forest.", "The theorem states that given a tree with $n$ vertices, and with the number of tokens $S$ in the tree being equal to the number of target vertices $T$ in the tree, there is a $\\mathcal {O}(n)$ -time algorithm, which we call the exact tree solver, that performs the optimal (minimum) number of moves to transform $S$ to $T$ while moving each token at most once.", "It is easy to see that the non-trivial trees (i.e., the trees not containing a single source and target vertex) induced by the path system each have an equal number of source vertices and target vertices (we say that they are balanced); if not, there will exist a token in some tree that does not need to move, which implies that the path system can be modified in a way that decreases its total weight (or increases the number of isolated/fixed tokens).", "So we can assume, without loss of generality, that each tree is balanced.", "We keep track of the moves produced by the algorithm for each tree and add them in order to obtain $\\mathcal {P^{\\prime }}$ (the order among the different trees is irrelevant).", "The usage of the exact tree solver guarantees that $w(\\mathcal {P^{\\prime }}) \\le w(\\mathcal {P})$ and that the number of distinct edges used in $\\mathcal {P^{\\prime }}$ is at most the number of distinct edges used in $\\mathcal {P}$ , as isolated tokens will not move (in fact, we might even increase the number of isolated tokens) and the set of edges used in $\\mathcal {P^{\\prime }}$ is a subset of the edges used in $\\mathcal {P}$ .", "Moreover, the algorithm of [24] can be easily adapted to guarantee that the frequency of each edge, i.e., the number of times it is traversed in the path system, cannot increase since it is 
always traversed in one of its two possible directions.", "To see why $\\mathcal {P^{\\prime }}$ is ordered, we can construct a directed graph $D$ where each path $P$ in $\\mathcal {P^{\\prime }}$ corresponds to a node $v_{P}$ in $D$ and we add a directed edge from node $v_P$ to $v_{P^{\\prime }}$ whenever $P^{\\prime }$ depends on $P$ .", "We claim that $D$ is a directed acyclic graph.", "Assume otherwise.", "Then the existence of a cycle implies that either some pair of paths is unmerged, or some pair of paths is wrapped, or at least one token must move more than once, a contradiction in all cases.", "Hence, we can reconstruct the ordering of the moves by simply computing a topological ordering of $D$ , which can be done in time $\\mathcal {O}(|V(D)| + |E(D)|) = \\mathcal {O}(n^2)$  [39].", "Note that constructing the graph $D$ takes time $\\mathcal {O}(n^{3})$ in the worst case." ] ]
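For illustration, the ordering step at the end of the proof can be sketched as follows; the dependency predicate and the path identifiers are placeholders for whatever representation the path system uses (they are assumptions of this sketch, not the paper's API), and Kahn's algorithm is one standard way to obtain the topological ordering of $D$ mentioned above.

```python
from collections import deque


def order_moves(paths, depends_on):
    """Order the paths of a cycle-free path system so that each token moves once.

    `paths` is any sequence of (hashable) path identifiers and `depends_on(p, q)`
    is a caller-supplied predicate meaning "q depends on p", i.e. p's move must be
    executed before q's.  Returns the identifiers in a valid execution order and
    raises if the dependency graph is not acyclic.
    """
    ids = list(paths)
    out = {p: [] for p in ids}
    indeg = {p: 0 for p in ids}
    for p in ids:
        for q in ids:
            if p != q and depends_on(p, q):   # directed edge v_p -> v_q in D
                out[p].append(q)
                indeg[q] += 1

    queue = deque(p for p in ids if indeg[p] == 0)   # Kahn's algorithm
    order = []
    while queue:
        p = queue.popleft()
        order.append(p)
        for q in out[p]:
            indeg[q] -= 1
            if indeg[q] == 0:
                queue.append(q)
    if len(order) != len(ids):
        raise ValueError("dependency graph D contains a cycle")
    return order
```

Building the adjacency lists costs $\mathcal{O}(n^2)$ calls to the dependency check, which is consistent with the $\mathcal{O}(n^3)$ worst-case bound for constructing $D$ stated above when each check itself takes $\mathcal{O}(n)$ time.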
2212.05586
[ [ "Schauder type theorems for mild solutions to non autonomous\n Ornstein-Uhlenbeck equations" ], [ "Abstract We prove smoothing properties along suitable directions of the Ornstein-Uhlenbeck evolution operator, namely the evolution operator formally associated to the non autonomous Ornstein-Uhlenbeck operator.", "Moreover we use the smoothing estimates to prove Schauder type theorems, again along suitable directions, for the mild solution of a class of evolution equations." ], [ "Introduction", "Let $(X, \\langle \\cdot ,\\cdot \\rangle _X,\\left\\Vert \\,\\cdot \\,\\right\\Vert _X)$ be a separable Hilbert space, $T>0$ and $\\Delta =\\lbrace (s,t)\\in [0,T]^2\\ \\mbox{s.t.", "}\\ s<t\\rbrace $ .", "We consider the mild solution $u(s,x)=P_{s,t}\\varphi (x)-\\int _s^t\\bigl (P_{s,\\sigma }\\psi (\\sigma ,\\cdot )\\bigr )(x)\\,d\\sigma ,\\ \\ \\varphi \\in C_b(X),\\psi \\in C_b\\bigl ([0,T]\\times X\\bigr ).$ to the following class of backward non autonomous initial value problems ${\\left\\lbrace \\begin{array}{ll}\\partial _s u(s,x)+ L(s) u(s,x)=\\psi (s,x),\\ \\ \\ \\ (s,t)\\in \\Delta ,\\ x\\in X,\\\\u(t,x)=\\varphi (x),\\ \\ x\\in X,\\end{array}\\right.", "}$ where the operators $L(t)$ are of Ornstein-Uhlenbeck type $L(t)\\varphi (x)=\\frac{1}{2}\\mbox{\\rm Tr}\\Bigl (Q(t)D^2\\varphi (x)\\Bigr )+\\langle A(t)x+f(t),\\nabla \\varphi (x)\\rangle _X,$ with $Q(t)=B(t)B^\\star (t)$ .", "For any $t\\in [0,T]$ the evolution family $\\lbrace P_{s,t}\\rbrace _{s\\in [0,t]}$ is defined by $P_{t,t}&=I\\ \\ \\mbox{for any}\\ \\ t\\in [0,T], \\\\P_{s,t}\\varphi (x)&=\\int _X\\varphi (y)\\,\\mathcal {N}_{m^x(t,s),Q(t,s)}(dy)\\nonumber \\\\&=\\int _X\\varphi (y+m^x(t,s))\\,\\mathcal {N}_{0,Q(t,s)}(dy), \\ \\ (s,t)\\in \\overline{\\Delta },\\ \\varphi \\in C_b(X)$ where $\\mathcal {N}_{m,Q}$ is the Gaussian measure in $X$ with mean $m$ and covariance $Q$ .", "For any $(s,t)\\in \\overline{\\Delta }$ $ Q(t,s)&=\\int _s^tU(t,r) Q(r) U^\\star (t,r)\\,dr,\\\\m^x(t,s)&=U(t,s)x+g(t,s),\\\\g(t,s)&=\\int _s^tU(t,r) f(r)\\,dr.$ where $\\lbrace U(t,r)\\rbrace _{(t,r)\\in \\overline{\\Delta }}$ is the strongly continuous evolution operator in $X$ associated to the famili $\\lbrace A(t)\\rbrace _{t\\in [0,T]}$ .", "Under minimal assumptions, the mapping $(s,\\sigma ,x)\\in \\overline{\\Delta }\\times X\\longmapsto \\bigl (P_{s,\\sigma }\\psi (\\sigma ,\\cdot )\\bigr )(x)\\in \\mathbb {R}$ is measurable (e.g.", ").", "So the integral in (REF ) is well defined.", "In this infinite dimensional setting, we need that the operators $Q(t,s)$ defined by (REF ) have finite trace.", "If $\\psi \\equiv 0$ , (REF ) is the Kolmogorov equation formally associated to the forward stochastic differential equation ${\\left\\lbrace \\begin{array}{ll}dX_t(s,x)=\\bigl (A(t)X_t(s,x)+f(t)\\bigr )dt+B(t)dW_t,\\ \\ 0<s<t<T,\\\\X_s(s,x)=x\\in X,\\end{array}\\right.", "}$ where $W_t$ is a $X$ -valued cylindrical Wiener process.", "Namely it is the equation formally satisfied by $u(s,x)=\\mathbb {E}\\varphi (X_t(s,x)),\\ \\ 0\\le s\\le t\\le T.$ For a proof of this fact in the autonomous case see .", "We assume that for $0\\le s<t\\le T$ there exists a normed space $(E,\\left\\Vert \\, \\cdot \\, \\right\\Vert _{E})$ such that $E\\subseteq X$ with continuous embedding such that $U(t,s)(E)$ is contained in the Cameron-Martin space $\\mathcal {H}_{t,s}$ of $Q(t,s)$ and $U(t,s)_{|_E}\\in \\mathcal {L}(E,\\mathcal {H}_{t,s})$ .", "In this case we define the operators $\\Lambda (t,s)=Q^{-\\frac{1}{2}}(t,s)U(t,s),\\ \\ 0\\le s<t\\le T,$ where 
$Q^{-\\frac{1}{2}}(t,s)$ is the pseudo-inverse of $Q^{\\frac{1}{2}}(t,s)$ .", "Under this hypothesis, we can prove that $P_{s,t}$ maps $C_b(X)$ (the space of continuous and bounded functions) into $C^k_E(X)$ (the subspace of $k$ -time Frèchet differentiable functions along $E$ having bounded Frèchet differentials along $E$ ).", "Moreover, we give an explicit formula for the Frèchet derivatives of $P_{s,t}$ of any order along $E$ , that involves the operators $\\Lambda (t,s)$ .", "It turns out that $\\Lambda (t,s)\\in \\mathcal {L}(E,X)$ for $0\\le s < t\\le T$ , but $\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(E,X)}$ blows up as $t-s\\rightarrow 0^+$ .", "If we assume that $\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(E,X)}$ has a powerlike behavior, namely that there exist $\\theta , C>0$ such that $\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(E,X)}\\le \\frac{C}{(t-s)^\\theta }$ and $E$ does not depend on any $s,t$ , we can prove Hölder maximal regularity of (REF ) along directions of $E$ .", "In the autonomous case, Schauder estimates for Ornstein-Uhlenbeck type equations were proven in , in finite dimension and in in infinite dimension.", "Non autonomous equations in infinite dimensions were studied in in the strong Feller case; namely they show that $P_{s,t}$ maps $B_b(X)$ (the set of bounded Borel functions) into $C_b(X)$ under the assumption $U(t,s)(X)\\in Q^{\\frac{1}{2}}(t,s)(X)$ .", "Furthermore, it was proven that $P_{s,t}$ maps $B_b(X)$ into $C^k_b(X)$ for every $k\\in \\mathbb {N}$ and, under a suitable power like behavior of $\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(X)}$ , they proved Schauder type results.", "Techniques we adapted to our setting to prove Schauder estimates were developed in .", "Others authors looked for regularizing results in the autonomous and perturbed case.", "For further readings we refer to , , and .", "Moreover, for results on improvements of summability along suitable directions, we refer to .", "In section 2 we prove the smoothing properties of $P_{s,t}$ .", "In particular we prove that $P_{s,t}$ maps $C_b(X)$ in $C^k_E(X)$ for every $(s,t)\\in \\Delta $ and $k\\in \\mathbb {N}$ .", "Moreover there exist a constant $C_k>0$ such that $\\sup _{x\\in X}\\left\\Vert D_E^k(P_{s,t}\\varphi )(x)\\right\\Vert _{\\mathcal {L}^k(E)}\\le C_k\\left\\Vert \\Lambda (t,s)\\right\\Vert ^k_{\\mathcal {L}(E; X)}\\left\\Vert \\varphi \\right\\Vert _\\infty .$ Moreover, we extended to our non autonomous case a result of , giving sufficient conditions in order to take $E=Q^{\\frac{1}{2}}(s)(X)$ for $s\\in (0,t)$ .", "In section 3 we prove maximal Hölder regularity results.", "More precisely, if $\\alpha \\in (0,1)$ , $\\displaystyle {\\alpha +\\frac{1}{\\theta }\\notin \\mathbb {N}}$ , $\\varphi \\in C^{\\alpha +\\frac{1}{\\theta }}_E(X)$ and $\\psi \\in C^{0,\\alpha }_E\\bigl ([0,t]\\times X\\bigr )$ , then $u(s,\\cdot )\\in C_E^{0,\\alpha +\\frac{1}{\\theta }}\\bigl ([0,t]\\times X\\bigr )$ .", "Moreover there exists $C=C(T,\\alpha )>0$ , independent of $\\varphi $ and $\\psi $ , such that $\\left\\Vert u\\right\\Vert _{C_E^{0,\\alpha +\\frac{1}{\\theta }}([0,t]\\times X)}\\le C\\bigl (\\left\\Vert \\varphi \\right\\Vert _{C^{\\alpha +\\frac{1}{\\theta }}_E(X)}+\\left\\Vert \\psi \\right\\Vert _{C_E^{0,\\alpha }([0,t]\\times X)}\\bigr ).$ If $\\displaystyle {\\alpha +\\frac{1}{\\theta }}$ is an integer, $u(s,\\cdot )$ only belongs to the Zygmund space $Z_E^{\\alpha +\\frac{1}{\\theta }}$ for every $s$ .", "Zygmund 
regularity is not due to the infinite dimensional setting nor the time dependence of the data.", "Indeed, we have the same result even in finite dimension for the autonomous case for the Heat equation.", "The last section concerns three genuinely non autonomous examples.", "In the first example $A(t)$ and $B(t)$ are diagonal operators with respect to the same Hilbert basis of $X$ . In the second example we consider $A(t)=a(t) I$ , where $a$ is a continuous function.", "We get a non autonomous version of the Ornstein-Uhlenbeck semigroup used in the Malliavin calculus and we extend to such a non autonomous case the results of .", "In the third example $A(t)$ are the realizations of second order elliptic operators in $X=L^2(\mathcal {O})$ with Dirichlet boundary conditions and smooth enough coefficients, $\mathcal {O}$ is a bounded open and smooth subset of $\mathbb {R}^d$ and $B(t)\in \mathcal {L}\bigl (L^2(\mathcal {O});L^q(\mathcal {O})\bigr )$ for a suitable $q\ge 2$ .", "As in the autonomous case, we can take $B(t)\equiv I$ if $d=1$ .", "In , the authors proved Schauder results on the whole space $X$ with $\theta =\frac{1}{2}+\frac{d(q-2)}{4q}$ .", "In the present paper we show how we can achieve better regularity choosing a suitable subspace of $X$ .", "Indeed, choosing $E=L^p(\mathcal {O})$ with $p\in (2,q]$ we find $\theta =\frac{d}{2p}\bigl (1-\frac{p}{q}\bigr )$ and choosing $E=(X_q,D_q)_{\alpha ,p} $ with $\alpha \in \bigl (0,\frac{1}{2}\bigr )$ we get $\theta =\frac{1}{2}-\alpha $ ." ], [ "Notations and assumptions", "If $X$ and $Y$ are real Banach spaces we denote by $\mathcal {L}(X;Y)$ the space of bounded linear operators from $X$ to $Y$ .", "If $X=Y$ , we write $\mathcal {L}(X)$ instead of $\mathcal {L}(X;X)$ and if $Y=\mathbb {R}$ we simply write $X^{\prime }$ instead of $\mathcal {L}(X;\mathbb {R})$ .", "Moreover, we denote by $\mathcal {L}_1^+(X)$ the subset of $\mathcal {L}(X)$ consisting of all non-negative and symmetric operators having finite trace.", "For $k\ge 2$ , $\mathcal {L}^{k}(X)$ is the space of the $k$ -linear bounded operators $T: X^k\longrightarrow \mathbb {R}$ endowed with the norm $\left\Vert T\right\Vert _{\mathcal {L}^{k}(X)}=\sup \biggl \lbrace \frac{\left|T(x_1,...,x_k)\right|}{\left\Vert x_1\right\Vert _X\cdot \cdot \cdot \left\Vert x_k\right\Vert _X}:\ x_1,...,x_k\in X\setminus \lbrace 0\rbrace \biggr \rbrace .$ By $B_b(X;Y)$ and $C_b(X;Y)$ we denote the space of bounded Borel functions from $X$ to $Y$ and the space of bounded and continuous functions from $X$ to $Y$ , respectively.", "We endow them with the sup norm $\left\Vert F\right\Vert _\infty =\sup _{x\in X}\left\Vert F(x)\right\Vert _Y.$ If $Y=\mathbb {R}$ , we simply write $B_b(X)$ and $C_b(X)$ instead of $B_b(X;\mathbb {R})$ and $C_b(X;\mathbb {R})$ , respectively.", "Let $(E,\left\Vert \, \cdot \, \right\Vert _{E})$ be a normed space such that $E\subseteq X$ with continuous embedding.", "For $\alpha \in (0,1)$ we define the Hölder spaces along $E$ as $C^\alpha _E(X;Y)=\biggl \lbrace F\in C_b(X;Y):\ [F]_{C^\alpha _E(X;Y)}=\sup _{{\begin{array}{c}x\in X,\\ h\in E\setminus \lbrace 0\rbrace \end{array}}}\frac{\left\Vert F(x+h)-F(x)\right\Vert _Y}{\left\Vert h\right\Vert _E^\alpha }<+\infty \biggr \rbrace ,$ $\left\Vert F\right\Vert _{C^\alpha _E(X;Y)}=\sup _{x\in X}\left\Vert F(x)\right\Vert _{Y}+[F]_{C^\alpha _E(X;Y)},$ and the Lipschitz space along $E$ as
${\\rm Lip_E} (X;Y)=\\biggl \\lbrace F\\in C_b(X;Y):\\ [F]_{{\\rm Lip_E} (X;Y)}=\\sup _{\\begin{array}{c}x\\in X,\\\\ h\\in E\\setminus \\lbrace 0\\rbrace \\end{array}}\\frac{\\left\\Vert F(x+h)-F(x)\\right\\Vert _Y}{\\left\\Vert h\\right\\Vert _E}<+\\infty \\biggr \\rbrace ,$ $\\left\\Vert F\\right\\Vert _{{\\rm Lip_E} (X;Y)}=\\sup _{x\\in X} \\left\\Vert F(x)\\right\\Vert _Y+[F]_{{\\rm Lip_E} (X;Y)}.$ Again, if $Y=\\mathbb {R}$ we write $C^\\alpha _E(X)$ and ${\\rm Lip_E} (X)$ instead of $C^\\alpha _E(X;\\mathbb {R})$ and ${\\rm Lip_E} (X;\\mathbb {R})$ , respectively.", "Moreover we say that a map $f:X\\longrightarrow \\mathbb {R}$ is $E$ -Gâteaux differentiable at $x\\in X$ if there exists a bounded linear operator $l_x:E\\longrightarrow \\mathbb {R}$ such that for any $h\\in E$ , we have $\\lim _{t\\rightarrow 0}\\frac{f(x+th)-f(x)-l_x(h)}{t}=0.$ $l_x$ is the $E$ -Gâteaux differential of $f$ at $x$ and we set $l_x=D^G_Ef(x)$ .", "We say that a map $f:X\\longrightarrow \\mathbb {R}$ is $E$ -Fréchet differentiable at $x\\in X$ if there exists a bounded linear operator $t_x:E\\longrightarrow \\mathbb {R}$ such that $\\lim _{\\left\\Vert h\\right\\Vert _{E}\\rightarrow 0}\\frac{f(x+h)-f(x)-t_x(h)}{\\left\\Vert h\\right\\Vert _{E}}=0.$ $t_x$ is the $E$ -Fréchet differential of $f$ at $x$ and we set $t_x=D_Ef(x)$ .", "Clearly, if $f$ is Fréchet differentiable at $x$ then $f$ is $E$ -Fréchet differentiable at $x$ and this is due to the continuous embedding of $E$ in $X$ .", "More generally, for any $F:X\\longrightarrow Y$ , we say that $F$ is $E$ -Gâteaux differentiable at $x$ if there exists $L_x\\in \\mathcal {L}(E,Y)$ such that for any $h\\in E$ , we have $Y-\\lim _{t\\rightarrow 0}\\frac{F(x+th)-F(x)-L_x(h)}{t}=0.$ $L_x$ is the $E$ -Gâteaux differential of $F$ at $x$ and we donote it by by $D^G_E F(x)$ .", "Moreover we say that $F$ is $E$ -Fréchet differentiable at $x$ if there exists $T_x\\in \\mathcal {L}(E,Y)$ such that $Y-\\lim _{\\left\\Vert h\\right\\Vert _{E}\\rightarrow 0}\\frac{F(x+h)-F(x)-T_x(h)}{\\left\\Vert h\\right\\Vert _{E}}=0.$ $T_x$ is the $E$ - Fréchet differential of $F$ at $x$ and we denote it by by $D_E F(x)$ .", "Clearly, if $E=X$ the notions of $E$ -Fréchet differentiability and $E$ -Gâteaux differentiability coincide with the usual ones.", "In this case we omit the subindex $E$ in the notations above.", "Hence, for instance, we write $D F$ instead of $D_E F$ for the Fréchet derivative.", "If $F:X\\longrightarrow Y$ is $E$ -Fréchet differentiable at $x$ , we say that $F$ is twice $E$ -Fréchet differentiable at $x$ if $D_E F:X\\longrightarrow E^{\\prime }$ is $E$ -Fréchet differentiable at $x$ .", "Hence, we define the Hessian operator $D^2_E F\\in \\mathcal {L}^{2}(E)$ by $D^2_E F(x)(k,h):=(T_x k)(h),$ where $T_x$ is the operator in the definition, with $F(x)$ replaced by $D_E F$ and $Y=E^{\\prime }$ .", "If $F$ is $(k-1)$ -times $E$ -Fréchet differentiable at $x$ with $k\\ge 2$ , we say that $F$ is $k$ -times $E$ -Fréchet differentiable at $x$ if $D^{k-1}_E F: X\\longrightarrow \\mathcal {L}^{k-1}(E)$ is $E$ -Fréchet differentiable at $x$ .", "In this case $D^k_E\\in \\mathcal {L}^k(X)$ is defined as $D_E^k F(x)(h_1,...,h_k):=(T_xh_1)(h_2,...,h_k)$ where $T_x$ is the operator in the definition with $F(x)$ replaced by $D_E^{k-1}F(x)$ and $Y=\\mathcal {L}^{k-1}(E)$ .", "For any $k\\in \\mathbb {N}$ , $C^k_{E}(X;Y)$ is the subspace of $C_b(X)$ consisting of all functions $F:X\\longrightarrow Y$ $k$ -times $E$ -Fréchet differentiable at any point with $D_E^j F$ 
continuous and bounded in $\\mathcal {L}^{j}(E)$ for $j\\le k$ .", "$C^k_E(X;Y)$ is endowed with the norm $\\left\\Vert F\\right\\Vert _{C^k_E(X;Y)}:= \\left\\Vert F\\right\\Vert _\\infty +\\sum _{j=1}^k\\sup _{x\\in X}\\left\\Vert D_E^j F(x)\\right\\Vert _{\\mathcal {L}^{j}(E)}.$ If $Y=\\mathbb {R}$ we write $C^k_E(X)$ instead of $C^k_E(X;\\mathbb {R})$ .", "$Z^1_E(X;Y)$ is the Zygmund space along $E$ .", "It is defined by $Z^1_E(X;Y)=\\biggl \\lbrace F\\in C_b(X;Y)\\ :\\ [F]_{Z^1_E(X;Y)}=\\sup _{\\begin{array}{c}x\\in X,\\\\ h\\in E\\setminus \\lbrace 0\\rbrace \\end{array}}\\frac{\\left\\Vert F(x+2h)-2F(x+h)+F(x)\\right\\Vert _Y}{\\left\\Vert h\\right\\Vert _E}<+\\infty \\biggr \\rbrace ,$ and it is endowed with the norm $\\left\\Vert F\\right\\Vert _{Z^1_E(X;Y)}=\\left\\Vert F\\right\\Vert _\\infty +[F]_{Z^1_E(X;Y)}.$ Higher order Hölder and Zygmund spaces along $E$ are defined as follows.", "For $\\alpha \\in (0,1)$ and $n\\in \\mathbb {N}$ , we set $C^{n+\\alpha }_E(X;Y):=\\biggl \\lbrace F\\in C^n_b(X;Y)\\ :\\ D^n_E F\\in C^\\alpha _E(X;\\mathcal {L}^n(E))\\biggr \\rbrace ,$ $\\left\\Vert F\\right\\Vert _{C^{n+\\alpha }_E(X;Y)}:=\\left\\Vert F\\right\\Vert _{C^{n}_E(X;Y)}+[D^n_E F]_{C^\\alpha _E(X;\\mathcal {L}^n(E))}$ and for $n\\ge 2$ , $Z^n_E(X,Y)=\\biggl \\lbrace F\\in C^{n-1}_E(X;Y)\\ :\\ D^{n-1}_E F\\in Z^1_E(X;\\mathcal {L}^n(E))\\biggr \\rbrace ,$ $\\left\\Vert F\\right\\Vert _{Z^n_E(X,Y)}:=\\left\\Vert F\\right\\Vert _{C^{n-1}_E(X;Y)}+[D^{n-1}_E F]_{Z^1_E(X;\\mathcal {L}^{n-1}(E))}.$ Clearly if $E=X$ the functional spaces above coincide with the usual ones.", "We write $C^k_b(X;Y)$ , $C^\\alpha _b(X;Y)$ , ${\\rm Lip_b} (X;Y)$ , $Z^n_b(X;Y)$ instead of $C^k_E(X;Y)$ , $C^\\alpha _E(X;Y)$ , ${\\rm Lip_E} (X;Y)$ , $Z^n_E(X;Y)$ , respectively.", "Finally we introduce spaces of functions depending both on time and space variables.", "For every $a,b\\in \\mathbb {R}$ , $a<b$ and $\\alpha >0$ , we denote by $C^{0,\\alpha }_E\\bigl ([a,b]\\times X\\bigr )$ the space of all bounded continuous functions $\\psi :[a,b]\\times X\\longrightarrow \\mathbb {R}$ such that $\\psi (s,\\cdot )\\in C^\\alpha _E(X)$ , for every $s\\in [a,b]$ , with $\\left\\Vert \\psi \\right\\Vert _{C^\\alpha _E([a,b]\\times X)}=\\sup _{s\\in [a,b]}\\left\\Vert \\psi (s,\\cdot )\\right\\Vert _{C^\\alpha _E(X)}<+\\infty .$ If $\\alpha \\ge 1$ we also require that the mapping $(s,x)\\longrightarrow \\frac{\\partial \\psi }{\\partial h_1...\\partial h_k}(s,x)$ are continuous in $[a,b]\\times X$ , for every $h_1,...,h_k\\in E$ with $k\\le [\\alpha ]$ .", "Now, for every $k\\in \\mathbb {N}$ we denote by $Z^{0,k}_E\\bigl ([a,b]\\times X\\bigr )$ the space of all bounded continuous functions $\\psi :[a,b]\\times X\\longrightarrow \\mathbb {R}$ such that $\\psi (s,\\cdot )\\in Z^k_E(X)$ , for every $s\\in [a,b]$ , with $\\left\\Vert \\psi \\right\\Vert _{Z^k_E([a,b]\\times X)}=\\sup _{s\\in [a,b]}\\left\\Vert \\psi (s,\\cdot )\\right\\Vert _{Z^k_E(X)}<+\\infty .$ If $k\\ge 2$ , we also require that the mapping $(s,x)\\longrightarrow \\frac{\\partial \\psi }{\\partial h_1...\\partial h_i}(s,x)$ are continuous in $[a,b]\\times X$ , for every $h_1,...,h_i\\in E$ with $i\\le k-1$ .", "Hypotesis 2.1 $\\lbrace U(t,s)\\rbrace _{(s,t)\\in \\overline{\\Delta }}\\subseteq \\mathcal {L}(X)$ is a strongly continuous evolution operator, namely for every $x\\in X$ the map $(s,t)\\in \\overline{\\Delta }\\longmapsto U(t,s)x\\in X,$ is continuous and $U(t,t)=I$ for any $t\\in [0,T]$ , $U(t,r)U(r,s)=U(t,s)$ for $0\\le s\\le r\\le t\\le 
T$ .", "The family of operators $\\lbrace B(t)\\rbrace _{t\\in [0,T]}\\subseteq \\mathcal {L}(X)$ is bounded and strongly measurable, namely there exists $K>0$ such that $\\sup _{t\\in [0,T]}\\left\\Vert B(t)\\right\\Vert _{\\mathcal {L}(X)}\\le K,$ the map $t\\in [0,T]\\mapsto B(t)x\\in X$ is measurable for any $x\\in X$ .", "The map $f:[0,T]\\longrightarrow X$ is bounded and measurable.", "The trace of the operator $Q(t,s)$ is finite for every $0\\le s <t\\le T$ .", "By the Banach-Steinhaus theorem, there exists $N>0$ such that $\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(X)}\\le N,\\ \\ \\ (s,t)\\in \\overline{\\Delta }.$ Definition 2.2 Let $\\mu =\\mathcal {N}_ {0,Q}$ be the Gaussian measure of mean $m$ and covariance $Q$ , the consider the Cameron-Martin space $\\mathcal {H}$ of $\\mu $ is the space $Q^{\\frac{1}{2}}(X)$ endowed with the scalar product $\\langle h,k\\rangle _{\\mathcal {H}}=\\langle Q^{-\\frac{1}{2}} h,Q^{-\\frac{1}{2}} k\\rangle _X,\\ \\ \\ h,k\\in Q^{\\frac{1}{2}}(X),$ where $Q^{-\\frac{1}{2}}$ is the pseudo-inverse of $Q$ .", "For all $h\\in \\mathcal {H}$ , there exists ${}{{{*[{h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{h}{}\\in L^p(X,\\mu )$ , for every $p\\in [1,\\infty )$ with $\\left\\Vert \\,{}{{{*[{h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{h}{}\\,\\right\\Vert _{L^p(X,\\mu )}\\le \\left\\Vert h\\right\\Vert _{Q^{\\frac{1}{2}}(X)}$ such that the Cameron-Martin formula $\\mathcal {N}_{z,Q}(dy)=\\exp \\biggl (-\\frac{1}{2}\\left\\Vert h\\right\\Vert ^2_{Q^{\\frac{1}{2}}(X)}+{}{{{*[{h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{h}{}(y)\\biggr )\\mathcal {N}_{0,Q}(dy)$ holds.", "Let $(e_k)_{k\\in \\mathbb {N}}\\subseteq X$ be an orthonormal basis of $X$ consisting of eigenvectors of $Q$ , $Qe_k=\\lambda _ke_k$ for all $k\\in \\mathbb {N}$ .", "We have ${}{{{*[{h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{h}{}(y)=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ \\lambda _k\\ne 0\\end{array}}y_k \\Bigl (Q^{-\\frac{1}{2}}h\\Bigr )_k\\lambda _k^{-\\frac{1}{2}}=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ \\lambda _k\\ne 0\\end{array}}y_k\\, h_k\\,\\lambda _k^{-1}$ where $y_k=\\langle y,e_k\\rangle _X$ for any $y\\in X$ .", "We remark that the series defined in (REF )converges in $L^p(X,\\mathcal {N}_{0,Q})$ for any $p\\in [1,\\infty )$ but in general it does not converge pointwise.", "Moreover for any $p\\in [1,\\infty )$ there exists $c_p>0$ such that $\\left\\Vert y\\longmapsto {}{{{*[{h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{h}{}(y)\\right\\Vert _{L^p(X,\\mathcal {N}_{0,Q(t,s)})}\\le c_p \\left\\Vert h\\right\\Vert _X,\\\\ \\ \\mbox{for any}\\ (s,t)\\in \\Delta .$ Remark 2.3 In our non-autonomous setting, for every $(t,s)\\in \\Delta $ we denote by $\\mathcal {H}_{t,s}$ the Cameron-Martin space of the measure $\\mathcal {N}_{0,Q(t,s)}$ .", "Moreover, for every $t\\in [0,T]$ we denote by $\\mathrm {H}_t$ the space $Q(t)^{\\frac{1}{2}}(X)$ endowed with the scalar product $\\langle h,k\\rangle _{\\mathrm {H}_t}=\\langle Q^{-\\frac{1}{2}}(t) h,Q^{-\\frac{1}{2}}(t) k\\rangle _X,\\ \\ \\ h,k\\in Q^{\\frac{1}{2}}(t)(X),$ where $Q^{-\\frac{1}{2}}(t)$ is the pseudo-inverse of $Q(t)$ .", "Hypotesis 2.4 There exists a normed space $(E,\\left\\Vert \\, \\cdot \\, \\right\\Vert _{E})$ such that $E\\subseteq X$ with continuous embedding such that 
$U(t,s)(E)\\subseteq E$ , $U(t,s)_{|_E}\\in \\mathcal {L}(E)$ and there exists $M>0$ such that $ \\left\\Vert U(t,s)_{|_E}\\right\\Vert _{\\mathcal {L}(E)}\\le M,\\ \\ \\ (s,t)\\in \\overline{\\Delta }.$ Hypotesis 2.5 For a fixed $(s,t)\\in \\Delta $ there exists a normed space $(E,\\left\\Vert \\, \\cdot \\, \\right\\Vert _{E})$ such that $E\\subseteq X$ with continuous embedding such that $U(t,s)(E)\\subseteq \\mathcal {H}_{t,s}$ and $U(t,s)_{|_E}\\in \\mathcal {L}(E,\\mathcal {H}_{t,s})$ .", "Let $Q^{-\\frac{1}{2}}(t,s)$ be the pseudo-inverse of the operator $Q^{\\frac{1}{2}}(t,s)$ .", "If hypotheis REF holds, we define the operator $\\Lambda (t,s)=Q^{-\\frac{1}{2}}(t,s)U(t,s),\\ \\ (s,t)\\in \\Delta .$ Remark 2.6 If $E$ and $F$ are a Banach spaces and $U(t,s)(E)\\subseteq F$ , then $U(t,s)_{|_E}\\in \\mathcal {L}(E,F)$ .", "Indeed, if $x_n\\xrightarrow[n\\rightarrow +\\infty ]{E}x$ and $U(t,s)x_n\\xrightarrow[n\\rightarrow +\\infty ]{F}y$ , then $x_n\\xrightarrow[n\\rightarrow +\\infty ]{X}x$ and $U(t,s)x_n\\xrightarrow[n\\rightarrow +\\infty ]{X}y$ , since the embeddings of $E$ in $X$ and of $F$ in $X$ are continuous.", "Since $U(t,s)\\in \\mathcal {L}(X)$ then $y=U(t,s)x$ .", "Remark 2.7 We remark that if $E=X$ for all $(s,t)\\in \\Delta $ , Hypothesis REF implies that $P_{s,t}$ is strong Feller as proved in .", "Remark 2.8 It is convenient to rewrite Hypothesis REF , as in the autonomous case (e.g.", ", ).", "For $(s,t)\\in \\Delta $ , we consider the operator $L:L^2\\bigl ((s,t); X\\bigr )\\longrightarrow X$ defined by $Ly:=\\int _s^tU(t,\\sigma )B(\\sigma )y(\\sigma )\\,d\\sigma ,\\ \\ \\ y\\in L^2\\bigl ((s,t); X\\bigr ).$ The adjoint operator $L^\\star :X\\longrightarrow L^2\\bigl ((s,t); X\\bigr )$ satisfies $\\langle Ly,x\\rangle _X=\\langle y,L^\\star x\\rangle _{L^2((s,t); X)}$ , which means $\\int _s^t\\langle U(t,\\sigma )B(\\sigma )y(\\sigma ),x\\rangle _X\\,d\\sigma =\\int _s^t\\langle y(\\sigma ),(L^\\star x)(\\sigma )\\rangle _X\\,d\\sigma ,\\ \\ \\ x\\in X,y\\in L^2\\bigl ((s,t); X\\bigr ),$ so that $(L^\\star x)(\\sigma )=B^\\star (\\sigma ) U^\\star (t,\\sigma )x,\\ \\ x\\in X,\\ \\mbox{a.e}\\ \\sigma \\in (s,t)$ and we get $LL^\\star x=\\int _s^tU(t,\\sigma )B(\\sigma )B^\\star (\\sigma ) U^\\star (t,\\sigma )x \\,d\\sigma =Q(t,s)x,\\ \\ x\\in X.$ Therefore, by the general theory of linear operators in Hilbert spaces (e.g., ), we get ${\\left\\lbrace \\begin{array}{ll}\\mbox{\\mdseries {Range}}(L)=\\mbox{\\mdseries {Range}}(Q^{\\frac{1}{2}}(t,s)) \\\\~\\\\\\left\\Vert Q^{-\\frac{1}{2}}(t,s)x\\right\\Vert _X=\\left\\Vert L^{-1}x\\right\\Vert _{L^2((s,t); X)},\\ \\ \\ x\\in \\mathcal {H}_{t,s}.\\end{array}\\right.", "}$ Since in general $L$ is not invertible, we stress that $L^{-1}$ is meant as the pseudo-inverse of $L$ .", "Hence $\\left\\Vert L^{-1}x\\right\\Vert _{L^2((s,t); X)}=\\min \\Bigl \\lbrace \\left\\Vert y\\right\\Vert _{L^2((s,t); X)}\\ :\\ Ly=x\\Bigr \\rbrace ,\\ \\ \\ x\\in \\mathcal {H}_{t,s}.$ Hence the range of $L$ is the set of the traces at time $t$ of the mild solutions of the evolution problems ${\\left\\lbrace \\begin{array}{ll}u^{\\prime }(r)=A(r)u(r)+B(r)y(r),\\ \\ s<r<t,\\\\u(s)=0\\end{array}\\right.", "}$ where $y$ varies in $L^2((s,t);X)$ .", "So Hypothesis REF may be reformulated requiring that $U(t,s)$ maps $E$ in the trace space, for every $(s,t)\\in \\Delta $ .", "Finally we state also the last hypothesis that is essential to prove Schauder estimates.", "Hypotesis 2.9 There exists a normed space $(E,\\left\\Vert \\, \\cdot \\, 
\\right\\Vert _{E})$ such that $E\\subseteq X$ with continuous embedding such that $U(t,s)(E)\\subseteq E$ , $U(t,s)_{|_E}\\in \\mathcal {L}(E)$ and By the Banach-Steinhaus theorem, there exists $M>0$ such that $\\left\\Vert U(t,s)_{|_E}\\right\\Vert _{\\mathcal {L}(E)}\\le M,\\ \\ \\ (s,t)\\in \\overline{\\Delta };$ $U(t,s)(E)\\subseteq \\mathcal {H}_{t,s}$ for any $(s,t)\\in \\Delta $ and $U(t,s)_{|_E}\\in \\mathcal {L}(E,\\mathcal {H}_{t,s})$ .", "Hypotesis 2.10 There exist $C,\\theta >0$ such that $\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(E;X)}\\le \\frac{C}{(t-s)^\\theta },\\ \\ (s,t)\\in \\Delta .$" ], [ "Smoothing properties of $P_{s,t}$", "Smoothing properties of $P_{s,t}$ If Hypothesis REF hold, the evolution family $P_{s,t}\\varphi (x)=\\int _X\\varphi (y+m^x(t,s))\\,\\mathcal {N}_{0,Q(t,s)}(dy),\\ \\ \\ x\\in X,\\ (s,t)\\in \\Delta ,\\ \\varphi \\in C_b(X)$ is well defined.", "In it was proved that $P_{s,t}$ maps $C^1_b(X)$ into itself, and $\\nabla (P_{s,t}\\varphi )(x)=U^\\star (t,s)P_{s,t}\\nabla \\varphi (x),\\ \\ (s,t)\\in \\Delta ,\\ x\\in X,\\ \\varphi \\in C^1_b(X)$ so that $\\sup _{x\\in X}\\left\\Vert D(P_{s,t}\\varphi )(x)\\right\\Vert _{X^{\\prime }}\\le \\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(X)}\\left\\Vert D\\varphi \\right\\Vert _\\infty \\le M\\left\\Vert \\varphi \\right\\Vert _{C^1_b(X)},\\ \\ (s,t)\\in \\Delta ,\\ \\varphi \\in C^1_b(X);$ More generally it was proved that $P_{s,t}$ maps $C^k_b(X)$ into itself for every $k\\in \\mathbb {N}$ Here we need similar properties for the spaces $C^k_E(X)$ .", "Lemma 3.1 Under Hypotheses REF and REF for every $k\\in \\mathbb {N}$ and $(s,t)\\in \\Delta $ , $P_{s,t}$ maps $C^k_E(X)$ into itself, and $D^k_E(P_{s,t}\\varphi )(x)(h_1,...,h_k)=P_{s,t}\\bigl (D_E^k\\varphi (\\cdot )(U(t,s)h_1,...,U(t,s)h_k)\\bigr )(x),\\ \\ (s,t)\\in \\Delta ,$ for any $x\\in X$ and $h_1,...,\\ h_k\\in E$ .", "In particular it follows that for every $\\varphi \\in C^k_b(X)$ and $(s,t)\\in \\Delta $ $\\left|D^k_EP_{s,t}\\varphi (x)(h)\\right|&\\le P_{s,t}(\\left|D^k_E\\varphi (x)(h)\\right|), \\\\\\left\\Vert D_E^k(P_{s,t}\\varphi )\\right\\Vert _{\\infty }&\\le \\left\\Vert U(t,s)\\right\\Vert ^k_{\\mathcal {L}(E)}\\left\\Vert D^k_E\\varphi \\right\\Vert _{\\infty }\\le M^k\\left\\Vert \\varphi \\right\\Vert _{C^k_E(X)}.$ Moreover, for every fixed $t\\in [0,T]$ and $h_1,...,\\ h_k\\in X$ , the mapping $(s,x)\\in [0,t]\\times X\\longmapsto D_E^k(P_{s,t}\\varphi )(x)(h_1,...,h_k)\\in \\mathbb {R},$ is continuous.", "If in addition the mapping $(s,t)\\in \\Delta \\longmapsto U(t,s)\\in \\mathcal {L}(X)$ is continuous, then the function $(s,t,x)\\in \\overline{\\Delta }\\times X\\longmapsto D_E^k(P_{s,t}\\varphi )(x)(h_1,...,h_k)\\in \\mathbb {R},$ is continuous.", "If $\\varphi \\in C^\\alpha _E(X)$ , where $\\alpha =k+\\sigma $ , where $k\\in \\mathbb {N}\\cup \\lbrace 0\\rbrace $ and $\\sigma \\in (0,1)$ , then $P_{s,t}\\varphi \\in C^\\alpha _E(X)$ and $[D_E^k(P_{s,t}\\varphi )]_{C^\\sigma _E(X)}\\le \\left\\Vert U(t,s)\\right\\Vert ^{k}_{\\mathcal {L}(X)}[D^k_E\\varphi ]_{C^\\sigma _E(X)}\\le M^{k}[D^k\\varphi ]_{C^\\sigma _E(X)}.$ If $\\varphi \\in Z^k_E(X)$ , where $k\\in \\mathbb {N}$ , then $P_{s,t}\\varphi \\in Z^k_E(X)$ and $[D_E^{k-1}(P_{s,t}\\varphi )]_{Z^k_E(X)}\\le \\left\\Vert U(t,s)\\right\\Vert ^{k-1}_{\\mathcal {L}(X)}[D_E^k\\varphi ]_{Z^k_E(X)}\\le M^{k-1}[D_E^k\\varphi ]_{Z^k_E(X)}.$ We start proving (REF ) and (REF ) by induction over $k$ .", "If $k=1$ , let $x\\in X$ , $h\\in E$ and $\\varepsilon 
>0$ , then for all $(s,t)\\in \\Delta $ , we have $\\frac{P_{s,t}\\varphi (x+\\varepsilon h)-P_{s,t}\\varphi (x)}{\\varepsilon }\\le \\int _X\\frac{\\left|\\varphi \\bigl (y+m^{x+\\varepsilon h}(t,s)\\bigl )-\\varphi \\bigl (y+m^x(t,s)\\bigr )\\right|}{\\varepsilon }\\,\\mathcal {N}_{0,Q(t,s)}(dy).$ Since $\\frac{\\left|\\varphi \\bigl (y+m^{x+\\varepsilon h}(t,s)\\bigr )-\\varphi \\bigl (y+m^x(t,s)\\bigr )\\right|}{\\varepsilon }\\le \\left\\Vert D_E\\varphi \\right\\Vert _\\infty \\left\\Vert U(t,s)h\\right\\Vert _E\\le \\left\\Vert D_E\\varphi \\right\\Vert _\\infty \\left\\Vert U(t,s)\\right\\Vert _{E^{\\prime }}\\left\\Vert h\\right\\Vert _E,$ and by the Dominated Convergence Theorem, as $\\varepsilon \\rightarrow 0^+$ , we get $D_EP_{s,t}\\varphi (x)(h)=P_{s,t}D_E\\varphi (\\cdot )(U(t,s)h).$ Moreover $\\left|D_EP_{s,t}\\varphi (x)(h)\\right|&\\le \\int _X \\left|D_E\\varphi (y+m^x(t,s))(U(t,s)h)\\right|\\, \\mathcal {N}_{0,Q(t,s)}(dy)\\nonumber \\\\&\\le \\left\\Vert D_E\\varphi \\right\\Vert _{\\infty }\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(E)}\\left\\Vert h\\right\\Vert _E,\\nonumber $ and $\\left\\Vert D_EP_{s,t}\\varphi (x)\\right\\Vert _{E^{\\prime }}\\le \\left\\Vert D_E\\varphi \\right\\Vert _{\\infty }\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(E)}.$ Hence $\\left\\Vert D_EP_{s,t}\\varphi \\right\\Vert _{\\infty }\\le \\left\\Vert D_E\\varphi \\right\\Vert _{\\infty }\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(E)}\\le M \\left\\Vert \\varphi \\right\\Vert _{C_E^1(X)}.$ We assume now $\\varphi \\in C^k_E(X)$ and that (REF ) and (REF ) hold.", "Let $x\\in X$ , $h_1,...,h_{k+1}\\in E$ and $\\varepsilon >0$ , then for all $(s,t)\\in \\Delta $ , we have $&\\frac{P_{s,t}\\bigl (D_E^k\\varphi (\\cdot )(U(t,s)h_1,...,U(t,s)h_k)\\bigr )(x+\\varepsilon h_{k+1})-P_{s,t}\\bigl (D_E^k\\varphi (\\cdot )(U(t,s)h_1,...,U(t,s)h_k)\\bigr )(x)}{\\varepsilon }\\nonumber \\\\ &\\le \\int _X\\frac{\\left|D_E^k\\varphi \\bigl (y+m^{x+\\varepsilon h_{k+1}}(t,s)\\bigl )(U(t,s)h_1,...,U(t,s)h_k)-D_E^k\\varphi \\bigl (y+m^x(t,s)\\bigr )(U(t,s)h_1,...,U(t,s)h_k)\\right|}{\\varepsilon }\\,\\mathcal {N}_{0,Q(t,s)}(dy).\\nonumber $ Since $&\\frac{\\left|D_E^k\\varphi \\bigl (y+m^{x+\\varepsilon h_{k+1}}(t,s)\\bigl )(U(t,s)h_1,...,U(t,s)h_k)-D_E^k\\varphi \\bigl (y+m^x(t,s)\\bigr )(U(t,s)h_1,...,U(t,s)h_k)\\right|}{\\varepsilon }\\nonumber \\\\&\\le \\left\\Vert D^k_E\\varphi \\right\\Vert _\\infty \\prod _{i=1}^{k+1}\\left\\Vert U(t,s)h_{k+1}\\right\\Vert _E\\le \\left\\Vert D^k_E\\varphi \\right\\Vert _\\infty \\left\\Vert U(t,s)\\right\\Vert ^{k+1}_{E^{\\prime }}\\prod _{i=1}^{k+1}\\left\\Vert h_i\\right\\Vert _E,$ by the Dominated Convergence Theorem, as $\\varepsilon \\rightarrow 0^+$ , we get $D^{k+1}_E(P_{s,t}\\varphi )(x)(h_1,...,h_{k+1})=P_{s,t}\\bigl (D_E^{k+1}\\varphi (\\cdot )(U(t,s)h_1,...,U(t,s)h_{k+1})\\bigr )(x).$ Moreover $\\left|D^{k+1}_E(P_{s,t}\\varphi )(x)(h_1,...,h_{k+1})\\right|&\\le \\int _X \\left|D^{k+1}_E\\varphi (y+m^x(t,s))(U(t,s)h_1,...,U(t,s)h_{k+1})\\right|\\, \\mathcal {N}_{0,Q(t,s)}(dy)\\nonumber \\\\&\\le \\left\\Vert D^{k+1}_E\\varphi \\right\\Vert _{\\infty }\\left\\Vert U(t,s)\\right\\Vert ^{k+1}_{\\mathcal {L}(E)}\\prod _{i=1}^{k+1}\\left\\Vert h_i\\right\\Vert _E,\\nonumber $ and $\\left\\Vert D^{k+1}_E(P_{s,t}\\varphi )(x)\\right\\Vert _{\\mathcal {L}^k(E)}\\le \\left\\Vert D^{k+1}_E\\varphi \\right\\Vert _{\\infty }\\left\\Vert U(t,s)\\right\\Vert ^{k+1}_{\\mathcal {L}(E)}.$ Hence $\\left\\Vert D^{k+1}_E(P_{s,t}\\varphi )\\right\\Vert _{\\infty 
}\\le \\left\\Vert U(t,s)\\right\\Vert ^{k+1}_{\\mathcal {L}(E)}\\left\\Vert D^{k+1}_E\\varphi \\right\\Vert _{\\infty }\\le M^{k+1}\\left\\Vert \\varphi \\right\\Vert _{C_E^{k+1}(X)}.$ The proofs of the continuity properties are analogous to and (REF ), (REF ) are a consequence of (REF ) and of the definition of $P_{s,t}$ .", "Corollary 3.2 Under Hypotheses REF and REF , for every $\\alpha >0$ and for every $\\varphi \\in C^\\alpha _E(X)$ , $t\\in (0,T]$ , the function $(s,x)\\in [0,t]\\times X\\longmapsto u_0(s,x):=P_{s,t}\\varphi (x)\\in \\mathbb {R},$ belongs to $C^{0,\\alpha }_E\\bigl ([0,t]\\times X\\bigr )$ , and there exists $C=C(\\alpha , T)>0$ such that $\\left\\Vert u_0\\right\\Vert _{C^{0,\\alpha }_E([0,t]\\times X)}\\le C\\left\\Vert \\varphi \\right\\Vert _{C^\\alpha _E(X)}.$ Similarly, if $\\varphi \\in Z^k_E(X)$ for some $k\\in \\mathbb {N}$ , then the function $u_0$ belongs to $Z^{0,k}_E\\bigl ([0,t]\\times X\\bigr )$ , and there exists $C=C(k, T)>0$ such that $\\left\\Vert u_0\\right\\Vert _{Z^{0,k}_E([0,t]\\times X)}\\le C\\left\\Vert \\varphi \\right\\Vert _{Z^k_E(X)}.$ Theorem 3.3 Under the hypotheses REF ,REF , $\\displaystyle {P_{s,t}\\varphi (x)\\in \\bigcap _{k\\in \\mathbb {N}} C^k_E(X)}$ for every $\\varphi \\in C_b(X)$ and all $(s,t)\\in \\Delta $ .", "In particular $D_E(P_{s,t}\\varphi )(x)(h)=\\int _X\\varphi \\bigl (y+m^x(t,s)\\bigr )\\,{}{{{*[{ U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{ U(t,s)h}{}(y)\\,\\mathcal {N}_{0,Q(t,s)}(dy),\\ \\ h\\in E$ and there exists $C>0$ such that $\\sup _{x\\in X}\\left\\Vert D_E(P_{s,t}\\varphi )(x)\\right\\Vert _{E^{\\prime }}\\le C\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(E; X)}\\left\\Vert \\varphi \\right\\Vert _\\infty ,\\ \\ \\mbox{for any}\\ \\ (s,t)\\in \\Delta .$ Moreover for $n\\ge 2$ $ D^n_E(P_{s,t}\\varphi )(x)(h_1,...,h_n)=\\int _X\\varphi \\bigl (y+m^x(t,s)\\bigr )\\, I_n(t,s)(y)(h_1,...,h_n)\\, \\mathcal {N}_{0,Q(t,s)}(dy),\\ \\ h_1,...,h_n\\in E,$ where $I_n(t,s)(y)(h_1,...,h_n)&:=\\prod _{i=1}^n{}{{{*[{ U(t,s)h_i}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{ U(t,s)h_i}{}(y)\\nonumber \\\\&+\\sum _{s=1}^{r_n}(-1)^s\\sum _{\\begin{array}{c}i_1,...i_{2s},\\\\ i_{2k-1}<i_{2k},\\\\ i_{2k-1}<i_{2k+1}\\end{array}}^n\\prod _{i=1}^s\\langle \\Lambda (t,s)h_{i_{2k-1}},\\Lambda (t,s)h_{i_{2k}}\\rangle _X\\prod _{\\begin{array}{c}i_m=1,\\\\ i_m\\ne i_1,...,i_{2s}\\end{array}}^n{}{{{*[{ U(t,s)h_{i_m}}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{ U(t,s)h_{i_m}}{}(y)$ and $r_n={\\left\\lbrace \\begin{array}{ll}\\frac{n}{2}\\ \\ \\mbox{if}\\ n\\ \\mbox{is even},\\\\\\frac{n-1}{2}\\ \\ \\mbox{if}\\ n\\ \\mbox{is odd}.", "\\nonumber \\end{array}\\right.", "}$ In particular, for every $n\\ge 2$ there exists $C_n>0$ such that $\\sup _{x\\in X}\\left\\Vert D_E^n(P_{s,t}\\varphi )(x)\\right\\Vert _{\\mathcal {L}^n(E)}\\le C_n\\left\\Vert \\Lambda (t,s)\\right\\Vert ^n_{\\mathcal {L}(E; X)}\\left\\Vert \\varphi \\right\\Vert _\\infty ,\\ \\ \\mbox{for any}\\ \\ (s,t)\\in \\Delta .$ We prove that $P_{s,t}\\varphi (x)\\in C_b(X)$ for every $\\varphi \\in C_b(X)$ and $(s,t)\\in \\Delta $ .", "For $x, x_0\\in X$ we have $\\left|P_{s,t}\\varphi (x)-P_{s,t}\\varphi (x_0)\\right|\\le \\int _X\\left|\\varphi \\bigl (y+m^x(t,s)\\bigr )-\\varphi \\bigl (y+m^{x_0}(t,s)\\bigr )\\right|\\,\\mathcal {N}_{0,Q(t,s)}(dy).$ Since $\\left|\\varphi \\bigl (y+m^x(t,s)\\bigr )-\\varphi \\bigl 
(y+m^{x_0}(t,s)\\bigr )\\right|\\le 2 \\left\\Vert \\varphi \\right\\Vert _\\infty $ the statement follows as $x\\rightarrow x_0$ , by the Dominated Convergence theorem .", "We prove that $P_{s,t}\\varphi (x)\\in C^1_E(X)$ for every $\\varphi \\in C_b(X)$ and $(s,t)\\in \\Delta $ and that (REF ) holds.", "For every $\\varepsilon \\in (0,1)$ , $x\\in X$ and $h\\in E$ we have $\\frac{P_{s,t}\\varphi (x+\\varepsilon h)-P_{s,t}\\varphi (x)}{\\varepsilon }=&\\frac{1}{\\varepsilon }\\biggl (\\int _X\\varphi (y+m^x(t,s))\\,\\mathcal {N}_{\\varepsilon U(t,s)h,Q(t,s)}(dy)\\nonumber \\\\&-\\int _X\\varphi (y+m^x(t,s))\\,\\mathcal {N}_{0,Q(t,s)}(dy)\\biggr ).", "\\nonumber $ Since $\\varepsilon \\,U(t,s)h\\in \\mathcal {H}_{t,s}$ , thanks to the Cameron-Martin formula we get $\\mathcal {N}_{\\varepsilon U(t,s)h,Q(t,s)}(dy)&= \\exp \\biggl (-\\frac{1}{2}\\left\\Vert \\varepsilon U(t,s)h\\right\\Vert ^2_{\\mathcal {H}_{t,s}}+{}{{{*[{\\varepsilon U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{\\varepsilon U(t,s)h}{}(y)\\biggr )\\mathcal {N}_{0,Q(t,s)}(dy)\\nonumber \\\\&= \\exp \\biggl (-\\frac{1}{2}\\langle Q^{-\\frac{1}{2}}(t,s)\\varepsilon U(t,s)h,Q^{-\\frac{1}{2}}(t,s)\\varepsilon U(t,s)h\\rangle _X\\nonumber \\\\&+\\varepsilon {}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\biggr )\\mathcal {N}_{0,Q(t,s)}(dy)\\nonumber \\\\&=\\exp \\biggl (-\\frac{1}{2}\\varepsilon ^2\\left\\Vert \\Lambda (t,s)h\\right\\Vert ^2_X+\\varepsilon {}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\biggr )\\mathcal {N}_{0,Q(t,s)}(dy).", "\\nonumber $ Therefore, setting $f_\\varepsilon (y)=-\\frac{1}{2}\\varepsilon \\left\\Vert \\Lambda (t,s)h\\right\\Vert ^2_X+{}{{{*[{ U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{ U(t,s)h}{}(y),$ we get $\\frac{P_{s,t}\\varphi (x+\\varepsilon h)-P_{s,t}\\varphi (x)}{\\varepsilon }=\\int _X\\frac{\\exp (\\varepsilon f_\\varepsilon (y))-1}{\\varepsilon }\\,\\varphi (y+m^x(t,s))\\,\\mathcal {N}_{0,Q(t,s)}(dy).$ Now $&\\lim _{\\varepsilon \\rightarrow 0} f_\\varepsilon (y)={}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\ \\ \\mbox{for a.e.", "}\\ y \\nonumber \\\\&\\left|\\frac{\\exp \\Bigl (\\varepsilon f_\\varepsilon (y)\\Bigr )-1}{\\varepsilon }\\right|\\le C\\left|f_\\varepsilon (y)\\right|\\Bigl (\\exp \\left| f_\\varepsilon (y)\\right|+1\\Bigr )\\nonumber \\\\&y\\mapsto {}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\exp \\biggl ({}{{{*[{ U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{ U(t,s)h}{}(y)\\biggr )\\in L^1(X,\\mathcal {N}_{0,Q(t,s)}).\\nonumber $ Hence by the Dominated Convergence theorem we obtain $\\lim _{\\varepsilon \\rightarrow 0}\\frac{P_{s,t}\\varphi (x+\\varepsilon h)-P_{s,t}\\varphi (x)}{\\varepsilon }=\\int _X\\varphi (y+m^x(t,s)){}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\,\\mathcal {N}_{0,Q(t,s)}(dy).$ Denoting by $D_E^G(P_{s,t}\\varphi )(x)(h)$ the right hand side of (REF ) we get by (REF ) $\\left|D_E^G(P_{s,t}\\varphi )(x)h\\right|&\\le \\left\\Vert \\varphi \\right\\Vert _{\\infty }\\left\\Vert {}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule 
[-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(\\cdot )\\right\\Vert _{L^1(X,\\mathcal {N}_{0,Q(t,s)})}\\nonumber \\\\&\\le \\left\\Vert \\varphi \\right\\Vert _{\\infty }c_1\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(E; X)}\\left\\Vert h\\right\\Vert _E \\nonumber $ so that $D_E^G(P_{s,t}\\varphi )(x)\\in E^{\\prime }$ .", "This implies that $P_{s,t}\\varphi $ is $E$ -Gâteaux differentiable at $x$ and that $D_E^G(P_{s,t}\\varphi )(x)$ is its Gâteaux derivative.", "To conclude we prove that $D_E^G(P_{s,t}\\varphi ):X\\longrightarrow E^{\\prime }$ is continuous.", "Indeed for $x, x_0\\in X$ we have $&\\left|D_E^G(P_{s,t}\\varphi )(x)(h)-D_E^G(P_{s,t}\\varphi )(x_0)(h)\\right|\\nonumber \\\\&\\le \\int _X\\left|\\varphi (y+m^{x}(t,s))-\\varphi (y+m^{x_0}(t,s))\\right|\\left|{}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\right|\\,\\mathcal {N}(0,Q(t,s))(dy)\\nonumber \\\\&\\le c_2\\left\\Vert \\varphi (y+m^{x}(t,s))-\\varphi (y+m^{x_0}(t,s))\\right\\Vert _{L^2(X,\\mathcal {N}_{0,Q(t,s)})}\\left\\Vert h\\right\\Vert _X\\nonumber \\\\&\\le \\overline{c}\\left\\Vert \\varphi (y+m^{x}(t,s))-\\varphi (y+m^{x_0}(t,s))\\right\\Vert _{L^2(X,\\mathcal {N}_{0,Q(t,s)})}\\left\\Vert h\\right\\Vert _E.\\nonumber $ Hence $\\left\\Vert D_E^G(P_{s,t}\\varphi )(x)-D_E^G(P_{s,t}\\varphi )(x_0)\\right\\Vert _{E^{\\prime }}\\le \\overline{c}\\left\\Vert \\varphi (y+m^{x}(t,s))-\\varphi (y+m^{x_0}(t,s))\\right\\Vert _{L^2(X,\\mathcal {N}_{0,Q(t,s)})}$ and since $\\left|\\varphi \\bigl (y+m^x(t,s)\\bigr )-\\varphi \\bigl (y+m^{x_0}(t,s)\\bigr )\\right|\\le 2 \\left\\Vert \\varphi \\right\\Vert _\\infty $ , for $x\\rightarrow x_0$ we have the statement by the Dominated Convergence theorem.", "We prove that $P_{s,t}\\varphi \\in C^2_E(X)$ .", "We show first that the for every $(s,t)\\in \\Delta $ and $h\\in E$ , the mapping $D_E(P_{s,t}\\varphi )(\\cdot )(h): X\\longrightarrow \\mathbb {R}$ is $E$ -Gâteau differentiable.", "Setting $f_\\varepsilon (y)=-\\frac{1}{2}\\varepsilon \\left\\Vert \\Lambda (t,s)\\widetilde{h}\\right\\Vert ^2_X+{}{{{*[{U(t,s)\\widetilde{h}}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)\\widetilde{h}}{}(y),$ due to (REF ) and using again the Cameron-Martin formula, for every $\\varepsilon >0$ and $\\widetilde{h}\\in E$ we have $&\\frac{D_E(P_{s,t}\\varphi )(x+\\varepsilon \\widetilde{h})(h)-D_E(P_{s,t}\\varphi )(x)(h)}{\\varepsilon }\\nonumber \\\\&=\\frac{1}{\\varepsilon }\\biggl [\\int _X\\varphi \\bigl (y+m^x(t,s)\\bigr )\\,{}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}\\bigl (y-\\varepsilon U(t,s)\\widetilde{h}\\bigr )\\,\\mathcal {N}_{\\varepsilon U(t,s)\\widetilde{h},Q(t,s)}(dy)\\nonumber \\\\&-\\int _X\\varphi \\bigl (y+m^x(t,s)\\bigr )\\,{}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\,\\mathcal {N}_{0,Q(t,s)}(dy)\\biggr ]\\nonumber \\\\&=\\frac{1}{\\varepsilon }\\int _X\\varphi \\bigl (y+m^x(t,s)\\bigr ){}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\bigl (\\exp (\\varepsilon f_\\varepsilon (x))-1\\bigr )\\mathcal {N}_{0,Q(t,s)}(dy)\\nonumber \\\\&-\\int _X\\varphi (y+m^x(t,s))\\langle \\Lambda (t,s)h,\\Lambda (t,s)\\widetilde{h}\\rangle _X\\exp (\\varepsilon f_\\varepsilon (y))\\mathcal {N}_{0,Q(t,s)}(dy).\\nonumber $ Proceeding as in Step 2, we obtain $&\\lim 
_{\\varepsilon \\rightarrow 0}\\frac{D_E(P_{s,t}\\varphi )(x+\\varepsilon \\widetilde{h})h-D_E(P_{s,t}\\varphi )(x)h}{\\varepsilon }\\nonumber \\\\&=\\int _X\\varphi \\bigl (y+m^x(t,s)\\bigr )\\,{}{{{*[{U(t,s)h}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h}{}(y)\\,{}{{{*[{U(t,s)\\widetilde{h}}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)\\widetilde{h}}{}(y)\\,\\mathcal {N}_{0,Q(t,s)}(dy)\\nonumber \\\\&-\\langle \\Lambda (t,s)h,\\Lambda (t,s)\\widetilde{h}\\rangle _XP_{s,t}\\varphi (x).$ The right-hand side of REF is the Gâteaux derivative of $D_EP_{s,t}\\varphi (\\cdot )h$ at $x$ .", "In the same way we did in Step 2, we can show that the mapping $D_E^G(D_E(P_{s,t}\\varphi )(\\cdot )h):X\\longrightarrow E^{\\prime }$ is continuous, so that we conclude that $P_{s,t}\\varphi \\in C^2_E(X)$ .", "We prove that $P_{s,t}\\varphi \\in C^n_E(X)$ and that (REF ) holds.", "We proceed by induction and we assume that $P_{s,t}\\varphi \\in C^n_E(X)$ and that formula (REF ) holds for $D^{n}_E(P_{s,t}\\varphi )$ .", "We first show that $D^{n-1}_E(P_{s,t})\\varphi $ is $E$ - Gâteaux differentiable.", "Let $x\\in X$ , $h_1,...,h_{n-1},h_n\\in E$ and $\\varepsilon >0$ , we set $f_\\varepsilon (y)=-\\frac{1}{2}\\varepsilon \\left\\Vert \\Lambda (t,s)h_n\\right\\Vert ^2_X+{}{{{*[{U(t,s)h_n}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h_n}{}(y).$ and due to the Cameron-Martin Formula we have $&D^{n-1}_E(P_{s,t}\\varphi )(x+\\varepsilon h_n)(h_1,...,h_{n-1})\\nonumber \\\\&=\\int _X\\varphi (m^x(t,s)+y) I_{n-1}(t,s)(y-\\varepsilon U(t,s)h_n) \\exp (\\varepsilon f_\\varepsilon (y))\\nonumber \\mathcal {N}_{0,Q(t,s)}(dy).$ Moreover we have $I_{n-1}&(t,s)(y-\\varepsilon U(t,s)h_n)(h_1,...,h_{n-1})=I_{n-1}(t,s)(y)(h_1,...,h_{n-1}) \\nonumber \\\\&-\\varepsilon \\sum _{i=1}^{n-1}\\langle \\Lambda (t,s)h_i,\\Lambda (t,s)h_n\\rangle _X\\prod _{\\begin{array}{c}j=1\\\\j\\ne i\\end{array}}^{n-1}{}{{{*[{U(t,s)h_j}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h_j}{}(y)\\nonumber \\\\&-\\varepsilon \\sum _{s=1}^{r_{n-1}}(-1)^s\\sum _{\\begin{array}{c}i_1,...i_{2s},\\\\ i_{2k-1}<i_{2k},\\\\ i_{2k-1}<i_{2k+1}\\end{array}}^n\\prod _{i=1}^s\\langle \\Lambda (t,s)h_{i_{2k-1}},\\Lambda (t,s)h_{i_{2k}}\\rangle _X\\nonumber \\\\&\\times \\sum _{\\begin{array}{c}i_m=1,\\\\ i_m\\ne i_1,...,i_{2s}\\end{array}}^{n-1}\\langle \\Lambda (t,s)h_{i_m},\\Lambda (t,s)h_n\\rangle _X\\prod _{\\begin{array}{c}i_j=1\\\\i_j\\ne i_m,i_1,...,i_{2s}\\end{array}}^{n-1}{}{{{*[{U(t,s)h_{i_j}}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h_{i_j}}{}(y)+O(\\varepsilon ^2).\\nonumber $ In the same way we did in Step 2 and in Step 3, by the Dominated Convergence theorem we have $&\\lim _{\\varepsilon \\rightarrow 0}\\frac{D^{n-1}_E(P_{s,t}\\varphi )(x+\\varepsilon h_n)(h_1,...,h_{n-1})-D^{n-1}_E(P_{s,t}\\varphi )(x)(h_1,...,h_{n-1})}{\\varepsilon }=\\nonumber \\\\&=\\int _X \\varphi (m^x(t,s)+y) I_{n-1}(t,s)(y)(h_1,...,h_{n-1})\\,{}{{{*[{U(t,s)h_n}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h_n}{}(y)\\,\\mathcal {N}_{0,Q(t,s)}(dy)+\\nonumber \\\\&-\\int _X \\varphi (m^x(t,s)+y) \\Biggl [\\sum _{i=1}^{n-1}\\langle \\Lambda (t,s)h_i,\\Lambda (t,s)h_n\\rangle _X\\prod _{\\begin{array}{c}j=1\\\\j\\ne i\\end{array}}^{n-1}{}{{{*[{U(t,s)h_j}]{\\hspace{-0.6pt}\\bigwedge 
\\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h_j}{}(y)+\\nonumber \\\\&+\\sum _{s=1}^{r_{n-1}}(-1)^s\\sum _{\\begin{array}{c}i_1,...i_{2s},\\\\ i_{2k-1}<i_{2k},\\\\ i_{2k-1}<i_{2k+1}\\end{array}}^n\\prod _{i=1}^s\\langle \\Lambda (t,s)h_{i_{2k-1}},\\Lambda (t,s)h_{i_{2k}}\\rangle _X\\times \\nonumber \\\\&\\times \\sum _{\\begin{array}{c}i_m=1,\\\\ i_m\\ne i_1,...,i_{2s}\\end{array}}^{n-1}\\langle \\Lambda (t,s)h_{i_m},\\Lambda (t,s)h_n\\rangle _X\\prod _{\\begin{array}{c}i_j=1\\\\i_j\\ne i_m,i_1,...,i_{2s}\\end{array}}^{n-1}{}{{{*[{U(t,s)h_{i_j}}]{\\hspace{-0.6pt}\\bigwedge \\hspace{-0.6pt}}{\\rule [-/2]{1ex}{}}}{}}{0.5ex}}[1pt]{U(t,s)h_{i_j}}{}(y)\\Biggr ]\\,\\mathcal {N}_{0,Q(t,s)}(dy).", "\\nonumber $ It is easy to show that the right hand side of the equation above coincides with the expression of $D^n_E(P_{s,t}\\varphi )(h_1,...,h_n)$ given in (REF ).", "The same arguments of Step 2 and Step 3 imply that $P_{s,t}\\varphi \\in C^n_E(X)$ .", "Now we combine Lemma REF end Theorem REF to obtain the following result.", "Corollary 3.4 Under Hypotheses REF and REF , for every $k,n\\in \\mathbb {N}\\cup \\lbrace 0\\rbrace $ such that $k+n\\ge 1$ , $\\varphi \\in C^k_E(X)$ and $(s,t)\\in \\Delta $ , $P_{s,t}\\varphi \\in C_E^{k+n}(X)$ and $&D_E^{k+n}(P_{s,t}\\varphi )(x)(h_1,...,h_{k+n})\\nonumber \\\\&=\\int _X D_E^k\\varphi \\bigl (m^x(t,s)+y\\bigr )\\bigl (U(t,s)h_1,...,U(t,s)h_k\\bigr ) I_n(t,s)(y)(h_{k+1},...,h_{k+n})\\,\\mathcal {N}_{0,Q(t,s)}(dy).$ For every $\\alpha _2\\ge \\alpha _1\\ge 0$ there exists $C=C(\\alpha _1,\\alpha _2)$ such that $ \\left\\Vert P_{s,t}\\varphi \\right\\Vert _{C^{\\alpha _2}_E(X)}\\le C\\bigl (\\left\\Vert \\Lambda (t,s)\\right\\Vert _{\\mathcal {L}(E;X)}^{\\alpha _2-\\alpha _1}+1\\bigr )\\left\\Vert \\varphi \\right\\Vert _{C^{\\alpha _1}_E(X)},\\ \\ \\varphi \\in C^{\\alpha _1}_E(X),\\ (s,t)\\in \\Delta .$ Moreover for every $j\\in \\mathbb {N}$ , $h_1,...,h_j\\in E$ , $t\\in [0,T]$ and $\\varphi \\in C_E(X)$ the mapping $(s,x)\\in [0,t)\\times X\\longmapsto D^j_EP_{s,t}\\varphi (x)(h_1,...,h_j)\\in \\mathbb {R}$ is continuous.", "If in addition the mapping $(s,t)\\in \\Delta \\longmapsto U(t,s)\\in \\mathcal {L}(X)$ is continuous, then the function $(s,t,x)\\in \\Delta \\times X\\longmapsto D^j_E(P_{s,t}\\varphi )(x)(h_1,...,h_j)\\in \\mathbb {R},$ is continuous.", "Formula (REF ) follows applying (REF ) and (REF ).", "Moreover, by (REF ) and (REF ) there exist $C=C(k,n)>0$ such that $\\sup _{x\\in X}\\left\\Vert D_E^{k+n}(P_{s,t}\\varphi )(x)\\right\\Vert _{\\mathcal {L}^{k+n}(E)}\\le C\\left\\Vert \\Lambda (t,s)\\right\\Vert ^n_{\\mathcal {L}(E;X)}\\left\\Vert \\varphi \\right\\Vert _{C^k_E(X)},\\ \\ (s,t)\\in \\Delta ,\\ \\varphi \\in C^k_E(X),$ and for every $\\alpha \\in (0,1)$ $\\left\\Vert D_E^{k+n}(P_{s,t}\\varphi )\\right\\Vert _{C^\\alpha _E(X,\\mathcal {L}^{n+k}(E))}\\le C\\left\\Vert \\Lambda (t,s)\\right\\Vert ^n_{\\mathcal {L}(E;X)}\\left\\Vert \\varphi \\right\\Vert _{C^k_E(X)},\\ \\ (s,t)\\in \\Delta ,\\ \\varphi \\in C^{k+\\alpha }_E(X).$ We point out that if $\\alpha _1=\\alpha _2$ , then (REF ) follows from Lemma REF ; so we may assume $\\alpha _2>\\alpha _1$ .", "Now if $\\alpha _2-\\alpha _1=n\\in \\mathbb {N}$ , we set $\\alpha _1=m+\\sigma $ , where $m\\in \\mathbb {N}$ and $\\sigma \\in (0,1)$ .", "We have $&\\left\\Vert P_{s,t}\\varphi \\right\\Vert _{C^{\\alpha _2}_E}=\\left\\Vert P_{s,t}\\varphi \\right\\Vert _\\infty +\\sum _{j=1}^{m+n}\\sup _{x\\in X}\\left\\Vert D^j_E (P_{s,t}\\varphi )(x)\\right\\Vert 
_{\\mathcal {L}^j(E)}+[D_E^{m+n}P_{s,t}\\varphi ]_{C^\\sigma _E(X)}\\nonumber \\\\&\\le \\left\\Vert \\varphi \\right\\Vert _\\infty +\\sum _{j=1}^{m}\\sup _{x\\in X}\\left\\Vert D_E^j \\varphi (x)\\right\\Vert _{\\mathcal {L}^j(E)}+\\sum _{j=1}^{n}\\left\\Vert \\Lambda (t,s)\\right\\Vert ^j_{\\mathcal {L}(E;X)}\\sup _{x\\in X}\\left\\Vert D_E^m\\varphi (x)\\right\\Vert _{\\mathcal {L}^m(E)}+\\nonumber \\\\&+\\left\\Vert \\Lambda (t,s)\\right\\Vert ^n_{\\mathcal {L}(E;X)}[D_E^{m}\\varphi ]_{C^\\sigma _E(X)}\\nonumber \\\\&\\le \\left\\Vert \\varphi \\right\\Vert _\\infty +\\sum _{j=1}^{m}\\sup _{x\\in X}\\left\\Vert D_E^j \\varphi (x)\\right\\Vert _{\\mathcal {L}^j(E)}+n\\bigl (1+\\left\\Vert \\Lambda (t,s)\\right\\Vert ^n_{\\mathcal {L}(E;X)}\\bigr )\\sup _{x\\in X}\\left\\Vert D_E^m\\varphi (x)\\right\\Vert _{\\mathcal {L}^m(E)}+\\nonumber \\\\&+\\left\\Vert \\Lambda (t,s)\\right\\Vert ^n_{\\mathcal {L}(E;X)}[D_E^{m}\\varphi ]_{C^\\sigma _E(X)}\\le 2n\\bigl (1+\\left\\Vert \\Lambda (t,s)\\right\\Vert ^n_{\\mathcal {L}(E;X)}\\bigr )\\left\\Vert \\varphi \\right\\Vert _{C^{\\alpha _1}_E(X)}.\\nonumber $ If $\\alpha _2-\\alpha _1\\notin \\mathbb {N}$ , we set $\\alpha _2=\\alpha _1+n+\\sigma $ with $n\\in \\mathbb {N}\\cup \\lbrace 0\\rbrace $ and $\\sigma \\in (0,1)$ .", "We apply the following interpolation inequality $\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha _2}_E(X)}\\le \\left\\Vert \\psi \\right\\Vert _{C^{\\alpha _1+n}_E(X)}^{1-\\sigma }\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha _1+n+1}_E(X)}^\\sigma ,\\ \\ \\psi \\in C^{\\alpha _1+n+1}_E(X)$ to $\\psi =P_{s,t}\\varphi $ .", "Then, due to (REF ) with $\\alpha _1$ replaced by $\\alpha _1+n$ and $\\alpha _2$ by $\\alpha _1+n+1$ , we have (REF ) in the general case.", "The statement about the continuity of the derivatives is a consequence of Lemma REF and Theorem REF .", "Indeed for a fixed $t>0$ and $\\varepsilon \\in (0,t)$ , we have $P_{s,t}\\varphi =P_{s,t-\\varepsilon }P_{t-\\varepsilon ,t},\\ \\ 0\\le s< t-\\varepsilon \\le T.$ Since $\\psi =P_{t-\\varepsilon ,t}\\varphi \\in C^k_E(X)$ by Theorem REF , the function $(s,x)\\in [0,t-\\varepsilon ]\\times X\\longmapsto D_E^k P_{s,t-\\varepsilon }\\psi (x)(h_1,...,h_j)=D_E^kP_{s,t}\\varphi (h)(h_1,...,h_j)$ is continuous by Lemma REF .", "The proof of the last claim is similar.", "Remark 3.5 The interpolation inequality (REF ) is proved in in the case $E=X$ with equivalent norms.", "Anyway the proof of the general case is analogous.", "In the following propositions we give some examples of Hilbert spaces satisfying Hypothesis REF .", "We start with two preliminary results.", "Proposition 3.6 Let $X,X_1,X_2$ be Hilbert spaces and $L_1:X_1\\longrightarrow X$ , $L_2:X_2\\longrightarrow X$ be linear bounded operators.", "The following statements hold.", "$\\mbox{\\mdseries {Range}}(L_1)\\subseteq \\mbox{\\mdseries {Range}}(L_2)$ if and only if there exists a constant $C>0$ such that $\\left\\Vert L_1^\\star x\\right\\Vert _{X_1}\\le C\\left\\Vert L_2^\\star x\\right\\Vert _{X_2},\\ \\ x\\in X.$ In this case $\\left\\Vert L^{-1}_2L_1\\right\\Vert _{\\mathcal {L}(X_1,X_2)}\\le C$ ; more precisely $\\left\\Vert L^{-1}_2L_1\\right\\Vert _{\\mathcal {L}(X_1,X_2)}=\\inf \\lbrace C>0\\ s.t.\\ (\\ref {cstar})\\ holds\\rbrace $ If $\\left\\Vert L_1^\\star x\\right\\Vert _{X_1}=\\left\\Vert L_2^\\star x\\right\\Vert _{X_2}$ for every $x\\in X$ then $\\mbox{\\mdseries {Range}}(L_1)=\\mbox{\\mdseries {Range}}(L_2)$ and $\\left\\Vert L^{-1}_1 x\\right\\Vert _{X_1}=\\left\\Vert L^{-1}_2 
x\\right\\Vert _{X_2}$ for every $x\\in X$ .", "See Proposition B.1 in Appendix B in .", "Lemma 3.7 The following properties hold.", "the mapping $s\\longmapsto Q(t,s)$ is continuous with values in $\\mathcal {L}(X)$ and it is decreasing, namely $\\langle Q(t,s_1)x,x\\rangle _X\\ge \\langle Q(t,s_2)x,x\\rangle _X \\ \\ \\ \\mbox{for any}\\ \\ \\ 0\\le s_1\\le s_2 < t< T.$ $\\mathcal {H}_{t,s_2}\\subseteq \\mathcal {H}_{t,s_1}$ for every $0\\le s_1<s_2 < t\\le T$ and the norm of the embedding is 1.", "${\\rm Ker}(Q(t,s_1))\\subseteq {\\rm Ker}(Q(t,s_2))\\subseteq {\\rm Ker}(Q(t)) \\ \\mbox{for any}\\ \\ 0\\le s_1\\le s_2 < t< T$ .", "We prove that the mapping $s\\longmapsto Q(t,s)$ is Lipschitz continuous.", "Let $0\\le s_1\\le s_2 < t< T$ and $x\\in X$ , we have $\\left\\Vert Q(t,s_1)x-Q(t,s_2)x\\right\\Vert _X= \\left\\Vert \\int _{s_1}^{s_2}U(t,r) Q(r) U^\\star (t,r) x dr\\right\\Vert _X\\le M^2 K^2\\left\\Vert x\\right\\Vert _X \\left|s_2-s_1\\right| \\nonumber $ and we obtain $\\left\\Vert Q(t,s_1)-Q(t,s_2)\\right\\Vert _{\\mathcal {L}(X)}\\le M^2 K^2 \\left|s_2-s_1\\right|, \\nonumber $ where $M$ and $K$ are the costants defined in hypotesis REF and in (REF ).", "Moreover $\\langle Q(t,s_2) x,x\\rangle _X&=\\biggl \\langle \\int _{s_2}^t U(t,r)Q(r)U^\\star (t,r)x\\,dr,x\\biggr \\rangle _X= \\nonumber \\\\&=\\int _{s_2}^t \\langle U(t,r)Q^{\\frac{1}{2}}(r)Q^{\\frac{1}{2}}(r)U^\\star (t,r)x,x\\rangle _X\\,dr=\\nonumber \\\\&=\\int _{s_2}^t\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr\\le \\nonumber \\\\&\\le \\int _{s_1}^t\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr=\\langle Q(t,s_1) x,x\\rangle _X.\\nonumber $ The continuous embedding $\\mathcal {H}_{t,s_2}\\subseteq \\mathcal {H}_{t,s_1}$ is an immediate consequence of proposition REF and statement 1, observing that $\\left\\Vert Q^{\\frac{1}{2}}(t,s) x\\right\\Vert _X^2=\\langle Q(t,s) x,x\\rangle _X$ .", "Let $x\\in {\\rm Ker}(Q(t,s_1))$ .", "Since $\\displaystyle {\\langle Q(t,s_1) x,x\\rangle _X=\\int _{s_1}^t\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr},$ we obtain $Q^{\\frac{1}{2}}(r)U^\\star (t,r)x=0$ for every $r\\in (s_1,t)$ and in particular $x\\in {\\rm Ker}(Q(t,s_2))$ .", "Let $x\\in {\\rm Ker}(Q(t,s_2))$ , then $Q^{\\frac{1}{2}}(r)U^\\star (t,r)x=0$ for every $r\\in (s_2,t)$ .", "For $y\\in X$ we have $0=\\langle Q^{\\frac{1}{2}}(r)U^\\star (t,r)x,y\\rangle _X=\\langle x,U(t,r)Q^{\\frac{1}{2}}(r)y\\rangle _X, \\ \\ s_2<r<t.\\nonumber $ For $r\\rightarrow t^-$ we obtain $0=\\langle x,Q^{\\frac{1}{2}}(t)y\\rangle _X=\\langle Q^{\\frac{1}{2}}(t)x,y\\rangle _X,\\ \\ \\ \\mbox{for all}\\ \\ y\\in X.$ Hence $Q^{\\frac{1}{2}}(t)x=0$ and since ${\\rm Ker}(Q(t))={\\rm Ker}(Q^{\\frac{1}{2}}(t))$ , the statement holds.", "Proposition 3.8 We assume that $U(t,s)(\\mathrm {H}_s)\\subseteq \\mathrm {H}_t$ for any $(s,t)\\in \\Delta $ and that there exists $M>0$ such that $ \\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(\\mathrm {H}_s,\\mathrm {H}_t)}\\le M,\\ \\ (s,t)\\in \\Delta .$ Then the following statements holds.", "For every $0\\le s < t\\le T$ , $U(t,s)(\\mathrm {H}_s)\\subseteq \\mathcal {H}_{t,s}$ and $(t-s)^{\\frac{1}{2}}\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(\\mathrm {H}_s,\\mathcal {H}_{t,s})}=(t-s)^{\\frac{1}{2}}\\left\\Vert Q^{-\\frac{1}{2}}(t,s)U(t,s)\\right\\Vert _{\\mathcal {L}(\\mathrm {H}_s,X)}\\le M.$ For every $0\\le s < t\\le T$ , $U(t,s)^\\star (\\ker (Q(t)))\\subseteq {\\rm Ker}(Q(s))$ and ${\\rm Ker}(Q(t,s))={\\rm 
Ker}(Q(t))$ .", "For every $0\\le s_1<s_2 < t\\le T$ , $\\mathcal {H}_{t,s_1}=\\mathcal {H}_{t,s_2}$ and their norms are equivalent.", "For every $0\\le s < t\\le T$ , $\\mathcal {H}_{t,s}$ is continuously embedded in $\\mathrm {H}_t$ .", "We consider the operator $L$ from remark REF .", "We know that $\\mbox{\\mdseries {Range}}(L)=\\mbox{\\mdseries {Range}}(Q^{\\frac{1}{2}}(t,s))$ .", "On the other hand, for every $x\\in \\mathrm {H}_s$ we have $(t-s)U(t,s)x&=\\int _s^tU(t,s)x\\,d\\sigma =\\int _s^tU(t,\\sigma )U(\\sigma ,s)x\\,d\\sigma = \\nonumber \\\\&=\\int _s^tU(t,\\sigma )Q^{\\frac{1}{2}}(\\sigma )Q^{-\\frac{1}{2}}(\\sigma ) U(\\sigma ,s)x\\,d\\sigma .$ where $y(\\sigma )=Q^{-\\frac{1}{2}}(\\sigma ) U(\\sigma ,s)x$ belongs to $L^\\infty \\bigl ((s,t);X\\bigr )\\subseteq L^2\\bigl ((s,t);X\\bigr )$ .", "Hence $U(t,s)x $ belongs to the range of $L$ , and by (REF ) we get $\\left\\Vert Q^{-\\frac{1}{2}}(t,s)((t-s)U(t,s)x)\\right\\Vert _X&\\le \\left\\Vert y\\right\\Vert _{L^2((s,t);X)}\\le (t-s)^{\\frac{1}{2}}\\sup _{s<\\sigma <t}\\left\\Vert Q^{-\\frac{1}{2}}(\\sigma ) U(\\sigma , s)x\\right\\Vert _X\\le \\nonumber \\\\&\\le (t-s)^{\\frac{1}{2}} M \\left\\Vert x\\right\\Vert _{\\mathrm {H}_s}.", "\\nonumber $ We first remark some facts.", "Since $U(t,s)(\\mathrm {H}_s)\\subseteq \\mathrm {H}_t$ , by continuity we get $U(t,s)(\\overline{\\mathrm {H}_s})\\subseteq \\overline{\\mathrm {H}_t}$ .", "${\\rm Ker}(Q^{\\frac{1}{2}}(t))={\\rm Ker}(Q(t))$ , $\\bigl ({\\rm Ker}(Q(t))\\bigr )^{\\perp }=\\overline{\\mathrm {H}_t}$ and consequently $\\overline{\\mathrm {H}_t}=(I-P_t)(X)$ , where we denote by $P_t$ the orthogonal projection on ${\\rm Ker}(Q(t))$ .", "By statements (a) and (b) we have $U(t,s)^\\star (\\ker (Q(t)))\\subseteq {\\rm Ker}(Q(s))$ .", "In fact $U(t,s)(\\overline{\\mathrm {H}_s})\\subseteq \\overline{\\mathrm {H}_t}&\\iff U(t,s)(I-P_s)(X)\\subseteq (I-P_t)(X)\\nonumber \\\\&\\iff P_t U(t,s)(I-P_s)= 0.", "\\nonumber $ Hence $(I-P_s)^\\star U^\\star (t,s)P_t^\\star =(I-P_s) U^\\star (t,s)P_t=0,$ namely $U(t,s)^\\star (\\ker (Q(t)))\\subseteq {\\rm Ker}(Q(s))$ .", "For every $x\\in {\\rm Ker}(Q(t)), $ since $\\left\\Vert Q^{\\frac{1}{2}}(t,s) x\\right\\Vert _X^2=\\langle Q(t,s) x,x\\rangle _X=\\int _{s}^t\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr,$ we obtain $x\\in {\\rm Ker}(Q(t,s))$ .", "Conversely, the embedding of ${\\rm Ker}(Q(t,s))$ in ${\\rm Ker}(Q(t))$ follows from statement 3 of lemma REF .", "The continuous embedding $\\mathcal {H}_{t,s_2}\\subseteq \\mathcal {H}_{t,s_1}$ is statement 2 of lemma REF .", "Concerning the reverse embedding, we first point out that the adjoint of the operator $U(t,s)_{|_{\\mathrm {H}_s}}:\\mathrm {H}_s\\longrightarrow \\mathrm {H}_t$ is the operator $(U(t,s)_{|_{\\mathrm {H}_s}})^\\star :\\mathrm {H}_t\\longrightarrow \\mathrm {H}_s$ such that $\\langle x,(U(t,s)_{|_{\\mathrm {H}_s}})^\\star y\\rangle _{\\mathrm {H}_s}=\\langle U(t,s)_{|_{\\mathrm {H}_s}}x,y\\rangle _{\\mathrm {H}_t},\\ \\ \\ \\mbox{for all}\\ x\\in \\mathrm {H}_s,\\ y\\in \\mathrm {H}_t.$ Now we claim that $(U(t,s)_{|_{\\mathrm {H}_s}})^\\star Q(t) x=Q(s)U^\\star (t,s) x,\\ \\ x\\in X,\\ (s,t)\\in \\Delta ,$ where $U^\\star (t,s)$ denotes the adjoint operator of $U(t,s)\\in \\mathcal {L}(X)$ .", "Indeed for all $h\\in \\mathrm {H}_s$ we have that $(I-P_s)h=h$ and $\\langle (U(t,s)_{|_{\\mathrm {H}_s}})^\\star Q(t) x,h\\rangle _{\\mathrm {H}_s}&=\\langle Q(t) x,U(t,s)_{|_{\\mathrm {H}_s}}h\\rangle _{\\mathrm {H}_t}=\\langle x,U(t,s)h\\rangle 
_X\\nonumber \\\\&=\\langle U^\\star (t,s)x,h\\rangle _X=\\langle Q(s)U^\\star (t,s)x,h\\rangle _{\\mathrm {H}_s}\\nonumber \\\\&=\\langle Q(s)U^\\star (t,s) x,h\\rangle _{\\mathrm {H}_s}, \\nonumber $ where the last equality follows from statement 2.", "Hence we get for $0\\le s_1<s_2<t\\le T$ $\\left\\Vert Q^{\\frac{1}{2}}(t,s_1) x\\right\\Vert _X^2&=\\int _{s_1}^t\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr\\nonumber \\\\&=\\int _{s_1}^{s_2}\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr+\\int _{s_2}^t\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr \\nonumber \\\\&=\\int _{s_2}^{2s_2-s_1}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _0+s_1-s_2)U^\\star (t,\\sigma _0+s_1-s_2)x\\right\\Vert ^2_X\\,d\\sigma _0+\\left\\Vert Q^{\\frac{1}{2}}(t,s_2) x\\right\\Vert _X^2, \\nonumber $ where $\\sigma _0=r-s_1+s_2$ in the first integral.", "If $t>2s_2-s_1$ by (REF ), we have $&\\int _{s_2}^{2s_2-s_1}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _0+s_1-s_2)U^\\star (t,\\sigma _0+s_1-s_2)x\\right\\Vert ^2_X\\,d\\sigma _0\\nonumber \\\\&\\le \\int _{s_2}^{t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _0+s_1-s_2)U^\\star (t,\\sigma _0+s_1-s_2)x\\right\\Vert ^2_X\\,d\\sigma _0\\nonumber \\\\&=\\int _{s_2}^{t}\\left\\Vert Q(\\sigma _0+s_1-s_2)U^\\star (\\sigma _0,\\sigma _0+s_1-s_2)U^\\star (t,\\sigma _0)x\\right\\Vert ^2_{E_{\\sigma _0+s_1-s_2}}\\,d\\sigma _0\\nonumber \\\\&=\\int _{s_2}^{t}\\left\\Vert (U(\\sigma _0,\\sigma _0+s_1-s_2)_{|_{E_{\\sigma _0+s_1-s_2}}})^\\star Q(\\sigma _0)U^\\star (t,\\sigma _0)x\\right\\Vert ^2_{E_{\\sigma _0+s_1-s_2}}\\,d\\sigma _0\\nonumber \\\\&\\le M^2 \\int _{s_2}^{t}\\left\\Vert Q(\\sigma _0)U^\\star (t,\\sigma _0)x\\right\\Vert ^2_{E_{\\sigma _0+s_1-s_2}}\\,d\\sigma _0\\nonumber \\\\&= M^2 \\int _{s_2}^{t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _0)U^\\star (t,\\sigma _0)x\\right\\Vert ^2_X\\,d\\sigma _0= M^2 \\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X.\\nonumber $ Hence $\\left\\Vert Q^{\\frac{1}{2}}(t,s_1) x\\right\\Vert _X^2\\le (1+ M^2 )\\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X.$ If $s_2<t\\le 2s_2-s_1$ , by (REF ) we have $&\\int _{s_2}^{2s_2-s_1}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _0+s_1-s_2)U^\\star (t,\\sigma _0+s_1-s_2)x\\right\\Vert ^2_X\\,d\\sigma _0=\\nonumber \\\\&=\\int _{s_2}^t+\\int _t^{2s_2-s_1}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _0+s_1-s_2)U^\\star (t,\\sigma _0+s_1-s_2)x\\right\\Vert ^2_X\\,d\\sigma _0\\le \\nonumber \\\\&\\le M^2 \\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X+\\nonumber \\\\&+\\int _{s_2}^{3s_2-s_1-t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _1+t+s_1-2s_2)U^\\star (t,\\sigma _1+t+s_1-2s_2)x\\right\\Vert ^2_X\\,d\\sigma _1, \\nonumber $ where $\\sigma _1=\\sigma _0-t+s_2$ .", "If $t>3s_2-s_1-t\\iff t>\\frac{3s_2-s_1}{2}$ , hence for any $t\\in \\Bigl (\\frac{3s_2-s_1}{2},2s_2-s_1\\Bigr ]$ we have $&\\int _{s_2}^{3s_2-s_1-t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _1+t+s_1-2s_2)U^\\star (t,\\sigma _1+t+s_1-2s_2)x\\right\\Vert ^2_X\\,d\\sigma _0\\le \\nonumber \\\\&\\le \\int _{s_2}^{t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _1+t+s_1-2s_2)U^\\star (t,\\sigma _1+t+s_1-2s_2)x\\right\\Vert ^2_X\\,d\\sigma _1 \\nonumber $ and by (REF ) $&\\int _{s_2}^{t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _1+t+s_1-2s_2)U^\\star (t,\\sigma _1+t+s_1-2s_2)x\\right\\Vert ^2_X\\,d\\sigma _1=\\nonumber \\\\&=\\int _{s_2}^{t}\\left\\Vert Q(\\sigma _1+t+s_1-2s_2)U^\\star (\\sigma _1,\\sigma _1+t+s_1-2s_2)U^\\star (t,\\sigma _1)x\\right\\Vert ^2_{E_{\\sigma 
_1+t+s_1-2s_2}}\\,d\\sigma _1=\\nonumber \\\\&=\\int _{s_2}^{t}\\left\\Vert (U(\\sigma _1,\\sigma _1+t+s_1-2s_2)_{E_{\\sigma _1+t+s_1-2s_2}})^\\star Q(\\sigma _1)U^\\star (t,\\sigma _1)x\\right\\Vert ^2_{E_{\\sigma _1+t+s_1-2s_2}}\\,d\\sigma _1=\\nonumber \\\\&\\le M^2 \\left\\Vert Q^{\\frac{1}{2}}(t,s_2)\\right\\Vert ^2_X.", "\\nonumber $ Adding up, $ \\left\\Vert Q^{\\frac{1}{2}}(t,s_1) x\\right\\Vert _X^2\\le (1+ 2M^2)\\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X.$ If $\\displaystyle {s_2<t\\le \\frac{3s_2-s_1}{2}}$ $&\\int _{s_2}^{3s_2-s_1-t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _1+t+s_1-2s_2)U^\\star (t,\\sigma _1+t+s_1-2s_2)x\\right\\Vert ^2_X\\,d\\sigma _1=\\nonumber \\\\&=\\int _{s_2}^t+\\int _t^{3s_2-s_1-t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _1+t+s_1-2s_2)U^\\star (t,\\sigma _1+t+s_1-2s_2)x\\right\\Vert ^2_X\\,d\\sigma _1\\le \\nonumber \\\\&\\le \\int _{s_2}^{4s_2-s_1-2t}\\left\\Vert Q^{\\frac{1}{2}}(\\sigma _2+2t+s_1-3s_2)U^\\star (t,\\sigma _2+2t+s_1-3s_2)x\\right\\Vert ^2_X\\,d\\sigma _2+ \\nonumber \\\\&+M^2 \\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X,$ where $\\sigma _2=\\sigma _1-t+s_2$ .", "Let $k\\in \\mathbb {N}$ .", "Now we prove the following.", "If $t\\in \\Bigl (\\frac{(k+2)s_2-s_1}{k+1},\\frac{(k+1)s_2-s_1}{k}\\Bigr ]$ , then $\\left\\Vert Q^{\\frac{1}{2}}(t,s_1) x\\right\\Vert _X^2&\\le \\bigl [1+(k+1) M^2 \\bigr ]\\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X.$ If $t\\in \\Bigl (s_2,\\frac{(k+2)s_2-s_1}{k+1}\\Bigl ]$ , then $ &\\int _{s_2}^{s_k}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_k\\bigr )U^{\\star }\\bigl (t,t_k\\bigr )\\right\\Vert ^2_X\\,d\\sigma _k\\le \\int _{s_2}^{s_{k+1}}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_{k+1}\\bigr )U^{\\star }\\bigl (t,t_{k+1}\\bigr )\\right\\Vert ^2_X\\,d\\sigma _{k+1}\\nonumber \\\\&+M^2\\left\\Vert Q^{\\frac{1}{2}}(t,s_2)\\right\\Vert ^2_X,$ where $&s_k=(k+2)s_2-s_1-kt, \\nonumber \\\\&t_k=\\sigma _k+kt+s_1-(k+1)s_2, \\nonumber \\\\&\\sigma _{k+1}=\\sigma _k-t+s_2.", "\\nonumber $ We procede by induction over $k$ .", "For $k=1$ the (REF ) and (REF ) have been already proven in (REF ) and (REF ).", "We assume now (REF ) and (REF ) hold for a given $k\\in \\mathbb {N}$ and we prove that (REF ) and (REF ) hold for $k+1$ .", "Let $t\\in \\Bigl (s_2,\\frac{(k+2)s_2-s_1}{k+1}\\Bigl ]$ .", "If $t>s_{k+1}\\iff t>\\frac{(k+3)s_2-s_1}{k+2}$ , for any $t\\in \\Bigl (\\frac{(k+3)s_2-s_1}{k+2},\\frac{(k+2)s_2-s_1}{k+1}\\Bigr ]$ we have $\\int _{s_2}^{s_{k+1}}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_{k+1}\\bigr )U^{\\star }\\bigl (t,t_{k+1}\\bigr )\\right\\Vert ^2_X\\,d\\sigma _{k+1}\\le \\int _{s_2}^{t}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_{k+1}\\bigr )U^{\\star }\\bigl (t,t_{k+1}\\bigr )\\right\\Vert ^2_X\\,d\\sigma _{k+1}\\nonumber $ and by (REF ) $&\\int _{s_2}^{t}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_{k+1}\\bigr )U^{\\star }\\bigl (t,t_{k+1}\\bigr )\\right\\Vert ^2_X\\,d\\sigma _{k+1}=\\int _{s_2}^{t}\\left\\Vert Q\\bigl (t_{k+1}\\bigr )U^{\\star }\\bigl (t,t_{k+1}\\bigr )x\\right\\Vert ^2_{E_{t_{k+1}}}\\,d\\sigma _{k+1}=\\nonumber \\\\&=\\int _{s_2}^{t}\\left\\Vert (U(\\sigma _{k+1},t_{k+1})_{|_{E_{t_{k+1}}}})^\\star Q(\\sigma _{k+1})U^\\star (t,\\sigma _{k+1})x\\right\\Vert ^2_{E_{t_{k+1}}}\\,d\\sigma _{k+1}=\\nonumber \\\\&\\le M^2 \\left\\Vert Q^{\\frac{1}{2}}(t,s_2)\\right\\Vert ^2_X.", "\\nonumber $ Finally $\\left\\Vert Q^{\\frac{1}{2}}(t,s_1) x\\right\\Vert _X^2&\\le \\bigl [1+ (k+2)M^2 \\bigr ]\\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X \\nonumber $ Now if $t\\in \\Bigl 
(s_2,\\frac{(k+3)s_2-s_1}{k+2}\\Bigl ]$ , then $&\\int _{s_2}^{s_{k+1}}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_{k+1}\\bigr )U^{\\star }\\bigl (t,t_{k+1}\\bigr )\\right\\Vert ^2_X\\,d\\sigma _{k+1}=\\int _{s_2}^{t}+\\int _{t}^{s_{k+1}}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_{k+1}\\bigr )U^{\\star }\\bigl (t,t_{k+1}\\bigr )\\right\\Vert ^2_X\\,d\\sigma _{k+1}\\nonumber \\\\&\\le \\int _{s_2}^{s_{k+2}}\\left\\Vert Q^{\\frac{1}{2}}\\bigl (t_{k+2}\\bigr )U^{\\star }\\bigl (t,t_{k+2}\\bigr )\\right\\Vert ^2_X\\,d\\sigma _{k+2}+M^2 \\left\\Vert Q^{\\frac{1}{2}}(t,s_2)\\right\\Vert ^2_X.", "\\nonumber $ Therefore we have obtained that $\\left\\Vert Q^{\\frac{1}{2}}(t,s_1) x\\right\\Vert _X^2&\\le (1+ M^2)\\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X,\\ \\ t\\in (2s_2-s_1,+\\infty )\\nonumber $ and if $k\\in \\mathbb {N}\\setminus \\lbrace 0\\rbrace $ and $t\\in \\Bigl (\\frac{(k+2)s_2-s_1}{k+1},\\frac{(k+1)s_2-s_1}{k}\\Bigr ]$ , we have $\\left\\Vert Q^{\\frac{1}{2}}(t,s_1) x\\right\\Vert _X^2&\\le \\bigl [1+(k+1) M^2 \\bigr ]\\left\\Vert Q^{\\frac{1}{2}}(t,s_2)x\\right\\Vert ^2_X.", "\\nonumber $ For $(s,t)\\in \\Delta $ and $x\\in X$ we use again (REF ) and we have $\\left\\Vert Q^{\\frac{1}{2}}(t,s)x\\right\\Vert _X&=\\int _s^t\\left\\Vert Q^{\\frac{1}{2}}(r)U^\\star (t,r)x\\right\\Vert ^2_X\\,dr=\\int _s^t\\left\\Vert Q(r)U^\\star (t,r)x\\right\\Vert ^2_{E_r}\\,dr=\\nonumber \\\\&=\\int _s^t\\left\\Vert (U(t,r)_{|_{E_r}})^\\star Q(t)x\\right\\Vert ^2_{E_r}\\,dr\\le M^2 (t-s) \\left\\Vert Q^{\\frac{1}{2}}(t)x\\right\\Vert ^2_X\\nonumber .$ By proposition REF we the statement follows." ], [ "Schauder type theorems", "In this section we assume that Hypotheses REF , REF , REF hold and we prove maximal Hölder regularity for the mild solution of problem (REF ), that is given by the formula $u(s,x)=P_{s,t}\\varphi (x)-\\int _s^t\\bigl (P_{s,\\sigma }\\psi (\\sigma ,\\cdot )\\bigr )(x)\\,d\\sigma ,\\ \\ (s,t)\\in \\Delta ,\\ x\\in X,$ where $\\varphi \\in C_b(X)$ and $\\psi \\in C_b\\bigl ([0,t]\\times X\\bigl )$ .", "We argue as in the papers and .", "More precisely, we rewrite for the reader the proofs of Theorems 3.12 and 3.13 of in this setting.", "We denote $u_0(s,x)&=P_{s,t}\\varphi (x),\\ \\ (s,t)\\in \\Delta ,\\ x\\in X\\nonumber \\\\u_1(s,x)&=-\\int _s^t\\bigl (P_{s,\\sigma }\\psi (\\sigma ,\\cdot )\\bigr )(x)\\,d\\sigma ,\\ \\ (s,t)\\in \\Delta ,\\ x\\in X.\\nonumber $ We have immediately the following Corollary 4.1 For every $n\\in \\mathbb {N}$ there exists $K_n>0$ such that $\\left\\Vert D^n_EP_{s,t}\\varphi (x)\\right\\Vert _{\\mathcal {L}^n(E)}\\le \\frac{K_n}{(t-s)^{n\\theta }}\\left\\Vert \\varphi \\right\\Vert _\\infty ,\\ \\ (s,t)\\in \\Delta ,\\ \\varphi \\in C_b(X).$ For every $\\alpha \\in (0,1)$ and $n\\in \\mathbb {N}$ there exists $K_{n,\\alpha }>0$ such that $\\left\\Vert D^n_EP_{s,t}\\varphi (x)\\right\\Vert _{\\mathcal {L}^n(E)}\\le \\frac{K_{n,\\alpha }}{(t-s)^{(n-\\alpha )\\theta }}\\left\\Vert \\varphi \\right\\Vert _{C^\\alpha _b(X)},\\ \\ (s,t)\\in \\Delta ,\\ \\varphi \\in C_b(X).$ Estimates (REF ) and (REF ) follow from Theorem REF and Corollary REF taking into account Hypothesis REF .", "Remark 4.2 Corollary REF allow us to extend continuously to $\\lbrace t\\rbrace \\times X$ the mapping $(s,x)\\in [0,t)\\times X\\longmapsto D^k_Eu_0(s,\\cdot )(x)(h_1,...,h_k)\\in \\mathbb {R}.$ Indeed, for $x_0,x\\in X$ and $s\\in [0,t)$ we have $ \\left|D^k_Eu_0(s,\\cdot )(x)(h_1,...,h_k)\\right|&\\le \\prod _{j=1}^k\\left\\Vert h_j\\right\\Vert \\int _s^t\\left\\Vert 
D^k_E P_{s,\\sigma }\\varphi (\\cdot )(x)\\right\\Vert _{\\mathcal {L}^k(E)}\\,d\\sigma \\nonumber \\\\&\\le K_k \\left\\Vert \\varphi \\right\\Vert _\\infty \\prod _{j=1}^k\\left\\Vert h_j\\right\\Vert _E(t-s)^{1-k\\sigma },$ where $h_1,...,h_k\\in E$ and $k\\in {1,...,n}$ .", "As $s\\rightarrow t^-$ and $x\\rightarrow x_0 $ the right hand side in (REF ) vanishes and we get $D^k_Eu_0(t,\\cdot )(x_0)(h_1,...,h_k)=0,\\ \\ x_0\\in X.$ Proposition 4.3 For every $t\\in [0,T]$ and $\\psi \\in C_b\\bigl ([0,t]\\times X\\bigr )$ , $u_1$ is continuous and $\\left\\Vert u_1\\right\\Vert _\\infty \\le t\\left\\Vert \\psi \\right\\Vert _\\infty .$ Moreover the following statements hold.", "Let $\\theta <1$ .", "For every $n\\in \\mathbb {N}$ such that $\\displaystyle {n<\\frac{1}{\\theta }}$ , the function $u_1$ belongs to $C^{0,n}_E\\bigl ([0,t]\\times X\\bigr )$ and there exists $C>0$ , independent of $\\psi $ and $t$ , such that $\\left\\Vert u_1\\right\\Vert _{C^{0,n}_E([0,t]\\times X)}\\le C \\left\\Vert \\psi \\right\\Vert _\\infty .$ Let $\\alpha \\in (0,1)$ be such that $\\displaystyle {\\alpha +\\frac{1}{\\theta }>1}$ .", "For every $\\psi \\in C^\\alpha _E(X)$ and for every $n\\in \\mathbb {N}$ such that $\\displaystyle {n<\\alpha +\\frac{1}{\\theta }}$ , the function $u_1$ belongs to $C^{0,n}_E\\bigl ([0,t]\\times X\\bigr )$ and there exists $C>0$ , independent of $\\psi $ and $t$ , such that $\\left\\Vert u_1\\right\\Vert _{C^{0,n}_E([0,t]\\times X)}\\le C\\left\\Vert \\psi \\right\\Vert _{C^{0,\\alpha }_E([0,t]\\times X)}.$ Since estimate (REF ) is obvious, we prove that u is continuous.", "Let $0\\le s_0\\le s< t\\le T$ and $x,x_0\\in X$ .", "Then $\\left|u_1(s,x)-u_1(s_0,x_0)\\right|\\le \\int _s^t\\left|P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)-P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\,d\\sigma +\\int _{s_0}^s\\left|P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\,d\\sigma \\nonumber \\\\=\\int _{s_0}^t\\mathbb {1}_{[s,t]}(\\sigma )\\left|P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)-P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\,d\\sigma +\\int _{s_0}^s\\left|P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\,d\\sigma .", "\\nonumber $ Since for every $\\sigma \\ge s_0$ the mapping $(s,x)\\in [0,T]\\times X\\longmapsto \\mathbb {1}_{[s,t]}(\\sigma )P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)\\in \\mathbb {R},$ is continuous () and $\\left|P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)-P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\le 2\\left\\Vert \\psi \\right\\Vert _\\infty .$ By the Dominated Convergence Theorem the first integral vanishes as $s\\rightarrow s_0^+$ and $x\\rightarrow x_0$ .", "Moreover, since $\\int _{s_0}^s\\left|P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\,d\\sigma \\le \\left\\Vert \\psi \\right\\Vert _\\infty (s-s_0),$ even the second integral vanishes as $s\\rightarrow s_0^+$ and $x\\rightarrow x_0$ .", "If $s<s_0$ we split $\\left|u_1(s,x)-u_1(s_0,x_0)\\right|\\le \\int _{s_0}^t \\left|P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)-P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\,d\\sigma +\\int ^{s_0}_s\\left|P_{s_0,\\sigma }\\psi (\\sigma ,\\cdot )(x_0)\\right|\\,d\\sigma $ and the proof is analogous.", "So $u_1$ is continuous.", "Concerning statements (1) and (2), due to Corollary REF we can apply the Dominated Convergence Theorem to obtain $\\frac{\\partial u_1}{\\partial h_1,...,\\partial h_k}(s,x)=\\int _s^t D^k P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)(h_1,...,h_k)\\, d\\sigma ,\\ \\ s\\in [0,t),\\ x\\in X,$ 
for every $h_1,...,h_k\in E$ and $k\in \lbrace 1,...,n\rbrace $ .", "The proof of the continuity of the mapping $(s,x)\in [0,t)\times X\longmapsto D^k_Eu_1(s,\cdot )(x)(h_1,...,h_k)\in \mathbb {R},$ for every $h_1,...,h_k\in E$ and $k\in \lbrace 1,...,n\rbrace $ , is similar to the proof of the continuity of $u_1$ .", "Indeed, for $0\le s_0\le s<t\le T$ and $x,x_0\in X$ , we write $\frac{\partial u_1}{\partial h_1,...,\partial h_k}(s,x)-\frac{\partial u_1}{\partial h_1,...,\partial h_k}(s_0,x_0)=\int _{s_0}^s D^k_E P_{s_0,\sigma }\psi (\sigma ,\cdot )(x_0)(h_1,...,h_k)\, d\sigma \nonumber \\+\int _{s_0}^t\mathbb {1}_{[s,t]}(\sigma ) \bigl (D^k_E P_{s,\sigma }\psi (\sigma ,\cdot )(x)(h_1,...,h_k)-D^k _E P_{s_0,\sigma }\psi (\sigma ,\cdot )(x)(h_1,...,h_k)\bigr )\, d\sigma ,\nonumber $ and we argue as before.", "Corollary REF allows us to extend continuously the mapping (REF ) to $\lbrace t\rbrace \times X$ .", "Indeed, for $x_0,x\in X$ and $s\in [0,t)$ we have $ \left|D^k_Eu_1(s,\cdot )(x)(h_1,...,h_k)\right|&\le \prod _{j=1}^k\left\Vert h_j\right\Vert \int _s^t\left\Vert D^k_E P_{s,\sigma }\psi (\sigma ,\cdot )(x)\right\Vert _{\mathcal {L}^k(E)}\,d\sigma \nonumber \\&\le K_k \left\Vert \psi \right\Vert _\infty \prod _{j=1}^k\left\Vert h_j\right\Vert _E(t-s)^{1-k\theta },$ where $h_1,...,h_k\in E$ and $k\in \lbrace 1,...,n\rbrace $ .", "As $s\rightarrow t^-$ and $x\rightarrow x_0 $ the right hand side in (REF ) vanishes and we get $D^k_Eu_1(t,\cdot )(x_0)(h_1,...,h_k)=0,\ \ x_0\in X.$ Theorem 4.4 Assume that Hypotheses REF , REF , REF hold.", "Let $\varphi \in C_b(X)$ , $\psi \in C_b\bigl ([0,t]\times X\bigr )$ and let $u$ be defined by (REF ).", "If $\displaystyle {\frac{1}{\theta }\notin \mathbb {N}}$ , $\varphi \in C^{\frac{1}{\theta }}_E(X)$ and $\psi \in C_b\bigl ([0,t]\times X\bigr )$ , then $u\in C_E^{0,\frac{1}{\theta }}\bigl ([0,t]\times X\bigr )$ .", "Moreover there exists $C=C(T)>0$ , independent of $\varphi $ and $\psi $ , such that $\left\Vert u\right\Vert _{C_E^{0,\frac{1}{\theta }}([0,t]\times X)}\le C\bigl (\left\Vert \varphi \right\Vert _{C^{\frac{1}{\theta }}_E(X)}+\left\Vert \psi \right\Vert _{\infty }\bigr ).$ If $\alpha \in (0,1)$ , $\displaystyle {\alpha +\frac{1}{\theta }\notin \mathbb {N}}$ , $\varphi \in C^{\alpha +\frac{1}{\theta }}_E(X)$ and $\psi \in C^{0,\alpha }_E\bigl ([0,t]\times X\bigr )$ , then $u\in C_E^{0,\alpha +\frac{1}{\theta }}\bigl ([0,t]\times X\bigr )$ .", "Moreover there exists $C=C(T,\alpha )>0$ , independent of $\varphi $ and $\psi $ , such that $\left\Vert u\right\Vert _{C_E^{0,\alpha +\frac{1}{\theta }}([0,t]\times X)}\le C\bigl (\left\Vert \varphi \right\Vert _{C^{\alpha +\frac{1}{\theta }}_E(X)}+\left\Vert \psi \right\Vert _{C_E^{0,\alpha }([0,t]\times X)}\bigr ).$ We can assume that $\varphi \equiv 0$ since, by Corollary REF , we know that for every non-integer $\gamma >0$ $u_0$ belongs to $C^\gamma _E([0,t]\times X)$ , if $\varphi \in C^\gamma _E(X)$ ; moreover there exists $C=C(\gamma , T)>0$ such that $\left\Vert u_0\right\Vert _{C^{0,\gamma }_E([0,t]\times X)}\le C\left\Vert \varphi \right\Vert _{C^\gamma _E(X)}.$ Let us prove statement (1).", "Setting $\displaystyle {n:=\biggl [\frac{1}{\theta }\biggr ]}$ , we have that $n\theta \in (0,1),\ \ (n+1)\theta >1.$ We have to show that $u_1(s,\cdot )\in C^{\frac{1}{\theta }}_E(X)$
for every $s\\in [0,t]$ .", "If $n>0$ , we know from Proposition REF that $u_1\\in C^{0,n}_b\\bigl ([0,t]\\times X\\bigr )$ .", "Hence, we have to prove that $D^n_Eu_1(s,\\cdot )$ is $\\displaystyle {\\biggl (\\frac{1}{\\theta }-n\\biggr )}$ -Hölder continuous with values in $\\mathcal {L}^n(E)$ , with Hölder constant independent of $s$ .", "Let $h,h_1,...h_n\\in E$ .", "We split every partial derivative $D^n_Eu_1(s,y)(h_1,...h_n)$ into $a_h(s,y)+b_h(s,y)$ where $a_h(s,y)&:=-\\int _s^{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} D^n_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(y)(h_1,...,h_n)\\,d\\sigma ,\\ \\ s\\in [0,t],\\ y\\in X,\\\\b_h(s,y)&:=-\\int ^t_{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} D^n_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(y)(h_1,...,h_n)\\, d\\sigma ,\\ \\ s\\in [0,t],\\ y\\in X.$ Due to (REF ), we obtain $&\\left|a_h(s,x+h)-a_h(s,x)\\right|\\le \\left|a_h(s,x+h)\\right|+\\left|a_h(s,x)\\right|\\nonumber \\\\&\\le 2 K_n\\left\\Vert \\psi \\right\\Vert _\\infty \\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E\\int _s^{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} (\\sigma -s)^{-n\\sigma }\\,d\\sigma .\\nonumber \\\\&\\le \\frac{2K_n}{1-n\\theta }\\left\\Vert h\\right\\Vert _E^{\\frac{1-n\\theta }{\\theta }}\\left\\Vert \\psi \\right\\Vert _\\infty \\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.\\nonumber $ Since if $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}\\ge t-s$ $b_h(s,\\cdot )$ vanishes, we estimate $\\left|b_h(s,x+h)-b_h(s,x)\\right|$ when $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}< t-s$ .", "Again, by (REF ) we get $\\left\\Vert D^n_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x+h)-D^n_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)\\right\\Vert _{\\mathcal {L}^n(E)}&\\le \\sup _{y\\in X}\\left\\Vert D^{n+1}_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(y)\\right\\Vert _{\\mathcal {L}^n(E)}\\left\\Vert h\\right\\Vert _E\\nonumber \\\\&\\le \\frac{K_{n+1}}{(\\sigma -s)^{(n+1)\\theta }}\\left\\Vert \\psi \\right\\Vert _\\infty \\left\\Vert h\\right\\Vert _E\\ \\sigma \\in (s,t)$ and $&\\left|b_h(s,x+h)-b_h(s,x)\\right|\\le K_{n+1}\\left\\Vert \\psi \\right\\Vert _\\infty \\left\\Vert h\\right\\Vert _E\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E\\int ^t_{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} (\\sigma -s)^{-(n+1)\\theta }\\,d\\sigma .\\nonumber \\\\&\\le \\frac{K_{n+1}}{(n+1)\\theta -1}\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }-n}\\left\\Vert \\psi \\right\\Vert _\\infty \\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.\\nonumber $ Summing up we get $\\left|\\bigl (D^n_Eu_1(s,x+h)-D^n_Eu_1(s,x)\\bigr )(h_1,...,h_n)\\right|\\le C\\left\\Vert h\\right\\Vert ^{\\frac{1}{\\theta }-n}_E\\left\\Vert \\psi \\right\\Vert _\\infty \\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E$ with $C=\\frac{2K_n}{1-n\\theta }+\\frac{K_{n+1}}{(n+1)\\theta -1}.$ Therefore, $[D^n_Eu_1(s,\\cdot )]_{C^{\\frac{1}{\\theta }-n}_E(X;\\mathcal {L}^n(E))}\\le C\\left\\Vert \\psi \\right\\Vert _\\infty .$ The case $n=0$ is analogous to the previous one where instead of (REF ) we use (REF ).", "Let us prove statement (2).", "Setting $\\displaystyle {n:=\\biggl [\\alpha +\\frac{1}{\\theta }\\biggr ]}$ , we have that $(n-\\alpha )\\theta \\in (0,1),\\ \\ (n+1-\\alpha )\\theta >1.$ We already know that $u_1\\in C^{0,n}_E(X)$ by Proposition REF (2) and we have to show that $u_1(s,\\cdot )\\in C^{\\alpha +\\frac{1}{\\theta }}_E(X)$ for every $s\\in [0,t]$ .", "If $n>0$ , we have to prove that 
$D^n_Eu_1(s,\\cdot )$ is $\\displaystyle {\\biggl (\\alpha +\\frac{1}{\\theta }-n\\biggr )}$ -Hölder continuous with values in $\\mathcal {L}^n(E)$ , with Hölder constant independent of $s$ .", "Let $h,h_1,...h_n\\in E$ .", "We split every partial derivative $D^n_Eu_1(s,y)(h_1,...h_n)$ in $a_h(s,y)+b_h(s,y)$ in the same way as we did for the proof of statement (i).", "Due to (REF ), we obtain $&\\left|a_h(s,x+h)-a_h(s,x)\\right|\\le \\left|a_h(s,x+h)\\right|+\\left|a_h(s,x)\\right|\\nonumber \\\\&\\le 2 K_{n,\\alpha }\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha }_E([0,t]\\times X)}\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E\\int _s^{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} (\\sigma -s)^{-(n-\\alpha )\\sigma }\\,d\\sigma .\\nonumber \\\\&\\le \\frac{2K_{n,\\alpha }}{1-(n-\\alpha )\\theta }\\left\\Vert h\\right\\Vert _E^{\\frac{1-(n-\\alpha )\\theta }{\\theta }}\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha }_E([0,t]\\times X)}\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.\\nonumber $ We observe that if $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}\\ge t-s$ , $b_h(s,\\cdot )$ vanishes.", "Hence, we estimate now $\\left|b_h(s,x+h)-b_h(s,x)\\right|$ when $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}< t-s$ .", "Again, by (REF ) we get $&\\left\\Vert D^n_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x+h)-D^n_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)\\right\\Vert _{\\mathcal {L}^n(E)}\\le \\sup _{y\\in X}\\left\\Vert D^{n+1}_E P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(y)\\right\\Vert _{\\mathcal {L}^n(E)}\\left\\Vert h\\right\\Vert _E\\nonumber \\\\&\\le \\frac{K_{n+1,\\alpha }}{(\\sigma -s)^{(n+1-\\alpha )\\theta }}\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha }_E([0,t]\\times X)}\\left\\Vert h\\right\\Vert _E,\\ \\sigma \\in (s,t)$ and $&\\left|b_h(s,x+h)-b_h(s,x)\\right|\\le K_{n+1,\\alpha }\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha }_E([0,t]\\times X)}\\left\\Vert h\\right\\Vert _E\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E\\int ^t_{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} (\\sigma -s)^{-(n+1-\\alpha )\\theta }\\,d\\sigma .\\nonumber \\\\&\\le \\frac{K_{n+1,\\alpha }}{(n+1-\\alpha )\\theta -1}\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }+\\alpha -n}\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha }_E([0,t]\\times X)}\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.\\nonumber $ Summing up we get $\\left|\\bigl (D^n_Eu_1(s,x+h)-D^n_Eu_1(s,x)\\bigr )(h_1,...,h_n)\\right|\\le C\\left\\Vert h\\right\\Vert ^{\\frac{1}{\\theta }+\\alpha -n}_E\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha }_E([0,t]\\times X)}\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E$ with $C=\\frac{2K_{n,\\alpha }}{1-(n-\\alpha )\\theta }+\\frac{K_{n+1,\\alpha }}{(n+1-\\alpha )\\theta -1}.$ Therefore, $[D^n_Eu_1(s,\\cdot )]_{C^{\\frac{1}{\\theta }+\\alpha -n}_E(X;\\mathcal {L}^n(E))}\\le C\\left\\Vert \\psi \\right\\Vert _{C^{\\alpha }_E([0,t]\\times X)}.$ The case $n=0$ is analogous to the previous one where instead of (REF ) we use (REF ).", "Theorem 4.5 Assume that Hypotheses REF , REF , REF hold.", "Let $\\varphi \\in C_b(X)$ , $\\psi \\in C_b\\bigl ([0,t]\\times X\\bigr )$ and let $u$ be defined by (REF ).", "If $\\displaystyle {\\frac{1}{\\theta }=k\\in \\mathbb {N}}$ and if $\\varphi \\in Z^k_E(X)$ , then $u\\in Z_E^{0,k}\\bigl ([0,t]\\times X\\bigr )$ .", "Moreover there exists $C=C(T)>0$ , independent of $\\varphi $ and $\\psi $ , such that $\\left\\Vert u\\right\\Vert _{Z_E^{0,k}([0,t]\\times X)}\\le C\\bigl (\\left\\Vert \\varphi 
\\right\\Vert _{Z^k_E(X)}+\\left\\Vert \\psi \\right\\Vert _{\\infty }\\bigr ).$ If $\\alpha \\in (0,1)$ and $\\displaystyle {\\alpha +\\frac{1}{\\theta }=k\\in \\mathbb {N}}$ and if $\\varphi \\in Z^k_E(X)$ and $\\psi \\in C^{0,\\alpha }_E\\bigl ([0,t]\\times X\\bigr )$ , then $u\\in Z_E^{0,k}\\bigl ([0,t]\\times X\\bigr )$ .", "Moreover there exists $C=C(T,\\alpha )>0$ , independent of $\\varphi $ and $\\psi $ , such that $\\left\\Vert u\\right\\Vert _{Z_E^{0,k}([0,t]\\times X)}\\le C\\bigl (\\left\\Vert \\varphi \\right\\Vert _{Z^{k}_E(X)}+\\left\\Vert \\psi \\right\\Vert _{C_E^{0,\\alpha }([0,t]\\times X)}\\bigr ).$ We can assume that $\\varphi \\equiv 0$ since, by Corollary REF , we know that for every $k\\in \\mathbb {N}$ $u_0$ belongs to $Z^k_E([0,t]\\times X)$ , if $\\varphi \\in Z^k_E(X)$ ; moreover there exists $C=C(k, T)>0$ such that $\\left\\Vert u_0\\right\\Vert _{Z^{0,k}_E([0,t]\\times X)}\\le C\\left\\Vert \\varphi \\right\\Vert _{Z^k_E(X)}.$ In the case of statement (1) we have $\\psi \\in C_b\\bigl ([s,t]\\times X\\bigr )$ and $\\displaystyle {k=\\frac{1}{\\theta }}$ .", "If $k\\ge 2$ , we know from Proposition REF that $u_1\\in C^{0,k-1}_E\\bigl ([0,t]\\times X\\bigr )$ .", "So we have to show that $[D^{k-1}_Eu_1(s,\\cdot )]_{Z^1_E(X;\\mathcal {L}^{k-1}(E))}$ is bounded by a constant independent of $s$ .", "Let $h,h_1,...h_n\\in E$ .", "We split every partial derivative $D^{k-1}_Eu_1(s,y)(h_1,...h_n)$ into the sum $a_h(s,y)+b_h(s,y)$ as we did in the Theorem REF .", "Due to (REF ), we obtain $&\\left|a_h(s,x+2h)-2a_h(s,x+h)+a_h(s,x)\\right|\\le \\left|a_h(s,x+2h)\\right|+2\\left|a_h(s,x+h)\\right|+\\left|a_h(s,x)\\right|\\nonumber \\\\&\\le 4 K_{k-1}\\left\\Vert \\psi \\right\\Vert _\\infty \\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E\\int _s^{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} (\\sigma -s)^{-(k-1)\\sigma }\\,d\\sigma .\\nonumber \\\\&\\le \\frac{4K_{k-1}}{1-(k-1)\\theta }\\left\\Vert h\\right\\Vert _E^{\\frac{1-(k-1)\\theta }{\\theta }}\\left\\Vert \\psi \\right\\Vert _\\infty \\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.\\nonumber \\\\&=4 k K_{k-1}\\left\\Vert h\\right\\Vert _E\\left\\Vert \\psi \\right\\Vert _\\infty \\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.", "\\nonumber $ We observe that if $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}\\ge t-s$ , $b_h(s,\\cdot )$ vanishes.", "Hence we estimate $\\left|b_h(s,x+2h)-2b_h(s,x+h)+b_h(s,x)\\right|$ when $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}< t-s$ .", "Again, by (REF ) we get $&\\left|b_h(s,x+2h)-2b_h(s,x+h)+b_h(s,x)\\right|\\nonumber \\\\&\\int _{s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}}^t\\left|\\bigl (D^{k-1}_EP_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x+2h)-2D^{k-1}_EP_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x+h)+D^{k-1}_EP_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)\\bigr )(h_1,...,h_{k-1})\\right|\\, d\\sigma \\nonumber \\\\&\\le \\left\\Vert h\\right\\Vert _E^2 \\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E\\int _{s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}}^t\\sup _{y\\in X}\\left\\Vert D^{k+1}_EP_{s,\\sigma }\\psi (\\sigma ,\\cdot )(y)\\right\\Vert _{\\mathcal {L}^{k+1}}\\,d\\sigma \\nonumber \\\\&\\le K_{k+1}\\left\\Vert \\psi \\right\\Vert _\\infty \\left\\Vert h\\right\\Vert _E^2 \\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E\\int _{s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}}^t (\\sigma -s)^{-(k+1)\\theta }\\, d\\sigma \\le k K_{k+1}\\left\\Vert \\psi \\right\\Vert _\\infty \\left\\Vert h\\right\\Vert _E 
\\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E.\\nonumber $ Summing up we get $[D^{k-1}_Eu_1(s,\\cdot )]_{Z^1_E(X;\\mathcal {L}^{k-1}(E))}\\le k(4K_{k-1}+K_{k+1})\\left\\Vert \\psi \\right\\Vert _\\infty .$ The case $k=\\theta =1$ is analogous to the previous one where instead of (REF ) we use (REF ).", "In the case of statement (2) we have $\\psi \\in C_E^{0,\\alpha }\\bigl ([s,t]\\times X\\bigr )$ and $\\displaystyle {k=\\alpha +\\frac{1}{\\theta }}$ .", "If $k\\ge 2$ , we know from Proposition REF that $u_1\\in C^{0,k-1}_E\\bigl ([0,t]\\times X\\bigr )$ and so we have to show that $[D^{k-1}_Eu_1(s,\\cdot )]_{Z^1_E(X;\\mathcal {L}^{k-1}(E))}$ is bounded by a constant independent of $s$ .", "Let $h,h_1,...h_n\\in E$ .", "We split every partial derivative $D^{k-1}_Eu_1(s,y)(h_1,...h_n)$ into $a_h(s,y)+b_h(s,y)$ as we did in the proof of Theorem REF .", "Due to (REF ), we obtain $&\\left|a_h(s,x+2h)-2a_h(s,x+h)+a_h(s,x)\\right|\\le \\left|a_h(s,x+2h)\\right|+2\\left|a_h(s,x+h)\\right|+\\left|a_h(s,x)\\right|\\nonumber \\\\&\\le 4 K_{k-1,\\alpha }\\left\\Vert \\psi \\right\\Vert _{C^{0,\\alpha }_E([0,t]\\times X)}\\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E\\int _s^{(s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }})\\wedge t} (\\sigma -s)^{-(k-1-\\alpha )\\sigma }\\,d\\sigma .\\nonumber \\\\&\\le \\frac{4K_{k-1,\\alpha }}{1-(k-1-\\alpha )\\theta }\\left\\Vert h\\right\\Vert _E^{\\frac{1-(k-1-\\alpha )\\theta }{\\theta }}\\left\\Vert \\psi \\right\\Vert _{C^{0,\\alpha }_E([0,t]\\times X)}\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.\\nonumber \\\\&=4 (k-\\alpha ) K_{k-1,\\alpha }\\left\\Vert h\\right\\Vert _E\\left\\Vert \\psi \\right\\Vert _{C^{0,\\alpha }_E([0,t]\\times X)}\\prod _{j=1}^n\\left\\Vert h_j\\right\\Vert _E.", "\\nonumber $ We observe that if $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}\\ge t-s$ , $b_h(s,\\cdot )$ vanishes.", "Hence we estimate $\\left|b_h(s,x+2h)-2b_h(s,x+h)+b_h(s,x)\\right|$ when $\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}< t-s$ .", "Again, by (REF ) we get $&\\left|b_h(s,x+2h)-2b_h(s,x+h)+b_h(s,x)\\right|\\nonumber \\\\&\\int _{s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}}^t\\left|\\bigl (D^{k-1}_EP_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x+2h)-2D^{k-1}_EP_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x+h)+D^{k-1}_EP_{s,\\sigma }\\psi (\\sigma ,\\cdot )(x)\\bigr )(h_1,...,h_{k-1})\\right|\\, d\\sigma \\nonumber \\\\&\\le \\left\\Vert h\\right\\Vert _E^2 \\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E\\int _{s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}}^t\\sup _{y\\in X}\\left\\Vert D^{k+1}P_{s,\\sigma }\\psi (\\sigma ,\\cdot )(y)\\right\\Vert _{\\mathcal {L}^{k+1}}\\,d\\sigma \\nonumber \\\\&\\le K_{k+1,\\alpha }\\left\\Vert \\psi \\right\\Vert _{C^{0,\\alpha }_E([0,t]\\times X)}\\left\\Vert h\\right\\Vert _E^2 \\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E\\int _{s+\\left\\Vert h\\right\\Vert _E^{\\frac{1}{\\theta }}}^t (\\sigma -s)^{-(k+1-\\alpha )\\theta }\\, d\\sigma \\nonumber \\\\&\\le (k-\\alpha ) K_{k+1,\\alpha }\\left\\Vert \\psi \\right\\Vert _{C^{0,\\alpha }_E([0,t]\\times X)}\\left\\Vert h\\right\\Vert _E \\prod _{j=1}^{k-1}\\left\\Vert h_j\\right\\Vert _E.\\nonumber $ Summing up we get $[D^{k-1}_Eu_1(s,\\cdot )]_{Z^1_E(X;\\mathcal {L}^{k-1}(E))}\\le (k-\\alpha )(4K_{k-1,\\alpha }+K_{k+1,\\alpha })\\left\\Vert \\psi \\right\\Vert _\\infty .$ The case $k=\\theta =1$ is analogous to the previous one where instead of (REF ) we use (REF )." 
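, "To illustrate the scope of the two theorems above, consider the model situation in which Hypothesis REF is satisfied with exponent $\theta =\frac{1}{2}$ (this particular value of $\theta $ is assumed here only for the sake of the example; it is the one arising, for instance, in the classical autonomous Ornstein-Uhlenbeck setting).", "Then $\frac{1}{\theta }=2\in \mathbb {N}$ , so that for $\varphi \in Z^2_E(X)$ and $\psi \in C_b\bigl ([0,t]\times X\bigr )$ statement (1) of Theorem REF gives $u\in Z_E^{0,2}\bigl ([0,t]\times X\bigr )$ and $\left\Vert u\right\Vert _{Z_E^{0,2}([0,t]\times X)}\le C\bigl (\left\Vert \varphi \right\Vert _{Z^{2}_E(X)}+\left\Vert \psi \right\Vert _{\infty }\bigr ),$ while for every $\alpha \in (0,1)$ we have $\alpha +\frac{1}{\theta }=2+\alpha \notin \mathbb {N}$ , so that statement (2) of Theorem REF applies and yields $u\in C_E^{0,2+\alpha }\bigl ([0,t]\times X\bigr )$ whenever $\varphi \in C^{2+\alpha }_E(X)$ and $\psi \in C^{0,\alpha }_E\bigl ([0,t]\times X\bigr )$ ."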
], [ "Examples", " Example 1 Let $A(t)$ , $B(t)$ be self-adjoint operators in diagonal form with respect to the same Hilbert basis $\\lbrace e_k:\\ k\\in \\mathbb {N}\\rbrace $ , namely $A(t)e_k=a_k(t)e_k,\\ \\ B(t)e_k=b_k(t)e_k\\ \\ 0\\le t\\le T,\\ \\ k\\in \\mathbb {N},$ with continuous coefficients $a_k$ , $b_k$ .", "We set $\\mu _k=\\min _{t\\in [0,T]}a_k(t),\\ \\ \\lambda _k=\\max _{t\\in [0,T]}a_k(t)$ and we assume that there exists $\\lambda _0>0$ such that $\\lambda _k<\\lambda _0,\\ \\ \\forall \\ k\\in \\mathbb {N}.$ In this setting $U(t,s)e_k=\\exp \\biggl (\\int _s^ta_k(\\tau )\\,d\\tau \\biggr )e_k,\\ \\ (s,t)\\in \\Delta ,\\ \\ k\\in \\mathbb {N}$ is the strongly continuous evolution operator formally associated to the family $\\lbrace A(t)\\rbrace _{t\\in [0,T]}$ .", "Moreover we assume that there exists $K>0$ such that $\\left|b_k(t)\\right|\\le K,\\ \\ t\\in [0,T],\\ k\\in \\mathbb {N}.$ Hence $B(t)\\in \\mathcal {L}(X)$ for all $t\\in [0,T]$ and $\\sup _{t\\in [0,t]}\\left\\Vert B(t)\\right\\Vert _{\\mathcal {L}(X)}\\le K.$ The operators $Q(t,s)$ are given by $Q(t,s)e_k=\\int _s^t\\exp \\biggl (2\\int _\\sigma ^ta_k(\\tau )\\,d\\tau \\biggr )(b_k(\\sigma ))^2\\,d\\sigma \\,e_k=:q_k(t,s)e_k,\\ \\ (s,t)\\in \\Delta ,\\ k\\in \\mathbb {N}.", "\\nonumber $ Therefore, Hypothesis REF is fullfilled if $\\sum _{k=0}^\\infty q_k(t,s)<+\\infty ,\\ \\ (s,t)\\in \\Delta .$ We give now a sufficient condition to have (REF ).", "We assume that $\\lambda _k$ is eventually nonzero (say for $k\\ge k_0$ ).", "Given $(s,t)\\in \\Delta $ , we have $&\\left|\\int _s^t\\exp \\biggl (2\\int _\\sigma ^ta_k(\\tau )\\,d\\tau \\biggr )(b_k(\\sigma ))^2\\,d\\sigma \\right|\\le \\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\left|\\int _s^t\\exp \\bigl (2\\lambda _k(t-\\sigma )\\bigr )\\,d\\sigma \\right|\\nonumber \\\\&=\\frac{\\left\\Vert b_k\\right\\Vert ^2_{\\infty }}{2\\left|\\lambda _k\\right|}\\left|1-\\exp (2\\lambda _k(t-s))\\right|\\le \\frac{\\left\\Vert b_k\\right\\Vert ^2_{\\infty }}{2\\left|\\lambda _k\\right|}\\bigl (1+\\exp (2\\lambda _0T)\\bigr ).$ Hence (REF ) holds provided $\\sum _{k=0}^\\infty \\frac{\\left\\Vert b_k\\right\\Vert ^2_{\\infty }}{\\left|\\lambda _k\\right|}<+\\infty .$ We look now for a sufficient condition such that $ U(t,s)(X)\\nsubseteq \\mathcal {H}_{t,s},\\ \\ (s,t)\\in \\Delta .$ Since $Q^{\\frac{1}{2}}(t,s)e_k=(q_k(t,s))^{\\frac{1}{2}}e_k$ for every $(s,t)\\in \\Delta $ and $k\\in \\mathbb {N}$ , (REF ) holds if $q_k(t,s)$ is eventually positive (say for $k\\ge k_1\\ge k_0$ ) and $ \\sup _{k\\ge k_1}\\frac{\\displaystyle {\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )}}{q_k(t,s)}=+\\infty ,\\ \\ 0\\le s<t\\le T.$ We obtain a sufficient condition for (REF ) if we assume that $\\lambda _k$ is eventually negative and $\\left\\Vert b_k\\right\\Vert _\\infty $ is eventually nonzero (say for $k\\ge k_2\\ge k_1$ for both), so for $(s,t)\\in \\Delta $ we have $&\\frac{\\displaystyle {\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )}}{\\displaystyle {\\int _s^t\\exp \\biggl (2\\int _\\sigma ^ta_k(\\tau )\\,d\\tau \\biggr )(b_k(\\sigma ))^2\\,d\\sigma }}\\ge \\frac{\\exp \\bigl (2\\mu _k(t-s)\\bigr )}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\int _s^t\\exp \\bigl (2\\lambda _k(t-\\sigma )\\bigr )\\,d\\sigma }}\\nonumber \\\\&=\\frac{\\exp \\bigl (2\\mu _k(t-s)\\bigr )}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\frac{1}{-2\\lambda _k}\\bigl (1-\\exp (2\\lambda _k(t-s))\\bigr )}}=\\frac{2\\left|\\lambda 
_k\\right|}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\bigl (\\exp \\bigl (-2\\mu _k(t-s)\\bigr )-\\exp (-2(\\mu _k-\\lambda _k)(t-s))\\bigr )}}.", "\\nonumber $ So (REF ) is fulfilled if $\\sup _{k\\ge k_2}\\frac{2\\left|\\lambda _k\\right|}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\bigl (\\exp \\bigl (-2\\mu _k(t-s)\\bigr )-\\exp (-2(\\mu _k-\\lambda _k)(t-s))\\bigr )}}=+\\infty $ Now we want to find some conditions such that the hypotheses of Proposition REF are satisfied.", "Let $(s,t)\\in \\Delta $ , we assume that $b_k(s)$ and $b_k(t)$ are eventually nonzero (say for $k\\ge k_3\\ge k_2$ ).", "Hence we have that $U(t,s)H_s\\subseteq H_t$ if $\\sup _{k\\ge k_3}\\frac{b_k^2(s)}{b_k^2(t)}\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )<+\\infty .$ Since $\\frac{b_k^2(s)}{b_k^2(t)}\\exp \\biggl (\\int _s^t2a_k(\\tau )\\biggr )\\le \\frac{b_k^2(s)}{b_k^2(t)} \\exp (2\\lambda _k T),\\ \\ k\\ge k_3\\nonumber $ a sufficient condition to have (REF ) is given by $\\sup _{k\\ge k_3}\\frac{b_k^2(s)}{b_k^2(t)}\\exp (2\\lambda _k T)<+\\infty .$ Now we investigate when (REF ) holds.", "We observe first that for $y=Q^{\\frac{1}{2}}(s)x$ with $x\\in H_s$ , we have $&\\left\\Vert U(t,s)y\\right\\Vert ^2_{H_t}=\\left\\Vert Q^{-\\frac{1}{2}}(t)U(t,s)Q^{\\frac{1}{2}}(s)x\\right\\Vert _X^2=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ b_k(t)\\ne 0\\end{array}}\\biggl (\\frac{\\left|b_k(s)\\right|}{\\left|b_k(t)\\right|}\\exp \\biggl (\\int _s^ta_k(\\tau )\\biggr )\\langle x,e_k\\rangle _X\\biggr )^2\\nonumber \\\\&=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ b_k(t)\\ne 0\\end{array}}\\biggl (\\frac{\\left|b_k(s)\\right|}{\\left|b_k(t)\\right|}\\exp \\biggl (\\int _s^ta_k(\\tau )\\biggr )\\frac{\\langle y,e_k\\rangle _X}{\\left|b_k(s)\\right|}\\biggr )^2=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ b_k(t)\\ne 0\\end{array}}\\biggl (\\frac{1}{\\left|b_k(t)\\right|}\\exp \\biggl (\\int _s^ta_k(\\tau )\\biggr )\\biggr )^2\\langle y,e_k\\rangle _{H_s}.", "\\nonumber $ We assume that there exist $L\\ge 0$ and $k_4\\in \\mathbb {N}$ such that $k_4\\ge k_3$ and $\\left|b_k(t)\\right|\\ge L$ for all $k\\ge k_4$ Hence for any $k\\ge k_4$ , we have $\\frac{1}{b_k^2(t)}\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )\\le \\frac{1}{L^2}\\exp \\bigl (2\\lambda _0 T\\bigr ),$ and $\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(H_s,H_t)}\\le \\frac{1}{L} \\exp {\\lambda _0 T},\\ \\ (t,s)\\in \\Delta .$ Moreover for all $k\\ge k_4$ , by Proposition REF we have that there exists $M>0$ such that $\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(H_s;\\mathcal {H}_{t,s})}\\le \\frac{M}{(t-s)^\\frac{1}{2}}$ Moreover, for all $k\\ge k_4$ , we have that $H_{t_1}=H_{t_2}$ with equivalent norms for every $t_1,t_2\\in [0,T]$ .", "Indeed, let $y\\in H_{t_1}$ then there exists $x\\in X$ such that $y=Q^{-\\frac{1}{2}}(t_1)x=\\sum _{k=1}^\\infty b_k(t) \\langle x,e_k\\rangle _X=Q^{-\\frac{1}{2}}(t_2)\\biggl (\\frac{b_k(t)}{b_k(s)}x\\biggr )\\in H_{t_2},$ and $\\left\\Vert y\\right\\Vert _{H_{t_1}}^2=\\sum _{k=1}^\\infty \\left|b_k(t_1)\\right|^2\\langle x,e_k\\rangle _X^2\\le \\frac{K^2}{L^2}\\sum _{k=1}^\\infty \\left|b_k(t_2)\\right|^2\\langle x,e_k\\rangle _X^2=\\frac{K^2}{L^2}\\left\\Vert y\\right\\Vert _{H_{t_2}}^2.$ Conversely, the other embedding is analogous.", "So, identifying each $H_t$ with one of them, Hypotheses REF and REF are satisfied too.", "Example 1 bis Let $A(t)$ , $B(t)$ be self-adjoint operators in diagonal form with respect to the same Hilbert 
basis $\\lbrace e_k:\\ k\\in \\mathbb {N}\\rbrace $ , namely $A(t)e_k=a_k(t)e_k,\\ \\ B(t)e_k=b_k(t)e_k\\ \\ 0\\le t\\le T,\\ \\ k\\in \\mathbb {N},$ with continuous coefficients $a_k$ , $b_k$ .", "We set $\\mu _k=\\min _{t\\in [0,T]}a_k(t),\\ \\ \\lambda _k=\\max _{t\\in [0,T]}a_k(t)$ and we assume that there exists $\\lambda _0>0$ such that $\\lambda _k<\\lambda _0,\\ \\ \\forall \\ k\\in \\mathbb {N}.$ In this setting $U(t,s)e_k=\\exp \\biggl (\\int _s^ta_k(\\tau )\\,d\\tau \\biggr )e_k,\\ \\ (s,t)\\in \\Delta ,\\ \\ k\\in \\mathbb {N}$ is the strongly continuous evolution operator formally associated to the family $\\lbrace A(t)\\rbrace _{t\\in [0,T]}$ .", "Moreover we assume that there exists $K>0$ such that $\\left|b_k(t)\\right|\\le K,\\ \\ t\\in [0,T],\\ k\\in \\mathbb {N}.$ Hence $B(t)\\in \\mathcal {L}(X)$ for all $t\\in [0,T]$ and $\\sup _{t\\in [0,t]}\\left\\Vert B(t)\\right\\Vert _{\\mathcal {L}(X)}\\le K.$ The operators $Q(t,s)$ are given by $Q(t,s)e_k=\\int _s^t\\exp \\biggl (2\\int _\\sigma ^ta_k(\\tau )\\,d\\tau \\biggr )(b_k(\\sigma ))^2\\,d\\sigma \\,e_k=:q_k(t,s)e_k,\\ \\ (s,t)\\in \\Delta ,\\ k\\in \\mathbb {N}.", "\\nonumber $ We assume also that there exist some indexes $k$ (possibly, infinite many) such that $b_k$ is identically zero on the interval $(s,t)$ .", "Hypothesis REF is fullfilled if $\\sum _{k=0}^\\infty q_k(t,s)<+\\infty ,\\ \\ (s,t)\\in \\Delta .$ We give now a sufficient condition to have (REF ).", "We assume that $\\lambda _k$ is eventually nonzero (say for $k\\ge k_0$ ).", "Given $(s,t)\\in \\Delta $ , we have $&\\left|\\int _s^t\\exp \\biggl (2\\int _\\sigma ^ta_k(\\tau )\\,d\\tau \\biggr )(b_k(\\sigma ))^2\\,d\\sigma \\right|\\le \\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\left|\\int _s^t\\exp \\bigl (2\\lambda _k(t-\\sigma )\\bigr )\\,d\\sigma \\right|\\nonumber \\\\&=\\frac{\\left\\Vert b_k\\right\\Vert ^2_{\\infty }}{2\\left|\\lambda _k\\right|}\\left|1-\\exp (2\\lambda _k(t-s))\\right|\\le \\frac{\\left\\Vert b_k\\right\\Vert ^2_{\\infty }}{2\\left|\\lambda _k\\right|}\\bigl (1+\\exp (2\\lambda _0T)\\bigr ).$ Hence (REF ) holds if we require $\\sum _{k=0}^\\infty \\frac{\\left\\Vert b_k\\right\\Vert ^2_{\\infty }}{\\left|\\lambda _k\\right|}<+\\infty .$ We look now for a condition such that $ U(t,s)(X)\\nsubseteq \\mathcal {H}_{t,s},\\ \\ (s,t)\\in \\Delta .$ Since $Q^{\\frac{1}{2}}(t,s)e_k=(q_k(t,s))^{\\frac{1}{2}}e_k$ for every $(s,t)\\in \\Delta $ and $k\\in \\mathbb {N}$ , (REF ) holds if $ \\sup _{\\begin{array}{c}k\\ge k_0,\\\\ q_k(t,s)\\ne 0\\end{array}}\\frac{\\displaystyle {\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )}}{q_k(t,s)}=+\\infty ,\\ \\ 0\\le s<t\\le T.$ We obtain a sufficient condition for (REF ) if we assume $\\lambda _k$ eventually negative (say for $k\\ge k_1\\ge k_0$ ).", "Let $(s,t)\\in \\Delta $ , assuming $b_k\\lnot \\equiv 0$ on $(s,t)$ , we have $&\\frac{\\displaystyle {\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )}}{\\displaystyle {\\int _s^t\\exp \\biggl (2\\int _\\sigma ^ta_k(\\tau )\\,d\\tau \\biggr )(b_k(\\sigma ))^2\\,d\\sigma }}\\ge \\frac{\\exp \\bigl (2\\mu _k(t-s)\\bigr )}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\int _s^t\\exp \\bigl (2\\lambda _k(t-\\sigma )\\bigr )\\,d\\sigma }}\\nonumber \\\\&=\\frac{\\exp \\bigl (2\\mu _k(t-s)\\bigr )}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\frac{1}{-2\\lambda _k}\\bigl (1-\\exp (2\\lambda _k(t-s))\\bigr )}}=\\frac{2\\left|\\lambda _k\\right|}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty 
}\\bigl (\\exp \\bigl (-2\\mu _k(t-s)\\bigr )-\\exp (-2(\\mu _k-\\lambda _k)(t-s))\\bigr )}}.", "\\nonumber $ So (REF ) is fullfilled if $\\sup _{\\begin{array}{c}k\\ge k_1,\\\\b_k\\lnot \\equiv 0\\end{array}}\\frac{2\\left|\\lambda _k\\right|}{\\displaystyle {\\left\\Vert b_k\\right\\Vert ^2_{\\infty }\\bigl (\\exp \\bigl (-2\\mu _k(t-s)\\bigr )-\\exp (-2(\\mu _k-\\lambda _k)(t-s))\\bigr )}}=+\\infty $ Now we want to find some conditions such that the hypotheses of Proposition REF are satisfied.", "Let $(s,t)\\in \\Delta $ , we assume that $b_k(s)$ and $b_k(t)$ are eventually nonzero (say for $k\\ge k_2\\ge k_1$ ).", "Hence we have that $U(t,s)H_s\\subseteq H_t$ if $\\sup _{k\\ge k_2}\\frac{b_k^2(s)}{b_k^2(t)}\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )<+\\infty .$ Since $\\frac{b_k^2(s)}{b_k^2(t)}\\exp \\biggl (\\int _s^t2a_k(\\tau )\\biggr )\\le \\frac{b_k^2(s)}{b_k^2(t)} \\exp (2\\lambda _k T),\\ \\ k\\ge k_2\\nonumber $ a sufficient condition to have (REF ) is given by $\\sup _{k\\ge k_2}\\frac{b_k^2(s)}{b_k^2(t)}\\exp (2\\lambda _k T)<+\\infty .$ Now we investigate when (REF ) holds.", "We observe first that for $y=Q^{\\frac{1}{2}}(s)x$ with $x\\in H_s$ , we have $&\\left\\Vert U(t,s)y\\right\\Vert ^2_{H_t}=\\left\\Vert Q^{-\\frac{1}{2}}(t)U(t,s)Q^{\\frac{1}{2}}(s)x\\right\\Vert _X^2=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ b_k(t)\\ne 0\\end{array}}\\biggl (\\frac{\\left|b_k(s)\\right|}{\\left|b_k(t)\\right|}\\exp \\biggl (\\int _s^ta_k(\\tau )\\biggr )\\langle x,e_k\\rangle _X\\biggr )^2\\nonumber \\\\&=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ b_k(t)\\ne 0\\end{array}}\\biggl (\\frac{\\left|b_k(s)\\right|}{\\left|b_k(t)\\right|}\\exp \\biggl (\\int _s^ta_k(\\tau )\\biggr )\\frac{\\langle y,e_k\\rangle _X}{\\left|b_k(s)\\right|}\\biggr )^2=\\sum _{\\begin{array}{c}k\\in \\mathbb {N},\\\\ b_k(t)\\ne 0\\end{array}}\\biggl (\\frac{1}{\\left|b_k(t)\\right|}\\exp \\biggl (\\int _s^ta_k(\\tau )\\langle y,e_k\\rangle _{H_s}\\biggr )\\biggr )^2.", "\\nonumber $ We assume that there exist $L\\ge 0$ and $k_3\\in \\mathbb {N}$ such that $k_3\\ge k_2$ and $\\left|b_k(t)\\right|\\ge L$ for all $k\\ge k_3$ .", "Hence for any $k\\ge k_3$ , we have $\\frac{1}{b_k^2(t)}\\exp \\biggl (\\int _s^t2a_k(\\tau )\\,d\\tau \\biggr )\\le \\frac{1}{L^2}\\exp \\bigl (2\\lambda _0 T\\bigr ),$ and $\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(H_s,H_t)}\\le \\frac{1}{L} \\exp (\\lambda _0 T),\\ \\ 0\\le s < t\\le T.$ Example 2 Let $A(t)=a(t) I$ , where $a$ is a continuous real valued map on $t\\in [0,T]$ with $\\left\\Vert a\\right\\Vert _\\infty =a_0$ .", "Hence $U(t,s)=\\exp \\biggl (\\int _s^ta(\\tau )\\,d\\tau \\biggr ) I,\\ \\ (s,t)\\in \\Delta $ is the strongly continuous evolution operator formally associated to the family $\\lbrace A(t)\\rbrace _{t\\in [0,T]}$ .", "This particular choice of $A(t)$ is important because it is a non autonomous version of the Malliavin Operator.", "We refer to for the autonomous case.", "Let $\\lbrace B(t)\\rbrace _{t\\in [0,T]}\\in \\mathcal {L}(X)$ a family of continuous and strongly measurable operators such that there exists $K>0$ such that $\\sup _{t\\in [0,t]}\\left\\Vert B(t)\\right\\Vert _{\\mathcal {L}(X)}\\le K.$ So Hypotesis REF is fulfilled if $Q(t,s)\\in \\mathcal {L}^1_+(X)$ .", "Since $U(t,s)$ is a multiple of the identity, $U(t,s)$ cannot map $X$ into $\\mathcal {H}_{t,s}$ for all $(s,t)\\in \\Delta $ and $U(t,s)(H_s)=H_s$ for all $(s,t)\\in \\Delta $ .", "We look for sufficient conditions such that the hypotheses of Proposition 
REF are satisfied.", "We require that for every $(s,t)\in \Delta $ there exists $C=C(t,s)>0$ such that $\langle Q(s)x,x\rangle _X\le C\langle Q(t)x,x\rangle _X$ for all $x\in X$ and we obtain $\left\Vert Q^{\frac{1}{2}}(s)x\right\Vert _X^2=\langle Q(s)x,x\rangle _X\le C\langle Q(t)x,x\rangle _X= C\left\Vert Q^{\frac{1}{2}}(t)x\right\Vert _X^2, \ \ x\in X.$ Then by Proposition REF $H_s\subseteq H_t$ with continuous embedding.", "Denoting by $\widetilde{C}$ the norm of the embedding of $H_s$ into $H_t$ , for every $x\in H_s$ we have $&\left\Vert U(t,s)x\right\Vert _{H_t}=\left\Vert \exp \biggl (\int _s^ta(\tau )\,d\tau \biggr )x\right\Vert _{H_t}\le e^{a_0T}\widetilde{C}\left\Vert x\right\Vert _{H_s}, \\&\left\Vert U(t,s)\right\Vert _{\mathcal {L}(H_s; H_t)}\le e^{a_0T}\widetilde{C}.$ Hence, we can choose $H_s$ as the set $E$ of Theorem REF .", "Finally we study the special case $B(t)=b(t) B$ , where $b$ is a continuous real-valued map on $[0,T]$ and $B\in \mathcal {L}(X)$ .", "We set $\max _{t\in [0,T]}\left|b(t)\right|=b_0.$ The operators $Q(t,s)$ are given by $Q(t,s)=\int _s^tU(t,\sigma )B(\sigma )B^\star (\sigma )U^\star (t,\sigma )\,d\sigma .$ Let $\lbrace e_k:\ k\in \mathbb {N}\rbrace $ be a Hilbert basis that diagonalizes $Q(t,s)$ ; we have $\langle Q(t,s)e_k,e_k\rangle _X=\int _s^t b^2(\sigma )\left\Vert U^\star (t,\sigma )B^\star e_k\right\Vert _X^2\,d\sigma \nonumber $ and $\langle Q(t,s)e_k,e_k\rangle _X\le T\, N^2\, b_0^2\, \langle BB^*e_k,e_k\rangle _X,$ so that $Q(t,s)\in \mathcal {L}^+_1(X)$ if and only if $Q:=BB^*\in \mathcal {L}^+_1(X)$ .", "Since $U(t,s)$ is invertible, $U(t,s)$ cannot map $X$ into $\mathcal {H}_{t,s}$ for all $(s,t)\in \Delta $ and we look for sufficient conditions such that the hypotheses of Proposition REF are satisfied.", "For all $(s,t)\in \Delta $ such that $b(s)$ and $b(t)$ are nonzero, we have $U(t,s)H_s=H_s= H_t$ and for $y\in H_s$ we obtain $\left\Vert U(t,s)y\right\Vert _{H_t}=\left\Vert \exp \biggl (\int _s^ta(\tau )\,d\tau \biggr )y\right\Vert _{H_t}\le e^{a_0(t-s)}\left\Vert y\right\Vert _{H_t}\le \frac{\left|b(s)\right|}{\left|b(t)\right|} e^{a_0 T}\left\Vert y\right\Vert _{H_s}.\nonumber $ Hence $\left\Vert U(t,s)\right\Vert _{\mathcal {L}(H_s,H_t)}\le \frac{\left|b(s)\right|}{\left|b(t)\right|}\,e^{a_0 T}.\nonumber $ If $b(t)\ne 0$ for all $t\in [0,T]$ , then $H_{t_1}=H_{t_2}$ with equivalent norms.", "Indeed, setting $\displaystyle {B_1=\min _{t\in [0,T]}\left|b(t)\right|}$ and $\displaystyle {B_2=\max _{t\in [0,T]}\left|b(t)\right|}$ , for $x\in H_{t_2}$ we have $\left\Vert x\right\Vert _{H_{t_2}}^2=\frac{1}{b^2(t_2)}\langle Q^{-\frac{1}{2}}x,Q^{-\frac{1}{2}}x\rangle _X=\frac{b^2(t_1)}{b^2(t_2)}\left\Vert x\right\Vert _{H_{t_1}}^2\le \frac{B_2^2}{B_1^2}\left\Vert x\right\Vert _{H_{t_1}}^2.$ The other inequality is analogous.", "Example 3 We consider now Example 2 of , and for the reader's convenience we recall all the preliminary results.", "Let $\mathcal {O}\subset \mathbb {R}^d$ be an open set with $C^2$ boundary; we consider the evolution operator $U(t,s)$ in $X=L^2(\mathcal {O})$ associated to an evolution equation of parabolic type, ${\left\lbrace \begin{array}{ll}u_t(t,x)=\mathcal {A}(t)u(t,\cdot )(x),\ \ (t,x)\in (s,T)\times \mathcal {O},\\u(t,x)=0,\ \ (t,x)\in (s,T)\times \partial \mathcal {O},\\u(s,x)=u_0(x),\ \ x\in \mathcal {O}.\end{array}\right. }$ The differential operators $\mathcal {A}(t)$ are defined by $\mathcal {A}(t)\varphi (x)=\sum _{i,j=1}^d a_{i,j}(t,x)D_{ij}\varphi (x)+\sum _{i=1}^d a_{i}(t,x)D_{i}\varphi (x)+a_0(t,x)\varphi (x),\ \ t\in [0,T],\ x\in \mathcal {O},$ and we make the following assumptions on the coefficients.", "Hypothesis 5.1 For some $\rho >0$ , $a_{ij}\in C^{0,1+\rho }([0,T]\times \overline{\mathcal {O}})$ and there exists $\nu >0$ such that for all $\xi \in \mathbb {R}^d$ $\sum _{i,j=1}^da_{i,j}(t,x)\xi _i\xi _j\ge \nu \left|\xi \right|^2,\ \ t\in [0,T],\ x\in \mathcal {O}.$ On the operators $B(t)$ we make the following assumptions.", "Hypothesis 5.2 There exists $q\ge 2$ with $q> d$ such that for a.e. $t\in [0,T]$ , $B(t)\in \mathcal {L}\bigl (L^2(\mathcal {O});L^q(\mathcal {O})\bigr )$ has bounded inverse and $\operatornamewithlimits{ess\,sup}_{0<t<T}(\left\Vert B(t)\right\Vert _{\mathcal {L}(L^2(\mathcal {O});L^q(\mathcal {O}))}+\left\Vert B(t)^{-1}\right\Vert _{\mathcal {L}(L^q(\mathcal {O});L^2(\mathcal {O}))})<+\infty .$ Moreover for all $f\in L^2(\mathcal {O})$ the mapping $t\in [0,T]\longmapsto B(t)f\in L^q(\mathcal {O})$ is measurable.", "Since $q\ge 2$ , $L^q(\mathcal {O})\subseteq L^2(\mathcal {O})$ and $B(t)\in \mathcal {L}(X)$ for a.e. $t\in (0,T)$ .", "Hence Hypothesis REF (2) is fulfilled.", "Moreover, with abuse of notation, we denote by $B(t)$ both the operators $B(t):L^2(\mathcal {O})\rightarrow L^q(\mathcal {O})$ and $B(t):L^2(\mathcal {O})\rightarrow L^2(\mathcal {O})$ and by $B^\star (t)$ the adjoint of $B(t)$ in both cases.", "Using the theory of quadratic forms (see ), in it is proved that for all $u_0\in X$ there exists a unique weak solution of (REF ), namely there exists a unique $u\in W:= L^2\bigl ((s,T);H^1_0(\mathcal {O})\bigr )\cap W^{1,2}\bigl ((s,T);H^{-1}(\mathcal {O})\bigr )$ such that for every $v\in W$ satisfying $v(T)=0$ we have $-\int _s^T\langle u(t),v^{\prime }(t)\rangle _{L^2(\mathcal {O})}\,dt+\int _s^T\mathbf {a}(t,u(t),v(t))\,dt=\langle u_0,v(s)\rangle _{L^2(\mathcal {O})},$ where $\mathbf {a}(t,\cdot ,\cdot )$ is the quadratic form associated to the operator $\mathcal {A}(t)$ in $H^1_0(\mathcal {O})$ , namely $\mathbf {a}(t,\varphi ,\psi )&=\int _\mathcal {O}\sum _{i,j=1}^d a_{ij}(t,x)D_j\varphi (x)D_i\psi (x)\, dx\nonumber \\&-\int _\mathcal {O}\biggl (\sum _{j=1}^d \Bigl (a_{j}(t,x)-\sum _{i=1}^d D_ia_{i,j}(t,x)\Bigr ) D_j\varphi (x)\biggr )\psi (x)\, dx-\int _\mathcal {O}a_0(t,x)\varphi (x)\psi (x)\, dx.\nonumber $ Setting $U(t,s)u_0=:u(t)$ , $U(t,s)$ turns out to be an evolution operator in $L^2(\mathcal {O})$ .", "Moreover in it is shown that $U(t,s)$ can be extended to the whole space $L^1(\mathcal {O})$ and the extension (still denoted by $U(t,s)$ ) belongs to $\mathcal {L}(L^1(\mathcal {O});L^\infty (\mathcal {O}))$ and it is represented by $U(t,s)\varphi (x)=\int _\mathcal {O}k(x,y,t,s)\varphi (y)\,dy,\ \ \varphi \in L^1(\mathcal {O})$ where for every $(s,t)\in \Delta $ , $k(\cdot ,\cdot ,t,s)\in L^\infty (\mathcal {O}\times \mathcal {O})$ .", "Moreover there exist $M,m>0$ such that $\left|k(x,y,t,s)\right|\le \frac{M}{(t-s)^{\frac{d}{2}}} e^{-\frac{\left|x-y\right|^2}{m(t-s)}},\ \ x,y\in \mathcal {O},\ (s,t)\in \Delta .$ Using these tools, in it is proven that the operator $Q(t,s)$ has finite trace for all $(s,t)\in \Delta .$ Since Hypothesis REF holds, the results of the paper can be applied.", "Hence for $q\in (1,+\infty )$ there exists a strongly continuous evolution operator $U_q(t,s)$ on $X_q=L^q(\mathcal {O})$ such that, setting $D_q=W^{2,q}(\mathcal {O})\cap
W^{1,q}_0(\\mathcal {O})$ , for every $\\varphi \\in L^p\\bigl ((s,t);X_q\\bigr )$ with $p\\in (1,+\\infty )$ and $u_0\\in (X_q,D_q)_{1-\\frac{1}{p},p}$ , the function $U_q(t,s)u_0$ is the unique strong solution to (REF ), namely it is the unique function that belongs to $L^p\\bigl ((s,T);D_q\\bigr )\\cap W^{1,p}\\bigl ((s,T);X_q\\bigr )\\cap C\\bigl ([s,T];(X_q,D_q)_{1-\\frac{1}{p},p}\\bigr )$ and that satisfies ${\\left\\lbrace \\begin{array}{ll}u^{\\prime }(\\tau )=A_q(\\tau )u(\\tau )+\\varphi (\\tau ), \\ \\ \\mbox{a.e.", "}\\ \\tau \\in (s,t),\\\\u(s)=u_0\\end{array}\\right.", "}$ where $A_q(\\tau ):D_q\\longrightarrow X_q,\\ \\ A_q(\\tau )\\varphi =\\mathcal {A}(\\tau )\\varphi $ is the realization of $\\mathcal {A}(\\tau )$ in $X_q$ .", "Taking $p=q=2$ , we obtain that $U_2(t,s)$ coincides with $U(t,s)$ , since for every $u_0\\in W^{1,2}_0(\\mathcal {O})=(X_2,D_2)_{\\frac{1}{2},2}$ , the function $U_2(t,s)u_0$ is a strong solution to (REF ), hence it is a weak solution.", "By uniqueness of the weak solution, the bounded operators $U(t,s)$ and $U_2(t,s)$ coincide on a dense subset of $X$ , and therefore they coincide on the whole $X$ .", "Morover, again by uniqueness, for $q>2$ the operators $U_q(t,s)$ are the parts of $U(t,s)$ in $X_q$ .", "Therefore Hipotheses REF (1) and (2) are fulfilled.", "Let us check that Hypothesis REF is satisfied by the spaces $X_q$ , $E_{\\alpha ,p}=(X_q,D_q)_{\\alpha ,p}$ with $\\alpha \\in \\bigl (0,\\frac{1}{2}\\bigr )$ and $X_p$ with $p\\in (2,q)$ .", "Theorem 2.5 of , for every $(s,t)\\in \\Delta $ and $\\varphi \\in L^2\\bigl ((s,t);X_q\\bigr )$ , the problem ${\\left\\lbrace \\begin{array}{ll}v^{\\prime }(\\tau )=A_q(\\tau )v(\\tau )+\\varphi (\\tau ), \\ \\ \\mbox{a.e.", "}\\ \\tau \\in (s,t),\\\\v(s)=0\\end{array}\\right.", "}$ has an unique solution $v\\in L^2\\bigl ((s,t);D_q\\bigr )\\cap W^{1,2}\\bigl ((s,t);D_q\\bigr )$ , given by the variation of constants formula $v(\\tau )=\\int _s^tU(\\tau ,\\sigma )\\varphi (\\sigma )\\,d\\sigma \\ \\ \\tau \\in (s,t),$ and there exists a constant $C>0$ independent of $s,\\ t,$ and $\\varphi $ such that $\\left\\Vert v\\right\\Vert _{L^2((s,t);D_q)}+\\left\\Vert v\\right\\Vert _{W^{1,2}((s,t);X_q)}+\\le C \\left\\Vert \\varphi \\right\\Vert _{L^2((s,t);X_q)}.$ Therefore the mapping $\\bigl \\lbrace v\\in L^2\\bigl ((s,t);D_q\\bigr )\\cap W^{1,2}\\bigl ((s,t);X_q\\bigr ):\\ v(s)=0\\bigr \\rbrace \\longrightarrow L^2\\bigl ((s,t);X_q\\bigr ),\\ \\ v\\longmapsto \\Phi (v):=v^{\\prime }-A_q(\\cdot )v$ is an isomorphism.", "We recall now that for every couples of Banach spaces $X, D$ such that $D\\subseteq X$ with continuous embedding, the space $L^2\\bigl ((s,t);D\\bigr )\\cap W^{1,2}\\bigl ((s,t);X\\bigr )$ is continuously embedded in $C\\bigl ([s,t]; (X,D)_{\\frac{1}{2},2}\\bigr )$ and the range of the trace operator $v\\longmapsto Tv:=v(t)$ is precisely $(X,D)_{\\frac{1}{2},2}$ .", "It follows that the range of the mapping $L^2\\bigl ((s,t);X_q\\bigr )\\longrightarrow X_q,\\ \\ \\varphi \\longmapsto \\int _s^tU(t,\\sigma )\\varphi (\\sigma )\\,d\\sigma =T\\Phi ^{-1}\\varphi $ is equal to $(X_q,D_q)_{\\frac{1}{2},2}$ .", "By Hypothesis REF the operator $L^2\\bigl ((s,t);X_2\\bigr )\\longrightarrow L^q\\bigl ((s,t);X_q\\bigr ),\\ \\ \\varphi \\longmapsto B(\\cdot )\\varphi $ is bounded and onto.", "Hence, the range of the operator $L$ defined in (REF ) is still $(X_q,D_q)_{\\frac{1}{2},2}$ .", "In , it is proven that $U(t,s)$ maps $X_q$ into $(X_q,D_q)_{\\frac{1}{2},2}$ and $U(t,s)$ maps $X_q$ into 
itself.", "Moreover there exist $C,C_q>0$ such that $&\\left\\Vert U(t,s)x\\right\\Vert _{(X_q,D_q)_{\\frac{1}{2},2}}\\le \\frac{C}{(t-s)^{\\frac{1}{2}}}\\left\\Vert x\\right\\Vert _{X_q},\\ \\ (s,t)\\in \\Delta ,\\ \\ x\\in X_q \\\\&\\left\\Vert U(t,s)x\\right\\Vert _{X_q}\\le C_q\\left\\Vert x\\right\\Vert _{X_q},\\ \\ (s,t)\\in \\Delta ,\\ \\ x\\in X_q.", "$ Hence Hypotheses REF and REF are satisfied by $X_q$ .", "Moreover by (REF ), we have that for every $x\\in \\mathcal {O}$ and $p>1$ $\\left\\Vert k(x,\\cdot ,t,s)\\right\\Vert _{L^p(\\mathcal {O})}^p\\le \\frac{M}{(t-s)^{\\frac{-dp}{2}}}\\int _{\\mathbb {R}^d}e^{-p\\frac{\\left|x-y\\right|^2}{m(t-s)}}\\,dy=:M_p(t-s)^{\\frac{d(1-p)}{2}}$ and for every $\\varphi \\in X_p$ and $x\\in \\mathcal {O}$ $\\left|U(t,s)\\varphi (x)\\right|\\le \\int _\\mathcal {O}\\left|k(x,y,t,s)\\right|\\left|\\varphi (y)\\right|\\,dy\\le \\frac{M_{p^{\\prime }}}{(t-s)^{\\frac{d}{2p}}}\\left\\Vert \\varphi \\right\\Vert _{X_p}.$ and $\\left\\Vert U(t,s)\\varphi \\right\\Vert _\\infty \\le \\int _\\mathcal {O}\\left|k(x,y,t,s)\\right|\\left|\\varphi (y)\\right|\\,dy\\le \\frac{M_{p^{\\prime }}}{(t-s)^{\\frac{d}{2p}}}\\left\\Vert \\varphi \\right\\Vert _{X_p}.$ Again, as proven in , there exist $C_p>0$ such that $\\left\\Vert U(t,s)x\\right\\Vert _{X_p}\\le C_q\\left\\Vert x\\right\\Vert _{X_p},\\ \\ (s,t)\\in \\Delta ,\\ \\ x\\in X_p.$ Hence $U(t,s)\\in \\mathcal {L}(X_p; X_p)\\cap \\mathcal {L}(X_p; X_\\infty )$ and, by interpolation, $U(t,s)\\in \\mathcal {L}(X_p; X_q)$ since we know that $(X_p, X_\\infty )_{\\theta ,q}=X_q$ with the choice $\\theta =1-\\frac{p}{q}$ .", "Moreover we have that there exists $C_{p,q}>0$ $\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(X_p; X_q)}\\le \\frac{C_{p,q}}{(t-s)^{\\frac{d}{2p}\\bigl (1-\\frac{p}{q}\\bigr )}},\\ \\ (s,t)\\in \\Delta .$ For $(s,t)\\in \\Delta $ we split $U(t,s)= U\\bigl (t,\\frac{t+s}{2}\\bigr )U\\bigl (\\frac{t+s}{2},s\\bigr )$ and by (REF ) and (REF ) we have that there exist a constant $C>0$ independent of $s$ and $t$ such that $\\left\\Vert U(t,s)\\varphi \\right\\Vert _{\\mathcal {L}(X_p; (X_q,D_q)_{\\frac{1}{2},2})}\\le \\frac{C}{(t-s)^{\\frac{1}{2}+\\frac{d}{2p}\\bigl (1-\\frac{p}{q}\\bigr )}},\\ \\ (s,t)\\in \\Delta .$ Hence Hypotheses REF and REF are satisfied by $X_p$ with $p\\in (2,q)$ .", "Finally, we know that $U(t,s)\\in \\mathcal {L}\\bigl (X_q; (X_q,D_q)_{\\frac{1}{2},2}\\bigr )$ and by $U(t,s)\\in \\mathcal {L}\\bigl ((X_q,D_q)_{\\frac{1}{2},2}; (X_q,D_q)_{\\frac{1}{2},2}\\bigr )$ .", "Since $\\bigl (X_q,(X_q,D_q)_{\\frac{1}{2},2}\\bigr )_{2\\alpha ,p}=(X_q,D_q)_{\\alpha ,p}$ with $\\alpha \\in \\bigl (0,\\frac{1}{2}\\bigr )$ , then by interpolation we have $U(t,s)\\in \\mathcal {L}\\bigl (E_{\\alpha ,p};(X_q,D_q)_{\\frac{1}{2},2}\\bigr )$ and there exists a constant $C>0$ independent of $s$ and $t$ such that $\\left\\Vert U(t,s)\\right\\Vert _{\\mathcal {L}(E_{\\alpha ,p};(X_q,D_q)_{\\frac{1}{2},2})}\\le \\frac{C}{(t-s)^{\\frac{1}{2}-\\alpha }}.$ Since the family $\\lbrace U(t,s)\\rbrace _{[0,T]}$ is a strongly continuous evolution operator on $E_{\\alpha ,p}$ (see again ), Hypotheses REF and REF are satisfied by $E_{\\alpha ,p}$ .", "tocsectionReferences" ] ]
2212.05559
[ [ "Orthogonal SVD Covariance Conditioning and Latent Disentanglement" ], [ "Abstract Inserting an SVD meta-layer into neural networks is prone to make the covariance ill-conditioned, which could harm the model in the training stability and generalization abilities.", "In this paper, we systematically study how to improve the covariance conditioning by enforcing orthogonality to the Pre-SVD layer.", "Existing orthogonal treatments on the weights are first investigated.", "However, these techniques can improve the conditioning but would hurt the performance.", "To avoid such a side effect, we propose the Nearest Orthogonal Gradient (NOG) and Optimal Learning Rate (OLR).", "The effectiveness of our methods is validated in two applications: decorrelated Batch Normalization (BN) and Global Covariance Pooling (GCP).", "Extensive experiments on visual recognition demonstrate that our methods can simultaneously improve covariance conditioning and generalization.", "The combinations with orthogonal weight can further boost the performance.", "Moreover, we show that our orthogonality techniques can benefit generative models for better latent disentanglement through a series of experiments on various benchmarks.", "Code is available at: \\href{https://github.com/KingJamesSong/OrthoImproveCond}{https://github.com/KingJamesSong/OrthoImproveCond}." ], [ "Introduction", "The Singular Value Decomposition (SVD) can factorize a matrix into orthogonal eigenbases and non-negative singular values, serving as an essential step for many matrix operations.", "Recently in computer vision and deep learning, many approaches integrated the SVD as a meta-layer in the neural networks to perform some differentiable spectral transformations, such as the matrix square root and inverse square root.", "The applications arise in a wide range of methods, including Global Covariance Pooling (GCP) [1], [2], [3], decorrelated Batch Normalization (BN) [4], [5], [6], Whitening an Coloring Transform (WCT) for universal style transfer [7], [8], [9], and Perspective-n-Point (PnP) problems [10], [11], [12].", "For the input feature map ${\\mathbf {X}}$ passed to the SVD meta-layer, one often first computes the covariance of the feature as ${\\mathbf {X}}{\\mathbf {X}}^{T}$ .", "This can ensure that the covariance matrix is both symmetric and positive semi-definite, which does not involve any negative eigenvalues and leads to the identical left and right eigenvector matrices.", "However, it is observed that inserting the SVD layer into deep models would typically make the covariance very ill-conditioned [2], resulting in deleterious consequences on the stability and optimization of the training process.", "For a given covariance ${\\mathbf {A}}$ , its conditioning is measured by the condition number: $\\kappa ({\\mathbf {A}}) = \\sigma _{max}({\\mathbf {A}}) \\sigma _{min}^{-1}({\\mathbf {A}}) $ where $\\sigma (\\cdot )$ denotes the eigenvalue of the matrix.", "Mathematically speaking, the condition number measures how sensitive the SVD is to the errors of the input.", "Matrices with low condition numbers are considered well-conditioned, while matrices with high condition numbers are said to be ill-conditioned.", "Specific to neural networks, the ill-conditioned covariance matrices are harmful to the training process in several aspects, which we will analyze in detail later.", "This phenomenon was first observed in the GCP methods by [2], and we found that it generally extrapolates to other SVD-related tasks, such as decorrelated 
BN.", "Fig.", "REF depicts the covariance conditioning of these two tasks throughout the training.", "As can be seen, the integration of the SVD layer makes the generated covariance very ill-conditioned (${\\approx }1e12$ for decorrelated BN and ${\\approx }1e16$ for GCP).", "By contrast, the conditioning of the approximate solver, i.e., Newton-Schulz iteration (NS iteration) [13], is about $1e5$ for decorrelated BN and is around $1e15$ for GCP, while the standard BN only has a condition number of $1e3$ .", "Figure: The covariance conditioning of the SVD meta-layer during the training process in the tasks of decorrelated BN (left) and GCP (Right).", "The decorrelated BN is based on ResNet-50 and CIFAR100, while ImageNet and ResNet-18 are used for the GCP.Ill-conditioned covariance matrices can harm the training of the network in both the forward pass (FP) and the backward pass (BP).", "For the FP, mainly the SVD solver is influenced in terms of stability and accuracy.", "Since the ill-conditioned covariance has many trivially-small eigenvalues, it is difficult for an SVD solver to accurately estimate them and large round-off errors are likely to be triggered, which might hurt the network performances.", "Moreover, the very imbalanced eigenvalue distribution can easily make the SVD solver fail to converge and cause the training failure [14], [2].", "For the BP, as pointed out in [15], [16], [4], the feature covariance is closely related to the Hessian matrix during the backpropagation.", "As the error curvature is given by the eigenvalues of the Hessian matrix [17], for the ill-conditioned Hessian, the Gradient Descent (GD) step would bounce back and forth in high curvature directions (large eigenvalues) and make slow progress in low curvature directions (small eigenvalues).", "As a consequence, the ill-conditioned covariance could cause slow convergence and oscillations in the optimization landscape.", "The generalization abilities of a deep model are thus harmed.", "Due to the data-driven learning nature and the highly non-linear transform of deep neural networks, directly giving the analytical form of the covariance conditioning is intractable.", "Some simplifications have to be performed to ease the investigation.", "Since the covariance is generated and passed from the previous layer, the previous layer is likely to be the most relevant to the conditioning.", "Therefore, we naturally limit our focus to the Pre-SVD layer, i.e., the layer before the SVD layer.", "To further simplify the analysis, we study the Pre-SVD layer in two consecutive training steps, which can be considered as a mimic of the whole training process.", "Throughout the paper, we mainly investigate some meaningful manipulations on the weight, the gradient, and the learning rate of the Pre-SVD layer in two sequential training steps.", "Under our Pre-SVD layer simplifications, one promising direction to improve the conditioning is enforcing orthogonality on the weights.", "Orthogonal weights have the norm-preserving property, which could improve the conditioning of the feature matrix.", "This technique has been widely studied in the literature of stable training and Lipschitz networks [18], [19], [20].", "We select some representative methods and validate their effectiveness in the task of decorrelated BN.", "Our experiment reveals that these orthogonal techniques can greatly improve the covariance conditioning, but could only bring marginal performance improvements and even slight degradation.", "This indicates that 
when the representation power of weight is limited, the improved conditioning does not necessarily lead to better performance.", "Orthogonalizing only the weight is thus insufficient to improve the generalization.", "Instead of seeking orthogonality constraints on the weights, we propose our Nearest Orthogonal Gradient (NOG) and Optimal Learning Rate (OLR).", "These two techniques explore the orthogonality possibilities about the learning rate and the gradient.", "More specifically, our NOG modifies the gradient of the Pre-SVD layer into its nearest-orthogonal form and keeps the GD direction unchanged.", "On the other hand, the proposed OLR dynamically changes the learning rate of the Pre-SVD layer at each training step such that the updated weight is as close to an orthogonal matrix as possible.", "The experimental results demonstrate that the proposed two techniques not only significantly improve the covariance conditioning but also bring obvious improvements in the validation accuracy of both GCP and decorrelated BN.", "Moreover, when combined with the orthogonal weight treatments, the performance can have further improvements.", "Besides the application on differentiable SVD, we propose that our orthogonality techniques can be also used for unsupervised latent disentanglement of Generative Adversarial Networks (GANs) [21].", "Recent works [22], [23] revealed that the latent disentanglement of GANs is closely related to the gradient or weight of the first projector after the latent code.", "In particular, the eigenvectors of the gradient or weight can be viewed as closed-formed solutions of interpretable directions [23].", "This raises the need for enforcing orthogonal constraints on the projector.", "As shown in Fig.", "REF , compared with non-orthogonal matrices, orthogonal matrices can lead to more disentangled representations and more precise attributes due to the property of equally-important eigenvectors.", "Motivated by this observation, we propose to enforce our NOG and OLR as orthogonality constraints in generative models.", "Extensive experiments on various architectures and datasets demonstrate that our methods indeed improve the disentanglement ability of identifying semantic attributes and achieve state-of-the-art performance against other disentanglement approaches.", "The main contributions are summarized below: We systematically study the problem of how to improve the covariance conditioning of the SVD meta-layer.", "We propose our Pre-SVD layer simplification to investigate this problem from the perspective of orthogonal constraints.", "We explore different techniques of orthogonal weights to improve the covariance conditioning.", "Our experiments reveal that these techniques could improve the conditioning but would harm the generalization abilities due to the limitation on the representation power of weight.", "We propose the nearest orthogonal gradient and optimal learning rate.", "The experiments on GCP and decorrelated BN demonstrate that these methods can attain better covariance conditioning and improved generalization.", "Their combinations with weight treatments can further boost the performance.", "We show that our proposed orthogonality approaches can be applied on the GANs projector for improved latent disentanglement ability of discovering precise semantic attributes, which opens the way for new applications of orthogonality techniques.", "This paper is an extension of the previous conference paper [24].", "In [24], we propose two orthogonality techniques 
and demonstrate that these methods can simultaneously improve the covariance conditioning and generalization abilities of the SVD meta-layer.", "This journal extension motivates and proposes that these techniques can also be applied in generative models for better latent disentanglement.", "This point is validated through extensive experiments on various generative architectures and datasets.", "Moreover, we also investigate the probability of occurrence of our OLR throughout the training and show that the evaluation results agree well with our theoretical analysis.", "The rest of the paper is organized as follows: Sec. describes the related work in differentiable matrix decomposition, orthogonality applications, and unsupervised latent disentanglement.", "Sec. introduces our Pre-SVD layer simplification and orthogonal weight treatments, and Sec. presents the proposed orthogonality techniques.", "Sec. motivates why orthogonality can improve latent disentanglement.", "Sec. provides experimental results and some in-depth analysis.", "Finally, Sec. summarizes the conclusions.", "The differentiable matrix decomposition is widely used in neural networks as a spectral meta-layer.", "Ionescu et al. [25], [26] first proposed the theory of matrix back-propagation and laid a foundation for the follow-up research.", "In deep neural networks, the transformation of matrix square root and its inverse are often desired due to the appealing spectral property.", "Their applications cover a wide range of computer vision tasks [6], [27].", "To avoid the huge time consumption of the SVD, some iterative methods are also developed to approximate the solution [13], [6], [27].", "Recently, Song et al. [28] proposed a dedicated eigen-solver for improving the computation speed of batched matrices.", "In [4], [8], [29], [30], [5], [6], the inverse square root is used in the ZCA whitening transform to whiten the feature map, which is also known as the decorrelated BN.", "The Global Covariance Pooling (GCP) models [1], [31], [32], [33], [2], [3], [34] compute the matrix square root of the covariance as a spectral normalization, which achieves impressive performances on some recognition tasks, including large-scale visual classification [1], [2], [33], [6], fine-grained visual categorization [1], [31], [34], and video action recognition [3].", "The Whitening and Coloring Transform (WCT), which uses both the matrix square root and inverse square root, is usually adopted in some image generation tasks such as neural style transfer [7], [9], image translation [35], [36], and domain adaptation [37], [38].", "In geometric vision problems, the differentiable SVD is usually applied to estimate the fundamental matrix and the camera pose [39], [12], [11].", "Besides the SVD-based factorization, the differentiable Cholesky decomposition [40] and some low-rank decompositions are used to approximate the attention mechanism [41], [42], [43] or to learn constrained representations [44], [45]."
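As a concrete illustration of the spectral meta-layers surveyed above, the following is a minimal PyTorch-style sketch (not the exact implementation of any cited method) of computing the matrix square root or inverse square root of a feature covariance through a differentiable eigendecomposition; the regularization constant eps is an assumption added here purely for numerical stability.

    import torch

    def covariance_power(x, inverse=False, eps=1e-5):
        # x: (B, C, N) reshaped convolutional features
        b, c, n = x.shape
        cov = x @ x.transpose(1, 2) / n                       # (B, C, C) symmetric PSD covariance
        cov = cov + eps * torch.eye(c, dtype=x.dtype, device=x.device)
        s, u = torch.linalg.eigh(cov)                         # eigenvalues (ascending) and eigenvectors
        s = s.clamp_min(eps)
        power = -0.5 if inverse else 0.5
        return u @ torch.diag_embed(s.pow(power)) @ u.transpose(1, 2)

In this sketch, a GCP-style head would feed the square root to the classifier, while a decorrelated-BN-style layer would multiply the inverse square root with the features.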
], [ "Orthogonality in Neural Network", "Orthogonal weights have the benefit of the norm-preserving property, i.e., the relation $||{\mathbf {W}}{\mathbf {A}}||_{\rm F}{=}||{\mathbf {A}}||_{\rm F}$ holds for any orthogonal ${\mathbf {W}}$ .", "When it comes to deep neural networks, such a property can ensure that the signal stably propagates through deep networks without either exploding or vanishing gradients [46], [47], which could speed up convergence and encourage robustness and generalization.", "In general, there are three ways to enforce orthogonality to a layer: orthogonal weight initialization [48], [18], [49], orthogonal regularization [50], [51], [52], [19], and explicit orthogonal weight via Cayley transform or matrix exponential [53], [54], [20].", "Among these techniques, orthogonal regularization and orthogonal weight are most commonly used as they often bring some practical improvements in generalization.", "Since the covariance is closely related to the weight matrix of the Pre-SVD layer, enforcing the orthogonality constraint could help to improve the covariance conditioning of the SVD meta-layer.", "We will choose some representative methods and validate their impact in Sec. REF .", "Notice that the focus of the existing literature is different from our work.", "The orthogonality constraints are often used to improve the Lipschitz constants of the neural network layers, which is expected to improve the visual quality in image generation [55], [56], to allow for better adversarial robustness [57], [20], and to improve generalization abilities [58], [19].", "Our work is concerned with improving the covariance conditioning and generalization performance.", "Moreover, the orthogonality literature mainly investigates how to enforce orthogonality to weight matrices, whereas less attention is put on the gradient and learning rate.", "In Sec. , we will explore such possibilities and propose our solutions: the nearest orthogonal gradient and the optimal learning rate, the latter being optimal in the sense that the updated weight is as close to an orthogonal matrix as possible."
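The norm-preserving property mentioned above can be checked numerically. The sketch below builds an orthogonal matrix with the Cayley transform (one of the explicit constructions cited in this section; the exact parametrizations in the cited works may differ) and verifies that it preserves Frobenius norms.

    import torch

    def cayley_orthogonal(w):
        # Cayley transform: for skew-symmetric S, (I - S)^{-1}(I + S) is orthogonal
        n = w.shape[0]
        s = w - w.T                                   # make the parameter skew-symmetric
        i = torch.eye(n, dtype=w.dtype)
        return torch.linalg.solve(i - s, i + s)

    w = torch.randn(64, 64)
    q = cayley_orthogonal(w)
    a = torch.randn(64, 128)
    print(torch.allclose(q.T @ q, torch.eye(64), atol=1e-4))   # orthogonality holds
    print(torch.norm(q @ a).item(), torch.norm(a).item())      # ||QA||_F == ||A||_F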
], [ "Unsupervised Latent Disentanglement of GANs", "Interpreting latent spaces of GAN models in an unsupervised manner has received wide attention recently [59], [60], [61], [62].", "This can help to identify semantic attributes of the image and to have precise control of the generation process, which could benefit both local and global image editing tasks [63], [22].", "Voynov et al.", "[61] proposed to jointly learn a set of directions and an extra classifier such that the interpretable directions can be recognized.", "In [64], the authors proposed to perform PCA on the sampled data to capture the interpretable directions.", "More recently, Shen et al.", "[23] and Zhu et al.", "[22] pointed out that the semantic attributes are characterized by the eigenvectors of the weight or gradient of the first projector after the latent code.", "Motivated by this observation, we propose to enforce our orthogonality techniques to the gradient and weight matrices.", "Besides our orthogonality techniques, a few works have applied implicit orthogonality into the training process of GANs to attain more disentangled representations [65], [66], [67], [68].", "In [65], [68], the authors proposed to add orthogonal Hessian/Jacobian penalty to encourage disentanglement.", "He et al.", "[67] designed a dedicated GAN architecture where multi-level latent codes and orthogonal weight constraints are applied.", "Different from previous approaches, our orthogonality treatments do not rely on any implicit regularization.", "Instead, our NOG explicitly maps the original gradient into the nearest-orthogonal form, while our OLR keeps the updated weight in the closest form to orthogonal matrices.", "In this section, we first motivate our simplification of the Pre-SVD layer, and then validate the efficacy of some representative weight treatments." 
], [ "Pre-SVD Layer Simplification", "The neural network consists of a sequential of non-linear layers where the learning of each layer is data-driven.", "Stacking these layers leads to a highly non-linear and complex transform, which makes directly analyzing the covariance conditioning intractable.", "To solve this issue, we have to perform some simplifications.", "Our simplifications involve limiting the analysis only to the layer previous to the SVD layer (which we dub as the Pre-SVD layer) in two consecutive training steps.", "The Pre-SVD layer directly determines the conditioning of the generated covariance, while the two successive training steps are a mimic of the whole training process.", "The idea is to simplify the complex transform by analyzing the sub-model (two layers) and the sub-training (two steps), which can be considered as an \"abstract representation\" of the deep model and its complete training.", "Let ${\\mathbf {W}}$ denote the weight matrix of the Pre-SVD layer.", "Then for the input ${\\mathbf {X}}_{l}$ passed to the layer, we have: ${\\mathbf {X}}_{l+1} = {\\mathbf {W}}{\\mathbf {X}}_{l} + \\mathbf {b}$ where ${\\mathbf {X}}_{l+1}$ is the feature passed to the SVD layer, and $\\mathbf {b}$ is the bias vector.", "Since the bias $\\mathbf {b}$ has a little influence here, we can sufficiently omit it for simplicity.", "The covariance in this step is computed as ${\\mathbf {W}}{\\mathbf {X}}_{l}{\\mathbf {X}}_{l}^{T}{\\mathbf {W}}^{T}$ .", "After the BP, the weight matrix is updated as $\\mathbf {W}{-}{\\eta }\\frac{\\partial l}{\\partial \\mathbf {W}}$ where $\\eta $ denotes the learning rate of the layer.", "Let ${\\mathbf {Y}}_{l}$ denote the passed-in feature of the next training step.", "Then the covariance is calculated as: $\\begin{aligned}{\\mathbf {C}}&= \\Big ( (\\mathbf {W}-\\eta \\frac{\\partial l}{\\partial \\mathbf {W}})\\cdot {\\mathbf {Y}}_{l} \\Big )\\Big ( (\\mathbf {W}-\\eta \\frac{\\partial l}{\\partial \\mathbf {W}})\\cdot {\\mathbf {Y}}_{l} \\Big )^{T}\\\\&={\\begin{array}{c}{\\mathbf {W}}{\\mathbf {Y}}_{l}{\\mathbf {Y}}_{l}^{T}{\\mathbf {W}}^{T} {-} \\eta \\frac{\\partial l}{\\partial \\mathbf {W}}{\\mathbf {Y}}_{l}{\\mathbf {Y}}_{l}^{T}{\\mathbf {W}}^{T}\\\\ {-} \\eta {\\mathbf {W}}{\\mathbf {Y}}_{l}{\\mathbf {Y}}_{l}^{T}(\\frac{\\partial l}{\\partial \\mathbf {W}})^{T} {+} \\eta ^{2}\\frac{\\partial l}{\\partial \\mathbf {W}}{\\mathbf {Y}}_{l}{\\mathbf {Y}}_{l}^{T}(\\frac{\\partial l}{\\partial \\mathbf {W}})^{T}\\end{array}}\\end{aligned}$ where ${\\mathbf {C}}$ denotes the generated covariance of the second step.", "Now the problem becomes how to stop the new covariance ${\\mathbf {C}}$ from becoming worse-conditioned than ${\\mathbf {W}}{\\mathbf {X}}_{l}{\\mathbf {X}}_{l}^{T}{\\mathbf {W}}^{T}$ .", "In eq.", "(REF ), three variables could influence the conditioning: the weight ${\\mathbf {W}}$ , the gradient of the last step $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ , and the learning rate $\\eta $ of this layer.", "Among them, the weight ${\\mathbf {W}}$ seems to be the most important as it contributes to three terms of eq.", "(REF ).", "Moreover, the first term ${\\mathbf {W}}{\\mathbf {Y}}_{l}{\\mathbf {Y}}_{l}^{T}{\\mathbf {W}}^{T}$ computed by ${\\mathbf {W}}$ is not attenuated by $\\eta $ or $\\eta ^2$ like the other terms.", "Therefore, it is natural to first consider manipulating ${\\mathbf {W}}$ such that the conditioning of ${\\mathbf {C}}$ could be improved." 
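To make the two-step simplification concrete, a small numerical sketch (with synthetic tensors, purely for illustration) of the covariance produced after one gradient update of the Pre-SVD layer, together with its condition number, could look as follows; the dimensions and learning rate are arbitrary assumptions.

    import torch

    torch.manual_seed(0)
    c_dim, n, eta = 64, 256, 0.1
    W = torch.randn(c_dim, c_dim)
    grad = torch.randn(c_dim, c_dim)          # stands for dl/dW from the previous step
    Y = torch.randn(c_dim, n)                 # feature passed to the Pre-SVD layer

    W_new = W - eta * grad                    # gradient-descent update of the Pre-SVD layer
    cov = (W_new @ Y) @ (W_new @ Y).T         # covariance fed to the SVD meta-layer

    def condition_number(mat):
        s = torch.linalg.svdvals(mat)
        return (s.max() / s.min()).item()

    print(condition_number(cov))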
], [ "General Treatments on Weights", "In the literature of enforcing orthogonality to the neural network, there are several techniques to improve the conditioning of the weight ${\\mathbf {W}}$ .", "Now we introduce some representatives methods and validate their impacts." ], [ "Spectral Normalization (SN)", "In [56], the authors propose a normalization method to stabilize the training of generative models [21] by dividing the weight matrix with its largest eigenvalue.", "The process is defined as: ${\\mathbf {W}}/ \\sigma _{max}({\\mathbf {W}})$ Such a normalization can ensure that the spectral radius of ${\\mathbf {W}}$ is always 1, i.e., $\\sigma _{max}({\\mathbf {W}}){=}1$ .", "This could help to reduce the conditioning of the covariance since we have $\\sigma _{max}({\\mathbf {W}}{\\mathbf {Y}}_{l}){=}\\sigma _{max}({\\mathbf {Y}}_{l})$ after the spectral normalization." ], [ "Orthogonal Loss (OL)", "Besides limiting the spectral radius of ${\\mathbf {W}}$ , enforcing orthogonality constraint could also improve the covariance conditioning.", "As orthogonal matrices are norm-preserving (i.e., $||{\\mathbf {W}}{\\mathbf {Y}}_{l}||_{\\rm F}{=}||{\\mathbf {W}}||_{\\rm F}$ ), lots of methods have been proposed to encourage orthogonality on weight matrices for more stable training and better signal-preserving property [69], [51], [19], [54], [20].", "One common technique is to apply soft orthogonality [19] by the following regularization: $l=||{\\mathbf {W}}{\\mathbf {W}}^{T}-{\\mathbf {I}}||_{\\rm F}$ This extra loss is added in the optimization objective to encourage more orthogonal weight matrices.", "However, since the constraint is achieved by regularization, the weight matrix is not exactly orthogonal at each training step." ], [ "Orthogonal Weights (OW)", "Instead of applying soft orthogonality by regularization, some methods can explicitly enforce hard orthogonality to the weight matrices [54], [20].", "The technique of [20] is built on the mathematical property: for any skew-symmetric matrix, its matrix exponential is an orthogonal matrix.", "$\\exp ({\\mathbf {W}}-{\\mathbf {W}}^{T})\\exp ({\\mathbf {W}}-{\\mathbf {W}}^{T})^{T}={\\mathbf {I}}$ where the operation of ${\\mathbf {W}}{-}{\\mathbf {W}}^{T}$ is to make the matrix skew-symmetric, i.e., the relation ${\\mathbf {W}}{-}{\\mathbf {W}}^{T}{=}-({\\mathbf {W}}{-}{\\mathbf {W}}^{T})^{T}$ always holds.", "Then $\\exp ({\\mathbf {W}}{-}{\\mathbf {W}}^{T})$ is used as the weight.", "This technique explicitly constructs the weight as an orthogonal matrix.", "The orthogonal constraint is thus always satisfied during the training.", "Figure: Performance of different weight treatments on ResNet-50 and CIFAR100 based on 10 runs.We apply the above three techniques in the experiment of decorrelated BN.", "Fig.", "REF displays the covariance conditioning throughout the training, and Table REF presents the corresponding validation errors.", "As can be seen, all of these techniques attain much better conditioning, but the performance improvements are not encouraging.", "The SN reduces the conditioning to around $10^{5}$ , while the validation error marginally improves.", "The soft orthogonality by the OL brings slight improvement on the performance despite some variations in the conditioning.", "The conditioning variations occur because the orthogonality constraint by regularization is not strictly enforced.", "Among the weight treatments, the hard orthogonality by the OW achieves the best covariance conditioning, continuously maintaining 
the condition number around $10^{3}$ throughout the training.", "However, the OW slightly hurts the validation error.", "This implies that better covariance conditioning does not necessarily correspond to improved performance, and orthogonalizing only the weight cannot improve the generalization.", "We conjecture that enforcing strict orthogonality only on the weight might limit its representation power.", "Nonetheless, as will be discussed in Sec. REF , the side effect can be canceled when we simultaneously orthogonalize the gradient." ], [ "Nearest Orthogonal Gradient and Optimal Learning Rate", "This section introduces our proposed techniques on modifying the gradient and learning rate of the Pre-SVD layer.", "The combinations with weight treatments are also discussed." ], [ "Nearest Orthogonal Gradient (NOG)", "As discussed in Sec. REF , the covariance conditioning is also influenced by the gradient $\frac{\partial l}{\partial {\mathbf {W}}}$ .", "However, the existing literature mainly focuses on orthogonalizing the weights.", "To make the gradient also orthogonal, we propose to find the nearest orthogonal gradient of the Pre-SVD layer.", "Different matrix nearness problems have been studied in [70], and the nearest-orthogonal problem is defined as: $\min _{{\mathbf {R}}} ||\frac{\partial l}{\partial {\mathbf {W}}}-{\mathbf {R}}||_{\rm F}\ subject\ to\ {\mathbf {R}}{\mathbf {R}}^{T}={\mathbf {I}}$ where ${\mathbf {R}}$ is the sought solution.", "To obtain such an orthogonal matrix, we can construct the error function as: $e({\mathbf {R}}) = Tr\Big ((\frac{\partial l}{\partial {\mathbf {W}}}-{\mathbf {R}})^{T}(\frac{\partial l}{\partial {\mathbf {W}}}-{\mathbf {R}})\Big ) + Tr\Big (\mathbf {\Sigma } ({\mathbf {R}}^{T}{\mathbf {R}}-{\mathbf {I}})\Big )$ where $Tr(\cdot )$ is the trace operator, and $\mathbf {\Sigma }$ denotes the symmetric matrix Lagrange multiplier.", "The closed-form solution is given by: ${\mathbf {R}}= \frac{\partial l}{\partial {\mathbf {W}}} \Big (( \frac{\partial l}{\partial {\mathbf {W}}})^{T} \frac{\partial l}{\partial {\mathbf {W}}}\Big )^{-\frac{1}{2}}.$ The detailed derivation is given in the supplementary material.", "If we have the SVD of the gradient (${\mathbf {U}}{\mathbf {S}}{\mathbf {V}}^{T}{=}\frac{\partial l}{\partial {\mathbf {W}}}$ ), the solution can be further simplified as: ${\mathbf {R}}= {\mathbf {U}}{\mathbf {S}}{\mathbf {V}}^{T} ({\mathbf {V}}{\mathbf {S}}^{-1}{\mathbf {V}}^{T})={\mathbf {U}}{\mathbf {V}}^{T}.$ As indicated above, the nearest orthogonal gradient is obtained by setting the singular value matrix to the identity matrix, i.e., setting ${\mathbf {S}}$ to ${\mathbf {I}}$ .", "Notice that only the gradient of the Pre-SVD layer is changed, while that of the other layers is not modified.", "Our proposed NOG can bring several practical benefits.", "The orthogonal constraint is exactly enforced on the gradient as we have $({\mathbf {U}}{\mathbf {V}}^{T})^{T}{\mathbf {U}}{\mathbf {V}}^{T}{=}{\mathbf {I}}$ .", "Since we explicitly set all the singular values to 1, the optimal conditioning is also achieved, i.e., $\kappa (\frac{\partial l}{\partial {\mathbf {W}}}){=}1$ .", "This could help to improve the conditioning."
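A minimal sketch of the NOG update, directly following the closed form R = UV^T above: the gradient of the Pre-SVD layer is replaced by the product of its singular vector matrices. The hook-free, manual post-backward application shown here is only one possible way to wire it into training, and pre_svd_layer is a placeholder name.

    import torch

    def nearest_orthogonal(grad):
        # Replace the singular values of the gradient with ones: R = U V^T
        u, s, vh = torch.linalg.svd(grad, full_matrices=False)
        return u @ vh

    def apply_nog(pre_svd_layer):
        # call after loss.backward(); only the Pre-SVD layer gradient is modified
        w = pre_svd_layer.weight
        if w.grad is not None:
            g = w.grad.reshape(w.shape[0], -1)
            w.grad.copy_(nearest_orthogonal(g).reshape_as(w.grad))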
], [ "Keeping Gradient Descent Direction Unchanged", "In the high-dimensional optimization landscape, the many curvature directions (GD directions) are characterized by the eigenvectors of gradient (${\\mathbf {U}}$ and ${\\mathbf {V}}$ ).", "Although our modification changes the gradient, the eigenvectors and the GD directions are untouched.", "In other words, our NOG only adjusts the step size in each GD direction.", "This indicates that the modified gradients will not harm performance." ], [ "Combination with Weight Treatments", "Our orthogonal gradient and the previous weight treatments are complementary.", "They can be jointly used to simultaneously orthogonalize the gradient and weight.", "In the following, we will validate their joint impact on the conditioning and performance.", "Figure: Performance of gradient treatments on ResNet-50 and CIFAR100.", "Each result is based on 10 runs.Fig.", "REF and Table REF present the covariance conditioning of decorrelated BN and the corresponding validation errors, respectively.", "As we can observe, solely using the proposed NOG can largely improve the covariance conditioning, decreasing the condition number from $10^{12}$ to $10^6$ .", "Though this improvement is not as significant as the orthogonal constraints (e.g., OL and OW), our NOG can benefit more the generalization abilities, leading to the improvement of validation error by $0.6\\%$ .", "Combining the SN with our NOG does not lead to obvious improvements in either the conditioning or validation errors, whereas the joint use of NOG and OL harms the network performances.", "This is because the orthogonality constraint by loss might not be enforced under the gradient manipulation.", "When our NOG is combined with the OW, the side effect of using only OW is eliminated and the performance is further boosted by $0.3\\%$ .", "This phenomenon demonstrates that when the gradient is orthogonal, applying the orthogonality constraint to the weight could also be beneficial to the generalization." 
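For completeness, a sketch of the hard orthogonal weight used in the OW+NOG combination discussed above: the weight is parametrized as the matrix exponential of a skew-symmetric matrix, so the forward pass always sees an exactly orthogonal matrix. This is a simplified illustration under that parametrization; the cited implementation may differ in details such as initialization and scaling.

    import torch
    import torch.nn as nn

    class OrthogonalLinear(nn.Module):
        # weight = exp(P - P^T) is orthogonal for any parameter matrix P
        def __init__(self, dim):
            super().__init__()
            self.param = nn.Parameter(torch.randn(dim, dim) * 0.01)

        def weight(self):
            skew = self.param - self.param.T
            return torch.matrix_exp(skew)

        def forward(self, x):
            return x @ self.weight().T

Combining it with NOG then simply amounts to applying the gradient projection of the previous sketch to self.param after each backward pass.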
], [ "Optimal Learning Rate (OLR)", "So far, we have only considered orthogonalizing ${\mathbf {W}}$ and $\frac{\partial l}{\partial {\mathbf {W}}}$ separately, but how to jointly optimize ${\mathbf {W}}{-}{\eta }\frac{\partial l}{\partial {\mathbf {W}}}$ has not been studied yet.", "Actually, it is desired to choose an appropriate learning rate $\eta $ such that the updated weight is close to an orthogonal matrix.", "To this end, we need to achieve the following objective: $\min _{\eta } ||({\mathbf {W}}-{\eta }\frac{\partial l}{\partial {\mathbf {W}}})({\mathbf {W}}-{\eta }\frac{\partial l}{\partial {\mathbf {W}}})^{T}-{\mathbf {I}}||_{\rm F}$ This optimization problem can be more easily solved in the vector form.", "Let $\mathbf {w}$ , ${\mathbf {i}}$ , and $\mathbf {l}$ denote the vectorized ${\mathbf {W}}$ , ${\mathbf {I}}$ , and $\frac{\partial l}{\partial {\mathbf {W}}}$ , respectively.", "Then we construct the error function as: $e(\eta ) = \Big ((\mathbf {w}-\eta \mathbf {l})^{T}(\mathbf {w}-\eta \mathbf {l})-\mathbf {i}\Big )^{T}\Big ((\mathbf {w}-\eta \mathbf {l})^{T}(\mathbf {w}-\eta \mathbf {l})-\mathbf {i}\Big )$ Expanding and differentiating the equation w.r.t. $\eta $ leads to: ${\begin{array}{c}\frac{d e(\eta )}{d \eta } \approx -4{\mathbf {w}}^{T}{\mathbf {w}}\,{\mathbf {l}}^{T}{\mathbf {w}}+ 4\eta {\mathbf {w}}^{T}{\mathbf {w}}\,{\mathbf {l}}^{T}{\mathbf {l}}+ 8\eta {\mathbf {l}}^{T}{\mathbf {w}}\,{\mathbf {l}}^{T}{\mathbf {w}}=0\\\eta ^{\star } \approx \frac{{\mathbf {w}}^{T}{\mathbf {w}}{\mathbf {l}}^{T}{\mathbf {w}}}{{\mathbf {w}}^{T}{\mathbf {w}}{\mathbf {l}}^{T}{\mathbf {l}}+2{\mathbf {l}}^{T}{\mathbf {w}}{\mathbf {l}}^{T}{\mathbf {w}}}\end{array}}$ where some higher-order terms are neglected.", "The detailed derivation is given in the supplementary material.", "Though the proposed OLR yields the updated weight nearest to an orthogonal matrix theoretically, the value of $\eta ^{\star }$ is unbounded for arbitrary ${\mathbf {w}}$ and ${\mathbf {l}}$ .", "Directly using $\eta ^{\star }$ might cause unstable training.", "To avoid this issue, we propose to use the OLR only when its value is smaller than the learning rate of the other layers.", "Let $lr$ denote the learning rate of the other layers.", "The switch process can be defined as: $\eta ={\left\lbrace \begin{array}{ll}\eta ^{\star } & if\ \eta ^{\star }<lr\\lr & otherwise\end{array}\right. }$" ], [ "Combination with Weight/Gradient Treatments", "When either the weight or the gradient is orthogonal, our OLR needs to be carefully used.", "When only ${\mathbf {W}}$ is orthogonal, ${\mathbf {w}}^{T}{\mathbf {w}}$ is a small constant and it is very likely to have ${\mathbf {w}}^{T}{\mathbf {w}}{\ll }{\mathbf {l}}^{T}{\mathbf {w}}$ .", "Consequently, we have ${\mathbf {w}}^{T}{\mathbf {w}}{\mathbf {l}}^{T}{\mathbf {w}}{\ll }{\mathbf {l}}^{T}{\mathbf {w}}{\mathbf {l}}^{T}{\mathbf {w}}$ and $\eta ^{\star }$ will attenuate to zero.", "Similarly, for an orthogonal gradient we have ${\mathbf {w}}^{T}{\mathbf {w}}{\mathbf {l}}^{T}{\mathbf {w}}{\ll }{\mathbf {l}}^{T}{\mathbf {w}}{\mathbf {l}}^{T}{\mathbf {l}}$ and this will cause $\eta ^{\star }$ close to zero.", "Therefore, the proposed OLR cannot work when either the weight or gradient is orthogonal.", "Nonetheless, we note that if both ${\mathbf {W}}$ and $\frac{\partial l}{\partial {\mathbf {W}}}$ are orthogonal, our $\eta ^{\star }$ is
bounded.", "Specifically, we have: Proposition 1 When both ${\\mathbf {W}}$ and $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ are orthogonal, $\\eta ^{\\star }$ is both upper and lower bounded.", "The upper bound is $\\frac{N^2}{N^2 + 2}$ and the lower bound is $\\frac{1}{N^{2}+2}$ where $N$ denotes the row dimension of ${\\mathbf {W}}$ .", "We give the detailed proof in the supplementary material.", "Obviously, the upper bound of $\\eta ^{\\star }$ is smaller than 1.", "For the lower bound, since the row dimension of $N$ is often large (e.g., 64), the lower bound of $\\eta ^{\\star }$ can be according very small (e.g., $2e{-}4$ ).", "This indicates that our proposed OLR could also give a small learning rate even in the later stage of the training process.", "In summary, the optimal learning rate is set such that the updated weight is optimal in the sense that it become as close to an orthogonal matrix as possible.", "In particular, it is suitable when both the gradient and weight are orthogonal.", "Figure: Performance of optimal learning rate on ResNet-50 and CIFAR100 based on 10 runs.We give the covariance conditioning and the validation errors in Fig.", "REF and in Table REF , respectively.", "Our proposed OLR significantly reduces the condition number to $10^{4}$ and improves the validation error by $0.5\\%$ .", "When combined with either orthogonal weight or orthogonal gradient, there is a slight degradation on the validation errors.", "This meets our expectation as $\\eta ^{\\star }$ would attenuate to zero in both cases.", "However, when both ${\\mathbf {W}}$ and $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ are orthogonal, jointly using our OLR achieves the best performance, outperforming only OLR by $0.5\\%$ and beating OW$+$ NOG by $0.2\\%$ .", "This observation confirms that the proposed OLR works well for simultaneously orthogonal ${\\mathbf {W}}$ and $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ ." ], [ "Orthogonality for Unsupervised Latent Disentanglement", "In this section, we motivate why orthogonal treatments (orthogonal weight or gradient) would help in unsupervised latent disentanglement of GANs." 
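Before turning to the latent-space analysis, the optimal learning rate of the preceding section can be summarized in a short sketch of eq. for eta*. The positivity guard on eta* is an extra assumption of this illustration rather than part of the described switch rule.

    import torch

    def optimal_lr(weight, grad, base_lr):
        # eta* = (w^T w)(l^T w) / ((w^T w)(l^T l) + 2 (l^T w)^2),
        # used only when it is smaller than the learning rate of the other layers
        w = weight.reshape(-1)
        l = grad.reshape(-1)
        ww, ll, lw = w.dot(w), l.dot(l), l.dot(w)
        eta = ((ww * lw) / (ww * ll + 2 * lw * lw)).item()
        return eta if 0 < eta < base_lr else base_lr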
], [ "Image Manipulation in Latent Space of GANs", "The latent space of GANs encodes rich semantic information, which can be used for image editing via the vector arithmetic property [71].", "Consider a generator $G(\cdot )$ and the latent code $\mathbf {z}{\in }\mathbb {R}^{d}$ .", "The image manipulation is achieved by finding a semantically meaningful direction $\mathbf {n}$ such that $\texttt {edit}(G(\mathbf {z}))=G(\mathbf {z}+\alpha \mathbf {n})$ where $\texttt {edit}(\cdot )$ denotes the image editing process, and $\alpha $ represents the perturbation strength.", "That being said, moving the latent code $\mathbf {z}$ along the interpretable direction $\mathbf {n}$ should change the targeted semantic concept of the image.", "Since the generator $G(\cdot )$ is highly non-linear and complex, directly analyzing $G(\mathbf {z}+\alpha \mathbf {n})$ is intractable.", "To avoid this issue, existing approaches propose to simplify the analysis by considering only the first projector matrix $G_{1}(\cdot )$ or performing local Taylor expansion [23], [22], [72], [73].", "Eigenvector of the first projector.", "In SeFa [23], the authors propose to seek interpretable directions from the eigenvectors of the first projector matrix.", "Specifically, they consider the affine transformation of the layer as: $G_{1}(\mathbf {z}+\alpha \mathbf {n}) = \mathbf {A}\mathbf {z} + \mathbf {b} + \alpha \mathbf {A}\mathbf {n} = G_{1}(\mathbf {z}) + \alpha \mathbf {A}\mathbf {n}$ where $\mathbf {A}$ is the weight matrix.", "Intuitively, a meaningful direction should lead to large variations of the generated image.", "So the problem can be cast into an optimization problem as: $\mathbf {n}^{\star } = \arg \max _{\left\Vert \mathbf {n}\right\Vert =1} ||\mathbf {A}\mathbf {n}||^{2}$ All the possible closed-form solutions correspond to the eigenvectors of $\mathbf {A}^{T}\mathbf {A}$ .", "The top-$k$ eigenvectors are thus selected as the interpretable directions for image manipulation.", "Eigenvector of the Jacobian.", "LowRankGAN [22] proposes to linearly approximate $G(\mathbf {z}+\alpha \mathbf {n})$ by the Taylor expansion as: $G(\mathbf {z}+\alpha \mathbf {n}) \approx G(\mathbf {z}) + \alpha \mathbf {J}_{\mathbf {z}}\mathbf {n}$ where $\mathbf {J}_{\mathbf {z}}$ is the Jacobian matrix w.r.t. the latent code $\mathbf {z}$ .", "Similarly to the deduction of eq. (REF ), the closed-form solutions are given by the eigenvectors of $\mathbf {J}_{\mathbf {z}}^{T}\mathbf {J}_{\mathbf {z}}$ .", "The above two formulations illustrate how the weight and gradient matrices are related to interpretable direction discovery.", "Currently, most GAN models do not enforce orthogonality in their architectures.", "Now we turn to explaining the concrete benefit of introducing orthogonality to the latent disentanglement."
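The closed-form direction discovery described above can be sketched in a few lines: the top-k eigenvectors of A^T A of the first projector weight are taken as candidate interpretable directions, and the latent code is shifted along one of them. The layer and variable names are placeholders, not part of any specific cited codebase.

    import torch

    def interpretable_directions(first_layer_weight, k=5):
        # first_layer_weight: the matrix A of the first projector after the latent code
        a = first_layer_weight                      # (out_dim, latent_dim)
        _, eigvecs = torch.linalg.eigh(a.T @ a)     # eigenvalues in ascending order
        return eigvecs[:, -k:].flip(-1).T           # top-k directions, shape (k, latent_dim)

    def edit(generator, z, direction, alpha=3.0):
        # move the latent code along an interpretable direction n: G(z + alpha * n)
        return generator(z + alpha * direction)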
], [ "Usefulness of Orthogonality", "Though few previous works have applied implicit orthogonality as regularization in GANs [61], [65], [67], [68], there are no generally accepted explanations on how the orthogonality is related to the disentangled representations.", "Here we give an intuitive explanation.", "As discussed in the above image manipulation modelling, the eigenvectors of weight and gradient matrices naturally imply the interpretable directions for latent disentanglement.", "For common non-orthogonal matrices, the importance of each eigenvector is characterized by the corresponding eigenvalue.", "Each eigenvector is not equally important and the first few ones would dominate the spectrum.", "This imbalance would cause most semantic attributes entangled in the first few directions.", "Fig.", "REF top illustrates this phenomenon: moving the latent code along with the top-1 eigenvector direction triggers changes of many semantic attributes.", "On the contrary, the small eigenvector direction does not indicate any semantic changes.", "The learned representation are thus deemed entangled.", "The orthogonal matrices can greatly relieve this issue thanks to the flat spectrum and equally-important eigenvectors.", "As shown in Fig.", "REF bottom, when our NOG and OLR are applied, each direction of the orthogonal matrix is equally important and corresponds to one semantic attribute.", "Shifting the latent code in one direction only changes the targeting semantic concept, while the identity and other attributes are not touched.", "Enforcing orthogonality would lead to the superior disentanglement of learned representations.", "Our proposed NOG and OLR can serve as strict orthogonal gradient constraint and relaxed orthogonal weight constraint, respectively.", "Enforcing them on the first layer after the latent code during the training process is very likely to lead to more disentangled representations.", "In Sec.", "REF , we apply these two techniques in various GAN architectures and benchmarks for unsupervised latent disentanglement." 
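The intuition about flat spectra can also be checked numerically. The following sketch compares the singular-value spread of a generic projector weight with that of an orthogonalized surrogate (obtained here by a QR factorization purely for illustration, not by the training-time treatments of this paper).

    import torch

    torch.manual_seed(0)
    a = torch.randn(512, 512)                 # generic (non-orthogonal) projector weight
    q, _ = torch.linalg.qr(a)                 # an orthogonal surrogate, for illustration only

    for name, m in [("non-orthogonal", a), ("orthogonal", q)]:
        s = torch.linalg.svdvals(m)
        print(name, "sigma_max / sigma_min =", (s.max() / s.min()).item())
    # the orthogonal matrix has a flat spectrum (ratio 1), i.e., equally-important directions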
], [ "Covariance Conditioning", "We validate the proposed approaches in two applications: GCP and decorrelated BN.", "These two tasks are very representative because they have different usages of the SVD meta-layer.", "The GCP uses the matrix square root, while the decorrelated BN applies the inverse square root.", "In addition, the models of decorrelated BN often insert the SVD meta-layer at the beginning of the network, whereas the GCP models integrate the layer before the FC layer.", "We use ResNet-50 [74] as the backbone for the experiment on CIFAR10 and CIFAR100 [75].", "The kernel size of the first convolution layer of ResNet is $7{\\times }7$ , which might not suit the low resolution of these two datasets (the images are only of size $32{\\times }32$ ).", "To avoid this issue, we reduce the kernel size of the first convolution layer to $3{\\times }3$ .", "The stride is also decreased from 2 to 1.", "The BN layer after this layer is replace with our decorrelated BN layer (see Fig.", "REF ).", "Let ${\\mathbf {X}}{\\in }\\mathbb {R}^{C{\\times }BHW}$ denotes the reshaped feature.", "The whitening transform is performed as: ${\\mathbf {X}}_{whitened} = ({\\mathbf {X}}{\\mathbf {X}}^{T})^{-\\frac{1}{2}} {\\mathbf {X}}$ Compared with the vanilla BN that only standardizes the data, the decorrelated BN can further eliminate the data correlation between each dimension.", "Table: Performance comparison of decorrelated BN methods on CIFAR10/CIFAR100  based on ResNet-50 .", "We report each result based on 10 runs.", "The best four results are highlighted in redred, blueblue, greengreen, and cyancyan respectively.Table REF compares the performance of each method on CIFAR10/CIFAR100 [75] based on ResNet-50 [74].", "Both of our NOG and OLR achieve better performance than other weight treatments and the SVD.", "Moreover, when hybrid treatments are adopted, we can observe step-wise steady improvements on the validation errors.", "Among these techniques, the joint usage of OLR with NOG and OW achieves the best performances across metrics and datasets, outperforming the SVD baseline by $0.4\\%$ on CIFAR10 and by $0.9\\%$ on CIFAR100.", "This demonstrates that these treatments are complementary and can benefit each other.", "Table: Performance comparison of decorrelated BN methods on CIFAR100  with DenseNet-121  and MobileNet-v2  based on 10 runs.", "The best four results are highlighted in redred, blueblue, greengreen, and cyancyan respectively.Table REF presents the validation errors on CIFAR100 with DenseNet-121 [76] and MobileNet-v2 [77].", "The results are coherent with those on ResNet-50 [74]: our methods bring consistent performance improvements to the ordinary SVD on different architectures.", "This demonstrates the model-agnostic property of the proposed orthogonality approaches.", "Fig.", "REF displays the corresponding best validation accuracy during the training process.", "Our method can also accelerate the convergence of the training process.", "The acceleration is particularly significant in the initial training stage.", "Figure: The best validation accuracy during the training process.", "Our proposed techniques can consistently improve the convergence speed and help the model to achieve better accuracy within fewer training epochs.Finally, we would like to note that the performance gain of our methods depends on the specific architectures and the ill-conditioned extent of the covariance.", "Generally speaking, the larger the model is, the worse-conditioned the covariance is and the 
larger the performance gain would be.", "Take the above decorrelated BN experiments as an example, the accuracy improvement on MobileNet is around $1.5\\%$ , while the performance gain on larger DenseNet is about $4.0\\%$ ." ], [ "Global Covariance Pooling", "We use ResNet-18 [74] for the GCP experiment and train it from scratch on ImageNet [78].", "Fig.", "REF displays the overview of a GCP model.", "For the ResNet backbone, the last Global Average Pooling (GAP) layer is replaced with our GCP layer.", "Consider the final batched convolutional feature ${\\mathbf {X}}{\\in }\\mathbb {R}^{B{\\times }C{\\times }HW}$ .", "We compute the matrix square root of its covariance as: ${\\mathbf {Q}}= ({\\mathbf {X}}{\\mathbf {X}}^{T})^{\\frac{1}{2}}$ where ${\\mathbf {Q}}{\\in }\\mathbb {R}^{B{\\times }C{\\times }C}$ is used as the final representation and directly passed to the fully-connected (FC) layer.", "Table: Performance comparison of different GCP methods on ImageNet  based on ResNet-18 .", "The failure times denote the total times of non-convergence of the SVD solver during one training process.", "The best four results are highlighted in redred, blueblue, greengreen, and cyancyan respectively.Table REF presents the total failure times of the SVD solver in one training process and the validation accuracy on ImageNet [78] based on ResNet-18 [74].", "The results are very coherent with our experiment of decorrelated BN.", "Among the weight treatments, the OL and OW hurt the performance, while the SN improves that of SVD by $0.2\\%$ .", "Our proposed NOG and OLR outperform the weight treatments and improve the SVD baseline by $0.4\\%$ and by $0.3\\%$ , respectively.", "Moreover, the combinations with the orthogonal weight further boost the performance.", "Specifically, combining NOG and OW surpasses the SVD by $0.6\\%$ .", "The joint use of OW with NOG and OLR achieves the best performance among all the methods and beats the SVD by $0.7\\%$ .", "Figure: Latent traversal on AnimeFace .", "The EigenGAN has entangled attributes in the identified interpretable directions, while our methods achieve better disentanglement and each direction corresponds to a unique attribute.Figure: The covariance conditioning of GCP methods in the later stage of the training.", "The periodic spikes are caused by the evaluation on the validation set after every epoch.Fig.", "REF depicts the covariance conditioning in the later training stage.", "Our OLR and the OW both reduce the condition number by around $1e15$ , whereas the proposed NOG improves the condition number by $2e15$ .", "When hybrid treatments are used, combining NOG and OW attains better conditioning than the separate usages.", "Furthermore, simultaneously using all the techniques leads to the best conditioning and improves the condition number by $5e15$ .", "The covariance conditioning of GCP tasks is not improved as much as that of decorrelated BN.", "This might stem from the unique architecture of GCP models: the covariance is directly used as the final representation and fed to the FC layer.", "We conjecture that this setup might cause the covariance to have a high condition number.", "The approximate solver (NS iteration) does not have well-conditioned matrices either (${\\approx }1e15$ ), which partly supports our conjecture." 
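The two uses of the spectral layer compared in these experiments can be summarized in a few lines. The sketch below is an illustrative NumPy version, assuming an eigendecomposition of the (slightly ridge-regularized) covariance; the centering step follows the covariance definition given in the appendix, and the feature shapes and the epsilon value are placeholder choices rather than the paper's exact settings.

```python
import numpy as np

def matrix_power_psd(P, power, eps=1e-5):
    """P^power for a PSD matrix P via eigendecomposition (power = 0.5 or -0.5)."""
    eigvals, U = np.linalg.eigh(P + eps * np.eye(P.shape[0]))  # small ridge for stability
    return (U * np.clip(eigvals, eps, None) ** power) @ U.T

def decorrelated_bn(X):
    """Whitening transform X_whitened = (X X^T)^(-1/2) X for X of shape (C, B*H*W)."""
    Xc = X - X.mean(axis=1, keepdims=True)        # center each channel
    cov = Xc @ Xc.T / Xc.shape[1]
    return matrix_power_psd(cov, -0.5) @ Xc

def gcp_representation(X):
    """Global covariance pooling Q = (X X^T)^(1/2) for one sample X of shape (C, H*W)."""
    cov = X @ X.T / X.shape[1]
    return matrix_power_psd(cov, 0.5)             # (C, C), passed to the FC layer

# Illustrative check: the whitened feature has (approximately) identity covariance.
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 4096))
white = decorrelated_bn(feat)
print(np.allclose(white @ white.T / white.shape[1], np.eye(64), atol=1e-3))
```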
], [ "Computational Cost", "Table REF compares the time consumption of a single training step for the experiment of decorrelated BN.", "Our NOG and OLR bring negligible computational costs to the BP ($2\\%$ and $1\\%$ ), while the FP is not influenced.", "Even when all techniques are applied, the overall time costs are marginally increased by $10\\%$ .", "Notice that NOG and OLR have no impact on the inference speed." ], [ "Latent Disentanglement", "In this subsection, we first introduce the experiment setup, followed by the evaluation results on different GAN architectures and datasets.", "We defer the implementation details to the Supplementary Material." ], [ "Experimental Setup", "Models.", "We evaluate our methods on EigenGAN [67] and vanilla GAN [21].", "EigenGAN [67] is a particular GAN architecture dedicated to latent disentanglement.", "It progressively injects orthogonal subspaces into each layer of the generator, which can mine controllable semantic attributes in an unsupervised manner.", "For the vanilla GAN [21], we adopt the basic GAN model that consists of stacked convolutional layers and do not make any architectural modifications.", "Datasets.", "For EigenGAN, we use AnimeFace [79] and FFHQ [80] datasets.", "AnimeFace [79] is comprised of $63,632$ aligned anime faces with resolution varying from $90{\\times }90$ to $120{\\times }120$ .", "FFHQ [80] consists of $70,000$ high-quality face images that have considerable variations in identifies and have good coverage in common accessories.", "Since the vanilla GAN has a smaller architecture and fewer parameters, we use relatively simpler CelebA [81] and LSUN Church [82] datasets.", "CelebA [81] contains $202,599$ face images of $10,177$ celebrities, while LSUN Church [82] has $126,227$ scenes images of church.", "Metrics.", "We use Frechet Inception Distance (FID) [83] to quantitatively evaluate the quality of generate images.", "For the performance of latent disentanglement, we use Variational Predictability (VP) [66] as the quantitative metric.", "The VP metric adopts the few-shot learning setting to measure the generalization abilities of a simple neural network in classifying the discovered latent directions.", "Baselines.", "For the EigenGAN model that already has inherent orthogonality constraints and good disentanglement abilities, we compare the ordinary EignGAN with the modified version augmented by our proposed orthogonal techniques (NOG and OLR).", "For the vanilla GAN that suffers from limited disentanglement, we compare our NOG and OLR against other disentanglement schemes used in GANs, including (1) Hessian Penalty (HP) [65], (2) Orthogonal Jacobian Regularization (OrthoJar) [68], and (3) Latent Variational Predictability (LVP) [66]." 
], [ "EigenGAN Architecture and Modifications", "Fig.", "REF displays the overview of the EigenGAN.", "At each layer, the latent code $\\mathbf {z}_{i}$ is multiplied with the orthogonal basis $\\mathbf {U}_{i}$ and the diagonal importance matrix $\\mathbf {L}_{i}$ to inject weighted orthogonal subspace for disentangled representation learning.", "The original EigenGAN [67] adopts the OL loss $||\\mathbf {U}_{i}\\mathbf {U}_{i}^{T}{-}\\mathbf {I}||_{\\rm F}$ to enforce relaxed orthogonality to each subspace $\\mathbf {U}_{i}$ .", "Instead, we apply our NOG and OLR to achieve the weight and gradient orthogonality, respectively.", "Notice that when our NOG and OLR are applied, we do not use the OL loss of EigenGAN.", "This is because the soft orthogonality introduced by the OL loss might not be enforced under the gradient manipulation of our NOG, which is similar to our experimental results of decorrelated BN (see Sec.", "REF ).", "Figure: Overview of the EigenGAN architecture." ], [ "Results on EigenGAN", "Qualitative Evaluation.", "Fig.", "REF compares the latent traversal results of the ordinary EigenGAN and our methods on AnimeFace.", "The interpretable direction of EigenGAN has many entangled attributes; the identity is poorly preserved during the latent traversal.", "By contrast, moving along with the discovered direction of our method would only introduce changes of a single semantic attribute.", "This demonstrates that our interpretable directions have more precisely-controlled semantics and our orthogonality techniques indeed help the model to learn more disentangled representations.", "Moreover, thanks to the power of orthogonality, our methods can mine many subtle and fine-grained attributes.", "Fig.", "REF displays such attributes that are precisely captured by out method but are not learnt by EigenGAN.", "These attributes include very subtle local details of the image, such as facial blush, facial shadow, and mouth openness.", "Figure: Qualitative comparison on FFHQ.", "The attributes are entangled in one latent direction of EigenGAN, while our method can avoid this and discover orthogonal concepts.Fig.", "REF compares the exemplary latent traversal on FFHQ.", "Similar with the result on AnimeFace, the interpretable directions have more disentangled attributes when our orthogonality techniques are used.", "Since FFHQ covers a wide range of image attributes, our method is able to learn very fine-grained attributes (e.g., angle and thickness of eyebrow) of a given super attribute (e.g., eyebrow) accordingly.", "We give a few examples in Fig.", "REF .", "As can be observed, our method can precisely control the subtle detail of the image while keeping other attributes unchanged.", "Table: Quantitative evaluation on EigenGAN.Quantitative Evaluation.", "Table REF compares the performance of EigenGAN on AnimeFace and FFHQ datasets.", "Our proposed NOG and OLR can improve both the image quality score (FID) and the disentanglement score (VP).", "Furthermore, when these two techniques are combined, the evaluation results achieve the best performance across metrics and datasets.", "This implies that enforcing simultaneous gradient and weight orthogonality allows for the learning of more disentangled representations and improved image fidelity.", "Discussion.", "Both quantitative and qualitative evaluation on two datasets demonstrates that our orthogonality approaches lead to better latent disentanglement than the inherent orthogonality loss of EigenGAN.", "This behavior is coherent to our 
previous experiment of decorrelated BN: the proposed NOG and OLR also outperform OL in that case.", "This further confirms the general applicability of our orthogonal methods." ], [ "Vanilla GAN Architecture", "For the vanilla GAN model, we use simple convolutional layers as building blocks.", "The orthogonality techniques are applied on the first convolution layer after the latent code." ], [ "Results on Vanilla GAN", "Qualitative Evaluation.", "Fig.", "REF presents the qualitative evaluation results on CelebA [81] against HP [65].", "The semantic factors discovered by our methods controls traversal process more precisely; only a single attribute is changed when one latent code is modified.", "By contrast, a interpretable direction mined by HP [65] would correspond to multiple attributes sometimes.", "This implies that the learned representations and attributes of our NOG and OLR are more disentangled.", "Fig.", "REF displays some learned attributes of our methods.", "The complex scenes and structures of churches are preserved well, and each semantic factor precisely controls the image attribute.", "This also demonstrates the diverse application domains of our disentanglement method beyond face analysis.", "Figure: Latent traversal of our NOG on LSUN Church.Table: Quantitative evaluation on vanilla GAN.", "We measure the time consumption of a single forward pass and backward pass.", "The best three results are highlighted in redred, blueblue, and greengreen respectively.Quantitative Evaluation.", "Table.", "REF reports the quantitative evaluation results on vanilla GAN.", "Our proposed orthogonality techniques outperform other disentanglement schemes in terms of both FID and VP, achieving state-of-the-art performance in the unsupervised latent disentanglement.", "Moreover, our approaches are much more efficient than other baselines due to the marginal computational cost.", "Table: Condition number of the first convolution weight in vanilla GANs on CelebA  and LSUN Church .Condition Number in Vanilla GANs.", "Similar to our previous experiments, we measure the condition number of the fist convolution weight in vanilla GANs (i.e., the projection matrix that maps latent codes to features).", "Table REF presents the evaluation results on CelebA [81] and LSUN Church [82].", "As can be observed, our methods (NOG, OLR, and NOG+OLR) outperform other baselines and have much better condition numbers.", "This demonstrates that our methods can also improve the conditioning of the weight matrix of vanilla GANs.", "Notice that the convolution weight matrix is small in dimensionality.", "The corresponding condition number is thus much smaller compared with the covariance conditioning in the previous experiments." 
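The conditioning measurement reported in this table can be reproduced with a one-line reshape followed by an SVD. The snippet below is a hypothetical illustration of that computation on a random kernel; the kernel shape stands in for the first convolution after the latent code and is not the paper's exact configuration.

```python
import numpy as np

def weight_condition_number(kernel):
    """Condition number of a conv kernel (C_out, C_in, kH, kW) flattened to 2-D."""
    W2d = kernel.reshape(kernel.shape[0], -1)
    sv = np.linalg.svd(W2d, compute_uv=False)    # singular values, descending
    return sv[0] / sv[-1]

rng = np.random.default_rng(0)
kernel = rng.standard_normal((256, 30, 4, 4))    # placeholder (C_out, latent_dim, kH, kW)
print(weight_condition_number(kernel))
```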
], [ "Conclusion", "In this paper, we explore different approaches to improve the covariance conditioning of the SVD meta-layer.", "Existing treatments on orthogonal weight are first studied.", "Our experiments reveal that these techniques could improve the conditioning but might hurt the performance due to the limitation on the representation power.", "To avoid the side effect of orthogonal weight, we propose the nearest orthogonal gradient and the optimal learning rate, both of which could simultaneously attain better covariance conditioning and improved generalization abilities.", "Moreover, their combinations with orthogonal weight further boost the performance.", "Besides the usage on the SVD meta-layer, we show that our proposed orthogonality approaches can benefit generative models for better latent disentanglement." ], [ "Acknowledgements", "This research was supported by the EU H2020 projects AI4Media (No.", "951911) and SPRING (No.", "871245) and by the PRIN project CREATIVE Prot.", "2020ZSL9F9.", "[Figure: NO_CAPTIONcum laude from KU Leuven, Belgium and the joint M.Sc.", "summa cum laude from the University of Trento, Italy and KTH Royal Institute of Technology, Sweden.", "Currently, he is a Ph.D. student with the Multimedia and Human Understanding Group (MHUG) at the University of Trento, Italy.", "His research interests are computer vision, deep learning, and numerical analysis and optimization.", "[Figure: NO_CAPTION [Figure: NO_CAPTION" ], [ "Background: SVD Meta-Layer", "This subsection presents the background knowledge about the propagation rules of the SVD meta-layer." ], [ "Forward Pass", "Given the reshape feature ${\\mathbf {X}}{\\in }\\mathbb {R}^{d{\\times }N}$ where $d$ denotes the feature dimensionality (i.e., the number of channels) and $N$ represents the number of features (i.e., the product of spatial dimensions of features), an SVD meta-layer first computes the sample covariance as: ${\\mathbf {P}}= {\\mathbf {X}}{\\mathbf {J}}{\\mathbf {X}}^{T}, {\\mathbf {J}}=\\frac{1}{N}({\\mathbf {I}}-\\frac{1}{N}\\mathbf {1}\\mathbf {1}^{T})$ where ${\\mathbf {J}}$ represents the centering matrix, $\\mathbf {I}$ denotes the identity matrix, and $\\mathbf {1}$ is a column vector whose values are all ones, respectively.", "The covariance is always positive semi-definite (PSD) and does not have any negative eigenvalues.", "Afterward, the eigendecomposition is performed using the SVD: ${\\mathbf {P}}={\\mathbf {U}}\\mathbf {\\Lambda }{\\mathbf {U}}^{T},\\ \\mathbf {\\Lambda }=\\rm {diag}(\\lambda _{1},\\dots ,\\lambda _{d})$ where $\\mathbf {U}$ is the orthogonal eigenvector matrix, ${\\rm diag}(\\cdot )$ denotes transforming a vector to a diagonal matrix, and $\\mathbf {\\Lambda }$ is the diagonal matrix in which the eigenvalues are sorted in a non-increasing order i.e., $\\lambda _i {\\ge } \\lambda _{i+1}$ .", "Then depending on the application, the matrix square root or the inverse square root is calculated as: ${\\begin{array}{c}\\mathbf {Q}\\triangleq \\mathbf {P}^{\\frac{1}{2}}=\\mathbf {U}\\mathbf {\\Lambda }^{\\frac{1}{2}} \\mathbf {U}^{T}, \\mathbf {\\Lambda }^{\\frac{1}{2}}={\\rm diag}(\\lambda _{1}^{\\frac{1}{2}},\\dots ,\\lambda _{d}^{\\frac{1}{2}}) \\\\\\mathbf {S}\\triangleq \\mathbf {P}^{-\\frac{1}{2}}=\\mathbf {U}\\mathbf {\\Lambda }^{-\\frac{1}{2}} \\mathbf {U}^{T}, \\mathbf {\\Lambda }^{-\\frac{1}{2}}={\\rm diag}(\\lambda _{1}^{-\\frac{1}{2}},\\dots ,\\lambda _{d}^{-\\frac{1}{2}})\\end{array}}$ The matrix square root ${\\mathbf {Q}}$ is often used in 
GCP-related tasks [1], [33], [2], while the application of decorrelated BN [4], [84] widely applies the inverse square root ${\\mathbf {S}}$ .", "In certain applications such as WCT, both ${\\mathbf {Q}}$ and ${\\mathbf {S}}$ are required." ], [ "Backward Pass", "Let $\\frac{\\partial l}{\\partial {\\mathbf {Q}}}$ and $\\frac{\\partial l}{\\partial {\\mathbf {S}}}$ denote the partial derivative of the loss $l$ w.r.t to the matrix square root ${\\mathbf {Q}}$ and the inverse square root ${\\mathbf {S}}$ , respectively.", "Then the gradient passed to the eigenvector is computed as: $\\frac{\\partial l}{\\partial \\mathbf {U}}\\Big |_{{\\mathbf {Q}}}=(\\frac{\\partial l}{\\partial \\mathbf {Q}} + (\\frac{\\partial l}{\\partial \\mathbf {Q}})^{T})\\mathbf {U}\\mathbf {\\Lambda }^{\\frac{1}{2}},\\ \\frac{\\partial l}{\\partial \\mathbf {U}}\\Big |_{{\\mathbf {S}}}=(\\frac{\\partial l}{\\partial \\mathbf {S}} + (\\frac{\\partial l}{\\partial \\mathbf {S}})^{T})\\mathbf {U}\\mathbf {\\Lambda }^{-\\frac{1}{2}}$ Notice that the gradient equations for ${\\mathbf {Q}}$ and ${\\mathbf {S}}$ are different.", "For the eigenvalue, the gradient is calculated as: ${\\begin{array}{c}\\frac{\\partial l}{\\partial \\mathbf {\\Lambda }}\\Big |_{{\\mathbf {Q}}}=\\frac{1}{2}\\rm {diag}(\\lambda _{1}^{-\\frac{1}{2}},\\dots ,\\lambda _{d}^{-\\frac{1}{2}})\\mathbf {U}^{T} \\frac{\\partial \\it {l}}{\\partial \\mathbf {Q}} \\mathbf {U}, \\\\\\frac{\\partial l}{\\partial \\mathbf {\\Lambda }}\\Big |_{{\\mathbf {S}}}=-\\frac{1}{2}\\rm {diag}(\\lambda _{1}^{-\\frac{3}{2}},\\dots ,\\lambda _{d}^{-\\frac{3}{2}})\\mathbf {U}^{T} \\frac{\\partial \\it {l}}{\\partial \\mathbf {S}} \\mathbf {U}\\end{array}}$ Subsequently, the derivative of the SVD step can be calculated as: $\\frac{\\partial l}{\\partial \\mathbf {P}}=\\mathbf {U}( (\\mathbf {K}^{T}\\circ (\\mathbf {U}^{T}\\frac{\\partial l}{\\partial \\mathbf {U}}))+ (\\frac{\\partial l}{\\partial \\mathbf {\\Lambda }})_{\\rm diag})\\mathbf {U}^{T}$ where $\\circ $ denotes the matrix Hadamard product, and the matrix $\\mathbf {K}$ consists of entries $K_{ij}{=}{1}/{(\\lambda _{i}{-}\\lambda _{j})}$ if $i{\\ne }j$ and $K_{ij}{=}0$ otherwise.", "This step is the same for both ${\\mathbf {Q}}$ and ${\\mathbf {S}}$ .", "Finally, we have the gradient passed to the feature ${\\mathbf {X}}$ as: $\\frac{\\partial l}{\\partial \\mathbf {X}}=(\\frac{\\partial l}{\\partial \\mathbf {P}}+(\\frac{\\partial l}{\\partial \\mathbf {P}})^{T})\\mathbf {X}{\\mathbf {J}}$ With the above rules, the SVD function can be easily inserted into any neural networks and trained end-to-end as a meta-layer." 
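A direct NumPy transcription of these propagation rules, covering only the matrix-square-root branch ${\\mathbf {Q}}$ , is given below. It is a sketch rather than the authors' implementation: the shapes are illustrative, eigenvalues are clipped for numerical safety, and the degeneracy problem of the $\\mathbf {K}$ matrix (its entries diverge when eigenvalues nearly coincide), which the stable-gradient techniques cited in the paper address, is not handled here.

```python
import numpy as np

def svd_meta_forward(X):
    """Forward pass: P = X J X^T, eigendecomposition, Q = P^{1/2}."""
    d, N = X.shape
    J = (np.eye(N) - np.ones((N, N)) / N) / N             # centering/scaling matrix
    P = X @ J @ X.T
    lam, U = np.linalg.eigh(P)                            # ascending eigenvalues
    lam, U = lam[::-1], U[:, ::-1]                        # re-order to non-increasing
    lam_sqrt = np.sqrt(np.clip(lam, 1e-12, None))
    Q = (U * lam_sqrt) @ U.T
    return Q, (X, J, U, lam, lam_sqrt)

def svd_meta_backward(dl_dQ, cache):
    """Backward pass for the Q branch, following the stated gradient chain."""
    X, J, U, lam, lam_sqrt = cache
    dl_dU = (dl_dQ + dl_dQ.T) @ U @ np.diag(lam_sqrt)
    dl_dLam = 0.5 * np.diag(1.0 / lam_sqrt) @ U.T @ dl_dQ @ U
    diff = lam[:, None] - lam[None, :]
    off_diag = ~np.eye(len(lam), dtype=bool)
    K = np.zeros_like(diff)
    K[off_diag] = 1.0 / diff[off_diag]                    # K_ij = 1/(lam_i - lam_j), i != j (no degeneracy handling)
    dl_dP = U @ (K.T * (U.T @ dl_dU) + np.diag(np.diag(dl_dLam))) @ U.T
    return (dl_dP + dl_dP.T) @ X @ J                      # gradient w.r.t. the feature X

# Illustrative check with the dummy loss l = sum(Q).
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))
Q, cache = svd_meta_forward(X)
print(svd_meta_backward(np.ones_like(Q), cache).shape)    # (8, 64)
```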
], [ "Derivation of Nearest Orthogonal Gradient", "The problem of finding the nearest orthogonal gradient can be defined as: $\\min _{{\\mathbf {R}}} ||\\frac{\\partial l}{\\partial {\\mathbf {W}}}-{\\mathbf {R}}||_{\\rm F}\\ subject\\ to\\ {\\mathbf {R}}{\\mathbf {R}}^{T}={\\mathbf {I}}$ To solve this constrained optimization problem, We can construct the following error function: $e({\\mathbf {R}}) = Tr\\Big ((\\frac{\\partial l}{\\partial {\\mathbf {W}}}-{\\mathbf {R}})^{T}(\\frac{\\partial l}{\\partial {\\mathbf {W}}}-{\\mathbf {R}})\\Big ) + Tr\\Big (\\mathbf {\\Sigma } {\\mathbf {R}}^{T}{\\mathbf {R}}-{\\mathbf {I}}\\Big )$ where $Tr(\\cdot )$ is the trace measure, and $\\mathbf {\\Sigma }$ denotes the symmetric matrix Lagrange multiplier.", "Setting the derivative to zero leads to: ${\\begin{array}{c}\\frac{d e({\\mathbf {R}})}{d {\\mathbf {R}}} = -2 (\\frac{\\partial l}{\\partial {\\mathbf {W}}}-{\\mathbf {R}}) + 2{\\mathbf {R}}\\mathbf {\\Sigma } = 0 \\\\\\frac{\\partial l}{\\partial {\\mathbf {W}}} = {\\mathbf {R}}({\\mathbf {I}}+ \\mathbf {\\Sigma } ),\\ {\\mathbf {R}}= \\frac{\\partial l}{\\partial {\\mathbf {W}}}({\\mathbf {I}}+ \\mathbf {\\Sigma })^{-1}\\end{array}}$ The term $({\\mathbf {I}}+ \\mathbf {\\Sigma })$ can be represented using $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ .", "Consider the covariance of $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ : ${\\begin{array}{c}(\\frac{\\partial l}{\\partial {\\mathbf {W}}})^{T}\\frac{\\partial l}{\\partial {\\mathbf {W}}} = ({\\mathbf {I}}+ \\mathbf {\\Sigma } )^{T}{\\mathbf {R}}^{T}{\\mathbf {R}}({\\mathbf {I}}+ \\mathbf {\\Sigma } ) = ({\\mathbf {I}}+ \\mathbf {\\Sigma } )^{T}({\\mathbf {I}}+ \\mathbf {\\Sigma } ) \\\\({\\mathbf {I}}+ \\mathbf {\\Sigma } ) = \\Big ((\\frac{\\partial l}{\\partial {\\mathbf {W}}})^{T}\\frac{\\partial l}{\\partial {\\mathbf {W}}}\\Big )^{\\frac{1}{2}}\\end{array}}$ Substituting the term $({\\mathbf {I}}+ \\mathbf {\\Sigma })$ in eq.", "(REF ) with the above equation leads to the closed-form solution of the nearest orthogonal gradient: ${\\mathbf {R}}= \\frac{\\partial l}{\\partial {\\mathbf {W}}} \\Big (( \\frac{\\partial l}{\\partial {\\mathbf {W}}})^{T} \\frac{\\partial l}{\\partial {\\mathbf {W}}}\\Big )^{-\\frac{1}{2}}$" ], [ "Derivation of Optimal Learning Rate", "To jointly optimize the updated weight ${\\mathbf {W}}{-}{\\eta }\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ , we need to achieve the following objective: $\\min _{\\eta } ||({\\mathbf {W}}{-}{\\eta }\\frac{\\partial l}{\\partial {\\mathbf {W}}})({\\mathbf {W}}{-}{\\eta }\\frac{\\partial l}{\\partial {\\mathbf {W}}})^{T}-{\\mathbf {I}}||_{\\rm F}$ This optimization problem can be more easily solved in the form of vector.", "Let $\\mathbf {w}$ , ${\\mathbf {i}}$ , and $\\mathbf {l}$ denote the vectorized ${\\mathbf {W}}$ , ${\\mathbf {I}}$ , and $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ , respectively.", "Then we construct the error function as: $e(\\eta ) =\\Big ( (\\mathbf {w}-\\eta \\mathbf {l})^{T}(\\mathbf {w}-\\eta \\mathbf {l})-{\\mathbf {i}}\\Big )^{T}\\Big ( (\\mathbf {w}-\\eta \\mathbf {l})^{T}(\\mathbf {w}-\\eta \\mathbf {l})-{\\mathbf {i}}\\Big )$ Expanding the equation leads to: $e(\\eta )=(\\mathbf {w}^{T}\\mathbf {w}-2\\eta \\mathbf {l}^{T}\\mathbf {w}+\\eta ^{2}\\mathbf {l}^{T}\\mathbf {l}-\\mathbf {i})^{T}(\\mathbf {w}^{T}\\mathbf {w}-2\\eta \\mathbf {l}^{T}\\mathbf {w}+\\eta ^{2}\\mathbf {l}^{T}\\mathbf {l}-\\mathbf {i})$ Differentiating $e(\\eta )$ w.r.t.", "$\\eta $ yields: 
${\\begin{array}{c}\\frac{d e(\\eta )}{d \\eta } = -4{\\mathbf {w}}{\\mathbf {w}}^{T}{\\mathbf {l}}^{T}{\\mathbf {w}}+ 4\\eta {\\mathbf {w}}{\\mathbf {w}}^{T}{\\mathbf {l}}^{T}{\\mathbf {l}}\\\\+ 8\\eta {\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}-12\\eta ^{2}{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+4{\\mathbf {l}}{\\mathbf {w}}^{T}{\\mathbf {i}}+4\\eta ^{3}{\\mathbf {l}}{\\mathbf {l}}^{T} - 4\\eta {\\mathbf {i}}{\\mathbf {l}}{\\mathbf {l}}^{T}\\end{array}}$ Since $\\eta $ is typically very small, the higher-order terms (e.g., $\\eta ^{2}$ and $\\eta ^{3}$ ) are sufficiently small such that they can be neglected.", "After omitting these terms, the derivative becomes: $\\frac{d e(\\eta )}{d \\eta }{\\approx }-4{\\mathbf {w}}{\\mathbf {w}}^{T}{\\mathbf {l}}^{T}{\\mathbf {w}}+ 4\\eta {\\mathbf {w}}{\\mathbf {w}}^{T}{\\mathbf {l}}^{T}{\\mathbf {l}}+ 8\\eta {\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}+4{\\mathbf {l}}{\\mathbf {w}}^{T}{\\mathbf {i}}- 4\\eta {\\mathbf {i}}{\\mathbf {l}}{\\mathbf {l}}^{T}\\\\$ Setting the derivative to zero leads to the optimal learning rate: $\\eta ^{\\star } \\approx \\frac{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}-{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {i}}}{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}- {\\mathbf {l}}^{T}{\\mathbf {l}}{\\mathbf {i}}}$ Notice that ${\\mathbf {i}}$ is the vectorization of the identify matrix ${\\mathbf {I}}$ , which means that ${\\mathbf {i}}$ is very sparse ($\\emph {i.e.,}$ lots of zeros) and the impact can be neglected.", "The optimal learning rate can be further simplified as: $\\eta ^{\\star } \\approx \\frac{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}$" ], [ "Proof of the learning rate bounds", "Proposition 1 When both ${\\mathbf {W}}$ and $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ are orthogonal, $\\eta ^{\\star }$ is both upper and lower bounded.", "The upper bound is $\\frac{N^2}{N^2 + 2}$ and the lower bound is $\\frac{1}{N^{2}+2}$ where $N$ denotes the row dimension of ${\\mathbf {W}}$ .", "Since the vector product is equivalent to the matrix Frobenius inner product, we have the relation: ${\\mathbf {l}}^{T}{\\mathbf {w}}= \\langle \\frac{\\partial l}{\\partial {\\mathbf {W}}},{\\mathbf {W}}\\rangle _{\\rm F}$ For a given matrix pair ${\\mathbf {A}}$ and ${\\mathbf {B}}$ , the Frobenius product $\\langle \\cdot \\rangle _{\\rm F}$ is defined as: $\\langle {\\mathbf {A}},{\\mathbf {B}}\\rangle _{\\rm F}=\\sum A_{i,j}B_{i,j}\\le \\sigma _{1}({\\mathbf {A}})\\sigma _{1}({\\mathbf {B}})+\\dots +\\sigma _{N}({\\mathbf {A}})\\sigma _{N}({\\mathbf {B}})$ where $\\sigma (\\cdot )_{i}$ represents the $i$ -th largest eigenvalue, $N$ denotes the matrix size, and the inequality is given by Von Neumann’s trace inequality [85], [86].", "The equality takes only when ${\\mathbf {A}}$ and ${\\mathbf {B}}$ have the same eigenvector.", "When both ${\\mathbf {W}}$ and $\\frac{\\partial l}{\\partial {\\mathbf {W}}}$ are orthogonal, i.e., their eigenvalues are all 1, we have the following relation: $\\langle \\frac{\\partial l}{\\partial {\\mathbf {W}}},\\frac{\\partial l}{\\partial {\\mathbf {W}}}\\rangle _{\\rm F}=N,\\ \\langle \\frac{\\partial l}{\\partial {\\mathbf {W}}},{\\mathbf {W}}\\rangle _{\\rm F}\\le 
N$ This directly leads to: $\\langle \\frac{\\partial l}{\\partial {\\mathbf {W}}},{\\mathbf {W}}\\rangle _{\\rm F}\\le \\langle \\frac{\\partial l}{\\partial {\\mathbf {W}}},\\frac{\\partial l}{\\partial {\\mathbf {W}}}\\rangle _{\\rm F},\\ {\\mathbf {l}}^{T}{\\mathbf {w}}\\le {\\mathbf {l}}^{T}{\\mathbf {l}}$ Exploiting this inequality, the optimal learning rate has the relation: $\\begin{aligned}\\eta ^{\\star } \\approx \\frac{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}\\le \\frac{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}}{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}\\end{aligned}$ For ${\\mathbf {l}}^{T}{\\mathbf {w}}$ , we have the inequality as: $\\begin{aligned}{\\mathbf {l}}^{T}{\\mathbf {w}}&= \\langle \\frac{\\partial l}{\\partial {\\mathbf {W}}},{\\mathbf {W}}\\rangle _{\\rm F}=\\sum _{i,j} \\frac{\\partial l}{\\partial {\\mathbf {W}}}_{i,j}{\\mathbf {W}}_{i,j}\\\\&\\ge \\sigma _{min}(\\frac{\\partial l}{\\partial {\\mathbf {W}}})\\sigma _{min}({\\mathbf {W}})=1\\end{aligned}$ Then we have the upper bounded of $\\eta ^{\\star }$ as: $\\begin{aligned}\\eta ^{\\star } &\\le \\frac{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}}{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}\\\\&= \\frac{N^2}{N^2+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}} < \\frac{N^2}{N^2 + 2}\\end{aligned}$ For the lower bound, since we also have ${\\mathbf {l}}^{T}{\\mathbf {w}}{\\le }{\\mathbf {w}}^{T}{\\mathbf {w}}$ , $\\eta ^{\\star }$ can be re-written as: $\\begin{aligned}\\eta ^{\\star } &\\approx \\frac{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}\\\\&\\ge \\frac{{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}+2{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}\\\\&= \\frac{1}{\\frac{{\\mathbf {w}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {l}}}{{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}+2} \\\\&=\\frac{1}{\\frac{N^{2}}{{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}+2}\\end{aligned}$ Injecting eq.", "(REF ) into eq.", "(REF ) leads to the further simplification: $\\eta ^{\\star } \\approx \\frac{1}{\\frac{N^{2}}{{\\mathbf {l}}^{T}{\\mathbf {w}}{\\mathbf {l}}^{T}{\\mathbf {w}}}+2} \\ge \\frac{1}{N^{2}+2}$ As indicated above, the optimal learning rate $\\eta ^{\\star }$ has a lower bound of $\\frac{1}{N^{2}+2}$ .", "Figure: Scheme of learning rate during the training process of decorrelated BN.", "For the orthogonal weight and gradient, our OLR has a much higher probability of occurrence and can enforce a stronger orthogonality constraint." ], [ "Detailed Experimental Settings", "In this section, we introduce the implementation details and experimental settings." 
], [ "Decorrelated Batch Normalization", "The training lasts 350 epochs and the learning rate is initialized with $0.1$ .", "The SGD optimizer is used with momentum $0.9$ and weight decay $5e{-}4$ .", "We decrease the learning rate by 10 every 100 epochs.", "The batch size is set to 128.", "We use the technique proposed in [2] to compute the stable SVD gradient.", "The Pre-SVD layer in this experiment is the $3{\\times }3$ convolution layer." ], [ "Global Covariance Pooling", "The training process lasts 60 epochs and the learning rate is initialize with $0.1$ .", "We decrease the learning rate by 10 at epoch 30 and epoch 45.", "The SGD optimizer is used with momentum $0.9$ and weight decay $1e{-}4$ .", "The model weights are randomly initialized and the batch size is set to 256.", "The images are first resized to $256{\\times }256$ and then randomly cropped to $224{\\times }224$ before being passed to the model.", "The data augmentation of randomly horizontal flip is used.", "We use the technique proposed in [2] to compute the stable SVD gradient.", "The Pre-SVD layer denotes the convolution transform of the previous layer." ], [ "EigenGAN", "The input image is resize to $128{\\times }128$ for AnimeFace [79] and to $256{\\times }256$ for FFHQ [80].", "We set the batch size to 128, and the training process lasts $500,000$ steps.", "The subspace dimension of each layer is set to 6, i.e., each layer has 6 interpretable directions.", "All the orthogonality techniques are enforced on the projection matrix $\\mathbf {U}_{i}$ at each layer." ], [ "Vanilla GAN", "For both CelebA [81] and LSUN Church [82], we resize the input image to the resolution of $128{\\times }128$ .", "The training lasts 200 epochs for CelebA and lasts 400 epochs for LSUN Church.", "We set the batch size to 128 and set the latent dimension to 30." ], [ "Occurrence of OLR", "Since our proposed OLR needs manual tuning during the training, it would be interesting to investigate the probability of occurrence in different settings.", "Fig.", "REF depicts the learning rate schemes of decorrelated BN with ordinary learning rate (left), OLR for non-orthogonal weight/gradient (middle), and OLR for orthogonal weight/gradient (right).", "As can be seen, in both settings (orthogonal and non-orthogonal weight/gradient), our OLR occurs with a reasonable probability during the training, which enforces a related orthogonality constraint on the weight.", "When the weight and gradient are non-orthogonal, our OLR mainly occurs at the first training stage where the ordinary learning rate is relative large.", "For orthogonal gradient and weight, the OLR happens more frequently and consistently occurs throughout all the training stages.", "This meets our theoretical analysis in Prop.", "REF : our OLR suits simultaneously orthogonal weight and gradient." ] ]
2212.05599
[ [ "Towards a Learner-Centered Explainable AI: Lessons from the learning\n sciences" ], [ "Abstract In this short paper, we argue for a refocusing of XAI around human learning goals.", "Drawing upon approaches and theories from the learning sciences, we propose a framework for the learner-centered design and evaluation of XAI systems.", "We illustrate our framework through an ongoing case study in the context of AI-augmented social work." ], [ "Introduction", "Recent years have seen a surge of interest in the question of how AI systems can be made more “interpretable’’ or “explainable’’ to humans.", "Yet these terms are used in reference to many disparate goals within the literature [10], [17], [19].", "For instance, work on interpretability has sometimes focused on enhancing humans’ ability to mentally simulate and predict an AI system’s behavior [16], [17], [22] or to evaluate counterfactuals [27].", "Other work addresses ways to help humans decompose models, to understand their constituent parts (e.g., parameters) and how these parts fit together [17].", "From a human-centered perspective, these design goals can be understood as supporting different human capabilities, each of which may be more or less useful in different real-world contexts.", "For example, decomposing a model may be useful when debugging an AI system.", "In a decision-making context, the ability to identify situations that could impact a model's reliability may be more helpful [11], [20].", "In this paper, we argue that many, if not all, of the design goals in existing XAI research and practice can be productively reinterpreted as human learning goals.", "Much current XAI research focuses on designing ways to make models explainable to humans.", "By contrast, building upon recent arguments for centering human understanding in XAI research [19], [26], we focus on supporting humans in learning about particular AI systems and how to work with or around them.", "Whereas XAI research often aims at communicating information about an AI system instantaneously and with minimal effort on the part of a human recipient, some learning goals may best be met through longer learning engagements or through deliberate practice and feedback [3], [4], [15], [20].", "Drawing lessons from the learning sciences—a scientific and design discipline dedicated to the study of human learning and ways to support it in real-world contexts—we explore the implications of adopting a learning-centered lens for the design and evaluation of human-centered XAI.", "We propose a framework for learner-centered XAI, which integrates and extends existing concepts from the learning sciences.", "Finally, we present an ongoing case study illustrating how this framework can be applied in practice." 
], [ "A framework for learner-centered XAI", "In this section, we propose a framework for the learner-centered design and evaluation of XAI.", "We describe how three concepts from the learning sciences—backward design [28], participatory design for learning [9], and “closing the loop’’ [7], [18]—can help to guide the design of XAI that positions humans as deliberate and continuous learners.", "The goals of this framework are to (1) offer a systematic process for designing XAI interfaces that target specific learning outcomes, (2) demonstrate how context- and stakeholder-specific needs can be surfaced and addressed during the design process, (3) combine participatory and data-driven methods to support more contextually-relevant XAI designs, and (4) provide a more rigorous approach for evaluating the effectiveness of XAI.", "As shown in Figure REF , our framework proposes that researchers should collaborate with relevant stakeholders in real-world human-AI interaction contexts, to iteratively co-design learning objectives, measures, activities, and evaluation approaches.", "Following a “backward design’’ approach, as discussed below, this collaboration should begin by specifying learning objectives: a set of specific capabilities that the learners should ideally have following a learning activity.", "Learners should then be involved in decisions about how to operationalize these learning objectives in the form of concrete learning measures which capture observable human behaviors as proxies for latent constructs such as “understanding’’ of a targeted concept [12].", "For instance, researchers might engage learners in specifying how they would know whether a given intervention had succeeded in meeting one of their learning objectives: how would they behave differently, or what would they be able to do that they could not do previously?", "With these objectives and measures in mind, researchers can work with learners to co-design learning activities to try to help learners achieve their specified objectives.", "The measures specified previously can then be used to evaluate learning outcomes, to guide the iterative, data-driven refinement of learning activities.", "Below, we introduce three concepts from the learning sciences that inform this framework.", "Figure: A framework for the design of learner-centered XAI.Wiggins and McTighe proposed backward design to address a longstanding challenge in instructional design: teachers and instructional designers often focus more on how to teach rather than on how to help students learn [2], [28].", "Backward design is an approach that `flips’ the design process.", "Rather than starting with the design of instructional materials, designers are encouraged to first identify desired learning outcomes, then to design assessments of those outcomes, and lastly to design instruction aimed at achieving those outcomes.", "These challenges in instructional design are echoed in the current XAI landscape: Even as research moves towards more human-centered XAI methods, it remains common to first propose an explainability technique, and then evaluate whether and how the technique is useful to users.", "In the learner-centered XAI framework, we propose a backward design process that starts by identifying meaningful learning objectives for a given task context, then operationalizes what it means to meet those learning objectives.", "Only after designing and operationalizing learning objectives that reflect stakeholder- and domain-specific needs are XAI designers 
prepared to design interfaces that meet these learning objectives.", "The framework additionally draws from participatory design practices in the learning sciences.", "From a learning sciences perspective, participatory design is recast as an opportunity for relevant stakeholders and researchers to collaboratively learn new knowledge that can guide the design process, based on each others’ complementary expertise [9].", "Stakeholders with relevant lived experience are uniquely positioned to understand their own needs and desires.", "Meanwhile, researchers can bring unique scientific, design, and technical expertise that is critical to designing effective learning interventions.", "Moreover, as researchers and stakeholders' joint understanding of the problem space strengthens, the framework’s emphasis on an iterative design process may encourage them to proactively reflect on their prior design decisions and refine them as needed.", "Empowering stakeholder participation earlier on in the design process, at the “defining learning objectives” stage, not just when evaluating the interfaces, may also open opportunities for different stakeholders in a given context to discuss any misalignments in their envisioned learning needs.", "Finally, Figure REF indicates that real-world evaluations of XAI techniques should inform the continuous process of iterative re-design.", "This aligns with the notion of “closing the loop” in the learning sciences, emphasizing the data-driven refinement of instructional materials based on analysis of data reflecting how people actually learn with them [7], [18].", "This approach offers an opportunity to rigorously evaluate and iterate on co-designed learning objectives, measures, and interfaces, to address design misalignments, or to adapt to changes in stakeholder needs over time." ], [ "Case study: Using the framework to design training interfaces for AI-augmented social work", "In this section, we illustrate how the learner-centered XAI framework can be used in practice, through an ongoing case study in the context of AI-augmented social work." 
], [ "Background", "In an effort to augment social workers’ abilities to efficiently process and prioritize among large volumes of child maltreatment referrals, child welfare agencies have begun to turn to new machine learning-based ADS tools [23], [6], [24], [29].", "The Allegheny Family Screening Tool (AFST) has been in use in Allegheny County, Pennsylvania since 2016, where it assists child maltreatment hotline call screeners and supervisors in prioritizing among referred cases [21].", "While the county has published public-facing reports discussing the ethics and validity of using such a tool [8], recent research raises new concerns around how effectively the tool has been integrated into the organizational and social context in which workers make day-to-day use of the tool.", "In particular, in a recent paper, we report findings from a series of interviews and contextual inquiries at this child welfare agency, to understand how workers currently make AI-assisted child maltreatment screening decisions.", "We found that workers had little to no opportunities to learn about the AI system they were using, nor about how to work with it effectively, limiting their ability to appropriately calibrate their reliance on the tool’s predictions [13].", "Moreover, we found that workers' decision-making objectives (focusing on short term risks to child safety) differed from the model's predictive targets (focused on much longer-term predictions of indirect proxies of risk).", "While the tool was intentionally designed to complement workers' focus on immediate outcomes, workers were unsure how exactly they were meant to integrate the tool's predictions of long-term risk with their own assessments of immediate safety.", "Overall, these prior the findings suggest a need to more broadly reconsider and reconceptualize what appropriate roles for ADS in social work might look like.", "This reconceptualization necessitates, at minimum, finding ways to understand, empower, and integrate worker perspectives in the design of ADS.", "As a first step towards this vision, we are currently exploring ways to address the gap between the current design of the AFST and workers' beliefs regarding what effective human-AI decision-making should look like, and how it should be measured.", "In this ongoing work, we engage workers in the design of training materials, as a means to identify and design worker-centered learning objectives, measures, and learning activities.", "In this project, we do not plan to fully develop or deploy training materials for the AFST specifically.", "Indeed, based on our findings thus far, we expect that this co-design process will surface needs for fundamentally different kinds of ADS, not just building training interactions around the existing ADS.", "Rather, we view the AFST context as a rare opportunity to understand workers' learning goals and needs for support in a highly complex, social decision-making context where an ADS has already been in-use for many years (over half a decade).", "Beyond this context, we plan to explore the generalizability of our findings (e.g., regarding workers' learning goals) to other AI-augmented social decision-making contexts, such as AI-augmented content moderation." 
], [ "Ongoing case study", "Following the first step of the learner-centered XAI framework, we first defined a set of fine-grained learning objectives, such as “the ability to identify cases where a model may be more or less reliable”, based on our design research with workers (see Appendixhttps://sites.google.com/andrew.cmu.edu/learner-centered-xai/home for details).", "We plan to further explore, refine, and operationalize these learning objectives in collaboration with various stakeholders (e.g., workers, community members, and agency leadership).", "Figure REF shows two examples of initial training interface sketches that could address specific learning objectives in our taxonomy.", "In the first example, the learning objective is to improve workers’ ability to appropriately rely on ADS outputs in specific cases.", "The sketch shows a simulated decision-making activity, which provides low-stakes opportunities for workers to practice integrating their own judgments with AI predictions on real historical data while receiving immediate feedback [1], [11], [14], [25].", "The second example sketch focuses on honing workers’ ability to mentally simulate the model’s behavior through repeated practice opportunities on a score guessing exercise, with immediate feedback on the closeness of their guesses.", "Figure: Example interfaces targeting different learning goals.As next steps, we plan to iteratively refine the learning objectives, measures, and activities through co-design activities with social workers who use this ADS in their daily work, along with other stakeholder groups.", "Taking a participatory design for learning approach, we view the co-design of learning objectives and measures as an opportunity to surface and address value tensions across different stakeholder groups, regarding what human-AI decision-making in this context should look like in the first place [5], [13].", "For example, while current worker-ADS decision-making performance measures are based on the ADS's predictive target, this assumes the workers should then learn to act like the system would.", "Our framework aims to involve workers in the design of improved learning measures, to offer alternative measures that counter these assumptions and align more closely with workers' own decision objectives or a mixture of workers' decision objectives and the systems' objectives, if that is believed to be desirable." ], [ "Open Questions", "At the workshop, we hope to further explore several open questions.", "For example: How might learning objectives vary across different human-AI tasks (e.g., prediction, decision-making, or co-creation)?", "What are other implications of approaching XAI through a learning-centric lens?" ] ]
2212.05588
[ [ "Studying Postmerger Outflows from Magnetized Neutrino-cooled Accretion\n Disks" ], [ "Abstract Neutrino-cooled accretion flow around a spinning black hole, produced by a compact binary merger is a promising scenario for jet formation and launching magnetically-driven outflows.", "Based on GW170817 gravitational wave detection by LIGO and Virgo observatories followed by electromagnetic counterparts, this model can explain the central engine of the short duration gamma ray bursts (GRB) and kilonova radiations.", "Using the open-source GRMHD HARM-COOL code, we evolved several 2D magnetized accretion disk-black hole models with realistic equation of state in the fixed curved space-time background.", "We applied particle tracer technique to measure the properties of the outflows.", "The disk and black hole's initial parameters are chosen in a way to represent different possible post-merger scenarios of the merging compact objects.", "Our simulations show a strong correlation between black hole's spin and ejected mass.", "Generally, mergers producing massive disks and rapidly spinning black holes launch stronger outflows.", "We observed our models generate winds with moderate velocity ($v/c \\sim 0.1-0.2$), and broad range of electron fraction.", "We use these results to estimate the luminosity and light curves of possible radioactively powered transients emitted by such systems.", "We found the luminosity peaks within the range of $10^{40}-10^{42}$ erg/s which agrees with previous studies for disk wind outflows." ], [ "Introduction", "Observation of gravitational waves accompanied by electromagnetic emissions from black hole-neutron star (BHNS) and binary neutron star (BNS) mergers provide treasured information about the physics of the dense matter ([28]), compact binary formation ([8]) and the origin of heavy elements ([45], [65], [77]).", "In a post-merger black hole-accretion disk remnants, energy can be channeled into ultra-relativistic outflow needed to explain GRB properties( [61], [6]) by hot, dense accretion flow.", "In such systems, the plasma cools by neutrino emission continuously ([66], [19], [42]).", "On the other hand, magnetic fields can also extract energy from the disk and black hole spin, by generating the magnetically-driven winds and Poynting flux-dominated jets ([7]).", "Moreover, it can support the heating process of the plasma through the viscous effects of the MRI ([3]).", "The neutron-rich dynamical ejecta outflows during merger, and post-merger winds are considered as possible sources explaining the observation of kilonovae, afterglows and other electromagnetic (EM) counterparts following the gravitational wave detections, ([59], [58], [20], [80]).", "Kilonovae are transient emissions in the optical or near-infrared band, powered by the radioactive decay of the elements produced through r-process nucleosynthesis ([75]).", "Two distinguishable sources are suggested to explain kilonova emissions: 1- High-speed, neutron-rich dynamical ejecta expelled from the tidal tail during merger; 2- Less neutron-rich material with moderate velocity ejected from post-merger accretion disk due to magnetic viscous heating and/or neutrino absorption heating effects.", "The emissions from former source are dimmer and long-lasting near infrared ([53], [58], [34], [73]), while the latter produces optically brighter and bluer transients ([34], [64], [44]).", "However, depending on the merger's scenario, it is possible to have multiple channels for outflow formation; in more recent 
studies by [11] it is shown that the decay of the fast free neutrons in the outermost layers of dynamical ejecta can power a UV/optical kilonova precursor on $\\approx 1$ hour timescale.", "In the case of a BNS merger, the long-lived postmerger hypermassive neutron star (HMNS) is considered as another channel of neutrino and magnetically-driven outflow formations ( [15], [16]).", "The observation of the electromagnetic counterpart of the gravitational wave signal from the BNS merger GW170817 gives us a unique opportunity to investigate the r-process nucleosynthesis directly.", "The identification of strontium in these emissions ([77]) is considered as the first evidence of rapid nuclear process and heavy elements production in the BNS merger scenario.", "The observed kilonova light-curves and spectra suggest that models with two components consistent with lanthanide-poor and lanthanide-rich ejecta provide a good fit to the data.", "These models indicate the ejecta mass of $M_{ej}/M_{\\odot } = 0.03-0.06$ , with postmerger disk ejeta as a dominant component ([14], [74], [47]).", "The theoretical models suggest that the magnitude, color, and duration of a kilonova are significantly affected by the mass, velocity, and composition of the ejected matter, and the properties of these outflows can themselves be related to the properties of the merging compact objects, such as mass ratio, compactness and equation of state ([5], [34]).", "In a post-merger scenario, the jet formation, disk evolution and ejected matter can be impacted by the magnetic field strength and geometry.", "[76] studied the long evolution of a BHNS merger where the neutron star is initially magnetized by asymmetric magnetic field dipole configurations.", "[10] evolved a long-term ($\\sim 4$ s) post-merger black hole-accretion disk system with different magnetic field configurations, concluding the ejected matter is slightly more massive for poloidal field configuration compared with purely toroidal field.", "[40] also reported that disk configurations with higher initial magnetic field strength and higher spin BH generate massive, lanthanide-poor outflows.", "Moreover, the total mass lost through wind outflows depends on the mass of the remnant torus as well.", "Larger and more massive disks generates stronger matter outflows ([45]).", "However, only a long-term 3D GRMHD simulations of a posmerger system can capture all the disk's and outflows features accurately, including the geometry and thermal evolution.", "[26] and [37] have evolved multi-second GRMHD simulations of black hole-disk remnants for BNS and BHNS mergers respectively.", "Both studies concluded that the neutrino cooling is only effective at the earlier time after merger, and the disk eventually becomes advective, hence the mass loss rate and ejecta's composition are expected to be affected by long evolution of the disk.", "Predicting the properties of matter outflows in BNS and BHNS mergers is crucial to carry out detailed analysis of post-merger electromagnetic emissions.", "[53] and  [34] obtained the analytical fits for the mass, kinetic energy, and the velocities of the dynamical ejected material to approximate the main properties of kilonovae according to BNS numerical simulations.", "[20] suggested fitting formulae to derive the dynamical ejecta's characteristics based on the results from over 170 BNS merger numerical simulations.", "Most recently, [38] introduced a novel method on estimating the individual masses and equation of state of BNS systems from 
r-process abundance signatures and numerical simulations using Bayesian analysis.", "[67] is another recent paper in the Bayesian framework, which improves the estimation of the gravitational wave source parameters using the kilonova's lightcurves in the case of BHNS systems.", "In order to investigate the properties of outflows, it is important to have a reliable method to determine the unbound matter and its evolution.", "The accurate measurement of the outflow's quantities and geometry can be captured only by 3D hydrodynamics simulations including the r-process heating and neutrino cooling terms in evolution equations ([49], [31]).", "However,  [18] and  [31] presented improved versions of Bernoulli's criteria to include the effects of these heating and cooling process approximately, to have a realistic estimation of the unbound matter.", "In this paper, we utilize the numerical simulations to investigate the evolution and different properties of the disk and the outflows in the presence of magnetic fields and neutrino cooling.", "We mainly focus on measuring the outflows’ properties such as composition, velocity and mass to estimate the kilonova’s luminosity and lightcurve.", "We also investigate how the initial analytic disk parameters can affect the features of the post-merger outflows by altering disk's mass and black hole's spin and mass.", "The range of parameters are chosen to mimic real post-merger remnant disks.", "This paper is organized as follows, in Sec.", "we briefly explain the numerical framework and initial setup of our simulations.", "In Sec.", "we present the general properties of the outflows and predict the possible kilonova lightcurves powered by each model.", "We give a discussion over comparison of our results with the previous studies in the literature and also physical and numerical limits of our study in Sec. 
.", "The summary and conclusion is given in Sec.", ", and finally, in the appendix  we present the results from several tests examining the accuracy of our tracer method for outflow measurements.", "We use a developed version of the general relativistic magnetohydrodynamic (GRMHD) code HARM ([33], [62]) explained in [71].", "This version of HARM has been developed to include realistic equation of state as well as neutrino treatment.", "HARM is a finite-volume code with HLL shock capturing scheme, which solves the partial differential GRMHD equations in the standard Valencia conservation formalism [63], for continuity equation, energy and momentum conservation, $\\begin{aligned}\\left( \\rho u_{\\mu } \\right)_{;\\mu } =0 \\\\T^{\\mu }_{\\nu ; \\mu } = 0,\\end{aligned}$ and induction equation, $\\begin{aligned}\\partial _t \\left( \\sqrt{-g} B^i \\right) = -\\partial _j \\left(\\sqrt{-g} \\left(b^j u^i - b^i u^j\\right)\\right).\\end{aligned}$ Here $T$ is the energy-momentum tensor with both matter and electromagnetic contributions $T^{\\mu \\nu } = T^{\\mu \\nu }_{gas} + T^{\\mu \\nu }_{EM}$ , $u_\\mu $ is the four-velocity, $B$ is the magnetic field and $b^i \\equiv (B^i + B^i u^{\\mu } g_{i \\mu }u^i)/u^t$ .", "The metric is frozen and fixed to the Kerr metric.", "The hydro equations are evolved in the modified spherical Kerr-Schild coordinates, where the following radial and angular maps are applied to decrease the grid spacing close to the black hole and the equatorial plane, respectively to improve the accuracy.", "$\\begin{aligned}& r_{KS} = R_0 + e^{r_{MKS}} \\\\& \\theta = \\pi x^{[2]} + \\frac{(1-h)}{2}sin(2\\pi x^{[2]})\\end{aligned}$ The coordinate parameter $h$ is set to $0.3$ for all models in this paper." ], [ "Initial Setup", "This work can be considered as a follow-up studies of [40] including a wider range of parameters for disk setup.", "We choose the initial disk mass in the range of [0.05,0.3]$M_{\\odot }$ to match with a realist post-merger remnant disks from a black hole-neutron star (BHNS) or neutron star binary system (BNS) [51].", "In order to investigate the spin effects on the disk ejecta, we alter the spin of the black hole (BH) for a few cases of prograde and one retrograde case.", "The mass of the BH has been set to $\\sim 2.65M_{\\odot }$ for BNS case, and $~5-6M_{\\odot }$ for the BHNS cases.", "In addition, we have a few cases of light black holes (LBH) $1-2M_{\\odot }$ , which are predicted to be formed through the collisions of low mass primordial black holes with galactic neutron stars in galactic dark matter halos [2].", "The details of our different models are given in table REF .", "The initial state of the torus is derived from the analytic solution of the Fisbone-Mocrief (FM) [27] disk model around a Kerr black hole.", "The FM disk is defined by $r_{in}$ the inner radius of the disk, $r_{max}$ the radius of maximum pressure, and $a$ dimensionless spin of the black hole.", "The FM disk free parameters in geometrical units, and the equivalent constant specific angular momentum $l_{FM}$ for each model are given in table REF .", "The initial magnetic field is confined within the torus with pure poloidal configuration for all the cases defined by vector potential: $A_{\\phi } = \\frac{\\overline{\\rho }}{\\rho _{max}} - 0.2,$ where $\\overline{\\rho }$ is the density averaged over density at grid point and the neighboring cells, and $\\rho _{max}$ is the maximum density.", "The strength of the magnetic field is set to $\\beta =50$ parameter, where 
$\\beta $ is defined as the ratio of the gas pressure to the magnetic pressure $\\beta \\equiv P_{g}/P_{B}$ .", "The grid resolution is 384x300 in the radial and polar angular directions, respectively, for all the models." ], [ "Equation of State and Neutrino Treatment", "For the nuclear equation of state we follow the same approach as [40], with the plasma composed of free protons, neutrons, electron–positron pairs, and helium nuclei.", "The fraction of each species is determined by the equilibrium condition assumed between the reactions of electron–positron capture on nucleons and neutron decays, based on the prescription given by [69].", "The gas is in equilibrium, so that the ratio of protons to neutrons satisfies the balance between the forward and backward rates of the following weak nuclear reactions: $p + e^- \\rightarrow n + \\nu _e$ , $p + \\overline{\\nu }_e \\rightarrow n + e^+$ , $p + e^- + \\overline{\\nu }_e \\rightarrow n$ , $n + e^+ \\rightarrow p + \\overline{\\nu }_e$ , $n + \\nu _e \\rightarrow p + e^-$ , and $n \\rightarrow p + e^- + \\overline{\\nu }_e$ .", "We can define the proton-to-baryon number density ratio $Y_p$ (equivalently the electron fraction, by the charge neutrality condition) as: $Y_e = (n_{e^-} - n_{e^+})/n_b$ , where $n_b$ is the baryon number density, and $n_{e^-}$ and $n_{e^+}$ are the electron and positron number densities, respectively.", "In our simulations $Y_e$ is not evolved in time but is determined after each time step by the equilibrium conditions.", "This quantity is used to study the composition of the outflow winds.", "Based on the discussion in [79] and [44], the r-process nucleosynthesis is only effective when $Y_e < 0.4$ , and the lanthanide elements are more likely to be produced when $Y_e < 0.25$ in the ejected matter.", "Therefore, an accurate measurement of $Y_e < 0.25$ is crucial to predict the kilonova's features.", "For the neutrino treatment, we applied the same approach as [40], where the absorption optical depth for different neutrino species is given by an approximation ([19]), $\\tau _{a,\\nu _i} = \\frac{H}{4 \\frac{7}{8} \\sigma T^4} q_{a,\\nu _i},$ where $q_{a,\\nu _i}$ is the absorption rate derived by summation over the absorption rates from the weak interactions mentioned above, $T$ is the temperature, $\\sigma $ is the Stefan-Boltzmann constant, and $H$ is the disk's height.", "The lepton flavours considered here for the neutrino species are electron, $\\mu $ and $\\tau $ .", "The scattering optical depth is estimated as: $\\begin{aligned}& \\tau _{s} = \\tau _{s,p} + \\tau _{s,n} \\\\& = 24.28 \\times 10^{-5} \\left[ \\left(\\frac{kT}{m_e c^2}\\right)^2 H \\left( C_{s,p}n_p + C_{s,n}n_n \\right)\\right],\\end{aligned}$ where $C_{s,p}=[4(C_V-1)^2+5\\alpha ^2]/24, C_{s,n}=(1+5 \\alpha ^2)/24, C_V=1/2+2\\sin ^2 \\theta _C$ , with $\\alpha =1.25$ and $\\sin ^2 \\theta _C = 0.23$ .", "The neutrino cooling rate is computed as: $Q_{\\nu } = \\frac{(7/8) \\sigma T^4}{(3/4)} \\sum _{i=e,\\mu ,\\tau }\\frac{1}{0.5(\\tau _{a,i}+\\tau _{s}) + 1/\\sqrt{3} +1/(3\\tau _{a,i})}$ Detailed discussions of the nuclear equation of state and the neutrino treatment implemented in the HARM-COOL code are given in [43], [41], [40].", "Table: Different disk setup for numerical simulations." 
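For concreteness, the cooling prescription above can be evaluated as a short post-processing routine. The following is a minimal sketch, not the HARM-COOL implementation itself: it assumes cgs units and that the absorption rates $q_{a,\nu_i}$ have already been accumulated from the listed weak interactions; the constants and expressions follow the formulae quoted above.

```python
import numpy as np

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
K_B      = 1.3807e-16  # Boltzmann constant [erg K^-1]
M_E_C2   = 8.1871e-7   # electron rest-mass energy m_e c^2 [erg]

def scattering_depth(T, H, n_p, n_n, alpha=1.25, sin2_thetaC=0.23):
    """Scattering optical depth tau_s = tau_{s,p} + tau_{s,n}."""
    C_V  = 0.5 + 2.0 * sin2_thetaC
    C_sp = (4.0 * (C_V - 1.0) ** 2 + 5.0 * alpha ** 2) / 24.0
    C_sn = (1.0 + 5.0 * alpha ** 2) / 24.0
    x = K_B * T / M_E_C2
    return 24.28e-5 * x ** 2 * H * (C_sp * n_p + C_sn * n_n)

def absorption_depth(q_a, T, H):
    """Absorption optical depth tau_{a,nu_i} for one neutrino species."""
    return H * q_a / (4.0 * (7.0 / 8.0) * SIGMA_SB * T ** 4)

def cooling_rate(tau_a, tau_s, T):
    """Total neutrino cooling rate Q_nu, summed over the e, mu, tau species;
    tau_a is an array of the three absorption depths."""
    black_body = (7.0 / 8.0) * SIGMA_SB * T ** 4 / (3.0 / 4.0)
    tau_a = np.asarray(tau_a, dtype=float)
    denom = 0.5 * (tau_a + tau_s) + 1.0 / np.sqrt(3.0) + 1.0 / (3.0 * tau_a)
    return black_body * np.sum(1.0 / denom)
```

The interpolation in the denominator smoothly bridges the optically thick (large $\tau$) and optically thin (small $\tau$) limits, which is why no explicit case distinction is needed.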
], [ "Outflows: Mass, Composition, Velocity and Geometry", "Measuring the general properties of the outflows such as mass, velocity and composition, with high accuracy is a crucial task to predict the observable kilonovae lightcurves.", "Using the particle tracer technique would allow us to learn about the geometry of the outflows as well.", "In this section, we present our quantitative measurements for outflow properties using the tracer method, focusing on the BH spin effects and also BH and disk masses' effects on these properties.", "At the end of this section, we use these results to estimate the time, luminosity and temperature peaks, and lightcurves of the kilonova emitting from each model.", "The total outflow mass, average velocity and electron fraction, measured by tracers at $r=800 r_g$ , are given in Table REF .", "Figs REF -REF show mass distribution histograms versus velocity, electron fraction and polar angle $\\theta $ for models with different spin and mass configurations.", "A quick observation from these quantities show that all the magnetized-neutrino cooled disk models generate ejecta with moderate velocity $v \\sim 0.1-0.2$ c, and high electron fraction $Y_e > 0.25$ for regular BNS and BHNS mass configurations, which perfectly describes the second channel of the kilonova emission sources from the postmerger torus producing bright and blue transients in a few hours after merger.", "The results for LBH cases are different regarding the composition which is discussed in Sec.", "REF and REF .", "Focusing on more details in our results show that the remnant BH spin plays an important role in outflows features.", "The models labeled as M1.0-0.14-a0.98, M1.0-0.14-a0.9, M1.0-0.14-a0.6 and M1.0-0.14-a0.2 all have identical initial parameters except for BH spin.", "As shown in Table REF the mass of ejecta is hugely affected by the BH spin.", "The mass increases monotonically by the BH spin and the difference may reach to more than three orders of magnitude between the low-spin case M1.0-0.14-a0.2 and extremely high-spin case M1.0-0.14-a0.98.", "This is consistent with previous GRMHD numerical studies observing that high spin BH cases with robust magnetically collimated jets produce more massive outflows ([36]).", "Similar observation about massive ejecta has been reported by [24] for viscous accretion disks with highly spinning black holes.", "The significant mass-loss is due to energy release by accretion happening deeper in the gravitational potential and closer to the BH singularity.", "This conclusion may let us rule out the merger scenarios with zero or low-spin remnants from the parameter estimation for bright kilonova observations.", "Moreover, the 2D profiles of the electron fraction given in Fig.", "REF and the average $Y_e$ in Table.", "REF , as well as the histograms in Fig.", "REF derived from SkyNet code for r-process nucleosynthesis for models with LBH ($M_{BH} = 1.0 M_{\\odot }$ ) and different spins show a dramatic change in the outflow compositions.", "Generally, the outflows are more neutron-rich for models with higher BH spin (with average $Y_e < 0.25$ ), which leads to more opaque lanthanide-rich material.", "This observation is in contrast with [40] and [24], which both found that the electron fraction will increase as the BH spin increases for stellar mass BH.", "The argument for their observation is, when the accretion happens in a deeper gravitational potential, it releases more energy and heat the inner part of the disk, therefore the weak interactions' 
equilibrium conditions change in a way that releases more neutrinos for more effective neutrino cooling.", "This results in a more proton-rich plasma.", "However, in our equation of state, the partial trapping of neutrinos is taken into account (see the details in [40] for the calculation of the trapped neutrino pressure), so the weak interaction rates are affected by neutrino trapping.", "In fact, a closer look at some quantities such as the neutrino luminosity and neutrino emissivity shows that the highest-spin case, M1.0-0.14-a0.98, goes through less effective neutrino cooling during the evolution compared with the M1.0-0.14-a0.9 case.", "However, more cases with LBH and stellar-mass BH configurations with different spins are needed in future studies to investigate this effect more closely.", "Similarly for the velocity, we observe significant changes imposed by the BH spin.", "Fig.", "REF shows that the high-spin cases, such as M1.0-0.14-a0.98 and M1.0-0.14-a0.9, generate outflows with a higher and broader range of velocities.", "However, the average velocity does not increase monotonically with BH spin, as our highest-spin case M1.0-0.14-a0.98 has an average velocity slightly lower than the M1.0-0.14-a0.9 case.", "From the geometrical point of view, as shown in Fig.", "REF , all cases with different spins have a very broad range of ejection angles.", "Almost all the cases have a distinguishable peak around the equator $\\theta \\sim 80^{\\circ }-100^{\\circ }$ ; however, it is interesting to note that the symmetry is somewhat broken for some cases, i.e.", "significantly more wind is ejected from the northern hemisphere than the southern hemisphere, and vice versa.", "This geometrical pattern is visible for almost all the cases regardless of their BH spins." ], [ "Prograde versus retrograde", "For a complete comparison, we perform a similar analysis between one prograde case with moderate spin $a=0.6$ labeled as M6.0-0.14-a0.6 and one retrograde case with spin $a=-0.6$ labeled as M6.0-0.14-aR0.6.", "Both cases have a BH mass equal to $6M_{\\odot }$ and a disk mass equal to $0.14M_{\\odot }$ ; however, the initial disk parameters, such as the inner radius and the constant specific angular momentum, differ between the two models based on the nature of the Fishbone-Moncrief solution (see  [50] for possible retrograde disk solutions).", "So, the disks are not identical in terms of size and compactness.", "The comparison between these two models shows that the retrograde case generates a much lower ejecta mass (by one order of magnitude) and faster ejecta.", "However, the higher velocity can be explained by the fact that this disk is initially larger than the prograde case (larger initial FM specific angular momentum) and therefore less bound to the BH, producing higher-speed ejecta.", "On the other hand, as shown in Figs.", "REF -REF , the retrograde case has a narrower range of velocities and a more symmetric geometry, with more outflows leaving the grid around the equator compared with the prograde case.", "Overall, one may conclude that a negative spin can produce distinguishable changes in the amount, velocity and geometry of the outflows.", "Figure: Electron fraction profiles for models with different BH spins at the final time snapshot ($t \\sim 0.247$ s).", "Figure: Density profiles for models with different BH mass and disk mass at the final time snapshot.", "Figure: The mass distribution versus polar angle, comparison for cases with spin and mass configurations as Fig.", "." 
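The mass-weighted quantities and histograms discussed in this section can be reproduced from the tracer outputs with a few lines of post-processing. The sketch below is illustrative only (the array names are hypothetical, not the actual analysis scripts): it assumes each tracer that crossed the extraction radius carries a mass, a terminal velocity, an electron fraction and a polar angle.

```python
import numpy as np

def outflow_summary(m, v, ye, theta, nbins=50):
    """Mass-weighted summary statistics and histograms for tracer particles
    that crossed the extraction radius (here assumed to be r = 800 r_g)."""
    m_tot  = m.sum()                        # total ejecta mass carried by tracers
    v_avg  = np.average(v,  weights=m)      # mass-weighted mean velocity
    ye_avg = np.average(ye, weights=m)      # mass-weighted mean electron fraction
    f_lan  = m[ye < 0.25].sum() / m_tot     # lanthanide-rich mass fraction (Y_e < 0.25)

    # mass distributions dM/dv, dM/dY_e, dM/dtheta, as shown in the histograms
    hist_v,  edges_v  = np.histogram(v,     bins=nbins, weights=m)
    hist_ye, edges_ye = np.histogram(ye,    bins=nbins, range=(0.0, 0.5), weights=m)
    hist_th, edges_th = np.histogram(theta, bins=nbins, range=(0.0, np.pi), weights=m)
    return dict(m_tot=m_tot, v_avg=v_avg, ye_avg=ye_avg, f_lanthanide=f_lan,
                hists=((hist_v, edges_v), (hist_ye, edges_ye), (hist_th, edges_th)))
```

Weighting every histogram by the tracer mass, rather than counting tracers, is what makes the distributions comparable between models with different tracer seeding densities.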
], [ "Effects of BH and Disk Masses", "We found out the postmerger remnant disk's mass has a significant impact on the outflow properties as well.", "However the changes are less dramatic compared to the spin effects.", "The outflow becomes more massive by more than one order of magnitude as move from M2.0-0.05-0.9 case with $M_{disk} = 0.05 M_{\\odot }$ to the highest disk mass case M5.0-0.3-0.9 with $M_{disk} = 0.3 M_{\\odot }$ .", "This obviously suggests that mergers with massive remnants is more likely to have observable EM counterpart.", "Such scenarios are possible from BNS mergers and BHNS mergers with low mass ratio, softer neutron star's equation of state and high spin BH.", "The histogram plots in Figs.", "REF -REF show the massive disk M5.0-0.3-0.9 produce massive ejecta over a broad range of velocities $0.01-0.3$ c, but also with higher mass distribution over the lower velocities which makes the larger part of ejecta from this case slower than the others.", "The broken symmetry in the outflows geometry is still visible in Fig.", "REF , and overall we observe the outflows are ejected over a broad range of angles except the poles.", "This feature is not affected by the disk mass and/or BH mass.", "Moreover, we observe the composition of the winds is affected by the mass of the BH quite significantly.", "As mentioned in Sec.", "REF , for the LBH cases, the ejecta contains more neutron-rich material with lower average electron fraction $Y_e < 0.25$ , while for the systems with stellar mass central BH, the outflow is dominated by less neutron-rich material ($Y_e > 0.25$ ) and therefore more lanthanide-free composition.", "As a result, one might expect to observe only red and IR transients from accretion disk systems with central LBH.", "This feature can be considered as an important signature to estimate the mass of the central postmerger remnant, and therefore an evidence for the existence of the primordial BH progenitors for such systems ([12]).", "At this point, one might wonder how realistic these masses and geometries are compared with the remnant disks from merger simulations.", "Our selected models can present different BNS and BHNS with different parameters.", "[68] and [13] derived fitting formula calculating the masses of ejecta and remnant disk for BNS systems based on numerical simulations results.", "Later,  [51] suggested a simpler version of this formula based on the same data base.", "Similar analytic fitting formulae were proposed by [29], [46], [30] for BHNS systems.", "Using this data base and reversing this analysis to estimate the merger parameters from the black hole and accretion disk's masses, we can make the following statement about our models.", "For instance, the M5.0-0.3-a0.9 presents the remnants from a BHNS merger with mass ratio $Q \\sim 3.3$ , with $M_{BH}=4M_\\odot $ , $M_{NS}=1.2M_\\odot $ , and $R_{NS} = 13.2$ km.", "Such NS can be explained by a stiff equation of state for a non-rotating spherical star such as DD2 or BHB.", "Our FM model gives a disk with mass of $M_{disk}=0.3M_{\\odot }$ and the radius of the maximum density at $r\\sim 100$ km.", "Model M2.65-0.1-a0.9, resembles the remnants from a BNS merger with identical mass $M_{NS}=1.2 M_{\\odot }$ and $R_{NS} = 11.7$ km.", "Such NS can be explained by a soft equation of state for a non-rotating spherical star such as SRO and APR.", "The FM solution creates a disk with mass of $M_{disk}=0.1 M_{\\odot }$ and the radius of the maximum density at $r \\sim 44$ km.", "Comparing against 
literature, for compact binary mergers, the remnant disk's outer radius is usually around 100-200km with maximum density around 50-60km depending on the initial parameters of the merger ([17], [23]).", "Since in the FM solution, the size of the disk is scaled by the mass of the BH, this solution may create disks with larger radius.", "Therefore, our disk models resembling BHNS mergers are larger and less compact compared with final disks from merger simulations.", "This difference can cause deviations in the mass, velocity and composition of the outflows launching from the disks with different geometries, and non-axisymmetric configurations expected for disks from mergers.", "Regardless of BH spin and mass configurations, we see similar pattern in the ejecta's mass evolved during time.", "Fig.", "REF shows the evolution of the cumulative ejecta's mass launched from selected models.", "As illustrated in this figure, the ejecta is being developed shortly after the simulations start, increases exponentially during the first half of the evolution, and then continues increasing with a slower pace for the rest of the simulation.", "This exponential growth in outflow mass at early time is consistent with the GRMHD model reported by [26], and it is a characteristic of the magnetized disk models compared with the $\\alpha $ -disk models.", "The detailed discussion over outflow measurements and how it is affected by the computational grid resolution and tracer setups is given in the appendix .", "Figure: The mass distribution versus polar angle, comparing cases with different disk and black hole mass configurations.Figure: Evolution of the cumulative outflow mass launched from the disk measured by tracers.", "Time is given in the code unit in this figure." ], [ "Kilonovae peak properties", "In order to investigate some important kilonovae properties based on measured outflows, we use the approximations given by [34] and [20].", "The time $t_{peak}$ at which the peak occurs, the luminosity $L_{peak}$ at this time, and the corresponding temperature $T_{peak}$ are estimated as: $t_{peak} = 4.9~d \\times \\left( \\frac{M_{ej}}{10^{-2}M_\\odot } \\right) ^{\\frac{1}{2}} \\left( \\frac{\\kappa }{10~cm^2g^{-1}} \\right) ^{\\frac{1}{2}} \\left( \\frac{v_{ej}}{0.1} \\right)^{-\\frac{1}{2}},$ $\\begin{aligned}L_{peak} = 2.5 \\times 10^{40}~\\text{erg/s}~\\times \\left( \\frac{M_{ej}}{10^{-2}M_\\odot } \\right) ^{1-\\frac{\\alpha }{2}} \\\\\\left( \\frac{\\kappa }{10~cm^2g^{-1}} \\right) ^{-\\frac{\\alpha }{2}} \\left( \\frac{v_{ej}}{0.1} \\right)^{-\\frac{\\alpha }{2}},\\\\\\end{aligned}$ $\\begin{aligned}T_{peak} = 2200~K \\times \\left(\\frac{M_{ej}}{10^{-2}M_\\odot } \\right) ^{-\\frac{\\alpha }{8}} \\\\\\left( \\frac{\\kappa }{10~cm^2g^{-1}} \\right) ^{-\\frac{\\alpha +2}{8}} \\left( \\frac{v_{ej}}{0.1} \\right)^{-\\frac{\\alpha -2}{8}},\\end{aligned}$ where $\\kappa = 1 cm^2g^{-1}$ is the average opacity suggested by [34] for less opaque material produced by weak r-process, and $\\alpha = 1.3$ .", "The results of these peak values are given in table REF .", "Based on these analytic fitting formulae, the majority of our models power EM transients with their peaks in a few hours after merger and the luminosity around $\\sim 10^{40}-10^{41} erg/s$ .", "Table: Properties of the outflows and kilonovae peaks." 
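The three peak estimates in the table follow directly from the scalings above. The snippet below is a minimal sketch that evaluates them exactly as written, with the fiducial $\kappa = 1\,cm^2 g^{-1}$ and $\alpha = 1.3$ quoted in the text; it is not taken from the analysis code of [34] or [20], and the exponents simply reproduce the quoted expressions.

```python
def kilonova_peaks(M_ej, v_ej, kappa=1.0, alpha=1.3):
    """Peak time [days], luminosity [erg/s] and temperature [K] of the kilonova
    from the analytic fitting formulae quoted above.
    M_ej  : ejecta mass in solar masses
    v_ej  : average ejecta velocity in units of c
    kappa : grey opacity in cm^2/g
    """
    m = M_ej / 1.0e-2     # M_ej / (10^-2 Msun)
    k = kappa / 10.0      # kappa / (10 cm^2 g^-1)
    v = v_ej / 0.1        # v_ej / 0.1c

    t_peak = 4.9 * m**0.5 * k**0.5 * v**-0.5
    L_peak = 2.5e40 * m**(1.0 - alpha / 2) * k**(-alpha / 2) * v**(-alpha / 2)
    T_peak = 2200.0 * m**(-alpha / 8) * k**(-(alpha + 2) / 8) * v**(-(alpha - 2) / 8)
    return t_peak, L_peak, T_peak

# e.g. kilonova_peaks(M_ej=1e-3, v_ej=0.15) for a typical disk-wind model
```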
], [ "Lightcurves and r-process nucleosynthesis", "In order to estimate the possible lightcurves powered by the ejecta from different models, we consider the same approach as [46].", "As it is shown by [20] this approximation agrees well with the results given by the radiative transfer simulations for the dynamical ejecta.", "However, this analytic fitting overestimates the luminosity at the peak time (see Fig.8 from  [20] for comparisons).", "Therefore, we use the [34] fitting formula to estimate the peak values for a better approximation.", "The luminosity curve is given by $L_{bol}(t) = (1+\\theta _{ej}) \\epsilon _{th} \\dot{ \\epsilon }_{0} M_{ej} {\\left\\lbrace \\begin{array}{ll} \\frac{t}{t_c} \\left( \\frac{t}{1~d} \\right)^{-\\alpha }, & t \\leqslant t_c \\\\\\left( \\frac{t}{1~d} \\right)^{-\\alpha }, & t > t_c\\end{array}\\right.", "}$ where $\\dot{\\epsilon }=1.58 \\times 10^{10}~erg~g^{-1}~s^{-1}$ is the specific heating rate, $\\epsilon _{th}$ is the efficiency of thermalization introduced by [59], $0.5 < \\epsilon _{th} < 1$ and $t_c$ is derived by $t_c = \\sqrt{\\frac{\\theta _{ej} \\kappa M_{ej}}{2\\phi _{ej}(v_{max}-v_{min})}},$ with $v_{max}$ and $v_{min}$ are the maximum and minimum speed of the ejecta.", "$\\theta _{ej}$ and $\\phi _{ej}$ are geometrical parameters of the outflows.", "It was shown by [20] that if we assume the ejecta as homogeneously distributed material moving with velocity $v_{ej}$ in a $\\rho -z$ plane, the polar and azimuthal opening angles $\\theta _{ej}$ and $\\phi _{ej}$ are related to the velocity components with: $\\theta _{ej} \\approx \\frac{2^{4/3} v^2_{\\rho } - 2^{2/3} \\left(v^2_{\\rho } \\left(3v_z + \\sqrt{9v^2_z+4v^2_{\\rho }}\\right) \\right)^{2/3}}{\\left(v^5_{\\rho } \\left(3v_z+\\sqrt{9v^2_z+4v^2_{\\rho }}\\right)\\right)^{1/3}},$ $\\phi _{ej} = 4 \\theta _{ej} + \\frac{\\pi }{2}.$ Fig.", "REF shows the predicted lighcurves from this method.", "As one expects, M1.0-0.14-a0.98 and M5.0-0.3-a0.9 cases with the highest mass outflows produce brighter transients.", "There are two pairs of models launching outflows with similar features and therefore generates lightcurves following each other closely, M2.65-0.1-a0.9 and M1.0-0.14-a0.9, and the other pair is M2.0-0.05-a0.9 and M1.0-0.14-a0.6.", "Our models suggest that although bright blue transients is possible to be observed from light BH-accretion disk systems, they may not be distinguishable from BNS mergers.", "Overall, the possible degeneracy make the mergers parameter estimation very challenging from kilonova observations.", "Figure: Luminosity versus time estimated by Kawaguchi's method for different cases.As the next post-process analysis, we have applied the r-process nucleosynthesis on the simulations outputs using the open source SkyNet code ([55]) to measure the nuclear abundances.", "The results of these simulation are illustrated in Fig.", "REF by taking the average over all the tracers and compared with the Solar system abundances.", "The results show the 2nd and 3rd r-process abundance peaks are resolved well in our models.", "We would like to emphasise again that the composition analysis discussed in sections.", "REF and  REF for Figs.", "REF and  REF are from these skynet simulations, and these figures showing the composition of the ejecta at the onset of the r-process.", "Figure: Nuclear abundances as a function of mass number A foreach simulation, based on average of tracers sampling the outflows." 
], [ "Comparison with previous studies", "There are two major mechanisms driving postmerger outflows, one is neutrino heating and the other is magnetically driven turbulence.", "The analytical estimation by [64] indicated that in a post-merger accretion disk neutrino heating timescale is short enough to drive a wind within the lifetime of the disk.", "On the other hand, in the case of a weakly magnetized accretion disk, MRI can be triggered in the disk's dynamical timescale, resulting in magnetic field amplification and turbulence.", "The plasma is being accreted to the BH and wind is originated from the disk, while the angular momentum is transported by magnetic turbulence effects ([4]).", "[74] has shown that the expected outflow mass from a BNS postmerger remnant disk has to be around $\\sim 0.03 M_\\odot $ to power emissions observed in AT2017gfo (kilonova emission observed right after GW170817) as predicted by the r-process nucleosynthesis.", "They also showed that both optical and near-infrared emissions are simultaneously reproduced by the ejecta with a medium $Y_e$ of 0.25.", "The recent Bayesian analysis by [70] constrained a mass ratio of $M_w/M_d = 2.81$ (the ratio of wind mass to dynamical ejecta mass) to reproduce the observed AT2017gfo kilonova lightcurves while it is also consistent with the observed r-process elements abundance measured in the Solar system.", "While the most perfect way to study kilonovae in a numerical simulation is evolving BNS and/or BHNS systems for late inspiral, merger and postmerger phases, measuring dynamical and postmeger ejecta, most recent studies focused only on one side of the scenario.", "In the case of disk winds, ideally we want to evolve the magnetized remnant torus in a high-resolution 3D grid for a long period ($\\sim 10$ seconds), however due to the limit of computational resources, long-term 2D simulations with the $\\alpha -$ viscosity prescription as the main mechanism to produce the outflows were initially preferred [24].", "Nonetheless, there are a few studies in the literature, which evolved magnetized remnants with neutrino cooling in fully GRMHD simulations within a multi-second timescales [26], [37].", "Generally, in the outflow studies, some groups considered analytical disk as the initial data configuration and then perturbed them by weak magnetic field (similar to our study in this paper) or $\\alpha $ viscosity, while others deal with remnant disk from a BHNS or BNS simulations, applying Cowling approximation (frozen space-time) at $t \\sim 10-15$ ms after merger.", "At this point, it would be interesting to compare our results with these studies, as well as the predicted values from the kilonova observations.", "For obvious reasons, the outflow properties are highly case dependent.", "The wind mass reported by different groups typically varied from $10^{-5}$ to $10^{-2} M_{\\odot }$ .", "We found most of our cases produce $\\sim 10^{-4}-10^{-3} M_{\\odot }$ , which are lower by one to two orders of magnitude compared with the predicted values for AT2017gfo observation.", "The outflow mass might be underestimated as a results of various reasons.", "The full discussion on our tracer's setup and alternative measurements is given in Appendix  .", "Comparing our models with similar cases in the literature, our results confirm the conclusion made by [44] for $\\alpha $ -viscosity disk models.", "As they mentioned in the case of the most mass massive disk $a0.8_{M0.3}$ (disk mass$\\sim 0.3 M_{\\odot }$ and BH spin $a=0.8$ ), the 
outflow mass increases by almost factor of 4 compared to their $\\sim 0.1 M_{\\odot }$ case, but the average velocity becomes slightly smaller.", "For similar cases, we observed the massive disk, M5.0-0.3-a0.9, provides the highest outflow mass compared to the intermediate mass cases such as M2.65-0.1-0.9, but in our models the difference is quite dramatic (about one order of magnitude), while the velocity is slightly bigger.", "However, the outflow of their $a0.8_{M0.3}$ case is still much higher than our M5.0-0.3-a0.9 by factor of 2.", "Our M2.65-0.1-a0.9 case can be compared with HS-Therm and HS-Magn models from [40].", "This case is initially moderately magnetized and the comparison of this case with Table(2) from the same reference indicates that the outflows properties are scaled by the initial magnetic field strength, i. e. the outflow mass and velocity increase as the initial magnetic field increases.", "Also, [40] found that the average electron fraction in the outflows depended on the disk magnetisation, especially at the earlier time.", "Though, the average electron fraction we calculated for M2.65-0.1-a0.9 case is closer to the highly magnetized case 'HS-Magn' from this reference.", "Here, we measured the averaged electron fraction around $0.1<Y_e<0.3$ , which is consistent with the GRMHD model in [26], and it is relatively lower than the averaged value reported by [35] ($0.3 < Y_e <0.39$ ) for viscous disks.", "Our results also show that outflow mass increases sharply at early times, matching well with GRMHD model, while the viscous disks have this large growth at much later times in [26].", "Though, the outflow masses are still lower in our models, which can be caused by the time of evolution and differences in disk parameters and equation of state (more analysis is given in the Appendix REF ).", "The outflow velocity measurements agree well with the previous studies and the theoretical models' prediction for the disk winds.", "The velocity range is around $0.1-0.2$ c and is generally higher for systems with higher spins.", "Regarding the geometry, the outflows launch over a wide range of angle which agrees with results reported in [26] and [22].", "However, our results show less equatorial symmetry compared with these studies." 
], [ "Parameter estimation, binary population synthesis and spin effects", "Based on population synthesis studies, the GW from BNS mergers are more likely to be observed with EM counterparts ([57]).", "As already discussed in the literature there are some features in the kilonova emissions, which help to distinguish the BHNS from BNS mergers and make some parameter estimations.", "For instance,  [44] claimed that there is a rapid decline in the lightcurves from a BHNS postmerger ejecta (or equivalently, a prompt collapse for a BNS merger).", "Unfortunately, our current models do not allow us to investigate different possible scenarios for BNS mergers, such as creation of a magnetized and diferentially rotating HMNS with different lifetimes, which can affect the amount of ejected matter significantly ( [16]).", "However, even the results from [44] show that the light curves of models with high spin BH and HMNS with 30ms lifetime are quite similar, and this degeneracy introduces a challenge in using kilonova observations to estimate the lifetime of a HMNS remnant.", "For the BHNS merger cases, two signals GW200105 and GW200115 have been detected by the LIGO-Virgo collaborations so far.", "The masses of GW200115 and GW200105 are 8.9M and 1.9M, and 5.7M and 1.5M, respectively.", "The spin of the black hole in GW200115 is not tightly constrained but it is estimated to be $\\sim -0.5-0.04$ and most likely misaligned.", "The dimensionless spin magnitude of the black hole in GW200105 is estimated to be $< 0.2$ and its direction is unconstrained ([1]).", "The spins of the remnant central BHs are not constrained up to certain points, but they are likely around $\\sim 0.38$ and $0.43$ for GW200105 and GW200115 respectively ([9]) However, no EM counterpart have been observed from these mergers.", "The discussion given by [72] indicated that only BHNS with highly spinning black holes can generate massive dynamical ejecta (the difference may exceed three orders of magnitude from low spin ($\\sim 0.2$ ) to high spin $\\sim 0.9$ ).", "This high spin can be obtained by two mechanisms during a BHNS binary formation: i) inheritance due to weak core-envelope coupling of the stellar progenitors, and ii) accretion from the companion star during stable mass transfer.", "However, population synthesis studies of merger rates of BHNSs suggest that the majority of binaries will not result in observable EM counterparts.", "Studies by  [21] found that only a fraction ($\\sim 20\\%$ ) of BHNS binaries gain a high dimensionless BH spin from their stellar progenitors and produce massive ejecta during merger.", "For postmerger outflows, we obtained similar trends in our results, i.e.", "the disk with lower spin BH generates lower mass ejecta, up to two orders of magnitude compared to high spin cases.", "We also concluded that the ejecta mass may become lower for the retrograde accretion.", "However the mass of the postmerger remnant disk can be another important factor for launching the ejecta, which is highly dependent on the mass ratio, NS equation of state and BH spin ([32], [56]).", "Simulations done by [56] show that the massive remnant disk is likely to be originated from BHNS mergers with high spin BH.", "Overall, our models confirm that almost not much ejecta and no EM counterparts are expected to be observed from these BHNS systems, unless for BHs with high spins and lower mass ratio cases providing massive postmerger disks." 
], [ "Outflow measurements and other important physical components", "In this section we present a discussion about important numerical and physical elements which are ignored in our simulations and can affect our outflow measurements significantly.", "First, it is worth mentioning that the accuracy of our outflow studies are affected by the current version of neutrino and equation of state treatments.", "Neutrino emissions can affect the disk's thermal and composition evolution significantly at the early time of evolution after merger ([39], [26]).", "Therefore, a more advanced neutrino treatment such as leakage or neutrino radiation transport schemes along with composition evolution are needed to provide a better accuracy.", "Our equation of state does not include nuclei heavier than Helium, however  [35] and [22] showed the inclusion of these nuclei in the equation of state may cause significant changes in the outflow's mass and velocity.", "In addition, the numerical scheme and grid resolution typically have impacts on the modelling of the kilonova emission.", "In the study by  [60] the authors compared the dynamical ejecta properties measured from 4th-order and regular 2nd-order finite difference schemes for magnetized BNS simulations.", "They found the second-order scheme overestimates the amount of proton-rich shock-heated ejecta.", "Also,  [11] performed convergence studies on the outflows from a BNS merger simulation, and they found out the highest resolution resolve the shock-heated plasma more accurately.", "The postmerger ejecta can be affected in a similar way.", "Fully-resolved MRI effects requires high accuracy in numerical scheme and high resolution grid, and both can impact the magnetically-driven viscous heating effects which leads to changes in the composition and other features of the disk and the outflows (see Appendix REF about the resolution test.)", "For the sake of electromagnetic lightcurve estimations, we use here a simple fitting formula, which can be correct only up to the order of magnitude.", "In our simple approach we assumed a constant value for the average opcacity of the ejected matter, which in reality varies a lot depending on the geometry and composition of Lanthanide/Actinide-rich matter.", "In order to predict the lightcurves and spectrum of the emissions accurately, one needs to apply a radioactive transfer code on the ejecta.", "Such studies have been done recently e.g.", "by [78].", "One crucial thermal component in studying outflows is the inclusion of the r-process heating and neutrino cooling terms in the long-term hydro simulations (see the discussion and eqs.", "(20-21) from [31]).", "Recent study by [35] for long-term 2D viscous disk simulations indicated that adding these terms to hydro equations after one second evolution increases the outflow mass by 30%.", "Another important physics to be considered for kilonova studies is neutrino's flavour oscillations.", "A few groups recently studied the effects of fast flavour instability (FFI) on the disk outflows in their simulations ([54], [25]).", "Neutrino flavor oscillation is reported to have moderate (or large, in some cases), impacts on the mass ejection, average velocity, and average electron fraction.", "Even with very accurate numerical evolution and measurements, there are possible ways to make our results different from the observations.", "First, the measured abundances can be largely affected by the nuclear uncertainties in the r-process models, especially at late-time emissions, 
$t>100$ days ([52]).", "Second, in a realistic merger and post-merger scenario, the emissions from disk winds can be affected by the dynamical ejecta.", "According to the discussion by [44], dynamical ejecta is generally faster and more neutron-rich and therefore more opaque, which acts like a 'Lanthanide curtain’, masking the emissions originating from the wind ejecta.", "They observed that for an edge-on orientation, the dynamical ejecta partially blocks the wind's optical flux by an order of magnitude." ], [ "Conclusions and Summary", "We have carried out two-dimensional simulations of several black hole-accretion disk models with different initial setups using the GRMHD HARM-COOL code.", "These simulations include both magnetic field evolution and neutrino emission effects, as well as realistic nuclear equation of state.", "The initial magnetic field has been seeded in poloidal loops configuration confined within the plasma.", "We evolved these models about $t \\sim 200$ ms, and used particle tracers to measure the outflows properties.", "We observed there is strong correlation between BH spin and the outflows properties.", "Generally, disks with higher mass and BH spin generate faster and more massive outflows.", "We observed our models generate winds with moderate velocity (v/c $\\sim 0.1-0.24$ ) and a broad range of electron fraction ($Y_e \\sim 0.01-0.5$ ), which are consistent with previous studies on GRMHD postmerger accretion disk simulations.", "We have included a few cases with lower mass black holes representing BH formed from primordial black hole and neutron star mergers, and we found out such cases generate more neutron-rich ejecta in comparison with regular BHNS and BNS mergers.", "Generally, the outflow masses measured by tracers are lower compared with long viscous disk and GRMHD simulations.", "We have investigated the accuracy of the outflow measurements with tracer method by altering the tracer's density threshold and extraction radius, and also grid resolution (explained in the Appendix ).", "We found that our measurements for ejecta's mass can be affected up to $50\\%$ by this alterations.", "However, we consider this method to be more accurate than the unbound matter estimation from the geodesic and Bernoulli criteria.", "The general properties of postmerger ejecta derived from our simulations give us the opportunity to estimate the luminosity lightcurves of possible radioactively powered transients using analytic fitting formula.", "We found the luminosity peaks within the range of $\\sim 10^{40}-10^{42}$ erg/s, brighter peaks for cases with higher ejecta mass, which agrees with previous studies for neutrino-driven disk wind models.", "Applying the r-process nucleosynthesis code on our results and comparing against the Solar-system abundances showed that the 2nd and the 3rd r-process abundance peaks are resolved well in our models.", "In the end, we should point out that for future studies we need to improve our numerical method by including a more sophisticated neutrino treatment such as leakage or transport schemes, as well as composition evolution to capture all the neutrino absorption/emission features.", "Moreover, including heavy nuclei in the equation of state are required to achieve a more realistic scenario with better accuracy.", "The authors thank Oleg Korobkin for helpful discussion and advice over the course of this project.", "This work was supported by grant No.", "2019/35/B/ST9/04000 from the Polish National Science Center, Poland.", "We also 
acknowledge the support from the PL-Grid infrastructure under computational grant plgmicrophysics." ], [ "Outflow measurements: methods and accuracy", "In this appendix, we first investigate alternative methods to measure the outflow mass and compare them with the results from the tracer method.", "In the second part, we perform several test simulations for a single case to study the effects of the resolution and tracer setup parameters on the outflow measurements.", "At the end, we comment on the evolution time and its effect on our measurements." ], [ "Unbound matter estimation: Geodesic and Bernoulli criteria versus tracer methods", "As mentioned, in this study we use the tracer particle technique to measure the outflows.", "In this method, all the particle trajectories leaving the outer boundary (or a large radius close to the outer boundary) of the computational grid during the evolution are marked as outflow winds.", "For the results discussed in Sec.", "we set this radius at $r=800~r_g$ .", "The details of the tracer method's implementation in HARM-COOL are given in  [40].", "The tracer particles provide information on how different quantities, such as density, temperature and electron fraction, vary along the particle trajectories.", "This information is used for the postprocess calculations of the element abundances from the r-process nucleosynthesis presented in Sec.", "REF .", "In addition, a detailed picture of the wind's geometry can be obtained from the trajectories of the outflowing particles.", "Despite the fact that tracers are useful tools for outflow studies, the amount of measured outflows can be underestimated if the system is not evolved long enough to provide enough time for the unbound mass to leave the grid.", "In fact, the $t \\approx 200$ ms we have for these simulations cannot be considered a very long evolution for studying the disk winds.", "On the other hand, the evolution of the disk ejecta can be affected by several other factors, including the artificial atmosphere controls over the low-density regions and the tracers' initial setup parameters.", "To test the accuracy of our tracer measurements, we apply alternative methods from the literature to identify the unbound matter.", "Referring to the discussion given by [31], the most accurate way to study outflows is a long-term (multi-second) 3D simulation where all the r-process heating and neutrino cooling terms are included in the source terms of the hydrodynamic evolution equations (see Eqs.", "(20-21) from the same reference for more details).", "However, it is still possible to obtain a reasonable estimation of the outflow mass based on the energy criteria from a short-term simulation.", "We try three different criteria introduced in this paper and compare them with the results from tracers for our M2.65-0.1-a0.9 case: 1- The geodesic criterion $u_t<-1$ , which is not suitable for hot disk outflows, because it ignores the thermal energy of the fluid and underestimates the unbound mass significantly.", "2- The Bernoulli criterion $hu_t < -h_{\\infty }$ , which includes the thermal energy but ignores the cooling effects caused by neutrino losses, and therefore overestimates the unbound mass.", "3- The improved version of the Bernoulli criterion, which includes the r-process heating and neutrino losses by a simple approximation given by $hu_t/h_{\\infty }(0.9968+0.0085Y_e) < -1.$ Here $h$ is the enthalpy given by $h=1 + \\epsilon + P/\\rho $ with $\\epsilon $ representing the 
specific internal energy, $h_{\\infty }$ is the asymptotic enthalpy for $\\rho \\rightarrow 0$ and $\\epsilon \\rightarrow 0$ (for our case $h_{\\infty }=1$ ) and $Y_e$ is the electron fraction.", "Generally, the improved version of the Bernoulli criterion can be considered a more realistic estimation for flagging the unbound mass in different merger and post-merger scenarios.", "The comparison of the ejected mass computed from the tracer method and the estimates from the geodesic, Bernoulli and improved Bernoulli criteria, calculated at a single time snapshot in the middle of the simulation at $t \\sim 100$ ms for the M2.65-0.1-a0.9 case, is given in Table REF .", "These results show that the outflow mass computed from tracers is lower by about a factor of four compared with the improved version of the Bernoulli criterion.", "However, we should remind ourselves that this criterion is still an approximation, and cannot be taken as an accurate measurement of the unbound mass.", "Table: Ejected mass: tracers versus geodesic, Bernoulli and improved Bernoulli criteria (at the $t \\sim 100$ ms snapshot). The volume integral is taken from $r=400~r_g$ ($\\sim 1500$ km) to the outer boundary." ], [ "Tracer's Setup: Grid Resolution, density threshold and extraction radius effects", "To investigate the effects of the computational grid resolution and tracer setup parameters, we perform several tests for our M2.65-0.1-a0.9 model, and compare their results with the standard M2.65-0.1-a0.9-Std case discussed in Sec. .", "This is our closest case to a BNS postmerger scenario.", "The list of tests and the measured ejecta mass and velocity are given in Table REF .", "For the Trac-Low-RhoMin test, we alter the density threshold identifying the active tracers during the evolution and reduce this value by two orders of magnitude.", "The original threshold for the standard case was $\\sim 10^{-4}$ of the maximum density.", "The Trac-1500km test is designed to study the effect of the radius at which we extract the information about the outflows.", "This radius has been set to $r=800~r_g$ , which is almost 3000 km for the standard case.", "Finally, the High-Res test has an identical setup to M2.65-0.1-a0.9-Std, but with a higher grid resolution of 480*426 along the radial and angular directions, respectively.", "Figs.", "REF and REF show the cumulative outflow mass versus time and the mass distribution versus velocity, respectively, for these tests.", "The simulation with the lower density threshold measures a slightly larger outflow mass, with a larger mass distribution over the high-velocity range.", "This makes the average velocity for this test significantly higher than in the other cases.", "The extraction radius is another important parameter to be explored.", "Our results show that more massive and slower ejecta are measured at the smaller extraction radius.", "However, this mass is still lower by more than a factor of two compared to the unbound mass measured by the improved Bernoulli criterion reported in Table REF .", "Specifying the extraction radius can be challenging, and it can affect the measurement of some quantities, such as the velocity.", "In the literature, a smaller extraction radius of $\\sim 200-500$ km has been used to measure the dynamical ejecta or the ejecta from a magnetized postmerger HMNS with tracers (see for example [11] and [16]), and each tracer has to satisfy the geodesic or Bernoulli condition to identify the unbound mass.", "However, the disk wind is a different scenario; the recent study by [35] showed that if the 
r-process heating source terms are included in the evolution of the postmerger ejecta during a multi-second simulation, the ejecta's velocity needs to be measured at a much larger distance.", "This study claimed that the ejecta is continuously accelerated as a result of the r-process heating and it reaches its asymptotic value around $r \\sim 40,000$ km.", "The High-Res test shows that our measured mass can be affected by the grid resolution to some extent.", "For this particular case, the ejecta mass increases by about 12%.", "Fig.", "REF shows that the outflows with higher velocities are overestimated at the lower resolution.", "The standard resolution of our simulations is high enough to resolve the fastest growing MRI mode almost everywhere in the torus (see Fig.", "(2) from [40] for a similar case).", "However, fully resolving the MRI modes and capturing all the transport effects of turbulence require a very high grid resolution.", "Even high-resolution GRMHD simulations have reported only qualitative convergence in capturing the turbulent heating effects caused by the magnetorotational instability ([48]).", "Obviously, higher-resolution simulations provide higher accuracy in resolving the heating effects of the turbulent plasma and produce more massive outflows.", "In conclusion, the results of these tests show that our outflow measurements are affected by the resolution and the tracer setup parameters by up to $50\\%$ , which is still unable to explain the entire gap between the ejected mass from our model and the value predicted from the GW170817 kilonova observation.", "Table: Different resolution and tracer setups for outflow measurements for the M2.65-0.1-a0.9 case.", "Figure: The outflow mass distribution versus velocity for the M2.65-0.1-a0.9 case with different resolution and tracer setups." ], [ "Other important factors for outflow measurements", "One might argue that the numerical treatments can impose quantitative and qualitative effects on the outflows.", "However, our results are unlikely to suffer from the numerical treatment of the atmosphere.", "In HARM, we apply the velocity control only over the low-density atmosphere with very high velocity.", "The disk outflow is dominated by subrelativistic plasma and is therefore not influenced significantly by the artificial atmosphere adjustments.", "The evolution time is another key factor we should take into account.", "We evolved our models for about $\\sim 200$ ms, while the viscous timescale for a thick postmerger accretion disk with $H/R \\sim 0.3$ is around $0.3$ s. 
An evolution of a few viscous timescales is required to capture all the magnetically and thermally driven effects for launching the outflows.", "On the other hand, the evolution time is perhaps not long enough, so not all the unbound matter has enough time to leave the grid, though a longer evolution is somewhat pointless for our simulation, as it is impossible to maintain the magnetic field in a long two-dimensional simulation due to the anti-dynamo theorem.", "Even from the thermal evolution point of view, the timescale of the evolution can be very important, based on the studies by [26].", "They showed that a 3D GRMHD model ejects mass in two ways: one is the MHD-driven outflow at the earlier time of evolution when the torus is an NDAF (neutrino-dominated accretion flow), and the second is a late-time, thermally driven wind, which occurs when the disk becomes advection-dominated (ADAF).", "They showed that the total amount of unbound mass ejected can reach $0.013 M_{\\odot }$ , which is around 40% of the initial disk.", "Half of this mass is lost over the first second of the evolution and the rest is launched thereafter.", "[40] investigated these types of outflows in two separate models: thermally driven winds ejected from an initially weakly magnetized disk, and magnetically driven winds launched from a strongly magnetized disk.", "Computing the time-averaged mass loss rate over the outer boundary, they estimated the final mass loss to be in the range of $2-16\\%$ of the initial disk.", "In comparison, our models are initially moderately magnetized (with $\\beta = 50$ ), so their outflows are neither purely thermally driven nor purely MHD-driven from the very beginning.", "However, the cumulative outflow mass in Fig.", "REF shows that the outflow mass increases at a constant and slower pace at later times of the evolution.", "Assuming this mass loss rate for later times, by extrapolation we estimate a mass loss of about $\\sim 0.0035 M_{\\odot }$ and $\\sim 0.02 M_{\\odot }$ after about 1 second and 9 seconds of evolution, respectively; the latter would be around $20\\%$ of the initial disk's mass for the M2.65-0.1-a0.9 case." ] ]
2212.05628
[ [ "FactorJoin: A New Cardinality Estimation Framework for Join Queries" ], [ "Abstract Cardinality estimation is one of the most fundamental and challenging problems in query optimization.", "Neither classical nor learning-based methods yield satisfactory performance when estimating the cardinality of the join queries.", "They either rely on simplified assumptions leading to ineffective cardinality estimates or build large models to understand the data distributions, leading to long planning times and a lack of generalizability across queries.", "In this paper, we propose a new framework FactorJoin for estimating join queries.", "FactorJoin combines the idea behind the classical join-histogram method to efficiently handle joins with the learning-based methods to accurately capture attribute correlation.", "Specifically, FactorJoin scans every table in a DB and builds single-table conditional distributions during an offline preparation phase.", "When a join query comes, FactorJoin translates it into a factor graph model over the learned distributions to effectively and efficiently estimate its cardinality.", "Unlike existing learning-based methods, FactorJoin does not need to de-normalize joins upfront or require executed query workloads to train the model.", "Since it only relies on single-table statistics, FactorJoin has small space overhead and is extremely easy to train and maintain.", "In our evaluation, FactorJoin can produce more effective estimates than the previous state-of-the-art learning-based methods, with 40x less estimation latency, 100x smaller model size, and 100x faster training speed at comparable or better accuracy.", "In addition, FactorJoin can estimate 10,000 sub-plan queries within one second to optimize the query plan, which is very close to the traditional cardinality estimators in commercial DBMS." 
], [ "Introduction", "Cardinality estimation (CardEst) is a critical component of modern database query optimizers.", "The goal of CardEst is to estimate the result size of each query operator (i.e., filters and joins), allowing the optimizer to select the most efficient join ordering and physical operator implementations.", "blackAn ideal CardEst method satisfies several properties: it is effective at generating high-quality query plans, efficient so that it minimizes estimation latency, and easy to deploy in that it has a small model size, fast training times, and the ability to scale with the number of tables and generalize to new queries.", "Unfortunately, existing CardEst techniques do not satisfy at least one of these properties.", "The reason behind these failures can largely be reduced to the handling of two related problems: (1) how to characterize attribute correlations and (2) how to model the distribution of join-keys, which directly determine the size of joins.", "Background: Many deployed CardEst approaches are still based on the Selinger model [60] and assume attribute independence, where the model ignores correlations between attributes, and join-key uniformity, blackwhere the model further assumes that join keys have uniformly distributed values.", "These assumptions provide a simple way of estimating the cardinality of join queries using only single-table statistics (e.g., histograms) over single attributes, which are easy to create and maintain.", "blackAs a result, the Selinger-based CardEst techniques are extremely efficient and easy to deploy but at the cost of effectiveness because most real-world datasets contain complex attribute correlations and skewed join-key distributions.", "To overcome the limitations of these assumptions, many commercial database systems try to blackrelax them to increase their effectiveness.", "blackFor example, past works  [26], [29], [7] propose and some systems (e.g.", "Oracle [8]) employ join-histograms that relax the join-key uniformity assumption.", "Specifically, these methods build frequency histograms on the join keys and then construct the join distribution by “multiplying” the histograms to estimate the join size (see Figure REF (b)).", "In this way, these methods can more precisely capture the join-key distributions to generate more effective estimates of the join cardinality.", "However, they still assume that the join keys are uniformly distributed within each bin of the histogram.", "Figure: Existing CardEst approaches for handling join queries.", "(a) The Selinger model writes the cardinality of a two-table join query as the product of filter selectivity P A ,P B P_A, P_B on both tables and the estimated join size |A⋈B||A \\bowtie B|.", "Using join-key uniformity, this is estimated as the product of table sizes |A|*|B||A| * |B| divided by the maximal number of unique values of the join keys.", "(b) The join-histogram bin the domain of join keys Id 1 ,Id 2 Id_1, Id_2 as histograms, assumes join-key uniformity within each bin.", "(c) The learned data-driven methods de-normalize J=A⋈BJ = A \\bowtie B and learn the distribution P J P_J to estimate the cardinality of QQ.", "(d) Our framework captures the correlation between join keys and filter predicates to accurately estimate |Q||Q| without de-normalization.Similarly, there have been works on relaxing the attribute independence assumption.", "For example, multi-dimensional histograms [9], [14], [15], [52], [66], [43], self-tuning histograms [4], [61], [31], [12], and singular 
value decomposition [58], have been proposed to capture the correlation between attributes.", "However, their effectiveness and/or efficiency are still not satisfactory [74], [18].", "More recently, learning-based methods which learn the underlying data distributions [33], [23], [77], [74] have been proposed to more accurately and compactly represent the attribute correlations within a single table, but these still have efficiency concerns that we detail below.", "Specifically, the learned data-driven methods [13], [64], [73], [70], [77], [25] analyze data and build distributions for all join patterns in a database.", "They need to de-normalize the joined tables and add a potentially exponential number of extra columns.", "Then, they build distributions over the de-normalized tables to characterize all attribute correlations and handle joins (see also Figure REF (c).", "blackThis allows the learned data-driven methods to be highly accurate for join estimates at the cost of slow training time and large model size (i.e., worse deployability).", "Alternatively, learned query-driven methods [33], [63], [42] circumvent the join-key uniformity assumption by building supervised models to map the join queries to their cardinalities.", "However, they require an impractical number of executed queries to train their models, which is unavailable to new DB instances and do not generalize well to unseen queries.", "Our Approach: Surprisingly, we are not aware of any work which tries to combine the advances of learning-based methods for accurately characterizing correlations with the idea of join-histograms.", "In this work, we develop a novel CardEst framework called FactorJoin that combines these two approaches, i.e., using join-histograms to efficiently handle joins coupled with learning-based methods to accurately capture attribute correlation.", "blackThis is not a straightforward extension.", "blackAs we will show, adapting the idea of join-histograms while preserving the benefits of using learning-based methods for attribute correlations is non-trivial.", "FactorJoin only builds sophisticated models to capture the correlations within a single table.", "The choice of the model to capture correlation is orthogonal to the techniques of FactorJoin, however, we do require the model to be able to provide conditional distributions of one or more keys.", "In our current implementation of FactorJoin, we use blacksampling [41] and Bayesian Networks [34] as they are fast to train and execute.", "These single-table conditional distributions are then combined using a factor graph model [44].", "Specifically, FactorJoin translates a join query $Q$ into a factor graph over single-table data distributions.", "This allows us to formulate the problem of estimating the cardinality of $Q$ as a well-studied inference problem [34], [44], [37] on this factor graph.", "Conceptually, there exist some similarities to the join-histogram technique.", "However, our formulation allows FactorJoin to generalize to cyclic and self-joins.", "It also improves the computation efficiency by relying on existing inference algorithms [46], [35], and facilitates scaling to arbitrary join sizes.", "Finally, in contrast to the idea of join-histogram and many other techniques in the space, we propose to actually not estimate the expected join cardinality, but rather use a probabilistic upper bound.", "As shown by others [5], [3], [1], [22], under-estimation is often worse than over-estimation, and tailoring the estimate to a probabilistic 
upper bound can significantly improve the end-to-end query performance.", "As a result, FactorJoin is able to estimate the cardinality of join queries using only single table statistics with similar or better estimation effectiveness than the state-of-the-art (SOTA) learning-based techniques.", "When compared with the learned data-driven methods, FactorJoin has a much smaller model size, faster training/updating speed, and easier for system deployment because we do not need to denormalize the join tables or add exponentially many extra columns.", "In addition, FactorJoin supports all forms of equi-joins, including cyclic joins and self joins, as well as complex base table filter predicates, including disjunctive filter clauses and string pattern matching predicates blackbecause it can flexibly plug-in various types of base-table estimators to support them.", "These queries are not supported by the existing learned data-driven methods [73], [25], [77], which require the join template to be a tree.", "Unlikely the learned query-driven methods [33], [63], [42], FactorJoin does not depend on the executed query workload, so it is robust against data updates and workload shifts, can be quickly adapted to a new DB instance, and is generalizable to new queries.", "However, in presence of query workload, FactorJoin can incorporate this information to further optimize the model construction process.", "We integrate FactorJoin into Postgres' query optimizer and evaluate its end-to-end query performance on two well-established and challenging real-world CardEst benchmarks: STATS-CEB [18] and IMDB-JOB [38].", "On the STATS-CEB, FactorJoin achieves near-optimal performance in end-to-end query time.", "It has comparable performance to the previous SOTA learned data-driven model (FLAT [77]) in terms of estimation effectiveness but with 40x lower estimation latency, 100x smaller model sizes, and 100x faster training times.", "On the IMDB-JOB, our framework achieves the SOTA performance in end-to-end query time (query execution plus planning time).", "Specifically, the pure execution time of our framework is comparable to the previous SOTA method (pessimistic estimator [5]), but our estimation latency is 100x lower, making our overall end-to-end query time significantly faster.", "Specifically, our framework can estimate 10,000 sub-plan queries in one second to optimize the query plan, which is close to the planning time of the traditional CardEst method used in Postgres.", "We also carry out a series of controlled ablation studies to demonstrate the robustness and advantages of different technical novelties of FactorJoin.", "In summary, our main contributions are: $\\bullet $ We formulate the problem of estimating the cardinality of join queries in its general form as a factor graph inference problem involving only single-table data distributions (Section ).", "$\\bullet $ We propose a new binning and upper bound-based algorithm to approximate the factor graph inference (Section ).", "$\\bullet $ We design blackattribute causal relation exploration and progressive estimation of sub-plan queries techniques to improve the efficiency of our framework (Section ).", "$\\bullet $ We conduct extensive experiments to show the advantages of our framework (Section ).", "Before describing the details of FactorJoin, we begin with a detailed description of existing CardEst approaches." 
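To make the two traditional estimators sketched in the figure above concrete, here is a small illustrative example (all table names, sizes, and counts are invented for illustration; this is not code from any system) contrasting the Selinger-style estimate with the per-bin join-histogram estimate:

```python
# Hypothetical two-table join A JOIN B on A.id = B.Aid; all numbers are invented.
size_A, size_B = 1_000_000, 5_000_000          # |A|, |B|
sel_A, sel_B = 0.03, 0.10                      # estimated filter selectivities P_A, P_B
ndv_A, ndv_B = 200_000, 180_000                # distinct join-key values per table

# (a) Selinger-style estimate: attribute independence + join-key uniformity.
join_size = size_A * size_B / max(ndv_A, ndv_B)
print(f"Selinger estimate:       {sel_A * sel_B * join_size:,.0f}")

# (b) Join-histogram estimate: the same distinct-value formula applied per bin
#     of the shared key domain, assuming uniformity only within each bin.
#     Each bin summary: (tuple count, number of distinct key values) per table.
bins_A = [(600_000, 120_000), (400_000, 80_000)]
bins_B = [(4_000_000, 150_000), (1_000_000, 30_000)]
join_hist = sum(ta * tb / max(na, nb) for (ta, na), (tb, nb) in zip(bins_A, bins_B))
print(f"Join-histogram estimate: {sel_A * sel_B * join_hist:,.0f}")
```

The second estimate differs from the first only in where the uniformity assumption is applied (per bin rather than over the whole key domain), which is exactly the gap the join-histogram line of work narrows.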
], [ "Background and analysis", "In this section, we first define the CardEst problem and then analyze existing CardEst methods and how our approach differs from them." ], [ "Single table CardEst: Let $A$ be a table with $n$ attributes $A_1, \\ldots , A_n$ .", "A single table selection query $Q(A)$ on $A$ can be viewed a conjunction or disjunction of filter predicates over each attribute, e.g.", "$Q(A) = (A_1 \\in [0, 5]) \\ \\vee $ $(A_2$ LIKE `%An%'$) \\wedge (A_5 \\le -10)$ .", "Let $|Q(A)|$ denote the cardinality of $Q(A)$ , i.e.", "the number of tuples in $A$ satisfying $Q(A)$ .", "Let $P_A(Q(A))$ denotes the selectivity (probability) of tuples satisfying $Q(A)$ .", "Then, we have $|Q(A)| = P_A(Q(A)) * |A|$ , where $|A|$ denotes total number of tuples in $A$ .", "CardEst for multi-table join queries: blackConsider a database instance $\\mathcal {D}$ with m tables $A, B, \\ldots , M$ , a query $Q$ consists of a join graph representing the join conditions among the selected tables and a set of base-table filter predicates.", "blackFor simplicity, we abuse the notation and use $Q(I)$ to denote the filter predicates of $Q$ on table $I$ .", "Let $\\Omega $ denote the denormalized table resulting from the join conditions of $Q$ .", "Then, the cardinality $|Q| = P_{\\Omega }(Q(A), Q(B), \\ldots , Q(M)) * |\\Omega |$ , where $Q(I)$ can be empty set if table $I$ is not touched by $Q$ .", "Estimating $Q$ is particularly challenging because there can exponential number of join patterns in the database, each associated with a unique $\\Omega $ and distinct probability distribution $P_{\\Omega }$ .", "Therefore, to accurately estimate different join queries, the CardEst methods need to capture exponential many complicated data distributions." ], [ "Analysis of existing ", "Since the focus of this paper is multi-table join queries, we defer the discussion of single-table estimators to Section  and focus on analyzing different approaches for handling join queries.", "We can categorize these approaches into four classes: traditional, learned data-driven, learned query-driven, and bound-based.", "We summarized their characteristics in Table REF and provide their details as follows.", "Traditional methods: The sampling-based methods [41], [39], [76], [40] construct a small sample on each table and join these samples to estimate the cardinality of join queries.", "The traditional histogram-based methods generally adopt the attribute independence and join uniformity assumptions to decompose the join queries as a combination of single table estimates.", "Taking a query $Q$ joining two tables $A$ and $B$ as an example, these assumptions will simplify the cardinality $|Q| = P_{\\Omega }(Q(A), Q(B)) * |\\Omega |$ as $P_A(Q(A)) * P_B(Q(B)) * \\overline{|\\Omega |}$ , where $P_A(Q(A))$ is the estimated single table selectivity.", "There exist two widely-used approaches to estimate the size for the denormalized join table $\\overline{|\\Omega |}$ .", "First, the Selinger models  [60], [10], [59] assume that join keys have uniformly distributed values, collect the number of distinct values (NDV) on the join keys from both tables, and estimate $\\overline{|\\Omega |}$ as $|A| * |B| / max\\lbrace NDV(A), NDV(B)\\rbrace $ .", "Second, the join-histogram methods [26], [29], [7] first bin the domain of the join keys as histograms and assume join uniformity assumption for the values in each histogram bin.", "Then they apply the distinct values methods for each bin and sum over all bins.", "Our work follows the 
convention of the join-histogram methods but avoids the simplifying assumptions, which can not be trivially achieved.", "Naive approaches such as building the full multi-dimensional histograms can resolve the attribute independence assumption but would introduce unaffordable storage overhead.", "Many follow-up works [27], [30], [28] explore different histogram strategies to reduce the error caused by the join uniformity assumption.", "They either take an impractical amount of time to construct the histograms or cannot provide high-quality estimates.", "In short, the traditional methods combine the estimates from single tables to estimate the join queries.", "This approach is very efficient, easy to train, update and maintain, thus perfect for system deployment.", "However, their simplifying assumptions can generate erroneous estimates and poor query plans.", "Learned data-driven methods: These methods circumvent the simplifying assumptions used in traditional methods by understanding the data distributions for the exponential number of join templates in a DB instance.", "Specifically, some methods [13], [64], [65] denormalize the join of all tables in a database and use Bayesian networks to model the distribution.", "The current SOTA approaches for handling join queries try to model the data distributions of the denormalized join tables by using fanout-based techniques [23], [77], [70], [73], [67].", "These methods denormalize some tables and add a possibly exponential number of fanout columns, which are used to formulate the distributions for each join template.", "Using this method, the learned data-driven methods can produce effective estimations for join queries and high-quality query plans [18].", "However, these models generally have a long training time, large model size, slow estimation speed, and unscalable performance with the number of joins in the query.", "Furthermore, the learned data-driven approaches currently can not handle self-joins or cyclic-joins and are very ineffective in processing complicated filter predicates such as string pattern matching queries.", "Learned query-driven methods: These methods analyze the executed query workload, map the featurized join query directly to its cardinality using supervised models such as neural networks [33], [54], xgboost [11], tree-LSTM [63], and deep ensembles [42].", "The query-driven methods generally have low estimation latency but the estimation effectiveness is highly dependent on the training workloadblack; we put a semi-check mark ([scale=0.3,fill=black](0,.3) – (.2,0) – (1,.7) – (.25,.15) – cycle (0.75,0.2) – (0.77,0.2) – (0.6,0.7) – cycle;) for these methods in Table REF .", "To achieve reasonable estimation quality, they generally need an excessive amount of executed training queries [18], which are unavailable for new DB instances.", "Furthermore, the query-driven models need to retrain in the case of data update or query workload shift.", "An impractical amount of new executed queries are again needed for this re-training process.", "Bound-based methods: Traditional CardEst methods in modern DBMS tend to severely under-estimate the cardinality, thus sometimes choosing significantly more expensive query plans [55], [3], [5].", "To avoid under-estimation, the bound-based methods [5], [3], [1], [22] use information theory to provide an upper bound on the cardinality.", "These bounds are very effective to help generate high-quality query plans as they can avoid expensive join orders and physical operators [55], [18] 
However, these methods do not build single-table estimators to understand the data distributions.", "Instead, in the presence of base-table filters, they need to materialize the filtered tables and populate the bound at query run-time, which can produce very high latency and overhead.", "Therefore, despite the bound-based methods providing meaningful insights into understanding joins, they cannot be practically deployed in a DBMS.", "Summary: None of the existing approaches can simultaneously satisfy the desired properties of CardEst, namely being effective, efficient, and appropriate for system deployment.", "However, each of them contains some advantageous techniques, inspiring us to build FactorJoin, which bridges the gaps among these CardEst categories and unifies their merits." ], [ "New framework for join queries", "Given this background on previous CardEst approaches, we now introduce the high-level operation of FactorJoin.", "In Section REF , we elaborate on FactorJoin's core technique: accurately estimating join queries with only single-table statistics using factor graph inference techniques.", "Then, in Section REF , we analyze the complexity of this inference procedure and motivate several optimizations in FactorJoin.", "Finally, we provide a workflow overview in Section REF ." ], [ "Problem formulation", "We first illustrate the main idea with a simple example of a two-table join query.", "Then, we formalize the join query estimation problem as a PGM inference problem over single-table distributions.", "Two-table join query example: Figure REF illustrates a query $Q$ joining tables $A$ and $B$ on the inner-join condition $A.id = B.Aid$ with base-table filter predicates $Q(A)$ and $Q(B)$ .", "(In this paper, we only discuss inner equality joins; FactorJoin naturally supports left, right, and outer joins, and we leave support for non-equality joins as future work.)", "Table $A$ first goes through the filter $Q(A)$ , resulting in an intermediate table $A|Q(A)$ (records in $A$ that satisfy the filter $Q(A)$ ).", "The same procedure is applied to table $B$ .", "Then, the query $Q$ matches the values of the join keys $A.id$ and $B.Aid$ from these two intermediate tables.", "Specifically, the value $a$ appears 8 times in table $A|Q(A)$ and 6 times in table $B|Q(B)$ , resulting in value $a$ appearing 48 times in the join result.", "Therefore, we can calculate the cardinality of this query $Q$ as $8\times 6 + 4\times 5 + 3 \times 5 = 83$ .", "We can formulate the above procedure for calculating $Q$ as a statistical equation in Equation REF , where $\mathcal {D}(A.id)$ denotes the domain of all unique values of $A.id$ .", "We observe that only single-table distributions $P_A$ and $P_B$ are required to accurately compute the cardinality of this join query.", "Also in this equation, $P_{A}(A.id = v | Q(A)) * |Q(A)|$ equals $P_{A}(A.id = v \wedge Q(A)) * |A|$ , which is exactly what single-table CardEst methods estimate.", "Thus, we can accurately calculate the cardinalities of two-table join queries using single-table estimators.", "Note that the summation over the domain of the join key $\mathcal {D}(A.id)$ has the same complexity as computing the join.", "Therefore, we need to approximate this calculation for FactorJoin to be practical.", "The details of our approximation schemes are given in Section REF and Section .", "$|Q| = \sum _{v \in \mathcal {D}(A.id)} P_{A}(A.id = v \mid Q(A)) * |Q(A)| * P_{B}(B.Aid = v \mid Q(B)) * |Q(B)|$ Figure: 
Formulating a join query CardEst into a factor graph inference problem.", "Figure (i) shows a SQL query Q 2 Q_2.", "Figure (ii) visualizes the join template of Q 2 Q_2.", "Figure (iii) presents the factor graph model to compute Q 2 Q_2's cardinality.Formal problem formulation: A general join query can involve a combination of different forms of joins (e.g., chain, star, self, or cyclic joins) so its counterpart equation (as Equation REF ) can be very difficult to derive and compute; blackwe have put these equations/derivations in the supplementary material.", "We provide a generalizable formulation that automatically decomposes join queries into single-table estimations using the factor graph model.", "Figure REF -(i) shows a SQL query $Q_2$ joining four tables $A, B, C, D$ .", "We visualize its join template as a graph in Figure REF -(ii), where each dashed rectangle represents a table, each ellipse (node) represents a join-key in $Q_2$ , and each solid line (edge) represents an equi-join relation between two join keys connected by it.", "Note that both sides of an equi-join relation represent the semantically equivalent join keys.", "We call them equivalent key group variables.", "In Figure REF -(ii), there are three connected components and thus three equivalent key group variables $V_1, \\ldots , V_3$ .", "$V_1$ represents $A.id$ and $B.Aid$ in this group since $Q_2$ contains the join condition $A.id = B.Aid$ .", "The exact computation required to compute the cardinality of the SQL query in Figure REF extends Equation REF to sum over the domain $\\mathcal {D}$ of each join key.", "black $|Q_2| = & \\sum _{a1 \\in \\mathcal {D}(A.id)} \\ \\ \\ \\ \\ \\sum _{a2 \\in \\mathcal {D}(A.id2)} \\ \\ \\ \\ \\ \\sum _{c \\in \\mathcal {D}(C.id)} \\nonumber \\\\& P_{A}(A.id = a1, \\ A.id2 = a2 \\ | \\ Q_2(A)) \\ast |Q_2(A)| \\nonumber \\\\& \\ast P_{B}(B.Aid = a1, \\ B.Cid = c \\ | \\ Q_2(B)) \\ast |Q_2(B)| \\nonumber \\\\& \\ast P_{C}(C.Aid2 = a2, \\ C.id = c \\ | \\ Q_2(C)) \\ast |Q_2(C)| \\nonumber \\\\& \\ast P_{D}(D.Cid = c \\ | \\ Q_2(D)) \\ast |Q_2(D)|$ This computation is extremely expensive — assuming the domains of each id being $n$ , a naive implementation would have the complexity $O(n^3)$ .", "We can improve this complexity to $O(n^2)$ by representing this computation as an equivalent factor graph model [44], which is a particular class of probabilistic graphical models (PGMs) [34].", "We have 2 in the exponent because the maximum number of join keys in $Q_2$ is two in the table $A$ .", "This is still very expensive, but in Section REF , we describe how we can use principled approximations made possible by the factor graph formulation to make the inference complexity practical.", "A factor graph is a bipartite graph with two types of nodes: variable nodes, and factor nodes representing an unnormalized probability distribution w.r.t.", "the variables connected to it.", "Figure REF -(iii) shows the constructed factor graph $\\mathcal {F}$ for computing the cardinality of $Q_2$ .", "Specifically, $\\mathcal {F}$ contains a variable node for each equivalent key group variable $V_i$ , and a factor node for each table touched by $Q_2$ .", "A factor node is connected to a variable node if the variable represents a key in the table.", "In this case, the factor node representing table $A$ is connected to $V_1$ (equivalent to $A.id$ ) and $V_2$ (equivalent to $A.id2$ ).", "Each factor node maintains an unnormalized probability distribution for the variable nodes connected to it, for e.g., node 
$A$ maintains $P_A(V_1, V_2|(Q_2(A)))*|Q_2(A)|$ , which is the same as the distribution $P_A(A.id, A.id2|(Q_2(A)))*|Q_2(A)|$ used in Equation REF .", "The factor graph model utilizes the graph structure to compute the sum using the well-studied inference algorithms.", "We can more formally state this relationship between computing the join cardinalities and factor graphs as the following lemma, whose proof is provided in the supplementary material.", "Lemma 1 Given a join graph $\\mathcal {G}$ representing a query $Q$ , there exists a factor graph $\\mathcal {F}$ such that the variable nodes in $\\mathcal {F}$ are the equivalent key group variables of $\\mathcal {G}$ and each factor node represents a table $I$ touched by $Q$ .", "A factor node is connected to a variable node if and only if this variable represents a join-key in table $I$ .", "The potential function of a factor node is defined as table $I$ 's probability distribution of the connected variables (join keys) conditioned on the filter predicates $Q(I)$ .", "Then, calculating the cardinality of $Q$ is equivalent to computing the partition function of $\\mathcal {F}$ ." ], [ "Inference complexity analysis", "Inference on factor graphs is a well-studied problem in the domain of PGMs [46], [34], [44].", "Popular approaches to solving this problem are variable elimination (VE) [34] and belief propagation algorithms [35].", "In FactorJoin's implementation, we use the VE, which first determines an optimal order of variables and then sums over the distributions defined by the factor graph model in this order.", "The complexity of VE is $O(N*|D|^{max(|JK|)})$ , where $N$ is the number of equivalent key groups (3 in Figure REF ), $|D|$ is the largest domain size of all join keys, and $max(|JK|)$ is the maximum number of join keys in a single table (2 in Figure REF ).", "Intuitively, this complexity is because the factor nodes need to understand the joint distribution of all join keys in one table.", "Thus, the largest factor node in the factor graph model maintains a probability distribution of size $|D|^{max(|JK|)}$ .", "This complexity is not practical for real-world queries, as $|D|$ can be millions, and $max(|JK|)$ can be larger than 4 in real-world DB instances, such as IMDB [38].", "Therefore, instead of calculating the exact cardinality, we propose a new PGM inference algorithm for FactorJoin that can estimate an upper bound on cardinality.", "Past works [5], [3], [1] have shown that cardinality upper bounds can help avoid very expensive query plans since under-estimating a large result can sometimes be catastrophic.", "For instance, an optimizer might choose to do a nested loop join if a CardEst method under-estimates that the result would easily fit in memory.", "This would lead to a disastrous plan if the actual result size is much larger than estimated.", "blackOur approximate inference algorithm performs two approximations: 1) reducing the domain size $|D|$ using binning and 2) decreasing the exponent, $max(|JK|)$ , by approximating the distribution of attributes in a single table.", "blackSpecifically, for binning, we first partition the domain of all join keys into $k$ bins.", "Instead of summing over the entire domain, we only need to sum over $k$ summarized values (probabilistic bounds) of each bin.", "The details are provided in Section .", "blackSecond, we approximate the causal relation among all attributes within a table as a tree structure.", "This allows us to factorize the $max(|JK|)$ -dimensional joint 
distribution of all join keys in a table as a product of two-dimensional conditional distributions.", "This procedure is known as structure learning in Bayesian networks in the PGM domain [6].", "Past work on single table cardinality estimation has shown that such tree approximations do not decrease the modeling accuracy significantly [70].", "Further details are provided in Section REF .", "Thereafter, we can directly run the VE inference algorithm on the binned domains and factorized two-dimensional distributions to estimate the cardinality upper bound of $Q$ .", "With these two approximate inference techniques, the complexity of estimating the cardinality bound is reduced to $O(N*k^2)$ , which is very efficient as both $N$ and $k$ are very small in practice." ], [ "Workflow overview", "The workflow of FactorJoin contains two phases: offline training and online inference (Figure REF ).", "During the offline phase, our framework analyzes the DB instance, bins the domain of join keys, explores the causal relation, and builds CardEst models for every single table.", "When a join query $Q$ comes in during the online phase, FactorJoin formulates the estimation of this query as a factor graph involving only single-table distributions, estimates them using (pre-trained) single-table estimators, runs PGM inference, and generates a probabilistic bound for $Q$ .", "We provide the details as follows.", "Offline training phase: Given a new DB instance, FactorJoin first analyzes its DB schema and data tables to get all possible join relations between different join-keys.", "We consider two join-keys to be semantically equivalent if there exists a join relation between them.", "After identifying all groups of equivalent join-keys, we bin the domains of all join-keys in each group.", "FactorJoin can also optionally use query workload information to optimize this binning procedure (details provided in Section REF ).", "Based on the data tables and the binned domains, we explore the blackcausal relation of these attributes and model them as a tree structure (details are provided in Section REF ).", "Then, we build the CardEst models to understand the data distribution of every single table.", "In principle, any single-table CardEst method that is able to provide conditional distributions can be adapted into FactorJoin.", "In practice, we implement two methods: traditional random sampling and the learned data-driven BayesCard method [70].", "The sampling method is extremely flexible to use and can easily support any complex filter predicates with disjunctions, string pattern matching, or any user-defined functions.", "Alternatively, similar to most data-driven methods, BayesCard can only work with queries filtering numeric and categorical attributes, but BayesCard has been shown to consistently produce accurate, fast, and robust estimations for various data distributions [70].", "Users of FactorJoin can switch between different single-table estimators based on their objectives and knowledge about the DB instances.", "Figure: Workflow of FactorJoin framework.Online inference phase: When a join query $Q$ comes in during run-time, FactorJoin will first parse its join graph and construct the corresponding factor graph $\\mathcal {F}$ (as in Figure REF ).", "Then, it uses the trained single-table estimators to estimate the join-key distributions conditioned on the filter predicates of $Q$ ; FactorJoin will use these distributions in the corresponding factor nodes of $\\mathcal {F}$ .", "Finally, we run the 
approximate version of VE inference algorithms on $\\mathcal {F}$ to derive the estimated/probabilistic cardinality bound.", "We note that the CardEst methods working inside a query optimizer will estimate all sub-plan queries of a target query to decide the best query plan.", "Since these sub-plan queries contain a large amount of overlapping information, we provide a progressive estimation algorithm to avoid computation redundancy in Section REF ." ], [ "Probabilistic bound algorithm", "In this section, we design a new probabilistic bound algorithm based on binning to calculate the factor graph inference problem.", "Specifically, we first explain the main ideas with a two-table join example and formulate the algorithm details within FactorJoin in Section REF .", "Then, we discuss how to optimize the bin selection using data and query information in Section REF ." ], [ "Algorithm details", "As described in Section , the main objective of our probabilistic bound-based algorithm is to reduce the domain size of the join keys, i.e., reducing the complexity of summation in Equation REF .", "Thus, instead of running the inference algorithm over the entire domain, FactorJoin only needs to sum over the binned domain.", "Two-table join query example: We use the previous example query $Q$ from Section REF to illustrate the core idea of our probabilistic upper bound algorithm based on binning.", "The objective is to estimate an upper bound on $Q$ , whose exact computation requires summing over the entire domain of $A.id$ according to Equation REF .", "Assume that we have a set of $k$ bins $\\lbrace bin_1, \\ldots , bin_k \\rbrace $ partitioning the domain of $A.id$ and we apply this same set of bins to $B.Aid$ .", "blackWe need to make sure that a given value in the domain of $A.id$ and $B.Aid$ will always belong to the bin with the same indexes.", "Then, Equation REF can be equivalently written as Equation 3.", "We can then replace the sum over all values in a bin $bin_i$ with a probabilistic bound on $bin_i$ to estimate an upper bound for $Q$ as Equation REF .", "$|Q| &= \\sum _{i = 1}^{k} \\sum _{v \\in bin_i} && P_{A}(A.Id = v | Q(A)) * |Q(A)| * \\nonumber \\\\& && P_{B}(B.Aid = v | Q(B)) * |Q(B)| \\\\&\\lesssim \\sum _{i = 1}^{k} && Probabilistic\\_bound(A, B, bin_i)$ Motivated by a bound based on the most frequent value (MFV) [22], we use a simple probabilistic bound method to derive $Probabilistic\\_bound(A, B, bin_i)$ for a particular bin.", "Specifically, assuming that value $\\lbrace a, b, c , e, f\\rbrace $ of $A.id$ and $B.Aid$ are binned into $bin_1$ as shown in Figure REF .", "We know the summation of all values in $bin_1$ equals to black$8\\times 6 + 4 \\times 5 + 3 \\times 5 = 83$ .", "This summation has a dominating term of $8 \\times 6$ , because the count of MFV of $bin_1$ is 8 for $A.id$ (denoted as $V^{*}_1(A.id)$ ) and 6 for $B.Aid$ (denoted as $V^{*}_1(B.Aid)$ ) so each value can appear at most $8 \\times 6$ times in the denormalized table after the join.", "Since we know the total count of values in $bin_1$ for $A.id$ is 16, there can be at most $16/8 = 2$ MFVs.", "Similarly, there can be at most black4 MFVs in $bin_1$ for $B.Aid$ .", "Therefore, we have the summation of all values in $bin_1$ is upper bounded by black$min(2, 4) \\times 8 \\times 6 = 96$ .", "We formally represent the aforementioned procedure in Equation REF , where $V^{*}_i(A.id)$ and $P_A(A.id \\in bin_i|Q(A)) * |Q(A)|$ are the MFV count and estimated total count of $bin_i$ for $A.id$ .", "Our bound is 
probabilistic because $P_A(A.id \in bin_i|Q(A)) * |Q(A)|$ is estimated with a single-table CardEst method, which may have some errors.", "$|Q| \lesssim \sum _{i = 1}^{k} \min \left(\frac{P_A(A.id \in bin_i \mid Q(A)) * |Q(A)|}{V^{*}_i(A.id)}, \frac{P_B(B.Aid \in bin_i \mid Q(B)) * |Q(B)|}{V^{*}_i(B.Aid)}\right) * V^{*}_i(A.id) * V^{*}_i(B.Aid)$ Figure: The bound-based algorithm; each bin is summarized by its most-frequent-value count and total count.", "Working with PGM inference: Next, we explain how to generalize this probabilistic bound so that it works inside the factor graph VE inference algorithm mentioned in Section REF .", "During offline training, FactorJoin computes the MFV counts $V^*$ for all bins of each join key.", "During the online phase, FactorJoin estimates the probability distribution over the binned domain of all join keys in each table $I$ .", "Then, FactorJoin puts $I$ 's distribution $P_{I}$ and MFV counts $V^*(I)$ into the corresponding factor node.", "Recall that the VE algorithm generates an optimal elimination order of all variables and sums over the domains of these variables sequentially in this order.", "Since at each step the variable elimination algorithm sums out only one variable, its probabilistic bound can be derived using Equation REF over multiple equivalent join keys.", "Thus, the algorithm does not need to sum over the entire value domain of a variable but only over the probabilistic bounds in the binned domain, which greatly reduces the complexity.", "Analysis: This probabilistic bound algorithm significantly reduces the domain size of the join keys and enables practical approximate PGM inference.", "Further, this algorithm upper-bounds the cardinality most of the time, and the bound is very tight even with a small number of bins, such as $k=100$ .", "Therefore, as we empirically verify in our evaluation, the estimated cardinality bound is very effective."
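A minimal sketch of the per-bin bound in the equation above (illustrative only; in FactorJoin the per-bin totals would come from the single-table estimators conditioned on the filters, and the MFV counts are precomputed offline — the numbers below roughly mirror the running two-table example):

```python
def bin_bound(total_A, mfv_A, total_B, mfv_B):
    """Upper bound on the join size contributed by one bin.

    total_*: estimated number of filtered tuples whose key falls in the bin,
             i.e. P(key in bin | Q) * |Q|.
    mfv_*:   count of the most frequent value (MFV) in the bin, collected offline.
    """
    # At most total/mfv distinct values can reach the MFV count, and each
    # matching value can appear at most mfv_A * mfv_B times after the join.
    return min(total_A / mfv_A, total_B / mfv_B) * mfv_A * mfv_B

# Illustrative per-bin summaries for A.id and B.Aid (k = 3 bins here):
# each entry is (total_A, mfv_A, total_B, mfv_B).
bins = [
    (16, 8, 24, 6),   # first bin bounds its contribution by min(2, 4) * 8 * 6 = 96
    (40, 5, 30, 3),
    (12, 2, 18, 4),
]
upper_bound = sum(bin_bound(*b) for b in bins)
print(f"probabilistic upper bound on |Q|: {upper_bound:.0f}")
```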
], [ "Bin selection optimizations", "In our probabilistic bound algorithm, different bins can result in drastically different bound tightness.", "blackThere are two decisions: how many bins to use for each join key group, and what values to put in the same bin.", "Deciding $k$ based on query workloads: blackThe number of bins, $k$ , has a significant effect on the performance of FactorJoin: fewer bins aggregate more distinct values in the join key domain to each bin and are thus less accurate but more efficient.", "An approach to tune $k$ is to set different values of $k_i$ for different equivalent key groups $Gr_i$ .", "Suppose that users provide a budget $K$ with $K = \\sum k_i$ .", "If FactorJoin has access to the query workload, it can analyze the join patterns and count the number of times $n_i$ each $Gr_i$ appears in the workload.", "Our framework can adaptively set a larger $k_i$ for the frequently visited $Gr_i$ and vice versa.", "In our implementation, we use a simple heuristic — set $k_i = K * n_i/\\sum _j n_j$ .", "In this way, we can optimize the modeling capacity of FactorJoin to make sure that it is spent on the important joins.", "Binning strategy: For the probabilistic bound algorithm described in Section , we observe that the upper bound on a particular bin $bin_i$ can be very loose if the MFV count $V^*_i$ is a large outlier in $bin_i$ .", "Taking the two table join query as an example, if $bin_i$ contains only one value that appears 100 times in $A.id$ but $10,000$ values that only appear once in $B.Aid$ , then the bound could be 100 times larger than the actual cardinality.", "blackCommon binning strategies used in DBMS histograms such as equal-depth or equal-width bins can be catastrophic in such cases.", "Instead, we want the variance of the join key counts in each bin to be low.", "This strategy has also been proposed in different contexts [28], [30], but the key challenge is that the same bins will be built for all join keys within an equivalent key group, i.e.", "join keys that share the same semantics.", "For example, for a primary key (e.g.", "“title.id” in IMDB JOB [38]), any binning strategy will have zero variance because of uniqueness.", "However, the same bins will be applied for its equivalent foreign keys (e.g.", "“movie_id”) on other tables, which may lead to a large variance depending on how often each value repeats in the other tables.", "blackWe design a greedy binning selection algorithm (GBSA) to optimize for bins with low variance value counts across all tables, as illustrated in Algorithm REF .", "Jointly minimizing the variance of one bin for all join keys has exponential complexity.", "Therefore, GBSA uses a greedy algorithm to iteratively minimize the bin variance for all join keys.", "blackAt a high level, GBSA first optimizes the minimal variance bins with half the binning budget of $k/2$ on the domain of one join key (lines 2-4).", "blackThen, it recursively updates these bins by minimizing the variance of other join keys using the other half of the budget (lines 5-13).", "blackWe put the details of GBSA in the supplementary material.", "blackDiscussion: blackAs shown in Section REF , GBSA has significantly better performance than naive binning strategies.", "In the extreme case, if the value counts have zero variance for all equivalent join keys, then our bound can output the exact cardinality.", "Admittedly, this binning strategy does have the drawback that after applying the filter predicates, the join key frequencies will change and their 
variance may become higher within a bin.", "However, we empirically find that, for real-world datasets, this phenomenon does not have a severe impact on FactorJoin.", "In future work, we will explore an enhanced binning strategy that can efficiently address this issue.", "Greedy Bin Selection Algorithm (GBSA).", "Input: equivalent key groups $Gr_1, \ldots , Gr_m$ , where $Gr_i = \lbrace Id_i^1, \ldots , Id_i^{|Gr_i|} \rbrace $ ; column data $\mathcal {D}(Id_i^j)$ of all join keys in the DB instance $\mathcal {D}$ ; number of bins $k_i$ for each group $Gr_i$ .", "For each group $Gr_i$ : (1) initialize $Bin(Gr_i) \leftarrow []$ and sort the keys in $Gr_i$ by domain size to obtain $Gr_i^{\prime }$ ; (2) set $Bin(Gr_i) \leftarrow get\_min\_variance\_bins(\mathcal {D}(Gr_i^{\prime }[1]), k_i/2)$ and remain_bins $\leftarrow k_i/2$ ; (3) for each remaining key $j \in \lbrace 2, \ldots , |Gr_i^{\prime }| \rbrace $ , compute binned_data $\leftarrow apply\_bin\_to\_data(\mathcal {D}(Gr_i^{\prime }[j]), Bin(Gr_i))$ , compute the per-bin variance of binned_data, sort the bins by decreasing variance, refine the top remain_bins/2 bins with $min\_variance\_dichotomy$ , and set remain_bins $\leftarrow $ remain_bins/2.", "Return $\lbrace Bin(Gr_i) \mid i = 1, \ldots , m\rbrace $ ." ], [ "Incremental model updates", "FactorJoin is friendly to DBs with frequent data changes because it only needs to incrementally update single-table statistics.", "In the following, we discuss the algorithm details for data insertion; data deletion can be handled similarly.", "Suppose we have a stale FactorJoin model $\mathcal {F}$ trained on data $\mathcal {D}$ and would like to incrementally update $\mathcal {F}$ with the inserted data $\mathcal {D}^{\prime }$ .", "FactorJoin needs to identify the inserted tuple values from $\mathcal {D}^{\prime }$ for every join key and put them into the original bins optimized on $\mathcal {D}$ .", "Then, it updates the total and most-frequent-value count in each bin.", "Finally, FactorJoin can incrementally update the base-table CardEst models with off-the-shelf tools (e.g.", "BayesCard provides an efficient and effective approach to incrementally update the single-table models using $\mathcal {D}^{\prime }$  [70]; materializing a new sample on $\mathcal {D} \cup \mathcal {D}^{\prime }$ incrementally updates the sampling-based estimator).", "Therefore, the incremental update of FactorJoin does not need to de-normalize the joined tables nor use executed queries.", "It is also empirically verified to be very efficient and effective.", "However, during incremental updates, the FactorJoin model keeps the same set of bins, which were optimized on the previous data and thus might not be optimal after data insertion.", "This might cause a slight degradation in performance, and the user can choose to retrain the model in the case of massive data updates." ], [ "Improving modeling and estimation efficiency", "We describe two techniques to improve the modeling and estimation efficiency of FactorJoin.", "First, we model the joint distribution of attributes on a single table as a tree structure and use it to simplify the PGM inference calculations.", "Next, we define a systematic way of reusing estimates of intermediate sub-plan queries to estimate a larger query.", "This is only possible because we decompose cardinality estimates into single-table sub-components."
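Returning to the incremental-update procedure described above, the per-bin statistics can be refreshed without touching any join. A minimal sketch follows (assuming, purely for illustration, that each bin keeps a per-value counter from which the total and MFV counts are derived; the function and variable names are ours, not FactorJoin's):

```python
import numpy as np
from collections import Counter

def update_bin_stats(bin_edges, value_counters, inserted_keys):
    """Route inserted join-key values into the existing bins and refresh
    each bin's total count and most-frequent-value (MFV) count."""
    # np.digitize maps each inserted key to the index of the bin it falls into.
    bin_ids = np.digitize(inserted_keys, bin_edges)
    for key, b in zip(inserted_keys, bin_ids):
        value_counters[b][key] += 1

    totals = {b: sum(c.values()) for b, c in value_counters.items()}
    mfvs = {b: max(c.values()) for b, c in value_counters.items() if c}
    return totals, mfvs

# Illustrative usage: 3 bins over the key domain [0, 300).
bin_edges = np.array([100, 200])                  # boundaries between bins 0/1/2
value_counters = {b: Counter() for b in range(3)}
totals, mfvs = update_bin_stats(bin_edges, value_counters,
                                inserted_keys=np.array([5, 5, 150, 260, 260, 260]))
print(totals, mfvs)   # e.g. {0: 2, 1: 1, 2: 3} {0: 2, 1: 1, 2: 3}
```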
], [ "blackExploring attributes causal relation", "Recall from blackSection REF , the factor graph computation to calculate the cardinality bound is $O(N*k^{max(|JK|)})$ , where $N$ is the number of tables, $k$ is the number of bins, and $max(|JK|)$ is the maximum number of join keys in a table.", "The exponent term, $max(|JK|)$ , represents the factor nodes requiring the joint distribution between all join keys on a single table.", "As $max(|JK|)$ can sometimes be large (e.g., in IMDB it can be up to 4), we need to reduce the dimensionality of the distributions defined in factor nodes to ensure an efficient CardEst procedure.", "Fortunately, real-world data has a lot of correlations that can be used to simplify this joint distribution.", "blackConsider a table $A$ with 6 attributes, $\\lbrace id_1, id_2, id_3, id_4 attr_1, attr_2\\rbrace $ , four of which are join keys.", "The factor graph computation blackmay need to estimate quantities such as: $P_A(id_1, id_2, id_3, id_4 | Q(A))$ blackThis requires storing a 4 dimensional probability distribution, and even with binning the domains of all ids into $k$ bins, it still requires $k^4$ space and inference time complexity.", "This joint distribution can be represented as a graph where each attribute is a node, and there are edges between every pair of attributes.", "We will use the dependencies and relationships between the attributes and model them as a Bayesian network (BN)  [34].", "Specifically, we will assign each edge in the joint distribution graph a weight proportional to the mutual information between the two attributes connected by the edge.", "Then, we will remove edges with the least mutual information until only a tree structure graph remains to represent the joint probability over these attributes.", "blackThis process is known as the Chow-Liu algorithm for decomposing joint distributions [6].", "The factorized tree representation of the joint distribution can let us approximate Equation REF with an equation of the form: $P_A(id_1 | Q(A)) * P_A(id_2 | Q(A)) * P_A(id_3 | id_1) * P_A(id_4 | id_3)$ , which is much more efficient to compute because it only involves a product over at most a two-dimensional distribution.", "This dimension reduction can cause some errors.", "But it has been shown empirically that the factorized distributions are accurate approximations of the original distribution for most real-world datasets [70].", "Using such an optimization, instead of storing the $max(|JK|)$ -dimensional distributions in the factor nodes, the factor graph only needs to store a series of one or two-dimensional data distributions without a significant decrease in estimation accuracy.", "Therefore, the overall complexity for running approximate inference on this factor graph becomes $O(N*k^2)$ in both storage and inference speed.", "Since $k$ is relatively small (around one hundred), estimating one query using FactorJoin is very efficient.", "blackIn principle, FactorJoin can also use other methods for the single table distribution inference.", "This may involve traditional methods, such as sampling, or more powerful graph-structured factorizations of the distributions — which can increase modeling accuracy at the cost of slower estimation speed.", "Thus, users of FactorJoin can make trade-offs between estimation accuracy and efficiency.", "We leave exploring these trade-offs as future work." 
], [ "Progressive estimates of sub-plan queries", "In order to optimize the plan for one query, the CardEst method needs to estimate the cardinalities of hundreds or thousands of sub-plan queries.", "Estimating all these queries independently is very inefficient because they contain a lot of redundant computation.", "Similar to the traditional CardEst methods deployed in real DBMSes, FactorJoin supports estimating all sub-plan queries of one query progressively, reusing the computation as much as possible.", "At a high level, the progressive estimation algorithm will estimate all sub-plan queries in a bottom-up fashion.", "FactorJoin first estimates all two-table join sub-plan queries, then it estimates the three-table join sub-plan queries, which will contain the two-table joins as their sub-structures, and so on.", "Joining Factor Graphs: The key insight is that when we estimate the cardinality of a two-table join, we can also combine their factor graphs into a new factor graph for the joined table.", "Specifically, as described in Section , each base table is represented as a factor node in the factor graph.", "For instance, the factor node representing $A$ will store the unnormalized probability distribution and the MFV counts for the binned domain of join keys in $A$ .", "When estimating the cardinality of joining $A$ and $C$ , our approximate inference algorithm in Equation REF computes the bound over all bins and sums them.", "These bounds essentially define an unnormalized probability distribution over all join keys on the denormalized table $A \\bowtie C$ .", "We can derive a bound on the new MFV counts for a join-key $A \\bowtie C$ as the multiplication of the corresponding MFV counts in $A$ and $C$ .", "Therefore, we can cache this new probability distribution and MFV counts in a new factor node representing $A \\bowtie C$ .", "Thus, when estimating the cardinality of $A \\bowtie B \\bowtie C$ we can consider it as a join of two single tables $(A \\bowtie C$ ) and $B$ .", "Since all sub-plans can essentially be considered a join of two sub-plans that we have already estimated, our inference algorithm only needs to compute the join of blacktwo-factor nodes for each sub-plan query.", "Therefore, the progressive estimation algorithm has no redundant computation and maximizes efficiency.", "In experiments, with this algorithm, FactorJoin can estimate the cardinality of $10,000$ sub-plan queries in one second, which is more than ten times faster than estimating all these queries independently.", "Note that the progressive estimation algorithm is only possible to implement for FactorJoin and the traditional methods because it requires decomposing join queries into single-table estimates.", "None of the existing learned methods can apply this method since their models are built to model the distributions of specific join patterns." 
], [ "Experimental Evaluation", "In this section, we empirically demonstrate the advantages of our framework over existing CardEst methods.", "We first introduce the datasets, baselines, and experimental environment in Section REF .", "Then, we explore the following questions: $\\bullet $ Overall performance (Section REF ): how much improvement can FactorJoin achieve in terms of end-to-end query time?", "What are the model size and training time compared to SOTA methods?", "$\\bullet $ Detailed analysis (Section REF ): Why does FactorJoin attain good overall performance?", "blackWhen does it perform better or worse than existing baselines?", "How will it perform in terms of data updates?", "$\\bullet $ Ablation study (Section REF ): How much does each optimization technique of FactorJoin contribute to the overall performance?" ], [ "Experimental Setup", "Datasets: We evaluate the performance of our CardEst framework on two well-established benchmarks: STATS-CEB [18] and IMDB-JOB [38], whose statistics are summarized in Table REF .", "The STATS-CEB benchmark uses the real-world dataset STATS, which is an anonymized dump of user-contributed content on Stack Exchange (Stack Overflow and related websites).", "This dataset consists of 8 tables with 34 columns and more than one million rows.", "It contains a large number of attribute correlations and skewed data distributions.", "The STATS-CEB query workload contains 146 queries involving 70 different join templates with real-world semantics.", "The true cardinalities of these queries range from 200 to $2 \\cdot 10^{10}$ , with some queries taking more than one hour to execute on Postgres.", "The STATS-CEB benchmark only has star or chain join templates and numerical or categorical filtered attributes — therefore it is possible to evaluate all existing CardEst baselines on it.", "The IMDB-JOB benchmark uses the real-world Internet movie database (IMDB).", "This dataset contains 21 tables with more than fifty million rows.", "The IMDB-JOB query workload contains 113 queries with 33 different join templates.", "Some queries have more than $10,000$ sub-plan queries, posing a significant challenge to the efficiency of CardEst methods.", "Because IMDB-JOB contains cyclic joins and string pattern matching filters, the JoinHist and existing learned data-driven methods do not support this benchmark.", "The queries in these two benchmarks cover a wide range of joins, filter predicates, cardinalities, and runtimes.", "To the best of our knowledge, they represent the most comprehensive and challenging real-world benchmarks for CardEst evaluation.", "Baselines: We compare our framework with the most competitive baselines in each CardEst method category.", "1) PostgreSQL [10] refers to the histogram-based CardEst method used in PostgreSQL.", "2) JoinHist [7] is the classical join-histogram method whose details are explained in Section REF .", "3) WJSample [40] uses a random walk based-method called wander join to generate samples from the join of multiple tables.", "Its superior performance over traditional sampling-based methods has been demonstrated in recent studies [40], [76], [56].", "blackWe limit the amount of time spent for each random walk of WJSample so the overall time spent for estimating cardinalities is comparable to other methods.", "If we allow WJSample longer inference time, it can generate more effective query plans — but it is impractical since its total estimation time will exceed the query execution time.", "4) MSCN [33] is a learned 
query-driven method based on a multi-set convolutional network model.", "On both datasets, we train MSCN on roughly $100K$ sub-plan training queries that have similar distributions to the testing query workloads.", "5) BayesCard [70], 6) DeepDB [25], and 7) FLAT [77] are the learned data-driven CardEst methods.", "They use fanout methods to understand the data distributions of all join templates.", "Then, they apply different distribution models (Bayesian networks, sum-product networks [57], factorized sum-product networks [72], respectively) to capture the data distribution on single tables or denormalized join tables.", "We test these methods on STATS-CEB using the optimally-tuned model parameters, as used in [18].", "8) PessEst [5] is the SOTA bound-based method, which leverages randomized hashing and data sketching to tighten the bound for join queries.", "Its estimation has been verified to be very effective in real-world DBMSes.", "We use the best-tuned hyperparameters to evaluate PessEst on both benchmarks.", "9) U-Block [22] uses top-k statistics to estimate cardinality bounds.", "The original paper designed a new plan enumeration strategy to work with this bound and showed promising results on IMDB-JOB.", "However, for a fair comparison, we only evaluate the standalone U-Block CardEst method.", "10) TrueCard outputs true cardinality for given queries with no estimation latency.", "This baseline does not refer to any specific method but represents the optimal CardEst performance.", "We omit comparisons with many other CardEst methods [4], [61], [31], [12], [62], [68], [21], [32], [39], [20], [73], [69], [11] blackbecause the baselines above have demonstrated clear advantages over these methods in various aspects [18].", "In addition, some newly proposed methods [63], [67], [42] do not have open-source implementations yet.", "We implemented FactorJoin in Python mainly using the NumPy package.", "Regarding the hyperparameters of FactorJoin, we assume no knowledge of the query workload and set the bin size $k$ to 100 for both datasets.", "We use the BN-based CardEst method [70] as the single-table estimators for STATS-CEB benchmark because this method has been shown to be very accurate, fast, and robust for single-table estimations.", "For IMDB-JOB, we use random sampling with $1\\%$ sampling rate as the single-table estimator because this benchmark contains very complicated filter predicates such as disjunctions and string pattern matching, which are not supported by the learned data-driven methods.", "We discuss and compare other hyperparameters of our framework in Section REF .", "Table: Summary of STATS-CEB and IMDB-JOB benchmark.Figure: Overall performance on STATS-CEB and IMDB-JOB.Environment: blackWe use the open-source implementation of all the baselines and tune their parameters to achieve the best possible performance.", "All experiments are performed on a Ubuntu server with Intel Core Processor CPU with 20 cores, 57GB DDR4 main memory, and 300GB SSD.", "For end-to-end evaluation, we integrate our framework along with all the baselines into the query optimizer of PostgreSQL 13.1 following the procedure described in the recent work [18], [17].", "We build the primary key and foreign key indices and disable parallel execution in PostgreSQL.", "Specifically, we inject into PostgreSQL all sub-plan query cardinalities estimated by each method, so the PostgreSQL optimizer uses the injected cardinalities to optimize the query plan, and record the end-to-end query runtime (planning 
plus execution time).", "To eliminate the effects of cache, we first execute each workload multiple times.", "Then, we execute the queries three times for each method and report the average blackand the standard errors." ], [ "Overall performance", "The most straightforward criterion to evaluate the performance of a CardEst method is to test how much each method improves the end-to-end query performance inside the query optimizer.", "Therefore, we compare the end-to-end performance along with the model size and training time of existing baselines on STATS-CEB and IMDB-JOB.", "We summarize the overall performance of all baselines in Figure REF .", "Recall that JoinHist, BayesCard, DeepDB, and FLAT can not support the IMDB-JOB benchmark.", "In addition, we do not provide the model size and training time for Postgres, TrueCard, JoinHist, WJSample, and U-Block as they are negligible.", "Table: End-to-end performance on STATS-CEB.Performance on STATS-CEB: In addition to Figure REF , we provide a detailed comparison of query execution and planning time (sum over all queries in the STATS-CEB) in Table REF , with the relative end-to-end improvement over Postgres shown in the last column, i.e.", "(Postgres time - method time) / Postgres time.", "FactorJoin achieves the best end-to-end query runtime ($19,116s$ ), which is close to optimal performance ($18,456s$ for TrueCard).", "When compared with the traditional methods (Postgres, JoinHist, and WJSample), FactorJoin has significantly better query execution time, indicating that our estimates are much more effective at generating high-quality query plans.", "Meanwhile, the planning time of our framework is close to Postgres.", "Judging from the pure query execution time in Table REF , FactorJoin has comparable estimation effectiveness as the previous SOTA method FLAT on this benchmark.", "This learned data-driven method uses the fanout method to understand the distribution of all join patterns, which is accurate but very inefficient.", "Specifically, FactorJoin can achieve better end-to-end performance than FLAT, with $10 \\times $ smaller planning time, $160 \\times $ smaller model size, and $240 \\times $ faster training time.", "When compared to the other learned methods: BayesCard, DeepDB, and MSCN, FactorJoin can simultaneously achieve better query execution and planning time, smaller model size, and faster training time.", "Therefore, we empirically verify that instead of modeling the distributions of all join patterns, accurately decomposing the join query into single-table estimates can generate equally effective but more efficient estimation with much smaller model size and training time.", "FactorJoin and the SOTA bound-based method PessEst can both generate very effective estimates.", "This observation verifies that a bound-based method can produce effective estimates because it can help avoid very expensive query plans [5], [3], [1], [18].", "However, the planning time of PessEst (1135s) is undesirable because it needs to materialize tables after applying the filters and populate the upper bound during run-time.", "FactorJoin has $35 \\times $ less planning latency since instead of using exact bounds, we use probabilistic bounds derived from single-table estimates, which is much more efficient.", "Table: End-to-end performance on IMDB-JOB.Performance on IMDB-JOB: Table REF shows a detailed end-to-end time comparison for the IMDB-JOB workload.", "FactorJoin achieves the best end-to-end query runtime ($1324s$ ) amongst all baselines.", 
"Similar observations hold as in the STATS-CEB workload when compared to the traditional methods.", "Like STATS-CEB, PessEst achieves the best performance in terms of pure execution time but has a much larger overhead and planning latency.", "FactorJoin has comparable performance as PessEst in terms of pure execution time.", "However, our planning time is $50 \\times $ shorter than PessEst, making our framework much faster in overall end-to-end query time.", "Specifically, our framework can estimate $10,000$ sub-plan queries within one second, which is close to Postgres.", "However, unlike in STATS-CEB where we have near-optimal performance, there still exists a large gap between FactorJoin (1324s) and the optimal TrueCard (375s).", "There are two possible explanations.", "First, our probabilistic bound error can accumulate with the number of joins.", "Compared to STATS-CEB, the IMDB-JOB contains more than twice the number of joins in the queries.", "Therefore, the relative performance of our framework drops severely when shifting from STATS-CEB to IMDB-JOB.", "Second, the base-table filters in IMDB-JOB contain a large amount of highly-selective predicates, making our sampling-based single-table CardEst less accurate.", "These observations suggest room for improving FactorJoin, which we discuss further in the future work section.", "Summary: FactorJoin achieves the best performance among all the baselines on both benchmarks.", "Specifically, FactorJoin is as effective as the previous SOTA methods, and simultaneously as efficient and practical as the traditional CardEst methods.", "Figure: Relative estimation errors on STATS-CEB." ], [ "Detailed Analysis", "In this section, we first study why FactorJoin is able to achieve SOTA performance by analyzing the tightness of our probabilistic bound.", "Then, we exhaustively compare the CardEst method's end-to-end performance on queries with different runtimes to investigate when can FactorJoin perform better or worse than the competitive baselines.", "For compactness, here, we only report the results of the most representative methods (Postgres, FLAT, PessEst) on STATS-CEB; the remaining results are in the appendix.", "Bound tightness: We show the relative errors between the baselines' estimates and the true cardinalities (estimate / true) for all sub-plan queries on STATS-CEB in Figure REF .", "Overall, all three SOTA methods significantly outperform Postgres in terms of estimation accuracy.", "PessEst generates exact upper bounds and never underestimates.", "FLAT uses a much larger model to understand the distributions of join patterns and produces the most accurate estimates.", "FactorJoin can output an upper bound on cardinality for more than $90\\%$ of the sub-plan queries.", "Most of the marginal underestimates are very close to the true cardinality, which can still generate relatively effective plans.", "Our generated bounds are slightly tighter than the bounds from PessEst, which explains why our query plans achieve slightly better results.", "Detailed comparision: We sort the 146 queries of STATS-CEB based on their Postgres end-to-end runtime and cluster them into 6 different runtime intervals.", "Figure REF reports relative improvements of the most competitive baselines over Postgres for each query on the left and each cluster of queries on the right.", "For the very short-running queries (which represent an OLTP-like workload), Postgres is the best amongst all baselines.", "These baselines perform worse because the estimation latency 
plays a significant role in these queries.", "We observe that improving the estimation accuracy of these queries has a very limited effect on the query plan quality.", "This also explains why the optimal TrueCard only marginally outperforms Postgres on queries with less than $1s$ of runtime.", "However, these queries only contribute a negligible proportion of total runtime in STATS-CEB.", "The query optimizer should fall back to default traditional CardEst methods for OLTP-like workloads.", "FLAT and PessEst are significantly worse than FactorJoin because their planning latency surpasses the execution time, which overshadows the minor improvement in query plans.", "For the extremely long-running queries, the advantages of the SOTA methods over Postgres gradually appear.", "The reason is that estimation latency becomes increasingly insignificant for queries with a long execution time.", "Thus, FLAT and PessEst, which spend a long time planning, can generate significantly better query plans.", "FactorJoin has comparable performance on these queries.", "Figure: Per query performance of STATS-CEB.", "Incremental updates: Following the same evaluation setup as previous work [18], we train a stale FactorJoin model on STATS data created before 2014 (roughly $50\%$ ) and insert the rest of the data to incrementally update this model.", "We provide the end-to-end query time of the updated FactorJoin and the update time on STATS-CEB queries in Table REF .", "Since FactorJoin only needs to update single-table statistics, it is extremely efficient, i.e., taking only $2.5s$ to update millions of tuples.", "This update is up to $168 \times $ faster than that of the learned data-driven methods, with better end-to-end query performance after the model update.", "We cannot fairly compare with the learned query-driven methods because there does not exist a training query workload after the data insertion.", "It is worth noting that the update times reported for other methods in Table REF include the time to re-compute the denormalized join tables for the inserted data, so the numbers are larger than the ones reported in the original paper [18].", "However, we do see a slight drop in the end-to-end improvement when compared to the FactorJoin trained on the entire dataset ($43.4\%$ in Table REF versus $45.9\%$ in Table REF ).", "This small performance difference is due to the fact that the bins are decided based on the data before 2014 and remain fixed during incremental updates.", "Therefore, after inserting the new data, the min-variance property of the bins could be violated, resulting in less accurate predictions.", "Table: Incremental update performance on STATS-CEB." 
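Because the bins stay fixed during incremental updates, refreshing FactorJoin's model amounts to updating per-bin counters on each single table. The sketch below is a simplified illustration of that idea, not the released FactorJoin code (class and function names are ours): it folds a batch of newly inserted join-key values into fixed-bin value counts and most-frequent-value (MFV) counts.

import numpy as np
from collections import Counter

class JoinKeyStats:
    """Simplified per-join-key statistics: fixed bin edges, per-bin tuple counts,
    per-bin value frequencies, and per-bin most-frequent-value (MFV) counts."""

    def __init__(self, bin_edges):
        self.bin_edges = np.asarray(bin_edges)              # decided at training time, never changed here
        k = len(self.bin_edges) - 1
        self.bin_counts = np.zeros(k, dtype=np.int64)       # tuples per bin
        self.value_counts = [Counter() for _ in range(k)]   # value -> frequency within each bin
        self.mfv_counts = np.zeros(k, dtype=np.int64)       # MFV count per bin (V_i^*)

    def insert(self, new_values):
        """Incrementally fold newly inserted key values into the statistics."""
        new_values = np.asarray(new_values)
        bins = np.clip(np.searchsorted(self.bin_edges, new_values, side="right") - 1,
                       0, len(self.bin_counts) - 1)
        for b, v in zip(bins, new_values):
            self.value_counts[b][v] += 1
            self.bin_counts[b] += 1
            if self.value_counts[b][v] > self.mfv_counts[b]:
                self.mfv_counts[b] = self.value_counts[b][v]

# Example: bins fixed on the stale data, then a batch of new keys folded in later.
stats = JoinKeyStats(bin_edges=[0, 100, 1000, 10**6])
stats.insert(np.random.default_rng(0).integers(0, 10**6, size=100_000))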
], [ "Ablation study", "We conduct a series of ablation studies to analyze different optimization techniques inside FactorJoin.", "Specifically, we first investigate how the number of bins (k) affects the overall performance.", "Next, we analyze the effectiveness of our new bin selection algorithm (GBSA in Section REF ) over the traditional equal-width and equal-depth algorithms.", "Then, we study the performance of plugging-in different single-table CardEst methods into FactorJoin.", "Due to space limitations, we only provide the results on STATS-CEB.", "At last, we investigate how removing each of the simplifying assumptions improves the performance of the original joining histogram.", "Number of bins (k): We train and evaluate FactorJoin with five different number of bins (1, 10, 50, 100, 200).", "Figure REF shows the end-to-end query time (A), bound tightness (B), estimation latency per query (C), training time (D), and model size (E) of FactorJoin.", "For bound tightness, we report the $50\\%, 95\\%, 99\\%$ percentiles of all queries' relative error: estimated/true, as the one used in Figure REF .", "Figure REF -(A-B) shows that larger numbers of bins ($k$ ) will generate tighter cardinality bounds and thus more effective query plans.", "Figure REF -(C-E) shows that more bins also increase estimation latency, training time, and model size, as expected.", "There are several interesting observations from these results.", "$\\bullet $ According to Figure REF -(B), with $k=1$ (i.e.", "bin the entire domain of each join key into one bin) FactorJoin does not generate tight bounds but it still outperforms Postgres baseline by $24.9\\%$ .", "This highlights the importance of upper bounds over underestimation and verifies the effectiveness of bound-based algorithms.", "$\\bullet $ Increasing $k$ can substantially increase the bound tightness.", "However, at some point (from $k=100$ to $k=200$ ) the framework stops achieving better end-to-end performance.", "One explanation is that this fine-grained improvement in estimation accuracy can generate a slightly better query plan but the slower estimation latency cancels out the improvements in query plans.", "$\\bullet $ The model size increases quadratically with $k$ because FactorJoin has $O(N*k^2)$ complexity in both storage and inference.", "However, estimation latency increases linearly with $k$ because the overall inference time is dominated by the single-table CardEst methods' inference time, whose complexity is linear with $k$ .", "Figure: Performance for different number of bins.Table: Performance of different binning techniques.Different bin selection algorithms: Recall that in Section REF , we design a new algorithm (GBSA) to optimize the binning process for FactorJoin.", "Here, we compare it with the traditional equal-width and equal-depth binning algorithms.", "We set $k = 100$ for all three algorithms.", "They have approximately the same estimation latency, training time, and model size.", "Thus, we only report the end-to-end performance and bound tightness in Figure REF .", "We observe that GBSA generates much tighter upper bounds, leading to significantly better end-to-end performance.", "This demonstrates the effectiveness of our GBSA algorithm.", "Varying single-table CardEst methods: To compare the importance of different single-table CardEst methods in FactorJoin, we tried three different estimators: 1) Bayesian network (blackBayesCard) for single-table, which is the SOTA learned single-table method [70], 2) Sampling, which 
draws a random sample (5%) from single tables on-the-fly and estimates the cardinalities of filter predicates, and 3) TrueScan, which scans and filters the tables during query time and calculates the true cardinalities.", "The end-to-end performance of different single-table estimators with bin size $k=100$ is shown in Table REF .", "The BayesCard method performs significantly better than Sampling because of more accurate estimates and slightly faster estimation speeds.", "TrueScan produces an exact upper bound on the cardinalities and thus generates more effective query plans.", "However, the estimation latency is too high, making its overall end-to-end performance worse than BayesCard.", "Table: FactorJoin, varying single-table CardEst methods.", "Improvement over joining histograms: Since FactorJoin follows the convention of JoinHist, we investigate how much each component of FactorJoin helps improve this method.", "Specifically, apart from the original JoinHist method, we evaluate and compare the following variants.", "We incorporate our new probabilistic bound into JoinHist to remove its join uniformity assumption (denoted as with Bound).", "We incorporate into JoinHist the histograms learned from BayesNet that represent conditional distributions of join keys given the filter predicates (denoted as with Conditional).", "This variant avoids the attribute independence assumption.", "FactorJoin reduces to JoinHist with both the bound and conditional techniques on non-cyclic join templates.", "The results in Table REF demonstrate that we can achieve significant improvement over the JoinHist method by removing either or all of its simplifying assumptions.", "Summary: With a reasonable bin size $k$ , FactorJoin generates tight bounds and effective query plans.", "The GBSA bin selection algorithm is very effective at generating high-quality plans.", "Single-table estimators do have an impact on FactorJoin's overall performance, and we should choose one that can generate effective and efficient estimates.", "Overall, all different settings/hyperparameters of FactorJoin can significantly outperform Postgres as well as all existing SOTA methods, demonstrating the robustness and stability of FactorJoin.", "Table: Performance comparison with JoinHist." 
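The effect of the bin budget $k$ is easiest to see with the per-bin bound written out in code. The sketch below is our own minimal rendering (not the FactorJoin implementation) of the Case 1 two-table bound given in the appendix; the input arrays are assumed to come from the single-table estimators and the precomputed MFV statistics, and the loop over the $k$ bins is why both bound tightness and estimation cost grow with $k$.

import numpy as np

def two_table_bound(prob_a, card_a, prob_b, card_b, mfv_a, mfv_b):
    """Probabilistic upper bound for |Q(A) JOIN Q(B)| on A.id = B.Aid.

    prob_a[i]: estimated P(A.id in bin_i | Q(A)) from A's single-table estimator
    card_a:    estimated |Q(A)|, the number of A tuples passing its filters (same for B)
    mfv_a[i]:  most-frequent-value count of A.id inside bin_i (precomputed statistic)
    """
    prob_a, prob_b = np.asarray(prob_a, float), np.asarray(prob_b, float)
    mfv_a, mfv_b = np.asarray(mfv_a, float), np.asarray(mfv_b, float)
    # Number of distinct values each side could contribute to bin_i if every value
    # appeared as often as the bin's most frequent value.
    vals_a = prob_a * card_a / np.maximum(mfv_a, 1.0)
    vals_b = prob_b * card_b / np.maximum(mfv_b, 1.0)
    # At most min(vals_a, vals_b) values can match, each joining at most mfv_a * mfv_b times.
    return float(np.sum(np.minimum(vals_a, vals_b) * mfv_a * mfv_b))

# Toy example with k = 3 bins (all numbers made up):
print(two_table_bound(prob_a=[0.5, 0.3, 0.2], card_a=1_000,
                      prob_b=[0.6, 0.2, 0.2], card_b=5_000,
                      mfv_a=[4, 2, 1], mfv_b=[10, 5, 2]))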
], [ "Related work", "We briefly review literatures on single-table CardEst methods and the learned query optimizers.", "Single-table CardEst: Machine learning approaches have been use to solve single-table CardEst problem, which can achieve accurate estimation with very low latency and overhead [74], [70], [77].", "Specifically, the data-driven learned methods build statistical models to understand data distributions, such as deep autoregressive models [74], [19], Bayesian networks [70], [16], [13], sum-product-networks [25], factorized sum-product-networks [77], and normalizing flow models [67].", "As for the query-driven methods, the first approach using neural networks for CardEst has been proposed for UDF predicates [36].", "Later on, a semi-automatic alternative [47] and a regression-based model [2] were used for general predicates.", "Recently, more sophisticated supervised models such as multi-set convolutional networks [33], XG-boost [11], tree-LSTM [63], and deep ensembles [42], are used to provide accurate estimation.", "These works are orthogonal to our framework and in principle, we can support plugging in any one of them for single table CardEst.", "Learned query optimizer: Apart from CardEst, there exist a large number of works that use ML to solve other tasks of query optimizer, such as cost estimation and join order selection.", "Learned cost estimation methods use tree convolution [51] and tree-LSTM [63] to encode a query plan as a tree and map the encoding to its estimated costs.", "Active learning [45] and zero-shot learning [24] are proposed to tackle this problem from a new perspective.", "Some deep reinforcement learning approaches [75], [50] have been proposed to determine the optimal join order.", "Recently, many methods [49], [48], [53], [71] propose to learn the entire query optimizer end-to-end without a clear separation of these components." 
], [ "Conclusions", "In this paper, we propose FactorJoin, a framework for cardinality estimation of join queries.", "It combines classical join-histogram methods with learned single table cardinality estimates into a factor graph model.", "This framework converts CardEst problem into an inference problem over the factor graph involving only single-table distributions.", "We further propose several optimizations to make this problem tractable for large join graphs.", "Our experiments show that FactorJoin generates effective and efficient estimates and is suitable for system deployment on large real-world join benchmarks.", "Specifically, FactorJoin produces more effective estimates than the previous SOTA learned methods, with 40x less estimation latency, 100x smaller model size, 100x faster updating speed, and 100x faster training speed, while matching or exceeding them in terms of query execution time.", "We believe this work points out a new direction for estimating join queries, which would enable truly practical learned CardEst methods as a system component.", "Equations and proofs Equations for decomposing joins into single-tables In this section, we provide the case-by-case equations on how to accurately formulate join estimation problem into single-table CardEst problem without any simplified assumptions.", "Two-table join case: Assume that we have two tables $A$ and $B$ with join keys $A.id$ and $B.Aid$ and a query $Q = Q(A) \\mathbin {\\bowtie }\\rule [0.15ex]{.22em}{.6pt}\\unknown.", "{\\rule [0.9ex]{.22em}{.6pt}}\\hspace{-3.05542pt}\\bowtie \\hspace{-3.05542pt}$$$Q(B)$ where $ Q(A)$ is the filter on table $ A$ and same for $ Q(B)$.", "The join condition is $ A.id = B.Aid$.", "Then, the join cardinality of $ Q$ can be expressed as follows where $ D(A.id)$ represents the domain of $ A$.", "{\\begin{@align}{1}{-1}|Q| = \\sum _{v \\in D(A.id)} &P_{A}(A.id = v | Q(A)) * |Q(A)| * \\nonumber \\\\& P_{B}(B.Aid = v | Q(B)) * |Q(B)|\\end{@align}}\\smallskip \\underline{\\textbf {Chain join cases:}}\\emph {\\textbf {Case1 A.id = B.Aid = C.Aid:}}Assume we have three tables $ A$, $ B$ and $ C$ with join keys $ A.id$, $ B.Aid$, and $ C.Aid$.", "There exists query $ Q = Q(A) Q(B) Q(C)$ on join condition $ A.id = B.Aid AND B.Aid = C.Aid$.Then, the join cardinality can be expressed as follows:{\\begin{@align}{1}{-1}|Q| &= \\sum _{v \\in D(A.id)} P_{A}(A.id = v | Q(A)) * |Q(A)| * \\nonumber \\\\& P_{B}(B.Aid = v | Q(B)) * |Q(B)| * P_{C}(C.Aid = v | Q(C)) * |Q(C)|\\end{@align}}\\emph {\\textbf {Case2 A.id = B.Aid, B.id = C.Bid:}}Assume we have three tables $ A$, $ B$ and $ C$ with join keys $ A.id$, $ B.Aid$, $ B.id$, and $ C.Bid$.", "There exists query $ Q = Q(A) Q(B) Q(C)$ on join condition $ A.id = B.Aid AND B.id = C.Bid$.Then, the join cardinality can be expressed as follows:{\\begin{@align}{1}{-1}|Q| = &\\sum _{v_1 \\in D(A.id)} \\sum _{v_2 \\in D(B.id)} P_{A}(A.id = v_1 | Q(A)) * |Q(A)| * \\nonumber \\\\&P_{B}(B.Aid = v_1, B.id = v_2 | Q(B)) * |Q(B)| * \\nonumber \\\\& P_{C}(C.Bid = v_2 | Q(C)) * |Q(C)|\\end{@align}}\\smallskip \\underline{\\textbf {Self join case:}}Assume we have one table $ A$ with join keys $ A.id, A.id2$.", "There exists a query $ Q = Q(A) Q(A')$ on join condition $ A.id = A.id2$.", "Then, the join cardinality can be expressed as follows:{\\begin{@align}{1}{-1}|Q| = \\sum _{v \\in D(A.id)} &P_{A}(A.id = v | Q(A)) * |Q(A)| * \\nonumber \\\\& P_{A}(A.id_2 = v | Q(A^{\\prime })) * |Q(A^{\\prime })|\\end{@align}}\\smallskip \\underline{\\textbf {Cyclic join case:}}Assume we 
], [ "Equations for decomposing joins into single-tables", "In this section, we provide the case-by-case equations showing how to exactly formulate the join estimation problem as a single-table CardEst problem, without any simplifying assumptions.", "Two-table join case: Assume that we have two tables $A$ and $B$ with join keys $A.id$ and $B.Aid$, and a query $Q = Q(A) \bowtie Q(B)$, where $Q(A)$ is the filter on table $A$ and likewise for $Q(B)$.", "The join condition is $A.id = B.Aid$.", "Then, the join cardinality of $Q$ can be expressed as follows, where $D(A.id)$ denotes the domain of $A.id$: \begin{align} |Q| = \sum_{v \in D(A.id)} & P_{A}(A.id = v | Q(A)) * |Q(A)| * \nonumber \\ & P_{B}(B.Aid = v | Q(B)) * |Q(B)| \end{align}", "Chain join cases: Case 1, $A.id = B.Aid = C.Aid$: Assume we have three tables $A$, $B$ and $C$ with join keys $A.id$, $B.Aid$, and $C.Aid$, and a query $Q = Q(A) \bowtie Q(B) \bowtie Q(C)$ on the join condition $A.id = B.Aid$ AND $B.Aid = C.Aid$.", "Then, the join cardinality can be expressed as follows: \begin{align} |Q| = \sum_{v \in D(A.id)} & P_{A}(A.id = v | Q(A)) * |Q(A)| * P_{B}(B.Aid = v | Q(B)) * |Q(B)| * \nonumber \\ & P_{C}(C.Aid = v | Q(C)) * |Q(C)| \end{align}", "Case 2, $A.id = B.Aid$, $B.id = C.Bid$: Assume we have three tables $A$, $B$ and $C$ with join keys $A.id$, $B.Aid$, $B.id$, and $C.Bid$, and a query $Q = Q(A) \bowtie Q(B) \bowtie Q(C)$ on the join condition $A.id = B.Aid$ AND $B.id = C.Bid$.", "Then, the join cardinality can be expressed as follows: \begin{align} |Q| = \sum_{v_1 \in D(A.id)} \sum_{v_2 \in D(B.id)} & P_{A}(A.id = v_1 | Q(A)) * |Q(A)| * \nonumber \\ & P_{B}(B.Aid = v_1, B.id = v_2 | Q(B)) * |Q(B)| * \nonumber \\ & P_{C}(C.Bid = v_2 | Q(C)) * |Q(C)| \end{align}", "Self join case: Assume we have one table $A$ with join keys $A.id$ and $A.id_2$, and a query $Q = Q(A) \bowtie Q(A')$ on the join condition $A.id = A.id_2$.", "Then, the join cardinality can be expressed as follows: \begin{align} |Q| = \sum_{v \in D(A.id)} & P_{A}(A.id = v | Q(A)) * |Q(A)| * \nonumber \\ & P_{A}(A.id_2 = v | Q(A')) * |Q(A')| \end{align}", "Cyclic join case: Assume we have two tables $A$ and $B$ with join keys $A.id$, $A.id_2$ and $B.Aid$, $B.Aid_2$, and a query $Q = Q(A) \bowtie Q(B)$ on the join condition $A.id = B.Aid$ AND $A.id_2 = B.Aid_2$.", "Then, the join cardinality can be expressed as follows: \begin{align} |Q| = & \sum_{v_1 \in D(A.id)} \sum_{v_2 \in D(A.id_2)} P_{A}(A.id = v_1 | Q(A)) * |Q(A)| * \nonumber \\ & P_{B}(B.Aid = v_1, B.Aid_2 = v_2 | Q(B)) * |Q(B)| * P_{A}(A.id_2 = v_2 | Q(A')) * |Q(A')| \nonumber \\ = & \sum_{v_1 \in D(A.id)} \sum_{v_2 \in D(A.id_2)} P_{B}(B.Aid = v_1 | Q(B)) * |Q(B)| * \nonumber \\ & P_{A}(A.id = v_1, A.id_2 = v_2 | Q(A)) * |Q(A)| * P_{B}(B.Aid_2 = v_2 | Q(B')) * |Q(B')| \end{align}" ], [ "Proof of Lemma 1", "Lemma 1: Given a join graph $\mathcal{G}$ representing a query $Q$, there exists a factor graph $\mathcal{F}$ such that the variable nodes in $\mathcal{F}$ are the equivalent key group variables of $\mathcal{G}$ and each factor node represents a table $I$ touched by $Q$.", "A factor node is connected to a variable node if and only if this variable represents a join key in table $I$.", "The potential function of a factor node is defined as table $I$'s probability distribution of the connected variables (join keys) conditioned on the filter predicates $Q(I)$.", "Then, calculating the cardinality of $Q$ is equivalent to computing the partition function of $\mathcal{F}$.", "Proof: Assume that we have a query $Q$ joining $m$ tables $A, B, \ldots, M$.", "We represent $Q$ as a join graph $\mathcal{G}$, where each node represents a join key and each edge represents a join relation between the two keys it connects.", "We further define hyper-nodes in $\mathcal{G}$ as sets of nodes whose corresponding join keys lie in the same table; thus, each hyper-node naturally represents a table in $Q$.", "A visualization of such a $\mathcal{G}$ can be found in Figure REF .", "Each connected component of $\mathcal{G}$ connects a group of join keys with an equality join relation, suggesting that they share the same semantics.", "Therefore, we consider all join keys in a connected component as an equivalent key group variable, denoted as $V_i$, so $\mathcal{G}$ defines $n$ equivalent variables $V_1, \ldots, V_n$.", "Then, assume that we have a set of unnormalized single-table distributions $P_A(V_A|Q(A)) * |Q(A)|, \ldots, P_M(V_M|Q(M)) * |Q(M)|$ and $P_A(V_{A'}|Q(A')) * |Q(A')|, \ldots, P_M(V_{M'}|Q(M')) * |Q(M')|$.", "Each $V_I$ represents the set of equivalent variables that correspond to a join key in table $I$, and $I'$ is the duplicated table introduced by query $Q$ only if there exists a cyclic join.", "Therefore, the cardinality of $Q$ can be written as: \begin{align} |Q| = & \sum_{v_1 \in D(V_1)} \sum_{v_2 \in D(V_2)} \cdots \sum_{v_n \in D(V_n)} (P_{A}(V_A | Q(A)) * |Q(A)|) * (P_{B}(V_B | Q(B)) * |Q(B)|) * \cdots \nonumber \\ & * (P_{M}(V_M | Q(M)) * |Q(M)|) * (P_{A}(V_{A'} | Q(A')) * |Q(A')|) * \cdots * (P_{M'}(V_{M'} | Q(M')) * |Q(M')|) \end{align}", "Next, let us construct a factor graph $\mathcal{F}$ such that the variable nodes in $\mathcal{F}$ are the equivalent key group variables $V_i$ of $\mathcal{G}$, and the factor nodes in $\mathcal{F}$ represent the tables $A, \ldots, M$ touched by $Q$ and, if they exist, the duplicated tables $A', \ldots, M'$.", "A factor node representing table $I$ is connected to a variable node if and only if this variable represents a join key in table $I$, and its potential function is defined as the unnormalized probability distribution $P_I(V_I|Q(I)) * |Q(I)|$.", "The partition function of $\mathcal{F}$ is then defined exactly as the equation above.", "Therefore, instead of summing over a nested loop of the $n$ variables $V_i$ by brute force, the partition function (i.e., the cardinality) can be computed using the well-established variable elimination and belief propagation techniques from the PGM domain." ], [ "Probabilistic bound", "FactorJoin approximates the exact cardinality computation with a probabilistic bound based inference algorithm.", "An example for two-table join queries is given below; in this section we discuss how to derive the upper bound ($Probabilistic\_bound(A, B, bin_i)$ in this equation) for the different join cases: \begin{align} |Q| &= \sum_{i = 1}^{k} \sum_{v \in bin_i} P_{A}(A.id = v | Q(A)) * |Q(A)| * P_{B}(B.Aid = v | Q(B)) * |Q(B)| \nonumber \\ &\lesssim \sum_{i = 1}^{k} Probabilistic\_bound(A, B, bin_i) \end{align}", "Case 1: joining two tables, $A.id = B.Aid$: Let $V_i^{*}(A.id) = \max_{v \in bin_i} |A.id = v|$ be the most frequent value (MFV) count of $A.id$ in a bin $bin_i$, and likewise for $V_i^{*}(B.Aid)$.", "We have the following bound: \begin{align} |Q| & = \sum_{v \in D(A.id)} P_{A}(A.id = v | Q(A)) * |Q(A)| * P_{B}(B.Aid = v | Q(B)) * |Q(B)| \nonumber \\ & \le \sum_{i = 1}^{k} \min\Big( \frac{P_A(A.id \in bin_i | Q(A)) * |Q(A)|}{V^{*}_i(A.id)}, \frac{P_B(B.Aid \in bin_i | Q(B)) * |Q(B)|}{V^{*}_i(B.Aid)} \Big) * V^{*}_i(A.id) * V^{*}_i(B.Aid) \end{align}", "Case 2: joining three tables, $A.id = B.Aid = C.Aid$: Let $V_i^{*}(A.id) = \max_{v \in bin_i} |A.id = v|$ be the MFV count of $A.id$ in $bin_i$, and likewise for $V_i^{*}(B.Aid)$ and $V_i^{*}(C.Aid)$.", "We have the following bound: \begin{align} |Q| & = \sum_{v \in D(A.id)} P_{A}(A.id = v | Q(A)) * |Q(A)| * P_{B}(B.Aid = v | Q(B)) * |Q(B)| * P_{C}(C.Aid = v | Q(C)) * |Q(C)| \nonumber \\ & \le \sum_{i = 1}^{k} \min\Big\lbrace \frac{P_{A}(A.id \in bin_i | Q(A)) * |Q(A)|}{V_i^{*}(A.id)}, \frac{P_{B}(B.Aid \in bin_i | Q(B)) * |Q(B)|}{V_i^{*}(B.Aid)}, \frac{P_{C}(C.Aid \in bin_i | Q(C)) * |Q(C)|}{V_i^{*}(C.Aid)} \Big\rbrace * V_i^{*}(A.id) * V_i^{*}(B.Aid) * V_i^{*}(C.Aid) \end{align}", "Case 3: joining three tables, $A.id = B.Aid$, $B.id = C.Bid$: Let $V_i^{*}(A.id) = \max_{v \in bin_i} |A.id = v|$ be the MFV count of $A.id$ in $bin_i$, and likewise for $V_i^{*}(B.Aid)$, $V_i^{*}(B.id)$, and $V_i^{*}(C.Bid)$.", "We have the following bound, where $Upper(Q(A) \bowtie Q(B))$ is derived from the Case 1 bound above: \begin{align} |Q| &= \sum_{v_1 \in D(A.id)} \sum_{v_2 \in D(B.id)} P_{A}(A.id = v_1 | Q(A)) * |Q(A)| * P_{B}(B.Aid = v_1, B.id = v_2 | Q(B)) * |Q(B)| * P_{C}(C.Bid = v_2 | Q(C)) * |Q(C)| \nonumber \\ &\le \sum_{i = 1}^{k} \min\Big\lbrace \frac{Upper(Q(A) \bowtie Q(B))}{V_i^{*}(A.id) * V_i^{*}(B.id)}, \frac{P_{C}(C.Bid \in bin_i | Q(C)) * |Q(C)|}{V_i^{*}(C.Bid)} \Big\rbrace * V_i^{*}(A.id) * V_i^{*}(B.id) * V_i^{*}(C.Bid) \end{align}", "Case 4: self join of one table, $A.id = A.id_2$: Let $V_i^{*}(A.id) = \max_{v \in bin_i} |A.id = v|$ be the MFV count of $A.id$ in $bin_i$, and likewise for $V_i^{*}(A.id_2)$.", "We have the following bound: \begin{align} |Q| & = \sum_{v \in D(A.id)} P_{A}(A.id = v | Q(A)) * |Q(A)| * P_{A}(A.id_2 = v | Q(A')) * |Q(A')| \nonumber \\ & \le \sum_{i = 1}^{k} \min\Big( \frac{P_A(A.id \in bin_i | Q(A)) * |Q(A)|}{V^{*}_i(A.id)}, \frac{P_A(A.id_2 \in bin_i | Q(A')) * |Q(A')|}{V^{*}_i(A.id_2)} \Big) * V^{*}_i(A.id) * V^{*}_i(A.id_2) \end{align}", "Case 5: cyclic join of two tables, $A.id = B.Aid$ AND $A.id_2 = B.Aid_2$: Let $V_i^{*}(A.id) = \max_{v \in bin_i} |A.id = v|$ be the MFV count of $A.id$ in $bin_i$, and likewise for $V_i^{*}(B.Aid)$, $V_i^{*}(A.id_2)$, and $V_i^{*}(B.Aid_2)$.", "We have the following bound: \begin{align} |Q| &= \sum_{v_1 \in D(A.id)} \sum_{v_2 \in D(A.id_2)} P_{A}(A.id = v_1, A.id_2 = v_2 | Q(A)) * |Q(A)| * P_{B}(B.Aid = v_1, B.Aid_2 = v_2 | Q(B)) * |Q(B)| * P_{A}(A.id_2 = v_2 | Q(A')) * |Q(A')| \nonumber \\ & \le \sum_{i = 1}^{k} \min\Big\lbrace \frac{Upper(Q(A) \bowtie Q(B))}{V_i^{*}(A.id_2) * V_i^{*}(B.Aid)}, \frac{Upper(Q(A') \bowtie Q(B))}{V_i^{*}(A.id) * V_i^{*}(B.Aid_2)} \Big\rbrace * V_i^{*}(A.id_2) * V_i^{*}(B.Aid) * V_i^{*}(A.id) * V_i^{*}(B.Aid_2) \end{align}" ], [ "Greedy bin selection Algorithm details", "We observe that the bound on a particular bin $bin_i$ can be very loose if the MFV count $V^*_i$ is a large outlier in $bin_i$ .", "Taking the two-table join query as an example, if $bin_i$ contains only one value that appears 100 times in $A.id$ but $10,000$ values that appear only once in $B.Aid$ , then the bound could be 100 times larger than the actual cardinality.", "The existing equal-width or equal-depth binning strategies can generate very large estimation errors, so we design a new binning strategy called the greedy bin selection algorithm (GBSA) to optimize our bound tightness.", "The objective of GBSA is to minimize the variance of the value counts within $bin_i$ .", "In the extreme case, if the value counts have zero variance for all join keys in one equivalent key group, then our bound can output the exact cardinality (given perfect single-table CardEst models).", "However, as the same bin partition will be applied to all equivalent join keys, minimizing the value-count variance of $bin_i$ on the domain of one key may result in a bad bin for other keys.", "Jointly minimizing the variance of one bin for all join keys has exponential complexity.", "Therefore, GBSA uses a greedy algorithm to iteratively minimize the
bin variance for all join keys.", "At a high level, GBSA first optimizes the minimal variance bins with half number of binning budget $k/2$ on the domain of one join key.", "Then, it recursively updates these bins by minimizing the variance of other join keys using the rest half of the budget.", "The details of GBSA are provided in Algorithm REF .", "[t] Greedy Bin Selection Algorithm (GBSA) Input: Equivalent key groups $Gr_1, \\ldots , Gr_m$ , where $Gr_i = \\lbrace Id_i^1, \\ldots , Id_i^{|Gr_i|} \\rbrace $ ; Column data $\\mathcal {D}(Id_i^j)$ of all join keys in the DB instance $\\mathcal {D}$ ; Number of bins $k_i$ for each group $Gr_i$ .", "[1] $Gr_i \\in \\lbrace Gr_1, \\ldots , Gr_m\\rbrace $ $Bin(Gr_i) \\leftarrow $ [] $Gr_i^{\\prime } \\leftarrow sort\\_key\\_based\\_on\\_domain\\_size(\\mathcal {D}, Gr_i)$ $Bin(Gr_i) \\leftarrow get\\_min\\_variance\\_bins(\\mathcal {D}(Gr_i^{\\prime }[1]), k_i/2)$ remain_bins $\\leftarrow k_i/2$ $j \\in \\lbrace 2, \\ldots , |Gr_i^{\\prime }| \\rbrace $ binned_data $\\leftarrow apply\\_bin\\_to\\_data(\\mathcal {D}(Gr_i^{\\prime }[j]), Bin(Gr_i))$ bin_variance $\\leftarrow calculate\\_variance$ (binned_data) arg_sort_idx $\\leftarrow arg\\_sort\\_decreasing(bin\\_variance)$ $p \\in arg\\_sort\\_idx[1:remain\\_bins/2]$ $Bin(Gr_i) \\leftarrow min\\_variance\\_dichotomy(Bin(Gr_i)[p]$ , binned_data[p]) remain_bins $\\leftarrow $ remain_bins/2 return $\\lbrace Bin(Gr_i) | i, \\ldots , m\\rbrace $ FactorJoin analyzes the schema of DB instance $\\mathcal {D}$ to derive $m$ equivalent key groups $Gr_1, \\ldots , Gr_m$ , each of which contains $|Gr_i|$ join keys $Id_i^1, \\ldots , Id_i^{|Gr_i|}$ .", "Let $\\mathcal {D}(Id_i^j)$ represents the column data (domain) of join key $Id_i^j$ .", "GBSA will derive the sub-optimal binning with $k_i$ bins $Bin(Gr_i) = \\lbrace bin_1, \\ldots , bin_{k_i} \\rbrace $ for each the equivalent key group $Gr_i$ (line 1).", "We explain the procedure of this algorithm for binning one group $Gr_i$ (line 2-14).", "First, GBSA sorts the join keys $\\lbrace d_i^1, \\ldots , Id_i^{|Gr_i|}\\rbrace $ in decreasing order based on their domain size (line 3) to get $Gr_i^{\\prime }=\\lbrace Id_i^{1^{\\prime }}, \\ldots , Id_i^{|Gr_i|^{\\prime }}\\rbrace $ .", "We apply half of the binning budget to generate $k_i/2$ minimal variance bins on the domain of $d_i^{1^{\\prime }}$ (line 4).", "Note that minimal variance bins on a single attribute can be easily obtained by sorting the value counts of $\\mathcal {D}(d_i^{1^{\\prime }})$ and applying equal-depth binning over the sorted values.", "The remaining number of bins available $remain_bins$ is $k_i/2$ (line 5).", "Then for each rest of the join key $d_i^j$ , GBSA applies the current bins $Bin(Gr_i)$ to its data column $\\mathcal {D}(d_i^j)$ , calculates the variance for each bin, and sorts these bins in decreasing order (line 6-9).", "For the top $remain_bins/2$ bins with the largest variance, we dichotomize each bin into two bins to minimize the variance on join key $d_i^j$ (lines 10 -12).", "We iterative the above procedure until all join keys are optimized.", "According to our evaluation results, the GBSA has a significant impact on improving our probabilistic bound tightness and estimation effectiveness.", "Figure: Per query performance of STATS-CEB.Figure: Per query performance of STATS-CEB." 
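A compact Python rendering of Algorithm REF may make the two phases easier to follow. It is a simplified sketch under our own helper names (not the released implementation): bins are represented as sets of join-key values, half of the budget builds minimum-variance bins on the largest-domain key, and the remaining budget is spent dichotomizing the highest-variance bins for the other keys.

import numpy as np
from collections import Counter

def min_variance_bins(values, num_bins):
    """Sort domain values by frequency and cut the sorted list equal-depth,
    which yields bins with near-uniform value counts (minimum variance)."""
    counts = Counter(values)
    ordered = [v for v, _ in sorted(counts.items(), key=lambda kv: kv[1])]
    return [set(chunk.tolist()) for chunk in np.array_split(np.array(ordered), num_bins) if len(chunk)]

def bin_variance(bin_values, counts):
    freqs = [counts[v] for v in bin_values if v in counts]
    return float(np.var(freqs)) if freqs else 0.0

def dichotomize(bin_values, counts):
    """Split one bin into two halves along this key's frequency ordering."""
    ordered = sorted(bin_values, key=lambda v: counts.get(v, 0))
    mid = max(1, len(ordered) // 2)
    return set(ordered[:mid]), set(ordered[mid:])

def gbsa(key_columns, k):
    """key_columns: one 1-D array per join key in an equivalent key group; k: bin budget."""
    # Phase 1: spend half of the budget on the key with the largest domain.
    key_columns = sorted(key_columns, key=lambda col: len(set(col)), reverse=True)
    bins = min_variance_bins(key_columns[0], k // 2)
    remaining = k // 2
    # Phase 2: for every other key, split its highest-variance bins with half the leftover budget.
    for col in key_columns[1:]:
        counts = Counter(col)
        order = sorted(range(len(bins)), key=lambda i: bin_variance(bins[i], counts), reverse=True)
        for i in order[: max(1, remaining // 2)]:
            if len(bins[i]) > 1:
                bins[i], right = dichotomize(bins[i], counts)
                bins.append(right)
        remaining //= 2
        if remaining == 0:
            break
    return bins

# Toy usage: two join keys sharing one equivalent key group, total budget of 8 bins.
rng = np.random.default_rng(0)
a_id = rng.zipf(2.0, size=2_000) % 50
b_aid = rng.zipf(2.0, size=5_000) % 50
print(len(gbsa([a_id, b_aid], k=8)))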
], [ "Additional experiments and details", "In this section, we provide the detailed comparision of all baselines on STATS-CEB and IMDB-JOB benchmarks.", "Performance on STATS-CEB: We sort the 146 queries of STATS-CEB based on their Postgres end-to-end runtime and cluster them into 6 different runtime intervals.", "Figure REF reports relative improvements of all competitive baselines over Postgres for each query and the overall performance comparision on each cluster of queries on the last figure.", "For the very short-running queries (which represent an OLTP-like workload), Postgres is the best among all baselines.", "These baselines perform worse because the estimation latency plays a significant role in these queries.", "We observe that improving estimation accuracy has a very limited effect on the query plan quality of short-running queries.", "This also explains why the optimal TrueCard only marginally outperforms Postgres on queries with less than $2s$ of runtime.", "Overall, FactorJoin has the best performance among all baselines except for Postgres.", "For the extremely long-running queries, the advantage of the learning-based methods over Postgres gradually appear.", "The reason is that estimation latency becomes increasingly insignificant for queries with a long execution time.", "FactorJoin has comparable performance as the SOTA learning-based methods on these queries.", "Performance on IMDB-JOB: Since a large proportion of the queries in IMDB-JOB workload are short-running queries, Postgres has a better performance than all baselines for more than half of the queries.", "Similar to STATS-CEB queries, we do not observe any performance gain for using the learned CardEst method over Postgres on queries with less on $1s$ runtime.", "In fact, the optimal TrueCard also barely improves the Postgres.", "This suggests that the learned methods shoulf fall back to Postgres for OLTP workloads.", "Similar to STATS-CEB, FactorJoin also has a significantly better performance over all other baselines except Postgres." ] ]
2212.05526
[ [ "Deep Inelastic Scattering with Application to Nuclear Targets: Lectures\n at the 1985 Los Alamos School on Relativistic Dynamics and Quark Nuclear\n Physics" ], [ "Abstract This paper is essentially a verbatim reconstruction of lectures that I gave at the Los Alamos School on Relativistic Dynamics and Quark Nuclear Physics in 1985.", "They were published in the school proceedings, but the book is not widely available.", "The Los Alamos School took place at the height of the first wave of interest in the quark substructure of nuclei, stimulated by the 1983 discovery of the EMC Effect.", "Interest in this subject has been increasing for years and the prospect of a dedicated Electron Ion Collider within the decade guarantees even greater attention to quarks and gluons in nuclei among both theorists and experimentalists.", "Recently, to my surprise, I learned that copies of my old lectures have been circulating and been found useful by the relatively few people who know about them.", "The are, of course, dated: experiments have far outstripped what was available 37 years ago and theory has progressed too.", "However, the rest frame derivation of the parton model, the derivation and discussion of the convolution formalism for nucleons, nucleon correlations, and other, virtual, constituents of nuclei, and sections on scaling violation and the operator product expansion have aged pretty well and seem to still be useful.", "With the help and encouragement of Richard Milner, I have recreated the LaTeX files necessary to post the 1985 Lectures on the arXiv, making them available to the nuclear and particle physics community.", "Apart from correcting some typographical errors, I have made no attempt to edit, improve, or update these lectures.", "I hope readers will nevertheless find them useful." 
], [ "MOTTO", "“Looking for the quarks in the nucleus is like looking for the Mafia in Sicily: Everyone knows they're there, but its hard to find the evidence.” (Anonymous) This paper is essentially a verbatim reconstruction of lectures that I gave at the Los Alamos School on Relativistic Dynamics and Quark Nuclear Physics in 1985.", "They were published in the school proceedings[1], but the book is not widely available.", "The Los Alamos School took place at the height of the first wave of interest in the quark substructure of nuclei, stimulated by the 1983 discovery of the EMC Effect[2].", "Interest in this subject has been increasing for years and the prospect of a dedicated Electron Ion Collider within the decade guarantees even greater attention to quarks and gluons in nuclei among both theorists and experimentalists.", "Recently, to my surprise, I learned that copies of my old lectures have been circulating and been found useful by the relatively few people who know about them.", "The are, of course, dated: experiments have far outstripped what was available 37 years ago and theory has progressed too.", "However, the rest frame derivation of the parton model, the derivation and discussion of the convolution formalism for nucleons, nucleon correlations, and other, virtual, constituents of nuclei, and sections on scaling violation and the operator product expansion have aged pretty well and seem to still be useful.", "With the help and encouragement of Richard Milner, I have recreated the LaTeX files necessary to post the 1985 Lectures on the arXiv, making them available to the nuclear and particle physics community.", "Apart from correcting some typographical errors, I have made no attempt to edit, improve, or update these lectures.", "I hope readers will nevertheless find them useful.", "These lectures are addressed to a very specific audience: graduate students and young researchers in theoretical nuclear physics.", "They contain very little that cannot be found elsewhere in the vast literature on the subject, but they summarize a particular viewpoint which is both straightforward and fairly correct.", "My intention is to provide a reasonably thorough introduction to inclusive, inelastic electron scattering and enough advanced material to excite the reader to go on by him/herself.", "The reader is assumed to have some familiarity with relativistic quantum mechanics and with the more elementary side of quantum field theory.", "Some previous exposure to QCD $-$ quarks, color, gluons, SU(3) and the like $-$ will help.", "But; previous exposure to renormalization theory, the renormalization group and the operator product expansion is not assumed.", "I have tried to avoid the temptation to oversimplify $-$ the subject has its subtleties and it is not possible to do good work in the field without understanding them.", "More simplified treatments can be found in books by Feynman (Photon-Hadron Interactions, (W. A. Benjamin, New York (1972)) and by Close (An Introduction to Quarks and Hadrons (Academic Press, New York, 1979)).", "Equally, I have tried to avoid too much formalism.", "I hope the power of the more advanced methods developed in §5 and applied in §6 will encourage the reader to pursue the subject more formally in the future.", "More advanced treatments can be found in tho lecture notes of D. J.", "Gross (in Methods in Field Theory, Proc.", "of 1975 Les Houches Summer School, (ed., R. Balian and J. 
Zinn-Justin, North-Holland, Amsterdam)) and C.H. Llewellyn Smith (Topics in Quantum Chromodynamics, in Quantum Flavordynamics, Quantum Chromodynamics and Unified Theories (ed., K. T. Mahanthappa and J. Randa, Plenum Press, New York, 1980)), and in the recent books by C. Itzykson and J. Zuber (Introduction to Quantum Field Theory, McGraw-Hill, New York, 1980) and T. P. Cheng and L.F. Li (Gauge Theories of Elementary Particle Physics, Clarendon Press, Oxford, 1984).", "In the interest of time, I have had to eliminate all mention of inclusive, inelastic scattering of neutrinos and other related processes such as electron-positron annihilation and Drell-Yan production of lepton pairs.", "These too can be found treated in detail elsewhere in the literature.", "Finally, I would like to thank Frank Close, Chris Llewellyn-Smith, Dick Roberts and especially Graham Ross for collaboration on many of the ideas presented here.", "I would also like to thank Gerry Garvey and Mikkel Johnson for making it possible for me to attend the school, and Roger Gilson for preparing the manuscript.", "The quark description of hadrons is now universally accepted, but its implications for nuclear physics are anything but clear.", "Nuclear structure and low energy nuclear reactions are well-described by a variety of semi-phenomenological theories which leave little room for insight from thinking about quark and gluon dynamics.", "The reason is obvious: the energies involved in most nuclear phenomena are so low compared to the natural scale of quark dynamics that quark and gluon degrees of freedom are effectively frozen into nucleon and meson “quasiparticles\".", "To support this, consider first that the string tension in QCD is $\sim $ 1 GeV/fm, so it costs $\sim $ 1 GeV to separate a quark from two others (or from an antiquark) by an additional fermi beyond their equilibrium separation; and second, that the N – $\Delta $ mass difference is 300 MeV, so it costs $\sim $ 300 MeV to flip a quark's spin relative to the two others which together with it form a nucleon.", "Both energies are much larger than typical nuclear energy scales.", "Thus, $\Delta $ s are relegated to a relatively minor role in the description of nuclei, and quarks (as distinct from nucleons and mesons) are even less important.", "It is not even clear, as a matter of principle, how to establish the necessity of a fundamentally “quarkic\" description of nuclei.", "Because color is confined, it is almost always possible to find an entirely equivalent hadronic description of some quark-dynamical process.", "For example, there has been much interest in isolating so-called “hidden color\" components in the deuteron wave function.", "These are of the form [(q$^3$ )$^8$ –(q$^3$ )$^8$ ]$^1$ : although the entire six quark system is a color singlet, each group of three quarks is coupled to a color octet.", "Such admixtures have been invoked, for example, to resolve discrepancies in models of deuteron photodisintegration.", "It is trivial to show, however, that any such configuration can be rewritten, after a change of coupling transformation, as a sum of configurations in which each set of three quarks is coupled to a color singlet, i.e., an ordinary baryon.", "[The proof is merely to note that $\epsilon _{abc}$ is the only covariant, invariant tensor in SU(3).]", "So any hidden color state can be equally well represented as a sum over conventional baryon-baryon configurations, and any phenomenon ascribed to hidden color could be explained equivalently by 
a sufficiently clever theorist who had never heard of quarks and color.", "The quark theorist might triumph in the end if their description were simpler and more economical: one “hidden color\" state might do as well as some complex superposition of baryon resonances.", "But this is not the “smoking gun\" enthusiasts are seeking.", "Similar observations could have been made about low energy hadron physics twenty years ago, and one therefore wonders how the quark description of hadrons was established in the first place, which brings me to the subject of these lectures.", "Of course, quarks were recognized as a convenient bookkeeping device as early as 1964, but the need for a quark dynamics for hadrons was not compelling until a set of experiments was performed at SLAC in the late 1960s.", "High energy electrons were scattered inelastically from nucleons in close analogy to the $\\alpha $ -particle scattering experiments performed by Rutherford in the early 1900s.", "In both cases, the experimenters were surprised to find that the scattering cross section remained large even at high momentum transfer indicating the presence of point-like scattering centers within the target.", "Further studies with electrons and then with neutrinos established the spin, charge and baryon number of the point-like objects within hadrons and showed they were quarks.", "It was also recognized that these experiments not only detect quarks but directly measure their distribution within the target baryon.", "During the early 1970s, it became clear that only a non-Abelian gauge field theory, quantum chromodynamics (QCD), could explain the qualitative features of quark dynamics.", "One of its triumphs was the quantitative prediction of the details of inelastic electron scattering in the very high momentum transfer, or deep inelastic limit.", "Despite the complexity of QCD, the simple Rutherford picture remains valid: the scattering electron measures the quark distribution in the target.", "These lectures explore the use of deep inelastic electron scattering as a probe of the quark distribution in nuclei.", "For more than a decade it apparently was thought that the quark distribution in a nucleus was simply given by the quark distribution in so many neutrons and protons, corrected for the fact that they are in motion within the nucleus (Fermi motion).", "Little attention was paid to quark distributions in nuclei.", "It came as a surprise to particle and nuclear theorists alike when the first careful comparison of a nuclear target (iron) with a nucleon (actually a deuteron) presented at the 1982 Paris Conference by the European Muon Collaboration (EMC), showed a $\\sim $ 15% difference in a region where Fermi motion effects are thought to be negligible.", "The “EMC effect\" as it is known, was immediately confirmed by experiments at SLAC and elsewhere.", "Theorists quickly presented a variety of explanations of the EMC effect.", "Now, three years later, there are several well-developed schools of thought, much controversy, and only a little agreement among partisans about the origin of the effect.", "Most explanations are based on a “convolution model\" formalism in which one supposes the nucleus contains, in addition to nucleons, some small admixture of more exotic constituents (pions, multiquark bags, $\\Delta $ s, $\\alpha $ clusters ... 
), and then adds up their quark distributions.", "Unfortunately, this approach has a hitch: the assumption that quark distributions in constituents add incoherently is not justified and in many cases probably wrong.", "Another approach, known as “rescaling\" suggested by the scale transformation properties of QCD, avoids the questionable assumption of convolution models by dealing directly and solely with the quark distribution of the nucleus.", "The result is simple and striking: the EMC effect shows that the typical length scale associated with quark propagation in the nuclear ground state is longer than the corresponding length scale in the nucleon.", "For a particle physicist, one of the most interesting features of this approach is its implications for QCD at finite density.", "Nuclei provide samples of quark/nuclear matter at a variety of mean densities (as $A$ increases, the surface to volume ratio goes to zero so the mean density grows).The $A$ dependence of the EMC effect closely reflects the variations in mean nuclear density, indicating that the quark length scale in quark/nuclear matter grows with density.", "In these lectures, I have tried to avoid detailed analysis of models of the EMC effect, although critics of the “rescaling\" approach, to which I am devoted, will point out that I present it in considerable detail.", "In fact, much of the rescaling analysis is model independent and essential for a modern education in the subject.", "In §1, I introduce the kinematic variables in coordinate and momentum space.", "The whole discussion is set in the laboratory or target rest frame.", "Certain general tools like dispersion relations are reviewed and summarized there.", "In §2, I present the parton model from a somewhat unfamiliar point-of-view, one which experts will recognize as imitating the more formal operator product expansion analysis used in QCD.", "The reason for this approach is to keep as close to coordinate space as possible since considerable insight into the parton model and the EMC effect comes from an easy fluency between coordinate and momentum space.", "In §3, I summarize the data, as far as I know it, and use the parton model to interpret it.", "§4, is dedicated to convolution models.", "Since so much work has been based on these models, I have tried to present them in detail, illustrated by the case in which the constituents are the nucleons themselves (Fermi motion), and let the readers judge their utility for themselves.", "In §5, I return to fundamentals.", "QCD changed our understanding of inelastic scattering and corrected and extended the parton model.", "I have tried to introduce the QCD analysis with as little excess formalism as possible, though for a real working knowledge the reader will have to learn more about gauge field theory and the renormalization group elsewhere.", "Finally, in §6, I use the powerful methods developed in §5 to help give a new way of looking at the EMC effect, and draw some surprisingly simple conclusions.", "We are interested in the process $eA \\rightarrow e^\\prime X$ where $A$ is a nucleus, the proton and neutron being important special cases, and $X$ is an unobserved hadronic final state.", "The electron and nucleus are assumed unpolarized, although polarization dependent effects can be handled in the same fashion.", "The process is known as inelastic electron scattering or inclusive electroproduction.", "To lowest order in $\\alpha $ , the process is described [3] by one photon exchange (Fig.", "REF ): $A \\propto 
\\bar{u}(k^\\prime )\\gamma ^\\mu u(k)\\frac{1}{q^2}\\mathinner {\\langle {X|J_\\mu (0)|p}\\rangle } \\ , \\qquad \\mathrm {(1.1)}$ where $J_\\mu (0)$ is the hadronic electromagnetic current operator.", "The differential cross-section for scattering in which $X$ is not observed is proportional to $\\Sigma _X |A|^2 (2 \\pi )^4 \\delta ^4(p+q-p_X)$ , or $d \\sigma \\propto l^{\\mu \\nu } W_{\\mu \\nu } \\ , \\qquad \\mathrm {(1.2)}$ where $l^{\\mu \\nu } = \\frac{1}{2} Tr {k}^\\prime \\gamma ^\\mu {k} \\gamma ^\\nu = 2({k^\\prime }^\\mu k^\\nu + {k^\\prime }^\\nu k^\\mu - g^{\\mu \\nu } k \\cdot k^\\prime ) \\qquad \\mathrm {(1.3)}$ (we ignore the electron mass), and $W_{\\mu \\nu } \\equiv \\frac{1}{4 \\pi } \\Sigma _X (2 \\pi )^4 \\delta ^4(p+q-p_X) \\mathinner {\\langle {p|J_\\mu (0)|X}\\rangle } \\mathinner {\\langle {X|J_\\nu (0)|p}\\rangle } \\ .", "\\qquad \\mathrm {(1.4)}$ Figure: Inclusive inelastic electron scattering via one photon exchange.", "EE, E ' E^\\prime and θ\\theta are defined in the target rest frame.$W_{\\mu \\nu }$ contains all reference to hadronic states.", "It is represented graphically in Fig.", "REF .", "Figure: W μν W_{\\mu \\nu } which determines the cross section for electron scattering and also, the imaginary part of forward, virtual Compton scattering.Eq.", "(1.4) may be simplified by replacing $(2 \\pi )^4 \\delta ^4(p+q-p_X) \\equiv \\int d^4 \\xi \\ e^{i(p+q-p_X) \\cdot \\xi } \\ , \\qquad \\mathrm {(1.5)}$ translating $J_\\mu (0)$ to the space-time point $\\xi $ , and using completeness ($\\Sigma _X|X><X| = 1$ ): $W_{\\mu \\nu } = \\frac{1}{4 \\pi } \\int d^4 \\xi \\ e^{iq \\cdot \\xi } \\mathinner {\\langle {p|J_\\mu (\\xi )J_\\nu (0)|p}\\rangle }_c \\ .", "\\qquad \\mathrm {(1.6)}$ The subscript $c$ on the matrix element denotes “connected\" and ensures that vacuum-to-vacuum transitions of the form $\\mathinner {\\langle {0|J_\\mu (\\xi ) J_\\nu (0)|0}\\rangle }\\mathinner {\\langle {p|p}\\rangle }$ are excluded.", "The current product in Eq.", "(1.6) can be replaced by a commutator $W_{\\mu \\nu } =\\frac{1}{4 \\pi } \\int d^4 \\xi \\ e^{iq \\cdot \\xi } \\mathinner {\\langle {p|[J_\\mu (\\xi ),J_\\nu (0)]|p}\\rangle }_c\\qquad \\mathrm {(1.7)}$ because the term we have subtracted vanishes for stable targets if $q^0 > 0$ .", "In quantum field theory, it is most convenient to deal with time-ordered products of operators.", "Thus, $T_{\\mu \\nu } =i \\int d^4 \\xi \\ e^{iq \\cdot \\xi } \\mathinner {\\langle {p|T(J_\\mu (\\xi )J_\\nu (0))|p}\\rangle }_c\\qquad \\mathrm {(1.8)}$ is the amplitude [4] for forward scattering of a virtual photon of momentum $q$ from a hadronic target of momentum $p$ (Fig.", "REF ).", "Figure: Forward virtual Compton scattering.It is easily seen that $W_{\\mu \\nu }$ is the imaginary part of $T_{\\mu \\nu }$ (when $q^0$ is taken to have a small positive imaginary part) $W_{\\mu \\nu } = \\frac{1}{2 \\pi } IT_{\\mu \\nu } \\ .", "\\qquad \\mathrm {(1.9)}$ So, inclusive electroproduction is intimately related to virtual Compton scattering.", "$W_{\\mu \\nu }$ and $T_{\\mu \\nu }$ can be decomposed in terms of a pair of Lorentz invariant “structure functions\" $W_{1,2}$ ($T_{1,2}$ ): $W_{\\mu \\nu } &= - \\left(g_{\\mu \\nu } - \\frac{q_\\mu q_\\nu }{q^2} \\right) W_1(q^2,\\nu ) \\nonumber \\\\&+ \\frac{1}{M^2_T} \\left( p_\\mu - \\frac{M_T \\nu }{q^2} q_\\mu \\right)\\left( p_\\nu - \\frac{M_T \\nu }{q^2} q_\\nu \\right) W_2(q^2,\\nu ) $ and likewise for $T_{\\mu \\nu }$ .", "$W_{1,2}$ are functions of the of the Lorentz 
invariants $q^2$ and $p \cdot q = M_T \nu $ ($q^2 = -4EE^\prime \sin ^2 \frac{\theta }{2}$ and $\nu = E-E^\prime $ in the target rest frame), where $M_T$ is the target mass. No other terms are allowed by Lorentz invariance, current conservation ($q^\mu W_{\mu \nu } = W_{\mu \nu } q^\nu = 0$) and parity. The structure functions depend on the squared four momentum transfer $q^2 \equiv - Q^2 = \nu ^2 - {\cal {Q}}^2$, which is spacelike (so $Q^2 > 0$), and the energy transfer in the laboratory, $\nu = q^0$, which is positive. The squared mass of the final hadronic state, $X$, is $(p + q)^2 = M_T^2 + 2M_T \nu - Q^2 \equiv W^2$ and is greater than $M_T^2$. Thus, the “scaling” variable, $x_T \equiv Q^2 /2M_T \nu $, must lie between 0 and 1. Often experimentalists and theorists prefer to use a uniform scaling variable $x = x_N = Q^2 /2M\nu $ ($M \equiv M_N$ is the nucleon mass) for all targets. The reason for this is that the structure functions of different nuclei look to first order like the sum of structure functions for $A$ independent nucleons. They are therefore very small for $x > 1$ regardless of $A$. It is easy to get confused between the scaling variable intrinsic to a target of mass $M_T$, $x_T \equiv Q^2 /2M_T \nu $, which is bounded between 0 and 1, and the uniform scaling variable $x = x_N = Q^2/2M\nu $, which ranges from zero to $M_T/M$. Note that $M_T/M \approx A$ for a nucleus of mass number $A$, so the distinction between the two variables is quite significant for nuclear targets. Bjorken suggested [5] that in the limit of large $Q^2$ at fixed $x_T$ (now known as the Bjorken or “deep” inelastic limit) $W_1$ and $\frac{\nu }{M_T} W_2$ should become functions of $x_T$ alone: $\lim _{Bj} W_1 (q^2,\nu ) &= F_1(x_T)\nonumber \\\lim _{Bj} \frac{\nu }{M_T} W_2(q^2,\nu ) &= F_2(x_T) \ ,$ which is approximately verified by experiment. This phenomenon is known as “Bjorken scaling” or just “scaling” for short. The kinematic range of inclusive scattering is shown in Fig. REF. Note that the $Q^2 \rightarrow 0$ limit gives photoproduction. Figure: Kinematic variables for inelastic electron scattering.

The virtual forward Compton amplitude, $T_{\mu \nu }$, possesses simple properties when regarded as an analytic function of $\nu $ at fixed $q^2$. These are special cases of the general results of dispersion theory, most of which are now largely forgotten [6]. Since I will need these properties in the subsequent chapters, I will review them here. For pedagogical simplicity I will ignore spin and analyze a hypothetical virtual “Compton” amplitude $T(q^2, \nu )$ for a scalar “photon” scattering from a proton or neutron, defined by $T(q^2,\nu ) \equiv i \int d^4 \xi \ e^{i q \cdot \xi } \mathinner {\langle {p|T(J(\xi )J(0))|p}\rangle } \ . \qquad \mathrm {(1.12)}$ (The generalization to other targets is straightforward.) The generalization to the physically interesting case of $T_{\mu \nu }$ will be quoted at the end. $T(q^2 , \nu )$ is a real analytic function of $\nu $ at fixed $q^2$, $T(q^2,\nu ^\star ) = T^\star (q^2,\nu )\ ,$ and is crossing symmetric, $T(q^2,\nu ) = T(q^2,-\nu ) \ .$ The fundamental assumption of dispersion theory is that scattering amplitudes are analytic except at values of the kinematic variables which allow intermediate states to be physical (i.e., on shell).
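To make the kinematics introduced above concrete, here is a short numerical sketch. It is my own illustration, not part of the original lectures; the function name `dis_kinematics` and the beam settings are invented for the example.

```python
# A minimal numerical illustration of the kinematics above (not from the
# original lectures): given the beam energy E, scattered energy E' and angle
# theta in the target rest frame, compute Q^2, nu, W^2 and the two scaling
# variables x_T and x, and check that the event lies in the physical region.
import math

M_N = 0.938  # nucleon mass in GeV (approximate)

def dis_kinematics(E, E_prime, theta, M_T=M_N):
    """Return Q^2, nu, W^2, x_T and x for inclusive scattering e T -> e' X."""
    Q2 = 4.0 * E * E_prime * math.sin(theta / 2.0) ** 2   # Q^2 = -q^2 > 0
    nu = E - E_prime                                      # energy transfer
    W2 = M_T**2 + 2.0 * M_T * nu - Q2                     # squared mass of X
    x_T = Q2 / (2.0 * M_T * nu)                           # intrinsic variable, 0 < x_T < 1
    x = Q2 / (2.0 * M_N * nu)                             # uniform variable, 0 < x < M_T/M
    return Q2, nu, W2, x_T, x

# Example: a deep inelastic point on a proton target.
Q2, nu, W2, x_T, x = dis_kinematics(E=20.0, E_prime=8.0, theta=0.25)
assert W2 >= M_N**2 and 0.0 < x_T < 1.0   # physical (inelastic) region
print(f"Q^2={Q2:.2f} GeV^2, nu={nu:.2f} GeV, W^2={W2:.2f} GeV^2, x_T={x_T:.3f}")

# For an iron target (A = 56) the same Q^2 and nu give a much smaller x_T,
# while the uniform variable x is unchanged: x ranges up to M_T/M ~ A.
_, _, _, x_Fe, _ = dis_kinematics(E=20.0, E_prime=8.0, theta=0.25, M_T=56 * M_N)
print(f"x_T(iron)={x_Fe:.4f}  vs  x={x:.3f}")
```

This analyticity assumption can be proven to all orders of perturbation theory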
[7], but must be regarded as an assumption in QCD where the quanta of the perturbation theory (quarks and gluons) are not physical states. When $\nu \ge - q^2/2M$, the virtual photon-target system can form a physical hadronic intermediate state, so $T(q^2,\nu )$ has a cut along the positive real-$\nu$ axis. More precisely, $T(q^2,\nu )$ has a pole at $\nu = -q^2/2M$ corresponding to elastic scattering ($ep \rightarrow e^\prime p^\prime $) and a cut beginning at pion production threshold. $T(q^2,\nu )$ also has a cut on the negative real axis corresponding to the “crossed” process $p \rightarrow \gamma + x$, which is physically allowed when $\nu \le -|q|^2/2M$. The discontinuity across the right hand cut is ${\rm disc}\ T(q^2,\nu ) = 2i \,IT(q^2,\nu ) = 4\pi i W(q^2,\nu ) \ . \qquad \mathrm {(1.15)}$ These analytic properties are summarized in Fig. REF. Figure: The complex $\nu$-plane. The contour shown figures in the derivation of dispersion relations. Using Cauchy's theorem on the contour shown in Fig. REF, it is possible to derive a “dispersion relation” for $T(q^2,\nu )$, $T(q^2,\nu ) = 4 \int _{-q^2/2M}^{\infty } \frac{d \nu ^\prime \nu ^\prime }{{\nu ^\prime }^2 - \nu ^2} W(q^2,\nu ^\prime ) \ . \qquad \mathrm {(1.16)}$ Here, $\nu $ is complex ($\nu ^\prime $ is real) and the physical Compton amplitude is obtained by letting it approach the positive real axis from above, $\nu \rightarrow \nu _R + i \epsilon $. If $W(q^2,\nu ^\prime )$ falls too slowly as $ \nu ^\prime \rightarrow \infty $, then the integral may not converge. In that case one can derive a weaker, “subtracted” dispersion relation. Formally, take Eq. (1.16) for two different choices of $\nu $ and subtract, e.g. $T(q^2,\nu ) = T(q^2,0) +4 \nu ^2 \int _{-q^2/2M}^{\infty } \frac{d \nu ^\prime }{\nu ^\prime ({\nu ^\prime }^2 - \nu ^2)} W(q^2,\nu ^\prime ) \ . \qquad \mathrm {(1.17)}$ The integral is better behaved at large $\nu ^\prime $. This procedure can be continued as far as necessary and works as long as $W(q^2,\nu )$ is polynomial bounded as $\nu \rightarrow \infty $, which we will assume (see below). Modulo subtractions, the real part of $T$ can be calculated if $W$ is known; thus dispersion relations can be (and have been) verified experimentally. It is convenient to change variables to $\omega \equiv -2M \nu /q^2$ in Eq. (1.16), $T(q^2,\omega ) = 4 \int _1^\infty \frac{\omega ^\prime d \omega ^\prime }{{\omega ^\prime }^2 - \omega ^2} W(q^2,\omega ^\prime ) \ . \qquad \mathrm {(1.18)}$ Note that $T(q^2,\omega )$ is analytic in the circle of radius 1 about $\omega = 0$ and may therefore be expanded in a Taylor series for $|\omega | < 1$: $T(q^2,\omega ) = 4 \sum _{n\,\rm even} M^n(q^2) \omega ^n \ , \qquad \mathrm {(1.19)}$ with $M^n(q^2) &= \int _1^\infty d \omega ^\prime {\omega ^\prime }^{-n-1} W(q^2,\omega ^\prime ) \nonumber \\&= \int _0^1 dx\ x^{n-1} W(q^2,x) \ .$
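As a quick numerical sanity check of Eqs. (1.18)-(1.20), one can verify that the dispersion integral and the moment (Taylor) expansion agree inside the unit circle. The sketch below is my own illustration with an arbitrary toy choice of $W$; it is not taken from the text or from data.

```python
# Toy check of the dispersion relation (1.18) against the moment expansion
# (1.19)-(1.20). W_toy is an arbitrary smooth "structure function" of omega';
# nothing here is fit to data.
import numpy as np
from scipy.integrate import quad

def W_toy(omega_p):
    """Arbitrary toy W(omega'): vanishes at threshold omega'=1, falls at large omega'."""
    x = 1.0 / omega_p
    return x**0.5 * (1.0 - x)**3      # i.e. W(x) ~ sqrt(x) (1-x)^3

def T_dispersion(omega):
    """T(omega) from Eq. (1.18): 4 * int_1^inf dw' w' W(w') / (w'^2 - omega^2)."""
    val, _ = quad(lambda wp: 4.0 * wp * W_toy(wp) / (wp**2 - omega**2), 1.0, np.inf)
    return val

def moment(n):
    """M^n from Eq. (1.20): int_0^1 dx x^(n-1) W(x)."""
    val, _ = quad(lambda x: x**(n - 1) * W_toy(1.0 / x), 0.0, 1.0)
    return val

def T_moments(omega, n_max=40):
    """T(omega) from the Taylor series (1.19), summed over even n."""
    return 4.0 * sum(moment(n) * omega**n for n in range(0, n_max + 1, 2))

for omega in (0.2, 0.5, 0.8):
    print(omega, T_dispersion(omega), T_moments(omega))   # the two agree for |omega| < 1
```

The physically interesting case of $T_{\mu \nu }$ is summarized by dispersion relations for the two invariant amplitudes $T_{1,2}(q^2,\omega )$. $T_1(q^2,\omega )$ requires subtraction: $T_1(q^2,\omega ) &= T_1(q^2,0) + 4 \omega ^2 \int _1^\infty d \omega ^\prime \frac{W_1(q^2,\omega ^\prime )}{\omega ^\prime ({\omega ^\prime }^2-\omega ^2)} \nonumber \\\frac{\nu T_2}{M_T}(q^2,\omega )&= 4 \omega \int _1^\infty d \omega ^\prime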
\\frac{\\frac{\\nu W_2}{M}(q^2,\\omega ^\\prime )}{{\\omega ^\\prime }^2 - \\omega ^2} \\ .$ Most introductory treatments of deep inelastic lepton scattering are formulated in the “infinite momentum frame\" where the target is boosted to some arbitrarily large momentum $P_\\infty $ .", "I find this approach both unnecessary and misleading and prefer instead to work in the target rest frame where I suppose nuclear physicists also feel at home.", "Of course, the physics is frame independent, at least if one is careful enough [8].", "In the rest frame of a target with mass $M_T$ , the Bjorken limit takes on a particularly simple form.", "We choose the negative $z$ -axis to lie along the virtual photon direction: $q = (\\nu ,0,0,-\\sqrt{\\nu ^2+Q^2}) \\ .", "\\qquad \\mathrm {(1.22)}$ As $Q^2 \\rightarrow \\infty $ with $x$ fixed, $Q^2/\\nu ^2 \\rightarrow 0$ so $q \\rightarrow (\\nu ,0,0,-\\nu -M_Tx_T) \\ .", "\\qquad \\mathrm {(1.23)}$ The significance of this form is most transparent if we introduce “light-cone\" coordinates, $q^\\pm = \\frac{1}{\\sqrt{2}}(q^0 \\pm q^3) \\ .", "\\qquad \\mathrm {(1.24)}$ [Note: $a \\cdot b = a^+b^- + a^-b^+ - {\\bf a}_\\perp \\cdot {\\bf b}_\\perp $ , $a^2 = 2a^+a^- - {\\bf a}^2_\\perp $ , so $g^{+-} = g^{-+} = 1$ and $a_\\mp =a^\\pm $ .", "To avoid confusion, I will stick to contravariant indices.]", "In the Bjorken limit $q^- \\rightarrow \\infty $ but $q^+ \\rightarrow -M_Tx_T/\\sqrt{2}$ , i.e., $q^+$ remains finite.", "Note $M_Tx_T = Mx = Q^2/2\\nu $ , so the limiting value of $q^+$ is independent of $M_T$ .", "We will frequently discuss the important distant scales which contribute to electroproduction.", "The distance referred to is the space-time separation, $\\xi _\\lambda $ , between the points at which the currents $J_\\nu $ and $J_\\nu $ act.", "Note that $\\xi _\\lambda $ is only defined in the Compton amplitude, $T_{\\mu \\nu }$ , or its imaginary part $W_{\\mu \\nu }$ , not in the electroproduction amplitude $A$ itself (Eq.", "1.1).", "Since $\\xi $ and $q$ appear as conjugate variables in the definition of $W_{\\mu \\nu }$ and $q \\cdot \\xi \\equiv q^+\\xi ^- + q^- \\xi ^+$ , $q^- \\rightarrow \\infty $ forces $\\xi ^+ \\rightarrow 0$ , but $q^+=-Mx/\\sqrt{2}$ requires only $|\\xi ^-|\\lesssim \\sqrt{2}/Mx$ .", "The first of these relations, $\\xi ^+ \\rightarrow 0$ , follows from general theorems on Fourier transforms [9].", "Let $\\tilde{f}(q^-) = \\int d \\xi ^+ e^{i q^- \\cdot \\xi ^+} f(\\xi ^+)\\ .", "\\qquad \\mathrm {(1.25)}$ If ${f}(\\xi ^+)$ is smooth (infinitely differentiable everywhere) and well-behaved as $|\\xi ^+| \\rightarrow \\infty $ , then $\\tilde{f}(q^-)$ vanishes faster than any power of $q^-$ as $q^- \\rightarrow \\infty $ .", "But, if $f(\\xi ^+)$ has singularities, then the $q^- \\rightarrow \\infty $ limit is dominated by the behavior of $f(\\xi ^+)$ near the singularities.", "Suppose, for example, $f(\\xi ^+) =0$ for $\\xi ^+ \\le 0$ , then as $q^- \\rightarrow \\infty $ $\\tilde{f}(q^-) \\sim \\frac{i}{q^-} f(0) - (\\frac{i}{q^-})^2 f^\\prime (0) + ...\\ , \\qquad \\mathrm {(1.26)}$ as easily obtained from integration by parts.", "The integrand in Eq.", "(1.7) defining $W_{\\mu \\nu }$ is singular at $\\xi ^+ =0$ since it vanishes when $\\xi ^2 <0$ , so the $q^- \\rightarrow \\infty $ limit is dominated by $\\xi ^+ \\approx 0$ .", "The second relation, $|\\xi ^-|\\lesssim \\sqrt{2}/Mx$ , is more subtle.", "After all the dust settles (in $§$ 2) the structure functions will be given by Fourier transforms in 
$q^+=-Mx/\\sqrt{2}$ of a smooth function of $\\xi ^-$ .", "The Fourier transform, however, is only conditionally convergent at large $\\xi ^-$ .", "It diverges like $1/\\sqrt{x}$ or $1/x$ (for “valence\" or “ocean\" quarks, see $§2$ as $x \\rightarrow 0$ , so the oscillation in the exponential acts as a large $\\xi ^-$ cutoff.", "The larger $\\sqrt{2}/Mx$ , the larger range of $\\xi ^-$ contributes to the structure function.", "For a more complete discussion with examples see the Heidelberg talk by Llewellyn Smith in [10].", "Since the commutator in Eq.", "(1.7) is causal, $\\xi ^2 = 2\\xi ^+\\xi ^- - {\\xi }^2_\\perp $ is positive.", "Hence, $\\xi ^+ \\rightarrow 0$ with $\\xi ^-$ finite requires $\\xi _\\perp \\rightarrow 0$ .", "All components of $\\xi ^\\lambda $ except $\\xi ^-$ vanish in the Bjorken limit.", "Thus, deep inelastic scattering is not, as is often incorrectly remarked, a short distance ($\\xi _\\lambda \\rightarrow 0$ ) phenomenon; it is instead a light-cone ($\\xi ^2 \\rightarrow 0$ ) dominated process.", "This distinction is crucial to understanding the dynamics and yields considerable insight into nuclear effects in inclusive electron scattering.", "Together $\\xi ^+ \\rightarrow 0$ and $|\\xi ^-| \\lesssim \\sqrt{2}/Mx$ imply $|\\xi ^0|\\lesssim 1/Mx$ and $|\\xi ^3| \\lesssim 1/Mx$ .", "$W_{\\mu \\nu }$ measures a current-current correlation function in the target ground state (c.f.", "Eq.", "(1.7)).", "In the Bjorken limit, the correlation probed becomes light-like, but may extend to very large spatial distances (and times) in the small $x$ limit.", "Note that the ranges of $\\xi ^0$ and $\\xi ^3$ probed by the correlated pair of currents in the target rest frame are independent of the target mass because $1/Mx=2\\nu /Q^2$ .", "The relation between $x$ and $|\\xi ^3|$ will be crucial to our analysis of nuclear effects in leptoproduction.", "It is often convenient to use different structure functions to describe $W_{\\mu \\nu }$ .", "A particularly important pair are $W_L$ and $W_T$ defined by $W_T &= W_1 \\nonumber \\\\W_L&= \\left( 1 + \\frac{\\nu ^2}{Q^2} \\right) W_2 - W_1 \\ .$ $W_T$ and $W_L$ arise when one considers the polarization of the virtual photon, $\\epsilon _\\mu $ .", "The leptonic current produces a flux of virtual photons which may be transverse, $\\epsilon ^\\mu _{T_1} &= (0, 0, 1, 0) \\nonumber \\\\\\epsilon ^\\mu _{T_2}&= (0, 1, 0, 0) \\ ,$ or longitudinal $\\epsilon ^\\mu _L = \\frac{1}{\\sqrt{Q^2}}(\\sqrt{\\nu ^2+Q^2}, 0, 0, -\\nu )\\ , \\qquad \\mathrm {(1.29)}$ in the target rest frame in which $q^\\mu $ is given by Eq.", "(1.21).", "Note $\\epsilon \\cdot q =0$ and $\\epsilon ^2_{T_j} = - \\epsilon ^2_L = -1$ .", "$W_T$ and $W_L$ are the components of $W_{\\mu \\nu }$ which couple to $\\epsilon _T$ and $\\epsilon _L$ , respectively: $W_T &\\equiv \\epsilon ^\\mu _{T_j} W_{\\mu \\nu } \\epsilon ^\\nu _{T_j} \\nonumber \\\\W_L&\\equiv \\epsilon ^\\mu _{L} W_{\\mu \\nu } \\epsilon ^\\nu _{L} \\ .$ They are proportional to the total cross sections for absorption of a transverse or longitudinal polarized virtual photon, respectively, $\\sigma _{T,L}(q^2,\\nu ) = \\frac{4 \\pi ^2 \\alpha }{k} W_{T,L}(q^2,\\nu ) \\qquad \\mathrm {(1.31)}$ where $k= (W^2-M^2_T)/2M_T$ .", "$\\sigma _L$ and $\\sigma _T$ are positive, so $W_1 \\ge 0$ and $(1+\\nu ^2/Q^2)W_2 \\ge W_1$ .", "As $q^2 \\rightarrow 0$ , $\\sigma _L$ vanishes and $\\sigma _T$ approaches the photoproduction cross section for real photons.", "Experimentalists measure $W_1$ and $W_2$ by 
comparing cross sections at fixed $q^2$ and $\\nu $ , but different values of $E$ , $E^\\prime $ and $\\theta $ : $\\frac{d^2 \\sigma }{dE^\\prime d \\Omega } = \\frac{4 \\alpha ^2{E^\\prime }^2 \\cos ^2 \\theta /2}{q^4} \\left[W_2(q^2,\\nu )+2W_1(q^2,\\nu ) \\tan ^2 \\theta /2 \\right]\\ .", "\\qquad \\mathrm {(1.32)}$ In practice, it is convenient to write the differential cross section in terms of $\\sigma _L$ , $\\sigma _T$ and a parameter $\\epsilon $ : $\\epsilon ^{-1} \\equiv 1+2 \\left( 1+ \\frac{\\nu ^2}{Q^2} \\right) \\tan ^2 \\theta /2\\ , \\qquad \\mathrm {(1.33)}$ $\\frac{d^2 \\sigma }{dE^\\prime d \\Omega } = \\frac{\\alpha }{4 \\pi ^2 Q^2} \\frac{kE^\\prime }{E} \\left( \\frac{2}{1- \\epsilon } \\right) [\\sigma _T(q^2,\\nu ) + \\epsilon \\sigma _L(q^2,\\nu )]\\ .", "\\qquad \\mathrm {(1.34)}$ Apart from kinematic factors, $d^2\\sigma $ is a linear function of $\\epsilon $ .", "A linear fit to the data yields $\\sigma _L + \\sigma _T \\propto \\nu W_2$ as the intercept at $\\epsilon = 1$ , and $R \\equiv \\sigma _L/\\sigma _T$ as the slope.", "$R$ is difficult to measure.", "Data sets from different spectrometer settings and different beam energies must be combined, which introduces systematic uncertainties.", "At high beam energies $\\epsilon \\approx 1$ so the experiments are not sensitive to $R$ .", "Anyone who studies inelastic electron scattering should begin with the parton model of Bjorken and Feynman.", "There are many fine sources from which to learn it [8]; one or more should be studied in conjunction with these lectures.", "I, too, will describe the parton model, but quickly and maintaining as much contact with coordinate space as possible.", "If you have never seen the parton model before, the derivation presented here will appear difficult and rather formal.", "The parton model is often misused.", "To learn how to not misuse it one must approach the model who more formally, e.g, via the operator product expansion (OPE).", "Anyone who intends to do research in this field is strongly advised to study the OPE and the renormalization group in QCD beforehand [11], but I will only touch briefly on them.", "It is conventional to motivate the parton model by arguing that in some sense interactions can be ignored near the light cone.", "There is no realistic theory in which this is true, though in QCD it is approximately true.", "In $§5$ , we will develop a more precise language for such matters.", "Here, I will only state the assumptions which lead to the model.", "The first assumption of the parton model is that the current $J_\\mu $ couples to quarks (as opposed to fundamental scalars, etc.).", "Then the contributions to the forward virtual Compton amplitude can be classified by the flow and interactions of quark lines.", "The second assumption is that at large values of $Q^2$ the currents, but not the states, may be treated as in free field theory.", "Thus, final state interactions (Fig.", "REF (b)) and vertex corrections (Fig.", "REF (c)) are ignored.", "Figure: Some contributions to forward virtual Compton scattering in QCD: (a) The parton model diagram; (b) final state interactions; (c) vertex corrections; (d) interference.This leaves the one and two particle contributions shown in Figs.", "REF (a) and (d), respectively.", "Methods similar to those we shall apply to Fig.", "REF (a) show that the contribution of Fig.", "REF (d) vanish faster by a power of $Q^2$ , so I will ignore them henceforth.", "This leaves only Fig.", "REF (a): the elastic and incoherent 
scattering of each quark in the target, i.e., “quasielastic\" scattering.", "It is important to keep in mind the place of the parton model in QCD.", "It is valid modulo logarithms: quantities which scale — that is, they become functions of $x$ -alone in the parton model — will be modulated by powers of $\\ln Q^2$ when QCD interactions are included.", "Quantities which vanish like a power of $Q^2$ in the parton model may vanish only like a power of $\\ln Q^2$ in QCD.", "Figure: Parton model for W μν W_{\\mu \\nu }.The structure function, $W_{\\mu \\nu }$ , in the parton model is obtained by placing the intermediate state in Fig.", "REF (a) on-shell.", "The struck quark and the remnants of the target appear separately as physical intermediate states as shown in Fig.", "REF .", "This, of course, is wrong $-$ quarks are confined by non-perturbative effects in QCD.", "It is assumed that the processes which neutralize quark quantum numbers do not affect the dominant terms at large $Q^2$ .", "The justification for this is that those non-perturbative, confining effects in QCD which have been studied, while strong, appear to vanish rapidly with $Q^2$ .", "Until the non-perturbative aspects of QCD are better understood this will remain a major, though reasonable, assumption.", "The current $J_\\mu (\\xi )$ reduces to $\\overline{\\psi }(\\xi ) {\\cal {Q}} \\gamma _\\mu \\psi (\\xi )$ in a free quark model, where $\\cal {Q}$ is the quark charge matrix: ${\\cal {Q}} =$ diag$(2/3,-1/3,-1/3,...)$ for $u, d, s,...$ The current commutator in $W_{\\mu \\nu }$ reduces to [12] $[J_\\mu (\\xi ),J_\\nu (0)] = \\overline{\\psi }(\\xi ) \\gamma _\\mu S(\\xi ) \\gamma _\\nu {\\cal {Q}}^2 \\psi (0) - \\overline{\\psi }(0) \\gamma _\\nu S(-\\xi ) \\gamma _\\mu {\\cal {Q}}^2 \\psi (\\xi ) \\qquad \\mathrm {(2.1)}$ where $S(\\xi )$ is the anticommutator function [3] $S(\\xi ) = \\lbrace \\psi (\\xi ),\\overline{\\psi }(0) \\rbrace = -\\gamma ^\\rho \\partial /\\partial \\xi ^\\rho \\Delta (\\xi ) = S(-\\xi ) \\qquad \\mathrm {(2.2)}$ and $\\Delta (\\xi ) = \\frac{1}{2 \\pi } \\delta (\\xi ^2) \\epsilon (\\xi ^0) + .... 
\\qquad \\mathrm {(2.3)}$ The additional terms in $\\Delta (\\xi )$ vanish for zero quark mass.", "Since quark masses generate only ${\\cal O}(m^2/Q^2$ ) corrections in the Bjorken limit, we ignore them from now on.", "The product of operators in Eq.", "(2.1) is singular when $\\xi _\\lambda \\rightarrow 0$ , but the singularity is a $C$ -number $-$ namely the term removed by normal ordering $-$ so it doesn't contribute to a connected matrix element.", "Thus, $\\mathinner {\\langle {p|\\overline{\\psi }(\\xi )\\psi (0)|p}\\rangle }_c &= \\mathinner {\\langle {p|:\\overline{\\psi }(\\xi )\\psi (0):|p}\\rangle } \\nonumber \\\\&= - \\mathinner {\\langle {p|\\psi (0)\\overline{\\psi }(\\xi )|p}\\rangle } $ where the second equality is a consequence of normal ordering.", "The Lorentz structure of Eq.", "(2.1) is simplified by the identity $\\gamma _\\mu \\gamma _\\rho \\gamma _\\nu &\\equiv S_{\\mu \\nu \\rho \\sigma } \\gamma ^\\sigma - \\epsilon _{\\mu \\nu \\rho \\sigma } \\gamma ^\\sigma \\gamma ^5 \\nonumber \\\\S_{\\mu \\nu \\rho \\sigma } &\\equiv g_{\\mu \\rho } g_{\\nu \\sigma } + g_{\\mu \\sigma } g_{\\nu \\rho } - g_{\\mu \\nu } g_{\\rho \\sigma } \\ .", "$ The $\\epsilon $ -term does not contribute because it is impossible to construct a pseudovector from $\\xi _\\lambda $ and $p_\\lambda $ .", "Putting together the pieces, we find $\\lim _{Bj} W_{\\mu \\nu } &= - \\frac{S_{\\mu \\nu \\rho \\sigma }}{8 \\pi ^2} \\int d^4 \\xi \\ e^{i q \\cdot \\xi } \\left[\\frac{\\partial }{\\partial \\xi _\\rho } \\delta (\\xi ^2) \\epsilon (\\xi ^0) \\right] \\nonumber \\\\&\\times \\mathinner {\\langle {p|\\overline{\\psi }(\\xi ) \\gamma ^\\sigma {\\cal {Q}}^2\\psi (0) - \\overline{\\psi }(0) \\gamma ^\\sigma {\\cal {Q}}^2 \\psi (\\xi )|p}\\rangle }_c \\ .$ The $\\lim _{Bj}$ reminds us that the parton model assumptions which went into Eq.", "(2.6 ) are only (approximately) valid as $Q^2 \\rightarrow \\infty $ at fixed $x$ .", "Integrating by parts and introducing light-cone coordinates in the target rest frame $\\lim _{Bj} W_{\\mu \\nu } &= \\lim _{q^- \\rightarrow \\infty } \\frac{S_{\\mu \\nu \\rho \\sigma } i q^\\rho }{8 \\pi ^2} \\int d \\xi ^+ d \\xi ^- d^2 \\xi _\\perp e^{i q^+ \\xi ^- + i q^- \\xi ^+} \\nonumber \\\\&\\times \\delta (2\\xi ^+\\xi ^- - {\\xi }^2_\\perp ) \\epsilon (\\xi ^+ + \\xi ^-) \\nonumber \\\\&\\times \\mathinner {\\langle {p|\\overline{\\psi }(\\xi ) \\gamma ^\\sigma {\\cal {Q}}^2\\psi (0) - \\overline{\\psi }(0) \\gamma ^\\sigma {\\cal {Q}}^2 \\psi (\\xi )|p}\\rangle }_c \\ .$ We have dropped the term in which $\\partial /\\partial \\xi _\\rho $ acts on the matrix element since it generates at most a factor $p^\\rho $ or $\\xi ^\\rho \\mu ^2$ ($\\mu ^2$ is some mass characteristic of the target) both of which are negligible with respect to $q^\\rho $ in the Bjorken limit.", "[Note $\\xi ^\\rho $ Fourier transforms into $q^\\rho /q^2$ .]", "The form of $S_{\\mu \\nu \\rho \\sigma }$ implies that the coefficient of $-g_{\\mu \\nu }$ in $W_{\\mu \\nu }$ equals half the trace of $W_{\\mu \\nu }$ , or referring to Eqs.", "(1.10) and (1.11), $F_1 = \\frac{1}{2} \\left( 3F_1 - \\frac{1}{2 x_T} F_2 \\right) \\qquad \\mathrm {(2.8)}$ which requires $F_1 = \\frac{1}{2 x_T} F_2 \\qquad \\mathrm {(2.9)}$ or $\\lim _{Bj} \\sigma _L/\\sigma _T = 0 \\qquad \\mathrm {(2.10)}$ which is the famous Callan-Gross relation and follows from the quark spin being $1/2$ .", "It is well verified experimentally.", "Even though $R$ is difficult to measure and still a subject of debate, all 
experiments agree that for $Q^2 >1$ GeV$^2$ , $R < 0.2$ .", "Eq.", "(2.7) can be reduced to a one-dimensional integral.", "First, we use the $\\delta -$ function to perform the $\\xi ^2_\\perp $ integral leaving $F_2 &= 2 x_T \\lim _{q^- \\rightarrow \\infty } \\frac{iq^-}{8 \\pi } \\int d \\xi ^+ d \\xi ^- e^{i(q^+\\xi ^- + q^-\\xi ^+)} \\nonumber \\\\&\\times [\\theta (\\xi ^+)\\theta (\\xi ^-) - \\theta (-\\xi ^+)\\theta (-\\xi ^-)] \\nonumber \\\\&\\times \\mathinner {\\langle {p|\\overline{\\psi }(\\xi ) \\gamma ^+ {\\cal {Q}}^2 \\psi (0) - \\overline{\\psi }(0) \\gamma ^+ {\\cal {Q}}^2 \\psi (\\xi )|p}\\rangle }_c\\bigg \\vert _{{\\xi }^2_\\perp = 2 \\xi ^+ \\xi ^-} \\ ,$ then integrate by parts on $\\xi ^+$ keeping only the leading term at large $q^-$ $F_2 = \\frac{x_T}{4 \\pi } \\int d \\xi ^- e^{i q^+ \\xi ^-} \\mathinner {\\langle {p|\\overline{\\psi }(\\xi ^-) \\gamma ^+ {\\cal {Q}}^2 \\psi (0) - \\overline{\\psi }(0) \\gamma ^+ {\\cal {Q}}^2 \\psi (\\xi ^-)|p}\\rangle }_c \\bigg \\vert _{\\xi ^+ = {\\bf \\xi }_\\perp =0} \\ .\\qquad \\mathrm {(2.12)}$ So, $F_2$ is a dimensionless function of $q^+=-M_Tx_T/\\sqrt{2}$ , i.e.", "$F_2 \\rightarrow F_2(x_T)$ , which is Bjorken scaling.", "Before converting Eq.", "(2.12) into the most familiar parton model form, we should note that $F_2(x_T)$ measures a particular quark correlation function in the target ground state.", "The first term in Eq.", "(2.12), for example, measures the amplitude to remove a quark from the target at some point, $\\xi _1^\\mu $ , and replace it at $\\xi _2^\\mu $ with $\\xi _2^\\mu - \\xi _1^\\mu \\equiv \\xi ^\\mu $ , $\\xi ^+ = {\\bf \\xi }_\\perp =0$ and $|\\xi ^-|\\lesssim 1/q^+$ , leaving the target in the ground state.", "The shape of the structure function teaches us about a correlation function in the target ground state.", "One must be careful, however, not to use one's intuition from non-relativistic quantum mechanics: this is not an equal time correlation function, but instead a light-cone correlation function.", "More about these later.", "To further simplify (2.12) we must study light-cone $\\gamma -$ matrices.", "The matrices $P^\\pm = \\frac{1}{2} \\gamma ^\\mp \\gamma ^\\pm = \\frac{1}{2} (1 \\pm \\alpha ^3) \\qquad \\mathrm {(2.13)}$ are projection matrices: $P^+ = P^- =1$ , ${P^\\pm }^2 = P^\\pm $ , $P^\\pm P^\\mp =0$ , and if we define $\\psi _\\pm = P^\\pm \\psi \\ , \\qquad \\mathrm {(2.14)}$ then (2.12) may be written $F_2 = \\frac{x_T}{2 \\sqrt{2} \\pi } \\int d \\xi ^- e^{i q^+ \\xi ^-} \\mathinner {\\langle {p|\\psi ^\\dagger _+(\\xi ^-) {\\cal {Q}}^2 \\psi _+(0) + \\psi _+(\\xi ^-) {\\cal {Q}}^2 \\psi ^\\dagger _+(0)|p}\\rangle }_c \\bigg \\vert _{\\xi ^+ = {\\bf \\xi }_\\perp =0} \\ .\\qquad \\mathrm {(2.15)}$ where we used Eq.", "(2.4) to interchange the quark fields in the second term.", "If we now insert a complete set of states between quark fields, translate the $\\xi ^-$ dependence out of $\\psi _+$ or $\\psi ^\\dagger _+$ , integrate over $\\xi ^-$ and sum explicitly over quark flavors ($a = u,d,s,....$ ), we get $F^T_2(x_T) = x_T \\sum _a {\\cal {Q}}^2_a \\sum _n \\frac{1}{\\sqrt{2}} \\delta (p^++q^+-p^+_n)\\lbrace |\\mathinner {\\langle {n|\\psi _{a+}|p}\\rangle }|^2 + |\\mathinner {\\langle {n|\\psi ^\\dagger _{a+}|p}\\rangle }|^2 \\rbrace \\ .", "\\qquad \\mathrm {(2.16)}$ I have added a superscript $T$ to $F_2$ to remind us that $F_2$ depends on the target.", "For a target of mass $M_T$ , $q^+ = -x_TM_T/\\sqrt{2}$ where $x_T = Q^2/2M_T \\nu $ , and $p^+ = M_T/\\sqrt{2}$ 
(we are working in the target rest frame), so $F^T_2(x_T) = x_T \\sum _a {\\cal {Q}}^2_a (f_{a/T}(x_T) + f_{\\bar{a}/T}(x_T)) \\ , \\qquad \\mathrm {(2.17)}$ where $f_{a/T}(x_T) &= \\frac{1}{\\sqrt{2}} \\sum _n \\delta (p^+-x_Tp^+-p_n^+) |\\mathinner {\\langle {n|\\psi _{a+}|p}\\rangle }|^2 \\nonumber \\\\f_{\\bar{a}/T}(x_T) &= \\frac{1}{\\sqrt{2}} \\sum _n \\delta (p^+-x_Tp^+-p_n^+) |\\mathinner {\\langle {n|\\psi ^\\dagger _{a+}|p}\\rangle }|^2 \\ .$ This is the familiar parton model, except it is written in the target rest frame rather than the “infinite momentum frame\".", "$f_{a/T}(x_T)$ is the probability (per unit $x_T$ ) to remove from the target a quark of flavor $a$ with “momentum\" (i.e.", "$p^+$ ) fraction $x_T$ , leaving behind a physical state ($\\mathinner {|{n}\\rangle }$ ) with $p^+_n = (1-x_T)p^+$ .", "Similarly, $f_{\\bar{a}/T}(x_T)$ is the probability (per unit $x_T$ ) to remove an antiquark with $p^+$ -fraction $x_T$ leaving behind a physical state with $p^+_n = (1-x_T)p^+$ .", "$f_{a/T}(x_T)$ $f_{\\bar{a}/T}(x_T)$ are shown graphically in Figs.", "REF (b) and (c).", "Figure: (a) The virtual-quark hadron scattering amplitude; (b) The quark distribution function f a/T (x T )f_{a/T}(x_T) .", "Note that k + k^+ is fixed but k - k^- and 𝐤 ⊥ {\\bf k}_\\perp [Notice that $p^+$ appears in the target rest frame formulation where the “infinite momentum\" $P_\\infty $ appears in the more familiar formulation.]", "$f_{a/T}(x_T)$ and $f_{\\bar{a}/T}(x_T)$ obey important positivity, spectral and normalization constraints.", "The state $n$ in Eq.", "(2.18) is physical and must have $p^+_n>0$ , i.e., $E_n > |\\bf p_n|$ , thus $f_{a/T}(x_T) = f_{\\bar{a}/T}(x_T) =0$ , for $x_T \\ge 1$ or $x \\ge M_T/M$ ($\\approx A$ for a nuclear target).", "For $0<x_T<1$ , $f_{a/T}(x_T)$ defined by Eq.", "(2.18) is manifestly positive.", "Next, let us consider $f_{a/T}(x_T)$ for $x_T<0$ .", "Returning to Eq.", "(2.15) $f_{a/T}(x_T) &= \\frac{1}{2 \\sqrt{2} \\pi } \\int d \\xi ^- e^{i q^+ \\xi ^-} \\mathinner {\\langle {p|\\psi ^\\dagger _{a+}(\\xi ^-) \\psi _{a+}(0)|p}\\rangle }_c \\bigg \\vert _{\\xi ^+ = {\\bf \\xi }_\\perp =0} \\nonumber \\\\&= - \\frac{1}{2 \\sqrt{2} \\pi } \\int d \\xi ^- e^{i q^+ \\xi ^-} \\mathinner {\\langle {p|\\psi ^\\dagger _{a+}(0) \\psi _{a+}(\\xi ^-)|p}\\rangle }_c \\bigg \\vert _{\\xi ^+ = {\\bf \\xi }_\\perp =0} $ where we've used Eq.", "(2.4).", "Replacing $\\xi ^-$ by $-\\xi ^-$ and translating the matrix element we find $f_{a/T}(-x_T) = - f_{\\bar{a}/T}(x_T) \\ .", "\\qquad \\mathrm {(2.20)}$ Although $f_{a/T}(x_T)$ does not vanish for $x<0$ (it does vanish for $x_T<-1$ ) it is determined by $f_{\\bar{a}/T}(x_T)$ for $x>0$ .", "The literature is very confused on this point: there is a lot of talk about the need to “prove\" that “no partons can go backwards in the infinite momentum frame\", i.e., $f_{a/T}(x_T) \\equiv 0$ for $x_T <0$ .", "We see here that $f_{a/T}(x_T)$ is defined in such a way that measurements in the physical region $0<x_T<1$ determine it everywhere.", "For a more extensive discussion of this issue, see Ref [13].", "Now let us integrate $f_{a/T}(x_T)$ over all $x_T$ .", "Using $x_T = -\\sqrt{2}q^+/M_T$ and Eq.", "(2.19) and Eq.", "(2.20) we find $\\int ^\\infty _{-\\infty } dx_T f_{a/T}(x_T) &= \\int _0^1 dx_T(f_{a/T}(x_T) - f_{{\\bar{a}}/T}(x_T)) \\nonumber \\\\&= \\frac{1}{M_T} \\mathinner {\\langle {p|\\psi ^\\dagger _{a+}(0)\\psi _{a+}(0)|p}\\rangle }_c \\nonumber \\\\&= N_{a/T} - N_{{\\bar{a}}/T} $ where the last step follows from the fact 
that $\\psi ^\\dagger _{a+} \\psi _{a+} = \\frac{1}{\\sqrt{2}}j^+_a$ and $j_a^\\mu $ is a conserved current whose expectation value measures the number of quarks (minus the number of antiquarks) of flavor $a$ : $\\mathinner {\\langle {p|j^\\mu _a|p}\\rangle } = 2 p^\\mu (N_{a/T} - N_{{\\bar{a}}/T}$ ).", "Clearly, we may interpret $f_{a/T}(x_T)$ ($f_{\\bar{a}/T}$ ) as a probability per unit $x_T$ to find a quark (antiquark) of flavor $a$ with $k^+ = x_T p^+$ in the target T $\\frac{d P_{a/T}}{d x_T} = f_{a/T}(x_T) \\ .", "\\qquad \\mathrm {(2.22)}$ From this interpretation follows a host of parton model sum rules for the structure functions.", "The most important for our purpose is the “momentum sum rule\", $\\int _0^1 dx_T\\,\\,x_T (f_{a/T}(x_T) + f_{\\bar{a}/T}(x_T)) = \\epsilon _{a/T} + \\epsilon _{\\bar{a}/T} \\qquad \\mathrm {(2.23)}$ where $\\epsilon _{a/T}$ ($\\epsilon _{\\bar{a}/T}$ ) is the fraction of the target's $p^+$ carried by quarks (antiquarks) of flavor $a$ .", "The derivation mimics the derivation of Eq.", "(2.21) except the quark stress tensor $i \\bar{\\psi }_a \\gamma ^\\mu \\partial ^\\nu \\psi _a$ appears instead of $j^\\mu _a$ .", "If hadrons contained only quarks $\\sum _a \\epsilon _{a/T}$ would be 1.", "Instead, it is typically $\\sim \\frac{1}{2}$ (at large $Q^2$ ) indicating that substantial momentum and energy are carried by other, neutral quanta, namely gluons.", "If two targets are related by a symmetry, their quark distributions are similarly related.", "Isospin relates the neutron and proton and gives $f_{u/n}(x) &= f_{d/p}(x) \\nonumber \\\\f_{d/n}(x) &= f_{u/p}(x) \\nonumber \\\\f_{s/n}(x) &= f_{s/p}(x),\\ {\\rm etc.}", "$ For completeness, I record the structure functions for electron and (charged current) neutrino scattering from nucleon targets assuming isospin symmetry: $F^{ep}_2(x) &= x \\left[ \\frac{4}{9}(f_{u/p}(x) + f_{\\bar{u}/p}(x)) + \\frac{1}{9}(f_{d/p}(x) + f_{\\bar{d}/p}(x)) + f_{s/p}(x) + f_{\\bar{s}/p}(x)) \\right] \\nonumber \\\\F^{en}_2(x) &= x \\left[ \\frac{4}{9}(f_{d/p}(x) + f_{\\bar{d}/p}(x)) + \\frac{1}{9}(f_{u/p}(x) + f_{\\bar{u}/p}(x)) + f_{s/p}(x) + f_{\\bar{s}/p}(x)) \\right] \\nonumber \\\\F^{\\nu p}_2 (x) &= F^{\\bar{\\nu }n}_2(x) = 2x [f_{d/p}(x) + f_{\\bar{u}/p}(x)] \\nonumber \\\\F^{\\nu n}_2 (x) &= F^{\\bar{\\nu }p}_2(x) = 2x [f_{u/p}(x) + f_{\\bar{d}/p}(x)] \\ .", "$ I have ignored heavy quarks ($c, b, t$ ) and the Cabibbo angle ($\\sin ^2 \\theta _c \\cong 0.05$ ) and used the shorthand notation $\\nu p$ ($\\bar{\\nu }p$ ) for $ \\nu p \\rightarrow \\mu ^- X$ ($\\bar{\\nu }p \\rightarrow \\mu ^+ X$ ), etc.", "I have also left out the (important) parity violating structure function, $F_3$ , which arises in neutrino scattering.", "One important implication of Eqs.", "(2.25) is that in the absence of any nuclear effect in deuterium one would expect $F^{ed}_2(x) &= x \\frac{5}{9} \\left[ f_{u/p}(x) + f_{\\bar{u}/p}(x) + f_{d/p}(x) + f_{\\bar{d}/p}(x) \\right]+ \\frac{2}{9}\\left[ (f_{s/p}(x) + f_{\\bar{s}/p}(x)) \\right] \\nonumber \\\\&\\cong \\frac{5}{18} \\left[ F^{\\nu p}_2(x) + F^{\\bar{\\nu }p}_2(x) \\right] \\ , $ because strange quarks are suppressed in the nucleon and weighted by $2/5$ relative to non-strange quarks.", "This relation allows one to look for a nuclear effect in the deuteron in a model independent manner (see.", "Ref. 
[15]).", "To further simplify the structure functions of nucleons, it is customary to distinguish between quarks which must be present to account for the target's quantum numbers ($u$ and $d$ quarks for nucleons) known as “valence\" quarks and those which may be present in pairs due to relativistic effects, known as “ocean quarks\".", "Thus, for an isospin averaged nucleon $f_{u/N}(x) &= f_{d/N}(x) = f_V(x) + f_O(x) \\nonumber \\\\f_{\\bar{u}/N}(x) &= f_{\\bar{d}/N}(x) = f_O(x) \\ .", "$ Strange and heavier quarks may also be present.", "Typically, one assumes $f_{s/N}(x) &= f_{\\bar{s}/N}(x) \\le f_O(x) \\nonumber \\\\f_{c/N}(x) &= f_{\\bar{c}/N}(x) \\approx 0 \\ ,\\ {\\rm etc.}", "\\nonumber $ Before leaving this general discussion of the parton model, it is useful to relate the quark distribution function to the amplitude for quark-target scattering.", "We define the (connected) virtual quark-target forward scattering amplitude by $\\chi _{a/T} \\equiv \\int d^4 \\xi e^{-i k \\cdot \\xi } \\mathinner {\\langle {p|T(\\overline{\\psi }_a(\\xi )\\psi _a(0))|p}\\rangle }_c \\qquad \\mathrm {(2.28)}$ as illustrated in Fig.", "REF (a).", "$\\chi $ is a matrix in color and Dirac spaces, but we have suppressed those indices.", "The distribution function $f_{a/T}(x_T)$ can be projected out of $\\chi _{a/T}$ by integrating over all components of $k$ except $k^+$ which is held fixed, $k^+=x_Tp^+$ , and tracing the Dirac indices with $\\gamma ^+$ : $f_{a/T}(x_T) = \\int \\frac{d^4 k}{(2 \\pi )^4} \\delta \\left(\\frac{k^+}{p^+}-x_T \\right) {\\rm Tr} [\\gamma ^+\\chi _{a/T}(k,p)] \\ .", "\\qquad \\mathrm {(2.29)}$ It is easy to verify that Eqs.", "(2.28)$-$ (2.29) lead to Eq.", "(2.18) provided one used Eq.", "(2.4) to relate the $T$ -product to the ordinary product.", "It is often convenient, when studying electroproduction from nuclei, to define quark distribution functions depending on a universal variable $x = Q^2/2M_N \\nu $ .", "To preserve their probabilistic interpretation, it is necessary to rescale them: $F_{a/T}(x) \\equiv \\frac{d P_{a/T}}{dx} = \\frac{d P_{a/T}}{d x_T} \\frac{dx_T}{dx}=\\frac{M}{M_T} f_{a/T}(x_T) \\ .", "\\qquad \\mathrm {(2.30)}$ (See Eq. 
2.22.) Then, $\int _0^{M_T/M} dx (F_{a/T}(x) - F_{\bar{a}/T}(x)) = N_{a/T} - N_{\bar{a}/T} \qquad \mathrm {(2.31)}$ and $\int _0^{M_T/M} x dx (F_{a/T}(x) + F_{\bar{a}/T}(x)) = \frac{M_T}{M} (\epsilon _{a/T} + \epsilon _{\bar{a}/T}) \ . \qquad \mathrm {(2.32)}$ At the same time, it is convenient to introduce a structure function per nucleon, $\overline{F}^T_2(x)$: $\overline{F}^T_2 (x) \equiv x \sum _a {\cal {Q}}_a^2 (\overline{F}_{a/T}(x) + \overline{F}_{\bar{a}/T}(x)) \qquad \mathrm {(2.33)}$ where $\overline{F}_{a/T}(x) = F_{a/T}(x)/A$. In the analysis of nuclear targets, I will try to preserve this notation: lower case ($f_{a/T}(x_T)$) for intrinsically defined distribution functions as functions of $x_T$, upper case for functions of $x$, and barred upper case for functions of $x$ “per nucleon”. Also, the label “A” will denote a nucleus, “T” a generic target and “a” a quark of flavor $a$. Note that $\overline{F}^A_2(x,q^2)$ is defined so that it would reduce to the (isospin weighted) nucleon structure function $\frac{Z}{A}F^P_2(x,q^2) + \frac{N}{A}F^N_2(x,q^2)$ if the nucleons in the nucleus were non-interacting.

At this point, it would be appropriate to discuss the phenomenology of the neutron and proton structure functions; however, no time would be left for nuclear targets. So the reader will have to consult the references [14] for more information about nucleons. Here, I will mention only a few properties of importance for future work. Near $x = 0$ both $F_{O/N}(x)$ and $F_{V/N}(x)$ are expected to diverge: $F_{O/N} \sim 1/x$ and $F_{V/N}(x) \sim 1/\sqrt{x}$. As $x \rightarrow 1$, $F_{O/N}/F_{V/N} \rightarrow 0$. The neutron and proton structure functions differ significantly at large $x$, leading to the observation that $f_{d/p}/f_{u/p} \rightarrow 0$ as $x \rightarrow 1$. Some quark distributions extracted from electron and neutrino scattering experiments are shown in Fig. REF. Figure: Examples of quark, antiquark and gluon distributions in the nucleon, from F. Dydak.
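Before turning to the data, the change of variables in Eq. (2.30) and the sum rules (2.31)-(2.32) can be checked with a few lines of code. This is my own sketch: the distribution `f` is an arbitrary toy choice, and I take $M_T/M \approx A$ as in the text.

```python
# Sketch (not from the text) of the change of variables in Eq. (2.30): an
# intrinsic distribution f(x_T) on 0 < x_T < 1 is re-expressed as F(x) on
# 0 < x < M_T/M, and the number and momentum integrals (2.31)-(2.32) are
# preserved.  The toy f below is arbitrary.
import numpy as np
from scipy.integrate import quad

A = 56                                              # mass number; take M_T/M ~ A
f = lambda xT: 5.0 * xT * (1.0 - xT) ** 3           # arbitrary intrinsic distribution

# F(x) = (M/M_T) f(x_T) with x_T = x/A  (Eq. 2.30)
F = lambda x: f(x / A) / A

N_intrinsic, _ = quad(f, 0.0, 1.0)                          # number integral in x_T
N_uniform,  _ = quad(F, 0.0, A)                             # same integral in x
P_intrinsic, _ = quad(lambda xT: xT * f(xT), 0.0, 1.0)      # p+ fraction, cf. Eq. (2.23)
P_uniform,  _ = quad(lambda x: x * F(x), 0.0, A)            # (M_T/M) * p+ fraction, Eq. (2.32)

print(N_intrinsic, N_uniform)          # equal
print(P_uniform / A, P_intrinsic)      # equal
```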
Before 1982, little attention had been paid to deep inelastic lepton scattering from nuclei. Interest in the subject was awakened by experimental results from CERN. In order to put the rest of these lectures in proper context it is necessary to present those results, discuss the (dis)agreement among experiments and present the most rudimentary parton model analysis of the data, which already has important implications for the quark substructure of nuclei. The basic parton model of Bjorken and Feynman and the material of §2 are the only prerequisites for this analysis. It will be impossible, however, to avoid some reference to QCD, asymptotic freedom and other issues which will be introduced later in these lectures. Figure: EMC data on $\overline{F}^{Fe}_2/F^D_2$.

The first precise comparison of deep inelastic scattering from a nuclear target with scattering from a nucleon was made by the EMC Collaboration [2]. They compared iron and deuterium targets. They assumed that the weakly bound deuteron approximates the isospin averaged nucleon. Subsequent analysis by Bodek and Simon [15] has confirmed this assumption. In the absence of any nuclear effect at large $Q^2$ one expects $F^A_2(x,Q^2) = AF^D_2(x,Q^2)/2$, up to a small correction for the neutron excess in the nucleus. To display deviations from this naive expectation and to minimize systematic errors, it is conventional to plot $S^A (x,Q^2) \equiv 2F^A_2(x,Q^2)/AF^D_2(x,Q^2)$. The deviation of $S^{\rm Fe}(x,Q^2)$ from unity shown in Fig. REF caught nearly everyone by surprise [16]. The effect at large $x$ was particularly surprising because $\overline{F}^A_2(x, Q^2)/ F^N_2(x, Q^2)$ must go to infinity as $x \rightarrow 1$. [The denominator vanishes for $x > 1$, the numerator vanishes only for $x > A$.] The “EMC effect”, as the deviation of $S^{\rm Fe}$ from unity came to be known, was quickly confirmed by a reanalysis of old SLAC data with iron, aluminum and deuterium targets [17]. Last year a dedicated SLAC experiment (E-139) measured the EMC effect on a sequence of nuclear targets [18]. Their data are shown in Fig. REF. Figure: SLAC data on $\sigma ^A/\sigma ^D$. The fits are from . The SLAC and EMC data agree reasonably well for $x > 0.3$, but the enhancement seen at low-$x$ by EMC was not observed at SLAC. Recently, there has been input from other groups. The BCDMS collaboration at CERN [19] has measured deep inelastic muon scattering from iron, nitrogen and deuterium targets. Their iron data are restricted by detector geometry to $x \ge 0.2$, where they agree with both SLAC and EMC. All available data on iron are shown in Fig. REF. The nitrogen data include a point at $x = 0.1$ which is significantly above 1 but below the trend of the EMC iron data. [See Fig. REF.] Figure: Compilation of data on $\overline{F}^{\rm Fe}_2/\overline{F}^D_2$. Figure: Comparison of BCDMS nitrogen data with SLAC carbon data. Several neutrino detector collaborations have measured ratios of cross sections from nuclear targets [20]. In general, their results are subject to larger statistical and systematic uncertainties than the electron and muon data. It is fair to say, however, that at low-$x$, where their statistics are best, the neutrino experiments fail to confirm the enhancement seen by the EMC group. For example, the CDHS group presented some data on $S^{\rm Fe}$ at this year's Moriond meeting [21].
"Their data are also shown in Fig.", "REF .", "Figure: CDHS (neutrino scattering) measurement of F ¯ 2 Fe /F 2 N \\overline{F}^{Fe}_2/F^N_2 .It is of great importance to sort out the apparent disagreement among experiments at low-$x$ .", "This can only be done for certain by future experiments.", "However, we can catalog some of the possibilities while we wait.", "To help, some of the salient features of the experiments are summarized in Table I.", "Among the possibilities are: 1.", "Systematic errors: The EMC collaboration quote a considerable systematic error in the slope of the straight line fit to their data.", "They also quote a 7% systematic uncertainty in the normalization [2].", "Rotating their data to smaller slope and lowering it by 7% improves its agreement with SLAC and BCDMS considerably.", "Merlo quoted a systematic error on the $x = 0.03$ CDHS point equal to the statistical error.", "Moving the CDHS point up by that amount largely removes the discrepancy.", "2.", "An $A$ dependence of $R (\\sigma _L/\\sigma _T$ ): The SLAC E-139 experiment measured $R$ on several nuclear targets and found it large ($R \\cong 0.2$ ) and possibly $A$ dependent.", "If they extract $S^A$ from their data with an $A$ dependent $R$ it agrees better with the EMC measurement.", "This reconciliation is only superficial, however.", "It generates a much greater disagreement in the measurement of $\\overline{F}^{\\rm Fe}_1/F^D_1$ .", "[EMC do not, in fact, measure $F_1$ because the beam energy is high ($\\epsilon \\approx 1$ , see Eq.", "(1.34)) but at EMC energies $Q^2$ is quite large ($Q^2 \\ge 9$ GeV$^2$ for all $x-$ bins) and $R$ is expected to be very small at large $Q^2$ (even in QCD where the Callan-Gross relation is not exact) so $\\overline{F}^A_1/F^N_1 \\cong \\overline{F}^A_2/F^N_2$ .]", "3.", "Strong $Q^2$ , $x$ and $A$ dependence at low-$x$ : Perhaps the differences among experiments are due to the fact that their bins average in different ways over a rapidly varying function.", "The likely source of this variation is “shadowing\" which is expected to be important at low$-x$ and will be discussed further in $§6.4$ .", "Table: Summary of experiments.For the remainder of these lectures, I will assume that the EMC data for $x > 0.3$ are correct, but for $x < 0.3$ , I will assume the truth lies somewhere between the EMC and SLAC results.", "The parton model ideas developed in the previous section can be applied directly to the nuclear structure function [22].", "In the parton model and the Bjorken limit, $\\overline{F}^A_2(x)$ is independent of $Q^2$ .", "In QCD, the notion of parton distribution functions and other aspects of the parton model are preserved, but $\\overline{F}^A_2(x)$ and the parton distribution functions develop a weak but important $Q^2$ -dependence.", "To allow for this, we occasionally keep the $Q^2$ -label explicit $\\overline{F}^A_2(x,Q^2) = x \\sum _a {\\cal {Q}}^2_a (\\overline{F}_{a/A}(x,Q^2) + \\overline{F}_{\\bar{a}/A}(x,Q^2)) \\qquad \\mathrm {(3.1)}$ and the difference between a nucleus and deuterium is defined by $\\overline{\\Delta }_A(x,Q^2) \\equiv \\overline{F}^A_2(x,Q^2) - \\overline{F}^D_2(x,Q^2) \\ .\\qquad \\mathrm {(3.2)}$ $\\overline{\\Delta }$ depends only on the difference of quark distributions in nucleus $A$ and deuterium.", "In the valence parton model described in $§2$ , $\\overline{\\Delta }_A(x,Q^2) = x \\left( \\frac{5}{9}\\delta \\overline{F}_{V/A}(x,Q^2) + \\frac{4}{3} \\delta \\overline{F}_{O/A}(x,Q^2) \\right) \\qquad \\mathrm {(3.3)}$ and 
$\\delta \\overline{F}_{V/A} \\equiv \\overline{F}_{V/A} - \\overline{F}_{V/D}$ , etc.", "Figure: The difference of the structure function per nucleon of iron an deuterium .The EMC data for $\\overline{\\Delta }_{\\rm Fe}$ are shown in Fig.", "REF , which was constructed from the published EMC iron data and the data on $S^{\\rm Fe}$ .", "Experimentalists caution that systematic errors are more treacherous in the difference $\\overline{\\Delta }_{\\rm Fe}$ than in the ratio $S^{\\rm Fe}$ .", "Nevertheless, the qualitative features of $\\overline{\\Delta }_{\\rm Fe}$ are probably reliable (modulo our caveats about $x < 0.3$ ) and they tell us what has happened to the quark distributions in passing from deuterium to iron.", "The EMC effect — at least as seen in the EMC data — has several different aspects which can be extracted from a parton model analysis of the data.", "The same analysis applied to the SLAC data leads to somewhat different conclusions which I will mention along the way: 1.", "The valence quarks in iron are “degraded\" — shifted to lower $p^+$ — relative to those in deuterium.", "[All experiments agree on this.]", "2.", "There is an increase in the number of ocean quark pairs in iron compared to deuterium.", "[The increase is large if we believe the EMC data, smaller or even absent if one accepts the SLAC data at low-$x$ .]", "3.", "The fraction of momentum ($p^+$ ) per nucleon on quarks and antiquarks in iron relative to deuterium can be extracted.", "[It increases slightly if one believes the EMC data but the SLAC data are ambiguous.]", "These results hold for other nuclei as well in proportion to the size of the measured “EMC effect\".", "The extraction makes use of parton model sum rules and positivity constraints.", "For $x > 0.35$ , $\\overline{F}_{O/N}(x)$ (the ocean quark distribution in the isolated nucleon) is known to be negligible, thus $\\delta \\overline{F}_{O/N}(x) \\ge 0$ for $x > 0.35$ .", "Since $\\overline{\\Delta }_A(x) < 0$ for $x > 0.35$ , we see from Eq.", "(3.3) that $\\delta \\overline{F}_{V/A}(x)$ must be negative for $x > 0.35$ .", "The valence quark distribution is conserved, i.e., $\\int _0^A dx~\\delta \\overline{F}_{V/A}(x) = 0 \\qquad \\mathrm {(3.4)}$ because the number of valence quarks (per nucleon) in all nuclei is three.", "$\\delta \\overline{F}_{V/A}(x)$ must therefore be positive for some $x$ values where it has not been observed.", "The SLAC data show $\\delta \\overline{F}_{V/A}(x)$ turning positive for $x>0.8$ by which point $F_{V/A}(x)$ is very small.", "I assume that $\\int _{0.8}^A dx~\\delta F_{V/A}(x)$ is negligible.", "This leaves $x< 0.35$ as the region in which $\\delta \\overline{F}_{V/A}(x)$ is positive.", "In $§1$ we saw that $x$ is conjugate to $1/M \\xi ^3$ in the laboratory, where $\\xi ^3$ is the spatial separation of the currents in the correlation function defining $F_{a/A}(x)$ .", "Thus, the shift to lower $x$ is indicative of a shift, to longer range quark correlations in the target ground state.", "This result, is independent of whatever “microscopic\" explanation of the EMC effect might eventually be forthcoming.", "Whether it is attributed to “dynamical rescaling\", $N$ -quark bags, quark percolation, or more prosaic sources like pion admixtures in the nuclear wavefunction or other binding effects, the EMC effect directly measures an increased quark light-cone correlation length in nuclei.", "For more discussion of the space-time interpretation of the EMC effect, see [10].", "In retrospect, it is not surprising 
that measures of the quark correlation length increase in nuclei [23].", "It is believed that quark/nuclear matter, regarded as a function of density at zero temperature, undergoes a deconfining phase transition at some $\\rho _{\\rm critical}$ .", "For densities below $\\rho _{\\rm critical}$ , quarks are confined in nucleons but for densities above $\\rho _{\\rm critical}$ , they move about more or less freely in a degenerate quark gas.", "One support for this is that QCD is known to become asymptotically free at large chemical potential (equivalent to high density), so at high enough density a quark gas will become free.", "We identify the nucleon as the zero density limit of quark matter.", "As $A$ increases, the mean density of the nucleus increases (as the surface to volume ratio goes to zero), so we may regard the increased quark correlation length in iron as a consequence of its increased mean density and as a precursor of a deconfining phase transition where the correlation length would become very large.", "This view of the EMC effect is supported by the $A$ dependence observed at SLAC which correlates very closely with nuclear densities.", "It is discussed at length in $§6.3$ .", "Point 2, the measurement of ocean quark pairs, is obtained by examining $\\int dx \\delta \\overline{F}_{O/A}(x)$ over the range of the measured data ($x_{\\rm min}<x<x_{\\rm max}$ ): $\\int _{x_{\\rm min}}^{x_{\\rm max}} dx~\\delta \\overline{F}_{O/A}(x) = \\frac{3}{4} \\int _{x_{\\rm min}}^{x_{\\rm max}} \\frac{dx}{x} \\overline{\\Delta }_A(x) + \\frac{5}{12} \\int _0^{x_{\\rm min}} dx~\\delta \\overline{F}_{V/A}(x) \\qquad \\mathrm {(3.5)}$ where we have used Eq.", "(3.4) and assumed $\\delta \\overline{F}_{V/A}(x)$ is negligible for $x_{\\rm max}<x<A$ .", "If $\\delta \\overline{F}_{V/A}(x)$ does not change sign twice, that is, if it remains positive for $x<x_{\\rm min}$ , then $\\int _{x_{\\rm min}}^{x_{\\rm max}} dx~\\delta \\overline{F}_{O/A}(x) > \\frac{3}{4} \\int _{x_{\\rm min}}^{x_{\\rm max}} \\frac{dx}{x} \\overline{\\Delta }_A(x) \\ .\\qquad \\mathrm {(3.6)}$ Eq.", "(3.6) could fail only if $\\overline{F}_{V/A}(x)$ behaves as shown in Fig.", "REF  [24].", "Figure: Behavior of $\\overline{F}^{\\rm Fe}_2/\\overline{F}^D_2$ required by the EMC data at low-$x$ if only valence quarks are involved in the effect. Shadowing is expected to produce a depletion in $\\overline{F}^A_2(x,Q^2)$ at low-$x$ and low $Q^2$ , but it should affect primarily the ocean quark distribution and a shadowing of the valence quarks sufficient to invalidate Eq.", "(3.6) at large $Q^2$ would be surprising.", "According to Eq.", "(3.6), the ocean quarks are enhanced to the extent that the $x^{-1}$ weighted integral of $\\overline{\\Delta }_A$ is $>0$ .", "If one accepts the EMC data, the effect is quite large: $\\int _{x_{\\rm min}}^{x_{\\rm max}} dx~\\delta \\overline{F}_{O/{\\rm Fe}}(x) > 1.5 \\int _{x_{\\rm min}}^{x_{\\rm max}}dx\\, F_{O/N}(x) \\ .", "\\qquad \\mathrm {(3.7)}$ If we ignore the small contributions from $x > x_{\\rm max}$ and $x < x_{\\rm min}$ , the EMC data give $(5.0 \\pm 1.5) \\times 10^{-2}$ for the first term on the right.", "The second term is small and positive (see Eq.", "(3.4) and subsequent discussion) so we obtain a bound $\\delta \\epsilon _{\\rm Fe} > (5.0 \\pm 1.5) \\times 10^{-2}$ .", "The SLAC data, on the other hand, do not give a positive $\\overline{\\Delta }_A(x)$ for $x < 0.3$ so the sign of $\\delta \\epsilon _{\\rm Fe}$ cannot be determined although its magnitude is
certainly small [25] At the level of the parton model, these features of the data appear logically independent.", "Some models, notably the rescaling model, are able to correlate a modest increase in the number of ocean quarks with the degradation (i.e., shift to lower $x$ ) of the valence quarks.", "Other models are able to account for only one feature of the data or invoke several effects in concert to account for the different aspects of the EMC effect.", "It is intuitively appealing to regard inclusive electroproduction from nuclei as a two step process.", "First, the nuclear wave function is decomposed into some basis of constituents, nucleons in the first instance, nucleons and pions in more elaborate schemes, and later perhaps including more exotic objects like $\\Delta $ s, multiquark configurations, and so on.", "Then, the structure functions of the constituents are added incoherently to give the structure function of the whole nucleus.", "This is the “convolution\" model\" [26].", "The simplest version, which includes only nucleons, gives what are known as “Fermi motion\" corrections to the free nucleon structure function [27].", "These were calculated long before the present excitement about electroproduction from nuclei.", "More recently, the model has been extended to more exotic constituents [28], [29], [30] in an attempt to “explain\" the EMC effect.", "There is no adequate derivation of the convolution model.", "The parton analyses of the $§2$ will provide a framework in which the assumptions which lead to the convolution model may be analyzed and criticized.", "The two steps of the convolution model are summarized diagrammatically in Fig.", "REF .", "The nucleus, with baryon number $A$ and momentum $P$ , contains a constituent, label $T$ and momentum $p$ , which in turn contains a quark, flavor $a$ and momentum $k$ .", "The quark absorbs the virtual photon while the fragments of the nucleus and the constituent propagate into the final state without interaction or interference.", "Other diagrams in which fragments of the nucleus or the constituent $T$ interact or interfere are ignored.", "Some are shown in Fig.", "REF .", "Figure: Contributions which are ignored in convolution models.Superficially, they resemble the diagrams of Fig.", "6 (b)–(d) which were ignored in the parton model, but there is an important difference: Fig.", "6(b)–(d) can be dropped because $Q^2 \\rightarrow \\infty $ , so $\\xi ^2 \\rightarrow 0$ and the operator product expansion in QCD can be used to prove that they are O($1/Q^2$ ) compared to Fig.", "6(a).", "There is no analogous large mass scale characterizing the process enclosed in the dashed line in Fig.", "REF , so there is no a priori justification for ignoring the additional processes of Fig.", "REF .", "Furthermore, the fragments of the nucleus and the constituent have a long time, $\\xi ^0 \\lesssim 1/M x$ , to interact while awaiting the return of the active quark.", "Nevertheless, for certain constituents under favorable kinematic conditions (e.g., for nucleons in the weak binding limit, or for pions near $x = 0$ ) [10] ignoring final state interactions and interference may be justifiable.", "For the moment, we will simply ignore the problem and proceed.", "The quark distribution for nucleus $A$ with four-momentum $P$ is defined in analogy to Eq.", "(2.29): $f_{a/A}(x_A) = \\int \\frac{d^4k}{(2 \\pi )^4} \\delta \\left( \\frac{k^+}{P^+} - x_A \\right) {\\rm Tr}[\\gamma ^+ \\chi _{a/A}(k,P)] \\qquad \\mathrm {(4.1)}$ where $x_A = Q^2/2M_A 
\\nu $ .", "The content of Fig.", "REF is a convolution form for $\\chi _{a/A}(k,P)$ : $\\chi _{a/A}(k,P) = \\sum _T \\int \\frac{d^4p}{(2 \\pi )^4} \\chi _{a/T}(k,p) \\chi _{T/A}(p,P) \\qquad \\mathrm {(4.2)}$ where the sum covers all constituents of the nucleus.", "We have treated the constituent $T$ as a scalar (omitting a sum over the spins of $T$ ).", "The generalization to spin $1/2$ is straightforward.", "Note that the quark and/or constituent legs in Fig.", "REF have been properly included in Eq.", "(4.2): From Eq.", "(2.28) it is apparent that $\\chi _{a/T}(\\chi _{T/A)})$ is untruncated in the quark (constituent) legs but truncated in the constituent (nucleus) legs.", "Only the $+$ components of momentum really matter in Eq.", "(4.1)-(4.2).", "To make this manifest, we substitute the identity: $1 = \\int dy\\ \\delta (y_A-p^+/P^+) \\int dz_T\\ \\delta (z_T-k^+/p^+) \\qquad \\mathrm {(4.3)}$ where the ranges of $y_A$ and $z_T$ integrals are yet to be determined.", "After some algebra we find $f_{a/A}(x_A) = &\\sum _T \\int dy_Adz_T~\\delta (y_Az_T-x_A)\\int \\frac{d^4p}{(2 \\pi )^4} \\delta (y_A-p^+/P^+) \\chi _{T/A}(p,P)\\times \\\\\\times &\\int d^4k~\\delta (z_T-k^+/p^+) {\\rm Tr}[\\gamma ^+ \\chi _{a/T}(k,p)]\\ .", "\\nonumber $ The $k-$ integration defines a Lorentz invariant function of $z$ and $p^2$ which is the off-shell generalization of $f_{a/T}(x_T)$ defined in Eq.", "(2.29): $f_{a/T}(z_T,p^2)\\equiv \\int \\frac{d^4k}{(2 \\pi )^4} \\delta (z_T-k^+/p^+) Tr[\\gamma ^+ \\chi _{a/T}(k,p)]\\ .", "\\qquad \\mathrm {(4.5)}$ Let us save the $p^2$ integral for last by inserting $\\int dp^2_0~\\delta (p^2-p^2_0) = 1$ .", "Then the $d^4p$ integration gives $f_{a/A}(x_A) = \\sum _T \\int dy_A dz_T \\delta (y_Az_T-x_A) \\int d p^2_0f_{a/T}(z_T,p^2_0)f_{T/A}(p^2_0,y_A) \\qquad \\mathrm {(4.6)}$ where $f_{T/A}(p^2_0,y_A) = \\int \\frac{d^4p}{(2 \\pi )^4}~\\delta (p^2-p^2_0)~\\delta (y_A-p^+/P^+) \\chi _{T/A}(p,P) \\ .", "\\qquad \\mathrm {(4.7)}$ Note that $f_{T/A}(p^2_0,y_A)$ is the probability to find a constituent $T$ in nucleus $A$ with momentum fraction $y_A$ of the nucleus's $P=M_A/\\sqrt{2}$ and invariant mass $p^2_0$ , whereas $f_{a/T}(z_T,p^2_0)$ is the probability to find a quark of flavor $a$ with momentum fraction $z_T$ in an off shell target with invariant mass $p^2_0$ .", "So $f_{a/T}(z_T)$ , which we have defined in Eq.", "2.18, would correspond to $f_{a/T}(z_T,M^2_T)$ , but $f_{T/A}(y_A) = \\int dp^2_0 f_{T/A}(p^2_0,y_A)$ .", "In practice, no one uses a $p^2_0$ dependent quark distribution $f_{a/T}(z_T,p^2_0)$ in convolution model calculations.", "The rationale for this must be that $f_{T/A}(p^2_0,y_A)$ peaks very strongly at some $\\overline{p}^2$ so $f_{a/T}(z,p^2_0)$ can be replaced by $f_{a/T}(z,\\overline{p}^2)$ .", "Then the $p^2_0$ integration in Eq.", "(4.7) can be performed, but it is an additional assumption that $f_{a/T}(z_T,\\overline{p}^2) = f_{a/T}(z_T,M^2_T)$ unless of course $\\overline{p}^2 \\approx M^2_T$ .", "The result $f_{a/A}(x_A) = \\sum _T \\int dy_A dz_T \\delta (y_Az_T-x_A) f_{a/T}(z_T)f_{T/A}(y_A) \\qquad \\mathrm {(4.8)}$ together with the equations which define $f_{a/T}(z_T)$ and $f_{T/A}(y_A)$ comprise the convolution model.", "The range of the $y_A$ and $z_T$ integrations can be determined by arguments identical to those used in $§2.2$ to show $0<x_A<1$ .", "The result is $0<y_A$ , $z_T<1$ .", "The parton distribution functions are often written as functions of Bjorken's variable $x= M_A x_A/M$ .", "As given in Eq.", "2.30, $F_{a/A}(x) 
=(M/M_A)f_{a/A}(Mx/M_A)$ .", "If the constituents in question are nucleons and the binding is not too strong, then $\\chi _{N/A}(p,P)$ does peak strongly and near mass shell (compared to the scale for variation in $f_{a/T}(z_T,p^2)$ which is $\\Lambda \\sim 200-300$ MeV).", "However, the model is applied to other constituents which are far off shell.", "Pion contributions are believed to be largest for $p^0 \\approx 0$ and $|{\\bf p}| \\approx $ 300-400 MeV [31] giving $\\overline{p}^2 \\approx -(0.1-0.2)$ GeV$^2$ compared to $m^2_{\\pi } \\cong 0.02$ GeV$^2$ .", "Whether the distribution of quarks in a pion so far off shell is the same as the distribution on shell is anybody's guess.", "In any case, advocates of pion and other convolution based models ignore any $p^2$ dependence of the constituents' quark distributions.", "The assumptions leading to a convolution model are arguable at best and probably can only be supported on a case by case basis.", "Despite this, I believe they have substantial value as qualitative guides to nuclear effects and especially as a formalism for treating Fermi motion and other “trivial\" sources of nuclear modifications of structure functions.", "Here is a convenient summary of the convolution model formulas: $f_{a/A}(x_A) &= \\sum _T f_{a/T/A}(x_A) \\nonumber \\\\&= \\sum _T \\int dy_A dz_T \\delta (y_Az_T-x_A) f_{a/T}(z_T)f_{T/A}(y_A) $ $f_{a/T}(z_T) &= \\frac{d P_{a/T}}{d z_T} \\ , \\quad 0<z_T<1, \\ z_T=k^+/p^+ \\ , \\nonumber \\\\f_{T/A}(y_A) &= \\frac{d P_{T/A}}{d y_A} \\ , \\quad 0<y_A<1, \\ y_A=p^+/P^+ \\ , $ $\\int _0^1 dz_Tf_{a/T}(z_T) = N_{a/T} \\ \\ \\ &\\ \\ \\ \\int _0^1 dy_Af_{T/A}(y_A) = N_{T/A} \\nonumber \\\\\\int _0^1 dz_T z_T f_{a/T}(z_T) = \\epsilon _{a/T} \\ \\ \\ &\\ \\ \\ \\int _0^1 dy_A y_A f_{T/A}(y_A) = \\epsilon _{T/A} \\qquad \\mathrm {(4.11)} $ $\\int _0^1 dx_A f_{a/A}(x_A) &= \\sum _T N_{a/T} N_{T/A} \\nonumber \\\\\\int _0^1 dx_A x_A f_{a/A}(x_A) &= \\sum _T \\epsilon _{a/T} \\epsilon _{T/A} \\nonumber $ where $N_{T/A}$ is the number of constituents of type $T$ in nucleus $A$ and $\\epsilon _{T/A}$ is the fraction of the nucleus's $P^+$ carried by constituents $T$ .", "It is useful to introduce quark and constituent distributions depending on Bjorken's variable $x = M_Ax_A/M$ and $y=M_Ay_A/M$ .", "We leave $z_T$ as is since the constituent $T$ is in general not at rest and $M_T$ plays no special role.", "Thus, $x=\\sqrt{2}k^+/M$ and $F_{T/A}(y)=dP_{T/A}/dy=\\frac{M}{M_A}f_{T/A}(y_A)$ .", "Then, Eq.", "(4.8) becomes $F_{a/A}(x) &= \\sum _T \\int _0^{M_A/M}dy\\int _0^1 dz_T \\delta (x-yz_T)f_{a/T}(z_T)F_{T/A}(y)\\\\&=\\sum _T\\int _x^{M_A/M}\\frac{dy}{y}f_{a/T}(x/y)F_{T/A}(y) \\qquad \\mathrm {(4.12)} $ and the sum rules analogous to Eqs.", "(4.11) are $\\int _0^{M_A/M} dx \\overline{F}_{a/A}(x) &= \\sum _T N_{a/T}N_{T/A}/A \\qquad \\mathrm {(4.13)} \\\\\\int _0^{M_A/M} dx\\, x \\overline{F}_{a/A}(x) &= \\frac{M_A}{MA} \\sum _T \\epsilon _{a/T} \\epsilon _{T/A} \\qquad \\mathrm {(4.14)} $ where $\\overline{F}_{a/A}=F_{a/A}/A$ is the quark distribution per nucleon.", "Throughout this discussion, I have been careful not to make the approximation $M_A \\cong MA$ although this is a reasonable approximation in most cases.", "Convolution has some unexpected and important kinematic effects on the quark distribution of some constituents.", "Consider, for example, the contribution of pions in a model like Ericson and Thomas [31]: $p^0 \\approx 0$ , but $|{\\bf p}| \\approx 300 - 400$ MeV, so $y_{\\pi } \\sim |{\\bf p}| \\cos \\theta /M$ which is typically less than $\\sim 1/3$ .
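Since these convolution formulas are used repeatedly below, a small numerical sketch may help. The code is illustrative only (Python with NumPy): the nucleon and quark distributions are toy shapes, not realistic parametrizations, and the nucleus is taken to contain nucleons only. The point is that the convolution (4.12) automatically respects the number and momentum sum rules (4.13) and (4.14).

```python
# Minimal sketch of the convolution model, Eqs. (4.12)-(4.14), with nucleons
# as the only constituents. All shapes are invented for illustration.
import numpy as np

A, delta = 56, 0.008                 # toy binding fraction per nucleon
eta = 1.0 - delta                    # M_A/(M A); M_A/M = A*eta

z = np.linspace(0.0, 1.0, 4001)
f_aN = np.sqrt(z) * (1 - z)**3       # toy valence distribution in a free nucleon
f_aN *= 3.0 / np.trapz(f_aN, z)      # normalized to N_{a/N} = 3

# Toy F_{N/A}(y): a narrow peak at y = eta, so that int dy F_{N/A} = A
# and int dy y F_{N/A} ~ A*eta = M_A/M (nucleons carry all of P+).
y = np.linspace(eta - 0.4, eta + 0.4, 4001)
F_NA = np.exp(-0.5 * ((y - eta) / 0.08)**2)
F_NA *= A / np.trapz(F_NA, y)

def F_aA(x):
    """Eq. (4.12): F_{a/A}(x) = int_x^{M_A/M} dy/y f_{a/N}(x/y) F_{N/A}(y)."""
    fz = np.interp(x / y, z, f_aN, right=0.0)   # vanishes for x/y > 1
    return np.trapz(F_NA * fz / y, y)

xs    = np.linspace(0.0, 1.6, 3201)              # x can exceed 1 for a nucleus
FbarA = np.array([F_aA(xx) for xx in xs]) / A    # per-nucleon distribution

print(np.trapz(FbarA, xs))             # ~ 3 : number sum rule, Eq. (4.13)
print(np.trapz(xs * FbarA, xs))        # momentum sum rule, Eq. (4.14) ...
print(eta * np.trapz(z * f_aN, z))     # ... equals (M_A/MA) * eps_{a/N}
```

The valence quark distribution in the pion is expected on theoretical grounds and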
found experimentally to be quite “hard\", $f_{V/\\pi }(z_\\pi ) \\sim (1-z_\\pi )$ .", "Convolution converts this into $F_{V/\\pi /A}(x) = \\int _x^{M_A/M} \\frac{dy}{y} F_{\\pi /A}(y) f_{V/\\pi }(x/y)$ which is negligible for $x$ much larger than $1/3$ .", "So the hard pion distribution has been mapped to small $x$ .", "No such fate befalls the nucleon: $p^0 \\approx M$ , so $y_N \\approx 1 + p \\cos \\theta /M$ which peaks near $y_N =1$ .", "Nevertheless, the motion of nucleons does affect the quark distribution in the nucleus even if it is composed of quarks alone.", "The most obvious source of a nuclear effect in deep inelastic scattering comes from the fact that the nucleons in the nucleus are in motion.", "As a “baseline\" model for nuclear targets we assume the nucleus consists exclusively of nucleons and that the quark distribution in the nucleons are the same as in isolation.", "One might naively think if their kinetic energies are small with respect to $\\nu $ , the motion could be neglected as $\\nu \\rightarrow \\infty $ .", "This is not correct, as we shall see.", "We begin with $f_{N/A}(y_A) = \\int \\frac{d^4p}{(2 \\pi )^4} \\delta (y_A-p^+/P^+) \\chi _{N/A}(p,P) \\qquad \\mathrm {(4.15)}$ where $\\chi _{N/A}(p,P)$ is the forward, possibly virtual, nucleon nucleus scattering amplitude defined in analogy to Eq.", "(2.28).", "$\\chi _{N/A}(p,P) = \\int d^4 \\zeta e^{-ip \\cdot \\zeta } \\mathinner {\\langle {P|T(\\Phi ^+(\\zeta )\\Phi (0))|P}\\rangle }_c\\qquad \\mathrm {(4.16)}$ where $\\Phi $ is a nucleon interpolating field.", "$\\Phi $ is not uniquely defined, and when $p^2 \\ne M^2_N$ different choices yield different results.", "This reflects an inherent uncertainty when one attempts to use field-theoretic methods to manipulate composite objects.", "Spin has been suppressed in Eq.", "(4.16).", "If we substitute Eq.", "(4.16) into Eq.", "(4.15) and perform as many $\\zeta $ and $p$ integrations as possible we obtain a form analogous to Eq.", "(2.18): $f_{N/A}(y_A) = \\sum _n \\delta (y_A-1+P^+_n/P^+)|\\mathinner {\\langle {n|\\Phi |P}\\rangle }|^2 \\qquad \\mathrm {(4.17)}$ where the sum is on all states which can be obtained by removing a single nucleon from the nucleus leaving behind a state with $P^+_n = (1-y_A)P^+$ .", "The problem is how to calculate $|\\mathinner {\\langle {n|\\Phi |P}\\rangle }|^2$ .", "This must be done inclusively, i.e., all possible states {$\\mathinner {|{n}\\rangle }$ } must be included and they must be physical states.", "A variety of approximations can be made but great care must be taken to ensure that the number and $p^+$ sum rules (Eqs.", "(4.11)) remain valid.", "The most naive approach, to replace the nucleus by an independent particle model in which nucleons occupy energy eigenstates in some potential, obeys neither sum rule and must be altered in some ad hoc way before it can be used in this context.", "When this form for $f_{N/A}(y_A)$ is inserted into Eq.", "(4.12) to obtain $F_{a/N/A}(x)$ we obtain an independent nucleon model for $F_{a/A}(x)$ which includes what are generally known as “Fermi motion corrections\": $\\overline{F}_{a/N/A}(x) = \\int _x^{M_A/M} \\frac{dy}{y} f_{a/N}(x/y) \\overline{F}_{N/A}(y) \\qquad \\mathrm {(4.18)}$ (Reminder: $F_{N/A}(y)=dP_{T/A}/dy=\\frac{M}{M_A}f_{N/A}(y_A)$ , $\\overline{F}_{a/N/A}=F_{a/N/A}/A$ and $\\overline{F}_{N/A}=F_{N/A}/A.$ ) $F_{a/N/A}$ has several important features independent of the explicit form of $\\overline{F}_{N/A}$ .", "First, as already noted 
$\\overline{F}_{a/N/A}(x)/F_{a/N}(x)$ diverges as $x \\rightarrow 1$ , so Fermi motion corrections to the ratio are large and positive near $x \\sim 1$ .", "The divergence of the ratio is deceptive because both $\\overline{F}_{a/N/A}(x)$ and $F_{a/N}(x)$ are very small for $x \\sim 1$ .", "Second, Fermi motion corrections cannot change the number of quarks of any flavor: $\\int _0^{M_A/M}(\\overline{F}_{a/N/A}(x) - F_{a/N}(x)) = 0$ , which can be obtained from Eq.", "(4.13) with $T = N$ and $N_{T/A} = A$ .", "Third, the effect of nuclear binding appears to be to decrease slightly the $p^+$ carried by the quarks (and antiquarks) even though the quark distribution in the nucleon is not altered.", "This can he seen from Eq.", "(4.14): $M_A/MA = 1 -\\delta $ where $\\delta $ is the binding energy per nucleon in units of the nucleon mass, and $\\epsilon _{N/A} = 1$ (the nucleons carry all the nucleus $P^+$ in this simple model), so $\\int _0^{M_A/M} dx x[\\overline{F}_{a/N/A}(x)-F_{a/N}(x)] = -\\delta \\epsilon _{a/N}$ .", "This goes in the right direction toward explaining the EMC effect but explicit calculations with “realistic\" nuclear wave functions fail to get a large enough shift in the valence quark distribution and get the wrong shape (i.e.", "$x-$ dependence) of the effect.", "Also, the model cannot produce an increase in the ocean quark pairs.", "Recently, it has been claimed that a model of the form we have been discussing can account for the valence quark part of the EMC effect [32].", "In that model, $\\epsilon _{N/A} < 1$ so $P^+$ has been lost, presumably to the constituents responsible for nuclear binding, and the quark content of those constituents has not been included in the calculation.", "It is interesting to explore the effect of binding in a simple model.", "Let us assume the nucleons form a relativistic, degenerate free Fermi gas (FG) with Fermi momentum $k_F$ .", "Then $|\\mathinner {\\langle {n|\\Phi |P}\\rangle }|^2 \\equiv \\frac{dN}{d^3p} = \\frac{3A}{4 \\pi k^3_F} \\theta (k_F-|{\\bf p}|) \\ .", "\\qquad \\mathrm {(4.19)}$ The constant $3A/4\\pi k^3_F$ is chosen so $\\int ^{k_F} dN = A$ .", "The bound nucleons must have an effective mass $M^\\star < M_N$ in this model otherwise the sum over the energies of the nucleons would exceed $M_A$ .", "Substituting into Eq.", "(4.18) and evaluating the $d^3p$ integral, $f^{FG}_{N/A}(y_A) = \\frac{3}{4} \\frac{AM_A}{k_F} \\left( 1-\\frac{M^2_A}{k^2_F} \\left(\\frac{y^2_A-M^{\\star 2}/M^2_A}{2 y_A} \\right)^2 \\right) \\ , \\ y_-<y_A<y_+ \\qquad \\mathrm {(4.20)}$ where $y_\\pm $ are the values for which $f^{FG}_{N/A}(y_\\pm )=0$ .", "One can check that $f^{FG}_{N/A}(y_A)$ satisfies both $\\int _0^1 dy_A f^{FG}_{N/A}(y_A)=A$ and $\\int _0^1 dy_A y_A f^{FG}_{N/A}(y_A)=1$ provided $M^\\star $ is chosen so the energy of the Fermi gas is $M_A$ .", "For non-relativistic nucleons ($M^\\star = M_A/A + O(k^3_F/M)$ ) a quadratic approximation suffices $\\overline{F}^{FG}_{N/A}(y) = \\frac{M}{AM_A} f^{FG}_{N/A}(y_A) \\approx \\frac{3}{4 \\lambda } \\left( 1-\\frac{(y-\\eta )^2}{\\lambda ^2} \\right) \\ , \\ \\eta -\\lambda <y<\\eta + \\lambda \\ , \\qquad \\mathrm {(4.21)}$ where $\\lambda =k_F/M$ and $\\eta =M_A/MA\\ (\\le 1)$ .", "Since $\\overline{F}^{FG}_{N/A}(y)$ has maximum height $\\sim 1/\\lambda $ and width $\\sim \\lambda $ and since it is convoluted with a smooth function $f_{a/N}(x/y)$ in Eq.", "(4.18), it is convenient to approximate it as a generalized function $F^{FG}_{N/A}(y) \\cong \\delta (y-\\eta ) + \\frac{\\lambda 
^2}{10} \\delta ^{^{\\prime \\prime }}(y-\\eta ) \\qquad \\mathrm {(4.22)}$ for $\\lambda <<1$ .", "Note that $F^{FG}_{N/A}(y)$ in this form satisfies the required sum rules trivially.", "In particular, $\\int _0^{M_A/M} dy y F^{FG}_{N/A}(y)= \\eta = M_A/MA$ which leads to the decrease in the quark's momentum noted above.", "Substituting into Eq.", "(4.18), we obtain $\\overline{F}^{FG}_{a/N/A}(x) = \\frac{1}{\\eta } f_{a/N}(x/\\eta ) + \\frac{\\lambda ^2}{10} d^2 /dy^2 \\frac{1}{y} f_{a/n}(x/y) |_{y=\\eta } \\ .", "\\qquad \\mathrm {(4.23)}$ In this model the effects of Fermi smearing are small (for $k_F = 300$ MeV, $\\lambda ^2/10 \\approx 0.01$ , for typical nuclei $1-\\eta \\le 0.01$ ).", "A sample calculation of $\\overline{F}_{a/N/A}(x)/F_{a/N}(x)$ is shown in Fig.", "REF .", "It clearly cannot account for the EMC effect.", "Figure: Estimates of Fermi motion effects in a Fermi gas model for both valence and ocean quarks.Within the context of convolution models, there are only two alternatives: either other constituents must be present (e.g., pions, $\\Delta $ 's, 6 quark bags, etc.)", "[28], [29] or the structure function of the nucleons must be modified by the nuclear medium [30] I will not discuss either alternative in these lectures.", "The interested reader should consult the references for some work in these directions.", "Instead, I will describe a framework for analyzing the EMC effect which emerges from the scaling properties of QCD.", "In interacting field theories the structure functions $F_1$ and $F_2$ depend on both $x$ and $Q^2$ even at very large $Q^2$ .", "The $Q^2-$ dependence will give us a new handle on the distance scales characterizing the target.", "To understand the $Q^2-$ dependence we must take another excursion into formalism.", "At present, the $x-$ dependence of $F_2(x,Q^2 )$ at fixed $Q^2$ cannot be predicted for any hadronic target in a rigorous fashion.", "It depends on the non-perturbative dynamics which confines quarks.", "However, given $F_2(x,Q^2)$ at some large $Q^2 = Q^2_0$ , QCD perturbation theory enables one to predict $F_2(x,Q^2)$ at all $Q^2 > Q^2_0$ and at $Q^2 < Q^2_0$ down to some minimum which appears to be of order 1 GeV$^2$ .", "In QCD, $F_2(x,Q^2)$ depends logarithmically on $Q^2$ at large $Q^2$ .", "The logarithmic $Q^2-$ dependence has been verified experimentally and constitutes one of the major quantitative tests of the theory.", "In addition to $\\ln Q^2$ corrections there are expected to be $O(1/Q^2)$ and higher order corrections which become important at low-$Q^2$ .", "In this chapter, I first give a heuristic “derivation\" of the logarithmic $Q^2-$ dependence of quark distributions.", "Next, I will describe some of the formalism behind the $Q^2-$ dependence.", "Then, I catalogue and discuss $O(1/Q^2)$ corrections.", "Finally, I compare the quark description of hadrons in QCD with more naive quark models.", "This analysis applies equally well to any target, so the target subscripts $T$ and $A$ will generally be suppressed .", "Until further notice the scaling variable is $x_T$ with $0 < x_T < 1$ .", "In QCD, quarks are coupled to gluons in much the same fashion that electrons are coupled to photons in QED.", "The fundamental vertex is shown in Fig.", "REF (a).", "Instead of the electric charge $e$ , one has $g\\lambda ^a_{ij}/2$ , where $g$ is the QCD coupling ($\\alpha _c \\equiv g^2 /4 \\pi $ ).", "$\\lambda ^a_{ij}$ are Gell-Mann's matrices (normalized so ${\\rm tr}\\,\\lambda ^2 = 2$ ) describing the coupling of quark with 
color $i$ to quark with color $j$ ($i,j = 1,2,3$ ) by emitting a gluon with color $a$ ($a = 1, ... 8$ ).", "In QED it is well-known that the electric and magnetic fields of a relativistic electron are predominantly transverse and look like the fields of a photon [33].", "This is the basis of the Weizsäcker-Williams or “equivalent photon\" approximation in QED.", "A careful study of the analogous process in QCD will lead to the logarithmic scaling violations we seek [34].", "The equivalent photon approximation is usually formulated in a frame in which the electron is extremely relativistic (moving for definiteness, in the $x$ -direction) $-$ the “infinite momentum frame\".", "The number of photons associated with the electron depends on the energy of the photon ($E_\\gamma = x E_e$ ) and the impact parameter at which one probes the electron's electromagnetic field: $\\frac{d N_\\gamma }{dx dA} \\sim \\frac{\\alpha }{\\pi ^2 b^2 x} + {\\rm terms\\ less\\ singular\\ in\\ b\\ and\\ x} \\qquad \\mathrm {(5.1)}$ where $dA=2\\pi b db$ and the variables are defined in Fig.", "REF .", "Figure: Weizsäcker-Williams kinematics.The proper interpretation of Eq.", "(5.1) is that a measurement sensitive to the intensity of the electron's electromagnetic field performed at impact parameter $b$ and absorbing energy $E_\\gamma =x E_e$ could not differentiate between a passing electron and the equivalent number of photons.", "Eq.", "(5.1) is derived, for example, by Jackson.", "The derivation neglects the recoil of the electron, which must lose energy when a photon with $E_\\gamma /E_e =x$ is absorbed.", "This is reasonable for $x \\approx 0$ .", "For larger $x$ , one must include recoil and electron spin effects, which requires calculation of Feynman graphs.", "The lowest order graph corresponding to the measurements I've described is that of Fig.", "REF (a).", "The calculations can be found in Ref. 
[35].", "The result is $\\frac{1}{x} \\rightarrow \\frac{1}{2x} (1+(1-x)^2)$ .", "QED and QCD are identical to this order except $e \\rightarrow g \\lambda ^a_{ij}$ .", "So the number of gluons (summed over all colors) with momentum fraction $x$ at impact parameter $b$ in a quark (averaged over colors) is $\\frac{dN_g}{dxdb} \\sim \\frac{2 \\alpha _c}{\\pi b} \\frac{1}{3} {\\rm Tr} \\sum _a \\left( \\frac{\\lambda ^a}{2} \\right)^2 \\left[\\frac{1+(1-x)^2}{2x} \\right] \\qquad \\mathrm {(5.2)}$ to leading order in $b$ and $\\alpha _c$ .", "Since $\\frac{1}{3} {\\rm Tr} \\sum _a \\left( \\frac{\\lambda ^a}{2} \\right)^2 = \\frac{4}{3} \\qquad \\mathrm {(5.3)}$ we have $\\frac{dN_g}{dxdb} = \\frac{8 \\alpha _c}{3 \\pi b} \\left[\\frac{1+(1-x)^2}{2x} \\right] \\ .", "\\qquad \\mathrm {(5.4)}$ Figure: Feynman diagrams corresponding to Weizsäcker-Williams processes: (a) An (electron) quark radiating a photon (gluon); (b) An electron (quark) “radiating\" an electron (quuark); Vertex correction required at x=1x=1.If one probes a quark (or electron) by delivering a momentum transfer $Q_\\perp = \\sqrt{Q^2}$ , then $b_{\\rm min}=1/\\sqrt{Q^2}$ plays the role of a resolution: all gluons (or photons in QED) with $b \\ge b_{\\rm min}$ will appear distinct from the quark (electron), but the rest ($b < b_{\\rm min}$ ) will be lumped into what one calls the quark (electron) when probed with that resolution.", "This is evident, for example, in the Weizsäcker-Williams calculation of bremsstrahlung (per unit $x$ ) is $\\approx x E_e \\int _{b_{\\rm min}}^\\infty db \\frac{dN_\\gamma }{dx db}$ with $b_{\\rm min} = 1/\\sqrt{Q^2_\\perp }$ .", "Thus, $\\frac{d N_g}{dx d \\ln Q^2/Q^2_0} = \\frac{2 \\alpha _c}{3 \\pi } \\frac{1+(1-x)^2}{x} \\qquad \\mathrm {(5.5)}$ is the rate of change with $t \\equiv \\ln Q^2/Q^2_0$ of the probability to find a gluon of momentum fraction $x$ in a quark.", "(Note that the logarithm of $Q^2$ arose when $b\\propto 1/\\sqrt{Q^2}$ was substituted for $db/b$ in Eq.", "5.4 and that $Q^2_0$ is an arbitrary scale for the logarithm.)", "In the following, I will often use the variable $t=\\ln (Q^2/Q_0^2)$ .", "So we define $P_{g/q}(x,t)=dN_g/dxdt$ .", "($P_{g/q}$ appears to be independent of $t$ , but in QCD $\\alpha _c$ depends logarithmically on $t$ .)", "When a quark radiates a gluon it leaves behind a quark of lower momentum.", "So we can determine the rate of change with $t$ of the probability that a measurement on a quark with energy $E_q$ will detect a quark with $E^\\prime _q = xE_q$ : $P_{q/q}(x,t) = \\frac{dN_g(1-x)}{dxdt} = \\frac{2 \\alpha _c}{3 \\pi } \\frac{1+x^2}{1-x}\\qquad \\mathrm {(5.6)}$ corresponding to the Feynman graphs of Fig.", "REF (b).", "The three gluon coupling of QCD shown in Fig.", "REF (b) leads to a probability for a gluon to contain gluons of lower momentum, and the $q\\bar{q}q$ vertex of Fig.", "REF (a) leads to a probability for a gluon to contain quark-antiquark pairs.", "If we confine our attention to valence quarks we need not study these other processes since they produce only ocean $q \\bar{q}$ pairs.", "Eqs.", "(5.5) and (5.6) lead to the notion of a quark distribution “evolving\" with increased $t \\sim \\ln Q^2$ .", "As $\\ln Q^2$ grows, valence quarks emit gluons, gluons in turn emit quark-antiquark pairs and more gluons.", "This “evolutionary\" view of the $Q^2$ dependence of quark (and gluon) distributions was first suggested by Kogut and Susskind [36] and was developed for QCD by Altarelli and Parisi [34].", "The idea that the definition of 
a quark or gluon depends upon the mass or distance scale at which one probes it is a familiar one in quantum field theory.", "It is related to renormalization.", "A quark (or electron) propagating in isolation couples to quantum fluctuations in the vacuum.", "When we perform some measurement on the quark we lump some of the fluctuations (and all of the very short distance ones which lead to divergences) into the definition of the quark and treat the remainder as radiative corrections.", "To do this we must introduce a mass (or length) scale ($\\mu $ ) known as the “renormalization point\" into the theory, which distinguishes, roughly speaking, those fluctuations we treat explicitly ( $\\Delta x > \\mu ^{-1}$ ) from those we incorporate into the definition of the particles ($\\Delta x< \\mu ^{-1}$ ).", "It is well-known that $\\mu $ is arbitrary - nothing physical depends on it - but the scale invariance of QCD allows us to trade $\\mu ^2$ -dependence for $Q^2$ -dependence in a fashion which will be outlined briefly below ($§$ 5.2).", "Up to now, we've worked in the infinite momentum frame.", "$x$ measures the $z$ -component of a daughter particle's momentum relative to its parent: $p^3_\\infty = xP^3_\\infty $ .", "The subscript \"$\\infty $ \" denotes quantities measured in the infinite momentum frame.", "The particles are not far from mass shell and have limited transverse momenta (we can choose $P^3_\\infty \\gg \\sqrt{Q^2}$ ), so $p^0_\\infty \\cong x P^3_\\infty $ and $p^+_\\infty = x P^+_\\infty \\ .", "\\qquad \\mathrm {(5.7)}$ The ratio $p^+/P^+$ is invariant under Lorentz transformations along the $z$ -axis.", "This follows from the Lorentz transformation in the form $p^{\\pm \\,\\prime }= e^{\\pm y}p^\\pm $ , ${\\mathbf {p}}^\\prime _\\perp = {\\bf p}_\\perp $ , where $\\beta = \\tanh y$ is the relative velocity of the frames along the $z$ -axis.", "Thus, Eq.", "(5.7) implies $p^+ = x P^+ \\qquad \\mathrm {(5.8)}$ in the laboratory, and we can identify $x$ as the variable we've been using all along.", "$P_{q/q}(x,t)$ appears to be singular at $x = 1$ .", "This presents a problem because we would like the number of valence quarks in a target to be independent of $Q^2$ , $\\frac{d}{dt} \\int _0^1 dx \\frac{dN_q}{dx} = \\int _0^1 dx P_{q/q}(x,t) = 0 \\,.\\qquad \\mathrm {(5.9)}$ With Eq.", "(5.6) as it stands, the integral diverges.", "The resolution of this problem requires methods outside the scope of these lectures [38].", "The proper prescription is to interpret $1/(1-x)$ as a distribution: $1/(1-x) \\rightarrow 1/(1-x)_+$ , defined by $\\int _0^1dx \\frac{f(x)}{(1-x)_+}\\equiv \\int _0^1 dx(f(x)-f(1))/(1-x) \\qquad \\mathrm {(5.10)}$ and to add a $\\delta $ -function to $P_{q/q}(x,t)$ , so that Eq.", "(5.9) is satisfied.", "Since $\\int _0^1 dx (1+x^2)/(1-x)_+ = -\\frac{3}{2}$ we find $P_{q/q}(x,t) = \\frac{2 \\alpha _c}{3 \\pi } \\left( \\frac{1+x^2}{(1-x)_+} + \\frac{3}{2} \\delta (x-1) \\right) \\ .", "\\qquad \\mathrm {(5.11)}$ The rate of change of the quark distributions in a target will be determined by the functions $P_{q/q}(x, t)$ and $P_{q/g}(x,t)$ and by the quark and gluon distributions at $t$ : A quark or gluon with momentum fraction $y$ will be observed to consist of quarks with momentum fraction $zy$ according to $P_{q/q}(z, t)$ and $P_{q/g}(z,t)$ respectively.
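The plus-prescription is easy to verify numerically. The sketch below (Python; purely illustrative) checks the integral quoted above and confirms that the splitting function of Eq. (5.11), including its $\\delta$-function term, integrates to zero as required by Eq. (5.9).

```python
# Numerical check of the plus-prescription, Eqs. (5.9)-(5.11).
import numpy as np

def plus_integral(f, n=1_000_001):
    """int_0^1 dx f(x)/(1-x)_+ = int_0^1 dx (f(x)-f(1))/(1-x), Eq. (5.10)."""
    x = np.linspace(0.0, 1.0, n)[:-1]   # drop x = 1 to avoid 0/0 (limit is -f'(1))
    return np.trapz((f(x) - f(1.0)) / (1.0 - x), x)

# The integral used above Eq. (5.11):
print(plus_integral(lambda x: 1.0 + x**2))          # -1.4999... ~ -3/2

# Eq. (5.9): the full P_{q/q} of Eq. (5.11) integrates to zero, so the number
# of valence quarks does not change with t (the prefactor 2*alpha_c/3pi drops out).
print(plus_integral(lambda x: 1.0 + x**2) + 1.5)    # ~ 0
```

Clearly, evolution is described mathematically by convolution in precise analogy to $§4.2$ : $\\frac{df_{\\rm N.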
S.}}{dt} = \\frac{\\alpha _c}{2 \\pi } \\int _x^1 \\frac{dy}{y} \\overline{P}_{q/q} (x/y)f_{\\rm N. S.}(y,t) \\qquad \\mathrm {(5.12)}$ where $P_{q/q} \\equiv \\frac{\\alpha _c}{2 \\pi } \\overline{P}_{q/q}$ .", "The subscript $\\rm N. S.$ denotes non-singlet and indicates a quark distribution with non-trivial flavor quantum numbers such as the valence quark distribution of $§2$ .", "A non-singlet distribution is one which cannot be populated by the quark-antiquark pairs evolved from gluons.", "Singlet distributions, on the other hand, satisfy coupled integro-differential equations in which the gluon distribution appears as well.", "To describe the evolution of $F_2(x,Q^2)$ it is necessary to solve these differential equations involving valence and ocean quark distributions and a gluon distribution $f_g(x,t)$ .", "Starting values of all distributions at $t=$ 0, i.e., $Q^2 = Q^2_0$ , are required as input.", "The general features of the $t-$ dependence are clear from the Weizsäcker-Williams approximation.", "As $t$ increases, valence quarks lose momentum by emitting gluons — i.e., probed with higher resolution more of a quark's momentum appears to be carried by its gluon field.", "Gluons, in turn, lose momentum to quark-antiquark pairs.", "The net result, with increasing $t$ , is a transfer of the valence quark momenta to newly created pairs.", "The gluons are caught in the middle - valence quarks emit gluons but the gluons turn into $q\\bar{q}$ pairs.", "Not surprisingly, the process saturates: at very large $\\ln Q^2$ the fraction of the momentum of any target carried by gluons levels off at $\\approx $ 0.47 [38].", "The $Q^2$ -dependence of $F_2(x,Q^2)$ is illustrated schematically in Fig.", "REF .", "Close, Roberts and Ross were struck by the similarity of Fig.", "REF to the shape of the EMC effect at fixed $Q^2$ , and were led to the “rescaling\" analysis which is the subject of the next chapter.", "Figure: $Q^2$ -dependence of the structure function, $dF_2(x,Q^2)/d \\ln Q^2$ , at $Q^2 = 3.2$ GeV$^2$ using $F_2(x,Q^2)$ from SLAC data. Integrodifferential equations like Eq.", "(5.12) are not easily solved directly.", "We can gain considerable insight, though, by taking moments in $x$ on both sides and using the special properties of convolutions.", "Let $M^n_{\\rm N. S.} = \\int _0^1 dx x^{n-1} f_{\\rm N. S.}(x,t) \\ .", "\\qquad \\mathrm {(5.13)}$ Then, it is easy to show from Eq.", "(5.12), that $\\frac{dM^n_{\\rm N. S.}}{dt} = \\frac{\\alpha _c}{2 \\pi }B_nM^n_{\\rm N. S.}(t) \\qquad \\mathrm {(5.14)}$ where $B_n = \\int _0^1 dz z^{n-1} \\overline{P}_{q/q} (z) \\ .", "\\qquad \\mathrm {(5.15)}$ Eq.", "(5.14) is easier to solve.", "The character of the solution depends on the $t$ -dependence of $\\alpha _c$ , which I have not yet specified.", "In quantum field theories, the effective coupling has a definite and calculable $t$ -dependence [39].", "In free field theory $\\alpha _c = 0$ and $M^n_{\\rm N. S.}$ is constant - corresponding to exact Bjorken scaling.", "In some model field theories the effective coupling becomes constant at large $t$ , $\\lim _{t \\rightarrow \\infty } \\alpha _c(t) \\rightarrow \\alpha _0$  [40].", "This behavior is known as an “ultraviolet fixed point\".", "In such a theory, the solution of Eq.", "(5.14) at large $Q^2$ is $M^n_{\\rm N. S.}(Q^2) \\sim M^n_{\\rm N.
S.} (Q^2_0) \\left(\\frac{Q^2}{Q^2_0}\\right)^{\\frac{\\alpha _0 B_n}{2 \\pi }} \\qquad \\mathrm {(5.16)}$ i.e.", "scaling is violated by powers of $Q^2$ .", "The coefficient $B_n$ is known as the “anomalous dimension\".", "In QCD, the effective coupling vanishes as $t \\rightarrow \\infty $ but it vanishes too slowly to use the approximation $\\alpha _c=0$ at large $t$ : $\\alpha _c(t)$ can be shown to have an expansion of the form: $\\frac{\\alpha _c(0)}{\\alpha _c(t)} = 1 + b \\alpha _c(0)t + O(\\alpha _c(0)^2) \\ , \\qquad \\mathrm {(5.17)}$ or to leading order, $\\alpha _c(t) \\sim \\frac{\\alpha _c(0)}{1+bt\\alpha _c(0)} \\ .", "\\qquad \\mathrm {(5.18)}$ The coefficient $b$ is $b = \\frac{1}{4 \\pi } \\left( 11-\\frac{2}{3}N_f \\right) \\ , \\qquad \\mathrm {(5.19)}$ where $N_f$ is the number of quark flavors with masses small compared with $\\sqrt{Q^2}$ , in practice $N_f \\sim 3-4$ .", "Eq.", "(5.18) can be rewritten $\\alpha _c(Q^2) \\sim \\frac{1}{b \\ln Q^2/\\Lambda ^2} \\qquad \\mathrm {(5.20)}$ where $\\Lambda ^2 = Q^2_0 \\exp (-1/b \\alpha _c(0))$ .", "This well-known behavior of $\\alpha _c(Q^2)$ is known as asymptotic freedom.", "It is very special to non-Abelian gauge field theories and it makes it possible to calculate with QCD at large $Q^2$  [41].", "If $\\alpha _c(t)$ behaves as in Eq.", "(5.18), the moments evolve logarithmically: $M^n_{\\rm N. S.}(t) = M^n_{\\rm N. S.}(0)(\\alpha _c(0)/\\alpha _c(t))^{B_n/2 \\pi b} \\ .", "\\qquad \\mathrm {(5.21)}$ A more familiar form of Eq.", "(5.21) is obtained by changing variables to $Q^2$ and taking the logarithm $\\ln M^n_{\\rm N. S.}(Q^2) \\sim \\ln M^n_{\\rm N. S.}(Q^2_0) + \\frac{B_n\\alpha _c(Q^2_0)}{2 \\pi b} \\ln Q^2/Q^2_0 \\ , \\qquad \\mathrm {(5.22)}$ where we've replaced $\\ln (1+bt\\alpha _c(0)) \\sim bt\\alpha _c(0)$ since we are working to lowest order in $\\alpha _c$ .", "The coefficients {$B_n$ }, still known as (non-singlet) anomalous dimensions, can be computed from Eqs.", "(5.11) and (5.15) $B_n = \\frac{4}{3} \\left\\lbrace \\frac{1}{n(n+1)} - 2 \\sum _{j=2}^n \\frac{1}{j} -\\frac{1}{2} \\right\\rbrace \\ .", "\\qquad \\mathrm {(5.23)}$ $B_1$ is zero, which corresponds to the fact that the number of valence quarks does not change with $Q^2$ .", "All the others are negative, so all other moments decrease monotonically with $Q^2$ at large $Q^2$ .", "Figure: Testing QCD in inelastic lepton scattering.", "At large $Q^2$ , $[M^n]^{2 \\pi b/B_n}$ lies on a straight line with a predicted slope when plotted versus $\\ln Q^2$ . In principle, a particularly simple way to test QCD is to plot the moments of the valence quark distribution versus $\\ln Q^2$ : the slope is predicted by QCD up to a single parameter, $\\alpha _c(Q^2_0)$ or $\\Lambda $ .", "In practice, the situation is quite complicated: $F_2(x,Q^2)$ is not well-measured at all $x$ , so moments are hard to compute; non-singlet quark distributions must be extracted from $F_2(x,Q^2)$ in order to apply the simple analysis I've described; $\\alpha _c(t)$ is not so small where there is good data (remember $d \\sigma /dE^\\prime d\\Omega $ falls like $1/Q^4$ ) so higher corrections to Eq.", "(5.17) must be included at least at the low-$t$ end.", "Nevertheless, the moments have been extracted and compared with theory and the agreement is excellent as shown in Fig.", "REF .", "For our purposes, it is important to remember that the logarithms of the moments depend linearly on $\\ln Q^2$ and that the slope is negative and independent of the target.
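The anomalous dimensions and the moment evolution are simple enough to reproduce directly. The sketch below is illustrative only: the values of $\\Lambda$ , $N_f$ , $Q^2_0$ and the starting moment are invented numbers, not a fit. It computes $B_n$ from Eq. (5.23), checks $B_1=0$ , evolves a moment with the one-loop coupling of Eqs. (5.19)-(5.21), and confirms the statement in the figure caption that $[M^n]^{2\\pi b/B_n}$ is linear in $\\ln Q^2$ (for $n\\ge 2$ ).

```python
# Moments and one-loop evolution, Eqs. (5.19)-(5.23).
# Lambda^2, N_f, Q_0^2 and the starting moment are illustrative numbers only.
import numpy as np

N_f, Lam2, Q02 = 4, 0.04, 5.0                      # Lambda^2, Q_0^2 in GeV^2
b = (11 - 2 * N_f / 3) / (4 * np.pi)               # Eq. (5.19)
alpha = lambda Q2: 1.0 / (b * np.log(Q2 / Lam2))   # Eq. (5.20)

def B(n):
    """Non-singlet anomalous dimension, Eq. (5.23)."""
    return (4.0 / 3.0) * (1.0 / (n * (n + 1))
                          - 2 * sum(1.0 / j for j in range(2, n + 1)) - 0.5)

print([round(B(n), 4) for n in range(1, 6)])   # B_1 = 0, the rest negative

def M(n, Q2, M0=1.0):
    """Eq. (5.21): M^n(Q^2) = M^n(Q_0^2) * (alpha(Q_0^2)/alpha(Q^2))^{B_n/(2 pi b)}."""
    return M0 * (alpha(Q02) / alpha(Q2)) ** (B(n) / (2 * np.pi * b))

# Figure-caption check: [M^n]^{2 pi b / B_n} is linear in ln Q^2.
Q2  = np.array([10.0, 20.0, 40.0, 80.0])       # equally spaced in ln Q^2
lin = M(3, Q2) ** (2 * np.pi * b / B(3))
print(np.diff(lin, 2))                          # second differences ~ 0
```

To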
get more insight into the $Q^2$ -dependence of deep inelastic structure functions it is necessary to return to coordinate space.", "In $§2$ we derived the parton model by studying the spacetime dependence of the product of currents.", "This gave us considerable insight into the relation between the $x$ -dependence of $F_2(x)$ and the quark correlation function in the target ground state.", "Now I want to look for similar insight into the $Q^2-$ dependence we've found in QCD and to make sure our earlier insight is not spoiled by the interactions we've added.", "(It isn't.)", "The starting point is Wilson's operator product expansion [42] (OPE).", "The idea is that the product of operators simplifies in the limit that their arguments coincide: $\\lim _{\\xi ^\\mu \\rightarrow 0} A(\\xi )B(0) \\sim \\sum _{\\lbrace \\beta \\rbrace } C_{\\lbrace \\beta \\rbrace }(\\xi )O_{\\lbrace \\beta \\rbrace } (0) \\qquad \\mathrm {(5.24)}$ where $C_{\\lbrace \\beta \\rbrace }$ are $C$ -number functions and $O_{\\lbrace \\beta \\rbrace }$ are finite local operators.", "$C_{\\lbrace \\beta \\rbrace }$ may depend on any other parameters in the theory such as masses and coupling constants, in addition to $\\xi $ ; $\\lbrace \\beta \\rbrace $ are all labels which might occur in such an expansion (Lorentz and internal symmetry indices, etc.).", "The content of Eq.", "(5.24) is that the singularities are factored out from the operators and that the terms in the expansion can be organized in decreasing order of singularity as $\\xi _\\mu \\rightarrow 0$ .", "Eq.", "(5.24) is useful only for $\\xi ^\\mu \\approx 0$ , the short distance limit.", "This will translate into information about the Compton amplitude in the limit $Q^2 \\rightarrow \\infty $ and $\\omega \\rightarrow 0$ ($\\omega = 1/x$ ).", "Deep inelastic scattering probes the light-cone and requires $\\omega \\ge 1$ , but the dispersion relations discussed in $§1.2$ will provide the necessary connection between the two regimes.", "It is easy to check that Eq.", "(5.24) works in free field theories.", "For example, 1.", "In a free, massless, scalar field $\\phi (\\xi )\\phi (0) &= \\mathinner {\\langle {0|\\phi (\\xi )\\phi (0)|0}\\rangle }+ : \\phi (\\xi )\\phi (0): \\nonumber \\\\&= \\frac{i}{4 \\pi ^2(-\\xi ^2+i \\epsilon \\xi ^0)} I \\nonumber \\\\&+ \\sum _{n=0}^{\\infty } \\frac{\\xi ^{\\mu _1}....\\xi ^{\\mu _n}}{n!", "}: \\phi (0) \\overleftarrow{\\partial }_{\\mu _1},...\\overleftarrow{\\partial }_{\\mu _n} \\phi (0): $ $I$ is the identity, and $:\\phi (0)...\\phi (0):$ are the regular operators.", "Their “coefficient functions\" are $C$ -numbers.", "Note that the first term in Eq.", "(5.25) diverges as $\\xi _\\mu \\rightarrow 0$ but the rest vanish with successfully higher powers of $\\xi _\\mu $ .", "2.", "In a free massless Dirac theory let $J(\\xi ) = : \\overline{\\psi }(\\xi )\\psi (\\xi ):$ then in analogy to Eq.", "(2.1) et seq.", "$[J(\\xi ),J(0)] &= \\frac{1}{2 \\pi }[\\partial ^\\rho \\delta (\\xi ^2)\\epsilon (\\xi ^0)] \\sum _{n=0}^{\\infty } \\frac{\\xi ^{\\mu _1}...\\xi ^{\\mu _n}}{n!}", "\\nonumber \\\\&: \\overline{\\psi }(0)\\overleftarrow{\\partial }_{\\mu _1}...\\overleftarrow{\\partial }_{\\mu _n} \\gamma _\\rho \\psi (0) \\nonumber \\\\&- \\overline{\\psi }(0) \\overrightarrow{\\partial }_{\\mu _1}...\\overrightarrow{\\partial }_{\\mu _n} \\gamma _\\rho \\psi (0): \\ .", "$ In a free field theory, the Taylor expansions in Eq's.", "(5.25) and (5.26) can of course be summed to give a “bilocal operator\" like 
:$\\overline{\\psi }(\\xi )\\gamma _\\rho \\psi (0)$ : but in interacting theories the coefficient function will differ slightly for each term in the Taylor expansion preventing resummation.", "This, it will turn out, makes the difference between exact Bjorken scaling as in free field theory, and logarithmic scaling violation as in QCD.", "It is clear from these examples that the form the operator product expansion takes in a particular field theory depends upon the procedure necessary to remove the singularity which arises when one tries to bring operators to the same space time point.", "In free field theories the singular piece is a $C-$ number and can be isolated by normal ordering the operator product.", "In interacting field theories, the divergences are worse and normal ordering does not suffice.", "Instead, it is necessary to renormalize the operators.", "There is a certain arbitrariness in renormalization: to define finite operators it is necessary to introduce a mass-scale $\\mu ^2$ — loosely speaking, it is the scale at which the operator's matrix elements have the values they would have in free field theory — but $\\mu ^2$ is arbitrary, nothing physical can depend on it.", "Nevertheless, both the operators $O_{\\lbrace \\beta \\rbrace }$ and the coefficient functions $C_{\\lbrace \\beta \\rbrace }$ will separately depend on the mass scale introduced by the necessity of renormalization.", "To allow for this we replace $C_{\\lbrace \\beta \\rbrace }(\\xi ) $ by $C_{\\lbrace \\beta \\rbrace }(\\xi ,\\mu ^2)$ and $O_{\\lbrace \\beta \\rbrace }(0)$ by $O^{(\\mu ^2)}_{\\lbrace \\beta \\rbrace }(0)$ .", "In anything physically measurable, the $\\mu ^2$ dependence of $C_{\\lbrace \\beta \\rbrace }(\\xi ,\\mu ^2)$ will cancel that of $O_{\\lbrace \\beta \\rbrace }^{(\\mu ^2)}(0)$ .", "$\\mu ^2$ is known as the “renormalization point\".", "To make use of the OPE, it is convenient to make explicit the factors of $\\xi _\\mu $ which accompany an operator carrying Lorentz indices.", "To this end we rewrite Eq.", "(5.24) as $A(\\xi )B(0) \\sim \\sum _{\\lbrace \\beta \\rbrace } C^\\prime _{\\lbrace \\beta \\rbrace }(\\xi ^2,\\mu ^2)\\xi _{\\mu _1}.....\\xi _{\\mu _{n_{\\beta }}} O^{\\lbrace \\mu ^2\\rbrace \\mu _1...{\\mu _{n_{\\beta }}}}_{\\lbrace \\beta \\rbrace } \\ .", "\\qquad \\mathrm {(5.27)}$ The operators $O_{\\lbrace \\beta \\rbrace }$ can always be defined so they are symmetric traceless in all Lorentz indices: $O^{\\mu _1..\\mu _i..\\mu _j..{\\mu _{n_{\\beta }}}} = O^{\\mu _1..\\mu _j..\\mu _i..{\\mu _{n_{\\beta }}}}$ and $g_{\\mu _i \\mu _j}O^{\\mu _1..\\mu _i..\\mu _j..{\\mu _{n_{\\beta }}}} =0$  [43].", "Then, $n_\\beta $ is called the “spin\" of the operators [44].", "If $A$ and $B$ carry Lorentz indices then the form of Eq.", "(5.27) becomes more complicated.", "To avoid writing complicated equations, I will suppress the indices on the currents $J_\\mu $ and $J_\\nu $ or equivalently study deep inelastic scattering by a particle coupled to a hypothetical scalar current $J(\\xi )=\\overline{\\psi }(\\xi )\\psi (\\xi )$ .", "Let us expand the product $T(J(\\xi )J(0))$ in the fashion of Eq.", "(5.27) and calculate the contributions to $T(q^2,\\omega )$ (see Eq.", "(1.12)).", "We must carry out the Fourier transform $\\tilde{C}_{\\mu _1...{\\mu _{n_{\\beta }}}}(q,\\mu ^2) \\equiv \\int d^4 \\xi e^{i q \\cdot \\xi } C^\\prime _{\\lbrace \\beta \\rbrace }(\\xi ^2,\\mu ^2)\\xi _{\\mu _1}...\\xi _{\\mu _{n_{\\beta }}} \\ .", "\\qquad \\mathrm {(5.28)}$ The Lorentz indices on 
$\\tilde{C}$ can only be in the form of $q_{\\mu _k}$ or $g_{\\mu _i \\mu _j}$ .", "The latter vanish when contracted with the traceless operator $O_{\\lbrace \\beta \\rbrace }$ .", "In effect, then $\\tilde{C}_{\\mu _1...{\\mu _{n_{\\beta }}}}(q,\\mu ^2) = \\frac{q_{\\mu _1}...q_{{\\mu _{n_{\\beta }}}}}{(q^2)^{n_\\beta }} (-1)^{n_\\beta } \\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2) \\ .", "\\qquad \\mathrm {(5.29)}$ The phase and the factors of $(q^2)^{n_\\beta }$ have been introduced for later convenience.", "Note that $\\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2)$ has the same dimension as $\\int d^4 \\xi e^{i q \\cdot \\xi } C^\\prime _{\\lbrace \\beta \\rbrace }(\\xi ^2,\\mu ^2)$ .", "Now, $T(q^2,\\omega )$ can be written as $T(q^2,\\omega ) \\sim \\sum _{\\lbrace \\beta \\rbrace }\\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2) (-1)^{n_\\beta } \\frac{q_{\\mu _1}...q_{{\\mu _{n_{\\beta }}}}}{(q^2)^{n_\\beta }} \\mathinner {\\langle {p|O_{\\lbrace \\beta \\rbrace }^{(\\mu ^2)\\mu _1..{\\mu _{n_{\\beta }}}}|p}\\rangle }_c \\ .", "\\qquad \\mathrm {(5.30)}$ The matrix element in Eq.", "(5.30) must carry Lorentz indices $\\mu _1...{\\mu _{n_{\\beta }}}$ .", "We can write $\\mathinner {\\langle {p|O_{\\lbrace \\beta \\rbrace }^{(\\mu ^2)\\mu _1..{\\mu _{n_{\\beta }}}}|p}\\rangle } = \\Theta _{\\lbrace \\beta \\rbrace }(\\mu ^2)(p^{\\mu _1}...p^{{\\mu _{n_{\\beta }}}}+....) \\ , \\qquad \\mathrm {(5.31)}$ where the other terms are determined by the fact that $O_{\\lbrace \\beta \\rbrace }$ is symmteric and traceless.", "The terms required to remove traces all contain at least one factor of $g^{\\mu _i\\mu _j}$ .", "Thus, for example, if $n_\\beta =2$ , one has $p^{\\mu _1}p^{\\mu _2}-\\frac{M^2_p}{4}g^{\\mu _1\\mu _2}$ .", "Combining Eqs.", "(5.30) and (5.31) we find $T(q^2,\\omega ) \\sim \\sum _\\beta \\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2) \\Theta _{\\lbrace \\beta \\rbrace }(\\mu ^2) \\left[ (\\frac{\\omega }{2})^{n_\\beta } + O(\\frac{1}{q^2}) \\right] \\ , \\qquad \\mathrm {(5.32)}$ ($\\omega \\equiv -2p \\cdot q/q^2$ ) because $p^\\alpha p^\\beta q_\\alpha q_\\beta /(q^2)^2 = \\omega ^2/4$ but $g^{\\alpha \\beta }q_\\alpha q_\\beta /(q^2)^2 = 1/q^2$ .", "In fact, only even powers of $\\omega $ occur in Eq.", "(5.32) because $T(q^2,\\omega )$ is crossing symmetric (see $§1.2$ ).", "Since $\\omega $ ($\\cong 1/x$ ) scales in the Bjorken limit, the importance of any particular operator $O_{\\lbrace \\beta \\rbrace }$ is determined by the large-$q^2$ behavior of $\\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2)$ .", "This can be calculated to all orders in perturbation theory using the methods of the renormalization group.", "The first step is to determine the dimension of $\\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2)$ .", "We define the “naive\" or “canonical\" dimension (or, for short, simply dimension) of an operator to be the units in which it is measured as a power of mass (with $\\hbar = c= 1$ ).", "We use the notation $[O_{\\lbrace \\beta \\rbrace }] = d_{\\lbrace \\beta \\rbrace }$ .", "Here are some examples: $[J_\\mu ]=3$ because $\\int d^3x J_0 =$ charge which is dimensionless; $[\\phi ]=1$ because the action, $S=\\int d^4 x \\frac{1}{2}(\\partial \\phi )^2$ , is dimensionless; likewise $[\\psi ] = \\frac{3}{2}$ .", "Using these dimensions and remembering $\\mathinner {|{p}\\rangle }$ is covariantly normed, $[\\mathinner {|{p}\\rangle }]=-1$ , we see that $T(q^2,\\omega )$ is dimensionless.", "Let us suppose the operator $O_{\\lbrace 
\\beta \\rbrace }$ has dimension $d_\\beta $ , then from Eq.", "(5.27), $[C^\\prime _{\\lbrace \\beta \\rbrace }]=6-d_\\beta +n_\\beta \\qquad \\mathrm {(5.33)}$ and $[\\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2)] = 2-d_\\beta +n_\\beta \\ .", "\\qquad \\mathrm {(5.34)}$ The dimensions of $\\tilde{C}_{\\lbrace \\beta \\rbrace }$ could, in principle, be provided by $q^2$ , $\\mu ^2$ or quark masses.", "Weinberg's theorem [45] can be used to show that at large $-q^2$ to each order in perturbation theory quark masses can be ignored provided $\\mu ^2$ is fixed and not zero.", "Also, in each order of perturbation theory the $\\mu ^2$ dependence of a renormalizable field theory is at most logarithmic, reflecting at worst logarithmic divergences in such theories.", "So we can write $\\lim _{q^2 \\rightarrow -\\infty }\\tilde{C}_{\\lbrace \\beta \\rbrace }(q^2,\\mu ^2) = \\left( \\frac{1}{q^2} \\right)^{(d_\\beta -n_\\beta -2)/2} c_{\\lbrace \\beta \\rbrace }(\\ln q^2/\\mu ^2) \\ .", "\\qquad \\mathrm {(5.35)}$ The next step is to calculate the leading large $-q^2$ dependence of $c_{\\lbrace \\beta \\rbrace }(\\ln q^2/\\mu ^2)$ to all orders in perturbation theory.", "In each order, $c_{\\lbrace \\beta \\rbrace }$ grows like a power of $\\ln q^2/\\mu ^2$ .", "When summed to all orders the logarithms may yield a power, i.e., $c_{\\lbrace \\beta \\rbrace }$ may depend exponentially on its argument.", "The actual calculation of the asymptotic behavior of $c_{\\lbrace \\beta \\rbrace }$ requires renormalization group methods beyond the scope of these lectures.", "But as will be seen below, we have performed an equivalent calculation using Weizsäcker-Williams methods in $§5.1$ .", "If $c_{\\lbrace \\beta \\rbrace }$ does not go like an exponential of its argument, then operators with $d_\\beta -n_\\beta =2$ give rise to Bjorken scaling modulo powers of $\\ln q^2$ .", "$d_\\beta -n_\\beta $ plays such a central role in the analysis of large $q^2$ effects that it is given a name, “twist\", $t_\\beta =d_\\beta -n_\\beta \\ .", "\\qquad \\mathrm {(5.36)}$ Operators with $t_\\beta < 2$ would give contributions which diverge in the Bjorken limit, but the only operator with $t_\\beta <2$ which can couple to the product of the two currents is the identity, $t_I=0$ , and the identity has no connected matrix elements.", "Operators with $t_\\beta >2$ give contributions to $T(q^2,\\omega )$ which vanish in the Bjorken limit (provided $c_{\\lbrace \\beta \\rbrace }$ is not exponential in its argument).", "Only even twists occur in the expansion of two currents in the limit of zero quark mass [46] so the next important case is $t_\\beta = 4$ .", "Twist-4 or $O(1/q^2)$ corrections to scaling are a rich and fascinating, if technically complicated, subject in themselves [47].", "The twist-two operators in QCD come in two classes: quark operators $O_{n,a}^{(\\mu ^2)\\mu _1..\\mu _n} = S \\overline{\\psi }_a \\gamma ^{\\mu _1} D^{\\mu _2}...D^{\\mu _n} \\psi _a^{(\\mu ^2)} \\ , \\qquad \\mathrm {(5.37)}$ where $D_\\mu $ is the (color) gauge covariant derivative [48], “a\" labels flavor and $S$ symbolizes the operation of making $O_{n,a}$ traceless and symmetric; and gluon operators which we needn't write out, since they contribute only to singlet distributions.", "It is easy to check that $O_{n,a}$ indeed has $t=2$ for all $n$ .", "It should be emphasized that it requires an infinite tower of operators of increasing spin to describe $T(q^2,\\omega )$ .", "This is not at all surprising.", "First of
all, if the sum in Eq.", "(5.30) stopped at some $n_{\\rm max}$ , $T(q^2,\\omega )$ would be a polynomial in $\\omega $ with no cut on the real axis for $|\\omega |>1$ .", "Second, we know that deep inelastic electroproduction probes the light cone, not just short distances.", "The OPE is a short distance expansion ($\\xi _\\mu \\rightarrow 0$ ) and no finite number of terms in the short distance expansion gives information about light-like separation.", "To summarize: the OPE approach leads to a simultaneous expansion of $T(q^2,\\omega )$ in $q^2$ and $\\omega $ .", "The expansion in $q^2$ is an asymptotic expansion and is ordered by the twist quantum number of the operators; the expansion in $\\omega $ is a Taylor expansion (which converges for $|\\omega |<1$ , see $§1$ ) and is ordered by the spin of the operators.", "Ignoring gluon operators $-$ which is adequate if we are interested in non-singlet quark distributions alone $-$ we have $T_{\\rm N. S.}(q^2,\\omega ) = \\sum _{\\rm n\\ even} c^n_{\\rm N. S.}(\\ln q^2/\\mu ^2) \\Theta ^n_{\\rm N. S.} (\\mu ^2) \\omega ^n + {\\rm higher\\ twist} \\ , \\qquad \\mathrm {(5.38)}$ where we have replaced the generic label $\\beta $ by the spin ($n$ ) of the twist-two, non-singlet quark operator.", "$c^n_{\\rm N. S.}(\\ln q^2/\\mu ^2)$ depends only on $\\ln q^2/\\mu ^2$ and explicit calculation shows that it equals 4 when $q^2 = \\mu ^2$ .", "Comparing Eq.", "(5.38) with the Taylor expansion of $T(q^2,\\omega )$ developed in $§1$ , (Eq.", "1.18), we identify $M^n_{\\rm N. S.}(q^2) = \\frac{1}{4} c^n_{\\rm N. S.}(\\ln q^2/\\mu ^2) \\Theta ^n_{\\rm N. S.}(\\mu ^2) \\ {\\rm for\\ n\\ even} \\ .", "\\qquad \\mathrm {(5.39)}$ At this point, we can make a connection with the “evolutionary\" approach of $§5.1$ and comment on the derivation of the parton model in QCD.", "The ordering of contributions at large $Q^2$ by twist has given a result, Eq.", "(5.38), which looks like scaling modulo logarithms.", "This is deceptive because in general $c^n_{\\rm N. S.}$ depends exponentially on its argument giving rise to power law violations of scaling.", "Only in asymptotically free theories like QCD does $c^n_{\\rm N. S.}$ go like a power of its argument, giving scaling up to logarithms.", "The derivation of the dependence of $c^n_{\\rm N. S.}$ on $\\ln q^2/\\mu ^2$ is outside the scope of these lectures, but the result $c^n_{\\rm N. S.}(\\ln Q^2/Q^2_0) = (\\alpha _c(0)/\\alpha _c(t))^{B_n/2 \\pi b} \\qquad \\mathrm {(5.40)}$ (where $\\mu ^2 \\equiv Q^2_0$ ) should not be surprising since it converts Eq.", "(5.39) into Eq.", "(5.21) which we already derived using the more heuristic Weizsäcker-Williams approach.", "Comparing Eqs.", "(5.39), (5.40) and (5.21) we see that the moments of the structure functions are directly related to the matrix elements of specific local operators, $M^n_{\\rm N. S.}(q^2)=\\Theta ^n_{\\rm N. S.}(q^2)$ where $q^2$ is the mass-scale at which the operator $O^{\\mu _1...\\mu _n}_{n,\\rm N.
S.}$ is renormalized.", "The quark operators $O_{n,a}$ which determine the moments are the (gauge invariant) terms in the Taylor expansion of the operator product $\\overline{\\psi }(\\xi ) \\gamma _\\rho \\psi (0)$ which determined the quark distribution function in the parton model.", "So we see that the modification of the parton model required by the interactions in QCD is that each moment of the quark distribution (i.e., each term in the Taylor expansion) scales modulo a slightly different power of $\\ln q^2$ .", "Earlier, I remarked that the normalization point, $\\mu ^2$ , was arbitrary, that nothing physical could depend on it.", "There is no contradiction here.", "Eq.", "(5.30) is independent of $\\mu ^2$ , specifically $&\\Theta ^n_{\\rm N. S.}(\\mu ^2)d/d\\mu ^2 c^n_{\\rm N. S.}(\\ln q^2/\\mu ^2) \\nonumber \\\\&= -c^n_{\\rm N. S.}(\\ln q^2/\\mu ^2) d/d\\mu ^2 \\Theta ^n_{\\rm N. S.} (\\mu ^2) \\ .", "\\qquad \\mathrm {(5.41)}$ But, $c^n_{\\rm N. S.}$ depends only on $\\ln q^2/\\mu ^2$ , so $d/d\\mu ^2 c^n_{\\rm N. S.}(\\ln q^2/\\mu ^2) = -d/dq^2 c^n_{\\rm N. S.}(\\ln q^2/\\mu ^2) \\qquad \\mathrm {(5.42)}$ and therefore $d/dq^2 M^n_{\\rm N. S.}(q^2) = d/d \\mu ^2 \\Theta ^n_{\\rm N. S.}(\\mu ^2) |_{\\mu ^2=q^2} \\ , \\qquad \\mathrm {(5.43)}$ so the renormalization point dependence of the operator matrix elements determines the $q^2-$ dependence of the moment of the structure function.", "Combining the OPE analysis with the evolutionary picture of the previous section we recognize that the renormalization point introduced in the OPE analysis is the same as the transverse resolution in the Weizsäcker-Williams method.", "In both cases, it is necessary to define how much of the gluon field is to be lumped into the definition of a quark.", "Whatever way you look at it, the fact that this definition changes with $Q^2$ gives rise to the logarithmic scaling violation of QCD.", "In addition to the $\\ln Q^2$ corrections we have uncovered, there are expected to be $O(1/Q^2)$ and higher order $O(1/Q^{2n})$ corrections which become important at small $Q^2$ .", "The $O(1/Q^2)$ corrections take several forms.", "They are easy to distinguish using the language of OPE.", "First are “target mass\" corrections.", "As will become clear, this is an unfortunate name.", "These are apparent $O(M^2_T/Q^2)$ terms which arise from the $g^{\\mu _i \\mu _j}$ factors in the matrix elements of traceless operators.", "It has been shown [49] that these corrections may be completely absorbed by replacing the variable $x_T$ by the variable $\\xi = -q^+/P^+$ everywhere in the definition of $f_{a/T}(x_T)$ .", "This isn't surprising since $q^+$ is the variable which emerged automatically in the derivation (see, e.g., Eq.", "(2.18)).", "$\\xi $ is written in many forms: $\\xi &= (\\sqrt{\\nu ^2 + Q^2}-\\nu )/M_T \\ , \\\\\\xi &= 2 x_T/(1+ \\sqrt{1+Q^2/\\nu ^2}) \\ , \\\\\\xi &= 2x_T/(1+\\sqrt{1+4 M^2_Tx^2_T/Q^2}) \\ .", "\\qquad \\mathrm {(5.44)}$ It is clear from Eq.", "(5.44b) that the large mass corrections do not, in fact, depend on the target mass [50]!", "They are kinematic corrections (which do not grow like $A^2$ for nuclear targets), and are well-understood.", "Second are “quark mass\" corrections.", "These are important for heavy quarks ($c,b,t$ ) but negligible for up and down quarks whose masses, $m_{u,d} < 20$ MeV, are tiny.", "Finally are the dynamical $O(1/Q^2)$ corrections associated with operators of twist-4.", "Although they are complicated, twist-4 corrections to inelastic electron (and neutrino) scattering have been completely analyzed in QCD
[47].", "Typically, they are small because the natural mass scale associated with a target is one upon its radius (once “target mass\" corrections have been incorporated via $\\xi -$ scaling), $1/R \\sim 1$ fm$^{-1} \\sim $ 200 MeV.", "It is therefore not surprising that scaling, modulo logarithms and using the $\\xi $ variable, sets in at a very low value of $Q^2$ ($<$ 1 GeV$^2$ ).", "For precisely this reason higher twist contributions to $F_2(x,Q^2)$ , which measure matrix elements of interesting local operators, are hard to extract from available experimental data.", "We have seen that the quark, antiquark and gluon content of a hadron changes with the scale at which it is probed.", "In more naive quark models the nucleon, for example, is treated as (approximately) three quarks in some confining “bag\" with no reference to the scale at which this description might hold.", "Certainly, if the nucleon were three quarks at some scale $\\mu ^2_0$ ($\\mu ^2_0 \\sim 1$ GeV$^2$ ) then it would become more complicated, containing antiquarks and glue at larger scales $Q^2 > \\mu ^2_0$ by virtue of QCD radiation.", "In the early days of QCD it was recognized that if the nucleon's quark, antiquark and gluon distributions measured at large $Q^2$ were evolved back to lower $Q^2$ , then quark-antiquark pairs and gluons are reabsorbed into the valence quarks, so that at some $\\mu ^2_0 \\sim $ 1 GeV$^2$ all of the $q \\bar{q}$ pairs and most of the glue would be gone leaving a nucleon made of three quarks alone.", "G. Ross and I checked this quantitatively in the M.I.T.", "version of the bag model [52].", "Using QCD evolution to second order, which is necessary because $\\alpha (\\mu ^2_0)$ is not small, we found that measured non-singlet nucleon structure functions evolved backwards to a $\\mu ^2_0$ of order 1 GeV$^2$ indeed gave valence quark $x-$ distributions in agreement with earlier bag calculations [53].", "$\\mu ^2_0$ is then interpreted as a parameter of the quark model: It is the mass scale (or resolution) at which quark fields should be defined in order that the nucleon should be made of three quarks.", "A recent reevaluation of this program with modern values for structure functions and the QCD $\\Lambda $ parameter (c.f.", "Eq.", "(5.20)) gave $\\mu ^2_0 \\cong 0.75$ GeV$^2$  [58].", "Notice that the structure function predicted by simple quark models cannot be compared directly with experimental measurements of $F_2$ at $Q^2=\\mu ^2_0$ because at such a low $Q^2$ higher twist effects are large but have not been included in the quark model calculations.", "It seems best to regard quark models as models for the twist-two matrix elements at a renormalization point $\\mu ^2_0$ , which must then be evolved to $Q^2 >> \\mu ^2_0$ in order to be compared with experiment.", "Close, Roberts and Ross [55] realized that the scale ($Q^2$ ) dependence of quark distribution functions in QCD could be used to parametrize and, to some extent, explain the $A$ dependence of structure functions [56].", "In $§2$ , we learned that the shift in the valence quarks observed in nuclei could be understood as an increase in the quark correlation length in the nuclear ground state.", "The increase in ocean quark pairs appeared to be an independent phenomenon.", "In the QCD inspired analysis, I shall describe both aspects of the EMC effect have a single origin: a dynamical change in scale of the twist-2 matrix elements in nuclei.", "In the last chapter, we saw that QCD evolution reduced the momentum on valence quarks 
and increased the number of pairs.", "Suitably adapted, evolution can explain the EMC effect.", "This method of analysis has come to be known as “dynamical rescaling\" or simply “rescaling\".", "Its virtues are first, it gives a unified description of all aspects of the EMC effect; second, it avoids the dubious assumptions of the constituent convolution models of $§4$ ; and third, it gives us insight into the reason other superficially quite different “explanations\" of the EMC effect work.", "Its drawback is that it does not provide a microscopic enough explanation to satisfy most of us: it is not clear exactly what the quarks and gluons are doing differently in a nucleus which gives rise to the effect.", "In this chapter, I will work from the general toward the specific.", "First, I will merely use QCD as an aid to present the data in a new way.", "This presentation will lead to a surprising conclusion and suggest rescaling as a mechanism behind the EMC effect.", "Then, I will analyze rescaling in some detail.", "Next, I will describe a calculation of the $A$ dependence motivated by, but perhaps more general than rescaling [10].", "Finally, I will close with some remarks about shadowing and future experiments.", "These have little to do with QCD and less to do with rescaling, but they follow naturally upon the discussion of $A$ dependence.", "In $§2$ , we compared the structure functions of different nuclei at fixed $Q^2$ , as functions of $x$ .", "This is the way the data come from the experimenters.", "QCD provides an alternative.", "Consider the moments: $M^n_A(Q^2) \\equiv \\int _0^A dx x^{n-2} \\overline{F}_2^A(x,Q^2) \\qquad \\mathrm {(6.1)}$ ($\\overline{F}^A_2=\\frac{1}{A}F^A_2$ ).", "According to Eq.", "(5.21), the moments are monotonically falling functions of $Q^2$ .", "(This analysis like that of $§5$ is restricted to non-singlet structure functions but a similar conclusion applies as well to singlets.", "): $M^n_A(Q^2) = \\left( \\frac{\\alpha _c(Q^2)}{\\alpha _c(Q^2_o)} \\right)^{d_n} M^n_A(Q^2_0) \\ , \\qquad \\mathrm {(6.2)}$ where $d_n \\equiv -B_n/2 \\pi b>0$ .", "If QCD is correct, and if $Q^2$ is large enough so leading order in perturbation theory suffices and $O(1/Q^2)$ corrections are negligible, then $\\ln M^n_A(Q^2)$ must lie on a straight line when plotted versus $\\ln [\\alpha _c(Q^2_0)/\\alpha _c(Q^2)]$ and he slope must be $-d_n$ .", "Such a plot is shown schematically in Fig.", "REF for two different targets with baryon numbers $A$ and $A^\\prime $ .", "Figure: Rescaling a single moment.At fixed $Q^2$ , the EMC effect appears as the observation that $M^n_{A^\\prime }(Q^2)<M^n_A(Q^2)$ for $A^\\prime > A$ .", "On the other hand, it is clear that there is a value of $Q^2$ , call it $Q^{\\prime \\,2}$ , such that $M^n_{A^\\prime }(Q^{\\prime \\,2})=M^n_A(Q^2)$ .", "As illustrated in Fig.", "REF , $Q^{\\prime \\,2} < Q^2$ .", "The value of $Q^{\\prime \\,2}$ , in principle, depends on $A, A^\\prime $ and $Q^2$ and on $n$ .", "Let us define $\\xi ^n_{AA^\\prime }(Q^2)$ (not to be confused with the coordinate $\\xi _\\lambda $ or the modified scaling variable $\\xi $ of $§5$ ) so $M^n_{A^\\prime }(Q^2) = M^n_A(\\xi ^n_{AA^\\prime }(Q^2)Q^2) \\ .", "\\qquad \\mathrm {(6.3)}$ From its definition $\\xi ^n_{AA^\\prime }(Q^2)>1 \\ \\ {\\rm for\\ } A^\\prime > A \\qquad \\mathrm {(6.4)}$ and $\\xi ^n_{AA^{\\prime \\prime }}(Q^2) &= \\xi ^n_{AA^\\prime }(Q^2) \\xi ^n_{A^\\prime A^{\\prime \\prime }}(Q^2) \\nonumber \\\\\\xi ^n_{AA^\\prime }(Q^2) &= 1/\\xi ^n_{A^\\prime 
A}(Q^2) \\ .", "$ The $Q^2-$ dependence of $\\xi ^n_{AA^\\prime }(Q^2)$ is determined entirely by QCD, and is independent of $A$ and $A^\\prime $ .", "Consider Eq.", "(6.2) first for $A^\\prime $ at $Q^2_0$ and $Q^2$ $M^n_{A^\\prime }(Q^2) = \\left( \\frac{\\alpha _c(Q^2)}{\\alpha _c(Q^2_o)} \\right)^{d_n} M^n_{A^\\prime }(Q^2_0) \\qquad \\mathrm {(6.6)}$ and then for $A$ at $\\xi ^n_{AA^\\prime }(Q^2_0)Q^2_0$ and $\\xi ^n_{AA^\\prime }(Q^2)Q^2$ : $M^n_{A}(\\xi ^n_{AA^\\prime }(Q^2)Q^2) = \\left( \\frac{\\alpha _c(\\xi ^n_{AA^\\prime }(Q^2)Q^2)}{\\alpha _c(\\xi ^n_{AA^\\prime }(Q^2_0)Q^2_0)} \\right)^{d_n} M^n_{A}(\\xi ^n_{AA^\\prime }(Q^2_0)Q^2_0) \\ .", "\\qquad \\mathrm {(6.7)}$ Now use Eq.", "(6.3) to eliminate all reference to moments $\\frac{\\alpha _c(\\xi ^n_{AA^\\prime }(Q^2)Q^2)}{\\alpha _c(\\xi ^n_{AA^\\prime }(Q^2_0)Q^2_0)} = \\frac{\\alpha _c(Q^2)}{\\alpha _c(Q^2_0)} \\ .", "\\qquad \\mathrm {(6.8)}$ To lowest order in $\\alpha _c$ , $\\alpha _c(\\xi Q^2) = \\alpha _c(Q^2)/(1+ \\alpha _c(Q^2)b \\ln \\xi ) \\ , \\qquad \\mathrm {(6.9)}$ which, together with Eq.", "(6.8), implies $\\xi ^n_{AA^\\prime }(Q^2) = [\\xi ^n_{AA^\\prime }(Q^2_0)]^{\\alpha _c(Q^2_0)/\\alpha _c(Q^2)} \\ .", "\\qquad \\mathrm {(6.10)}$ The extension to next order in $\\alpha _c$ is quoted in [58].", "Suppose $Q^2_0 < Q^2$ , then $\\alpha _c(Q^2_0)/\\alpha _c(Q^2)>1$ and $1< \\xi ^n_{AA^\\prime }(Q^2_0) < \\xi ^n_{AA^\\prime }(Q^2) \\ \\ {\\rm for \\ \\ } Q^2_0<Q^2 \\ .", "\\qquad \\mathrm {(6.11)}$ The implication of this result is that a small change of $Q^2$ -scale at low $Q^2$ gets magnified into a large change when observed at large $Q^2$ .", "Eqs.", "(6.4), (6.5) and (6.10) summarize the properties of $\\xi ^n_{AA^\\prime }(Q^2)$ which can be determined from general considerations alone.", "The surprise came when Close, Roberts and Ross used this method to analyze the EMC data and found that to a good approximation $\\xi ^n_{AA^\\prime }(Q^2)$ appears to be independent of $n$: $\\xi ^n_{AA^\\prime }(Q^2) \\Rightarrow \\xi _{AA^\\prime }(Q^2)$ , at least for the values of $n$ sensitive to the $x$ -range of the EMC data.", "Actually CRR did not construct moments but made an equivalent observation about the structure function itself.", "Namely, if $\\xi ^n_{AA^\\prime }(Q^2)$ is independent of $n$ then the structure functions themselves as functions of $x$ are related by a universal scale change in $Q^2$ : $\\overline{F}^{A^\\prime }_2(x,Q^2) = \\overline{F}^A_2(x,\\xi _{AA^\\prime }(Q^2) Q^2) \\ .", "\\qquad \\mathrm {(6.12)}$ Eq.", "(6.12) has become known as “rescaling\".", "CRR were led to it by the observation we referred to in $§5$ , that the EMC effect resembles QCD evolution.", "In fact, the EMC data are not in complete agreement with Eq.", "(6.12).", "The excess at low $x$ is more than can be produced by the amount of evolution that is required to fit the depletion at large $x$ .", "Their analysis, with $\\xi _{DFe} \\cong 2$ at $Q^2 \\approx $ 20 GeV$^2$ is shown in Fig.", "REF .", "Figure: Rescaling the EMC iron data .The newer SLAC data on iron and deuterium data have a smaller enhancement at low-$x$ and a lower cross over point (where $\\overline{F}^{\\rm Fe}_2/\\overline{F}^{D}_2 =1$ ), both of which improve the agreement with the rescaling analysis [57], [58].", "The reader might wish to look back to Fig.", "REF to see the present state of the rescaling fits.", "Several comments and caveats are in order: 1.", "QCD is subtle: Changing the scale creates quark-antiquark pairs.", "If, after 
all the discussion of $§5$ , this still seems unreasonable, perhaps it would help to remember that a similar thing happens in a Bogoliubov transformation.", "By redefining the vacuum, annihilation and creation operators get mixed up with one another and a state which originally contained only particles, appears after the transformation, to contain both particles and antiparticles.", "2.", "Rescaling predicts that the EMC effect should vanish at $x \\approx 0.2$ where QCD evolution vanishes (see Fig.", "REF ).", "The EMC data cross unity at $x \\approx 0.35$ in disagreement with this prediction.", "Once again, however, the SLAC data look better: $\\overline{F}^{A}_2/\\overline{F}^{D}_2$ is definitely below unity for $x>0.3$ .", "A careful test must await better data at small-$x$ .", "3.", "Rescaling cannot work for $x \\rightarrow 1$ , or equivalently for $n \\rightarrow \\infty $ .", "At $x \\rightarrow 1$ , the structure function of the nucleon vanishes but that of a nucleus does not.", "Eq.", "(6.12) fails to reproduce this behavior.", "The reason for this failure will become clear soon.", "4.", "Rescaling does not mean that an iron nucleus can be “mapped\" to a nucleon by a universal change of scale.", "The complete description of a nucleus requires matrix elements of all twists.", "The gross structure of the EMC effect only involves a few operators of twist-two.", "In $§5.4$ we learned to associate a mass scale $\\mu ^2_0$ with the quark model description of a hadron.", "It is the scale at which, approximately, the hadron consists of valence quarks alone.", "It is an intrinsic characteristic of each hadron.", "The measured structure function of the nucleon is consistent with this notion with $\\mu ^2_0 \\cong 0.75$ GeV$^2$ .", "Clearly, any nucleus related to the nucleon by rescaling, as in Eq.", "(6.12), also admits a valence quark description, but at a shifted mass scale $\\mu _A$ : $\\mu ^2_A = \\frac{\\mu ^2_0}{\\xi _{NA}(\\mu ^2_0)} \\ .", "\\qquad \\mathrm {(6.13)}$ Since $\\xi _{NA}>1$ , $\\mu ^2_A$ is less than $\\mu ^2_0$ .", "This can be checked by the following argument: At any fixed $Q^2$ , the nucleus appears more highly evolved than the nucleon (more pairs, softer valence quarks).", "So to reabsorb all pairs it is necessary to devolve the nucleus further than the nucleon.", "It has become necessary to express Eq.", "(6.13) as a ratio of length scales.", "Let us define $\\lambda _A = c/\\mu _A$ , $\\lambda _N = c/\\mu _0$ , then $\\lambda _A/\\lambda _N = \\mu _0/\\mu _A = \\sqrt{\\xi _{NA}(\\mu ^2_0)} \\ \\qquad \\mathrm {(6.14)}$ $\\xi (\\mu ^2_0)$ is typically much smaller than $\\xi (Q^2)$ , because of Eq.", "(6.10).", "For example, $\\xi _{\\rm Fe\\,N}(20\\ {\\rm GeV^2}) \\cong 2.0$ , but $\\xi _{\\rm Fe\\,N}(\\mu ^2_0) \\cong 1.33$ making $\\lambda _{\\rm Fe}/\\lambda _N \\cong 1.15$ .", "Thus, an increase of only $\\sim $ 15% in the intrinsic length scale of the twist-two matrix elements for iron compared to the nucleon can give rise to the EMC effect.", "If there is anything fundamental about rescaling, rather than being merely an accident, the basic relation must be defined at the scale intrinsic to the target, $\\mu ^2_0$ , not at some arbitrary $Q^2$  [55].", "At $\\mu ^2_0$ , however, the structure function contains large contributions from higher twists, so an equation like Eq.", "(6.12) with $Q^2$ replaced by $\\mu ^2_0$ cannot be written down.", "Instead, we must write relations between matrix elements of twist-two operators using the formalism of $§5.2$ .", "We
define $\\mathinner {\\langle {P|O_{n,a}^{(\\mu ^2)\\mu _1..\\mu _n}|P}\\rangle } = P^{\\mu _1}...P^{\\mu _n}\\Theta ^A_{n,a}(\\mu ^2) \\qquad \\mathrm {(6.15)}$ in analogy to Eq.", "(5.31).", "Then the operator equivalent of Eq.", "(6.12) is $A^{n-2} \\Theta ^A_{n,a}(\\mu ^2_A) = A^{\\prime \\,{n-2}} \\Theta ^{A^\\prime }_{n,a} (\\mu ^2_{A^\\prime }) \\ .", "\\qquad \\mathrm {(6.16)}$ This is the basic rescaling between targets.", "The powers of $A$ and $A^\\prime $ are only kinematic.", "It leads to Eq.", "(6.12) at large $Q^2$ provided $\\xi _{AA^\\prime }(Q^2)$ is obtained from $\\mu _A/\\mu _A^\\prime $ via QCD evolution (Eq.", "(6.10) plus higher order improvements) and provided the moments can be reliably evolved from $\\mu _A^2$ to $Q^2$ using perturbative QCD.", "This turns out to be an important proviso.", "In [52], we found that only the moments with $n<8$ could be reliably devolved from large $Q^2$ to a $\\mu ^2_0$ as small as 1 GeV$^2$ .", "The problem is that higher order (in $\\alpha _c$ ) corrections typically contain $\\ln n$ factors making perturbation theory worse for large $n$ .", "Suppose, then, that Eq.", "(6.16) were exact for all $n$ .", "Nevertheless, at large $Q^2$ only the moments with $n<8$ would be likely to show uniform scaling.", "This explains why rescaling fails near $x=1$ (as noted earlier) since large $n$ moments probe exclusively large $x$ .", "In fact, one can estimate the $x$ values for which rescaling should be reliable when observed at large $Q^2$ by using the Mellin transform relation [58].", "$n \\sim \\ln \\alpha _s/ \\ln x \\qquad \\mathrm {(6.17)}$ which for $n=8$ and $\\alpha _s =0.2$ gives $x \\sim 0.8$ as the upper limit of reliability.", "Turning this argument around one can see that exact uniform rescaling for all $x$ at large $Q^2$ would be very hard to understand, since it would imply a complicated and non-uniform relation among twist-two matrix elements with $n>8$ at $\\mu ^2_0$ .", "The task for someone trying to understand the EMC effect from the point of rescaling, then, is two-fold.", "First, one must explain why rescaling should be uniform at the intrinsic scale, i.e.", "why should $\\lambda _A/\\lambda _{A^\\prime }$ be independent of $n$ ?", "Second, one must predict the $A$ dependence of the rescaling parameter, i.e.", "of $\\lambda _A/\\lambda _N$ .", "Close, Roberts, Ross and I have argued that rescaling is a rather natural prediction of quark models with only one length scale.", "An example is the MIT bag model with only $u$ and $d$ quarks, which may be taken to be massless, so the only dimensionful parameter is the bag constant $B$ .", "Such models are very close in spirit to QCD itself in which the only dimensionful parameter is $\\Lambda $ .", "In a model like this, quarks carrying momentum $p$ confined within a radius $\\lambda $ transform into quarks carrying momentum $p^\\prime =(\\lambda /\\lambda ^\\prime )p$ when the confinement scale is changed to $\\lambda ^\\prime $ - the dimensionless quantity $p \\lambda $ is constant.", "The intrinsic scale $\\mu ^2$ is then proportional to $p^2$ , there being no other scale in the problem.", "It is hard to turn this heuristic argument into a proof of uniform rescaling.", "That would probably require a consistent formulation of perturbative QCD (including renormalization) in a bag model, something which exists only in fragments [62].", "On the other hand, as Llewellyn Smith has noted, it seems clear that other models such as the non-relativistic quark model have no hope of 
giving rescaling unless the quark masses are assumed (rather unnaturally) to scale with the inverse confinement radius [10].", "A satisfactory theoretical understanding of rescaling will have to await a more powerful QCD-based theory of confinement.", "The second task $-$ determining the $A$ dependence of $\\lambda _A/\\lambda _N$ $-$ is more straightforward.", "It is the subject of the next Section.", "It hardly needs saying that the EMC effect derives from the proximity of nucleons within the nucleus, and that it would vanish if one could arrange that the nucleus was very dilute.", "Fortunately, nature has given us one very dilute nucleus $-$ the deuteron $-$ and Bodek and Simon [15] have shown that $\\overline{F}^D_2(x,Q^2)/F^N_2(x,Q^2)$ is very close to unity.", "It seems reasonable to assume, therefore, that the magnitude of the EMC effect should be proportional to the probability that nucleons approach each other or overlap within the nucleus.", "Close, Roberts, Ross and I [57], [58] defined the simplest measure of this effect we could imagine: we defined an “overlapping\" volume for a nucleus which is the integral over the nucleus of the two body density $\\rho _A({\\bf r_1},{\\bf r_2})$ multiplied by an overlap factor $V_0(|{\\bf r_1-r_2}|)$ which we took to be the overlapping volume of two spheres of radius $a$ : $V_0(d) &= 1-\\frac{3}{4} \\left(\\frac{d}{a} \\right) + \\frac{1}{16}\\left(\\frac{d}{a} \\right)^3 {\\ } &d \\le 2a \\nonumber \\\\&= 0 {\\ \\ \\ \\ \\ \\ } &d>2a \\ .", "\\qquad \\mathrm {(6.18)}$ Thus, the overlapping volume per nucleon is $V_A =(A-1) \\int d^3{\\bf r_1} d^3 {\\bf r_2} \\rho _A({\\bf r_1},{\\bf r_2}) V_0(|{\\bf r_1-r_2}|) \\ .", "\\qquad \\mathrm {(6.19)}$ $\\rho _A({\\bf r_1},{\\bf r_2})$ is normalized to $\\int d^3{\\bf r_1} d^3 {\\bf r_2} \\rho _A({\\bf r_1},{\\bf r_2})=1$ .", "Saturation of the nuclear density at large $A$ implies $\\rho _A \\sim 1/A^2$ .", "With this behavior of $\\rho _A$ and the finite integral of $V_0$ it is easy to see that $V_A$ saturates at large $A$ , i.e.", "$\\lim _{A \\rightarrow \\infty } V_A =$ constant.", "The choice of a geometrical form for $V_0(|{\\bf r_1-r_2}|)$ was in fact quite arbitrary.", "Any function which goes to unity as ${\\bf r_1-r_2} \\rightarrow 0$ and to zero when $|{\\bf r_1-r_2}|>2R_{\\rm nucleon}$ and which respects the three dimensional geometry of the problem would do.", "We calculated the overlapping volume for nucleons with $a$ chosen so $a_{\\rm RMS} = 0.9$ fm ($a_{\\rm RMS}=\\sqrt{\\frac{3}{5}}a$ , so $a=1.16$ fm).", "$\\rho _A({\\bf r_1},{\\bf r_2})$ was written in terms of the single particle density $\\rho _A({\\bf r})$ and a correlation function $f({\\bf r_1-r_2})$ : $\\rho _A({\\bf r_1},{\\bf r_2}) = \\rho _A({\\bf r_1}) \\rho _A({\\bf r_2}) f({\\bf r_1-r_2}) \\ .", "\\qquad \\mathrm {(6.20)}$ $\\rho _A({\\bf r})$ was taken from experimental measurements of nuclear charge densities.", "$f({\\bf r_1-r_2})$ should, in principle, depend on $A$ but there is little or no direct information on it from experiment.", "We took $f({\\bf r_1-r_2})$ from models of nuclear matter, the most realistic probably being one based on a Reid soft core potential [60] shown in Fig.", "REF .", "Figure: Choice of the correlation function $f(r)$ : (a) no correlation; (b) Fermi gas correlation; (c) Reid soft core correlation. We then assumed that the effective confinement size in a nucleus $A$ will be intermediate between that for an isolated nucleon and some limiting value $\\lambda _{\\rm tot}$ associated with two totally overlapping
nucleons.", "We assumed a linear interpolation in $V_A$ leading to $\\frac{\\lambda _A}{\\lambda _N} = 1 + V_A \\left( \\frac{\\lambda _{\\rm tot}}{\\lambda _N}-1 \\right) \\ .", "\\qquad \\mathrm {(6.21)}$ $\\lambda _{\\rm tot}$ might be viewed as a parameter.", "Instead, we estimated its value from the bag model, where a spherical baryon number two system must have volume at least twice as large as a single nucleon (otherwise it would be stable against decay into two nucleons), so $R_6 \\ge 2^{1/3}R_3$ .", "This led us to take $\\lambda _{\\rm tot}/\\lambda _N = 2^{1/3}$ , and to the values of $\\lambda _A/\\lambda _N$ given in Table REF .", "Table: Values of the confinement size relative to that for the free nucleon for a range of nuclei.", "The three values (a), (b) and (c) correspond to three choices of the correlation function $f(r)$ shown in Fig. .", "$\\xi _{NA}$ (20 GeV$^2$ ) is shown for the Reid correlation function.", "The others are similar. These translate into values of $\\xi _{NA}$ (20 GeV$^2$ ) which are much larger, and these in turn give predictions for the EMC effect in a variety of nuclei.", "The predictions of the rescaling model are compared with the SLAC data in Fig.", "11.", "The agreement is fine for $x < 0.7$ above which Fermi motion becomes important.", "One thing which is not obvious from Fig.", "11 is that the data correlate well with idiosyncrasies in the periodic table.", "Fig.", "REF shows predictions for several $x$ values and $Q^2$ = 4.98 GeV$^2$ compared with SLAC data.", "Figure: $A$ dependence of the EMC effect at fixed $x$ .", "The data are squares with error bars.", "The predictions of rescaling are solid dots. The fluctuations in the rescaling predictions reflect variations in nuclear densities, e.g.", "$^4$ He is more tightly bound than $^3$ He, and generally follow the data.", "Some predictions for the future are shown in Fig.", "REF .", "Figure: $A$ dependence at $x=0.6$ , $Q^2 \\approx 10$ GeV$^2$ predicted by rescaling. Several comments are appropriate before leaving the discussion of $A$ dependence: 1.", "Llewellyn Smith has argued that the calculation of the $A$ dependence we have given is much more general than the rescaling model.", "Any scheme which fits the EMC effect in iron and in which the effect is linear in $V_A$ will agree as well with the SLAC data.", "This is qualitatively true but the assumption of linearity is non-trivial.", "We argued that $\\lambda _A/\\lambda _N$ should be linear in the overlapping volume.", "Had we instead (erroneously) assumed $\\xi _{NA}(Q^2=20\\ GeV^2)$ to be linear in $V_A$ the $A$ dependence would have come out wrong.", "2.", "The rescaling model is not related to convolution models nor does it ascribe the EMC effect to any particular exotic component in the nuclear wavefunction (e.g., six quark bags).", "The scale change might originate from a change in the size of individual nucleons, from quark percolation between nucleons, or from multiquark or meson admixtures in the nuclear wavefunction.", "Personally, I suspect that to the extent that these notions can be well-defined they will turn out to correspond to the same underlying physics — partial deconfinement.", "3.", "The rescaling fit to the $A$ -dependence depends on several parameters: the nucleon radius $a$ , the formula chosen to relate $V_A$ to $\\lambda _A/\\lambda _N$ , the scale $\\mu _0$ and the choice of two nucleon correlation function.", "In fact, none of these were treated as free parameters in [58].", "Instead, they were fixed
at “reasonable\" values from other aspects of hadron dynamics.", "Of course, the precise choice of parameters is not particularly central to the explanation of the EMC effect.", "4.", "The value of $V_A$ is very large for large $A$ .", "It is $\\approx $ 0.72 for $^{208}$ Pb with the Reid correlation function.", "It has been remarked that such a large value of $V_A$ is “unreasonable\" and in some sense contradicted by the many successes of conventional nuclear physics.", "The overlapping volume of nucleons with $a_{\\rm rms}$ = 0.9 fm is very large.", "That is a fact, not a shortcoming of the rescaling model.", "As for the idea that large $V_A$ is inconsistent with nuclear physics in general - I believe it is an unwarranted concern: It costs $\\approx $ 1 GeV (the string tension) to separate colored sources by 1 fm.", "It even costs 300 MeV to flip a quark spin.", "These energetic considerations, not naive classical geometrical considerations, determine the quantum mechanics of nuclei.", "It has long been expected that at very low values of $x$ deep inelastic electron scattering from nuclei would behave like hadron scattering from nuclei and exhibit “shadowing\" [61].", "At high energies hadron nucleus total cross sections grow like $A^{2/3}$ .", "The simple explanation of this effect is that the incoming hadron doesn't “see\" the nucleons at the back of the nucleus.", "They are in the shadow of those in front.", "The cross section then grows like $\\pi R^2 \\sim A^{2/3}$ .", "Real photon-nucleus interactions are shadowed [62].", "This finds a simple explanation in the framework of vector meson dominance: the hadronic interactions of a real photon are well-approximated by supposing the photon converts with probability $\\sim \\!\\!\\alpha $ into a vector meson ($\\rho $ , $\\omega $ or $\\phi $ ) which then interacts hadronically and experiences shadowing.", "Vector meson dominance fails to explain inelastic lepton scattering at large $Q^2$ , or at least an infinite tower of vector states with precisely tuned couplings is required, but the prejudice that shadowing occurs there too is strong.", "Figure: Cartoons representing the $\\xi ^3$ values probed by deep inelastic lepton scattering at low-$x$ from an iron target. My own thoughts on shadowing are still in flux.", "Since much of the future experimental work in this field will be carried out in the kinematic regime where shadowing might be important, I would like to outline the problem here.", "Perhaps some reader will solve it!", "The kinematic relation $\\xi ^3<1/Mx \\qquad \\mathrm {(6.22)}$ tells us that for very small values of $x$ the struck quark may propagate over very large distances in the target.", "For $ x \\approx 0.5$ , $\\xi ^3 \\le 0.4$ fm, but for $x \\approx 0.05$ , $\\xi ^3 \\le 4$ fm and for $x \\approx 0.005$ , $\\xi ^3 \\le 40$ fm!", "The parton model diagram dominant at large $Q^2$ is shown relative to an iron nucleus in Fig.", "REF where the struck quark is associated with a single nucleon.", "Other possibilities represent EMC-like corrections.", "The question is whether a quark propagating over such distances is or is not strongly absorbed in the nuclear medium.", "Hadron interactions at very high energies are predominantly absorptive, but the state which is propagating in Fig.", "REF is not an ordinary hadron.", "In particular, the propagating quark, which is a color triplet, is always close to an antiquark [63] which is a color antitriplet that may neutralize some or most of its strong interactions.",
"Furthermore, the invariant mass of the $q \\bar{q}$ pair is $-Q^2$ .", "Suppose for the sake of discussion, we characterize quark propagation in the nuclear medium (accompanied by an antiquark as in Fig.", "REF ) by an absorption length $\\lambda $ which may depend on $Q^2$ .", "For a given nucleus when $1/Mx< \\lambda (Q^2)$ there is little shadowing.", "Shadowing sets in when $1/Mx \\sim \\lambda (Q^2)$ , but when $1/Mx$ exceeds twice the radius of the nucleus ($R_A$ ) shadowing saturates: decreasing $x$ further does not put more matter along the propagating quark's path.", "So there are two $x$ -values characterizing shadowing, $x_0(Q^2) = 1/M \\lambda (Q^2)$ marking its onset and $x(A) = 1/2MR_A$ marking its saturation.", "For $x < x(A)$ , only the front skin of the nucleus down to a depth $\\lambda (Q^2)$ participates in the scattering, so shadowing increases with $A$ but goes away as $\\lambda (Q^2) \\rightarrow \\infty $ .", "All of this can be summarized by the purely phenomenological formula $\\overline{F}^A_2(x,Q^2) = \\frac{F^N_2(x,Q^2)}{\\left[ 1 + \\frac{x_0(Q^2)}{x+x(A)} \\right] } \\qquad \\mathrm {(6.23)}$ in which for simplicity I have ignored all other nuclear effects such as rescaling and Fermi motion.", "Eq.", "(6.23) does not satisfy the quark number sum rule.", "It is not intended to be valid at all $x$ , only for $x \\sim 0$ where shadowing may be important.", "According to Eq.", "(6.23) there is an asymptotic shadowing curve for nuclear matter ($A \\rightarrow \\infty $ , $x(A) \\rightarrow 0$ ), shown for example in Fig.", "REF .", "Finite nuclei track along $\\overline{F}^\\infty _2(x,Q^2)$ until $x \\sim z(A)$ below which they depart above $\\overline{F}^\\infty _2$ .", "A determination of $\\lambda (Q^2)$ is crucial [64].", "Unfortunately, it is largely unknown.", "Naively, one might expect $\\lambda (Q^2)$ to be a typical meson mean free path, corresponding to a cross section of order 30 mb.", "This is too naive.", "The propagating $q \\bar{q}$ pair in Fig.", "REF must have invariant mass $-Q^2$ .", "This can be generated if both quark and antiquark have transverse momentum of order $\\frac{1}{2}\\sqrt{Q^2}$ or by a large mismatch in their longitudinal momentum.", "In the former case, the transverse dimension of the $q \\bar{q}$ system is very small, $\\sim 1/\\sqrt{Q^2}$ .", "The system looks like a small color dipole and has a small absorption cross section.", "In the latter case, the transverse dimensions are not small, color screening is not so effective and the absorption cross section may be large.", "The importance of shadowing depends on which region of phase space dominates.", "This is not yet known.", "But at least some of the shadowing seen at $Q^2 =0$ should disappear at large $Q^2$ .", "The behavior of $\\overline{F}^A_2(x,Q^2)$ at small $x$ expected if shadowing indeed disappears at large $Q^2(x(Q^2) \\rightarrow 0$ ) as shown in Fig.", "REF .", "Figure: Scenarios for shadowing.", "(a) and (b) compared to two different values of λ(Q 2 )\\lambda (Q^2) with λ a >λ b \\lambda _a > \\lambda _b.The discussion in this section has been very qualitative and phenomenological.", "Eq.", "(6.23) should be taken with a grain of salt.", "There is much to do on the subject of shadowing and little of substance to report here.", "Nevertheless, I thought it might be appropriate to end by whetting the reader's appetite for the next round of experiments." ] ]
2212.05616
[ [ "Mediation analysis with the mediator and outcome missing not at random" ], [ "Abstract Mediation analysis is widely used for investigating direct and indirect causal pathways through which an effect arises.", "However, many mediation analysis studies are often challenged by missingness in the mediator and outcome.", "In general, when the mediator and outcome are missing not at random, the direct and indirect effects are not identifiable without further assumptions.", "In this work, we study the identifiability of the direct and indirect effects under some interpretable missing not at random mechanisms.", "We evaluate the performance of statistical inference under those assumptions through simulation studies and illustrate the proposed methods via the National Job Corps Study." ], [ "Introduction to mediation analysis and the National Job Corps Study", "Mediation analysis is increasingly adopted by researchers in a variety of fields, including epidemiology and social sciences, to test specific theories about the underlying mechanism through which an effect arises.", "In a typical mediation analysis, the average treatment effect on an outcome is decomposed into a natural indirect effect (NIE) that operates through the mediator of interest and a natural direct effect (NDE) that operates through other pathways.", "The NIE and NDE are identified under the sequential ignorability assumption [14], [15], that is, when there is no unmeasured pretreatment confounding in the treatment-mediator and treatment-outcome relationships and there is no post-treatment confounding or unmeasured pretreatment confounding in the mediator-outcome relationship.", "With the identification results, the NIE and NDE can be estimated through various approaches, such as regression [32], weighting [11], [13] and simulation-based strategy [14], [15].", "However, missing data are prevalent in empirical research and it is often a concern that the missingness is not at random [20].", "Missing not at random occurs if the probability of data being missing depends on unobserved data even conditional on observed data.", "The motivation of our study comes from the well-known National Job Corps Study (NJCS).", "The NJCS is a randomized nation wide evaluation of Job Corps, which is the largest education and vocational training program administered by the U.S. 
Department of Labor for 16 to 24 years old youths who are unemployed and disconnected from school.", "Past research showed that Job Corps successfully improves those disadvantaged youths' employment and increases their earnings [31], [17], [37].", "Given that education and vocational training are the central elements of the program, [27] further proposed to evaluate how much of the Job Corps effect on earnings is mediated by educational and vocational attainment and found a significant positive mediated effect.", "However, the analysis is challenged by missingness both in the mediator and the outcome.", "In the NJCS study, the assignment to either Job Corps program or the control group was random.", "The mediator describes whether or not the subject obtained an education credential or vocational certificate after randomization, and the outcome describes subject's weekly earnings in the fourth year after randomization.", "The mediator and outcome information were collected at the 30-months and 48-months follow-up, respectively, through in-person interviews.", "And the missing rates of the mediator and the outcome are both higher than $20\\%$ .", "Qin et al.", "(2019) addressed the missing data problem through non-response weighting, which is a valid approach if the missingness is random given the observed pretreatment covariates.", "However, we are concerned that the missingness is likely not at random.", "Conceivably, people who failed to obtain an education credential or vocational certificate may be less likely to provide the information compared to people who successfully obtained a credential or certificate.", "One could also imagine that people who had no earnings may be less willing to report the amount of earnings in the interview.", "As a result, the chance of missingness would depend on the unobserved missing value itself, missing not at random.", "In such a scenario, commonly adopted strategies to deal with missing data, such as complete case analysis, multiple imputation or non-response weighting, may fail to provide valid inference.", "Missingness not at random presents a challenge for causal inference, because in general, the underlying data distribution can not be identified without further assumptions.", "This topic has attracted some attention in the literature.", "For example, [7] and [36] studied the identifiability of subgroup treatment effects and average treatment effects, respectively, with covariates missing not at random.", "In the context of instrumental variable analysis, methods were developed when the missingness in the covariates [35] or in the outcome [9], [26], [4] is not at random.", "Previous research also studied the identifiability of graphical models with a binary outcome missing not at random in longitudinal studies [21], and reviewed missing data research in graphical models [22].", "However, limited effort has been made to study the identifiability of causal mediation effects with the mediator and outcome missing not at random.", "Considering missingness in the outcome only, [19] utilized an instrumental variable type of covariate to identify the direct and indirect effects when the missingness in the outcome depends on the outcome value itself.", "To apply their method, we need a covariate that is associated with the outcome, however, is conditionally independent of the missingness of the outcome.", "In many studies, such a covariate may not be available." 
], [ "Organization of the paper", "In this paper, we study the identification of the causal mediation effects with mediator and outcome missing not at random.", "We provide conditions for identification under various interpretable missing not at random assumptions, and also provide examples for some missingness mechanisms where the identification cannot be achieved without further assumptions.", "The rest of the paper is organized as follows.", "We introduce notation and basic assumptions for causal mediation analysis in section .", "In section , we discuss the identification in a simple setup where the missingness exists only in the mediator and depends on the missing mediator value itself.", "We then extend the results to the more complicated setup where both the mediator and the outcome have missing data in section .", "Expectation-maximization algorithms are used for estimation and in section , extensive simulation studies are conducted to test our theoretical results and to evaluate the performance of the proposed methods.", "We apply our method to the NJCS data in section and develop a sensitivity analysis approach to study the robustness of conclusions on the causal mediation effects if the underlying missingness mechanism is beyond the mechanisms that allow identification.", "We conclude with a discussion and provide all proofs of the theorems in the Supplementary Material." ], [ "Notation and some basic definitions", "Let $A\\perp \\!\\!\\!\\perp B\\mid C$ denote that the random variables $A$ and $B$ are conditionally independent given the random variable $C$ .", "Further, the property of completeness [18], [2] will play a key role in our nonparametric identification of the mediation effects.", "Define a function $f(A,B)$ to be complete in $B$ if $\\int g(A)f(A,B)d\\nu (A)=0$ implies $g(A)=0$ almost surely for any square-integrable function $g$ .", "In the above integral, $\\nu (\\cdot )$ presents a generic measure, which is the Lebesgue measure for a continuous variable and the counting measure for a discrete variable." 
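Since completeness drives all of the identification results that follow, a small finite-dimensional illustration may help. For discrete variables the definition above reduces to a rank condition: writing $f(A,B)$ as a matrix with rows indexed by $A$ and columns by $B$, completeness in $B$ means that no nonzero $g(A)$ satisfies $\sum_a g(a)f(a,b)=0$ for every $b$, i.e., the matrix has full row rank, which in particular requires the support of $B$ to be at least as large as that of $A$. The Python sketch below checks this condition on a purely hypothetical joint distribution; in the theorems later in the paper, $f$ plays the role of $\mathbb{P}(Y,M,R^M=1\mid T=t,X=x)$ with $A=M$ and $B=Y$.

import numpy as np

# Hypothetical joint pmf f(a, b): A takes 2 values (rows), B takes 3 values (columns).
f = np.array([[0.10, 0.25, 0.15],
              [0.20, 0.05, 0.25]])

# f is complete in B exactly when g @ f = 0 has only the trivial solution g = 0,
# i.e. when the rows of f are linearly independent (full row rank).
print(np.linalg.matrix_rank(f) == f.shape[0])    # True for this example

# A counterexample: proportional rows, so g = (3, -2) gives g @ f2 = 0 and
# completeness in B fails even though f2 is a valid pmf.
f2 = np.array([[0.10, 0.20, 0.10],
               [0.15, 0.30, 0.15]])
print(np.linalg.matrix_rank(f2) == f2.shape[0])  # False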
], [ "Review of mediation analysis without missing data", "Let $T$ denote the binary treatment assignment, with $t=0$ and $t=1$ representing the control condition and the experimental condition, respectively.", "Let $ X $ be the vector of measured pre-treatment covariates.", "We use $M$ and $Y$ to denote the mediator and the outcome, respectively.", "We adopt the potential outcomes framework [23], [29] to define the causal effects of interest and make the stable unit treatment value assumption (SUTVA) that there is no hidden variations of each treatment condition and there is no interference between individuals.", "Specifically, we use $M_i(t)$ to denote individual $i$ 's potential mediator value under treatment $t$ for $t=0, 1$ , and use $Y_i(t)$ to denote individual $i$ 's potential outcome value under treatment $t$ for $t=0, 1$ .", "$T$ potentially may affect $Y$ through $M$ , hence $Y_i(t)$ can be equivalently written as $Y_i(t,M_i(t))$ .", "Given the above notation, the average treatment effect ($\\mathrm {ATE}$ ) on the outcome can be defined as $\\mathbb {E}\\lbrace Y(1)-Y(0)\\rbrace $ where the expectation is taken over the population of interest.", "Define the nested potential outcome, $Y_i(1,M_i(0))$ , to describe individual $i$ 's potential outcome under the experimental condition, however, with the mediator counter factually taking its value under the control condition [28].", "The $\\mathrm {ATE}$ can be decomposed into [25]: $\\mathrm {ATE} = \\mathrm {NIE}+\\mathrm {NDE},$ where $\\mathrm {NIE} = \\mathbb {E}\\lbrace Y(1,M(1))-Y(1,M(0))\\rbrace $ is the natural indirect effect ($\\mathrm {NIE}$ ) and $\\mathrm {NDE} = \\mathbb {E}\\lbrace Y(1,M(0))-Y(0,M(0))\\rbrace $ is the natural direct effect ($\\mathrm {NDE}$ ).", "The $\\mathrm {NIE}$ quantifies the average treatment effect on the outcome transmitted through the treatment induced change in the mediator from $M_i(0)$ to $M_i(1)$ ; and the $\\mathrm {NDE}$ quantifies the direct effect of the treatment on the outcome that does not operate through its impact on the mediator.", "We invoke the standard sequential ignorability assumption [14], [15] throughout the paper.", "Assumption 1 (Sequential ignorability).", "$\\lbrace Y(t^{\\prime }, m), M(t)\\rbrace \\perp \\!\\!\\!\\perp T \\mid X = x $ , and $Y(t^{\\prime },m) \\perp \\!\\!\\!\\perp M(t)\\mid T=t, X = x $ , for $t, t^{\\prime } \\in \\lbrace 0, 1\\rbrace $ and all $ X \\in \\mathcal {X}$ .", "In addition, $0 < \\mathbb {P}(T=t\\mid X = x )<1$ and $0 < \\mathbb {P}\\lbrace M(t)=m \\mid T=t, X = x \\rbrace <1$ for $t, t^{\\prime } \\in \\lbrace 0, 1\\rbrace $ , and all $ X \\in \\mathcal {X}$ and $m\\in \\mathcal {M}$ .", "The notation $\\mathcal {X}$ and $\\mathcal {M}$ denote the supports of the random variables $X$ and $M$ , respectively.", "Assumption REF says that both the treatment assignment and the mediator value under each treatment condition can be viewed as if randomized within levels of the observed pre-treatment covariates.", "Under Assumption REF without missing data, we have the following mediation formula [25], [14]: $\\mathbb {E}\\lbrace Y(t, M(t^{\\prime }))\\mid X = x \\rbrace = \\int _{\\mathcal {M}}\\mathbb {E}(Y\\mid M=m, T=t, X = x )\\,d F(m\\mid T=t^{\\prime }, X = x ).$ When there exists missing data, the key for the identification of the $\\mathrm {NIE}$ and $\\mathrm {NDE}$ would be to identify the probabilities $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x )$ and $\\mathbb {P}(M=m\\mid T=t, X = x )$ , or equivalently, the joint probability 
$\\mathbb {P}(Y=y, M=m\\mid T=t, X = x )$ , from the observable data." ], [ "Missingness only in the mediator", "In this section, we consider a simple setup where the mediator has missing values and the outcome is fully observed.", "It may happen in studies where the outcome is of primary interest with the mediator being a secondary outcome of interest.", "Let $R^M$ be the missingness indicator for $M$ such that $R^M=1$ if $M$ is observed and $R^M=0$ if $M$ is missing.", "When $R^M \\perp \\!\\!\\!\\perp (Y,M,T, X )$ , the missingness mechanism is missing completely at random (MCAR), and the complete case analysis is enough to provide consistent estimates of $\\mathbb {P}(Y,M \\mid T, X )$ .", "When $R^M \\perp \\!\\!\\!\\perp M \\mid (Y,T, X )$ , the missingness mechanism is missing at random (MAR), and the joint distribution $\\mathbb {P}(Y,M \\mid T, X )$ can be consistently estimated from the observed data.", "However, as we explained in section , often we have the concern that the missingness of $M$ may depend on the value of $M$ itself even conditional on other observed data.", "In such a case, the missingness mechanism is missing not at random (MNAR).", "Since the outcome $Y$ occurs after the mediator $M$ , it is plausible in many studies to assume that the missingness of $M$ is conditionally independent of $Y$ .", "Based on the above discussion, we propose the following MNAR assumption when missing only the mediator: Assumption 2 $R^M \\perp \\!\\!\\!\\perp Y\\mid (M,T, X )$ and $Y$ is fully observed.", "Assumption REF allows $R^M$ to depend on the value of the mediator $M$ , $T$ and $ X $ .", "However, we assume $R^M$ to be conditionally independent of the outcome $Y$ given $M$ , $T$ and $ X $ .", "The direct acyclic graphs in Figure REF illustrate the different missingness mechanisms under MCAR, MAR and our proposed MNAR mechanism in Assumption REF , respectively.", "Figure: Direct acyclic graphs describing MCAR, MAR and the proposed MNAR missingness mechanisms when the missingness exists only in the mediator.", "All graphs condition on X X and allow X X to have arrows to all other variables in the graphs.The following theorem presents the nonparametric identification results under the proposed MNAR assumption: Theorem 1 Under Assumptions REF and REF , if the joint distribution $\\mathbb {P}(Y,M,R^M=1 \\mid T=t, X=x )$ is complete in $Y$ for all $t$ and $x$ , and $\\mathbb {P}(R^M=1\\mid Y=y,M=m,T=t, X = x )>0$ for all $y, m, t, x $ , the joint distribution $\\mathbb {P}(Y,M\\mid T, X )$ is identifiable, and therefore, the $\\mathrm {NIE}$ and $\\mathrm {NDE}$ are identifiable.", "For sufficient conditions on completeness, we refer readers to [8] for a comprehensive discussion.", "The assumption of completeness is routinely made in nonparametric identification problems [5], [36], [1].", "For discrete $M$ and discrete $Y$ , completeness is equivalent to Rank $(\\Theta _{tx})=J$ , where $\\Theta _{tx}$ is a $J \\times K$ matrix with $\\mathbb {P}_{my1\\mid t, x}$ as the $(m,y)$ th element.", "This essentially requires that the number of elements in the support of $Y$ is not smaller than the number of elements in the support of $M$ , and that $M \\lnot \\!\\perp \\!\\!\\!\\perp Y\\mid (T, X , R^M=1)$ or equivalently $M \\lnot \\!\\perp \\!\\!\\!\\perp Y\\mid (T, X )$ .", "In fact, under Assumption REF , the identification of $\\mathbb {P}(Y\\mid M, T, X )$ does not rely on the completeness assumption.", "The completeness assumption is only needed to identify $\\mathbb {P}(M\\mid T, 
X )$ .", "This implies that in a special case when $M \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ and the completeness assumption is violated, since $\\mathbb {P}(Y\\mid M, T, X ) = \\mathbb {P}(Y\\mid T, X )$ , both the $\\mathrm {NIE}$ and $\\mathrm {NDE}$ can still be identified, and $\\mathrm {NIE} = 0$ and $\\mathrm {NDE} = \\mathrm {ATE} = \\int _{\\mathcal {X}}\\lbrace \\mathbb {E}(Y\\mid T=1, X = x )-\\mathbb {E}(Y\\mid T=0, X = x )\\rbrace \\,d F( x ).$ We also illustrate this point using simulation studies." ], [ "Missing data mechanisms", "We now extend the results to the scenario where both the mediator and outcome have missing data.", "Further define the notation $R^Y$ to be the missingness indicator for $Y$ such that $R^Y=1$ if $Y$ is observed and $R^Y=0$ otherwise.", "When $(R^Y, R^M) \\perp \\!\\!\\!\\perp (Y,M,T, X )$ , the missingness mechanism is MCAR; and when $(R^Y,R^M) \\perp \\!\\!\\!\\perp (Y,M)\\mid (T, X )$ , the missingness mechanism is MAR.", "Continuing to allow the missingness of $M$ to depend on the value of $M$ itself and assume the outcomes occurred later (i.e., $Y$ and $R^Y$ ) not to affect variables measured earlier (i.e., $M$ and $R^M$ ), we consider the following MNAR mechanisms in Assumption REF , REF , and REF , respectively.", "Assumption 3 $(R^Y, R^M) \\perp \\!\\!\\!\\perp Y\\mid (M,T, X )$ and $R^Y \\perp \\!\\!\\!\\perp M\\mid (R^M,T, X )$ .", "Assumption 4 $R^Y \\perp \\!\\!\\!\\perp (M,R^M)\\mid (Y,T, X )$ and $R^M \\perp \\!\\!\\!\\perp (Y,R^Y)\\mid (M,T, X )$ .", "Assumption 5 $Y, R^Y$ and $ R^M$ are mutually independent given $(M,T, X )$ .", "The direct acyclic graphs in Figure REF illustrate different missingness mechanisms under MCAR, MAR, and MNAR Assumptions REF to REF when the missingness exists both in the mediator and outcome.", "The differences among the MNAR mechanisms under Assumption REF to REF are in the missingness mechanisms in $Y$ .", "The MNAR mechanism under Assumption REF allows the chance of missingness in $M$ to depend on the value of $M$ itself in addition to the fully observed variables $T$ and $ X $ , and allows the missingness in $Y$ to depend on the missingness in $M$ in addition to $T$ and $ X $ .", "It is important to note that the MAR mechanism is a special case of the MNAR mechanism under Assumption REF without allowing $M$ to affect $R^M$ .", "In the context of NJCS study, this mechanism says that whether or not people successfully obtained a certificate may have an impact on their willingness to report, and their decision on reporting their certificate status may be associated with their decision to report their earnings or not.", "The MNAR mechanism under Assumption REF allows the missingness of $Y$ to depend on the outcome value $Y$ itself instead of $R^M$ as in Assumption REF .", "In the NJCS study, it says that the amount of earnings may have an impact on the probability to report earnings, which is also a reasonable concern.", "Note that both the missingness in $M$ and the missingness in $Y$ are not at random under Assumption REF .", "The MNAR mechanism under Assumption REF allows the missingness in $Y$ to depend on $M$ instead of $R^M$ or $Y$ , another case where both the missingness in $M$ and that in $Y$ are not at random.", "In the context of NJCS study, it says that whether or not people successfully obtained a certificate drives the missingness in both $M$ and $Y$ after conditioning on $T$ and $ X $ .", "Figure: Direct acyclic graphs describing MCAR, MAR and the proposed MNAR missingness mechanisms when 
the missingness exists both in the mediator and outcome (all graphs condition on X X allowing X X to have directed arrows to all variables in the graphs)." ], [ "Identification results", "We now present the nonparametric identification results under each of the MNAR mechanisms described in Assumptions REF to REF .", "Theorem 2 Under Assumptions REF and REF , if the joint distribution $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X =x)$ is complete in $Y$ for all $t$ and $x$ , and $\\mathbb {P}(R^M=1,R^Y=1\\mid Y=y,M=m,T=t, X = x ) > 0$ and $\\mathbb {P}(R^M=0,R^Y=1\\mid Y=y,M=m,T=t, X = x )>0$ for all $y, m, t, x $ , the joint distribution $\\mathbb {P}(Y,M\\mid T, X )$ is identifiable, and therefore, the NIE and NDE are identifiable.", "Under the MNAR mechanism described by Assumption REF , $Y\\perp \\!\\!\\!\\perp (R^M, R^Y)\\mid (M, T, X) $ , and therefore, the identification of $\\mathbb {P}(Y\\mid M, T, X )$ does not rely on the completeness assumption and can be identified by the complete cases, i.e., $\\mathbb {P}(Y\\mid M, T, X ) = \\mathbb {P}(Y\\mid M, T, X , R^Y=1, R^M=1)$ .", "The completeness assumption is used only to identify $\\mathbb {P}(M\\mid T, X )$ .", "As a result, when $M\\perp \\!\\!\\!\\perp Y\\mid (T, X)$ , the NIE and NDE can still be identified as discussed in Section .", "Theorem 3 Under Assumptions REF and REF , if the joint distribution $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X=x )$ is complete in $Y$ for all $t$ and $x$ , and the joint distribution $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X=x )$ is also complete in $M$ for all $t$ and $x$ , and if $\\mathbb {P}(R^M=1,R^Y=1\\mid Y=y,M=m,T=t, X = x ) > 0$ , $\\mathbb {P}(R^M=0,R^Y=1\\mid Y=y,M=m,T=t, X = x ) > 0$ and $\\mathbb {P}(R^M=1,R^Y=0\\mid Y=y,M=m,T=t, X = x )>0$ for all $y, m,t, x $ , the joint distribution $\\mathbb {P}(Y,M\\mid T, X )$ is identifiable, and therefore, the NIE and NDE are identifiable.", "Different from Assumption REF , under the MNAR mechanism described by Assumption REF , $Y$ is not independent of $R^Y$ given $M, T$ and $ X $ , and therefore, the conditional distribution $\\mathbb {P}(Y\\mid M, T, X )$ can no longer be identified by the complete cases.", "In fact, the MNAR mechanism under Assumption REF requires completeness in $Y$ and completeness in $M$ to be able to identify both $\\mathbb {P}(Y\\mid M, T, X )$ and $\\mathbb {P}(M\\mid T, X )$ .", "As a result, when $M\\perp \\!\\!\\!\\perp Y\\mid (T, X) $ , the NIE and NDE can no longer be identified.", "Further, since the completeness is required in $Y$ and required in $M$ , to achieve identification of $\\mathbb {P}(Y,M\\mid T, X )$ under Assumption REF , the number of elements in the support of $Y$ has to be the same as the number of elements in the support of $M$ .", "We present an unidentifiable case when $Y$ has more categories than $M$ in the section of the Supplementary Material.", "Theorem 4 Under Assumptions REF and REF , if either of the following two conditions holds, the joint distribution $\\mathbb {P}(Y,M\\mid T, X )$ is identifiable, and therefore, the NIE and NDE are identifiable: (i) the joint distribution $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X=x )$ is complete in $Y$ for all $t$ and $x$ , and $\\mathbb {P}(R^M=1,R^Y=1\\mid Y=y,M=m,T=t, X = x ) > 0$ , $\\mathbb {P}(R^M=0,R^Y=1\\mid Y=y,M=m,T=t, X = x ) > 0$ and $\\mathbb {P}(R^M=1,R^Y=0\\mid Y=y,M=m,T=t, X = x )>0$ for all $y, m,t, x $ ; (ii) the joint distribution $\\mathbb {P}(M,R^M=1,R^Y \\mid T=t, X=x )$ is complete in $R^Y$ for all $t$ and $x$ , and 
$\\mathbb {P}(R^M=1\\mid R^Y=r^Y, M=m,T=t, X = x )>0$ for all $r^Y, m, t, x $ .", "Under the MNAR mechanism described by Assumption REF , $\\mathbb {P}(Y\\mid M, T, X )$ can also be directly identified by the complete cases, i.e., $\\mathbb {P}(Y\\mid M, T, X ) = \\mathbb {P}(Y\\mid M, T, X , R^Y=1, R^M=1)$ .", "Different from the identification under MNAR Assumption REF or 4, there are two ways to identify $\\mathbb {P}(M\\mid T, X )$ , one is by utilizing the effect of $M$ on $Y$ if exists, i.e., the condition (i), the other one is by utilizing the effect of $M$ on $R^Y$ if exists, i.e., the condition (ii).", "Note that for the second condition (ii) to hold, $M$ needs to be binary given that $R^Y$ is binary.", "Assuming $Y \\perp \\!\\!\\!\\perp R^M\\mid (M,T, X )$ , and allowing $M$ to have a impact on $R^M$ , we have shown that the joint distribution $\\mathbb {P}(Y,M\\mid T, X )$ can be identified under some completeness assumptions when $R^Y$ only depends on one of $(R^M,Y,M)$ given $T$ and $ X $ .", "When $R^Y$ depends on more than one of $(R^M,Y,M)$ given $T$ and $ X $ , the joint distribution $\\mathbb {P}(Y,M\\mid T, X )$ cannot be identified without further assumptions.", "The discussions and examples for those unidentifiable cases are provided in the section of the Supplemental Material.", "So far, we have shown the nonparametric identification results of the joint distribution $\\mathbb {P}(Y,M\\mid T, X )$ , NIE and NDE, under various MNAR assumptions.", "However, the nonparametric estimation for these quantities may suffer from the curse of dimensionality in practice, especially with a large number of covariates.", "Therefore, we adopt a parametric method to obtain likelihood-based inference through the Expectation-Maximization algorithm [6].", "The estimation details are provided in the section of the Supplementary Material." 
], [ "Simulation", "We conducted simulation studies to evaluate the performance of the proposed estimators under each of the MNAR assumptions described in sections and .", "In a simple context of a single covariate $X \\sim N(0,1)$ and a randomized $T\\sim \\mathrm {Bernoulli}(0.5)$ , we considered the following four setups representing different relationships in the supports of $M$ and $Y$ : (A) binary $M$ and binary $Y$ , (B) binary $M$ and continuous $Y$ , (C) continuous $M$ and continuous $Y$ , and (D) continuous $M$ and binary $Y$ .", "We generated the mediator $M$ as follows: $\\mathrm {logit}~\\mathbb {P}(M=1\\mid T,X)=\\alpha _0+\\alpha _t T+\\alpha _x X$ if binary; and $M \\sim N(\\alpha _0+\\alpha _t T+\\alpha _x X, 0.5^2)$ if continuous.", "We then generated the outcome $Y$ according to: $\\mathrm {logit}~\\mathbb {P}(Y=1\\mid M,T,X)=\\beta _0+\\beta _m M+\\beta _t T+\\beta _{mt} M \\cdot T+\\beta _x X$ if binary; and $Y \\sim N(\\beta _0+\\beta _m M+\\beta _t T+\\beta _{mt} M \\cdot T+\\beta _x X, 0.5^2)$ if continuous.", "For each of the four setups described above, we considered missingness mechanisms under (I) Assumption REF , (II) Assumption REF , (III) Assumption REF , and (IV) Assumption REF , respectively, that is, sixteen simulation scenarios in total.", "Under all MNAR Assumptions REF to REF , $R^M$ is allowed to depends on $M$ , $T$ and $X$ , and therefore, we generated the binary variable $R^M$ as follows: $\\mathrm {logit}~\\mathbb {P}(R^M=1\\mid M,T,X)=\\lambda _0+\\lambda _m M+\\lambda _t T+\\lambda _x X.$ Under (I) Assumption REF , $Y$ is fully observed.", "For scenarios (II) to (IV) with missingness in $Y$ , the data generating models for $R^Y$ varied according to different MNAR Assumptions: $\\mathrm {logit}~\\mathbb {P}(R^Y=1\\mid R^M,T,X)=\\gamma _0+\\gamma _{r^M} R^M+\\gamma _t T+\\gamma _x X$ if under (II) Assumption REF ; $\\mathrm {logit}~\\mathbb {P}(R^Y=1\\mid Y,T,X)=\\gamma _0+\\gamma _y Y+\\gamma _t T+\\gamma _x X$ if under (III) Assumption REF ; and $\\mathrm {logit}~\\mathbb {P}(R^Y=1\\mid M,T,X)=\\gamma _0+\\gamma _m M+\\gamma _t T+\\gamma _x X$ if under (IV) Assumption REF .", "For each of the sixteen simulation scenarios considered, we tested our theoretical results and evaluated the performance of our method when $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ and when $M \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ .", "Table REF presents the specifications of parameter values when $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ , and when $M \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ , respectively.", "We set parameter values in the $R^M$ and $R^Y$ models to generate missing rates in $M$ and $Y$ both to be around $20\\%$ to $25\\%$ , which are similar to the missing rates in our application study.", "Table: Specifications of the parameter values.", "1, M¬⊥⊥Y∣(T,X)M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X ); 2, M⊥⊥Y∣(T,X)M \\perp \\!\\!\\!\\perp Y\\mid (T, X ); Setting A, Binary MM and Binary YY; Setting B, Binary MM and Continuous YY; Setting C, Continuous MM and Continuous YY; Setting D, Continuous MM and Binary YY.We considered a sample size of 1000, and simulated 500 data sets for each simulation scenario.", "We applied the following three methods to compare their results on the estimations of the NIE and NDE: 1) complete case analysis, which provides consistent estimates under MCAR; 2) our proposed methods using the Expectation-Maximization algorithm, which are designed to deal with the MNAR assumptions under concern; and 3) oracle estimators, 
which are obtained by using the true values of the missing data.", "Figure REF presents boxplots of the percentage biases with respect to the true values, across the 500 replications, for each of the simulation scenarios when $M \lnot \perp \!\!\!\perp Y\mid (T, X )$ .", "Figure REF presents the boxplots for the corresponding scenarios when $M\perp \!\!\!\perp Y\mid (T, X )$ .", "The simulation results are consistent with the results predicted by our theorems.", "Under MNAR Assumption REF , where the missingness exists only in the mediator, the percentage biases of both the NIE and NDE estimated using our methods are close to zero, with slightly larger standard errors than the oracle estimates, when $M\lnot \perp \!\!\!\perp Y\mid (T, X )$ and the completeness assumption holds (A.I, B.I, C.I), while both the estimated NIE and NDE from the complete case analysis have substantial biases.", "When the completeness assumption is violated even though $M\lnot \perp \!\!\!\perp Y\mid (T, X )$ , as in the continuous $M$ and binary $Y$ case (D.I), our method does not recover the underlying truth, as expected.", "When $M \perp \!\!\!\perp Y\mid (T, X )$ , as shown in Figure REF A.I (0) to D.I (0), with $(0)$ indicating that $\mathrm {NIE}=0$ , the NIE and NDE estimates from both our methods and the complete case analysis are consistent.", "This is because, although $\mathbb {P}(M\mid T, X )$ cannot be identified, $\mathbb {P}(Y\mid M, T, X )$ can still be identified by the complete cases under Assumption REF .", "Under MNAR Assumption REF (A.II to D.II, and A.II(0) to D.II(0)) and MNAR Assumption REF (A.IV to D.IV, and A.IV(0) to D.IV(0)), we reached the same conclusions as those under MNAR Assumption REF .", "Under MNAR Assumption REF , when $M\lnot \perp \!\!\!\perp Y\mid (T, X )$ , the completeness assumption is violated in both the binary $M$ and continuous $Y$ (B.III) and the continuous $M$ and binary $Y$ (D.III) cases, and our estimators would not recover the underlying truth.", "Different from the other MNAR assumptions, when $M \perp \!\!\!\perp Y\mid (T, X )$ , neither $\mathbb {P}(Y\mid M,T, X )$ nor $\mathbb {P}(M\mid T, X )$ can be identified, and we observed biases in estimating the $\mathrm {NIE}$ and $\mathrm {NDE}$ using either the complete case analysis or our method.", "Figure: Simulation results when $M \lnot \perp \!\!\!\perp Y\mid (T, X )$ .", "A, Binary $M$ and Binary $Y$ ; B, Binary $M$ and Continuous $Y$ ; C, Continuous $M$ and Continuous $Y$ ; D, Continuous $M$ and Binary $Y$ .", "I, Assumption ; II, Assumption ; III, Assumption ; IV, Assumption .", "CC, complete case analysis; EM, our proposed Expectation-Maximization algorithm; Oracle, oracle estimators.", "Bias (%), {(estimate-truth)/truth}*100.", "Figure: Simulation results when $M \perp \!\!\!\perp Y\mid (T, X )$ .", "A, Binary $M$ and Binary $Y$ ; B, Binary $M$ and Continuous $Y$ ; C, Continuous $M$ and Continuous $Y$ ; D, Continuous $M$ and Binary $Y$ .", "I, Assumption ; II, Assumption ; III, Assumption ; IV, Assumption .", "CC, complete case analysis; EM, our proposed Expectation-Maximization algorithm; Oracle, oracle estimators.", "(0), $M \perp \!\!\!\perp Y\mid (T, X )$ .", "Bias (%), {(estimate-truth)/truth}*100; Bias (%) for NIE is defined as (estimate/NDE)*100 because NIE equals 0."
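For concreteness, a minimal sketch of the data-generating process for setup A combined with missingness scenario (II) described above (binary $M$ , binary $Y$ , with $R^Y$ depending on $R^M$ ) could look as follows; the coefficient values are illustrative placeholders, not the values reported in the table of parameter specifications.

```python
import numpy as np

rng = np.random.default_rng(2023)
n = 1000  # sample size used in the simulation study

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

# Illustrative placeholder coefficients (hypothetical, not the paper's values).
alpha = dict(a0=-0.2, at=0.8, ax=0.5)                   # mediator model
beta = dict(b0=-0.3, bm=1.0, bt=0.5, bmt=0.4, bx=0.5)   # outcome model
lam = dict(l0=1.5, lm=0.7, lt=0.3, lx=0.2)              # R^M model: depends on M itself (MNAR)
gam = dict(g0=1.2, grm=0.8, gt=0.3, gx=0.2)             # R^Y model under scenario (II): depends on R^M

# Fully observed covariate and randomized treatment.
X = rng.normal(0.0, 1.0, n)
T = rng.binomial(1, 0.5, n)

# Binary mediator and binary outcome (setup A).
M = rng.binomial(1, expit(alpha["a0"] + alpha["at"] * T + alpha["ax"] * X))
Y = rng.binomial(1, expit(beta["b0"] + beta["bm"] * M + beta["bt"] * T
                          + beta["bmt"] * M * T + beta["bx"] * X))

# Missingness indicators: R^M depends on the possibly missing M; R^Y depends on R^M.
RM = rng.binomial(1, expit(lam["l0"] + lam["lm"] * M + lam["lt"] * T + lam["lx"] * X))
RY = rng.binomial(1, expit(gam["g0"] + gam["grm"] * RM + gam["gt"] * T + gam["gx"] * X))

# Observed data set: mask M and Y wherever their missingness indicators are zero.
M_obs = np.where(RM == 1, M.astype(float), np.nan)
Y_obs = np.where(RY == 1, Y.astype(float), np.nan)

print("missing rate in M:", np.mean(RM == 0), "; missing rate in Y:", np.mean(RY == 0))
```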
], [ "Data", "The data describes 8707 eligible applicants in the mid-1990s who lived in the areas selected for in-person interviews at the baseline.", "The subjects were randomized either to the experimental group $(T=1)$ where they could join the Job Corps program soon after randomization, or to the control group $(T=0)$ where they were not provided the Job Corps program for three years [30].", "The mediator ($M$ ) was collected at the 30-months follow-up describing subject's educational and vocational attainment, measured by whether or not the subject obtained an education credential or vocational certificate after randomization.", "We use $M=1$ to denote that an education credential or vocational certificate was obtained, and $M=0$ otherwise.", "The outcome ($Y$ ) was collected at the 48-months follow-up describing the subject's weekly earnings in the fourth year after randomization.", "The measured covariates $ X $ include information on gender, age, race, education level, earnings in the year before participating in the study, whether the subject had a child or not, and whether the subject had ever been arrested or not.", "There are small portions of the missingness exist in covariates $ X $ , including education level (0.6%), earnings levels in the year before participating in the study (6.4%), and whether the subject had ever been arrested or no (3.5%).", "Due to the low missing rates in those categorical covariates, we treat the missingness as another category in the analysis.", "The number of subjects with missing information in the mediator or outcome is nontrivial.", "The missingness patterns in the mediator and the outcome are described in Table REF for the experimental group and the control group.", "We suspect that the missingness may be MNAR in the data.", "Besides the potential impacts of $T$ and $ X $ on the missingness, we have the following concerns: (1) conceivably, people who failed to obtain an education credential or vocational certificate ($M=0$ ) may be less likely to report compared to people who successfully obtained an educational credential or vocational certificate ($M=1$ ), that is, $M$ may have a direct effect on $R^M$ ; (2) we are concerned that people who were unwilling to respond to questions at the 30-months follow-up may also be unwilling to respond at the 48-months follow-up, that is, $R^M$ may have a direct impact on $R^Y$ ; (3) people who had no earnings $(Y=0)$ may be less likely to report their earnings compared to people who had earnings $(Y>0)$ , which results in a direct effect of $Y$ on $R^Y$ ; (4) in addition, $M$ occurs before $R^Y$ , and therefore, may potentially have an impact on people's probability of reporting earnings through channels other than $R^M$ and $Y$ .", "Concerns $(1)$ and $(2)$ can be addressed by the MNAR mechanism under Assumption REF , concerns $(1)$ and $(3)$ can be addressed by the MNAR mechanism under Assumption REF , and concerns $(1)$ and $(4)$ can be addressed by the MNAR mechanism under Assumption REF .", "When incorporating the weekly earnings $Y$ into the prediction of $R^Y$ under MNAR Assumption REF , we assume that it is the binary indicator $\\mathrm {I}(Y>0)$ describing whether $Y$ is positive or not that predicts $R^Y$ .", "This is because of the following two considerations: first, there are excessive zero values of the earnings (14.85%) in the data; second, the identification under MNAR Assumption REF requires the form of $Y$ in predicting $R^Y$ to be binary given that our mediator $M$ in this study is 
binary (see the completeness assumption in Theorem REF ).", "Table: Missingness patterns in the mediator and the outcome" ], [ "Models", "Our outcome $Y$ , weekly earnings, contains many zero values as well as heavily right skewed positive values.", "To address those complications, we adopted two-part models.", "Let $Z_i$ be the binary indicator describing whether the earning is greater than 0 or not, i.e., $Z_i=1$ if $Y_i>0$ and $Z_i=0$ if $Y_i=0$ .", "We used the following logistic regression to model $Z_{i}$ : $\\mathrm {logit}~\\mathbb {P}(Z_i=1\\mid M_i=m,T_i=t, X _i= x ) = \\delta _0+\\delta _m m+\\delta _t t+\\delta _{mt} m \\cdot t+\\delta _x^ \\textsc {t} x .$ Conditioning on $Z_{i}=1$ , we considered two commonly adopted models, Gamma and log-normal models to fit the positively skewed values of earnings.", "As an illustration, we describe the Gamma model here: $Y_i \\mid (Z_i=1, M_i=m, T_i=t, X _i= x ) \\sim \\mathrm {Gamma}\\lbrace \\nu ,\\nu /\\mu _i(m, t, x )\\rbrace $ , where $\\nu $ denotes the shape parameter, $\\nu / \\mu _i(m, t, x )$ is the rate parameter, and the function $\\mu _i(m, t, x )$ is parameterized as $\\mathrm {exp}(\\beta _0+\\beta _m m+\\beta _t t + \\beta _{mt} m\\cdot t+\\beta _x^ \\textsc {t} x ).$ We used a logistic model for $M$ : $\\mathrm {logit}~\\mathbb {P}(M_i=1\\mid T_i=t, X _i= x )=\\alpha _0+\\alpha _{t}t+\\alpha _{x}^ \\textsc {t} x ,$ and a logistic model for $R^M$ allowing an effect from $M$ : $\\mathrm {logit} ~\\mathbb {P}(R^M_i=1\\mid M_i=m, T_i=t, X _i= x ) = \\lambda _0+\\lambda _m m+\\lambda _t t +\\lambda _x^ \\textsc {t} x .$ The model for $R^Y$ varied according to different MNAR assumptions.", "Under MNAR Assumption REF , we specified that $\\mathrm {logit} ~\\mathbb {P}(R^Y_i=1\\mid R^M_i=r^M, T_i=t, X _i= x ) = \\gamma _0+\\gamma _{r^M}r^M+\\gamma _t t +\\gamma _x^ \\textsc {t} x ;$ under MNAR Assumption REF , we specified that $\\mathrm {logit} ~\\mathbb {P}(R^Y_i=1\\mid Z_i=z, T_i=t, X _i= x ) = \\gamma _0+\\gamma _{z} z+\\gamma _t t +\\gamma _x^ \\textsc {t} x ;$ and under MNAR Assumption REF , we adopted that $\\mathrm {logit} ~\\mathbb {P}(R^Y_i=1\\mid M_i=m, T_i=t, X _i= x ) = \\gamma _0+\\gamma _{m} m+\\gamma _t t +\\gamma _x^ \\textsc {t} x .$" ], [ "Results", "We compared the performance and results using two-part Gamma and two-part log-normal models for the outcome under MNAR Assumptions REF , REF and REF .", "Table REF presents the log-likelihoods evaluated at the MLEs for those six models.", "Since all six models have the same numbers of parameters, we chose the two-part Gamma model and MNAR mechanism under Assumption REF allowing an impact on $R^M$ from $M$ and an impact on $R^Y$ from $R^M$ , besides the impacts from $T$ and $ X $ .", "Below we describe the results based on the two-part Gamma model under MNAR Assumption REF .", "Table: Model comparison among models under MNAR Assumptions , and using two-part Gamma and two-part log-normal models for the outcome.", "The log-likelihoods are evaluated at the MLEs.The coefficient of $M$ in the $R^M$ model is estimated to be $1.73$ with $95\\%$ CI $(0.34,~3.33)$ - see $\\lambda _m$ in Table REF , which suggests that people who had acquired certificates were more likely to report their certificate status.", "The coefficient of $R^M$ in the $R^Y$ model is estimated to be $1.87$ with $95\\%$ CI $(1.76,~2.00)$ - see $\\gamma _{r^M}$ in Table REF , which suggests that people who were willing to report their certificate status were also more likely to report their earnings.", 
"The natural indirect effect is estimated to be $10.94$ with $95\\%$ CI $(7.94,~14.29)$ - see $\\mathrm {NIE}$ in Table REF , which indicates that there was a significant indirect effect of the program assignment on weekly earnings through an educational credential or vocational certificate at the $0.05$ significance level.", "The natural direct effect is estimated to be $12.93$ with $95\\%$ CI $(-1.95,~27.64)$ - see NDE in Table REF , which indicates that there was no significant direct effect of the program assignment on weekly earnings at the $0.05$ significance level.", "The causal conclusions regarding the indirect effect (NIE) and the direct effect (NDE) are the same between the complete case analysis and our methods, in spite of the significant effect of $M \\rightarrow R^M$ ($\\lambda _{m}$ ) and the significant effect of $R^M \\rightarrow R^Y$ ($\\gamma _{r^M}$ ).", "Table: Data analysis results from the Gamma model under MNAR Assumption .", "Est, estimate; SE, standard error based on 500 bootstrap samples; CI, confidence interval; λ m \\lambda _m, coefficient of MM in the R M R^M model; γ r M \\gamma _{r^M}, coefficient of R M R^M in the R Y R^Y model." ], [ "Sensitivity analysis", "We consider the Gamma model under MNAR Assumption REF from the data analysis as a starting model for building the sensitivity analysis.", "It is possible that the missingness of earnings also depends on the value of earnings itself and/or the educational and vocational attainment, in addition to the missingness of the educational and vocational attainment.", "The goal is to access the sensitivity of our causal conclusions to the additional impacts on $R^Y$ from $Z$ and/or $M$ .", "The revised model for $R^Y$ is as follows: $&&\\mathrm {logit}~\\mathbb {P}(R^Y_i=1\\mid R^M_i=r^M, Z_i=z, M_i=m, T_i=t, X _i= x )\\\\&=&\\gamma _0+\\gamma _{r^M} r^M +\\gamma _z z+\\gamma _m m +\\gamma _t t +\\gamma _x^ \\textsc {t} x,$ where $\\gamma _z$ and $\\gamma _m$ are the sensitivity parameters.", "We consider a large effect in the log odds ratio [3] and let both sensitivity parameters to vary among $-2$ , 0 and 2.", "When $\\gamma _z=0$ and $\\gamma _m=0$ , it is the same as the MNAR mechanism under Assumption REF that stands out in the data analysis.", "Figure: The direct acyclic graph describing the missing data mechanism for the sensitivity analysis.", "The graph conditions on X X .The results for the sensitivity analysis are presented in Table REF .", "The NIE estimate increases more than 10% in the case where $\\gamma _m=-2$ and $\\gamma _z=2$ , and the case where $\\gamma _m=0$ and $\\gamma _z=2$ .", "The NDE estimate decreases more than 10% in the case where $\\gamma _m=0$ and $\\gamma _z=2$ , and increases more than 10% in the case where $\\gamma _m=2$ and $\\gamma _z=2$ .", "However, the NIEs are estimated to be positive and significant at the $0.05$ level and the NDEs are estimated to be positive but not significant at the $0.05$ level, for all pairs of values $(\\gamma _m, \\gamma _z)$ considered.", "In summary, the causal conclusions on the NIE and NDE are not sensitive to a strong impact of $Z \\rightarrow R^Y$ and/or $M \\rightarrow R^Y$ in addition to the impact of $R^M$ on $R^Y$ .", "Table: Sensitivity analysis results from the Gamma model.", "Est, estimate; CI, confidence interval based on 500 bootstrap samples; λ m \\lambda _m, coefficient of MM in the R M R^M model; γ r M \\gamma _{r^M}, coefficient of R M R^M in the R Y R^Y model; γ z \\gamma _z (sensitivity parameter), coefficient of ZZ in the 
$R^Y$ model; $\gamma _m$ (sensitivity parameter), coefficient of $M$ in the $R^Y$ model." ], [ "Discussion", "Mediation analysis plays an important role in studying the underlying biological or social mechanisms through which an exposure exerts its effect.", "When the mediator and outcome are missing not at random, the evaluation of the NIE and NDE can be very challenging.", "Motivated by the NJCS, in scenarios where it is reasonable to assume that future variables do not have an impact on variables measured in the past, we explored the identification of the NIE and NDE under some interpretable missing not at random assumptions.", "The condition of completeness plays an important role in achieving nonparametric identification.", "We proved that the joint distribution $\mathbb {P}(Y,M\mid T, X )$ can be identified when the missingness of $M$ may depend on the value of $M$ itself and when $Y$ is either fully observed or the missingness of $Y$ depends on only one of $(R^M, Y, M)$ in addition to $T$ and $ X $ .", "In the special case where $M\perp \!\!\!\perp Y\mid T, X $ , which violates the completeness requirement, although $\mathbb {P}(M\mid T, X )$ is no longer identifiable, the NIE and NDE can still be identified except when both the missingness of $M$ and the missingness of $Y$ depend on their missing values themselves (i.e., MNAR Assumption REF ).", "Based on the comparison of likelihoods, the MNAR mechanism under Assumption REF is more plausible than the other MNAR assumptions in the NJCS study.", "The strong association between $R^M$ and $R^Y$ in the NJCS study might be a result of both being affected by the subjects' unmeasured tendency or willingness to respond to interview questions.", "Further, to address the concern that the missingness of $Y$ may also depend on $Y$ itself and $M$ in addition to $R^M$ , we proposed a sensitivity analysis approach and found that our conclusions on the NIE and NDE in the NJCS study are not sensitive to some large impacts on $R^Y$ from $Y$ and $M$ .", "Several limitations of this work are to be addressed in future research.", "First, in the current work, we focused on the identification of the NIE and NDE when the mediator and outcome are MNAR.", "In practice, prevalent missingness may exist in both the mediator/outcome and the measured covariates, and the missingness of covariates may also be MNAR.", "Identifiability in this more complicated scenario is to be explored.", "Second, the NIE and NDE evaluated in the NJCS study should be interpreted as the effects of the program assignment instead of the effects of the program.", "According to [31], although control group subjects were barred from participating in the Job Corps program, about $30\%$ of the program group subjects failed to participate in the program.", "One possibility to address this one-sided noncompliance issue is to consider the random program assignment as an instrumental variable and incorporate the instrumental variable into causal mediation analysis [34], [10], [24] for the latent subgroup of compliers, i.e., those who would comply with their treatment assignment whether assigned to the treatment or the control group.", "Third, we considered a setting with a single mediator.", "In practice, cases with multiple mediators may arise, and the potential missingness mechanisms may vary depending on whether those mediators have a sequential or parallel relationship, among other considerations.", "We leave this topic for future research.", "Fourth,
the results in this paper invoke the sequential ignorability assumption.", "In the NJCS study, although the ignorability of the treatment assignment is guaranteed by the study design, however, the ignorability of the mediator is not guaranteed and could be violated if there exists post-treatment or unmeasured pretreatment confounding.", "One approach to partly address this issue is to conduct sensitivity analysis to quantify the potential bias due to the departure from the sequential ignorability assumption [33], [12].", "SUPPLEMENTARY MATERIAL Section gives the proofs of the theorems.", "Section gives counterexamples for the unidentifiable cases.", "Section provides parametric estimation details." ], [ "Proof of Theorem ", "The identification of $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x )$ follows from $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x ) = \\mathbb {P}(Y=y\\mid R^M = 1, M=m, T=t, X = x )\\nonumber .$ We will focus on the identification of $\\mathbb {P}(M=m\\mid T=t, X = x )$ .", "Define $\\mathbb {P}_{my1|t, x} &=& \\mathbb {P}(M=m, Y=y, R^M=1\\mid T=t, X = x ), \\\\\\mathbb {P}_{+y0\\mid t,x} &=& \\mathbb {P}(Y=y, R^M=0\\mid T=t, X = x ), \\\\\\zeta _{t,x}(m) &=& \\frac{\\mathbb {P}(R^M=0\\mid M=m, T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )}.$ We have $\\mathbb {P}_{my1\\mid t, x} = \\mathbb {P}(M=m, Y=y\\mid T=t, X = x )\\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )$ and $\\mathbb {P}_{+y0\\mid t,x} &= \\int _{m\\in \\mathcal {M}} \\mathbb {P}(M=m, Y=y, R^M=0\\mid T=t, X = x )dm \\nonumber \\\\&=\\int _{m\\in \\mathcal {M}} \\mathbb {P}(M=m, Y=y\\mid T=t, X = x ) \\mathbb {P}(R^M=0\\mid M=m, T=t, X = x )dm \\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my1\\mid t, x} \\frac{\\mathbb {P}(R^M=0\\mid M=m, T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )}dm \\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my1\\mid t, x} \\zeta _{t,x}(m)dm.", "\\nonumber $ The existence and uniqueness of solutions $\\zeta _{t,x}(m)$ require that $\\mathbb {P}(Y,M,R^M=1 \\mid T=t, X=x )$ is complete in $Y$ .", "For discrete $M$ and discrete $Y$ , the completeness assumption is equivalent to Rank $(\\Theta _{tx})=J$ , where $\\Theta _{tx}$ is a $J \\times K$ matrix with $\\mathbb {P}_{my1\\mid t, x}$ as the $(m,y)$ th element.", "For binary $M$ , the rank condition further reduces to $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ , which is equivalent to the testable condition $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X ,R^M=1)$ .", "For continuous $M$ and continuous $Y$ , the dimension of $Y$ needs to be no smaller than the dimension of $M$ in general as required by the completeness assumption.", "We can subsequently identify $\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )$ once $\\zeta _{t,x}(m)$ is identified.", "Then $&&\\mathbb {P}(M=m\\mid T=t, X = x ) \\\\&=&\\frac{\\mathbb {P}(M=m, Y=y\\mid T=t, X = x )}{\\mathbb {P}(Y=y\\mid M=m,T=t, X = x )} \\\\&=& \\frac{\\mathbb {P}_{my1\\mid t, x}}{\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x ) \\mathbb {P}(Y=y\\mid M=m,T=t, X = x )}.$" ], [ "Proof of Theorem ", "The identification of $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x )$ follows from $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x ) = \\mathbb {P}(Y=y\\mid R^M=1, R^Y=1, M=m, T=t, X = x ).\\nonumber $ We will focus on the identification of $\\mathbb {P}(M=m\\mid T=t, X = x )$ .", "Define $\\mathbb {P}_{my11\\mid t,x} &=& \\mathbb {P}(M=m, Y=y, R^M=1, R^Y=1\\mid T=t, X = x ), \\\\\\mathbb {P}_{+y01\\mid t,x} &=& \\mathbb {P}(Y=y, R^M=0, R^Y=1\\mid T=t, X = x ), \\\\\\zeta 
_{t,x}(m)&=&\\frac{\\mathbb {P}(R^M=0\\mid M=m,T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )}.$ We have $\\mathbb {P}_{my11\\mid t,x} &=& \\mathbb {P}(M=m, Y=y\\mid T=t, X = x ) \\\\&& \\cdot \\mathbb {P}(R^Y=1\\mid R^M=1,T=t, X = x )\\cdot \\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )$ and $\\mathbb {P}_{+y01\\mid t,x} &= \\int _{m\\in \\mathcal {M}} \\mathbb {P}(M=m, Y=y, R^M=0, R^Y=1\\mid T=t, X = x )dm\\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my11\\mid t,x} \\frac{\\mathbb {P}(R^Y=1\\mid R^M=0,T=t, X = x )\\mathbb {P}(R^M=0\\mid M=m,T=t, X = x )}{\\mathbb {P}(R^Y=1\\mid R^M=1,T=t, X = x )\\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )}dm\\nonumber \\\\&=\\frac{\\mathbb {P}(R^Y=1\\mid R^M=0,T=t, X = x )}{\\mathbb {P}(R^Y=1\\mid R^M=1,T=t, X = x )}\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my11\\mid t,x} \\zeta _{t,x}(m)dm.", "\\nonumber $ The existence and uniqueness of solutions $\\zeta _{t,x}(m)$ require that $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X=x )$ is complete in $Y$ .", "For discrete $M$ and discrete $Y$ , the completeness assumption is equivalent to Rank $(\\Theta _{tx})=J$ , where $\\Theta _{tx}$ is a $J \\times K$ matrix with $\\mathbb {P}_{my11\\mid t,x}$ as the $(m,y)$ th element.", "For binary $M$ , the rank condition further reduces to $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ , which is equivalent to the testable condition $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X ,R^M=1,R^Y=1)$ .", "For continuous $M$ and continuous $Y$ , the dimension of $Y$ needs to be no smaller than the dimension of $M$ in general as required by the completeness assumption.", "We can subsequently identify $\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )$ once $\\zeta _{t,x}(m)$ is identified.", "Then $\\mathbb {P}(M=m\\mid T=t, X = x )&=\\frac{\\mathbb {P}(M=m, Y=y\\mid T=t, X = x )}{\\mathbb {P}(Y=y\\mid M=m,T=t, X = x )}\\nonumber \\\\&= \\frac{\\mathbb {P}_{my11\\mid t,x}}{\\mathbb {P}(Y=y, R^Y=1, R^M=1\\mid M=m,T=t, X = x )}.\\nonumber $" ], [ "Proof of Theorem ", "We will discuss the identification of $\\mathbb {P}(M=m,Y=y\\mid T=t, X = x )$ .", "Define $\\mathbb {P}_{my11\\mid t,x} &=& \\mathbb {P}(M=m, Y=y, R^M=1, R^Y=1\\mid T=t, X = x ), \\\\\\mathbb {P}_{+y01\\mid t,x} &=& \\mathbb {P}(Y=y, R^M=0, R^Y=1\\mid T=t, X = x ), \\\\\\mathbb {P}_{m+10\\mid t,x} &=& \\mathbb {P}(M=m, R^M=1, R^Y=0\\mid T=t, X = x ), \\\\\\zeta _{t,x}(m)&=&\\frac{\\mathbb {P}(R^M=0\\mid M=m,T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )},\\\\\\eta _{t,x}(y)&=&\\frac{\\mathbb {P}(R^Y=0\\mid Y=y,T=t, X = x )}{\\mathbb {P}(R^Y=1\\mid Y=y,T=t, X = x )}.$ We first have $\\mathbb {P}_{my11|t,x}&=& \\mathbb {P}(M=m, Y=y\\mid T=t, X = x ) \\\\&&\\cdot \\mathbb {P}(R^Y=1\\mid Y=y,T=t, X = x )\\cdot \\mathbb {P}(R^M=1\\mid M=m,T=t, X = x ) .$ We then have $\\mathbb {P}_{+y01\\mid t,x} &= \\int _{m\\in \\mathcal {M}} \\mathbb {P}(M=m, Y=y, R^M=0, R^Y=1\\mid T=t, X = x )dm\\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my11\\mid t,x}\\frac{\\mathbb {P}(R^M=0\\mid M=m,T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )}dm\\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my11\\mid t,x} \\zeta _{t,x}(m)dm \\nonumber $ and $\\mathbb {P}_{m+10\\mid t,x} &= \\int _{y\\in \\mathcal {Y}}\\mathbb {P}(M=m, Y=y, R^M=1, R^Y=0\\mid T=t, X = x )dy\\nonumber \\\\&=\\int _{y\\in \\mathcal {Y}}\\mathbb {P}_{my11\\mid t,x} \\frac{\\mathbb {P}(R^Y=0\\mid Y=y,T=t, X = x )}{\\mathbb {P}(R^Y=1\\mid Y=y,T=t, X = x )}dy\\nonumber \\\\&=\\int _{y\\in \\mathcal {Y}}\\mathbb {P}_{my11\\mid t,x} \\eta 
_{t,x}(y)dy.", "\\nonumber $ The existence and uniqueness of solutions $\\zeta _{t,x}(m)$ require that $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X=x )$ is complete in $Y$ , and the existence and uniqueness of solutions $\\eta _{t,x}(y)$ require that $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X=x )$ is complete in $M$ .", "For discrete $M$ and discrete $Y$ , the above completeness assumptions are equivalent to Rank $(\\Theta _{tx})=J$ , where $\\Theta _{tx}$ is a $J \\times J$ matrix with $\\mathbb {P}_{my11\\mid t,x}$ as the $(m,y)$ th element.", "For binary $M$ and binary $Y$ , the rank condition reduces to $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ .", "For continuous $M$ and continuous $Y$ , the dimension of $Y$ needs to be the same as the dimension of $M$ in general as required by $\\mathbb {P}(Y,M,R^Y=1,R^M=1\\mid T, X )$ being complete in $M$ and being complete in $Y$ .", "We can subsequently identify $\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )$ and $\\mathbb {P}(R^Y=1\\mid Y=y, T=t, X = x )$ once $\\zeta _{t,x}(m)$ and $\\eta _{t,x}(y)$ are identified.", "Then $&&\\mathbb {P}(Y=y, M=m\\mid T=t, X = x )\\nonumber \\\\&=&\\frac{\\mathbb {P}_{my11\\mid t,x}}{\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )\\mathbb {P}(R^Y=1\\mid Y=y, T=t, X = x )}.\\nonumber $" ], [ "Under condition $(i)$", "The identification of $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x )$ follows from $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x ) = \\mathbb {P}(Y=y\\mid R^M = 1, R^Y=1, M=m, T=t, X = x )\\nonumber .$ We will focus on the identification of $\\mathbb {P}(M=m\\mid T=t, X = x )$ .", "Define $\\mathbb {P}_{my11\\mid t,x} &=& \\mathbb {P}(M=m, Y=y, R^M=1, R^Y=1\\mid T=t, X = x ), \\\\\\mathbb {P}_{+y01\\mid t,x} &=& \\mathbb {P}(Y=y, R^M=0, R^Y=1\\mid T=t, X = x ), \\\\\\mathbb {P}_{m+10\\mid t,x} &=& \\mathbb {P}(M=m, R^M=1, R^Y=0\\mid T=t, X = x ), \\\\\\mathbb {P}_{m+11\\mid t,x} &=& \\mathbb {P}(M=m, R^M=1, R^Y=1\\mid T=t, X = x ), \\\\\\zeta _{t,x}(m)&=&\\frac{\\mathbb {P}(R^M=0\\mid M=m,T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )}, \\\\\\eta _{t,x}(m)&=&\\frac{\\mathbb {P}(R^Y=0\\mid M=m,T=t, X = x )}{\\mathbb {P}(R^Y=1\\mid M=m,T=t, X = x )}.$ We first have $\\mathbb {P}_{my11\\mid t,x} &=& \\mathbb {P}(M=m, Y=y \\mid T=t, X = x ) \\\\&&\\cdot \\mathbb {P}(R^Y=1\\mid M=m, T=t, X = x )\\cdot \\mathbb {P}(R^M=1\\mid M=m, T=t, X = x ).$ We then have $\\mathbb {P}_{+y01\\mid t,x} &= \\int _{m\\in \\mathcal {M}} \\mathbb {P}(M=m, Y=y, R^M=0, R^Y=1\\mid T=t, X = x )dm\\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my11\\mid t,x} \\frac{\\mathbb {P}(R^M=0\\mid M=m, T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )}dm\\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{my11\\mid t,x} \\zeta _{t,x}(m)dm \\nonumber $ and $\\mathbb {P}_{m+10\\mid t,x} &= \\int _{y\\in \\mathcal {Y}} \\mathbb {P}(M=m, Y=y, R^M=1, R^Y=0\\mid T=t, X = x )dy\\nonumber \\\\&=\\int _{y\\in \\mathcal {Y}}\\mathbb {P}_{my11\\mid t,x} \\frac{\\mathbb {P}(R^Y=0\\mid M=m, T=t, X = x )}{\\mathbb {P}(R^Y=1\\mid M=m, T=t, X = x )}dy\\nonumber \\\\&=\\eta _{t,x}(m)\\int _{y\\in \\mathcal {Y}}\\mathbb {P}_{my11\\mid t,x}dy \\nonumber \\\\&=\\eta _{t,x}(m)\\mathbb {P}_{m+11\\mid t,x}.", "\\nonumber $ Solve $\\eta _{t,x}(m)$ as $\\eta _{t,x}(m) = \\mathbb {P}_{m+10\\mid t,x} / \\mathbb {P}_{m+11\\mid t,x} .$ When the joint distribution of $\\mathbb {P}(Y,M,R^M=1,R^Y=1 \\mid T=t, X=x )$ is complete in $Y$ , the existence and uniqueness of solutions $\\zeta _{t,x}(m)$ can be achieved.", "For discrete $M$ and discrete $Y$ , 
this completeness assumption is equivalent to Rank $(\\Theta _{tx})=J$ , where $\\Theta _{tx}$ is a $J \\times K$ matrix with $\\mathbb {P}_{my11\\mid t,x}$ as the $(m,y)$ th element.", "For binary $M$ , the rank condition further reduces to $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X )$ , which is equivalent to the testable condition $M \\lnot \\perp \\!\\!\\!\\perp Y\\mid (T, X ,R^M=1,R^Y=1)$ .", "For continuous $M$ and continuous $Y$ , the dimension of $Y$ needs to be no smaller than the dimension of $M$ in general as required by the completeness assumption.", "We can subsequently identify $\\mathbb {P}(R^Y=1\\mid M=m, T=t, X = x )$ and $\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )$ once $\\zeta _{t,x}(m)$ and $\\eta _{t,x}(m)$ are identified.", "Then $\\mathbb {P}(M=m\\mid T=t, X = x )& = \\frac{\\mathbb {P}(M=m, Y=y\\mid T=t, X = x )}{\\mathbb {P}(Y=y\\mid M=m,T=t, X = x )} \\nonumber \\\\&= \\frac{\\mathbb {P}_{my11\\mid t,x}}{\\mathbb {P}(Y=y,R^M=1,R^Y=1\\mid M=m,T=t, X = x )}\\nonumber .$ The identification of $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x )$ follows from $\\mathbb {P}(Y=y\\mid M=m, T=t, X = x ) = \\mathbb {P}(Y=y\\mid R^M=1, R^Y=1, M=m, T=t, X = x ),$ and the identification of $\\mathbb {P}(R^Y=r^Y\\mid M=m, T=t, X = x )$ follows from $\\mathbb {P}(R^Y=r^Y\\mid M=m, T=t, X = x ) = \\mathbb {P}(R^Y=r^Y\\mid R^M=1,M=m,T=t, X = x ) .$ We will focus on the identification of $\\mathbb {P}(M=m\\mid T=t, X = x )$ .", "Define $\\mathbb {P}_{m1r^Y|t,x} &=& \\mathbb {P}(M=m,R^M=1,R^Y=r^Y\\mid T=t, X = x ), \\\\\\mathbb {P}_{+0r^Y\\mid t,x} &=& \\mathbb {P}(R^M=0,R^Y=r^Y\\mid T=t, X = x ), \\\\\\zeta _{t,x}(m) &=& \\frac{\\mathbb {P}(R^M=0\\mid M=m, T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )}.$ We have $\\mathbb {P}_{m1r^Y\\mid t,x} = \\mathbb {P}(M=m,R^Y=r^Y\\mid T=t, X = x )\\mathbb {P}(R^M=1\\mid M=m,T=t, X = x )$ and $\\mathbb {P}_{+0r^Y\\mid t,x} &= \\int _{m\\in \\mathcal {M}} \\mathbb {P}(M=m, R^M=0, R^Y=r^Y\\mid T=t, X = x )dm \\nonumber \\\\&=\\int _{m\\in \\mathcal {M}} \\mathbb {P}_{m1r^Y|t,x} \\frac{\\mathbb {P}(R^M=0\\mid M=m, T=t, X = x )}{\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )}dm \\nonumber \\\\&=\\int _{m\\in \\mathcal {M}}\\mathbb {P}_{m1r^Y|t,x} \\zeta _{t,x}(m)dm.", "\\nonumber $ When the joint distribution of $\\mathbb {P}(M,R^M=1,R^Y \\mid T=t, X=x )$ is complete in $R^Y$ ,the existence and uniqueness of solutions $\\zeta _{t,x}(m)$ can be achieved.", "This requires that $M$ is binary since $R^Y$ is binary, and the completeness assumption reduces to $M \\lnot \\perp \\!\\!\\!\\perp R^Y\\mid (T, X )$ , which is equivalent to the testable condition $M \\lnot \\perp \\!\\!\\!\\perp R^Y\\mid (T, X ,R^M=1)$ .", "We can subsequently identify $\\mathbb {P}(R^M=1\\mid M=m, T=t, X = x )$ once $\\zeta _{t,x}(m)$ is identified.", "Then $\\mathbb {P}(M=m\\mid T=t, X = x )\\nonumber &=\\frac{\\mathbb {P}(M=m, R^Y=r^Y\\mid T=t, X = x )}{\\mathbb {P}(R^Y=r^Y\\mid M=m,T=t, X = x )} \\nonumber \\\\&= \\frac{\\mathbb {P}_{m1r^Y\\mid t,x}}{\\mathbb {P}(R^M=1,R^Y=r^Y\\mid M=m, T=t, X = x )}.\\nonumber $" ], [ "Counterexamples for the Unidentifiable Cases", "Consider a binary mediator $M$ and binary outcome $Y$ .", "All graphs and probabilities below are conditioning on $T$ and $ X $ .", "Define $\\mathbb {P}_{my11} &=& \\mathbb {P}(M=m, Y=y, R^M=1, R^Y=1), \\\\\\mathbb {P}_{+y01} &=& \\mathbb {P}(Y=y, R^M=0, R^Y=1),\\\\\\mathbb {P}_{m+10} &=& \\mathbb {P}(M=m, R^M=1, R^Y=0), \\\\\\mathbb {P}_{++00} &=& \\mathbb {P}(R^M=0, R^Y=0).$ Based on the observable data 
probabilities, we can directly identify $\\mathbb {P}_{my11}$ , $\\mathbb {P}_{+y01}$ , $\\mathbb {P}_{m+10}$ and $\\mathbb {P}_{++00}$ , and $\\sum ^{1}_{m=0}\\sum ^{1}_{y=0}\\mathbb {P}_{my11}+\\sum _{y=0}^1 \\mathbb {P}_{+y01}+\\sum _{m=0}^1 \\mathbb {P}_{m+10}+ \\mathbb {P}_{++00}=1$ ." ], [ "$(i)$ We present below an unidentified case when {{formula:ca2bf032-22fe-4901-8a91-776f1be6e9e3}} depends on both {{formula:8fa07d9e-4c56-4243-bc2f-9023ccaf307d}} and {{formula:975b0000-5a92-4422-a27b-0023e2a95cb3}} as described by Figure ", "Consider the following observable data probabilities: $(\\mathbb {P}_{1111},\\mathbb {P}_{0111},\\mathbb {P}_{1011},\\mathbb {P}_{0011},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{++00})=\\left(\\frac{3}{10},\\frac{1}{10},\\frac{1}{20},\\frac{1}{20},\\frac{1}{10},\\frac{1}{20},\\frac{1}{10},\\frac{1}{20},\\frac{1}{5} \\right)\\nonumber .$ The key to identify $\\mathbb {P}(Y=y, M=m)$ is to identify both $\\mathbb {P}(R^M=1\\mid M=m)$ and $\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)$ in the following formula $\\mathbb {P}(Y=y, M=m)=\\frac{\\mathbb {P}_{my11}}{\\mathbb {P}(R^M=1\\mid M=m)\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)}.\\nonumber $ We now show that the identification of $\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)$ can be achieved.", "This is because $\\mathbb {P}_{m+10} &= \\sum _{y\\in \\mathcal {Y}}\\mathbb {P}(M=m, Y=y, R^M=1, R^Y=0)\\nonumber \\\\&=\\sum _{y\\in \\mathcal {Y}}\\mathbb {P}_{my11}\\frac{\\mathbb {P}(R^Y=0\\mid Y=y,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)}.\\nonumber $ By plugging in the two possible values for $m$ and $y$ in the above formula, we have $\\mathbb {P}_{1+10} &= \\mathbb {P}_{1111}\\frac{\\mathbb {P}(R^Y=0\\mid Y=1,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=1,R^M=1)}+\\mathbb {P}_{1011}\\frac{\\mathbb {P}(R^Y=0\\mid Y=0,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=0,R^M=1)}$ and $\\mathbb {P}_{0+10} &= \\mathbb {P}_{0111}\\frac{\\mathbb {P}(R^Y=0\\mid Y=1,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=1,R^M=1)}+\\mathbb {P}_{0011}\\frac{\\mathbb {P}(R^Y=0\\mid Y=0,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=0,R^M=1)}.$ And therefore, $\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)$ can be identified by solving the linear equations (REF ) and (REF ).", "Based on the observable data probabilities, $\\mathbb {P}(R^Y=1\\mid Y=1,R^M=1)=\\frac{4}{5}$ and $\\mathbb {P}(R^Y=1\\mid Y=0,R^M=1)=\\frac{2}{3}$ .", "We now focus on the identifiability of $\\mathbb {P}(R^M=1\\mid M=m)$ and show that $\\mathbb {P}(R^M=1\\mid M=m)$ cannot be identified without further assumptions.", "We have $\\mathbb {P}_{+y01} &= \\sum _{m\\in \\mathcal {M}} \\mathbb {P}(M=m, Y=y, R^M=0, R^Y=1)\\nonumber \\\\&=\\sum _{m\\in \\mathcal {M}}\\mathbb {P}_{my11}\\frac{\\mathbb {P}(R^Y=1\\mid Y=y,R^M=0)\\mathbb {P}(R^M=0\\mid M=m)}{\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)\\mathbb {P}(R^M=1\\mid M=m)}\\nonumber \\\\&= \\frac{\\mathbb {P}(R^Y=1\\mid Y=y,R^M=0)}{\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)} \\sum _{m\\in \\mathcal {M}}\\mathbb {P}_{my11}\\frac{\\mathbb {P}(R^M=0\\mid M=m)}{\\mathbb {P}(R^M=1\\mid M=m)},\\nonumber $ and as a result, $\\mathbb {P}_{+y01}\\frac{\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=y,R^M=0)} &= \\sum _{m\\in \\mathcal {M}}\\mathbb {P}_{my11}\\frac{\\mathbb {P}(R^M=0\\mid M=m)}{\\mathbb {P}(R^M=1\\mid M=m)}.\\nonumber $ By plugging in the two possible values for $m$ and $y$ in the above formula, we have $\\mathbb {P}_{+101}\\frac{\\mathbb {P}(R^Y=1\\mid Y=1,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=1,R^M=0)}&= \\mathbb {P}_{1111}\\frac{\\mathbb 
{P}(R^M=0\\mid M=1)}{\\mathbb {P}(R^M=1\\mid M=1)}\\nonumber +\\mathbb {P}_{0111}\\frac{\\mathbb {P}(R^M=0\\mid M=0)}{\\mathbb {P}(R^M=1\\mid M=0)}\\nonumber $ and $\\mathbb {P}_{+001}\\frac{\\mathbb {P}(R^Y=1\\mid Y=0,R^M=1)}{\\mathbb {P}(R^Y=1\\mid Y=0,R^M=0)}&= \\mathbb {P}_{1011}\\frac{\\mathbb {P}(R^M=0\\mid M=1)}{\\mathbb {P}(R^M=1\\mid M=1)}\\nonumber +\\mathbb {P}_{0011}\\frac{\\mathbb {P}(R^M=0\\mid M=0)}{\\mathbb {P}(R^M=1\\mid M=0)}.\\nonumber $ Since $\\mathbb {P}(R^Y=1\\mid Y=y,R^M=1)$ are identified from the previous step, the identifiability of $\\mathbb {P}(R^M=1 \\mid M=m)$ depends on the identifiability of $\\mathbb {P}(R^Y=1\\mid Y=y,R^M=0)$ .", "We have $&&\\mathbb {P}(R^Y=1\\mid Y=y,R^M=0)\\nonumber \\\\&=& \\frac{\\mathbb {P}(Y=y,R^M=0,R^Y=1)}{\\mathbb {P}(Y=y,R^M=0)}\\nonumber \\\\&=& \\frac{\\mathbb {P}(Y=y,R^M=0,R^Y=1)}{\\mathbb {P}(Y=y,R^M=0,R^Y=0)+\\mathbb {P}(Y=y,R^M=0,R^Y=1)}\\nonumber \\\\&=&\\frac{\\mathbb {P}(Y=y,R^M=0,R^Y=1)}{\\mathbb {P}(Y=y \\mid R^M=0, R^Y=0)\\mathbb {P}(R^M=0,R^Y=0)+\\mathbb {P}(Y=y, R^M=0,R^Y=1)}\\nonumber \\\\&=&\\frac{\\mathbb {P}_{+y01}}{\\mathbb {P}(Y=y \\mid R^M=0, R^Y=0)\\mathbb {P}_{++00}+\\mathbb {P}_{+y01}}.\\nonumber $ In the above expression, $\\mathbb {P}_{+y01}$ and $\\mathbb {P}_{++00}$ are known.", "$\\mathbb {P}(Y=y \\mid R^Y=0,R^M=0)$ is not observable or identifiable, and can take any value between 0 and 1.", "Different values of $\\mathbb {P}(Y=y \\mid R^Y=0,R^M=0)$ will result in different values of $\\mathbb {P}(R^M=1 \\mid M=m)$ , which in turn will give different values of $\\mathbb {P}(Y=y, M=m)$ .", "For example, let $\\mathbb {P}(Y=1 \\mid R^Y=0,R^M=0)=\\frac{5}{6}$ , and the corresponding $\\mathbb {P}(R^Y=1\\mid Y=1,R^M=0)$ and $\\mathbb {P}(R^Y=1\\mid Y=0,R^M=0)$ equal $\\frac{3}{8}$ and $\\frac{3}{5}$ , respectively.", "Subsequently, we have $\\mathbb {P}(Y=1,M=1)=\\frac{17}{30}$ , $\\mathbb {P}(Y=0,M=1)=\\frac{17}{150}$ , $\\mathbb {P}(Y=1,M=0)=\\frac{1}{5}$ and $\\mathbb {P}(Y=0,M=0)=\\frac{3}{25}$ .", "Alternatively, let $\\mathbb {P}(Y=1 \\mid R^Y=0,R^M=0)=\\frac{7}{8}$ , and the corresponding $\\mathbb {P}(R^Y=1\\mid Y=1,R^M=0)$ and $\\mathbb {P}(R^Y=1\\mid Y=0,R^M=0)$ equal $\\frac{4}{11}$ and $\\frac{2}{3}$ , respectively.", "Subsequently, we have $\\mathbb {P}(Y=1,M=1)=\\frac{3}{5}$ , $\\mathbb {P}(Y=0,M=1)=\\frac{3}{25}$ , $\\mathbb {P}(Y=1,M=0)=\\frac{7}{40}$ and $\\mathbb {P}(Y=0,M=0)=\\frac{21}{200}$ .", "The two sets of values of $\\mathbb {P}(Y=y, M=m)$ correspond to the same observable data probabilities: $(\\mathbb {P}_{1111},\\mathbb {P}_{0111},\\mathbb {P}_{1011},\\mathbb {P}_{0011},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{++00})$ , and therefore, $\\mathbb {P}(Y=y, M=m)$ can not be uniquely identified without further assumptions.", "Consider the following observable data probabilities: $(\\mathbb {P}_{1111},\\mathbb {P}_{0111},\\mathbb {P}_{1011},\\mathbb {P}_{0011},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{++00})=\\left(\\frac{3}{10},\\frac{1}{10},\\frac{1}{20},\\frac{1}{10},\\frac{1}{10},\\frac{1}{40},\\frac{1}{10},\\frac{1}{10},\\frac{5}{40}\\right)\\nonumber .$ Define $&\\mathbb {P}_M=\\mathbb {P}(M=1),\\\\&\\mathbb {P}_{Y1}=\\mathbb {P}(Y=1\\mid M=1),\\\\&\\mathbb {P}_{Y0}=\\mathbb {P}(Y=1\\mid M=0),\\\\&\\mathbb {P}_{R^M1}=\\mathbb {P}(R^M=1\\mid M=1),\\\\&\\mathbb {P}_{R^M0}=\\mathbb {P}(R^M=1\\mid M=0),\\\\&\\mathbb {P}_{R^Y00}=\\mathbb {P}(R^Y=1\\mid M=0,R^M=0),\\\\&\\mathbb 
{P}_{R^Y01}=\\mathbb {P}(R^Y=1\\mid M=0,R^M=1),\\\\&\\mathbb {P}_{R^Y10}=\\mathbb {P}(R^Y=1\\mid M=1,R^M=0),\\\\&\\mathbb {P}_{R^Y11}=\\mathbb {P}(R^Y=1\\mid M=1,R^M=1).$ Below we study the identifiablity of the above 9 parameters describing the graphical model in Figure REF based on the observable data probabilities.", "Note that although there are 9 observable data probabilities, the degree of freedom in the probabilities is only 8 given that they sum up to 1.", "The following relationships between the observable data probabilities and the parameters hold, $&\\mathbb {P}_{1111}=\\mathbb {P}_M\\mathbb {P}_{Y1}\\mathbb {P}_{R^M1}\\mathbb {P}_{R^Y11}, \\\\&\\mathbb {P}_{1011}=\\mathbb {P}_M(1-\\mathbb {P}_{Y1})\\mathbb {P}_{R^M1}\\mathbb {P}_{R^Y11},\\\\&\\mathbb {P}_{0111}=(1-\\mathbb {P}_M)\\mathbb {P}_{Y0}\\mathbb {P}_{R^M0}\\mathbb {P}_{R^Y01},\\\\&\\mathbb {P}_{0011}=(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y0})\\mathbb {P}_{R^M0}\\mathbb {P}_{R^Y01},\\\\&\\mathbb {P}_{1+10}=\\mathbb {P}_M\\mathbb {P}_{R^M1}(1-\\mathbb {P}_{R^Y11}),\\\\&\\mathbb {P}_{0+10}=(1-\\mathbb {P}_M)\\mathbb {P}_{R^M0}(1-\\mathbb {P}_{R^Y01}),\\\\&\\mathbb {P}_{+101}=\\mathbb {P}_M\\mathbb {P}_{Y1}(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y10}+(1-\\mathbb {P}_M)\\mathbb {P}_{Y0}(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y00},\\\\&\\mathbb {P}_{+001}=\\mathbb {P}_M(1-\\mathbb {P}_{Y1})(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y10}+(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y0})(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y00}.$ By solving the equations (REF ) to (), we can identify the parameters $\\mathbb {P}_{Y1}$ , $\\mathbb {P}_{Y0}$ , $\\mathbb {P}_{R^Y11}$ , and $\\mathbb {P}_{R^Y01}$ : $&\\mathbb {P}_{Y1}=\\frac{\\mathbb {P}_{1111}}{\\mathbb {P}_{1111}+\\mathbb {P}_{1011}},\\\\&\\mathbb {P}_{Y0}=\\frac{\\mathbb {P}_{0111}}{\\mathbb {P}_{0111}+\\mathbb {P}_{0011}},\\\\&\\mathbb {P}_{R^Y11}=\\frac{\\mathbb {P}_{1111}+\\mathbb {P}_{1011}}{\\mathbb {P}_{1111}+\\mathbb {P}_{1011}+\\mathbb {P}_{1+10}},\\\\&\\mathbb {P}_{R^Y01}=\\frac{\\mathbb {P}_{0111}+\\mathbb {P}_{0011}}{\\mathbb {P}_{0111}+\\mathbb {P}_{0011}+\\mathbb {P}_{0+10}}.$ In addition, we can identify the following products of parameters based on equations () to (): $\\mathbb {P}_M\\mathbb {P}_{R^M1}$ , $(1-\\mathbb {P}_M)\\mathbb {P}_{R^M0}$ , $\\mathbb {P}_M(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y10}$ and $(1-\\mathbb {P}_M)(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y00}$ .", "As a result, when $\\mathbb {P}_M$ is known, one can solve for $\\mathbb {P}_{R^M1}$ , $\\mathbb {P}_{R^M0}$ , $\\mathbb {P}_{R^Y10}$ and $\\mathbb {P}_{R^Y00}$ .", "Based on the observable data probabilities, $\\mathbb {P}_{Y1}=\\frac{6}{7}$ , $\\mathbb {P}_{Y0}=\\frac{1}{2}$ , $\\mathbb {P}_{R^Y11} = \\frac{7}{9}$ , $\\mathbb {P}_{R^Y01} = \\frac{2}{3}$ and $\\mathbb {P}_M$ is bounded between $\\frac{9}{20}$ and $\\frac{14}{20}$ .", "For example, let $\\mathbb {P}_M=\\frac{12}{20}$ , we have $\\mathbb {P}_{R^M1}=\\frac{3}{4}$ , $\\mathbb {P}_{R^M0}=\\frac{3}{4}$ , $\\mathbb {P}_{R^Y10}=\\frac{7}{10}$ , $\\mathbb {P}_{R^Y00}=\\frac{1}{5}$ .", "This set of parameter values give us the following joint probabilities of $M$ and $Y$ as $\\mathbb {P}(Y=1,M=1) =\\frac{18}{35} $ , $\\mathbb {P}(Y=0,M=1) = \\frac{3}{35}$ , $\\mathbb {P}(Y=1,M=0) = \\frac{1}{5}$ , and $\\mathbb {P}(Y=0,M=0) = \\frac{1}{5}$ .", "Alternatively, let $\\mathbb {P}_M=\\frac{13}{20}$ , we have $\\mathbb {P}_{R^M1}=\\frac{9}{13}$ , $\\mathbb {P}_{R^M0}=\\frac{6}{7}$ , $\\mathbb {P}_{R^Y10}=\\frac{21}{40}$ and $\\mathbb 
{P}_{R^Y00}=\\frac{2}{5}$ .", "This alternative set of parameter values give us the following joint probabilities of $M$ and $Y$ as $\\mathbb {P}(Y=1,M=1) = \\frac{39}{70}$ , $\\mathbb {P}(Y=0,M=1) =\\frac{13}{140} $ , $\\mathbb {P}(Y=1,M=0) = \\frac{7}{40}$ , and $\\mathbb {P}(Y=0,M=0) = \\frac{7}{40}$ .", "The two sets of values of $\\mathbb {P}(Y=y, M=m)$ correspond to the same observable data probabilities: $(\\mathbb {P}_{1111},\\mathbb {P}_{0111},\\mathbb {P}_{1011},\\mathbb {P}_{0011},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{++00})$ , and therefore, $\\mathbb {P}(Y=y, M=m)$ can not be uniquely identified without further assumptions.", "Consider the following probabilities from the observable data: $(\\mathbb {P}_{1111},\\mathbb {P}_{0111},\\mathbb {P}_{1011},\\mathbb {P}_{0011},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{++00})=\\left(\\frac{18}{96},\\frac{6}{96},\\frac{3}{96},\\frac{2}{96},\\frac{12}{96},\\frac{3}{96},\\frac{15}{96},\\frac{16}{96},\\frac{21}{96}\\right)\\nonumber .$ Define $&\\mathbb {P}_M=\\mathbb {P}(M=1),\\\\&\\mathbb {P}_{Y1}=\\mathbb {P}(Y=1\\mid M=1),\\\\&\\mathbb {P}_{Y0}=\\mathbb {P}(Y=1\\mid M=0),\\\\&\\mathbb {P}_{R^M1}=\\mathbb {P}(R^M=1\\mid M=1),\\\\&\\mathbb {P}_{R^M0}=\\mathbb {P}(R^M=1\\mid M=0),\\\\&\\mathbb {P}_{R^Y00}=\\mathbb {P}(R^Y=1\\mid M=0,Y=0),\\\\&\\mathbb {P}_{R^Y01}=\\mathbb {P}(R^Y=1\\mid M=0,Y=1),\\\\&\\mathbb {P}_{R^Y10}=\\mathbb {P}(R^Y=1\\mid M=1,Y=0),\\\\&\\mathbb {P}_{R^Y11}=\\mathbb {P}(R^Y=1\\mid M=1,Y=1).$ Below we study the identifiablity of the above 9 parameters describing the graphical model in Figure REF based on the observable data probabilities.", "Note that although there are 9 observable data probabilities, the degree of freedom in the probabilities is only 8 given that they sum up to 1.", "The following relationships between the observable data probabilities and the parameters hold, $&\\mathbb {P}_{1111}=\\mathbb {P}_M\\mathbb {P}_{Y1}\\mathbb {P}_{R^M1}\\mathbb {P}_{R^Y11},\\\\&\\mathbb {P}_{1011}=\\mathbb {P}_M(1-\\mathbb {P}_{Y1})\\mathbb {P}_{R^M1}\\mathbb {P}_{R^Y10},\\\\&\\mathbb {P}_{0111}=(1-\\mathbb {P}_M)\\mathbb {P}_{Y0}\\mathbb {P}_{R^M0}\\mathbb {P}_{R^Y01},\\\\&\\mathbb {P}_{0011}=(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y0})\\mathbb {P}_{R^M0}\\mathbb {P}_{R^Y00},\\\\&\\mathbb {P}_{1+10}=\\mathbb {P}_M\\mathbb {P}_{Y1}\\mathbb {P}_{R^M1}(1-\\mathbb {P}_{R^Y11})+\\mathbb {P}_M(1-\\mathbb {P}_{Y1})\\mathbb {P}_{R^M1}(1-\\mathbb {P}_{R^Y10}),\\\\&\\mathbb {P}_{0+10}=(1-\\mathbb {P}_M)\\mathbb {P}_{Y0}\\mathbb {P}_{R^M0}(1-\\mathbb {P}_{R^Y01})+(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y0})\\mathbb {P}_{R^M0}(1-\\mathbb {P}_{R^Y00}),\\\\&\\mathbb {P}_{+101}=\\mathbb {P}_M\\mathbb {P}_{Y1}(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y11}+(1-\\mathbb {P}_M)\\mathbb {P}_{Y0}(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y01},\\\\&\\mathbb {P}_{+001}=\\mathbb {P}_M(1-\\mathbb {P}_{Y1})(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y10}+(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y0})(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y00}.$ By solving the equations (REF ) to (), we can identify the parameters $\\mathbb {P}_{M}$ , $\\mathbb {P}_{R^M1}$ and $\\mathbb {P}_{R^M0}$ .", "Given the observable data probabilities, $\\mathbb {P}_{M} = \\frac{1}{2}$ , $\\mathbb {P}_{R^M1} = \\frac{3}{4}$ and $\\mathbb {P}_{R^M0} =\\frac{1}{2} $ .", "However, $\\mathbb {P}_{Y1}$ , $\\mathbb {P}_{Y0}$ , $\\mathbb {P}_{R^Y11}$ , $\\mathbb {P}_{R^Y10}$ , $\\mathbb {P}_{R^Y01}$ 
and $\\mathbb {P}_{R^Y00}$ are not identifiable.", "For example, we can have $\\mathbb {P}_{Y1}=\\frac{3}{4}$ , $\\mathbb {P}_{Y0}=\\frac{1}{2}$ , $\\mathbb {P}_{R^Y11}=\\frac{4}{6}$ , $\\mathbb {P}_{R^Y10}=\\frac{2}{6}$ , $\\mathbb {P}_{R^Y01}=\\frac{3}{6}$ and $\\mathbb {P}_{R^Y00}=\\frac{1}{6}$ , which in turn give us $\\mathbb {P}(Y=1, M=1)=\\frac{3}{8}$ , $\\mathbb {P}(Y=1, M=0)=\\frac{1}{4}$ , $\\mathbb {P}(Y=0, M=1)=\\frac{1}{8}$ and $\\mathbb {P}(Y=0, M=0)=\\frac{1}{4}$ .", "Alternatively, we can have $\\mathbb {P}_{Y1}=\\frac{2}{3}$ , $\\mathbb {P}_{Y0}=\\frac{3}{4}$ , $\\mathbb {P}_{R^Y11}=\\frac{3}{4}$ , $\\mathbb {P}_{R^Y10}=\\frac{1}{4}$ , $\\mathbb {P}_{R^Y01}=\\frac{1}{3}$ and $\\mathbb {P}_{R^Y00}=\\frac{1}{3}$ , which in turn give us $\\mathbb {P}(Y=1, M=1)=\\frac{1}{3}$ , $\\mathbb {P}(Y=1, M=0)=\\frac{3}{8}$ , $\\mathbb {P}(Y=0, M=1)=\\frac{1}{6}$ and $\\mathbb {P}(Y=0, M=0)=\\frac{1}{8}$ .", "The two sets of values of $\\mathbb {P}(Y=y, M=m)$ correspond to the same observable data probabilities: $(\\mathbb {P}_{1111},\\mathbb {P}_{0111},\\mathbb {P}_{1011},\\mathbb {P}_{0011},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{++00})$ , and therefore, $\\mathbb {P}(Y=y, M=m)$ can not be uniquely identified without further assumptions.", "Consider the following probabilities from the observable data: $&&(\\mathbb {P}_{1211},\\mathbb {P}_{1111},\\mathbb {P}_{1011},\\mathbb {P}_{0211},\\mathbb {P}_{0111},\\mathbb {P}_{0011},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{+201},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{++00})\\nonumber \\\\&=&\\left(\\frac{180}{1440},\\frac{60}{1440},\\frac{15}{1440},\\frac{120}{1440},\\frac{30}{1440},\\frac{15}{1440},\\frac{285}{1440},\\frac{195}{1440},\\frac{180}{1440},\\frac{50}{1440},\\frac{20}{1440},\\frac{290}{1440}\\right)\\nonumber .$ Define $&\\mathbb {P}_M=\\mathbb {P}(M=1),\\\\&\\mathbb {P}_{Y^21}=\\mathbb {P}(Y=2\\mid M=1),\\\\&\\mathbb {P}_{Y^11}=\\mathbb {P}(Y=1\\mid M=1),\\\\&\\mathbb {P}_{Y^20}=\\mathbb {P}(Y=2\\mid M=0),\\\\&\\mathbb {P}_{Y^10}=\\mathbb {P}(Y=1\\mid M=0),\\\\&\\mathbb {P}_{R^M1}=\\mathbb {P}(R^M=1\\mid M=1),\\\\&\\mathbb {P}_{R^M0}=\\mathbb {P}(R^M=1\\mid M=0),\\\\&\\mathbb {P}_{R^Y2}=\\mathbb {P}(R^Y=1\\mid Y=2),\\\\&\\mathbb {P}_{R^Y1}=\\mathbb {P}(R^Y=1\\mid Y=1),\\\\&\\mathbb {P}_{R^Y0}=\\mathbb {P}(R^Y=1\\mid Y=0).$ The following relationships between the observable data probabilities and the parameters hold, $&\\mathbb {P}_{1211}=\\mathbb {P}_M\\mathbb {P}_{Y^21}\\mathbb {P}_{R^M1}\\mathbb {P}_{R^Y2},\\\\&\\mathbb {P}_{1111}=\\mathbb {P}_M\\mathbb {P}_{Y^11}\\mathbb {P}_{R^M1}\\mathbb {P}_{R^Y1},\\\\&\\mathbb {P}_{1011}=\\mathbb {P}_M(1-\\mathbb {P}_{Y^21}-\\mathbb {P}_{Y^11})\\mathbb {P}_{R^M1}\\mathbb {P}_{R^Y0},\\\\&\\mathbb {P}_{0211}=(1-\\mathbb {P}_M)\\mathbb {P}_{Y^20}\\mathbb {P}_{R^M0}\\mathbb {P}_{R^Y2},\\\\&\\mathbb {P}_{0111}=(1-\\mathbb {P}_M)\\mathbb {P}_{Y^10}\\mathbb {P}_{R^M0}\\mathbb {P}_{R^Y1},\\\\&\\mathbb {P}_{0011}=(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y^20}-\\mathbb {P}_{Y^10})\\mathbb {P}_{R^M0}\\mathbb {P}_{R^Y0},\\\\&\\mathbb {P}_{+201}=\\mathbb {P}_M\\mathbb {P}_{Y^21}(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y2}+(1-\\mathbb {P}_M)\\mathbb {P}_{Y^20}(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y2},\\\\&\\mathbb {P}_{+101}=\\mathbb {P}_M\\mathbb {P}_{Y^11}(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y1}+(1-\\mathbb {P}_M)\\mathbb {P}_{Y^10}(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y1},\\\\&\\mathbb {P}_{+001}=\\mathbb {P}_M(1-\\mathbb 
{P}_{Y^21}-\\mathbb {P}_{Y^11})(1-\\mathbb {P}_{R^M1})\\mathbb {P}_{R^Y0}+\\nonumber \\\\&\\hspace{42.67912pt}(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y^20}-\\mathbb {P}_{Y^10})(1-\\mathbb {P}_{R^M0})\\mathbb {P}_{R^Y0},\\\\&\\mathbb {P}_{1+10}=\\mathbb {P}_M\\mathbb {P}_{Y^21}\\mathbb {P}_{R^M1}(1-\\mathbb {P}_{R^Y2})+\\mathbb {P}_M\\mathbb {P}_{Y^11}\\mathbb {P}_{R^M1}(1-\\mathbb {P}_{R^Y1})+\\nonumber \\\\&\\hspace{42.67912pt}\\mathbb {P}_M(1-\\mathbb {P}_{Y^21}-\\mathbb {P}_{Y^11})\\mathbb {P}_{R^M1}(1-\\mathbb {P}_{R^Y0}),\\\\&\\mathbb {P}_{0+10}=(1-\\mathbb {P}_M)\\mathbb {P}_{Y^20}\\mathbb {P}_{R^M0}(1-\\mathbb {P}_{R^Y2})+(1-\\mathbb {P}_M)\\mathbb {P}_{Y^10}\\mathbb {P}_{R^M0}(1-\\mathbb {P}_{R^Y1})+\\nonumber \\\\&\\hspace{42.67912pt}(1-\\mathbb {P}_M)(1-\\mathbb {P}_{Y^20}-\\mathbb {P}_{Y^10})\\mathbb {P}_{R^M0}(1-\\mathbb {P}_{R^Y0}).$ By solving the equations (REF ) to (), we can identify the parameters $\\mathbb {P}_{M}$ , $\\mathbb {P}_{R^M1}$ and $\\mathbb {P}_{R^M0}$ .", "Given the observable data probabilities, $\\mathbb {P}_{M}=\\frac{1}{2}$ , $\\mathbb {P}_{R^M1}=\\frac{3}{4}$ and $\\mathbb {P}_{R^M0}=\\frac{1}{2}$ .", "However, $\\mathbb {P}_{Y^21}$ , $\\mathbb {P}_{Y^11}$ , $\\mathbb {P}_{Y^20}$ , $\\mathbb {P}_{Y^10}$ , $\\mathbb {P}_{R^Y2}$ , $\\mathbb {P}_{R^Y1}$ and $\\mathbb {P}_{R^Y0}$ are not identifiable.", "For example, we can have $\\mathbb {P}_{Y^21}=\\frac{3}{6}$ , $\\mathbb {P}_{Y^11}=\\frac{2}{6}$ , $\\mathbb {P}_{Y^20}=\\frac{2}{4}$ , $\\mathbb {P}_{Y^10}=\\frac{1}{4}$ , $\\mathbb {P}_{R^Y2}=\\frac{2}{3}$ , $\\mathbb {P}_{R^Y1}=\\frac{1}{3}$ and $\\mathbb {P}_{R^Y0}=\\frac{1}{6}$ , which in turn give us $\\mathbb {P}(Y=2, M=1)=\\frac{1}{4}$ , $\\mathbb {P}(Y=1, M=1)=\\frac{1}{6}$ , $\\mathbb {P}(Y=0, M=1)=\\frac{1}{12}$ , $\\mathbb {P}(Y=2, M=0)=\\frac{1}{4}$ , $\\mathbb {P}(Y=1, M=0)=\\frac{1}{8}$ and $\\mathbb {P}(Y=0, M=0)=\\frac{1}{8}$ .", "Alternatively, we can have $\\mathbb {P}_{Y^21}=\\frac{5}{8}$ , $\\mathbb {P}_{Y^11}=\\frac{1}{4}$ , $\\mathbb {P}_{Y^20}=\\frac{5}{8}$ , $\\mathbb {P}_{Y^10}=\\frac{3}{16}$ , $\\mathbb {P}_{R^Y2}=\\frac{8}{15}$ , $\\mathbb {P}_{R^Y1}=\\frac{4}{9}$ and $\\mathbb {P}_{R^Y0}=\\frac{2}{9}$ , which in turn give us $\\mathbb {P}(Y=2, M=1)=\\frac{5}{16}$ , $\\mathbb {P}(Y=1, M=1)=\\frac{1}{8}$ , $\\mathbb {P}(Y=0, M=1)=\\frac{1}{16}$ , $\\mathbb {P}(Y=2, M=0)=\\frac{5}{16}$ , $\\mathbb {P}(Y=1, M=0)=\\frac{3}{32}$ and $\\mathbb {P}(Y=0, M=0)=\\frac{3}{32}$ .", "The two sets of values of $\\mathbb {P}(Y=y, M=m)$ correspond to the same observable data probabilities: $(\\mathbb {P}_{1211},\\mathbb {P}_{1111},\\mathbb {P}_{1011},\\mathbb {P}_{0211},\\mathbb {P}_{0111},\\mathbb {P}_{0011},\\mathbb {P}_{1+10},\\mathbb {P}_{0+10},\\mathbb {P}_{+201},\\mathbb {P}_{+101},\\mathbb {P}_{+001},\\mathbb {P}_{++00})$ , and therefore, $\\mathbb {P}(Y=y, M=m)$ can not be uniquely identified without further assumptions." 
], [ "Parametric Estimation Details", "For illustration, we describe the parametric methods in the scenarios considered in Theorem REF when the missingness exists only in the mediator.", "Under Assumption REF , the log of the complete-data likelihood is $\\ell _{c}(\\theta ) = \\mathrm {constant}+\\sum ^n_{i=1} &\\log ~\\mathbb {P}(Y_i=y_i\\mid M_i=m_i,T_i=t_i, X_i = x_i )+\\log ~\\mathbb {P}(M_i=m_i \\mid T_i=t_i, X_i = x_i )\\\\& + \\log ~\\mathbb {P}(R^M_i=r^M_i \\mid M_i=m_i,T_i=t_i, X_i = x_i ).$ Under Assumption REF , the observed-data likelihood is $L_{obs}(\\theta )\\propto \\prod _{\\lbrace i:R^M_i=1\\rbrace } &\\mathbb {P}(Y_i=y_i\\mid M_i=m_i,T_i=t_i, X_i = x_i )\\mathbb {P}(M_i=m_i \\mid T_i=t_i, X_i = x_i )\\\\&\\mathbb {P}(R^M_i=1 \\mid M_i=m_i,T_i=t_i, X_i = x_i )\\\\&\\hspace{-39.83368pt}\\prod _{\\lbrace i:R^M_i=0\\rbrace }\\int _{\\mathcal {M}} \\mathbb {P}(Y_i=y_i\\mid M_i=m,T_i=t_i, X_i = x_i )\\mathbb {P}(M_i=m \\mid T_i=t_i, X_i = x_i )\\\\&\\mathbb {P}(R^M_i=0 \\mid M_i=m,T_i=t_i, X_i = x_i )\\ d m.$ When $M$ is categorical, the integral involved in the above expression is reduced to summation.", "Since the value of $M$ is missing for some subjects, we implement the Expectation-Maximization algorithm to obtain the Maximum Likelihood Estimates (MLEs) by treating the missing $M$ as a latent variable.", "Specifically, in the E-step, we find the conditional expectation of complete-data log likelihood by calculating the conditional expectation of $M_i$ for subjects with missing $M_i$ .", "For example, if $M$ is binary, $E\\lbrace I(M_{i}=m)\\mid Y_i,R^M_i=0,T_i, X_i ;\\theta ^{(t)}\\rbrace =\\frac{\\mathbb {P}(Y_i,M_{i}=m,R^M_i=0\\mid T_i, X_i )}{\\sum _{m={0,1}}\\mathbb {P}(Y_i,M_{i}=m,R^M_i=0\\mid T_i, X_i )}.$ When $M$ is continuous, the conditional expectation of complete-data log likelihood may be complicated to calculate.", "Therefore, we applied fractional imputation [16] using the idea of importance sampling and weighting method to approximate the conditional expectation.", "Specifically, we generate the fractionally imputed data $m^{(1)}_{i}, \\ldots , m^{(S)}_{i}$ from a proposed distribution $h(M_{i}\\mid T_i, X_i )$ for subjects with missing $M_i$ .", "Then, we compute the fractional weight for each imputed observation.", "The Monte Carlo approximation of the conditional expectation becomes more accurate when $S$ is large: $E\\lbrace \\ell _{ci}(M_{i}=m;\\theta )|Y_i,R^M_i=0,T_i, X_i ;\\theta ^{(t)}\\rbrace \\approx \\frac{1}{S}\\sum \\limits _{j=1}^{S} \\ell _{ci}(M_{i}=m^{(j)}_{i};\\theta ) \\hat{w}(m^{(j)}_{i}),$ where $\\hat{w}(m^{(j)}_{i}) \\propto \\frac{\\mathbb {P}(Y_{i},M_{i}=m^{(j)}_{i},R^M_i=0\\mid T_i, X_i )}{h(M_{i}=m^{(j)}_{i}\\mid T_i, X_i )}$ is the fractional weight for $m^{(j)}_{i}$ that satisfy $\\hat{w}(m^{(j)}_{i}) \\ge 0$ and $\\sum _{j=1}^{S} \\hat{w}(m^{(j)}_{i})=1$ .", "We iterate between the E-step and M-step until convergence.", "The same estimation methods can be applied to the situation where the missingness exists in both the mediator and outcome.", "For subjects with both $M_i$ and $Y_i$ missing, we generate the imputed data sequentially.", "For binary $M$ and binary $Y$ , we generate the possible value of ($m_i$ , $y_i$ ).", "For binary $M$ and continuous $Y$ , we generate the possible value of $m_i$ and then the fractionally imputed data $y^{(1)}_{i}, \\ldots , y^{(S)}_{i}$ for each possible value of $m_i$ .", "For continuous $M$ and continuous $Y$ , we generate the fractionally imputed data $(m^{(1)}_{i},y^{(1)}_{i})$ 
,...,$(m^{(S)}_{i},y^{(S)}_{i})$ .", "For continuous $M$ and binary $Y$ , we generate the fractionally imputed data $m^{(1)}_{i}, \\ldots , m^{(S)}_{i}$ and then the possible value of $y_i$ for each fractionally imputed $m_i$ .", "It is important to note that we can identify the outcome model using the complete cases under MNAR Assumption REF , REF and REF .", "Therefore, an alternative approach for those scenarios is to estimate the outcome model first using the complete cases, then estimate the parameters in other models through the Expectation-Maximization algorithm by plugging in the estimated outcome model.", "We applied both of these slightly different approaches in our simulation settings; both provided consistent results, with the alternative approach enjoying higher computational efficiency as expected.", "Note that under MNAR Assumption REF , the alternative approach does not work because the outcome distribution $\\mathbb {P}(Y \\mid M, T, X )$ cannot be identified by the complete cases." ] ]
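To make the E-step concrete, the following Python sketch computes fractional-imputation weights for a single subject with missing continuous M, under assumed working models (a normal model for M given T and X, a normal outcome model, and a logistic missingness model); all model forms, parameter names and the choice of proposal h are illustrative assumptions rather than the exact specification used in the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def fractional_weights(y_i, t_i, x_i, theta, S=200):
    # Draw imputations m^(1..S) from a proposal h(M | T, X); here h is taken to be the
    # current working model for M (an assumption that simplifies the weights).
    m_draws = rng.normal(theta["alpha0"] + theta["alpha_t"] * t_i + theta["alpha_x"] * x_i,
                         theta["sigma_m"], size=S)
    # Unnormalized weights w_j proportional to P(Y_i, M_i=m_j, R^M_i=0 | T_i, X_i) / h(m_j | T_i, X_i).
    # Because the proposal equals the M-model, the M-density cancels and only the outcome
    # and missingness factors remain.
    lik_y = norm.pdf(y_i, theta["beta0"] + theta["beta_m"] * m_draws
                          + theta["beta_t"] * t_i + theta["beta_x"] * x_i, theta["sigma_y"])
    p_obs = 1.0 / (1.0 + np.exp(-(theta["gamma0"] + theta["gamma_m"] * m_draws)))  # P(R^M=1 | M, ...)
    w = lik_y * (1.0 - p_obs)
    return m_draws, w / w.sum()   # weights are nonnegative and sum to one, as required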
2212.05577
[ [ "A Formalization of Doob's Martingale Convergence Theorems in mathlib" ], [ "Abstract We present the formalization of Doob's martingale convergence theorems in the mathlib library for the Lean theorem prover.", "These theorems give conditions under which (sub)martingales converge, almost everywhere or in $L^1$.", "In order to formalize those results, we build a definition of the conditional expectation in Banach spaces and develop the theory of stochastic processes, stopping times and martingales.", "As an application of the convergence theorems, we also present the formalization of L\\'evy's generalized Borel-Cantelli lemma.", "This work on martingale theory is one of the first developments of probability theory in mathlib, and it builds upon diverse parts of that library such as topology, analysis and most importantly measure theory." ], [ "=1 calc decorations.pathmorphing curve/.style=settings=1,to path=() .. controls ($()!{pos}!()!", "{height}!270:()$ ) and ($()!1-{pos}!()!", "{height}!270:()$ ) .. (), settings/.code=quiver/.cd,1 , quiver/.cd,pos/.initial=0.35,height/.initial=0 tail reversed/.code=tikzcd to 2tail/.code=Implies[reversed] 2tail reversed/.code=Implies no body/.style=/tikz/dash pattern=on 0 off 1mm" ] ]
2212.05578
[ [ "Bayesian inference for partial orders from random linear extensions:\n power relations from 12th Century Royal Acta" ], [ "Abstract We give a new class of models for time series data in which actors are listed in order of precedence.", "We model the lists as a realisation of a queue in which queue-position is constrained by an underlying social hierarchy.", "We model the hierarchy as a partial order so that the lists are random linear extensions.", "We account for noise via a random queue-jumping process.", "We give a marginally consistent prior for the stochastic process of partial orders based on a latent variable representation for the partial order.", "This allows us to introduce a parameter controlling partial order depth and incorporate actor-covariates informing the position of actors in the hierarchy.", "We fit the model to witness lists from Royal Acta from England, Wales and Normandy in the eleventh and twelfth centuries.", "Witnesses are listed in order of social rank, with any bishops present listed as a group.", "Do changes in the order in which the bishops appear reflect changes in their personal authority?", "The underlying social order which constrains the positions of bishops within lists need not be a complete order and so we model the evolving social order as an evolving partial order.", "The status of an Anglo-Norman bishop was at the time partly determined by the length of time they had been in office.", "This enters our model as a time-dependent covariate.", "We fit the model, estimate partial orders and find evidence for changes in status over time.", "We interpret our results in terms of court politics.", "Simpler models, based on bucket orders and vertex-series-parallel orders, are rejected.", "We compare our results with a stochastic process extension of the Plackett-Luce model." ], [ "Observation model (likelihood)", "Let $y = (o_i)_{i=1}^N$ be our data, which is a list of witness lists.", "Under a Plackett-Luce model, the likelihood is $L(y | \\lambda , \\beta ) & = \\prod _{i=1}^N \\prod _{j=1}^{n_i} \\frac{\\exp (\\lambda _{o_{ij}, t_i} + \\beta _{r_{o_{ij}}})}{\\sum _{k=j}^{n_i} \\exp (\\lambda _{o_{ik}, t_i} + \\beta _{r_{o_{ik}}})} = \\prod _{i=1}^N \\prod _{j=1}^{n_i} \\frac{\\exp (f_{ij})}{\\sum _{k=j}^{n_i} \\exp (f_{ik})}\\\\l(y | \\lambda , \\beta ) = \\log L(y | \\lambda , \\beta ) & = \\sum _{i=1}^N \\sum _{j=1}^{n_i} \\left( f_{ij} - \\log \\left( \\sum _{k=j}^{n_i} \\exp (f_{ik})\\right)\\right)$ where $f_{ij} = \\lambda _{o_{ij}, t_i} + \\beta _{r_{o_{ij}}}$ , $o_{ij}$ is the j-th bishop of the i-th list, $t_i$ is the time of the i-th list, $r_{bishop}$ is the seniority rank of the bishop.", "Here $\\lambda $ is an $n \\times T$ matrix, where $n$ is the total number of bishops (59 here), $T$ is the number of years we are considering (1155 - 1180 = 76 here).", "$\\beta $ is a vector of length $n_d$ , which is the number of dioceses (16 here)." 
], [ "Priors", "The prior on all the parameters is $p(\\lambda , \\beta , \\sigma , \\theta ) = p(\\lambda | \\sigma , \\theta ) p(\\beta ) p(\\sigma ) p(\\theta )$ The prior we put on $\\lambda $ is an AR(1) process as follows.", "For all $i, t \\ge 1$ ,we have $\\lambda _{i, t} = \\theta \\lambda _{i, t-1} + \\epsilon _{i, t}$ where $\\epsilon _{i, t} \\sim \\mathcal {N}(0, \\sigma ^2)$ i.i.d., with $\\theta \\in (0, 1)$ and $\\sigma > 0$ .", "We let the initial states of the process to have the stationary distribution of the process, i.e.", "$\\lambda _{i,0} \\sim \\mathcal {N} \\left( 0, \\frac{\\sigma ^2}{1 - \\theta ^2} \\right) \\text{i.i.d.}", "\\quad \\forall i \\in \\lbrace 1, \\dots , n\\rbrace $ However there is a non-identifiability issue with this setup.", "Hence we constrain the $\\lambda $ so that $\\sum _{i=1}^n \\lambda _{i,t} = 0 \\forall t$ by considering the following process instead.", "$\\lambda _{,t} = \\theta \\lambda _{,t-1} + M \\epsilon _{,t}$ where $M = I_n - \\frac{1}{n} \\mathbf {1} \\mathbf {1}$ with $\\mathbf {1}$ being a column of ones.", "Notice the matrix $M$ projects the $\\epsilon $ to a subspace where they are centred.", "Hence observe that if $\\lambda _{,t-1}$ is centred, then so is $\\lambda _{,t}$ .", "Since the degree of freedom now for each time $t$ is $n-1$ , we only need to consider the prior on $n-1$ $\\lambda $ 's for each time.", "Without loss of generality, we consider the prior on the first $n-1$ rows of $\\lambda $ and the prior becomes $p(\\lambda | \\theta , \\sigma ) & = p(\\lambda _{,0} | \\theta , \\sigma )\\prod _{t=1}^T p(\\lambda _{,t} | \\lambda _{, t-1}, \\theta , \\sigma ) \\\\p(\\lambda _{,t} | \\lambda _{,t-1}, \\theta , \\sigma ) & = \\mathcal {N}\\left( \\lambda _{1:(n-1), t} | \\theta \\lambda _{1:(n-1), t-1}, M_{n-1,} M_{n-1,}^T \\sigma ^2 \\right) \\\\p(\\lambda _{,0}| \\theta , \\sigma ) & = \\mathcal {N} \\left( \\lambda _{1:(n-1), 0} | 0, M_{n-1,} M_{n-1,}^T \\frac{\\sigma ^2}{1 - \\theta ^2} \\right)$ where $M_{n-1,}$ is the first $n-1$ rows of $M$ .", "The prior on $\\beta $ is a standard multivariate normal, namely $p(\\beta ) = \\mathcal {N}(0, I_{n_d})$ However, there is the same identifiability issue and we also constrain them to be centred.", "Hence the resulting prior becomes $p(\\beta ) = \\mathcal {N}(\\beta _{1:(n_d - 1)} | 0, N_{n_d - 1, } N_{n_d - 1, }^T)$ where $N_{n_d - 1, }$ is the first $n_d - 1$ rows of the matrix $N = I_{n_d} - \\frac{1}{n_d} \\mathbf {1}\\mathbf {1}^T$ .", "Finally, we specify the prior on $\\theta $ and $\\sigma $ .", "We chose a $U(0, 1)$ for $\\theta $ and a $\\mathcal {G}(2, 2)$ for $\\sigma $ ." 
], [ "The MCMC algorithm", "Here the Metropolis-Hastings algorithm is used.", "At each MCMC step, we propose a move for $\\theta $ , $\\sigma $ , each of the entries of the $\\beta $ vector except the last element, and the $\\lambda $ matrix except the last row.", "More formally, for $k \\ge 1$ , we proposal distributions are $\\theta ^{(k)} & \\sim U(\\theta ^{(k-1)} - w_{\\theta }, \\theta ^{(k-1)} + w_{\\theta }) \\\\\\sigma ^{(k)} & \\sim U(\\sigma ^{(k-1)} - w_{\\sigma }, \\sigma ^{(k-1)} + w_{\\sigma }) \\\\\\beta _j^{(k)} & \\sim \\mathcal {N}(\\beta _j^{(k-1)}, \\sigma _{\\beta }^2) \\quad \\forall j \\in \\lbrace 1, \\dots , n_d-1\\rbrace , \\quad \\beta _{n_d}^{(k)} = - \\sum _{j=1}^{n_d - 1} \\beta _j^{(k)} \\\\\\lambda _{i,t}^{(k)} & \\sim \\mathcal {N}(\\lambda _{i,t}^{(k-1)}, \\sigma _{\\lambda }^2) \\quad \\forall i \\in \\lbrace 1, \\dots , n-1\\rbrace , \\quad \\lambda _{n,t}^{(k)} = - \\sum _{i=1}^{n - 1} \\lambda _{i,t}^{(k)} \\quad \\forall t \\in \\lbrace 1, \\dots , T\\rbrace $ where in our case, the hyperparameters are chosen as $w_{\\theta } = 0.05, w_{\\sigma } = 0.05, \\sigma _{\\theta } = 0.5, \\sigma _{\\lambda } = 0.5$ .", "The initial states are $\\theta ^{(0)} = 0.95, \\sigma ^{(0)} = 0.5, \\beta = \\mathbf {0}, \\lambda =\\mathbf {0}$ .", "In total, there are 77500 MCMC steps and we sub-sample every 10 steps, which results in 7750 samples.", "When doing the analysis, the first 2000 samples are removed as burn-in, which leaves 5750 samples.", "In my MCMC run, i separated the whole run into 10 mini-runs.", "The acceptance probabilities for the parameters for the mini-runs has means ($\\pm $ standard deviations) $\\theta : 0.164 (\\pm 0.014), \\sigma : 0.212 (\\pm 0.035), \\beta : 0.506 (\\pm 0.006), \\lambda : 0.577 (\\pm 0.056)$ .", "The ESS for the parameters are ($\\theta $ : 26, $\\sigma $ : 18.7, $\\beta $ : 176 - 5532, $\\lambda $ : 41 - 1062)" ] ]
2212.05524
[ [ "Recurrent Vision Transformers for Object Detection with Event Cameras" ], [ "Abstract We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras.", "Event cameras provide visual information with sub-millisecond latency at a high-dynamic range and with strong robustness against motion blur.", "These unique properties offer great potential for low-latency object detection and tracking in time-critical scenarios.", "Prior work in event-based vision has achieved outstanding detection performance but at the cost of substantial inference time, typically beyond 40 milliseconds.", "By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 5 while retaining similar performance.", "To achieve this, we explore a multi-stage design that utilizes three key concepts in each stage: First, a convolutional prior that can be regarded as a conditional positional embedding.", "Second, local- and dilated global self-attention for spatial feature interaction.", "Third, recurrent temporal feature aggregation to minimize latency while retaining temporal information.", "RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection - achieving an mAP of 47.5% on the Gen1 automotive dataset.", "At the same time, RVTs offer fast inference (13 ms on a T4 GPU) and favorable parameter efficiency (5 times fewer than prior art).", "Our study brings new insights into effective design choices that could be fruitful for research beyond event-based vision." ], [ "Introduction", "Time matters for object detection.", "In 30 milliseconds, a human can run 0.3 meters, a car on public roads covers up to 1 meter, and a train can travel over 2 meters.", "Yet, during this time, an ordinary camera captures only a single frame.", "Frame-based sensors must strike a balance between latency and bandwidth.", "Given a fixed bandwidth, a frame-based camera must trade-off camera resolution and frame rate.", "However, in highly dynamic scenes, reducing the resolution or the frame rate may come at the cost of missing essential scene details, and, in safety-critical scenarios like automotive, this may even cause fatalities.", "In recent years, event cameras have emerged as alternative sensor that offers a different trade-off.", "Instead of counterbalancing bandwidth requirements and perceptual latency, they provide visual information at sub-millisecond latency but sacrifice absolute intensity information.", "Instead of capturing intensity images, event cameras measure changes in intensity at the time they occur.", "This results in a stream of events, which encode time, location, and polarity of brightness changes [14].", "The main advantages of event cameras are their sub-millisecond latency, very high dynamic range ($>120$ dB), strong robustness to motion blur, and ability to provide events asynchronously in a continuous manner.", "Figure: Detection performance vs inference time of our RVT models on the 1 Mpx detection dataset using a T4 GPU.", "The circle areas are proportional to the model size.In this work, we aim to utilize these outstanding properties of event cameras for object detection in time-critical scenarios.", "Therefore, our objective is to design an approach that reduces the processing latency as much as possible while maintaining high performance.", "This is challenging because event cameras asynchronously trigger binary events that are spread of pixel space and time.", "Hence, we need to 
develop detection algorithms that can continuously associate features in the spatio-temporal domain while simultaneously satisfying strict latency requirements.", "Recent work has shown that dynamic graph neural networks (GNNs) [27], [40] and sparse neural networks [32], [54], [52], [10] can theoretically achieve low latency inference for event-based object detection.", "Yet, to achieve this in practical scenarios they either require specialized hardware or their detection performance needs to be improved.", "An alternative thread of research approaches the problem from the view of conventional, dense neural network designs [19], [7], [20], [35], [25].", "These methods show impressive performance on event-based object detection, especially when using temporal recurrence in their architectures [35], [25].", "Still, the processing latency of these approaches remains beyond 40 milliseconds such that the low-latency aspect of event cameras cannot be fully leveraged.", "This raises the question: How can we achieve both high accuracy and efficiency without requiring specialized hardware?", "We notice that common design choices yield a suboptimal trade-off between performance and compute.", "For example, prior work uses expensive convolutional LSTM (Conv-LSTM) cells [41] extensively in their feature extraction stage [35], [25] or relies on heavy backbones such as the VGG architecture [25].", "Sparse neural networks instead struggle to model global mixing of features which is crucial to correctly locate and classify large objects in the scene.", "To achieve our main objective, we fundamentally revisit the design of vision backbones for event-based object detection.", "In particular, we take inspiration from neural network design for conventional frame-based object detection and combine them with ideas that have proven successful in the event-based vision literature.", "Our study deliberately focuses on macro design of the object detection backbone to identify key components for both high performance and fast inference on GPUs.", "The resulting neural network is based on a single block that is repeated four times to form a multi-stage hierarchical backbone that can be used with off-the-self detection frameworks.", "We identify three key components that enable an excellent trade-off between detection performance and inference time.", "First, we find that interleaved local- and global self-attention [47] is ideally suited to mix both local and global features while offering linear complexity in the input resolution.", "Second, this attention mechanism is most effective when preceded by a simple convolution that also downsamples the spatial resolution from the previous stage.", "This convolution effectively provides a strong prior about the grid-structure of the pixel array and also acts as a conditional positional embedding for the transformer layers [9].", "Third, temporal recurrence is paramount to achieve strong detection performance with events.", "Differently from prior work, we find that Conv-LSTM cells can be replaced by plain LSTM cells [18] that operate on each feature separatelyequivalent to $1\\times 1$ kernel in a Conv-LSTM cell.", "By doing so, we dramatically reduce the number of parameters and latency but also slightly improve the overall performance.", "Our full framework achieves competitive performance and higher efficiency compared to state-of-the-art methods.", "Specifically, we reduce parameter count (from 100M to 18.5 M) and inference time (from 72 ms to 13 ms) up to a factor of 
5 compared to prior art [25].", "At the same time, we train our networks from scratch, showing that these benefits do not originate from large-scale pretraining.", "Our main contributions can be summarized as follows: (1) We re-examine predominant design choices in event-based object detection pipelines and reveal a set of key enablers for high performance in event-based object detection.", "(2) We propose a simple, composable stage design that unifies the crucial building blocks in a compact way.", "(3) We propose a hierarchical multi-stage backbone that is fast, lightweight and still offers performance comparable to the best reported so far.", "(4) We present state-of-the-art object detection performance of 47.5% mAP on the Gen1 detection dataset [11] and highly competitive results on the 1 Mpx detection dataset [35] while training the proposed architecture from scratch.", "In addition, we also provide insights into effective data augmentation techniques that contribute to reaching these results.", "Figure: Overview of the unrolled computation graph of our multi-stage recurrent backbone.", "Events are processed into a tensor representation before they are used as input to the first stage.", "Each stage also reuses the LSTM states (c: cell, h: hidden) from the previous timestep.", "Finally, the detection framework interfaces with the backbone from the second stage onwards.", "Specifically, the hidden states of the LSTMs are used as features for the detection framework." ], [ "Object Detection for Event Cameras", "Object detection in the event camera literature can be broadly classified into three emerging research directions.", "Recent work explores graph neural networks to dynamically construct a spatio-temporal graph [27], [40], [33].", "New nodes and node edges are established by sub-sampling events and finding existing nodes that are close in space-time.", "The main challenge is to design the architecture such that information can propagate over vast distances in the space-time volume.", "This is relevant, for example, when large objects move slowly with respect to the camera.", "Furthermore, aggressive sub-sampling of events can lead to the removal of potentially crucial information, but is often required to maintain low-latency inference.", "A second line of work employs spiking neural networks (SNNs) that propagate information sparsely within the network [54], [52], [10].", "SNNs are closely related to dense recurrent neural networks (RNNs) in that each spiking neuron has an internal state that is propagated in time.", "Differently from RNNs, neurons in SNNs only emit spikes whenever a threshold is reached.", "This spike generation mechanism is not differentiable, which leads to substantial difficulties in optimizing these networks [34], [42], [23], [53], [22], [4], [44].", "One workaround is to avoid the aforementioned threshold and instead propagate features throughout the receptive field [32] .", "The downside of this mechanism is that the sparse-processing property is lost within deeper layers of the network.", "Overall, the design and training of SNNs still requires fundamental investigation before competitive performance can be reached.", "A third research direction is concerned with exploring dense neural networks for object detection with event cameras.", "The first step is the creation of a dense tensor (event representation) that enables compatibility with dense operations such as convolutions.", "Early work directly uses a single event representation generated from a 
short temporal window of events to infer detections [19], [7], [20].", "These approaches discard relevant information from beyond the considered temporal window such that detecting slowly moving objects becomes difficult or impossible.", "Followup work addresses this issue by incorporating recurrent neural network layers [35], [25] which drastically improved the detection performance.", "We follow this line of work but revamp dominant architecture choices to build a canonical framework that is fast, lightweight and highly performant." ], [ "Vision Transformers for Spatio-Temporal Data", "The success of attention-based models [49] in NLP has inspired the exploration of transformer-based architectures in computer vision [12].", "Attention-based models have recently also been explored in video classification [45], [13], [1], [5] where the models are applied directly to a set of frames.", "While these approaches have shown promising results in spatio-temporal modelling, they are optimized for offline processing of stored video data.", "In event-based vision, attention-based components have found applications in classification [39], [50] and image reconstruction [51], but their use in event-based object detection has yet to be investigated.", "Figure: RVT block structure.", "The input is convolved with kernel size k×kk\\times k and stride ss.", "Block-SA applies self-attention in local windows while Grid-SA is a global operation using dilated attention.", "Finally, each block ends with an LSTM that reuses the (cell- and hidden) states from the previous timestep.", "The LSTM is applied to each feature separately.", "Normalization and activation layers are omitted for conciseness." ], [ "Method", "Our object detection approach is designed to process a stream of events sequentially as they arrive.", "Incoming events are first processed into tensors that represent events in space and time.", "In every timestep, our network takes a new event representation as input as well as the previous states of the recurrent neural network layers.", "After each pass through the backbone, the output of the RNNs are used as input to the detection framework.", "The following sections elaborate on each one of these steps.", "Figure REF shows an overview of the RVT architecture." 
], [ "Event Processing", "Each pixel of an event camera can independently trigger an event when a significant log brightness change occurs.", "An event can be positive or negative depending on the sign of the brightness change.", "We characterize an event with polarity $p_k\\in \\lbrace 0,1\\rbrace $ as a tuple $e_k=(x_k,y_k,t_k,p_k)$ that occurs at pixel $(x_k, y_k)$ at time $t_k$ .", "Modern event cameras can produce 10s of millions of events per second which renders event-by-event processing out of reach on conventional processing units.", "In this work we opt for a very simple preprocessing step to enable compatibility with convolutional neural network layers which, as we will show later in Sec.", "REF , are an important contributor to the performance of our model.", "Our preprocessing step starts with the creation of a 4-dimensional tensor $E$ .", "The first dimension consists of two components and represents the polarity.", "The second dimension has $T$ components and is associated with $T$ discretization steps of time.", "The 3rd and 4th dimension represent height and width of the event camera.", "We process set of events $\\mathcal {E}$ within a time duration $[t_a,t_b)$ the following way: $E(p, \\tau , x, y) &= \\sum _{e_k\\in \\mathcal {E}}\\delta (p - p_k)\\delta (x-x_k,y-y_k)\\delta (\\tau -\\tau _k),\\\\\\tau _k &= \\left\\lfloor \\frac{t_k-t_a}{t_b - t_a}\\cdot T\\right\\rfloor $ In words, we create $T$ 2-channel frames where each pixel contains the number of positive or negative events within one of the $T$ temporal frames.", "As a final step, we flatten the polarity and time dimension to retrieve a 3-dimensional tensor with shape $(2T, H, W)$ to directly enable compatibility with 2D convolutions.", "We implement the presented algorithm with byte tensors to save memory and bandwidth.", "Other, more sophisticated representations are possible [6], [55], [16], [2], [25], [48], [3], but their thorough evaluation is not our focus." ], [ "Mixing Spatial and Temporal Features", "The main difficulty of object detection with event cameras is that at any given time, the neural network should be able to efficiently (1) extract local- and global task-relevant features in pixel space because objects can cover both very small regions or large portions of the field of view; (2) extract features from very recent events (e.g.", "moving edges) as well as events from several seconds ago.", "This is necessary because some objects or moving slowly with respect to the camera such that they generate very few events over time.", "These observations motivate us to investigate transformer layers for spatial feature extraction and recurrent neural networks for efficient temporal feature extraction.", "Figure REF illustrates the components of a single stage." 
], [ "Spatial Feature Extraction", "The spatial feature extraction stage should incorporate a prior about the fact that pixels are arranged in a 2D grid as early as possible in the computation graph.", "We enable this by using a convolution with overlapping kernels on the input features that at the same time spatially downsamples the input or features from the previous stage.", "This convolution also endows our model with a conditional positional embedding [9] such that we do not require absolute [49], [12] or relative [30] positional embeddings.", "Our ablation study in Sec.", "REF shows that overlapping kernels lead to a substantial boost in detection performance.", "In a subsequent step, the resulting features are transformed through multi-axis self-attention.", "We quickly summarize the steps but refer to Tu et.", "al [47] for an elaborate explanation.", "Multi-axis attention consists of two stages using self-attention.", "The first stage performs local feature interaction while the second stage enables dilated global feature mixing.", "More specifically, the features are first grouped locally into non-overlapping windows: Let $X\\in \\mathbb {R}^{H\\times W\\times C}$ be the input feature tensor.", "We reshape the tensor to a shape $(\\frac{H}{P}\\times \\frac{W}{P}, P\\times P, C)$ where $P\\times P$ is the window size in which multi-head self-attention [49] is applied.", "This block attention (Block-SA in Fig.", "REF ) is used to model local interactions.", "As a next step, we would ideally be able to extract features globally.", "One straightforward way to achieve this would be applying self-attention on the whole feature map.", "Unfortunately, global self-attention has quadratic complexity in the number of features.", "Instead, we use grid attention (Grid-SA in Fig.", "REF ).", "Grid attention partitions the feature maps into a grid of shape $(G\\times G, \\frac{H}{G}\\times \\frac{W}{G}, C)$ using a $G\\times G$ uniform grid.", "The resulting windows are of size $\\frac{H}{G}\\times \\frac{W}{G}$ .", "Self-attention is then applied to these windows which corresponds to global, dilated mixing of features.", "We study alternative designs as part of our architecture in the ablation studies in Sec.", "REF ." ], [ "Temporal Feature Extraction", "We opt for temporal feature aggregation with LSTM [18] cells at the end of the stage.", "Differently from prior work [35], [25] we find that temporal and spatial feature aggregation can be completely separated.", "This means that we use plain LSTM cells such that the states of the LSTMs do not interact with each other.", "By avoiding Conv-LSTM units [41], we can drastically reduce the computational complexity and parameter count.", "I.e.", "a Conv-LSTM with kernel size $k\\times k$ and stride 1 demands $k^2$ the number of parameters and compute compared to the original LSTM cell.", "We examine this aspect in the experimental Sec.", "REF ." ], [ "Model Details", "We apply LayerNorm [24] before and LayerScale [46] after each attention and MLP module, and add a residual connection after each module.", "We found that LayerScale enables a wider range of learning rates." 
], [ "Hierarchical Multi-Stage Design", "We compose multiple RVT blocks together to form a multi-stage hierarchical backbone.", "The overall architecture is shown in Fig.", "REF .", "At first, a local temporal slice of events is processed into a 2D tensor format as formulated in the beginning of this section.", "Subsequently, each stage takes the previous features as input and optionally uses the LSTM state from the last timestep to compute features for the next stage.", "By saving the LSTM states for the following timestep, each recurrent stage can retain temporal information for the whole feature map.", "We follow prior work and use features from the second to the fourth stage for the object detection framework.", "To do so, we reshape the hidden states of the LSTMs into 2D feature maps." ], [ "Experiments", "We conduct ablations and evaluation our model on the Gen1 [11] and 1 Mpx [35] event camera datasets.", "We train two variants of our model: RVT-B the base model and a small model RVT-S on both datasets.", "Parameter details for both models are shown in Tab.", "REF ." ], [ "Implementation Details", "We initialize all layers randomly except LayerScale which is initialized to 1e-5 for each module.", "We train our models for 400k iterations with the ADAM optimizer [21] with a OneCycle learning rate schedule [43] using a 2000 warmup iterations and linear decay from a maximum learning rate or both models on both datasets and all ablation studies.", "For comparison with prior work, we use backpropagation through time (BPTT) over a sequence length of 21 for the Gen1 dataset and 11 for the 1 Mpx dataset.", "We use a batch size of 8 and a maximum learning rate of 2e-4 for the Gen1 dataset and train the model on a single Tesla V100 GPU which takes approximately 2 days.", "For the 1 Mpx dataset, we use an effective batch size of 12, a maximum learning rate of 2.45e-4 and train the model on four Tesla V100 GPUs which takes up to 3 days.", "Our data augmention includes random horizontal flipping, zooming in and zooming out applied over the BPTT sequence.", "More details on data augmentation are available in Sec.", "REF and the supplementary material.", "For evaluation, we feed the model full sequences and only reset the state of the recurrent layers at the beginning of a new sequence.", "This is different from the training setting where we only train on chunks of randomly sampled shorter sequences for BPTT.", "Finally, we use the YOLOX framework [15], which includes the IOU loss, class loss and regression loss.", "These losses are averaged both over the batch and sequence length for each optimization step.", "We discretize time into $T=10$ bins and summarize 50 milliseconds of events into an event representation." 
], [ "Datasets", "The Gen1 Automotive Detection dataset [11] consists of 39 hours of event camera recordings at a resolution of $304 \\times 240$ .", "In total, the Gen1 dataset contains 228k car and 28k pedestrian bounding boxes available at 1, 2 or 4 Hz.", "We follow the evaluation protocol of prior work [35], [25] and remove bounding boxes with a side length of less than 10 pixels and a diagonal of less than 30 pixels.", "The 1 MPx dataset [35] also features driving scenarios but provides recordings at a higher resolution of $720 \\times 1280$ over a period of several months at day and night.", "It consists of approximately 15 hours of event data labeled at a frequency of 30 or 60 Hz with a total amount of 25 million bounding box labels for three classes (car, pedestrian, and two-wheeler).", "We follow the evaluation protocol of prior work [35], [25].", "That is, we downsample the input resolution by 2 to nHD resolution ($640\\times 360$ ) and remove bounding boxes with a side length of less than 20 pixels and a diagonal of less than 60 pixels (at the original resolution).", "We provide qualitative examples of this dataset together with predictions of our base model in Fig.", "REF .", "For both datasets, mean average precision (mAP) is the main metric [28] that we consider." ], [ "Ablation Studies", "This section examines the two main contributors to the final performance of the proposed model.", "First, we investigate key components and design choices of the proposed backbone.", "Second, we study the influence of different data augmentation techniques that are compatible with our sequential problem setting.", "The ablation studies are performed on the Gen1 dataset, unless stated otherwise, and the training sequence length for BPTT is set to 11 instead of 21 to reduce the training time.", "We set the window size of window-based attention models to $8\\times 10$ at all scales in the backbone.", "This means that the window at the fourth stage covers the whole feature map.", "Our ablation studies are performed on the validation set where the best performing model is taken after 400k iterations.", "Table: Spatial Aggregation.", "Multi-axis attention leads to the best results on both the Gen1 and 1 Mpx dataset." 
], [ "Spatial Interaction", "In Tab.", "REF , we study different spatial aggregation techniques.", "For a fair comparison, we keep the LSTM and convolutional downsampling layers identical and only exchange the attention and MLP modules.", "We compare multi-axis attention with ConvNext blocks [31] and Swin transformer blocks [30].", "ConvNext is a convolutional neural network architecture that has shown competitive performance with transformer-based models on a wide range of tasks, including object detection.", "We use the default kernel size of $7 \\times 7$ as originally suggested and place three ConvNeXt blocks in each stage to approximately match the number of parameters of the reference model.", "Swin, instead, is an attention-based model that applies local self-attention in windows that interact with each other through cyclic shifting.", "We find that our Swin variant achieves better performance than the ConvNext variant, however, both are outperformed by multi-axis self-attention [47] on both the Gen1 and 1 Mpx dataset.", "This experiment suggests that global interaction at every stage (multi-axis) is advantageous to purely local interaction (Swin, ConvNext).", "Table: Downsampling Strategy.", "The usage of overlapping kernels leads to higher performance at the expense of a slight increase in the number of parameters." ], [ "Convolutional Downsampling", "The original vision transformer [12] architecture does not perform local feature interaction with convolutional layers.", "Some popular hierarchical counterparts also choose to apply downsample features without overlapping kernels [8], [30].", "In Tab.", "REF , we compare overlapping and non-overlapping convolutional kernels in both the input layer (patch embedding) and feature downsampling stage.", "While non-overlapping convolutions reduce the number of parameters, they cause a substantial drop in performance.", "Consequently, we choose overlapping kernels in all stages of the network.", "Table: LSTM kernel size.", "Conv-LSTM variants do not outperform the feature specific (1×11\\times 1) LSTM." ], [ "LSTM with Convolutions", "Prior state-of-the-art approaches on object detection with event cameras heavily rely on convolutional LSTM cells [35], [25].", "We revisit this design choice and experiment with plain LSTM cells and a depthwise separable Conv-LSTM variant [36].", "The depthwise separable Conv-LSTM first applies a depthwise separable convolution on both the input and hidden state before a point-wise ($1\\times 1$ ) convolution is applied.", "Our results in Tab.", "REF suggest that plain LSTM cells are sufficient in our model and even outperform both variations.", "This is to some degree surprising because both variants are a strict superset of the plain LSTM.", "We decide to use a plain LSTM cell based on these observations.", "Table: LSTM placement.", "LSTM cells contribute to the overall performance even in the early stages.Table: Comparisons on test sets of Gen1 and 1 Mpx datasets.", "Best results in bold and second best underlined.", "Brackets (·)(\\cdot ) in runtime indicate the inference time of the backbone without detection head.", "A star * ^* suggests that this information was not directly available and estimated based on the publications.", "Runtime is measured in milliseconds for a batch size of 1.", "We used a T4 GPU for RVT to compare against indicated timings in prior work , on comparable GPUs (Titan Xp)." 
], [ "LSTM Placement", "In this ablation we study the influence of using temporal recurrence only in a subset of stages or not at all.", "For all comparisons, we leave the model exactly the same but reset the states of the LSTMs at selected stages in each timestep.", "This way, we can simulate the absence of recurrent layers while keeping the number of parameters constant in the comparisons.", "The results in Tab.", "REF suggest that using no recurrence at all leads to a drastic decline of detection performance.", "Enabling the LSTMs in each stage, starting from the fourth consistently leads to enhanced performance.", "Surprisingly, we find that adding an LSTM to the first stage also leads to improvements, albeit the increase in mAP is not large.", "In general, this experiment suggests that the detection framework benefits from features that have been augmented with temporal information.", "Based on our observations, we decide to keep the LSTM also in the first stage.", "Table: Data Augmentation.", "Data augmentation consistently improves the results." ], [ "Data Augmentation", "While data augmentation is not directly related to the model itself, it greatly influences the final result as we will illustrate next.", "Here, we investigate three data augmentation techniques that are suitable for object detection on spatio-temporal data: Random (1) horizontal flipping, (2) zoom-in, and (3) zoom-out.", "Zoom-in augmentation randomly selects crops that contain at least one full bounding box at the final timestep of the BPTT sequence (i.e.", "during training).", "This crop is then applied to the rest of the sequence before the crops are rescaled to the default resolution.", "This procedure ensures that we have at least a single label to compute the loss function while maintaining the same resolution during training.", "Zoom-out augmentation resizes the full input to a lower resolution and randomly places the downscaled input in a zero-tensor initialized at the default resolution.", "This procedure is then applied in an identical way to the remaining BPTT sequence.", "Table REF shows that our model is performing poorly if no data augmentation is applied.", "Overall, we find that data augmentation is important to combat overfittig not only on the Gen1 sequence but also on the 1 Mpx dataset.", "The most effective augmentation is zoom-in, followed by zoom-out and horizontal flipping.", "Based on these results, we decide to apply all data augmentation techniques.", "We report the specific hyperparameters in the supplementary material.", "Figure: Predictions on the 1 Mpx dataset.", "All examples are thematically picked to illustrate the behaviour of the model in different scenarios.", "(d) shows a scenario in which the model can still partially detect objects in absence of events due the temporal memory." 
], [ "Benchmark Comparisons", "In this section, we compare our proposed neural network architecture against prior work on both the Gen1 [11] and 1 Mpx dataset [35] and summarize the results in Tab.", "REF .", "We train two models, a base model (RVT-B) with approximately 18.5 million parameters and a small variant (RVT-S) with 4.4 million parameters by reducing the channel dimensions by two in each stage.", "Their architectural hyperparameters are outlined in Tab.", "REF .", "To compare with prior work, we choose the models based on their best performance on the validation set and evaluate them on the test set.", "From Tab.", "REF we can draw multiple conclusions.", "First, we observe that models using recurrent layers consistently outperform other approaches, both sparse (GNNs, SNNs) and dense feed-forward models without recurrent layers (Inception+SSD, RRC-Events, YOLOv3 Events) by an mAP of more than 10 on both datasets.", "One notable exception is MatrixLSTM [6] which applies LSTM cells directly at the input.", "In contrast, RED [35] and ASTMNet [25] employ recurrent layers only in deeper layers.", "Our base model achieves a new state-of-the-art performance of 47.5 mAP on the Gen1 dataset and 47.0 mAP on the 1 Mpx dataset.", "ASTMNet claims comparable results on both datasets albeit at the cost of using a much larger backbone and increased inference time.", "The RED model, also reports favorable results, but achieves 7.5 lower mAP on the Gen1 and 4 mAP lower mAP on the 1 Mpx dataset compared to our model.", "Finally, our small model is amongst the smallest in our comparison.", "Still, it achieves 4.3 higher mAP on the Gen1 dataset than the RED model while using 5 times fewer parameters." ], [ "Inference Time", "We also compute the inference time of our model in PyTorch eager mode on a T4 GPU with a batch size of 1.", "Unfortunately, both RED and ASTMNet are not open source such that we cannot directly compare inference time on the same GPU model.", "Instead, we use the timings provided by the authors that conducted their timing experiments on comparable GPUs (e.g.", "Titan Xp).", "We report the timing results of both our base and small model in Tab.", "REF and also visualize them in Fig.", "REF .", "With an inference time of 10.4 ms on the Gen1 dataset ($304\\times 240$ ) our base model yields a latency reduction of approximately 6 milliseconds compared to RED and over 3 times lower inference time than ASTMNet.", "On the the 1 Mpx dataset, using a higher resolution of $640\\times 360$ , our base model runs 3 times faster than RED and over 5 times faster than ASTMNet.", "Our small model keeps the inference time on both datasets below 10 milliseconds.", "We also report the inference time of the backbone alone, by excluding computation dedicated to the detection framework.", "The numbers in brackets in the timing column indicate that the backbone typically occupies about half of the total inference time.", "The inference times of both model variants are quite similar, although the small model uses approximately 4 times less compute.", "This suggests that a proper deployment (e.g.", "with TensorRT) could reduce the overhead of PyTorch eager execution and further reduce the overall latency." 
], [ "Discussion and Limitations", "We use a very simple event representation which does not leverage the full potential of event-based data.", "For example, we only have a weak prior on the order of events because we process the temporal dimension directly with fully connected layers.", "Recent work has shown substantial gains by introducing temporal convolutions in early layers [25].", "Efficient low-level processing of event data is still an open research problem that we have not addressed in this work.", "Our approach currently only uses event streams to detect objects.", "Frames yield complementary information that, when properly incorporated, will yield significantly enhanced detection performance.", "For example, in Fig.", "REF (d) we can see that our model can retain information over some period when no events are available.", "Still, the memory of the network will fade and detection performance deteriorates.", "High quality frames even at low frame-rate could provide the missing complementary information.", "Hence, we believe that a multi-modal extension of our method on a suitable dataset is a promising next step." ], [ "Conclusion", "We introduced a novel backbone architecture for object detection with event cameras.", "The architecture consists of a stage design that is repeatedly applied to create a multi-stage hierarchical neural network.", "Each stage compactly incorporates convolutional priors, local- and sparse global attention and recurrent feature aggregation.", "Our experiments highlight that recurrent vision transformers can be trained from scratch to reach state-of-the-art performance in object detection with event cameras.", "The resulting canonical stage-design is directly compatible with existing detection frameworks, and paves the way to low-latency object detection with event cameras on conventional hardware.", "Nonetheless, we hope that this work also inspires novel designs in future neuromorphic systems." ], [ "Acknowledgment", "This work was supported by Huawei Zurich Research Center; by the National Centre of Competence in Research (NCCR) Robotics (grant agreement No.", "51NF40-185543) through the Swiss National Science Foundation (SNSF), and the European Research Council (ERC) under grant agreement No.", "864042 (AGILEFLIGHT)." ], [ "Overview", "Here we supplement the main paper with details regarding data augmentation parameters in section A- (A- refers to the sections in this appendix) and additional experiments in section A-.", "More specifically, section A-REF studies skip connections for the LSTM cells and section A-REF discusses the cross-dataset generalization capabilities with a deployment on the DSEC [17] dataset." 
], [ "Data Augmentation Details", "We apply three data augmentation techniques to train our models from scratch.", "Table REF summarizes the probability of each augmentation being used on an individual sample.", "For each sample, we may apply horizontal flipping and also apply a zoom augmentation subsequently.", "For the zoom augmentation we draw from a Bernoulli distribution to indicate whether we apply the augmentation at all.", "If zoom augmentation shall be applied, we randomly choose between zoom-in or zoom-out augmentation based on the respective probability.", "For the zoom augmentations, there is also the parameter that defines the magnitude with which the augmentation is applied.", "A magnitude of 1 means that no zoom is applied while a magnitude greater than 1 indicates how much zooming in or zooming out is applied with respect to the original resolution.", "The magnitude that we finally apply is drawn from a continuous uniform distribution with bounds min and max.", "Table: Data Augmentation Parameters.", "The probability defines the Bernoulli distribution from which we draw the decision whether to apply this augmentation on a given sample." ], [ "Additional Experiments", "This section provides two additional experiments that did not fit into the main paper.", "First, we briefly discuss an ablation on the possibility of using residual LSTM layers.", "Second, we follow with a qualitative study of cross-dataset generalization using our model trained on the 1 Mpx dataset." ], [ "Residual LSTM Ablation", "Our model employs LSTM cells [18] without skip/residual connections in the model.", "We also experimented with adding skip connections to the LSTM cells on the Gen1 [11] dataset.", "Table REF shows that adding a residual connection to the LSTM cells leads to worse results.", "We hypothesize that this residual connection hampers the LSTM's ability to control the mixture of incoming (current timestep) and retained temporal features (previous timesteps).", "For example, it would be difficult for the residual-LSTM combination to ignore the incoming feature because the output of the LSTM is simply added to this feature.", "Without the residual connection, the LSTM could simply set the input gate to 0 to ignore the input.", "Table: LSTM with and without residual connection.", "Using a skip connection over the LSTM cells leads to worse results." 
], [ "Cross-Dataset Generalization: From 1 Mpx to DSEC", "DSEC is a dataset that features event cameras and global shutter cameras close to each other.", "Unlike the 1 Mpx dataset [35] which was recorded mostly urban scenarios in Paris, DSEC [17] provides recordings from urban and rural regions in Switzerland.", "Furthermore, the event camera used in the DSEC dataset is a Gen 3 prophesee event camera instead of a Gen 4 camera as in the 1 Mpx dataset.", "In this section, we qualitatively show that our model can be successfully deployed in a different environment and using different event cameras.", "We deploy RVT-B, trained on the 1 Mpx dataset , on several sequences of the DSEC dataset to qualitatively assess the cross-dataset generalization.", "While DSEC does not provide object detection labels, we can visually assess the quality of the detections by using the provided calibration files to map the frames of the global shutter camera to the event camera view.", "Figure REF and REF show predictions of our model together with the images closest in time to the detections.", "Figure REF shows our model can successfully detect cars in mountainous environments.", "In particular, Figure REF (a) shows an HDR scene where the global shutter frame is overexposed such that the approaching car is barely visible.", "Due to the high dynamic range of the event camera, our model can detect the approaching car without any difficulty.", "Figure REF features more urban environments where our model also manages to detect objects correctly.", "Figure: Prediction examples on the DSEC dataset featuring a mountainous environment.", "Frames are shown only for visualization purposes and are not used by the model.", "Column (a) shows a typical high-dynamic range (HDR) scenario where the vehicle is exiting a tunnel with a car approaching from outside the tunnel.", "The HDR capabilities of the event cameras enables our model to detect the approaching car.", "Column (b) shows a scenario with a wet road and challenging reflections.Figure: Prediction examples on the DSEC dataset featuring a (sub-)urban environments.", "Frames are shown only for visualization purposes and are not used by the model.", "Column (a) illustrates a typical urban situation where pedestrians, two-wheelers, cars and other road users occupy the street simultaneously.", "In column (c), we show a failure case of our model where a street pillar is erroneously detected and classified as a two-wheeler." ], [ "Discussion of Failure Cases", "By and large, our model can successfully detect objects on DSEC even though it has only been trained on the 1 Mpx dataset.", "Still, we found failure cases that might stem from distribution shift between the datasets.", "For example, Figure REF (c) shows the erroneous detection of a two-wheeler that instead is a pillar on the street.", "Overall, the model is good at detecting cars but is less confident and accurate at detecting two-wheelers and pedestrians.", "This effect likely stems from the fact that the 1 Mpx dataset has almost twice as many car labels as pedestrian and two-wheeler labels combined.", "The submission comes with two videos showing the predictions on two separate sequences on DSEC for a better qualitative assessment." 
], [ "Gen1 {{cite:1a57c7d3860ad2c81ecd07530cb1f14e26ad31f9}}", "“Prophesee Gen1 Automotive Detection Dataset License Terms and Conditions”: https://www.prophesee.ai/2020/01/24/prophesee-gen1-automotive-detection-dataset/" ], [ "1 Mpx {{cite:b7f6a351fa8f5aea53b2ab10daf2026d7fc7796f}}", "“Prophesee 1MegaPixel Automotive Detection Dataset License Terms and Conditions”: https://www.prophesee.ai/2020/11/24/automotive-megapixel-event-based-dataset/" ], [ "DSEC {{cite:eade248fcd80be136c742eef47239dceb3a1892f}}", "“Creative Commons Attribution-ShareAlike 4.0 International public license (CC BY-SA 4.0)”: https://dsec.ifi.uzh.ch/" ] ]
2212.05598
[ [ "UV-driven Chemistry as a Signpost for Late-stage Planet Formation" ], [ "Abstract The chemical reservoir within protoplanetary disks has a direct impact on planetary compositions and the potential for life.", "A long-lived carbon-and nitrogen-rich chemistry at cold temperatures (<=50K) is observed within cold and evolved planet-forming disks.", "This is evidenced by bright emission from small organic radicals in 1-10 Myr aged systems that would otherwise have frozen out onto grains within 1 Myr.", "We explain how the chemistry of a planet-forming disk evolves from a cosmic-ray/X-ray-dominated regime to an ultraviolet-dominated chemical equilibrium.", "This, in turn, will bring about a temporal transition in the chemical reservoir from which planets will accrete.", "This photochemical dominated gas phase chemistry develops as dust evolves via growth, settling and drift, and the small grain population is depleted from the disk atmosphere.", "A higher gas-to-dust mass ratio allows for deeper penetration of ultraviolet photons is coupled with a carbon-rich gas (C/O > 1) to form carbon-bearing radicals and ions.", "This further results in gas phase formation of organic molecules, which then would be accreted by any actively forming planets present in the evolved disk." ], [ "Protoplanetary disks are the natal environments for planets.", "Disks have three main components: a pebble-rich dusty midplane (dust grain radius $>\\sim $ 1mm), a gaseous atmosphere extending well above[73] and radially beyond (by a factor of $\\sim $ 2[6]) the pebble-rich midplane, and a small dust population (radius $<$ 10 $\\mu $ m) that is coupled to the gas.", "Each component of the protoplanetary disk has an impact on shaping the chemistry of actively forming planets.", "The solid cores of giant planets must form over a short timescale ($\\sim $ 1 Myr) for the eventual planet to obtain its full mass over the course of a typical lifetime of gas in a disk (3-10 Myr[48]) and to explain the widely observed gap and ring structures that are thought to be indicative of planet formation[5].", "The compositions of pebbles and their icy mantles directly influence the final composition of a solid planetary core [78], [59].", "After a core becomes sufficiently massive, planets start to accrete material from the gaseous reservoir surrounding the pebble-rich midplane of a protoplanetary disk to form their atmospheres [62].", "It remains difficult to directly probe the gas within the planet-forming midplane due to the high dust densities leading to elevated dust optical depths that mask line emission, as well as cold temperatures which leads to the freezing out common gas tracers such as CO onto dust grains.", "However, constraining the chemical environment of the planet-forming midplane is essential to connect sub-mm observations probing the warm intermediate regions above the disk midplane to the composition of actively forming planets.", "The molecules $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ are two of many complex organic molecules (loosely defined as a molecules with at least four atoms, including multiple carbon atoms) that could act as basic precursors to prebiotic molecules[82], [83], [87].", "$\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ have been observed and spatially resolved towards the protoplanetary disks around six young stars: GM Aur, AS 209, HD 163296, MWC 480, LkCa 15, and V4046 Sgr [13], [55], [60], [79].", "$\\textrm {CH}_{3}\\textrm {CN}$ has also been 
observed and spatially resolved toward the TW Hya disk[66], and a couple of small ($\\approx $ 4 source) surveys have detected unresolved $\\textrm {CH}_{3}\\textrm {CN}$ or $\\textrm {HC}_{3}\\textrm {N}$ emission from other young stellar objects[28].", "These molecules exhibit bright emission signifying high gas phase abundances and column densities (N$_{\\rm {Total}}$ =10$^{12}$ -10$^{13} \\rm {cm}^{-2}$ [13], [55]).", "$\\textrm {CH}_{3}\\textrm {CN}$ is an excellent probe of gas temperature due to multiple transitions tracing a range of energy states [lowest energy state E$_{L} \\approx $ 5 K, spanning lower energy states over $\\Delta $ E$_{L}$ =$\\sim $ 100 K] that can be observed simultaneously.", "Each J-transition (where J is the rotational quantum number) has a series of K-ladder transitions (where K is the quantum number of angular momentum along the molecular axis) which are sensitive only to collisions; thus the ratio between K-transitions depends only on the gas density and temperature.", "The K-transitions span a large range of temperatures and are sufficiently close in frequency to observe in one spectral setting with the Atacama Large Millimeter/submillimeter Array (ALMA).", "Analyses of rotational diagrams[44] made from $\\textrm {CH}_{3}\\textrm {CN}$ observations demonstrate an origin in gas with a temperature between 25-50 K[55], well below the expected desorption temperature (100-124 K[30]).", "Thus, the bright observed flux from this species and from similar nitriles and organics like $\\textrm {HC}_{3}\\textrm {N}$ and CH$_{2}$ CN[26] presents a chemical conundrum: these molecules should not be present in the gaseous state; rather, they should be frozen on cold grain surfaces.", "Where $\\textrm {CH}_{3}\\textrm {CN}$ and other carbon-rich molecules reside and how they are replenished in the gas phase has a substantial impact on our understanding of prebiotic enrichment into gaseous planet atmospheres.", "The traditional solution to this apparent discrepancy has been to turn to grain-surface chemistry.", "Simple carbon- and nitrogen-bearing molecules can undergo hydrogenation or other reactions to form complex organics on the surface of a grain that then non-thermally desorb from the grain intact.", "This solution has been used to explain the organic inventories locked onto pebbles and icy grains [78], [12].", "This dust chemistry path requires both an intact photodesorption rate of $\\textrm {CH}_{3}\\textrm {CN}$ from the grains of 10$^{-3}$ molecules/photon[93], [66] and a reactive desorption efficiency[92] of 1%.", "Laboratory experiments find the intact photodesorption of $\\textrm {CH}_{3}\\textrm {CN}$ to be orders of magnitude less efficient (10$^{-5}$ molecules/photon[10]), and reactive desorption has not been well studied in laboratory experiments for these species.", "We posit here that there is a simpler alternative based solely on disk evolution processes that have been previously theorized and observed which link the disk gas chemistry to planet formation.", "This involves two ingredients: (1) an elevated C-to-O ratio in the gas and (2) greater penetration of UV photons due to a reduction in the total surface area of the small grain population as pebbles form.", "These combine to power a state of photochemical equilibrium.", "There has been mounting evidence for an elevated C-to-O gas phase ratio at large radial distances in evolved and cool gas-rich protoplanetary disks.", "Brighter than expected emission from small hydrocarbons such as C$_{2}$ H [72], [21] and 
complex organics such as c-C$_{3}$ H$_{2}$ [29] in disks and $\\textrm {HC}_{3}\\textrm {N}$ and $\\textrm {CH}_{3}\\textrm {CN}$ in photon-dominated regions[65] have independently suggested a C-to-O ratio well above the solar abundance ratio (C-to-O$\\approx $ 0.55) [7].", "As a radical, C$_{2}$ H has a short lifetime ($<$ 1000 years), but it is found to be abundant in the gas.", "Bosman et al.", "2021[21] find that the only way to reproduce the high column densities observed was to increase the C-to-O ratio throughout the full disk to between 1-2.", "They found that it was also necessary for C$_{2}$ H to exist in high density gas while remaining above the disk midplane (height/radius$\\approx $ 0.1-0.2) where UV photons dominate the high-energy photon budget.", "Together, this produces a long-lived carbon-rich gas phase cycle in photochemical equilibrium[21].", "One critical factor is that C$_2$ H is predicted and is observed to exist above the disk midplane[64].", "In contrast, $\\textrm {CH}_{3}\\textrm {CN}$ has been observed to emit closer to the midplane[55].", "Extending the carbon-rich chemical environment that is used to explain C$_{2}$ H observations to the midplane could allow for gas phase formation of complex molecules in the planet-forming zone.", "Elevated C-to-O ratios would be necessary to supply the materials to create hydrocarbons and nitriles.", "However, this alone would not alleviate the issue of $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ existing in the gas at temperatures at which they should be frozen and locked onto grains.", "A non-thermal desorption mechanism is needed to increase the number of molecules that are being desorbed from the grains, either intact or as fragments of larger molecules.", "If the disk atmosphere is small-dust rich, with a gas-to-dust ratio close to the typical interstellar medium (ISM) mass ratio (gas-to-dust = 100) then UV radiation cannot penetrate deep into the disk.", "In this case, only high energy radiation (X-rays and cosmic rays) can penetrate the dust and gas and drive the chemistry within and near the midplane.", "Small dust grains are the main opacity source for UV photons which are readily produced by the young and active star.", "As the protoplanetary disk evolves, small dust grains agglomerate and eventually settle downwards and drift inward radially as they grow in size[3], [17], decreasing the total surface area of the small grain population.", "As the main opacity source begins to deplete, UV photons can penetrate deeper into the disk.", "Here, complete desorption of $\\textrm {CH}_{3}\\textrm {CN}$ or $\\textrm {HC}_{3}\\textrm {N}$ is inhibited due to the dense and cold environment[65].", "This mechanism would have an effect on all molecules residing on grains, and a brief discussion regarding this can be found in the Supplemental Materials, in the section `Implications on other molecules'.", "The combined effect of a high C-to-O ratio and excess UV flux allows for complex molecules to exist within the gas phase at cold, midplane temperatures.", "Thus bright hydrocarbon and nitrile (i.e.", "C$_{2}$ H, CH$_{3}$ CN) emission from the cold midplane acts as a signpost for an evolved dust population, coincident with an advanced stage of planet formation including the accumulation of gas-giant atmospheres.", "The chemical scenario we put forward, comparing early and late-stage disk environments is shown in Figure REF .", "The disks observed in the Molecules with ALMA at Planet-forming Scales (MAPS) 
large program[80] provide plausible support for this theory.", "The youngest disks in the MAPS sample reside around IM Lup and AS 209 which are on the order of 1-2 Myr old[80].", "Towards the youngest source, IM Lup, there is no detection of either $\\textrm {CH}_{3}\\textrm {CN}$ or $\\textrm {HC}_{3}\\textrm {N}$ .", "AS 209 has detections of $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ , which suggest both molecules emit from z/r$>$ 0.1.", "The disk systems surrounding stars $>$ 6 Myr have $\\textrm {CH}_{3}\\textrm {CN}$ detected at z/r$<$ 0.1 as determined by the modeled thermal structure and detected rotational temperature from the K-ladder transitions.", "This complex organic-rich midplane will influence the atmospheres of planetary companions.", "It is worth noting that the oldest disk systems in the MAPS program exist around Herbig stars while the youngest systems are T Tauri stars.", "The UV-bright spectrum innate to Herbig stars may additionally influence the push of complex molecules towards the planet-forming midplane.", "Currently, the MAPS sample contains the bulk of the resolved $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ data towards protoplanetary disks.", "To explore the effect of the stellar spectrum on $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ emission, more disks with a wide range of stellar host masses would need to be observed.", "Figure: A schematic highlighting the physical evolution of a disk and how that physical environment can affect the chemistry.", "At the top, we show a disk with a large amount of small dust that acts to block UV photons.", "As the small dust settles, UV photons make their way deeper into the disk, allowing for photodesorption of complex species off grains.", "Now, there is a cycle of carbon chemistry that can be observed in the gas phase.", "To test the validity of this proposed end-stage chemistry, we first explored single point models representative of the disk midplane with corresponding cold temperatures and high gas densities (approximately 35 K, 5$\\times $ 10$^{11}$ molecules cm$^{-3}$ ).", "We varied the gas-to-dust mass ratio and initial carbon abundance among other physical and chemical variables including the nitrogen abundance, dust extinction, and ionization rate.", "We found that the gas phase $\\textrm {CH}_{3}\\textrm {CN}$ abundance was the most sensitive to the gas-to-dust ratio and initial gas phase carbon abundance.", "We then produced a thermo-chemical model representing the disk around the Herbig Ae star HD 163296.", "This disk is old (approx. 7.6 Myr)[14], nearby (101 pc)[42], and has observed jets and winds [19], [96].", "The HD 163296 disk has been widely observed in multiple gas and dust tracers with high spatial resolution ($\\sim $ 10 au) with clear gap and ring structures typically assumed to be associated with active planet formation[5], [80].", "There is bright $\\textrm {CH}_{3}\\textrm {CN}$ emission coming from $\\sim $ 35 K gas as well as bright emission from $\\textrm {HC}_{3}\\textrm {N}$ and HCN[55], [15].", "Our modeling efforts follow those of Zhang et al. 2021[97] and Calahan et al. 2021b[25], which set up a thermo-chemical model of the HD 163296 disk by reproducing the mm dust-continuum observations, the spectral-energy distribution (SED), the full line intensity and morphology of six CO isotopologue transitions, and the vertical distribution of the optically thick lines.", "The model of HD 163296 started with a gas-to-small 
dust ratio of the disk equal to 500[97] which was needed to reproduce the disk's SED.", "Table 1 -- Modeling Parameters: TW Hya & HD 163296
Parameter : TW Hya (Gas / Small Dust / Large Dust) : HD 163296 (Gas / Small Dust / Large Dust$^{a}$ )
Mass (M$_{\\odot }$ ) : 0.025 / $1.0\\times 10^{-4}$ / $4.0\\times 10^{-4}$ : 0.14 / $2.6 \\times 10^{-5}$ / 0.024
$\\Psi $ : 1.1 / 1.2 / 1.2 : 1.08 / 1.08 / 1.08
$\\gamma $ : 0.75 / 0.75 / 1.0 : 0.8 / 0.8 / 0.1
h$_{c}$ (au) : 42 / 42 / 8.4 : 8.44 / 8.44 / n/a
r$_{c}$ (au) : 400 / 400 / 400 : 165 / 165 / n/a
r$_{in}$ (au) : 0.1 / 0.5 / 1 : 0.45 / 0.45 / 0.45
r$_{out}$ (au) : 200 / 200 / 200 : 600 / 600 / 240
Final values of the TW Hya and HD 163296 models that reproduce CO, HD, $\\textrm {CH}_{3}\\textrm {CN}$ , HCN, and $\\textrm {HC}_{3}\\textrm {N}$ observations when available.", "r$_{in}$ and r$_{out}$ are the radial inner and outer limits of the disk, beyond these limits there is assumed to be no gas nor dust.", "$^{a}$ The surface density of the large dust distribution in HD 163296 is empirically set by continuum observations, thus it is not smooth and it is not dictated by the parametric equations.", "To produce this end-stage chemical environment, we deplete the small dust mass by a factor of 10 throughout the disk of HD 163296 making the new gas-to-dust mass ratio above the pebble disk midplane equal to 5,000.", "We then enhanced the initial gas phase carbon abundance in the system in the form of C, CH$_{4}$ , or C$_{2}$ H, increasing the overall gas phase carbon-to-oxygen ratio in both disks to above unity.", "There are two possible sources for excess carbon in protoplanetary disks, either through the destruction of refractory carbon grains[22] or CO depletion through mechanisms such as reactions with ionized molecules and atoms such as He$^{+}$ or H$_{3}^{+}$[86] and a series of chemical and freeze-out processes.", "We note that while signatures of CO depletion are found in the HD 163296 disk[97], [25], CO destruction likely only occurs in environments with a high cosmic-ray flux [2$\\times $ 10$^{-17}$ s$^{-1}$ ][86].", "Additionally, CO destruction would supply equal amounts of carbon and oxygen, while we seek to enhance carbon over oxygen.", "Our thermo-chemical model is run for a full megayear (Myr), after which the chemistry reaches an equilibrium.", "Notable chemical feedback due to dust evolution is predicted to occur over scales of $\\sim $ 1 Myr[61], [91], thus we use $\\sim $ 1 Myr as an approximate length of time for the chemical environment to transition from `early-stage' to `late-stage' dust evolution and subsequently exist in a state of photochemical equilibrium.", "Gas phase chemical reactions and rates were taken from the chemical network derived in Bosman et al.", "2018[20] which in turn relies on the UMIST Database for Astrochemistry Rate12 version[70] (see Methods section for more details).", "We found that regardless of the carrier of carbon, using a C/O = 1-2 and a factor of 10 depletion of small dust roughly reproduces line ratios and fluxes of the radial intensity profiles of $\\textrm {CH}_{3}\\textrm {CN}$ , $\\textrm {HC}_{3}\\textrm {N}$ , and HCN (see Figure REF ).", "In this proposed scenario, $\\sim $ 96% of the total mass of $\\textrm {CH}_{3}\\textrm {CN}$ continues to reside frozen out onto grains, but the increase in UV flux allows for sufficient $\\textrm {CH}_{3}\\textrm {CN}$ to exist and be formed in the gas to reproduce observed radial intensity profiles and column densities.", "Figure REF shows the two-dimensional number densities of $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ 
.", "The inner $\\sim $ 20 au exhibited brighter or more centrally peaked emission than was seen in observations if the C-to-O ratio = 2 throughout the full disk.", "To counteract this, we set the C-to-O ratio = 0.47, or the ISM ratio, inside of ($\\sim $ 20 au) where Zhang et al.", "2021[97] found an ISM ratio of $\\textrm {H}_{2}$ /CO (See Supplementary Figure 2).", "The inner disk emission remains slightly brighter than observed in HCN and $\\textrm {CH}_{3}\\textrm {CN}$ (see Figure REF ) with this alteration.", "This could be accounted for via a depletion in the nitrogen abundance within the N$_{2}$ ice line, a strong buildup of pebbles around the water ice line ($\\sim $ 5au), a lower HCN desorption energy, or a combination of these effects.", "Additionally, the observations may also be affected by processes such as beam smearing and a higher dust opacity than modeled.", "A comparison model without a depletion in small dust mass is shown in Supplementary Figure 5.", "Figure: Radial intensity profiles of the observed complex organic molecules and HCN towards the disk around HD 163296.", "The molecules shown are CH 3 CN \\textrm {CH}_{3}\\textrm {CN}, HCN, and HC 3 N\\textrm {HC}_{3}\\textrm {N} (left to right with two J transition of CH 3 CN \\textrm {CH}_{3}\\textrm {CN} shown in the first two panels).", "Solid thick lines in the background correspond to the observations derived from Ilee et al 2021 and Guzmán et al.", "2021 (which utilized Law et al.", "2021) while dashed lines correspond to modeled radial profiles.", "Our final model includes an increase in C/O ratio beyond 20 au and a depletion of small dust and represents 1 Myr of chemistry.", "This model can simultaneously fit the flux, line ratios, and general morphology of the observed radial profiles of these complex organic molecules.Figure: The radial and vertical number density distributions of CH 3 CN \\textrm {CH}_{3}\\textrm {CN} and HC 3 N\\textrm {HC}_{3}\\textrm {N} The density distributions were determined by our final HD 163296 model with an increase in C/O ratio beyond 20 au and a depletion of small dust.", "This distribution produced the radial profiles shown in Figure .", "A z/r<<0.1 is good approximation for the midplane of the disk, and both CH 3 CN \\textrm {CH}_{3}\\textrm {CN} and HC 3 N\\textrm {HC}_{3}\\textrm {N} emit partially from the midplane.", "A comparison can be made with Figure which lacks the inclusion of small dust depletion.$\\textrm {HC}_{3}\\textrm {N}$ and $\\textrm {CH}_{3}\\textrm {CN}$ are built up in the gas from simple carbon and nitrogen-based volatiles (see Methods section “Chemical Reactions\" for details).", "A reservoir of these more simple molecules is maintained in the gas phase due to photodesorption from dust grain surfaces by an enhanced UV-field.", "The main destruction products from $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ are simple volatiles that can cycle back to create larger nitrile molecules including their original parent molecule.", "While there are enough complex organic molecules in the gas phase to observe bright emission, the majority of the $\\textrm {CH}_{3}\\textrm {CN}$ , $\\textrm {HC}_{3}\\textrm {N}$ , and HCN near the midplane still remains frozen out onto grains.", "The carbon-rich gas reservoir and UV-dominated disk together allow for a cycle of carbon chemistry to remain active in the gas phase.", "This implies that actively accreting gas giants will build their atmosphere out of this complex nitrile-enhanced material.", 
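The qualitative link between small-grain depletion and UV penetration invoked above can be illustrated with a one-line vertical optical depth estimate. The Python sketch below is not part of the published RAC2D model: the UV dust opacity and the local gas surface density are assumed round numbers, chosen only to show that the UV optical depth through the small-grain population scales inversely with the gas-to-small-dust mass ratio.

```python
# Illustrative vertical UV optical depth through the small-grain population.
# kappa_uv (UV opacity per gram of small dust) and sigma_gas (local gas
# surface density) are assumed placeholder values, not model outputs.
kappa_uv = 1.0e4        # cm^2 per g of small dust (assumed)
sigma_gas = 10.0        # g cm^-2 of gas at some outer-disk radius (assumed)

for g2d in (100.0, 500.0, 5000.0):   # ISM-like, early-stage, small-dust depleted
    tau_uv = kappa_uv * sigma_gas / g2d
    print(f"gas-to-small-dust = {g2d:6.0f}  ->  tau_UV ~ {tau_uv:8.1f}")
```

With these illustrative numbers, depleting the small dust by a factor of 10 (gas-to-dust 500 to 5,000) lowers the UV optical depth by the same factor, which is the effect that allows UV photons to reach the cold gas near the midplane in the late-stage model.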
"Gas giant planets with atmospheres containing a chemical make-up with a C-to-O ratio $>$ 1 can be explained by the natural evolution of dust and the observed high C-to-O ratios within protoplanetary disks.", "This environment could be extended to the inner disk in some cases, as studies such as Najita et al.", "2011[76] and Anderson et al.", "2021[2] posit a higher than solar C-to-O ratio within the inner disk to account for Spitzer observations of HCN and C$_{2}$ H$_{2}$ around T Tauri stars.", "The implications of this end-stage chemistry are far-reaching.", "Primarily, we put forward a new chemically and physically coupled picture of planet formation and its direct impact on the chemistry of forming planet atmospheres.", "Within the early stages of planet formation, pebble growth and accretion into solid planet cores has begun, and chemistry onto these cores are influenced primarily by the chemical make-up of large pebbles and their icy mantles.", "The chemistry active in the midplane during this stage of formation is dominated by X-rays and cosmic rays[95].", "After timescales of order 1 Myr, at the end-stage of pebble formation, small grains have grown and settled towards the midplane, allowing for UV photons to penetrate deeper in the disk than previously assumed.", "This excess UV flux in concert with an above solar C-to-O ratio produces a carbon-rich gas phase cycle of production and destruction in the gas surrounding actively forming planets.", "Bright emission from $\\textrm {CH}_{3}\\textrm {CN}$ , $\\textrm {HC}_{3}\\textrm {N}$ and other carbon-rich molecules emitting at cold temperatures are a sign-post of this evolved chemical environment.", "These forming gas giants may accrete this surrounding gas and its chemical signature into their atmosphere, and may be responsible for high C-to-O measurements that have been observed in exo-planetary atmospheres as compared to their host star[23].", "Secondly, the deeper penetration of UV photons pushes the CO and H$_{2}$ self-shielding layers deeper into the disk.", "Due to this, there are more ions such as C$^{+}$ and H$^{+}$ throughout the disk than would have been present with a regular (ISM) level of small dust surface area.", "As the ion-fraction increases so does the area of the disk that is subject to magneto-rotational instability (MRI).", "In our final model, the UV field increases by a factor of 3-5 in the atmosphere and between 1-2 near the midplane, and thirteen times more gas mass is MRI-active, most notably the gas midplane within 10 au, and then at a z/r=0.1-0.2 within 30 au (see Figure 4).", "This could be a source of accretion[43] for older disks.", "Our proposed UV-dominated carbon-rich gas phase chemistry is a major shift in our understanding of the astrochemistry of planet formation.", "This transition will have a strong effect on the composition of actively forming planets, disk MRI-activity, and subsequent disk accretion.", "Figure: A comparison of the MRI active region within a non-elevated UV environment and an elevated UV environment.", "The two left-most plots show regions within the HD 163296 disk which are MRI activated in an environment where small dust is depleted by a factor of  10 in total mass (gas/dust = 5,000 middle plot) and where small dust is not depleted (gas/dust=500 left).", "The color corresponds to the gas density in mol/cm 3 ^{3}, highlighting the substantial increase in mass that is MRI-active.", "The right-most plot shows a comparison of the UV field strength through a `regular' disk and 
a dust depleted disk.", "The UV field increases upwards of a factor of 3-5 in the atmosphere of the disk and 1-2 below a z/r=0.2.", "This has a notable effect on the chemistry of the midplane.", "For a thorough description of the observations and the techniques used to obtain the CLEAN-ed images and radial profiles of $\\textrm {CH}_{3}\\textrm {CN}$ , HCN, and $\\textrm {HC}_{3}\\textrm {N}$ towards HD 163296 see Czekala et al. 2021[31], Law et al. 2021[63], and Öberg et al. 2021[80].", "A description of the observations of $\\textrm {CH}_{3}\\textrm {CN}$ towards TW Hya can be found in Loomis et al. 2018a[66].", "A brief summary of these observations is as follows.", "The data for HD 163296 is from the MAPS large program (Project ID 2018.1.01055.L)[80].", "The image cubes were produced using the tclean task in the Common Astronomy Software Applications (CASA) package, version 6.1.0[71].", "Keplerian masks based on the disk geometric parameters were used in the CLEANing process and the final images were then corrected for the Jorsater & van Moorsel effect to ensure that the image residuals are in units consistent with those of the CLEAN model.", "For all lines, we used the beam-circularized and uv-tapered images, which had synthesised beam sizes of 0.\"3.", "All radial intensity profiles were generated by deprojecting and azimuthally-averaging zeroth moment maps using the GoFish python package.", "$\\textrm {CH}_{3}\\textrm {CN}$ was observed towards TW Hya as a part of ALMA project 2016.1.01046.S.", "Each emission line was individually imaged using CLEAN and the synthesised beam for each transition was matched using small uv-tapers to 1$.\\!\\!^{\\prime \\prime }$ 05 x 0$.\\!\\!^{\\prime \\prime }$ 83.", "A Keplerian mask was used to extract the flux for each transition.", "See Loomis et al.", "2018a[66] for more details.", "Simple single point models of the disk environment were utilized to quickly and efficiently understand the chemical impacts of different physical and initial chemical conditions.", "Our chemical network is derived from Bosman et al.", "2018[20] which in turn is derived from the UMIST “RATE12” network[70], and for this study we disregarded most grain-surface chemical reactions in order to isolate gas-phase formation of $\\textrm {CH}_{3}\\textrm {CN}$ .", "After our results from the modeling suggested a higher gas/dust ratio and carbon content, we turned to more comprehensive thermo-chemical codes which model the thermal physics and chemical evolution throughout the whole disk.", "The code RAC2D (https://github.com/fjdu/rac-2d)[35] was used to create models of the disks around TW Hya and HD 163296.", "The 2D temperature, density, and molecular abundance results were used to simulate observations of each disk with a raytracing code: RADMC-3D[37].", "A brief description of the physical code of RAC2D is given below; a detailed description of the code can be found in Calahan et al. 2021a[24].", "RAC2D takes into account a gas and dust structure and stellar radiation field and computes the gas and dust temperature and chemical structure over time.", "Our model consists of three mass components: gas, small dust, and large dust grains.", "The spatial extent of each component is given by a global surface density distribution [68], which is widely used in protoplanetary disk modeling and corresponds to the self-similar solution of a viscously evolved disk.", "$\\Sigma (r)=\\Sigma _{c}\\left(\\frac{r}{r_{c}}\\right)^{-\\gamma }\\exp 
{\\left[-\\left(\\frac{r}{r_{c}}\\right)^{2-\\gamma }\\right]},$ $r_{c}$ is the characteristic radius at which the surface density is $\\Sigma _{c}/\\rm {e}$ where $\\Sigma _{c}$ is the characteristic surface density, and $\\gamma $ is the power-law index that describes the radial behavior of the surface density.", "A 2D density profile for the gas and dust populations can be derived from the surface density profile and a scale height: $\\rho (r,z) = \\frac{\\Sigma (r)}{\\sqrt{2\\pi }h(r)} \\exp {\\left[-\\frac{1}{2}\\left(\\frac{z}{h(r)}\\right)^{2}\\right]},$ $h=h_{c}\\left( \\frac{r}{r_{c}}\\right)^{\\Psi } ,$ where $h_{c}$ is the scale height at the characteristic radius, and $\\Psi $ is a power index that characterizes the flaring of the disk structure.", "The modeling parameters used for TW Hya and HD 163296 are shown in Table 1.", "For both TW Hya and HD 163296 models, each dust population follows an Mathis, Rumpl, and Nordsieck (MRN) grain distribution $n(a) \\propto a^{-3.5}$ [69], where `a' indicates the size of the grain.", "The small dust grains have radii between $5 \\times 10^{-3}$ - $1 \\mu $ m, and the large grains have radii between $5 \\times 10^{-3}$ - $10^{3} \\mu $ m. The large dust population is settled in the midplane with a smaller vertical extent and radial extent (gas extends $\\sim $ 5 and 2.5 times above and beyond, respectively).", "This settled large grain population is the result of dust evolution, namely growth in concert with vertical settling to the midplane and radial drift.", "For the HD 163296 model, the large grain population has a unique, non-smooth, surface density profile that reproduces the millimeter continuum observations of the HD 163296 disk[58], [97].", "Opacity values for the dust are calculated based on Birnstiel et al.", "2018[18].", "Large dust grains consist of water ice[94], silicates[32], troilites and refractory organics[53].", "Small dust grains consist of 50% silicates and 50% refractory organics.", "Further discussion on the modeling efforts for TW Hya can be found in Calahan et al.", "2021a[24] and for HD 163296 in Calahan et al.", "2021b[25].", "The thermo-chemical model resulted in 2D distributions of the gas and dust temperature, density, and molecular abundances.", "These were used as inputs for the raytracing code RADMC-3D[37].", "The molecular properties of $\\textrm {CH}_{3}\\textrm {CN}$ , $\\textrm {HC}_{3}\\textrm {N}$ , and HCN were taken from the Leiden Atomic and Molecular Database (LAMDA[85]), with some molecular parameters updated according to data from the Cologne Database for Molecular Spectroscopy (CDMS[74], [75]).", "The result from RADMC-3D was a 3D image cube of the molecular emission across velocity space.", "We utilized GoFish[88] to compress these 3D images into zeroth moment maps and radial profiles which were then directly compared to the azimuthally-averaged observations.", "The chemical reaction network utilized in this work comes from Bosman et al.", "2018[20] which in turn relies on the Rate12 version of the UMIST Database for Astrochemistry[70]: a network that is widely used across astrochemical modeling efforts.", "In our network there are 6,302 gas phase reactions and we have limited the grain-surface reactions to twelve.", "We kept grain-surface reactions that involved the formation of H$_{2}$ , CH$_{2}$ OH, CH$_{3}$ OH, CO$_{2}$ , H$_{2}$ O and NH$_{3}$ due to each of these species being well-studied in the laboratory and come with strong evidence for active two-body chemistry on dust grains[1], 
[51], [45], [56], [57], [41], [77], [84] (see Table 2).", "Both thermal adsorption and desorption are taken into account for every molecule in the network.", "Our network contains molecules with at most eleven carbon atoms, and at most twelve total atoms.", "We run our thermo-chemical model for 1 Myr as the $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ chemistry reaches an equilibrium after this time period (see Supplementary Figure 3).", "In our evolved disk model, the main formation pathway for $\\textrm {CH}_{3}\\textrm {CN}$ once the chemistry has reached equilibrium is as follows: $\\rm {CH_{3}^{+} + HCN \\rightarrow CH_{3}CNH^{+} + e^{-} \\rightarrow CH_{3}CN + H}$ This reaction in the gas phase is the primary formation pathway for $\\textrm {CH}_{3}\\textrm {CN}$ , thus there is a strong reliance on HCN existing in the gas phase even at cold temperatures below its measured sublimation temperature ($\\sim $ 85-103 K[16]) and an ionization source (UV photons) to produce CH$_{3}^{+}$ .", "$\\textrm {HC}_{3}\\textrm {N}$ can be readily produced in a number of different ways, including: $\\rm {N + CH_{2}CCH \\rightarrow HC_{3}N + H_{2}}$ $\\rm {CN + C_{2}H_{2} \\rightarrow HC_{3}N + H}$ $\\rm {H + C_{3}N^{-} \\rightarrow HC_{3}N + e^{-}}$ The formation of $\\textrm {HC}_{3}\\textrm {N}$ strongly relies on the existence of carbon-rich molecules including radicals (CN) and ions (C$_{3}$ N$^{-}$ ).", "Table 2 -- Dust Surface Reactions
Reaction : Reference
gH + gH $\\rightarrow $ gH$_{2}$ : Hasegawa, Herbst, & Leung 1992[51]
gH + gOH $\\rightarrow $ gH$_{2}$ O : Ioppolo et al. 2010[57]
gH + gH$_{2}$ O$_{2}$ $\\rightarrow $ gH$_{2}$ O + gOH : Ioppolo et al. 2008[56]
gH + gCH$_{3}$ OH $\\rightarrow $ gH$_{2}$ + gCH$_{2}$ OH : Extrapolated from Fuchs et al. 2009[41]
gH + gCH$_{2}$ OH $\\rightarrow $ gCH$_{3}$ OH : Extrapolated from Fuchs et al. 2009[41]
gH$_{2}$ + gOH $\\rightarrow $ gH$_{2}$ O + gH : Oba et al. 2012[77]
gOH + gCO $\\rightarrow $ gCO$_{2}$ + gH $^{a}$ : Ruffle & Herbst 2001[84]
gO + gCO $\\rightarrow $ gCO$_{2}$ $^{a}$ : Goumans & Brown 2008[45]
gO + gHCO $\\rightarrow $ gCO$_{2}$ + gH : Goumans & Brown 2008[45]
gH + gNH$_{2}$ $\\rightarrow $ gNH$_{3}$ : Allen & Robinson 1977[1]
The complete list of dust surface reactions accounted for in this study.", "$^{a}$ These reactions have additional special treatment for three-body reactions.", "We additionally produced a thermo-chemical model representing the disk around T Tauri star TW Hya.", "TW Hya is approximately 10 Myr old[90] and hosts the closest Class II disk at 59.9 pc[42].", "TW Hya has been widely observed in multiple gas and dust tracers with high spatial resolution ($\\sim $ 10 au)[4], [54].", "It also exhibits bright $\\textrm {CH}_{3}\\textrm {CN}$ coming from $\\sim $ 33 K gas[66].", "Our modeling efforts of TW Hya follow those of Calahan et al.", "2021a[24] which sets up a thermo-chemical model of the disk by reproducing the spectral-energy distribution (SED), the full line intensity and morphology of seven CO isotopologue transitions, and an HD J=1-0 observation from the Herschel PACS instrument[11].", "To reproduce all CO radial profiles as well as the HD flux, the small dust in the upper layers of the atmosphere of the TW Hya disk was effectively depleted slightly[24], as it has a slightly lower flaring angle than the gas population.", "This was enough small dust depletion to reproduce the available $\\textrm {CH}_{3}\\textrm {CN}$ lines from Loomis et al.", "2018a given a C/O ratio equal to 1.0 (see Figure REF ).", "This 
disk is an additional piece of evidence supporting our evolved chemistry proposal.", "This introduction of a late-stage photo-chemical equilibrium was motivated by observations of $\\textrm {CH}_{3}\\textrm {CN}$ , $\\textrm {HC}_{3}\\textrm {N}$ , and HCN.", "However, the late-stage chemistry would have an effect on other molecules as well.", "We predict in addition to these organic molecules, molecules such as C$_{X}$ H, C$_{X}$ H$_{2}$ , and HC$_{X}$ N will be abundant in the gas, enhanced by the cycle of carbon chemistry that reproduces observed $\\textrm {CH}_{3}\\textrm {CN}$ and $\\textrm {HC}_{3}\\textrm {N}$ .", "The main carbon carrier, CO, is largely unaffected by the increase in photo-chemistry.", "CO is largely in the gas phase in previous models that do not include this `late-stage' chemistry and in the region in which $\\textrm {CH}_{3}\\textrm {CN}$ was added into the gas, CO was already primarily in the gas phase.", "We find a slight depletion of CO in the upper atmosphere of the disk due to the CO photodissociation layer being pushed down, and there is a slight enhancement of CO in the midplane due to the UV-enhancement.", "However this accounts for less than 1% of the total abundance of CO thus did not have a strong impact on the modeled radial profiles.", "Another key molecule is H$_{2}$ O.", "In our models, we do not initialize our model with H$_{2}$ O, thus there is very little water to be affected by this `late-stage' chemistry.", "Observational results of Du et al.", "2017[36] support this as they found a low overall abundance in H$_{2}$ O in disks with a survey of 13 protoplanetary disks.", "$\\textrm {CH}_{3}\\textrm {CN}$ nor $\\textrm {HC}_{3}\\textrm {N}$ would be seen to be at a high abundance if gas-phase H$_{2}$ O was in high abundance in the disk as it would disrupt the carbon-rich chemistry.", "The depletion of small grains will affect the observed spectral energy distribution (SED) from each disk.", "TW Hya’s dust population was not altered from that of Calahan et al.", "2021a[24] and continues to match its observed SED.", "A depletion of a factor of 10 in the small dust population around HD 163296 would cause a dimming in the mid-infrared part of the SED.", "However, the small dust abundance, the distribution of dust grain size, and the assumed dust opacity sources are degenerate in the ways they may affect the disk SED.", "We modeled protoplanetary disk SEDs using the code TORUS[49], [50].", "TORUS is a Monte Carlo radiative transfer code utilizing radiative equilibrium[67] and silicate grains[33].", "In Figure REF , we show a series of SEDs produced from models motivated by our thermo-chemical model of HD 163296 including the stellar parameters and dust distribution in Table 1 from Calahan et al.", "2021b[25].", "We find that by varying the minimum dust grain radius or the power law index, we can account for a factor of 10 in UV attenuation.", "The total population mass, grain size, and how the grain sizes are distributed are strongly degenerate and can result in uncertainties of the dust mass by a factor of at least 10.", "Thus our depletion of small dust continues to reproduce all previous observables including the SED.", "With the increase in UV flux deeper into the disk, more ions are created.", "Ions can be coupled with the magnetic fields that thread through the disk and interact with the bulk gas creating turbulence.", "This mechanism is called the magneto-rotational instability or MRI[9].", "Magneto-hydrodynamical processes are assumed to be 
present throughout protoplanetary disks due to young star’s active magnetic field, and are thought to be one of the main drivers of angular momentum transport [27].", "An MRI-active zone may drive the bulk of the mass transportation in the disk and activate accretion onto the star, two vital processes that determine the future of a young solar system.", "It is thought that planet formation may be aided within ‘dead-zones’ where MRI is non-active[46].", "The magnetic Reynolds number ($Re$ ) and ambipolar diffusion term ($Am$ ) are two quantities that help quantify the presence of an MRI-active zone.", "The Reynolds number quantifies the level of coupling between ionized gas and magnetic fields and is defined as $Re \\equiv \\frac{c_{s}h}{D} \\approx 1 \\left(\\frac{\\chi _{e}}{10^{-13}}\\right) \\left(\\frac{T}{100~\\rm {K}}\\right)^{1/2} \\left(\\frac{a}{\\rm {AU}}\\right)^{3/2} ,$ where $c_{s}$ is the sound speed, $h$ is the scale height of the disk, D is the magnetic diffusivity parameter, $\\chi _{e}$ is the electron abundance, $T$ temperature and $a$ radial location in the disk [81].", "The ambipolar diffusion term describes the coupling of ionized molecules and their interaction with neutral gas particles: $Am \\equiv \\frac{\\chi _{i} n_{\\rm {H}_{2}} \\beta _{\\rm {in}}}{\\Omega } \\approx 1 \\left(\\frac{\\chi _{i}}{10^{-8}}\\right) \\left(\\frac{n_{\\rm {H}_{2}}}{10^{10}\\rm {cm^{-3}}}\\right) \\left(\\frac{a}{\\rm {AU}}\\right)^{3/2} ,$ where $\\chi _{i}$ is the ion abundance, $n_{\\rm {H}_{2}}$ is the number density of H$_{2}$ atoms, $\\Omega $ is the dynamical time, $\\beta _{\\rm {in}}$ is the collisional rate coefficient for singly charged species to share momentum with neutral species, and $a$ is the radial location in the disk [34], [81].", "For MRI to act as a turbulent driver of neutral gas, both $Re$ and $Am$ must be sufficiently high.", "Simulations show that values between 0.1-100 for $Am$ can trigger significant coupling between ions and neutrals [52], [8].", "Models by Flock et al.", "2012[40] suggest $Re \\approx $ 3,300-5,000 is required to sustain sufficient turbulence with a critical $Re$ = 3,000.", "In our work, we assume a combined $Am >$ 100 and $Re >$ 3,000 to signify an MRI-active zone.", "We calculate the regions in which the MRI is active in the HD 163296 disk in our models with an early-stage physical environment and a late-stage environment (signified by a depletion in the small dust population mass) in Figure REF .", "Our solution allows for an increase of UV flux in the atmosphere by a factor of 3-5.", "More of the disk becomes ion-rich due to the CO and H$_{2}$ self-shielding layers being located deeper into the disk.", "The increase in the ion and electron abundance ($\\chi _{i}$ , $\\chi _{e}$ ) are the key factors that enhance the Re and Am values and thus produce additional MRI activity (see Figure REF ).", "As a result, thirteen times more mass within the disk becomes MRI-active including within the midplane.", "In the base model representing early stages of disk chemistry, the MRI activity at the midplane extends to 4 au.", "By depleting the small dust population by an order of magnitude, the MRI activity at the midplane then extends out to $\\sim $ 10 au.", "This increase in MRI activity can contribute to the reason behind why older disk systems, such as TW Hya, are actively accreting.", "Meridional flows have been identified in the HD 163296 disk located at two of the largest dust gaps (approx.", "45 and 86 AU)[89] and these vertical flows are 
coincident with the lower vertical limit of the MRI-active zone.", "The MRI activation could have an effect on the height of these flows or continue to drive the meridional flows.", "Turbulence measurements of the HD 163296 disk have been derived in Flaherty et al. 2015 and 2017[38], [39] using molecular tracers, and they find turbulent velocities to be 5% or less of the sound speed between 30-300 au and at all measured heights.", "It is not yet clear whether our MRI prediction is in tension with the observational evidence of low turbulence in HD 163296; more work needs to be done on the modeling of how MRI-active regions drive turbulence, and more observational constraints are needed in the inner 40 au where we find the strongest MRI activity.", "A minimal numerical illustration of the $Re$ and $Am$ thresholds used here is given after the Acknowledgements." ], [ "Acknowledgements", "Jenny Calahan is the corresponding author and can be contacted via jcalahan@umich.edu.", "J.K.C. acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1256260 and the National Aeronautics and Space Administration FINESST grant, under Grant no. 80NSSC19K1534.", "E.A.B. acknowledges support from NSF AAG Grant #1907653.", "A.D.B. acknowledges support from NSF AAG Grant #1907653.", "E.A.R. acknowledges support from NSF AST 1830728.", "Support for J. H. was provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51460.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.", "This work was supported by a grant from the Simons Foundation 686302 and by an award from the Simons Foundation 321183FY19, KÖ.", "This material is based upon work supported by the National Science Foundation under Grant No. AST-1907832.", "J.D.I. acknowledges support from an STFC Ernest Rutherford Fellowship (ST/W004119/1) and a University Academic Fellowship from the University of Leeds.", "C.W. acknowledges financial support from the University of Leeds, the Science and Technology Facilities Council, and UK Research and Innovation (grant numbers ST/T000287/1 and MR/T040726/1).", "V.V.G. gratefully acknowledges support from FONDECYT Regular 1221352, ANID BASAL projects ACE210002 and FB210003, and ANID, – Millennium Science Initiative Program – NCN19_171.", "We thank Tim Harries for help with and providing access to the TORUS modeling program."
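Referring back to the MRI discussion above, the short Python sketch below evaluates the approximate scalings for Re and Am quoted in the Methods and applies the thresholds adopted in this work (Re > 3,000 and Am > 100). The electron and ion abundances, temperature and density passed in at the bottom are placeholder values chosen for illustration, not outputs of the HD 163296 model.

```python
import numpy as np

def magnetic_reynolds(chi_e, T_K, a_au):
    """Approximate Re scaling from the Methods: (chi_e/1e-13)(T/100 K)^(1/2)(a/au)^(3/2)."""
    return (chi_e / 1e-13) * np.sqrt(T_K / 100.0) * a_au**1.5

def ambipolar_term(chi_i, n_h2, a_au):
    """Approximate Am scaling from the Methods: (chi_i/1e-8)(n_H2/1e10 cm^-3)(a/au)^(3/2)."""
    return (chi_i / 1e-8) * (n_h2 / 1e10) * a_au**1.5

def mri_active(chi_e, chi_i, T_K, n_h2, a_au, re_crit=3e3, am_crit=1e2):
    """MRI-active if both thresholds adopted in this work are exceeded."""
    return (magnetic_reynolds(chi_e, T_K, a_au) > re_crit
            and ambipolar_term(chi_i, n_h2, a_au) > am_crit)

# Placeholder midplane conditions at 10 au (illustrative only):
print(mri_active(chi_e=1e-10, chi_i=1e-9, T_K=35.0, n_h2=5e11, a_au=10.0))
```

Raising the ionization fraction, here supplied by the deeper UV penetration in the dust-depleted model, is what pushes both quantities over their thresholds and extends the MRI-active midplane from roughly 4 au to roughly 10 au, as described in the main text.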
], [ "Data Availability", "The data that support the findings of this study can be obtained as part of the MAPS program and are publicly available via alma-maps.info.", "Data regarding TW Hya can be obtained via data-rich figures in Calahan et al 2021 published on the online publication of the Astrophysical Journal article.", "This paper makes use of the following ALMA data: ADS/JAO.ALMA#2018.1.01055.L.", "and 2016.1.01046.S ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile.", "The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.", "The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.", "This study relied on the following publicly available coding packages: rac2d: https://github.com/fjdu/rac-2d, RADMC-3D: https://www.ita.uni-heidelberg.de/ dullemond/software/radmc-3d/, and GoFish: https://github.com/richteague/gofish.", "TORUS is a private code developed by Tim Harries and collaborators." ] ]
2212.05539
[ [ "Effects of spin orbit coupling on proximity induced superconductivity" ], [ "Abstract We investigate the effect of spin orbit coupling on proximity induced superconductivity in a normal metal attached to a superconductor.", "Specifically, we consider a heterostructure where the presence of interfaces gives rise to a Rashba spin orbit coupling.", "The properties of the induced superconductivity in these systems are addressed within the tunneling Hamiltonian formalism.", "We find that the spin orbit coupling induces a mixture of singlet and triplet pairing and, under specific circumstances, an odd frequency, even parity, spin triplet pairs can arise.", "We also address the effect of impurity scattering on the induced pairs, and discuss our results in context of heterostructures consisting of materials with spin-momentum locking." ], [ "Introduction", "Hybrid nanostructures consisting of superconductors have been intensively studied, both experimentally and theoretically.", "Such hybrid proximity structures provide a platform for the realization of novel superconducting states in the vicinity of interfaces that connect superconductors to non-superconducting materials.", "The current heightened interest in these systems is also driven by the potential of these heterostructures to host Majorana fermions[1], [2], [3], [4].", "As these Majorana fermions obey non-Abelian braiding statistics they may serve as building blocks for fault-tolerant quantum computation [5], [6], [7], [8], [9].", "Superconductor and ferromagnetic (S/F) hybrid structures have been studied heavily[10], [11], [12], [13], [14], [15], with a recent focus on proximity structures with topological materials.", "In such proximity structures, the spin orbit coupling (SOC) plays an important role.", "It is usually induced by the breaking of inversion symmetry e.g.", "through an underlying substrate or the presence of an interface.", "In a superconductor, the SOC leads to a mixing of singlet and triplet pairing.", "In this two-component superconductivity either singlet or triplet pairing can be dominating[16], [17].", "Recent observation of triplet dominant two component superconductivity in CoSi$_2$ /TiSi$_2$ /Si heterostructures has confirmed the realization of dominant triplet pairing in these structures via a substrate induced SOC[18], [19].", "The role of SOC in the S/F nanostructures has been studied extensively [20], [21], [22], [23], [24], where the junctions involved $s$ -wave superconductors.", "These studies were carried out in the quasi-classical formalism in the diffusive limit[25].", "In the diffusive limit, the most dominant energy scale in the problem is the elastic impurity scattering i. e. 
the impurity scattering rate $\\tau ^{-1} \\gg \\Delta , \\tau ^{-1}_{sf}, \\tau ^{-1}_{in} $ , where $\\Delta $ is the superconducting gap and $\\tau ^{-1}_{sf/in}$ is the spin-flip/inelastic scattering rate.", "In such a regime, an even frequency - even parity - spin singlet superconductor (EES) induces EES pairs in the diffusive normal metal, and an even frequency - odd parity - spin triplet superconductor (EOT) leads to the formation of odd frequency - even parity - spin triplet pairs in the diffusive metal, as long as the interface is non-magnetic in nature[26].", "A non-magnetic interface prevents triplet to singlet conversion.", "Our main objective is to understand the properties of proximity-induced superconductivity in metals with sizable SOC.", "We will examine the stability of the proximity-induced superconductivity against weak disorder and analyze the emergence of odd frequency pairs.", "In this paper, we focus on the effect of SOC on induced superconductivity in a normal non-magnetic metal that is connected to an unconventional superconductor.", "For concreteness, we consider a Rashba SOC that is induced by the underlying substrate beneath the normal metal component.", "One reason for this particular choice of SOC interaction is that it can be generated and controlled by applying a gate voltage to the heterostructure.", "Here, we focus on the structure of the induced superconductivity in the normal metal connected to a triplet superconductor.", "In what follows, we adopt the tunneling Hamiltonian formalism[27], [28], [29], [30], [31], [32], [33].", "In the next section, we introduce the basic model and the theoretical methods.", "The subsequent section provides a discussion of our results.", "The final section summarizes the key qualitative conclusions.", "Figure: Schematic illustration of a superconductor - normal junction.", "The interface is along the $yz$ plane." 
], [ "Model & Formalism", "Figure REF shows a schematic diagram of the superconductor - normal (SN) junction.", "The interface is located at the $x=0$ plane.", "We consider a SOC that is induced by the substrate and take the $\\hat{z}$ axis to be parallel the normal to the substrate.", "The Hamiltonian for the normal component reads, $\\hat{H}_N &=& \\sum _{\\mathbf {k},\\sigma } c^\\dagger _{k\\sigma } \\left[ \\xi _\\mathbf {k} \\delta _{\\sigma \\sigma ^{\\prime }}+ \\hat{H} _{Rashba} \\right] c_{k\\sigma ^{\\prime }},$ where $c^\\dagger /c$ is the electron creation/annihilation operator, $\\xi _\\mathbf {k} $ is the electronic dispersion, $\\mathbf {k}$ denotes the momentum and $\\sigma $ represents the electron spin.", "We denote a $4\\times 4$ matrix in Nambu-spin space with $\\check{\\square }$ while $\\hat{\\square }$ indicates a $2\\times 2$ matrix in the spin space.", "The Rashba SOC term reads, $\\hat{\\mathcal {H}}_{Rashba} &=& -\\frac{\\alpha }{m} \\left( \\sigma \\times \\mathbf {k}\\right)\\cdot \\hat{z} = \\epsilon _N \\left( \\hat{z} \\times \\hat{\\mathbf {k}}\\right)\\cdot \\mathbf {\\sigma } , \\\\\\epsilon _N&=&\\frac{\\alpha |\\mathbf {k}|}{m}.$ Here, $\\alpha $ is the Rashba SOC coupling constant, $m$ is the effective mass and $\\sigma $ is the Pauli vector ($\\sigma _x,\\sigma _y,\\sigma _z$ ), where $\\sigma _{x/y/z}$ are the Pauli matrices in spin space.", "The two helical bands generated by this term have energies $\\xi _\\mathbf {k}\\pm \\epsilon _N$ .", "The Hamiltonian for the superconductor reads, $\\check{H}_{sc} &=& \\Psi ^\\dagger \\begin{pmatrix}\\xi _{\\mathbf {k}} & \\hat{\\Delta } \\\\\\hat{\\Delta }^\\dagger & -\\xi _{\\mathbf {k}}\\end{pmatrix} \\Psi $ where $\\xi _{\\mathbf {k}}$ is the dispersion in the superconductor, $\\Psi ^\\dagger =(a^\\dagger _{\\uparrow \\mathbf {k}},a^\\dagger _{\\downarrow \\mathbf {k}},a_{\\uparrow -\\mathbf {k}},a_{\\downarrow -\\mathbf {k}})$ , where $a^\\dagger /a$ is the creation/annihilation operator.", "The gap $\\hat{\\Delta }=\\Delta i\\sigma _y$ for the singlet case while for the triplet case $\\hat{\\Delta }=\\Delta \\mathbf {d}\\cdot \\sigma i \\sigma _y$ .", "Here, $\\mathbf {d}$ is the $d$ -vector of the triplet paring.", "The tunneling Hamiltonian is, $\\check{H}_{tunneling} = \\gamma \\Psi ^\\dagger \\check{\\tau }_3 \\Phi + h. 
c.$ where $\\check{\\tau }_3=\\mathrm {diag}(1,1,-1,-1)$ is a matrix in Nambu spin space, $\\Phi $ is the analog of $\\Psi $ in the normal metal and $\\gamma $ is the tunneling matrix element.", "Here, we assume $\\gamma $ to be spin and momentum independent and take it to be real.", "The mean-field expression for the Green's function of the superconducting component is $\\check{G}_{sc} = \\frac{1}{\\omega ^2-\\xi ^2-|\\Delta |^2} \\begin{pmatrix}(\\omega +\\xi ) \\sigma _0 & \\hat{\\Delta } \\\\\\hat{\\Delta }^\\dagger & (\\omega -\\xi ) \\sigma _0\\end{pmatrix}.$ Here, $\\sigma _0$ is the $2\\times 2$ identity matrix in spin space and, for notational convenience, we abbreviate $\\xi _{\\mathbf {k}}$ as $\\xi $ .", "The tunneling self-energy for the normal side of the junction at the interface reads, $\\check{\\Sigma }_t(k_{||},\\omega ) &=& |\\gamma |^2 \\int \\frac{d k_\\perp }{2 \\pi } \\frac{1}{\\omega ^2-\\xi ^2-|\\Delta |^2} \\nonumber \\\\&\\times & \\begin{pmatrix}(\\omega +\\xi ) \\sigma _0 & -\\hat{\\Delta } \\\\-\\hat{\\Delta }^\\dagger & (\\omega -\\xi ) \\sigma _0\\end{pmatrix}.$ Here, $k_\\perp $ is the momentum component perpendicular to the SN interface and $k_{||}$ is the momentum parallel to it.", "The self-energy has the same general structure as the Green's function in the superconductor.", "The integration over the momentum component perpendicular to the interface may modify $\\mathbf {d}$ to $\\tilde{\\mathbf {d}}$ depending on the junction geometry.", "To study the robustness of the induced pairs against disorder, we consider point-like impurities randomly distributed in the normal metal.", "Within a self-consistent T-matrix approximation, which incorporates all scattering processes from a single impurity site, the impurity self-energy contribution is, $\\check{\\Sigma }_{imp}( \\omega ) &=& n_{imp} \\check{\\tau }_3 {V}_{imp}\\left[ \\check{1}- \\check{\\mathbf {g}} \\check{\\tau }_3 {V}_{imp} \\right]^{-1},$ where $n_{imp}$ is the impurity concentration, $V_{imp}$ is the impurity potential, $\\check{1}$ is the $4\\times 4$ identity matrix and $\\check{\\mathbf {g}}$ is $\\check{\\mathbf {g}} &=& \\int _{\\mathbf {k}} \\check{\\mathbb {G}},$ where $\\check{\\mathbb {G}}&=& \\frac{\\check{1}}{\\check{G}_0^{-1}-\\check{\\Sigma }_t( k_{||},\\omega )-\\check{\\Sigma }_{imp}(\\omega )}.$ Here $\\check{G}_0$ is the normal metal bare Green's function, and the impurity self-energy is calculated self-consistently." 
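As a quick numerical sanity check on the normal-metal Hamiltonian defined above, the Python sketch below builds the 2x2 matrix xi_k*sigma_0 + eps_N (z_hat x k_hat).sigma for one in-plane momentum and verifies that its eigenvalues are the helical energies xi_k -/+ eps_N. The parabolic dispersion and the numerical values of alpha, m and the chemical potential are placeholders, not parameters taken from this work.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rashba_bands(kx, ky, alpha=0.5, m=1.0, mu=1.0):
    """Eigenvalues of xi_k*sigma_0 + eps_N (z_hat x k_hat).sigma (placeholder units)."""
    k = np.hypot(kx, ky)
    xi = k**2 / (2.0 * m) - mu              # assumed parabolic dispersion
    eps_n = alpha * k / m                   # Rashba splitting eps_N = alpha|k|/m
    wx, wy = -ky / k, kx / k                # z_hat x k_hat = (-k_y, k_x, 0)/|k|
    h = xi * np.eye(2) + eps_n * (wx * sx + wy * sy)
    return np.linalg.eigvalsh(h), (xi - eps_n, xi + eps_n)

numeric, analytic = rashba_bands(0.7, 0.3)
print(numeric, analytic)   # both give the two helical bands xi_k -/+ eps_N
```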
], [ "SN junction with a singlet superconductor", "First, we consider a pure singlet superconductor attached to the normal metal component discussed above.", "In that case, $\\hat{\\Delta }=\\Delta i \\sigma _y$ and the tunneling self-energy is given by, $\\check{\\Sigma }_t(k_{||},\\omega ) = |\\gamma |^2 \\begin{pmatrix}(\\Sigma _0 + \\Sigma _3)\\sigma _0 & i \\sigma _y {\\Sigma }_1 \\\\-i \\sigma _y {\\Sigma }_1 & (\\Sigma _0 - \\Sigma _3)\\sigma _0.\\end{pmatrix}$ In the case of a momentum independent $\\Delta $ and for a particle-hole symmetric system, the tunneling self-energy becomes, $\\check{\\Sigma }_t(k_{||},\\omega ) = -\\Gamma _t \\frac{1}{\\sqrt{\\Delta ^2-\\omega ^2}} \\begin{pmatrix}\\omega \\sigma _0 & i \\sigma _y {\\Delta } \\\\-i \\sigma _y \\Delta &\\omega \\sigma _0\\end{pmatrix}.$ Here $\\Gamma _t\\equiv \\pi |\\gamma |^2 \\nu _s$ is the energy scale associated with the tunneling process, where $\\nu _s$ is the normal state density of states (DOS) of the superconductor at the Fermi level.", "Inclusion of particle-hole asymmetry gives finite $\\Sigma _3$ , that can be absorbed in the chemical potential.", "As it turns out, the presence of particle-hole asymmetry does not lead to significant qualitative difference.", "The normal metal Green's function in the clean limit is, $\\check{\\mathbb {G}} &=& \\begin{pmatrix}\\left[ \\bar{\\omega } - {\\xi } \\right] \\sigma _0 - \\epsilon _N (\\mathbf {w} \\cdot \\sigma ) & - i\\Sigma _1 \\sigma _y \\\\i \\Sigma _1 \\sigma _y & \\left[\\bar{\\omega } + {\\xi } \\right] \\sigma _0 - \\epsilon _N (\\mathbf {w} \\cdot \\sigma ^\\ast )\\end{pmatrix}^{-1}, \\nonumber \\\\\\check{\\mathbb {G}} &=& \\begin{pmatrix}\\hat{\\mathbb {G}}_{11} & \\hat{\\mathbb {G}}_{12} \\\\\\hat{\\mathbb {G}}_{21} & \\hat{\\mathbb {G}}_{22}.\\end{pmatrix}$ Here, $\\bar{\\omega }=\\omega -\\Sigma _0$ and $\\mathbf {w}=\\hat{z}\\times \\hat{\\mathbf {k}}$ .", "The components of $\\check{\\mathbb {G}} $ are, $\\hat{\\mathbb {G}}_{11} &=& \\frac{1}{2} \\left[ \\mathcal {R}_+ + \\mathcal {R}_- \\right] + \\frac{1}{2} \\left[ \\mathcal {R}_+ - \\mathcal {R}_- \\right] (\\mathbf {w} \\cdot \\sigma ), \\\\\\mathcal {R}_\\pm &=& \\frac{\\bar{\\omega } + (\\bar{\\xi } \\pm \\epsilon _N)}{\\bar{\\omega }^2-(\\bar{\\xi } \\pm \\epsilon _N)^2 -\\Sigma _1^2}, \\\\\\hat{\\mathbb {G}}_{12} & =& \\left(\\frac{1}{2} \\left[ \\frac{1}{D_+} + \\frac{1}{D_-}\\right] + \\frac{1}{2} \\left[ \\frac{1}{D_+} - \\frac{1}{D_-}\\right] \\mathbf {w} \\cdot \\sigma \\right), \\nonumber \\\\& \\times & ( i \\sigma _y \\Sigma _1) \\\\D_\\pm &=& \\bar{\\omega }^2-(\\bar{\\xi } \\pm \\epsilon _N)^2 -\\Sigma _1^2.$ The structure of the induced pairs can be obtained from the anomalous Green's function $\\hat{\\mathbb {G}}_{12} $ .", "The first term of $\\hat{\\mathbb {G}}_{12}$ in Eq.", "() is the spin-singlet, even parity and even frequency component.", "This is the conventional proximity effect for a singlet superconductor.", "The second term of $\\hat{\\mathbb {G}}_{12} $ is a spin-triplet, odd parity and even frequency term.", "This term is directly proportional to the strength of SOC.", "Due to finite SOC, spin rotational symmetry is broken which allows for a mixing of singlet and triplet terms.", "The $\\mathbf {d}$ -vector for triplet pairing is determined by the SOC vector $\\mathbf {w}$ .", "At this point, we can include the effect of impurity scattering.", "The impurity self-energy depends on the momentum integrated Green's function.", "Since $\\mathbf {w}$ is an odd function of 
momentum, the terms linear in $\\mathbf {w}$ vanish in the momentum integrated Green's function for a singlet superconductor.", "As a result, the momentum integrated Green's function is, $\\int _\\mathbf {\\mathbf {k}} \\check{\\mathbb {G}} &=& -\\pi N_0 \\frac{1}{\\sqrt{\\Sigma _1^2-\\bar{\\omega }^2}} \\begin{pmatrix}\\bar{\\omega } \\sigma _0 & \\Sigma _1 i \\sigma _y \\\\-i \\sigma _y \\Sigma _1 & \\bar{\\omega } \\sigma _0\\end{pmatrix},$ where $N_0$ is the normal metal DOS at the Fermi level.", "The impurity self-energy can be expressed as, $\\check{\\Sigma }_{imp} &=&\\begin{pmatrix}{\\Sigma }_{imp0} \\sigma _0 & i \\sigma _y \\Sigma _{imp1} \\\\-i \\sigma _y \\Sigma _{imp1} & {\\Sigma }_{imp0} \\sigma _0,\\end{pmatrix}, \\\\\\Sigma _{imp0} &=& n_{imp}\\frac{ g_0 V^2}{1- V^2 (g_0^2 - g_1^2)}, \\\\\\Sigma _{imp1} &=& n_{imp}\\frac{-g_1 V^2}{1- V^2 (g_0^2 - g_1^2)},$ where we have neglected the $\\Sigma _3$ component of the impurity self energy, which vanishes for a particle hole symmetric system.", "We can define impurity renormalized energy and induced off-diagonal self-energy as, $\\tilde{\\omega }&=& \\omega - \\Sigma _0(\\omega ) - \\Sigma _{imp0}(\\tilde{\\omega }), \\\\\\tilde{\\Sigma }_1 &=& \\Sigma _1(\\omega ) + \\Sigma _{imp1}(\\tilde{\\omega }).$ Here, the fully dressed Green's function is used to calculate $\\tilde{\\omega }$ , $\\tilde{\\Sigma }_1$ self-consistently.", "The equations for renormalized $\\tilde{\\Sigma }_1$ and $\\tilde{\\omega }$ are identical as those for an $s$ -wave superconductors with nonmagnetic impurities.", "Now, we rewrite the impurity and tunneling dressed Green's function $\\check{\\mathbb {G}}$ by replacing $\\bar{\\omega }$ and $\\Sigma _1$ with $\\tilde{\\omega }$ and $\\tilde{\\Sigma }_1$ , respectively.", "After some straightforward algebra, we get $\\tilde{\\omega }/\\tilde{\\Sigma }_1 = \\bar{\\omega }/\\Sigma _1$ .", "This ensures that the induced pairs remain robust against nonmagnetic disorder.", "Next, we consider the interface DOS given by $N(\\omega )=N_0 \\mathrm {Im} \\frac{\\tilde{\\omega }}{\\sqrt{\\tilde{\\Sigma }_1^2-\\tilde{\\omega }^2}}.$ Fig.", "REF (a) shows the interface DOS for several values of the tunneling energy scale $\\Gamma _t$ .", "In the weak tunneling regime ($\\Gamma _t\\ll \\Delta $ ), the effective gap in the DOS is determined by $\\Gamma _t^2/\\Delta $ , and in the strong tunneling regime ($\\Gamma _t\\gg \\Delta $ ), the effective gap becomes identical to the size of the gap of the superconductor.", "The sub-dominant triplet component does not induce any low energy sub-gap states, instead the low energy DOS is mainly controlled by the singlet order parameter, which is an isotropic s-wave in the present case.", "The effect of impurity scattering is depicted in Fig.", "REF (b), which shows no change in the DOS with increasing impurity scattering rate $\\Gamma _{imp}\\equiv n_{imp}\\pi N_0 V^2/(1+\\pi ^2 V^2 N_0^2) $ .", "Figure: Proximity induced gap in the normal metal.", "(a) Variation of normalized DOS with strength of tunneling energy scale Γ t \\Gamma _t.", "(b) Normalized density of states at the interface as a function of energy for several values of impurity scattering rate." 
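A compact numerical sketch of the singlet-junction results above: it builds the tunneling-dressed omega_bar and Sigma_1, iterates the self-consistency for omega_tilde and Sigma_1_tilde using the combined rate Gamma_imp, and evaluates the normalized interface DOS N(omega)/N_0. Energies are measured in units of Delta and all parameter values are illustrative; the printed table is only meant to show that the DOS is insensitive to Gamma_imp, as argued in the text.

```python
import numpy as np

def interface_dos(omega, delta=1.0, gamma_t=2.0, gamma_imp=0.0,
                  eta=5e-3, n_iter=400, mix=0.5):
    """Normalized interface DOS N(omega)/N_0 for the singlet SN junction.

    Uses the tunneling-dressed quantities of the singlet case and a damped
    fixed-point iteration of the impurity self-consistency; energies are in
    units of the superconducting gap Delta.
    """
    w = omega + 1j * eta                   # retarded frequency
    s = np.sqrt(delta**2 - w**2)
    wbar = w + gamma_t * w / s             # omega - Sigma_0, with Sigma_0 = -Gamma_t*omega/s
    sig1 = gamma_t * delta / s             # magnitude of the anomalous tunneling self-energy

    wt, st = wbar, sig1                    # start the iteration from the clean values
    for _ in range(n_iter):
        root = np.sqrt(st**2 - wt**2)
        wt = mix * (wbar + gamma_imp * wt / root) + (1 - mix) * wt
        st = mix * (sig1 + gamma_imp * st / root) + (1 - mix) * st

    return np.imag(wt / np.sqrt(st**2 - wt**2))

print("omega/Delta   Gimp=0.0  Gimp=0.5  Gimp=2.0")
for w0 in (0.3, 0.6, 1.5, 2.5):
    row = [interface_dos(w0, gamma_imp=g) for g in (0.0, 0.5, 2.0)]
    print(f"{w0:11.2f}" + "".join(f"{d:10.3f}" for d in row))
```

Because the self-consistent ratio omega_tilde/Sigma_1_tilde equals omega_bar/Sigma_1, the three columns coincide, reflecting the robustness of the induced gap against nonmagnetic disorder discussed above.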
], [ "SN junction with a triplet superconductor", "In a triplet superconductor the order parameter is $\\hat{\\Delta }=\\Delta \\mathbf {d}\\cdot \\sigma i\\sigma _y$ , where the $\\mathbf {d}$ -vector describes the pair structure in spin space.", "Here, we restrict ourselves to unitary pairing ($\\mathbf {d}\\times \\mathbf {d}^\\ast =0$ ) in the superconducting component of the heterojunction.", "In this case, we find for the tunneling self-energy, $\\check{\\Sigma }_t(k_{||},\\omega ) = |\\gamma |^2 \\begin{pmatrix}\\Sigma _0 \\sigma _0 & {\\Sigma }_1 \\tilde{\\mathbf {d}} \\cdot \\sigma i \\sigma _y \\\\-i \\sigma _y {\\Sigma }_1 \\tilde{\\mathbf {d}} ^\\ast \\cdot \\sigma & \\Sigma _0 \\sigma _0\\end{pmatrix},$ where the components are given by, $\\Sigma _0 & = & |\\gamma |^2 \\int \\frac{d k_\\perp }{2 \\pi } \\frac{\\omega }{\\omega ^2-\\xi ^2-\\Delta ^2 | \\mathbf {d}|^2}, \\\\\\mathbf {\\Sigma }_1 & = & |\\gamma |^2 \\int \\frac{d k_\\perp }{2 \\pi } \\frac{-\\Delta \\mathbf {d}}{\\omega ^2-\\xi ^2-\\Delta ^2 | \\mathbf {d}|^2 } = \\Sigma _1 \\tilde{\\mathbf {d}}.$ As before in the singlet case, we ignore the $\\Sigma _3$ self-energy.", "The $\\mathbf {d}$ -vector is an odd function of momentum and $\\mathbf {\\Sigma }_1$ is also an odd function of the momentum parallel to the interface ($k_{||}$ ), and it does not depend on the momentum component normal to the interface ($k_\\perp $ ).", "The Green's function in the normal metal can be written as, $\\check{\\mathbb {G}} &=& \\begin{pmatrix}\\left[ \\bar{\\omega } - {\\xi } \\right] \\sigma _0 - \\epsilon _N (\\mathbf {w} \\cdot \\sigma ) & - {\\Sigma }_1 \\tilde{\\mathbf {d}} \\cdot \\sigma i \\sigma _y \\\\i \\sigma _y {\\Sigma }_1 \\tilde{\\mathbf {d}} ^\\ast \\cdot \\sigma & \\left[\\bar{\\omega } + {\\xi } \\right] \\sigma _0 - \\epsilon _N (\\mathbf {w} \\cdot \\sigma ^\\ast )\\end{pmatrix}^{-1} \\nonumber \\\\&=& \\begin{pmatrix}\\hat{\\mathbb {G}}_{11} & \\hat{\\mathbb {G}}_{12} \\\\\\hat{\\mathbb {G}}_{21} & \\hat{\\mathbb {G}}_{22}.\\end{pmatrix}$ The general structure of the off-diagonal Green's function has the form, $\\hat{\\mathbb {G}}_{12} &\\propto & A_0 \\sigma _0 + A_1 \\mathbf {w}\\cdot \\sigma + A_2 \\tilde{\\mathbf {d}} \\cdot \\sigma + A_3 \\tilde{\\mathbf {d}} ^\\ast \\cdot \\sigma \\nonumber \\\\& & + A_4 \\left( \\tilde{\\mathbf {d}} \\times \\mathbf {w} \\right) \\cdot \\sigma ,$ where the specific values of the set $\\lbrace A_i\\rbrace $ (i=0,...4) will depend on geometry of the junction and the specifics of the $\\mathbf {d}$ -vector which in turn determine $\\tilde{\\mathbf {d}}$ .", "In general, $\\tilde{\\mathbf {d}} \\times \\tilde{\\mathbf {d}} ^\\ast $ may not vanish despite unitary pairing in the superconductor ($\\mathbf {d}\\times \\mathbf {d}^\\ast =0$ ).", "Eq.", "(REF ) has singlet and triplet terms, but due to the anisotropic nature of the gap in the superconductor details of geometry and pairing structure is essential to understand the structure of induced pairs in the normal metal.", "To proceed, we will consider a few pertinent cases." 
], [ "$\\mathbf {d}$ -vector {{formula:cd423bf1-efa1-4737-9b31-b95841a50102}}", "Motivated by the recent experiments on CoSi$_2$ /TiSi$_2$ /Si heterostructures[18], [34], [19], we first consider the case of a $\\mathbf {d}$ -vector that is parallel to the SOC vector $\\mathbf {w}$ .", "The experimental results for CoSi$_2$ /TiSi$_2$ on a Si substrate indicate the presence of a dominant triplet superconducting state in CoSi$_2$ with $\\mathbf {d}$ -vector along $\\mathbf {w}$ .", "Due to the SOC which is induced by the Si substrate, there will also be a finite but weak singlet component in the superconductor which we ignore for now.", "The general case of a mixed parity superconductor will be addressed in subsection REF .", "For side-by-side coupled junctions as the one illustrated in Fig.", "REF (a), the tunneling self-energies are, $\\Sigma _0 &=& \\Sigma _0(\\omega ,k_{||}), \\\\\\mathbf {\\Sigma }_1 &=& - k_y \\Sigma _{1}(\\omega ,k_{||}) \\hat{x} = \\Sigma _1 \\tilde{\\mathbf {d}} ,$ where $\\tilde{\\mathbf {d}} = -k_y \\hat{x}$ and the normal and anomalous part of the Green's functions are, $\\hat{G}_{11} &=& \\frac{1}{D} \\left[ \\left( a_0 b_+ b_- - b_0\\Sigma _1^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}} \\right) \\sigma _0 \\right.", "\\nonumber \\\\& &- \\epsilon _N \\left( b_+ b_- +\\Sigma _1^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}} \\right)(\\mathbf {w} \\cdot \\sigma ) \\nonumber \\\\&& \\left.", "-2\\epsilon _n \\Sigma _1^2 (\\tilde{\\mathbf {d}}\\cdot \\mathbf {w}) \\tilde{\\mathbf {d}} \\cdot \\sigma \\right] \\\\\\hat{G}_{12} &=& \\hat{\\mathtt {G}}_{12}\\frac{1}{D} i \\sigma _y \\\\\\hat{\\mathtt {G}} _{12}&=& (2 \\epsilon _N \\xi _\\mathbf {k} {\\Sigma }_1 \\tilde{\\mathbf {d}} \\cdot \\mathbf {w} ) \\sigma _0 \\nonumber \\\\&& -2 \\epsilon _N^2 {\\Sigma }_1 (\\tilde{\\mathbf {d}} \\cdot \\mathbf {w}) \\mathbf {w} \\cdot \\sigma \\nonumber \\\\&&+ ( \\bar{\\omega }^2 -\\xi _{\\mathbf {k}}^2 +\\epsilon _N^2 -{\\Sigma }_1^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}} ) {\\Sigma }_1 \\tilde{\\mathbf {d}} \\cdot \\sigma \\nonumber \\\\&&-2 i \\bar{\\omega } \\epsilon _N {\\Sigma }_1 (\\tilde{\\mathbf {d}} \\times \\mathbf {w} ) \\cdot \\sigma \\\\D&=& (\\bar{\\omega }^2 -\\xi _{\\mathbf {k}}^2+\\epsilon _N^2-{\\Sigma }_1^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}} )^2 \\nonumber \\\\&& + 4 \\epsilon _N^2\\left[ {\\Sigma }_1^2 (\\tilde{\\mathbf {d}} \\cdot \\mathbf {w})^2 -\\bar{\\omega }^2\\right].$ Here $a_0 = \\bar{\\omega }-\\xi _\\mathbf {k}$ , $b_0 = \\bar{\\omega }+\\xi _\\mathbf {k}$ and $b_\\pm $ is $\\bar{\\omega }+\\xi _\\mathbf {k}\\pm \\epsilon _N$ .", "Eq.", "() contains an even parity ($\\propto k_y^2$ ), even frequency singlet term that arises due to a non-zero SOC and disappears in the limit of vanishing SOC.", "Apart from an expected triplet pairing with $\\tilde{\\mathbf {d}} \\cdot \\sigma $ , a non-vanishing SOC brings about another kind of triplet pairing with $\\mathbf {w}\\cdot \\sigma $ structure.", "Both these triplet components are even in frequency and have odd parity.", "The novel pairing that arises due to the SOC is described by the last term in Eq.", "() which is $\\propto \\left( \\tilde{\\mathbf {d}} \\times \\mathbf {w} \\right)\\cdot \\sigma $ and possesses a momentum dependence $\\propto k_x k_y$ .", "This term describes triplet pairs that are have even parity and are odd in frequency.", "This term is absent when the SOC vanishes.", "This term also vanishes for junctions possessing top-bottom geometry as 
the one shown in Fig.", "REF (b), since $\tilde{\mathbf {d}} \,||\, \mathbf {w}$ in this geometry.", "Thus, there are only singlet and triplet components with an even frequency structure in this junction geometry, where the singlet pairs are generated by the SOC.", "Figure: Two different SN junction geometries.", "Figure: The local density of states at the interface for several values of the normal state scattering rate.", "The parameter $c$ , the cotangent of the $s$ -wave scattering phase shift ($c\equiv \cot \theta _s$ ), is 1 for panel (a) and $0.01$ for panel (b).", "Next, we include the effect of impurity scattering as described in Sec.", ".", "We consider a two-dimensional electron gas and a side-by-side coupled geometry.", "For a two-dimensional superconductor with an order parameter characterized by $\mathbf {d}=\hat{z}\times \mathbf {k}$ , the Fermi surface is fully gapped.", "In that case, the self-energy components are $\Sigma _0 = -\Gamma _t \frac{\omega }{\sqrt{\Delta ^2-\omega ^2}}, \\\mathbf {\Sigma }_1 = \Gamma _t \frac{\Delta }{\sqrt{\Delta ^2-\omega ^2}} \frac{k_y}{k_F} \hat{x}.$ The momentum-integrated Green's function is $g_0 \check{{1}}$ .", "The disorder-renormalized $\tilde{\omega }$ is determined by, $\tilde{\omega }&=& \omega + \Gamma _t \frac{\omega }{\sqrt{\Delta ^2 - \omega ^2}} +\frac{n_{imp}}{\pi N_0} \frac{\mathtt {g}_0}{\cot ^2 \theta _s- \mathtt {g}_0^2}$ where $\theta _s\equiv \tan ^{-1} (\pi N_0 V) $ is the $s$ -wave scattering phase shift.", "Fig.", "REF (a) shows the local DOS at the interface for weak scattering ($c=1$ ) and panel (b) shows the local DOS for $c=0.01$ , i.e.", "strong scattering.", "In contrast to the isotropic $s$ -wave case, impurity scattering rapidly suppresses the induced superconductivity here.", "While the superconductor is fully gapped, the induced superconductivity has low-energy states and does not develop a gap in the DOS.", "Figure: (a) The interface DOS at the Fermi level as a function of the SOC energy in the clean limit.", "The interface DOS as a function of energy for several values of the SOC energy, in the clean limit (b) and with a scattering rate $\Gamma _N=\Delta $ for $c=1$ (c)." 
], [ "$\\mathbf {d}$ -vector {{formula:cf57e828-e844-443c-b04a-0e912ff69a93}}", "Next, we consider a chiral $p$ -wave state with $\\mathbf {d} =( p_x + i p_y)\\hat{z}$ which is a complex, but unitary order parameter ($\\mathbf {d}\\times \\mathbf {d}^\\ast =0$ ).", "In this case, the tunneling self-energies read, $\\Sigma _0 &=& \\Sigma _0(\\omega ,k_{||}), \\\\\\mathbf {\\Sigma }_1 &=& i k_y \\Sigma _{1}(\\omega ,k_{||}) \\hat{z}= i\\Sigma _1 \\tilde{\\mathbf {d}} ,$ with ${\\tilde{\\mathbf {d}}} \\in \\mathcal {R}$ and $\\tilde{\\mathbf {d}}\\perp \\mathbf {w}$ .", "The anomalous part of the Green's function is obtained as, $\\hat{G}_{11} &=&\\left[ \\left( a_0 b_+ b_- - b_0\\Sigma _1^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}} \\right) \\sigma _0 \\right.", "\\nonumber \\\\& & \\left.", "- \\epsilon _N \\left( b_+ b_- +\\Sigma _1^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}} \\right)(\\mathbf {w} \\cdot \\sigma ) \\right]\\frac{1}{D} , \\\\ \\hat{G}_{12} &=& \\frac{ \\hat{\\mathtt {G}}_{12} }{D} i \\sigma _y.", "\\\\\\hat{\\mathtt {G}}_{12} &=& i\\Sigma _1 ( \\bar{\\omega }^2 -\\xi _{\\mathbf {k}}^2 +\\epsilon _N^2 -{{\\Sigma }}_1^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}} ) {\\tilde{\\mathbf {d}}} \\cdot {\\sigma } \\nonumber \\\\& & +2 \\bar{\\omega } \\epsilon _N {\\Sigma }_1( {\\tilde{\\mathbf {d}}}\\times \\mathbf {w} ) \\cdot \\sigma \\\\D&=& (\\bar{\\omega }^2 -\\xi _{\\mathbf {k}}^2+\\epsilon _N^2-{{\\Sigma }_1}^2 \\tilde{\\mathbf {d}}\\cdot \\tilde{\\mathbf {d}})^2- 4 \\epsilon _N^2 \\bar{\\omega }^2.$ As $\\tilde{\\mathbf {d}} \\perp \\mathbf {w}$ , any $\\tilde{\\mathbf {d}}\\cdot \\mathbf {w}$ component be it singlet or triplet, vanishes.", "This also ensures the presence of an odd frequency component for both kinds of junction geometries.", "The momentum structure of the odd frequency component turns out to be $\\propto k_x k_y \\hat{x} + k_y^2 \\hat{y}$ therefore, the $\\Sigma _{imp1}$ self-energy contribution becomes finite, in contrast to the previous case.", "A non-vanishing $\\Sigma _{imp1}$ self-energy, as it turns out, converts the induced pairing into non-unitary pairing.", "(See the general expression for the Green's function in the appendix ).", "The impurity self-energies are given by Eq.", "(REF ) and ().", "However, the SOC itself kills the induced pairing as can be inferred from Fig.", "REF (a), where it is shown that the DOS at the Fermi level reaches the normal state value as the SOC strength increases beyond the pairing energy scale of the superconductor.", "In Fig.", "REF (b), the DOS at the interface is shown for energies below the superconducting gap in the case of weak SOC.", "We are, however, interested in the regime $\\Delta \\ll \\epsilon _N \\ll E_F$ , and in this regime the proximity induced superconductivity does not survive.", "The inclusion of impurity scattering does not change this conclusion but further diminishes the proximity induced superconductivity even for weak SOC as be inferred from Fig.", "REF (c).", "Figure: The interface DOS for a mixed parity and normal junction without any impurity scattering.", "(a) The interface DOS for r=0.8r=0.8 and r=1.2r=1.2 with no SOC in the normal metal.", "The tunneling energy scale Γ t =0.5Δ 0 \\Gamma _t=0.5\\Delta _0.", "(b) The interface DOS states for r=0.8r=0.8 and r=1.2r=1.2 with (dashed lines) and without (solid lines)." 
], [ "Mixed parity state", "The last case that we consider is that of a superconductor with mixed parity order parameters.", "Such a state is possible, if the superconductor itself is under the influence of a SOC.", "The normal and anomalous mean field Green's functions in this case are, $\\hat{G}_{11} &=& \\frac{1}{2} \\left[ \\hat{G}_+ + \\hat{G}_- \\right] + \\frac{1}{2} \\left[ \\hat{G}_+ - \\hat{G}_- \\right] \\mathbf {d}\\cdot \\sigma , \\\\\\hat{G}_{\\pm } &=& \\frac{\\omega +\\xi _\\pm }{\\omega ^2 - \\xi _\\pm ^2 -\\Delta ^2_\\pm }, \\\\\\hat{G}_{12} &=& \\frac{1}{2}\\left[ (\\hat{F}_+ + \\hat{F}_-) + (\\hat{F}_+ - \\hat{F}_-) \\mathbf {d}\\cdot \\sigma \\right] i \\sigma _y, \\\\\\hat{F}_\\pm &=& \\frac{\\Delta _\\pm }{\\omega ^2 - \\xi _\\pm ^2 -\\Delta ^2_\\pm }.$ Here $\\Delta _\\pm = (\\Delta _s \\pm \\Delta _t)$ and $\\xi _\\pm =\\xi _{\\mathbf {k}}\\pm \\epsilon _S$ , where $\\epsilon _S$ is the SOC energy scale in the superconductor and $\\Delta _{s/t} $ is the singlet/triplet component of the gap.", "The normal ($\\hat{\\Sigma }_{11}$ ) and anomalous ($\\hat{\\Sigma }_{12}$ ) tunneling self-energies are $\\Sigma _0 + \\Sigma _{soc} \\tilde{\\mathbf {d}}\\cdot \\sigma $ and $ \\left( \\Sigma _s + \\Sigma _t \\tilde{\\mathbf {d}}\\cdot \\sigma \\right) i\\sigma _y$ , where $\\Sigma _{soc}$ modifies the SOC in the normal metal and $\\Sigma _{s/t}$ is the singlet/triplet component.", "They are given by $\\Sigma _{0/soc} &=& -\\Gamma _t \\left( \\frac{\\omega }{Q_+} \\pm \\frac{\\omega }{Q_-}\\right), \\\\\\Sigma _{s/t} &=& \\Gamma _t \\left( \\frac{\\Delta _+}{Q_+} \\pm \\frac{\\Delta _-}{Q_-}\\right).$ where $Q_\\pm = 2\\sqrt{\\Delta _\\pm ^2-\\omega ^2}$ and it was assumed that the SOC splitting energy in the superconductor is $\\epsilon _S \\ll E_F$ .", "Consequently, we ignore the differences between the DOS of the two helical bands, which is of the order of $\\epsilon _S/E_F$ .", "The renormalized SOC term in the normal metal can be rewritten as, $\\tilde{\\epsilon }_N \\widetilde{\\mathbf {w}}\\cdot \\sigma &=& \\epsilon _N \\mathbf {w}\\cdot \\sigma +\\Sigma _{soc} \\tilde{\\mathbf {d}}\\cdot \\sigma ,\\\\\\tilde{\\epsilon }_N &=& \\sqrt{(\\epsilon _N \\mathbf {w}+\\Sigma _{soc} \\tilde{\\mathbf {d}})\\cdot (\\epsilon _N \\mathbf {w}+\\Sigma _{soc} \\tilde{\\mathbf {d}})},\\\\\\widetilde{\\mathbf {w}}&=& \\frac{\\epsilon _N \\mathbf {w} + \\Sigma _{soc} \\tilde{\\mathbf {d}}}{\\tilde{\\epsilon }_N}.$ The anomalous Green's function in the normal metal is, $\\hat{G}_{12} &\\propto &\\left[ A_0 \\sigma _0 + A_1 \\widetilde{\\mathbf {w}}\\cdot \\sigma + A_2 \\tilde{\\mathbf {d}}\\cdot \\sigma \\right.", "\\nonumber \\\\& & \\left.", "+ A_3 ( \\tilde{\\mathbf {d}} \\times \\widetilde{\\mathbf {w}} )\\cdot \\sigma \\right]i\\sigma _y,$ where the set $\\lbrace A_i\\rbrace ~(i=0,\\dots ,4)$ of coefficients is provided in appendix .", "The first term in Eq.", "(REF ) is the even parity, spin singlet term and the second and third terms are odd parity, spin triplet terms.", "There is an odd frequency, spin triplet, even parity (OTE) term, which only exists in the presence of both a finite triplet component in the superconductor and a finite SOC in the normal metal.", "Figure: (a) The interface DOS for singlet dominant (r=0.8r=0.8) SN junction for several representative values of impurity scattering rate.", "(b) Variation of interface DOS with disorder for triplet dominant (r=1.2r=1.2) case.We parameterize the singlet and the triplet gaps in the superconductor as $\\Delta _s = 
\\Delta _0/\\sqrt{1+r^2}$ and $\\Delta _t = \\Delta _0 r/\\sqrt{1+r^2}$ , respectively[34].", "The parameter $r$ is the ratio of triplet and singlet order parameters, where $r>1$ describes a triplet dominant regime while $r<1$ presents a singlet dominant regime.", "In the extreme singlet ($r\\rightarrow 0$ ) and triplet ($r\\rightarrow \\infty $ ) limit, we recover the results presented in Sec.", "REF and REF , respectively.", "In the intermediate regime, the DOS at the interface shows signatures of two energy scales $\\Delta _s$ and $\\Delta _t$ .", "For a vanishing SOC in the normal metal, the singlet dominant regime ($r<1$ ) ensues, where the DOS shows a gap $~\\mathrm {min}\\lbrace \\Delta _+, \\Delta _-\\rbrace $ , as shown Fig.", "REF (a) and for the triplet dominant case ($r>1$ ), the DOS is similar to the pure triplet case discussed above (see Sec.", "REF ), but the relevant energy scale is again $\\mathrm {min}\\lbrace \\Delta _+, \\Delta _-\\rbrace $ .", "This behavior remains qualitatively same in the presence of a SOC in the normal metal.", "However, the SOC leads to a particle-hole asymmetric DOS as illustrated in Fig.", "REF (b).", "This happens due to the coupling of singlet and triplet pairs in the normal metal due to the SOC.", "This coupling generates a term linear in $\\xi $ in the denominator of the Green's function, with a prefactor $\\propto \\tilde{\\epsilon }_N \\Sigma _s \\Sigma _t \\tilde{\\mathbf {d}}\\cdot \\widetilde{\\mathbf {w}}$ , which causes the breakdown of particle-hole symmetry.", "The effect of impurity scattering for the singlet/triplet dominant case is qualitatively the same as for the pure singlet/triplet case.", "The particle-hole asymmetry in the DOS also gets smeared by the impurity scattering as depicted in Fig.", "REF (a) for the singlet dominant ($r<1$ ) case and in the Fig.", "REF (b) for the triplet dominant case ($r>1$ )." 
], [ "Summary & Conclusion", "In this paper, we have studied the effect of Rashba SOC on the structure of proximity induced Cooper pairs in a normal metal connected to a superconductor.", "We considered several kinds of gap symmetries.", "The SOC in the normal metal leads to a singlet-triplet mixed state if the SN junction involves a singlet superconductor.", "The strength of the triplet component in this case depends on the strength of the SOC and the low energy quasiparticle spectrum remains gapped and robust against disorder.", "The SOC driven triplet state does not lead to any low energy states.", "This is reminiscent of the proximity induced mixed parity states that have been reported for topological insulator and $s$ -wave superconducting junctions where the origin of the singlet-triplet mixing is the spin-momentum locking [35], [36], [37], [38], , [30], [32].", "In the case of an SN junction involving a triplet superconductor, the broken spin-rotational symmetry can lead to a non-zero singlet component, but its presence depends on the spin-structure of the gap in the superconductor and the junction geometry.", "We find that a singlet component is present whenever the off-diagonal self-energy ($\\mathbf {\\Sigma }_1$ ) has a component parallel to the SOC vector ($\\mathbf {w}$ ).", "On the other hand, an off-diagonal self-energy that is perpendicular to the SOC vector get suppressed by the SOC very quickly.", "The induced triplet component may have a different spin-structure compared to the superconductor, depending on the junction geometry.", "The induced triplet pairs have sub-gap low energy states, however, the induced triplet component turns out to be fragile against impurity scattering.", "The effect of disorder is similar to the effect of disorder on proximity induced $p$ -wave superconductivity on the surface of topological insulators[31].", "In the SN junctions with two component superconductors that have both singlet and triplet components, the low energy behavior is determined by the dominant component.", "However, the SOC induced coupling between the singlet and triplet components leads to a particle-hole asymmetric DOS.", "The disorder suppresses this particle hole asymmetry.", "One of the key conclusions of this work is the formation of odd frequency, spin triplet, even parity pairs in the normal metal segment of an SN junction with a triplet superconductor.", "Such odd frequency component only arises in the presence of SOC in the normal metal.", "The induced $\\mathbf {d}$ -vector of the odd frequency term is $\\propto (\\mathbf {\\Sigma }_1 \\times \\mathbf {w})$ , which is an even function of momentum.", "Similar OTE induced superconductivity has been reported in an $s$ -wave junction with a topological insulator with gap modulation near the interface[29] or in the presence of an exchange field[30].", "In contrast, the formation of OTE pairs that we find for a triplet superconductor SN junction does not require gap modulation near the interface or any exchange field.", "We have obtained the full momentum-spin-energy structure of the OTE pairs.", "The OTE pairs have momentum dependence, which makes them vulnerable to impurity scattering.", "OTE pairs have also been reported in the normal metal junctions with triplet superconductors where the normal metal did not have the SOC[26].", "These studies were performed using the Usadel equations with Nazarov-Tanaka boundary conditions[40], [41] Here, the approach is different.", "We calculate the normal state Green function 
right at the interface, thus avoiding ambiguity with respect to possible boundary conditions.", "In summary, we have investigated the effect of Rashba SOC on proximity induced superconductivity in SN junctions consisting of conventional and unconventional superconductors.", "Eqs.", "(REF ) and () are very general, and the structure of the induced superconductivity is applicable to many other systems, such as surface states of topological insulators or systems with Dresselhaus SOC.", "We examine the robustness of the induced superconductivity against disorder, and find that the induced triplet superconductivity gets suppressed by it.", "In contrast, the fully gapped $s$ -wave superconductivity remains robust against disorder.", "We find that the OTE state is induced in SN junctions with triplet superconductors, but it does not show any low energy signature.", "The OTE pairs may get suppressed weakly or strongly by the disorder, depending on their momentum structure.", "We show that the formation of the OTE pairs requires a triplet superconductor in the SN junction, SOC, and a favorable geometry.", "OTE pairs are not induced in every junction between a triplet superconductor and a normal metal." ], [ "Acknowledgments", "The authors are grateful to Shao-Pin Chiu and Juhn-Jong Lin for useful discussions.", "VM, YL and FCZ are partially supported by NSFC grant 11674278 and by the priority program of the Chinese Academy of Sciences grant No.", "XDB28000000, and YL is also supported by the China Postdoctoral Science Foundation under grant No.", "2020M670422 and by the Fundamental Research Funds for the Central Universities under grant No.", "E2E44305.", "SK is supported by the Ministry of Science and Technology of Taiwan through grant numbers MOST 106-2112-M-009-007-MY4 and 110-2112-M-A49-015, and by the Ministry of Education of Taiwan through the Higher Education Sprout Project, and acknowledges support by the Yushan Fellowship Program of the Ministry of Education, Taiwan." 
], [ "General triplet case", "This appendix provides the details of the normal metal Green's function for the general triplet case discussed in Sec.", "REF .", "The $\\hat{\\mathbf {G}}_{11}$ component of the normal state Green's function $\\check{\\mathbb {G}}$ is given by, $\\hat{\\mathbf {G}}_{11} &= &\\frac{b_+ b_-}{\\mathbf {D}} \\left[ M_0 \\sigma _0 - M_1 \\mathbf {w}\\cdot \\sigma - M_2 \\mathbf {q}\\cdot \\sigma \\right.", "\\nonumber \\\\&& \\left.", "- M_3 \\mathbf {\\Sigma }_1\\cdot \\sigma - M_4 \\mathbf {\\Sigma }_1^\\ast \\cdot \\sigma \\right] \\\\\\mathbf {D} &=& M_0^2 - M_1^2 (\\mathbf {w}\\cdot \\mathbf {w}) -M_2^2 (\\mathbf {q}\\cdot \\mathbf {q}) \\nonumber \\\\&& - M_3^2 (\\mathbf {\\Sigma }_1\\cdot \\mathbf {\\Sigma }_1) -M_4^2 (\\mathbf {\\Sigma }_1^\\ast \\cdot \\mathbf {\\Sigma }_1^\\ast ) \\nonumber \\\\&& -2 M_1 M_2 \\mathbf {q}\\cdot \\mathbf {w} - 4\\epsilon _N M_1 \\mathbf {\\Sigma }_1 \\cdot \\mathbf {w} \\mathbf {\\Sigma }_1^\\ast \\cdot \\mathbf {w} \\nonumber \\\\& &- 2\\epsilon _N^2|\\mathbf {\\Sigma }_1|^2 \\mathbf {\\Sigma }_1 \\cdot \\mathbf {w} \\mathbf {\\Sigma }_1^\\ast \\cdot \\mathbf {w}.$ Here $b_\\pm = \\bar{\\omega }+\\xi _{\\mathbf {k}}\\pm \\tilde{\\epsilon }_N$ , $\\mathbf {q}=i \\mathbf {d}\\times \\mathbf {d}^\\ast $ , and the coefficients $M_{i}$ ($i=0,..,4$ ) are, $M_0 &=& a_0 b_+ b_- -b_0 |\\mathbf {\\Sigma }_1|^2 - \\epsilon _N \\mathbf {q} \\cdot \\mathbf {w}, \\\\M_1 &=& -\\epsilon _N b_+ b_- - \\epsilon _N |\\mathbf {\\Sigma }_1|^2, \\\\M_2 &=& -b_0, \\\\M_3 &=& \\epsilon _N \\mathbf {\\Sigma }_1^\\ast \\cdot \\mathbf {w}, \\\\M_4 &=& \\epsilon _N \\mathbf {\\Sigma }_1 \\cdot \\mathbf {w}.$ The anomalous component $\\hat{\\mathbf {G}}_{12}$ of $\\check{\\mathbb {G}}$ reads, $\\hat{\\mathbf {G}}_{12} &= & \\hat{\\mathtt {G}}_{12} \\frac{i \\sigma _y}{\\mathbf {D}},$ where $\\hat{\\mathtt {G}}_{12} &=&\\left[ C_0 \\sigma _0 + C_1 \\mathbf {w}\\cdot \\sigma +C_2 \\mathbf {\\Sigma }_1\\cdot \\sigma \\right.\\nonumber \\\\&& \\left.", "+ C_3 \\mathbf {\\Sigma }_1^\\ast \\cdot \\sigma + C_4 \\left( \\mathbf {\\Sigma } \\times \\mathbf {w}\\right) \\sigma \\right],$ $C_0 &=& 2\\epsilon _N \\xi b_+ b_- \\mathbf {\\Sigma }_1 \\cdot \\mathbf {w} \\nonumber \\\\& & + 2\\epsilon _N b_0 (\\mathbf {\\Sigma }_1 \\times (\\mathbf {\\Sigma }_1 \\times \\mathbf {\\Sigma }_1^\\ast )) \\cdot \\mathbf {w},\\\\C_1 &=& -2b_+ b_- \\epsilon _N^2 \\mathbf {\\Sigma }_1 \\cdot \\mathbf {w} \\nonumber \\\\& & - \\epsilon _N^2 (\\mathbf {\\Sigma }_1 \\times (\\mathbf {\\Sigma }_1 \\times \\mathbf {\\Sigma }_1^\\ast )) \\cdot \\mathbf {w}, \\\\C_2 &=& b_+ b_- \\left( a_0 b_0 + \\epsilon _N^2 \\right) \\nonumber \\\\& & + \\epsilon _N^2 \\left( \\mathbf {\\Sigma }_1 \\times \\mathbf {w} \\right)\\cdot \\left( \\mathbf {\\Sigma }_1^\\ast \\times \\mathbf {w} \\right), \\\\C_3 &=& -b_+ b_- \\mathbf {\\Sigma }_1 \\cdot \\mathbf {\\Sigma }_1 \\nonumber \\\\& & - \\epsilon _N^2 \\left( \\mathbf {\\Sigma }_1 \\times \\mathbf {w} \\right)\\cdot \\left( \\mathbf {\\Sigma }_1 \\times \\mathbf {w} \\right), \\\\C_4 &=& -2i\\epsilon _N \\bar{\\omega } b_+ b_- + i \\epsilon _N^2 (\\mathbf {q}\\cdot \\mathbf {w}).$ Here $a_0=\\bar{\\omega }-\\xi _{\\mathbf {k}}$ and $b_0=\\bar{\\omega }+\\xi _{\\mathbf {k}}$ ." 
], [ "Mixed parity", "Details for the normal and anomalous Green's functions the mixed parity state of Sec.", "REF are provided in this appendix.", "The normal Green's function is, $\\hat{\\mathbf {G}}_{11} &= & \\frac{b_+ b_-}{\\mathtt {D}} \\left[ L_0 \\sigma _0 - L_1 \\widetilde{\\mathbf {w}}\\cdot \\sigma -L_2 \\tilde{\\mathbf {d}} \\cdot \\sigma \\right], \\\\L_0 &=& a_0 b_+ b_- -b_0 \\left( \\Sigma _s^2 + \\Sigma _t^2 \\tilde{\\mathbf {d}} \\cdot \\tilde{\\mathbf {d}} \\right) \\nonumber \\\\& & +2\\tilde{\\epsilon }_N \\Sigma _s \\Sigma _t \\tilde{\\mathbf {d}} \\cdot \\widetilde{\\mathbf {w}}, \\\\L_1 &=& -\\tilde{\\epsilon }_N \\left( b_+ b_- - \\Sigma _s^2 + \\Sigma _t^2 \\tilde{\\mathbf {d}} \\cdot \\tilde{\\mathbf {d}} \\right), \\\\L_2 &=& -2\\left( b_0 \\Sigma _s \\Sigma _t - \\tilde{\\epsilon }_N \\Sigma _t^2 \\tilde{\\mathbf {d}} \\cdot \\widetilde{\\mathbf {w}} \\right),\\\\\\mathtt {D}&=& L_0^2 -L_1^2 - L_2^2 \\tilde{\\mathbf {d}} \\cdot \\tilde{\\mathbf {d}} - 2L_1 L_2 \\tilde{\\mathbf {d}} \\cdot \\widetilde{\\mathbf {w}} .$ The anomalous Green's function can be cast into the form $\\hat{\\mathbf {G}}_{12} &= & \\hat{\\mathtt {G}}_{12} \\frac{i \\sigma _y}{\\mathtt {D}},$ with $\\hat{\\mathtt {G}}_{12} &=& \\left[ A_0 \\sigma _0 + A_1 \\widetilde{\\mathbf {w}}\\cdot \\sigma + A_2 \\tilde{\\mathbf {d}}\\cdot \\sigma \\right.", "\\nonumber \\\\&& \\left.", "+ A_3 ( \\tilde{\\mathbf {d}} \\times \\widetilde{\\mathbf {w}} )\\cdot \\sigma \\right], \\\\A_0 &=& (a_0 b_0-\\tilde{\\epsilon }_N^2) \\Sigma _s - \\Sigma _s^3 + \\Sigma _s \\Sigma _t^2 \\tilde{\\mathbf {d}} \\cdot \\tilde{\\mathbf {d}} \\nonumber \\\\&& +2\\xi _{\\mathbf {k}} \\tilde{\\epsilon }_N \\tilde{\\mathbf {d}} \\cdot \\widetilde{\\mathbf {w}} \\\\A_1 &=& 2\\xi _{\\mathbf {k}} \\tilde{\\epsilon }_N \\Sigma _s -2\\tilde{\\epsilon }_N^2\\Sigma _t \\widetilde{\\mathbf {w}} \\cdot \\tilde{\\mathbf {d}} , \\\\A_2 &=& \\Sigma _t \\left( a_0 b_0 +\\tilde{\\epsilon }_N^2 +\\Sigma _s^2 - \\Sigma _t^2 \\tilde{\\mathbf {d}} \\cdot \\tilde{\\mathbf {d}} \\right)\\ \\\\A_3 &=& -2i \\tilde{\\epsilon }_N \\bar{\\omega } \\Sigma _t.$" ] ]
2212.05550
[ [ "A Weyl's law for black holes" ], [ "Abstract We discuss a Weyl's law for the quasi-normal modes of black holes that recovers the structural features of the standard Weyl's law for the eigenvalues of the Laplacian in compact regions.", "Specifically, the asymptotics of the counting function $N(\\omega)$ of quasi-normal modes of $(d+1)$-dimensional black holes follows a power-law $N(\\omega)\\sim \\mathrm{Vol}_d^{\\mathrm{eff}}\\omega^d$, with $\\mathrm{Vol}_d^{\\mathrm{eff}}$ an effective volume determined by the light-trapping and decay properties of the black hole geometry.", "Closed forms are presented for the Schwarzschild black hole and a quasi-normal mode Weyl's law is proposed for generic black holes.", "As an application, such Weyl's law could provide a probe into the effective dimensionality of spacetime and the relevant resonant scales of actual astrophysical black holes, upon the counting of sufficiently many overtones in the observed ringdown signal of binary black hole mergers." ], [ "Spectrum asymptotics, degrees of freedom counting and geometry", "The goal of the present note is to present the basic elements of a Weyl's law proposed for black hole (BH) quasi-normal modes (QNMs).", "Such a law provides an estimate of the number of QNMs, inside a circle of a given radius in the frequency-complex plane, in terms of the asymptotics of high-frequency QNM overtones and a particular scale defined by the geometry of the black hole spacetime.", "It directly extends to the BH QNM setting the standard Weyl's law for the eigenvalues of the Laplacian of a compact Riemannian manifold.", "Weyl's law [1], [2] relates three distinct notions: the number of states of a physical system contained in a given spatial domain $D$ , the high-frequency asymptotics of the spectrum of the operator controlling its dynamics, and the geometry of the region $D$ .", "The study of this triple relation has proved to be a far-reaching catalyst for key developments both in physics and mathematics.", "On the physics side, it lays e.g.", "at the core of the developments in radiation theory aiming at understanding the black body radiation and, therefore, it is ultimately interwoven with the birth of quantum theory (see e.g.", "[3], [4]).", "On the mathematical side, it dwells in the more general setting of inverse problems in spectral geometry that have unveiled a rich set of correspondences between the spectral properties of geometric differential operators defined on a (Riemanninan) manifold, namely the Laplacian, and the properties of families of geodesics on such manifolds [3], [5], [6].", "Specifically, Weyl's law for the Laplacian stands as an archetype of such results in spectral geometry with direct implications in physics.", "For concreteness, let us consider the asymptotics of the homogenous scalar Helmholtz operator on a compact domain $D$ $(\\Delta + k_n^2)\\phi _n = 0 \\ ,$ subject to homogeneous (Dirichlet, Neumann or Robin) boundary conditions on its boundary $\\partial D$ .", "From the compacity of $D$ the eigenvalues $k_n^2$ form a discrete set and, denoting by $N(k)$ the number of eigenvalues $k_n^2$ below a certain wave-number $k$ , the following large-$k_n$ asymptotic relation holds $N(k) \\sim \\mathrm {Vol}_d(D) k^d + o(k^{d-1}) \\ \\ , \\ \\ (k\\rightarrow \\infty )$ where $\\mathrm {Vol}_d(D)$ is the volume of the compact domain $D$ and $d$ is the dimension of $D$ , namely $d=\\mathrm {dim}(D)$ .", "This is the Weyl's law.", "Crucially this `universal' asymptotic leading term does not 
depend on the shape of $D$ , the boundary $\partial D$ only entering at the subleading term in the asymptotic expansion (see e.g.", "[7]).", "Similar Weyl's law relations hold for other fields (e.g.", "the electromagnetic field) and differential operators [6].", "In ref.", "[8] a Weyl's law was proposed for BH QNMs.", "QNMs encode the linear response of a resonator under an external perturbation, corresponding to exponentially decaying oscillating solutions to an appropriate wave equation under outgoing boundary conditions in a (generally) non-compact domain $D$ (cf.", "[9], [10], [11], [12] for reviews of QNMs in the gravitational setting and [13] for optical nanoresonators).", "Specifically in the BH case, QNM complex frequencies $\omega _n$ (the real part providing the QNM oscillation frequency and the imaginary part the inverse of its time-decay scale) encode invariant information on the BH stationary background.", "A crucial feature of QNM frequencies, in spite of the non-compactness of $D$ , is that they form a discrete set.", "The number of frequencies $\omega _n$ in a given circle of the complex plane can therefore be counted, and the question of a QNM version of Weyl's law can meaningfully be posed.", "Consequently, QNM Weyl's laws have been the subject of extensive research in the literature (see e.g.", "[14], [15], [16]).", "However, the situation for QNMs is more subtle than in the standard self-adjoint case (REF ) and a Weyl's law expression analogous to (REF ) is not generic beyond the one-dimensional case and for compact support (or very rapidly decaying) scattering potentials [17], [18].", "In this setting it seems quite remarkable that asymptotically flat (spherically symmetric) BH QNMs do admit [8] a Weyl-like law of the standard functional form in (REF ) $N(\omega ) \sim \frac{1}{c^d}\mathrm {Vol}_d^{\mathrm {eff}} \omega ^d + o(\omega ^{d-1}) \ \ , \ \ (\omega \rightarrow \infty )$ where $N(\omega )$ counts the number of (complex) QNM frequencies $\omega _n$ such that $|\omega _n|\le \omega $ , where $\omega >0$ , and $\mathrm {Vol}_d^{\mathrm {eff}}$ is a $d$ -dimensional effective volume determined by the BH geometry (the appropriate power of the light speed $c$ has been introduced for dimensional reasons).", "Note that, given the non-compact support of the BH geometry, the factor $\mathrm {Vol}_d^{\mathrm {eff}}$ does not admit the direct interpretation of a compact “cavity” volume in (REF ).", "It is tempting, however, to interpret $\mathrm {Vol}_d^{\mathrm {eff}}$ as providing a length/time (cubic) scale where the resonance phenomenon takes place.", "Under this assumption, one can invert the logic in reading the BH QNM Weyl's law (REF ), as compared to the standard Weyl's law (REF ).", "Indeed, whereas in the latter the dimension $d$ and the cavity's volume $\mathrm {Vol}_d(D)$ are known a priori, so that the interest is focused on determining the number of states $N(k)$ (or only the density of states $N(k)/\mathrm {Vol}_d(D)$ , as in the black body radiation problem; in either case, the cavity's volume $\mathrm {Vol}_d(D)$ is not the quantity of interest in itself, just playing the (key) role of being the only trace of the cavity's geometry and therefore guaranteeing the universality of the Weyl's law, actually the original motivation for Weyl's proof), the former can be used to probe: i) The (effective) spacetime dimension $(d+1)$ .", "ii) The dynamical scale for the BH resonant phenomenon, under the assumption 
that $N(\\omega )$ can be independently (for instance, observationally) accessed.", "This possibility of using the proposed BH QNM Weyl's law as a probe into spacetime features defines our interest in this QNM asymptotic problem.", "The article is organized as follows.", "In section we review the Weyl's law in the classical setting of the eigenvalues of the Laplacian in a compact region.", "In section we discuss the Weyl's law for QNMs associated with the scattering (in non-compact domains) by compact support potentials, separating the special one-dimensional case from the general discussion in higher dimensions.", "In section we discuss the application to the BH case, focusing on the Schwarzschild BH case and then proposing a QNM Weyl's law for generic BHs.", "Finally, section presents the conclusions and some perspectives." ], [ "Weyl's law: space(-time) dimension and length scales", "We present here a brief account of the elements of Weyl's law relevant in our later discussion on BH QNMs, with a focus on the volume factor and the power-law dependence on the dimension.", "In a first step we review the Weyl's in the classical self-adjoint case and, in a second stage, we comment on the known results in the QNM (scattering resonances) setting." ], [ "Weyl's law in closed manifolds", "For concreteness, we focus here on the case of a compact manifold without boundaries [5].", "This case contains all the specific elements we aim at addressing in the QNM case, providing a simplified setting but adequate setting.", "The same leading-order result is found for compact manifolds with boundary, the differences related to the boundary conditions only appearing at the subleading-order, that we will not address here.", "Given a closed (compact without boundary) Riemannian manifold $(M, g_{ab})$ of dimension $d$ with Levi-Civita connection $\\nabla _a$ , we consider the eigenvalue problem associated with scalar Laplace-Beltrami operator $\\Delta _g = \\nabla ^a\\nabla _a$ , namely $-\\Delta _g\\phi _n = \\lambda _n\\phi _n \\ ,$ eigenvalues $\\lambda _n$ are real and non-negative.", "Given $\\lambda \\in \\mathbb {R}^+$ we define the eigenvalue counting function $N(\\lambda )$ as $N(\\lambda ) = \\hbox{number of } \\lambda _n \\hbox{ such that } \\lambda _n\\le \\lambda \\ .$ The Weyl's law states that, in the large-$\\lambda $ asymptotic limit $N(\\lambda ) \\sim C_d \\mathrm {Vol}_d(M) \\lambda ^{\\frac{d}{2}} \\ \\ , \\ \\ (\\lambda \\rightarrow \\infty )$ where $\\mathrm {Vol}_d(M)$ is the ($d-$ dimensional) volume of $M$ and $C_d$ are universal constants only depending on the dimension $d$ $C_d = \\frac{\\mathrm {Vol}_d(B^d_1)}{(2\\pi )^d} \\ ,$ with $\\mathrm {Vol}_d(B^d_1)$ the Euclidean volume of the unit $d$ -dimensional ball $B^d_1$ .", "To get a taste and for later comparison, we give some examples for the lowest dimensions: i) Case $d=1$: for the ball $B^1_1$ , namely the $[-1,1]$ interval $\\mathrm {Vol}_1(B^1_1) = 2 \\ \\ ,$ and therefore, denoting $\\mathrm {Vol}_1 = L$ , we get $N(\\lambda ) \\sim \\frac{2}{2\\pi }L \\lambda ^{\\frac{1}{2}} =\\Big (\\frac{L}{\\pi }\\Big ) \\lambda ^{\\frac{1}{2}}\\ \\ , \\ \\ (\\lambda \\rightarrow \\infty ) \\ .$ ii) Case $d=2$: for the ball $B^2_1$ , the unit circle in the plane $\\mathrm {Vol}_2(B^2_1) = \\pi \\ \\ ,$ and denoting $\\mathrm {Vol}_2(M) = A$ , namely the area, we get $N(\\lambda ) \\sim \\frac{\\pi }{(2\\pi )^2} A \\; \\lambda = \\Big (\\frac{A}{4\\pi }\\Big )\\lambda \\ \\ , \\ \\ (\\lambda \\rightarrow \\infty ) \\ .$ iii) Case $d=3$: 
the ball $B^3_1$ is the unit Euclidean ball, so $\mathrm {Vol}_3(B^3_1) = \frac{4}{3}\pi \ \ ,$ and denoting $\mathrm {Vol}_3(M) = V$ we get $N(\lambda ) \sim \frac{4\pi /3}{(2\pi )^3} V \; \lambda ^{\frac{3}{2}}= \Big (\frac{V}{6\pi ^2}\Big )\lambda ^{\frac{3}{2}} \ \ , \ \ (\lambda \rightarrow \infty ) \ .$ iv) Case general $d$: for dimension $d$ , the unit Euclidean $d$ -ball has volume (with $\Gamma (x)$ Euler's gamma function) $\mathrm {Vol}_d(B^d_1) = \frac{\pi ^{\frac{d}{2}}}{\Gamma (\frac{d}{2}+1)} \ \ ,$ and denoting $\mathrm {Vol}_d(M) = V_d$ we get $N(\lambda ) &\sim & \frac{\pi ^{\frac{d}{2}}/\Gamma (\frac{d}{2}+1)}{(2\pi )^d} V_d \lambda ^{\frac{d}{2}} \nonumber \\&=& \bigg (\frac{V_d}{(4\pi )^{\frac{d}{2}}\Gamma (\frac{d}{2}+1)}\bigg )\lambda ^{\frac{d}{2}}\ \ , \ \ (\lambda \rightarrow \infty ) \ .$ The important point we would like to stress here is that, from the leading order of Weyl's law, one can extract both the dimension of the manifold and an averaged (volumetric) typical length scale $L_{\mathrm {vol}}:=(V_d)^{\frac{1}{d}}$ ." ], [ "Weyl's law in compact manifolds with boundaries", "As commented above, the Weyl's law asymptotics for the leading term of the eigenvalue counting function $N(\lambda )$ also holds for the spectral problem of the Laplacian in a compact domain $D\subset \mathbb {R}^d$ .", "The differences in $N(\lambda )$ following from distinct boundary conditions show up only at the subleading order.", "Specifically, given the spectral problem $\left\lbrace \begin{array}{lcl}-\Delta \phi _n = \lambda _n \phi _n \ \ &,& \ \ (D\subset \mathbb {R}^d, D \hbox{ compact}) \\\left.\phi _n\right|_{\partial D} = 0 \ \ &,& \ \ \hbox{(Dirichlet boundary conditions)}\\\hbox{or} &&\\\left.\partial _s\phi _n\right|_{\partial D} = 0 \ \ &,& \ \ \hbox{(Neumann boundary conditions)}\end{array}\right. \ ,$ with $\partial _s\phi _n$ the normal derivative at the boundary $\partial D$ .", "In this case, under appropriate conditions [6], `Weyl's conjecture' $N(\lambda ) \sim C_d \mathrm {Vol}_d(D) \lambda ^{\frac{d}{2}} \mp \frac{1}{4}C_{d-1} \mathrm {Vol}_{d-1}(\partial D) \lambda ^{\frac{d-1}{2}} \ \ ,$ holds when $\lambda \rightarrow \infty $ , with the “minus” sign corresponding to the Dirichlet case and the “plus” to the Neumann one." 
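The leading-order statement above is straightforward to test numerically. The sketch below (illustrative, not part of the original text) does so for the simplest closed manifold, a flat $d$ -torus $(\mathbb {R}/L\mathbb {Z})^d$ , whose Laplacian eigenvalues are $(2\pi /L)^2|n|^2$ with $n\in \mathbb {Z}^d$ ; counting them below $\lambda $ is a lattice-point count that can be compared with the leading Weyl term $C_d\,\mathrm {Vol}_d\,\lambda ^{d/2}$.

```python
import numpy as np
from math import gamma, pi

# Sketch (illustrative): leading-order Weyl's law on a flat d-torus (R/LZ)^d,
# whose -Laplacian eigenvalues are (2*pi/L)^2 |n|^2 with n in Z^d and Vol_d = L^d.

def weyl_constant(d):
    vol_unit_ball = pi**(d / 2) / gamma(d / 2 + 1)   # Vol_d(B^d_1)
    return vol_unit_ball / (2 * pi)**d               # C_d

def torus_count(lam, L, d):
    nmax = int(np.sqrt(lam) * L / (2 * pi)) + 1      # largest relevant |n_i|
    grid = np.arange(-nmax, nmax + 1)
    mesh = np.meshgrid(*([grid] * d), indexing="ij")
    eigenvalues = (2 * pi / L)**2 * sum(m**2 for m in mesh)
    return int(np.count_nonzero(eigenvalues <= lam))

L = 1.0
for d in (1, 2, 3):
    for lam in (2.0e3, 8.0e3):
        exact = torus_count(lam, L, d)
        weyl = weyl_constant(d) * L**d * lam**(d / 2)
        print(f"d={d}  lambda={lam:7.0f}  N(lambda)={exact:7d}  Weyl term={weyl:9.1f}")
```

Since the torus has no boundary, only the leading term is probed; the relative deviation shrinks as $\lambda $ grows, and both the power $\lambda ^{d/2}$ and the prefactor $C_d L^d$ can be read off from the counting.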
], [ "QNM Weyl's law", "For the sake of concreteness, in order to introduce QNMs we consider the scattering problem corresponding to the wave equation of a scalar field propagating in the (non-compact) space $\\mathbb {R}^d$ (namely scalar wave in $(d+1)$ -dimensional Minkowski spacetime) with scattering potential $V(x)$ , where $x\\in \\mathbb {R}^d$ and $V(x)$ is time-independent More general time-dependent situations can be considered (see e.g.", "[53]), but in the present Weyl's law setting of stay in the time-independent case.", "$\\left(\\partial ^2_t - \\Delta + V(x)\\right) \\phi = 0 \\ ,$ subject to outgoing boundary conditions and with appropriate initial data.", "By taking the Fourier transform in time, and denoting the frequency spectral parameter as $\\omega $ , we can write the following spectral problem $\\left(- \\Delta + V(x)\\right) \\phi = \\omega ^2 \\phi \\ ,$ complemented to the asymptotic outgoing boundary conditions.", "Introducing the Schrödinger operator $P = -\\Delta + V(x)$ QNMs can be formally understood as corresponding to discrete The continuous part of the spectrum would correspond to the so-called `tails'.", "part of the spectrum of this operator, crucially, defined under outgoing boundary conditions $\\left\\lbrace \\begin{array}{l}P\\phi _n = \\left(-\\Delta + V(x)\\right)\\phi _n = \\omega ^2_n \\phi _n \\ \\ , \\ \\ \\hbox{(in } \\mathbb {R}^d)\\\\\\hbox{Outgoing boundary conditions at infinity}\\end{array}\\right.$ The dissipative character of the outgoing boundary conditions entails the non-selfadjoint character of the operator, so the QNM frequencies $\\omega _n$ 's are generically complex numbers.", "From a more formal perspective, QNMs can be introduced in terms of the resolvent $R_P(\\omega ) = (P- \\omega \\mathrm {I})^{-1}$ of $P$ .", "Under the appropriate conditions on $V(x)$ (and with the appropriate convention for $\\omega $ , namely $\\phi (t,x)\\sim e^{i\\omega t}\\phi (x)$ ), the resolvent $R_P(\\omega )$ is analytic in the lower-half $\\omega $ -complex plane.", "QNMs frequencies $\\omega _n$ correspond then to the poles of the meromorphic extension of the resolvent $R_P(\\omega )$ to the upper-half $\\omega $ -complex plane (see, for instance [15], [16]).", "Alternatively, a characterization of QNMs can be given as proper eigenvalues of an non-selfadjoint operator defined in terms of hyperboloidal spacetime foliations [22], [23], [24], [25], [26].", "The latter is actually the approach we adopt in the calculations presented below.", "A crucial feature in the present Weyl's law setting is the discrete nature of QNM frequencies $\\omega _n$ 's, in spite of the non-compact nature of the integration domain.", "This allows to introduce a counting function for such $\\omega _n$ 's, as in the case of eigenvalues $\\lambda _n$ 's of the Laplace operator in compact domains, namely Eqs.", "(REF ) and (REF ).", "However, given that the problem is now being defined in the complex plane (not an ordered set), different natural strategies for the counting can be considered.", "In this setting, given $\\omega \\in \\mathbb {R}^+$ , here we consider the QNM counting function $N(\\omega )$ defined as $N(\\omega ) = \\hbox{number of } \\omega _n \\hbox{ such that } |\\omega _n|\\le \\omega \\ ,$ that is, $N(\\omega )$ counts the number of complex QNM frequencies $\\omega _n$ 's contained in a circle in the complex plane of radius $\\omega $ and centered in the origin.", "Having introduced the counting function $N(\\omega )$ the study of its large-$\\omega 
$ asymptotics is naturally posed, in particular the question about a possible Weyl's law.", "The assessment of the Weyl's law in the case of QNM frequencies is more delicate than in the classical self-adjoint case.", "A definite positive answer can be given in the one-dimensional case for compact support or fast decreasing potentials, but the situation becomes more complicated in higher dimensions.", "In contrast, a powerful approach in terms of a semiclassical treatment emerges, providing rich information on the distribution of QNMs in such a semiclassical limit.", "In the following we briefly review the situation in these respective cases." ], [ "QNM Weyl's law: one-dimensional case", "Let us consider a compact support, bounded potential $V(x)$ in the one-dimensional case $d=1$ .", "Considering the support of $V(x)$ , namely $\mathrm {supp(V)}$ , the so-called “convex hull” of $\mathrm {supp(V)}$ , denoted as $\mathrm {chsupp(V)}$ , is the minimal convex set containing $\mathrm {supp(V)}$ .", "In other words, $\mathrm {chsupp(V)}$ is the smallest interval containing the support of $V(x)$ (cf.", "Fig.", "REF for an illustration of this concept).", "Denoting by $L$ the length of $\mathrm {chsupp(V)}$ , the following asymptotic Weyl's law holds in one dimension [27], [17], [15], [16] $N(\omega ) \sim 2 \left(\frac{L}{c\pi }\right) \omega \ \ , \ \ (\omega \rightarrow \infty ) \ .$ We note that this expression recovers, up to a factor 2, Weyl's law (REF ), with $\omega /c = k = \lambda ^{\frac{1}{2}}$ (where $k$ is the “wave number” in expression (REF ) and $c$ is the “light speed”) and with the identification of $L$ as the “effective volume” associated with the potential $V(x)$ .", "This follows from the existence of two QNM branches, symmetric with respect to the imaginary axis.", "We discuss this further in section REF , reinterpreting $L$ .", "For later convenience, we note that this one-dimensional Weyl's law is intimately related to the following result [27], [17] concerning the asymptotic distribution of the QNM frequencies $\omega _n$ , with $n\gg 1$ , of a one-dimensional potential $V(x)$ of low regularity (class $C^p$ , $p<\infty $ ) $\!\!\!\!\!\omega ^{\rm R}_n \sim \pm \frac{\pi c}{L}\left( n + \tilde{\gamma }\right), \,\,\omega ^{\rm I}_n \sim \frac{c}{L} \bigg [\gamma \ln \left( \left|\omega _n^{\rm R}\right| + \gamma ^{\prime }\right) - \ln S\bigg ],$ where $\gamma $ , $\tilde{\gamma }$ , $\gamma ^{\prime }$ and $S$ are constants depending on the potential $V(x)$ .", "This asymptotic distribution of the $\omega _n$ 's defines the so-called “Regge QNM branches” [27], [17].", "Note the asymptotically regular distribution of the frequencies $\omega _n$ along the logarithmic QNM branches, with constant $\Delta \omega ^{\rm R} := \omega ^{\rm R}_{n+1}-\omega ^{\rm R}_{n}$ , so that we can estimate in this case, for each QNM branch, $\displaystyle N(\omega ) \sim \frac{\omega }{\Delta \omega ^{\rm R}} = \frac{L}{c\pi }\omega $ .", "Multiplying by 2 to account for the two branches, we recover (REF ).", "Figure: Illustration of the “convex hull” $\mathrm {chsupp(V)}$ of the support of a potential $V(x)$ .", "This notion, namely the “diameter” of the region where the potential does not vanish, provides the relevant multiplicative scale in the QNM Weyl's law for compact support potentials.", "The one-dimensional result (REF ) is extended in [18] (see also [28]) beyond the case of compact support potentials $V(x)$ to a 
Weyl-like law valid for potentials with a very fast decay, namely with a so-called `super-exponentially decreasing' behaviour.", "Specifically, it holds (cf.", "the expression in “Conjecture 1.2” in [18]) $N(\omega ) \sim \frac{s_V}{2\pi \rho } \omega ^\rho \ \ , \ \ (\omega \rightarrow \infty ) \ ,$ where $s_V$ is a quantity determined by $V(x)$ , in particular by its decay rate at infinity, and $\rho $ is the so-called `order $\rho $ ' of $V(x)$ (e.g.", "[18]).", "In particular, for functions referred to as of `exponential type' (see again [18]), it holds $\rho =1$ .", "Unfortunately, the theorem in [18] requires the potential $V(x)$ to be super-exponentially decreasing, with $\rho >1$ .", "This shows in particular that for generic potentials, even those satisfying a Weyl-like power law in $\omega $ , the connection between the power of $\omega $ and the space dimension $d$ is generically lost.", "This is illustrated by a class of potentials including Gaussians $V(x)=e^{-ax^2}$ , for which Weyl's law (REF ) is shown to correspond to $N(\omega ) \sim 2\frac{a}{c^2\pi } \omega ^2 \ \ , \ \ (\omega \rightarrow \infty )$ (we have introduced the factor $c^2$ , as compared to the expression in [18], to keep track of the physical dimensions).", "The situation for more generic potentials is therefore quite open and depends on the details of $V(x)$ , and it does not get better when increasing the space dimension $d$ , as we see below.", "As commented above, the QNM Weyl's law (REF ) presents a factor 2 with respect to the standard Weyl's law (REF ) for bound states.", "This factor 2 is naturally understood in terms of the existence of two QNM branches symmetric with respect to the imaginary axis in the $\omega $ -complex plane: QNM frequencies $\omega _n = \omega _n^{\rm R} + i \omega ^{\rm I}_n$ indeed come in pairs, $\omega _n^\mp = \pm |\omega _n^{\rm R}| + i \omega ^{\rm I}_n$ , corresponding to the two modes propagating respectively towards the left ($\omega _n^-$ ) and towards the right ($\omega _n^+$ ).", "From a physical perspective, and in the context of a mode propagating to the left and another propagating to the right, it is natural to interpret the quantity $2 L/c$ as the time required for the thermalization at the scale $L$ , namely the time to travel back and forth over the distance $L$ , $T_{\mathrm {therm}} = 2\left(\frac{L}{c}\right) \ .$ (see [29] for an interpretation along these lines).", "Such a time scale also seems natural in a setting employing a frequency spectral parameter $\omega $ , rather than a wavenumber $k$ .", "We can then rewrite (REF ) as $N(\omega ) \sim \left(\frac{T_{\mathrm {therm}}}{\pi }\right) \omega \ \ , \ \ (\omega \rightarrow \infty ) \ .$" ], [ "QNM Weyl's law: higher dimensions", "Considering QNMs of a compact support, bounded potential $V(x)$ in odd dimensions $d\ge 3$ , the following upper bound for the QNM counting function holds [16] $N(\omega ) \le C_V \omega ^d \ .$ This provides a bound for $N(\omega )$ but, in contrast with the one-dimensional case, where the power in Weyl's law (REF ) is indeed given by the dimension $d=1$ , in the $d\ge 3$ case this is not enough to establish Weyl's law in the large-$\omega $ asymptotics of $N(\omega )$ for generic compact support potentials.", "An important exception to this is the radial case $V(x)=V(||x||)$ .", "Specifically, given a radially-symmetric bounded, compact support potential $V(||x||)$ in odd dimension $d\ge 3$ , with support inside a 
ball $B^d_R$ of radius $R$ (namely $\mathrm {chsupp}(V)=B^d_R$ ) and with a jump at its boundary ($V(R)\ne 0$ ), it holds [30] $N(\omega ) \sim C_R \omega ^d + o(\omega ^{d-1}) \ \ , \ \ (\omega \rightarrow \infty ) \ .$ This is a fine result.", "Remarkably, an (almost) explicit form can be given for the constant $C_R$ .", "Specifically, it holds [31] $C_R &=& \left(2 \frac{(\mathrm {Vol}_d(B^d_1))^2}{(2\pi )^d} + A_{S^{d-1}}\right) R^d \nonumber \\&=& \left(2 \frac{(\mathrm {Vol}_d(B^d_1))}{(2\pi )^d} \mathrm {Vol}_d(B^d_R) + A_{S^{d-1}} R^d\right) \nonumber \\&=& \left(2 C_d \mathrm {Vol}_d(\mathrm {chsupp}(V)) + A_{S^{d-1}} R^d\right)$ where we have used the expression $\mathrm {Vol}_d(B^d_R)= \mathrm {Vol}_d(B^d_1)R^d$ for the Euclidean volume of the ball of radius $R$ .", "Here $A_{S^{d-1}}$ is the constant in the QNM Weyl's law of the associated “obstacle problem”.", "By the latter we mean the $d$ -dimensional scattering problem in the exterior of a $(d-1)$ -sphere of radius $R$ , namely $S_R^{d-1}$ .", "That is, the QNM problem defined as $\left\lbrace \begin{array}{l}-\Delta \phi _n = \omega ^2_n \phi _n \ \ , \ \ \hbox{(in } \mathbb {R}^d\setminus B^d_R)\\\hbox{Homogeneous boundary conditions at } S_R^{d-1} \\\hbox{Outgoing boundary conditions at infinity}\end{array}\right.$ for which a Weyl's law holds as [31] $N(\omega ) \sim A_{S^{d-1}} \omega ^d + o(\omega ^{d-1}) \ \ , \ \ (\omega \rightarrow \infty )$ independently of the Dirichlet, Neumann or Robin (homogeneous) boundary conditions at $S_R^{d-1}$ (an explicit, but formal, expression can indeed be found for $A_{S^{d-1}}$ ; see Eq.", "(6) in [31]).", "Note the additive form of $C_R$ with two terms $C_R = C_R^V + C_R^{\mathrm {ext}} \ ,$ where: i) $C_R^V= 2 C_d \mathrm {Vol}_d(\mathrm {chsupp}(V))$ .", "This is a contribution associated with the potential $V(x)$ .", "This is exactly twice the expression for the selfadjoint Weyl's law in a compact domain $D$ , with the volume of $D$ substituted by the volume of the convex hull of the support of $V(x)$ .", "ii) $C_R^{\mathrm {ext}}= A_{S^{d-1}} R^d$ .", "This is a contribution associated with the “exterior problem”, in particular with $V(x)=0$ .", "We will come back to this additive character of $C_R$ later, in the discussion of the BH QNM Weyl's law.", "If we relax the compact support condition on the potential $V(x)$ , we already lose control over the validity of the upper bound (REF ) and a general Weyl's law is also out of reach, as dramatically illustrated in the one-dimensional case (we have not discussed the even dimension $d$ case; references to bounds on $N(\omega )$ can be found in [31]).", "The fact is that the problem of the asymptotics of the QNM counting function $N(\omega )$ is much more involved than in the self-adjoint case.", "As stated in Ref.", "[16]: “Weyl laws for counting resonant states are more complicated and richer [than for bound states] as they involve both energy and rates of decay.", "Even the leading term can be affected by dynamical properties of the system”.", "Therefore, in the most generic case, in particular for non-compact potentials in high dimensions and with slow decay, a QNM Weyl's law sharing the universal features of the self-adjoint one is not expected.", "Again, citing [15] in reference to the QNM counting function: “Except in dimension one [...] the issue of asymptotics or even optimal lower bounds remains unclear”." 
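Before turning to black holes, it is useful to make the counting operational. The short sketch below (with arbitrary, purely illustrative branch constants) generates QNM frequencies from the one-dimensional Regge-branch asymptotics quoted above, evaluates the counting function $N(\omega )$ , and compares it with the one-dimensional law $N(\omega )\simeq 2(L/c\pi )\,\omega $ ; the same counting routine is what one applies to the BH spectra of the next section.

```python
import numpy as np

# Sketch (illustrative branch constants): counting function N(omega) for complex
# QNM frequencies generated from the 1D Regge-branch asymptotics,
#   omega_n^R ~ +/- (pi c/L)(n + gt),  omega_n^I ~ (c/L)[g ln(|omega_n^R| + gp) - ln S]
c, L = 1.0, 1.0
g, gt, gp, S = 1.0, 0.25, 1.0, 2.0

n = np.arange(1, 4001)
w_real = (np.pi * c / L) * (n + gt)
w_imag = (c / L) * (g * np.log(w_real + gp) - np.log(S))
# two branches, symmetric with respect to the imaginary axis
omegas = np.concatenate([w_real + 1j * w_imag, -w_real + 1j * w_imag])

def counting_function(omega, qnm_frequencies):
    """Number of QNM frequencies inside the circle |omega_n| <= omega."""
    return int(np.count_nonzero(np.abs(qnm_frequencies) <= omega))

for omega in (1.0e3, 4.0e3, 8.0e3):
    print(f"omega = {omega:7.0f}   N(omega) = {counting_function(omega, omegas):6d}"
          f"   2 L omega / (c pi) = {2 * L * omega / (c * np.pi):8.1f}")
```

The same routine, applied to the Schwarzschild overtone asymptotics discussed in the next section (constant gap $1/4M$ along the imaginary axis), directly yields the $N_{\ell m}(\omega )\sim 8M\omega $ counting derived there.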
], [ "QNM Weyl's law for black holes", "As discussed at the end section REF , the status of the QNM Weyl's law in a general setting remains an open problem.", "This is in particular the case for non-compactly supported potentials and for the discussion in higher-dimensions.", "In this setting, it seems a quite remarkable feature that QNM of $(3+1)$ -dimensional BHs do indeed satisfy the standard Weyl's law [8] of compact manifolds discussed in section .", "Stationary vacuum BHs are very particular solutions in Einstein theory, as a consequence of the uniqueness theorem [34], showing very special structural features [35].", "One might ask then if satisfying Weyl's law is a consequence of this special character.", "This might be true regarding the large scale (long-wavenumber) aspects of these spacetimes, but it seems not to be the case for the small scales (high-wavenumber).", "Indeed, as discussed in [8], the Weyl's law is robust under high-frequency (actually high-wavenumber) perturbations, in spite of the ultraviolet instability of QNM overtones [26], [8], [36], [26].", "In this section we discuss QNM Weyl's law for asymptotically flat BHs.", "We do not present a derivation of such results (that seems quite a formidable challenge, given the commented difficulties for QNM Weyl's law), but present a verification by straightforward evaluation.", "We dwell in the spherically symmetric case that, under the light of the discussion in section REF , is indeed a strong restriction.", "However, this case contains already genuine non-trivial features and offers key insight of what can be expected in the general case." ], [ "Effective one-dimensional BH potentials", "We focus our Weyl's law discussion in the $(3+1)$ -Schwarzschild BH, proceeding in several stages from the reduction to a one-dimensional effective problem to the full three dimensional Weyl's law.", "Specifically, we consider here QNM frequencies associated to gravitational perturbations.", "Making use of its spherical symmetry, gravitational perturbations in Schwarzschild can be reduced to two independent scalar one-dimensional wave equations of the form (REF ), with $x$ the so-called tortoise coordinate $r^*\\in ]-\\infty , \\infty [$ $\\left(-\\frac{d}{dr^*} + V(r(r^*))\\right) \\phi _{n\\ell m} = \\omega _{n\\ell m}\\phi _{n\\ell m} \\ ,$ where the angular modes $(\\ell , m)$ correspond to the spherical mode decomposition.", "More explicitly, the “axial” and “polar” Schwarzschild gravitational parities have respectively associated the Regger-Wheeler $V^{\\mathrm {RW},s}_{\\ell }(r)$ and Zerilli $V^{\\mathrm {Z}}_{\\ell }(r)$ potentials [37], [38], [39], [9], [40], with $r$ the radial Schwarzschild coordinate $V^{\\mathrm {RW},s}_{\\ell }(r) = \\left(1-\\frac{2M}{r}\\right) \\left(\\frac{\\ell (\\ell + 1)}{r^2} +(1-s^2) \\frac{2M}{r^3} \\right)\\ ,$ for the axial case ($s=0,1,2$ correspond to the scalar, electromagnetic and (linearized) gravitational cases), and $&&V^{\\mathrm {Z}}_{\\ell }(r) = \\left(1-\\frac{2M}{r}\\right) \\nonumber \\\\&&\\left(\\frac{2n^2(n+1)r^3 + 6n^2Mr^2+18nM^2r+18M^3}{r^3(nr+3M)^2} \\right) \\ ,$ with $n = \\dfrac{(\\ell -1)(\\ell +2)}{2} \\ .$ for the polar case (the expression $r^* = r + 2M\\ln (r/2M-1)$ gives the relation between the $r^*$ and the standard radial Schwarzschild coordinate $r$ in the potentials' expressions).", "A key feature of Schwarzschild is its so-called QNM isospectrality, namely, QNMs associated with the $V^{\\mathrm {RW},s}_{\\ell }(r)$ and $V^{\\mathrm {Z}}_{\\ell }(r)$ coincide 
(see [41], [42] for a fine discussion of this issue in terms of underlying integrability properties; see also references therein).", "We can therefore choose either of the two potentials above in our discussion.", "For simplicity one can think of $V^{\mathrm {RW},s}$ with $s=2$ (gravitational case).", "The key point in our discussion is that the relevant one-dimensional potential is of non-compact support.", "Therefore it is not in the class of potentials under the hypothesis of the theorem in [17] leading to the Weyl's asymptotics (REF ).", "Even more, when expressed in terms of the $r^*$ coordinate entering the QNM equation (REF ), the decay is only exponential at $r^*\rightarrow -\infty $ and, even worse, a power-law at $r^*\rightarrow \infty $ .", "Such decays are therefore too slow and we do not fall under the hypotheses of the theorem in [18] leading to (REF ).", "As a consequence, there is no reason a priori to expect that the asymptotics of the QNM counting function $N(\omega )$ for Schwarzschild satisfies a Weyl's law.", "And, in spite of this, it does.", "To show it, let us consider the large-$n$ QNM asymptotics of $\omega _{n\ell m}$ for a fixed mode $(\ell m)$ (QNM frequencies are actually degenerate in the azimuthal mode $m$ , but we keep it in the present QNM counting context).", "Following [43], it holds $2M\omega _{n\ell m} = \pm 0.0874247 + \frac{i}{2}\left(n - \frac{1}{2}\right) + \ldots \ \ (n \gg 1) ,$ which is not only independent of $m$ , but also of $\ell $ .", "As sketched in Fig. REF , QNMs are placed asymptotically parallel to the imaginary axis.", "Therefore, for sufficiently large $n$ , the asymptotics of $N_{\ell m}$ are controlled by (REF ) in a simple manner.", "Concretely, the QNMs $\omega _{n\ell m}$ with fixed $(\ell m)$ are homogeneously distributed parallel to the imaginary axis, with a constant vertical gap $|\Delta \omega _{n\ell m}| =|\omega _{(n+1)\ell m}-\omega _{n \ell m}|=\frac{1}{4M} \ .$ Therefore, for each $(\ell , m)$ and each of the two QNM branches parallel to the imaginary axis, the number of QNMs contained in a circle of radius $\omega $ is approximately given by the simple quotient $\frac{2M \omega }{1/2} = 4 M \omega \ \ , \ \ (\omega \rightarrow \infty ) \ .$ Counting the two branches, we finally have $N_{\ell m}(\omega ) \sim 8M \omega \ \ , \ \ (\omega \rightarrow \infty ) \ .$ This indeed satisfies the one-dimensional Weyl's law (REF ) with the proper $d=1$ dimensionality in the power law and a length scale (independent of the $(\ell m)$ angular numbers) fixed by the relation $2 L/\pi = 8 M$ , that is $L=4\pi M \ \ .$ We do not have a justification for this verification of the Weyl's law in a “strong sense”, apart from the direct verification of the large-$n$ QNM asymptotics found by Nollert.", "In particular, the length $L$ is an effective one fixed by the potential.", "However, since the latter is not compactly supported, it is not clear how to interpret it.", "Some heuristics, lying at a physical speculative level without providing a proper geometric understanding, are presented in [8] and will be expanded elsewhere.", "Figure: Sketch of the figure used in the “counting argument” of QNMs in the Schwarzschild BHs.", "The key features are the asymptotic distribution with a constant gap of QNMs along a line parallel to the imaginary axis, as well as the bound on the real part of the QNM frequencies $\omega _n$
n \\omega _n'sin terms of the real part of the fundamental QNM.As commented above and presented in [8], the Weyl's law (REF ) is robust under `ultraviolet' perturbations of the effective potentials (REF ) and (REF ).", "More specifically, under high-wavenumber (small scales) tiny perturbations of $V(r)$ , the large-$n$ asymptotics of $N(\\omega $ ) keeps a power law with the same power $d=1$ , though the multiplicative constant depends on the perturbation scale (fixed by its wave number), transitioning from the Schwarzschild value to approximately three times that value (cf.", "behaviour of $L^{\\mathrm {W}}$ in Fig.", "2 of [8]).", "It is a remarkable feature that such robustness of the Weyl's law under ultraviolet perturbations happens in spite of the completely different distribution in the complex plane of unperturbed and perturbed QNM overtones: whereas in the unperturbed case QNM overtones distribute asymptotically in parallel to the imaginary axis, in the perturbed case they migrate and 'open' to new branches in the complex plane, with unbounded real part, distributing themselves along (Nollert-Price-like) asymptotically open logarithmic branches Actually, together with this logarithmic QNM branches, another family of “inner QNMs” appear.", "The latter seem to play a key role in the change of the multiplicative length scale in the Weyl's law [8].", "This will be systematically studied elsewhere..", "The rationale for the preservation of Weyl's law in the perturbed case seems to be intimately related to the structure of Regge QNMs in (REF ) that, as discussed above, encodes the Weyl's law in dimension one.", "Specifically, Weyl's law in the ultraviolet perturbed case would be a consequence of the conjecture proposed in [8] and refined in [45], according to which the ultraviolet QNM overtone instability is a general relativity low-regularity phenomenon in which perturbed Nollert-Price overtone branches tend towards Regge QNM branches in the infinite wavenumber limit of the perturbations.", "The key point is that, although the branches dramatically migrate to different regions in the complex plane, the distribution of QNM overtones along the branches remains homogeneous with an approximately constant separation along the branch, so that the simple counting argument presented above and illustrated in Fig.", "REF remains essentially unchanged.", "The detailed analysis of this limit and its consequences on Weyl's law, in particular, the dependence of the length scale in the Weyl's law with the wavenumber of the perturbation, will be presented elsewhere." 
], [ "Three dimensional case: analytic estimation", "We assess now the Weyl's law in the full ($(3+1)$ -dimensional, spherically symmetric) Schwarzschild BH, in particular recovering the power $d=3$ .", "The argument is not fine enough to correctly estimate the multiplicative volume scale in Weyl's law, see below in section REF , but it offers non-trivial insight.", "Expression (REF ) provides the QNM counting function for a fixed $(\\ell , m)$ mode.", "Let us consider now the sum over $(\\ell , m)$ $N(\\omega ) &=& \\sum _{\\ell =2}^{\\ell _{\\mathrm {max}}} \\sum _{m=-\\ell }^{\\ell }N_{\\ell m}= \\sum _{\\ell =2}^{\\ell _{\\mathrm {max}}} \\sum _{m=-\\ell }^{\\ell } 8 M \\omega \\nonumber \\\\&=& 8 M \\omega \\sum _{\\ell =2}^{\\ell _{\\mathrm {max}}} (2\\ell + 1) \\ ,$ where we have made use of the degeneracy in the azimuthal number $m$ .", "The key point is to estimate $\\ell _{\\mathrm {max}}$ (in particular, it is a finite number).", "This depends critically on the specific distribution of QNMs in the complex plane for Schwarzschild.", "Such structure is illustrated, for instance, in the upper panel of Fig.", "REF : The real part of QNM branches of fixed $\\ell $ is bounded by its value for the fundamental mode $\\omega _{0,\\ell }$ , then the two branches cross at the algebraicly special QNM on the imaginary axis and the asymptotic a line parallel to the imaginary axis.", "The key point is that for a given circle of radius $\\omega $ , all QNM branches with $\\mathrm {Re}(\\omega _{0,\\ell })>\\omega $ lie outside that circle (note that horizontal and vertical scales in the figure are the same, so the depicted circle is faithful to the present argument), so they do not contribute to $N(\\omega )$ .", "For a given $\\ell $ , we can estimate the value of $\\mathrm {Re}(\\omega _{0,\\ell })$ from the large-$\\ell $ asymptotics (cf.", "e.g.", "Eq.", "(32) in [9]) $3\\sqrt{3}M\\omega _{n,\\ell } \\sim \\ell + \\frac{1}{2} + i\\left(n +\\frac{1}{2}\\right) \\ \\ , \\ \\ (\\ell \\rightarrow \\infty )$ that are intimately related to the light ring radius, $r=3M$ .", "Therefore, keeping the real part (for $n=0$ ) $3\\sqrt{3}M \\mathrm {Re}(\\omega _{0,\\ell }) \\sim \\ell + \\frac{1}{2} \\ \\ , \\ \\ (\\ell \\rightarrow \\infty )$ As argued above, branches with $\\mathrm {Re}(\\omega _{0,\\ell })>\\omega $ do not enter in the QNM counting, so QNMs with $\\ell > 3\\sqrt{3}M \\omega $ do not contribute to $N(\\omega )$ , this providing the estimate $\\ell _{\\mathrm {max}} \\sim 3\\sqrt{3}M \\omega \\ .$ As seen in Fig.", "REF , this overestimates the counting QNMs for large $\\ell \\le \\ell _{\\mathrm {max}}$ , but has the virtue of relating the counting to the light ring structure.", "We can then write (REF ) as $N(\\omega ) \\sim 8M \\omega \\sum _{\\ell =2}^{3\\sqrt{3}M\\omega } (2\\ell + 1)\\sim 8M \\omega \\cdot 2 \\sum _{\\ell =2}^{3\\sqrt{3}M\\omega } \\ell $ Using $\\displaystyle \\sum _{i=2}^p i \\sim \\sum _{i=1}^p i = \\frac{p(p+1)}{2}\\sim \\frac{p^2}{2}$ (for $p\\gg 1$ ), we get $N(\\omega ) \\sim 8M \\omega \\cdot 2 \\frac{27M^2 \\omega ^2}{2} = (6M)^3 \\omega ^3$ As commented above, the factor $(6M)^3$ overestimates $N(\\omega )$ , so we should rather keep as a proper result of this argument $N \\sim C_{\\mathrm {Schwarz}}\\;\\omega ^3 \\ ,$ with $C_{\\mathrm {Schwarz}}$ a constant.", "We proceed now to estimate it from direct counting in numerically calculated QMNs." 
], [ "Three dimensional case: numerical computation", "Using the hyperboloidal, compactified scheme in [24], [46], [47], [26], [8] for the numerical calculation of QNMs in a spherically symmetric setting, the following result follows from a straightforward computation (with $2\\le \\ell \\le 20$ in the gravitational case, $s=2$ , cf.", "top panel in Fig.", "REF ) $N(s) = \\alpha \\cdot (4M)^3 \\omega ^{3.0} \\ ,$ with $\\alpha \\sim 2.98$ .", "On the one hand, this is consistent with the power law $d=3$ , that we recovered with the analytic estimation in (REF ).", "Regarding the multiplicative factor, it is tempting to approach $\\alpha \\sim 3$ , although this requires the use of more $\\ell $ 's in the calculation and this becomes rapidly challenging from a numerical point of view.", "Under this assumption, the analytical estimation $N^\\mathrm {analytic}(\\omega )$ counting function (REF ) overestimates the numerical estimation $N^\\mathrm {numerical}(\\omega )$ in (REF ) by a factor $\\frac{N^\\mathrm {analytic}(\\omega )}{N^\\mathrm {numerical}(\\omega )}\\sim \\frac{(6M)^3 \\omega ^3}{3(4M)^3 \\omega ^3} = \\frac{9}{8} \\ .$ Given the approximations in the argument leading to (REF ), this seems quite a reasonable estimation, thus supporting the role of the light-ring scale, $L^{\\mathrm {LR}} = 3M$ , as a relevant scale in the QNM counting problem.", "This is consistent, of course, with the key role of the light ring for QNMs (cf.", "e.g. [11]).", "As a curious remark, the factor $9/8$ is the one appearing in the so-called Buchdahl bound requiring $R > (9/8)R_{\\mathrm {Schwarz}}$ (where $R$ is the areal radius) for the stability of static, spherically symmetric matter configurations.", "If this Buchdahl bound plays a role in our QNM discussion, it is for now intriguing and unclear.", "Figure: Numerical study of the QNM Weyl's law for Schwarzschild.Top panel: Numerically calculated BH QNMs for a gravitational perturbation (ss=||spin|| = 2) on the (3+1)(3+1)-dimensionalSchwarzschild spacetime.", "Within the circle |r h ω|=8.1|r_{\\rm h} \\omega | = 8.1, with r h =2Mr_{\\rm h}=2M,one counts the number of QNMs for the angular modes ℓ=2,⋯,20\\ell =2,\\cdots , 20.", "Bottom panel: Weyl's law N(ω)N(\\omega ) in (3+1)(3+1)-dimensionalSchwarzschild BHs for scalar (ss=||spin|| = 0), electromagnetic (ss=||spin|| = 1) and gravitational (ss=||spin|| = 2) perturbations.The asymptotic N(ω)∼|ω| 3 N(\\omega ) \\sim |\\omega |^3 is recovered, while we observe a length scale L W ∼3r h L^{\\rm W}\\sim 3\\,r_h.Finally, the bottom panel of Fig.", "REF shows the results of the Weyl's law for all types of perturbations in the Regge-Wheeler potential (REF ), namely scalar ($s=0$ ), electromagnetic ($=1$ ) and gravitational ($s = 2$ ) perturbations.", "The obtained Weyl's law seem to be independent of the type of perturbation, in spite of the differences in the associated potentials, supporting the idea that the multiplicative scale is a geometric property intrinsic to the underlying Schwarzschild BH spacetime." 
], [ "BH QNM Weyl's law: a proposal", "We have just seen, both using an analytic estimation or a straightforward numerical calculation, that the asymptotics of the counting function $N(\\omega )$ for the Schwarzschild BH is consistent with the classical Weyl's law, with a power law given in terms of the dimension $d$ of spatial sections of the spacetime.", "Given the spherical symmetry of the Schwarzschild BH, this is certainly suggested by the result for spherically symmetric potentials in odd dimensions $d$ discussed in section REF , namely the asymptotics (REF ) for $N(\\omega )$ .", "However there is a key difference Another minor point to highlight is that the Schwarzschild QNM problem is not of the form $(-\\Delta + V(r))\\phi _n = \\omega ^2\\phi _n$ for the Euclidean Laplacian $-\\Delta $ in [30], [31].", ": in contrast with the (key) assumption in [30], [31] on the compact support of the potential, the potentials $V(r)$ in (REF ) and (REF ) are not only of non-compact support, but they present a slow decay.", "It is therefore remarkable that the standard Weyl's law holds.", "We conjecture that this is indeed the case for generic asymptotically flat BH $(d+1)$ -dimensional spacetimes, with odd $d$ , so the asymptotics (REF ) hold.", "This is a bold statement, given the special nature of the spherically symmetric case and in the absence of a study specifically devoted to a non-spherically symmetric case, namely Kerr.", "But it is also in the spirit of the conjecture of Weyl's law by Sommerfeld [49] and Lorentz [50] preceding (and leading) to Weyl's proof one year later [1], [2] (see also the discussion in [4]).", "In the specific case of BH spacetimes, such a proposal is actually encouraged by the genericity and universality properties of stationary BH vacuum solutions.", "Such a universality is akin to the universality of Weyl's law.", "Essentially motivated by structural consistency with the standard Weyl's law (REF ) for bound states of selfadjoint operators in compact manifolds, we can refine a bit further our proposal (REF ) in what refers to the volume factor.", "Specifically, we start from (REF ) and (REF ) and consider the additive splitting of $C_R$ in (REF ).", "As discussed after Eq.", "(REF ), there is a contribution $C_R^V$ from the (convex hull of the) compact-support potential and a contribution $C_R^{\\mathrm {ext}}$ from the “exterior” scattering problem without potential (an “obstacle”).", "We can consider now pushing further and further away the boundary of the support of $V(||x||)$ .", "Then, in a formal sense, non-compact support potentials (as it is the case for Schwarzschild) can be seen as the (singular) limit in which the support of $V(||x||)$ is pushed to infinity and no “exterior region” remains.", "In particular, no contribution of $C_R^{\\mathrm {ext}}$ would enter into $C_R$ , the latter being then given solely by $C_R^V= 2 C_d \\mathrm {Vol}_d(\\mathrm {chsupp}(V))$ .", "Of course, this is just a formal expression, since $\\mathrm {Vol}_d(\\mathrm {chsupp}(V))$ diverges and it should be substituted by an effective volume term $\\mathrm {Vol}_d^{\\mathrm {eff}}$ completely fixed by the BH geometry.", "Upon these considerations, our proposal for the QNM Weyl law for $(d+1)$ -dimensional BH spacetimes, with $d$ odd, is $N(\\omega ) \\sim \\frac{2}{c^d}C_d\\mathrm {Vol}_d^{\\mathrm {eff}} \\omega ^d + o(^{d-1}) \\ \\ , \\ \\ (\\omega \\rightarrow \\infty )$ We stress that $\\mathrm {Vol}_d^{\\mathrm {eff}}$ is a spacetime quantity, not linked with a 
particular spatial foliation of the spacetime: it should be related to the intrinsic properties of the spacetime curvature." ], [ "QNM Weyl's law for the Schwarzschild BH", "We can now revisit the discussion in section under the light of the proposal (REF ).", "We can determine $\mathrm {Vol}_d^{\mathrm {eff}}$ for the Schwarzschild BH from (REF ), from the expression for $N(\omega )$ in (REF ) and from the value of $C_d$ for $d=3$ , namely $\displaystyle C_3 = \frac{1}{6\pi ^2}$ (cf. (REF )).", "Adopting $\alpha =3$ in (REF ), we have the Weyl's law $N(\omega )\sim 3\cdot (4 M)^3\omega ^3 + o(\omega ^2)$ .", "From this it follows that $\mathrm {Vol}_d^{\mathrm {eff}} = (3\pi )^2 (4M)^3 \ .$ This expression is not very illuminating.", "We take an alternative perspective by making use of the thermalization time $T_{\mathrm {therm}}$ introduced in (REF ).", "From a physical perspective (and in the spirit of the reasoning in the black body problem [4]), instead of thinking in terms of a density of states $\rho (\omega )$ , $\rho (\omega ) \sim \frac{N(\omega )}{\mathrm {Volume}} \ ,$ we can consider the related notion of radiation flux per unit of time and area $F(\omega )$ , namely $F(\omega ) \sim \frac{N(\omega )}{\mathrm {Time}\cdot \mathrm {Area}} \ .$ Under this perspective, the multiplicative factor in the Weyl's law naturally has the dimensions $[\mathrm {Time}]\cdot [\mathrm {Area}]$ .", "We can rewrite the factor in (REF ), for $d=3$ , in terms of the product of a thermalization time $T_{\mathrm {therm}}$ and an effective area $\mathrm {Area}_{2}^{\mathrm {eff}}$ , as $\frac{2}{c^3}C_3\mathrm {Vol}_3^{\mathrm {eff}} = \frac{1}{c^{2}}C_3 T_{\mathrm {therm}} \mathrm {Area}^{\mathrm {eff}} \ .$ This poses the question of the natural sphere on which to evaluate $\mathrm {Area}^{\mathrm {eff}}$ .", "Invoking the role of the light-ring radius $R = 3M$ as a relevant (key) scale in the Schwarzschild QNM counting problem, cf. the discussion after Eqs. (REF ) and (REF ), we adopt $\mathrm {Area}^{\mathrm {eff}} = \mathrm {Area}^{\mathrm {LR}} = 4\pi \cdot (3M)^2 \ .$ This expression for $\mathrm {Area}^{\mathrm {eff}}$ , together with Eq. (REF ), fixes the value of the thermalization time, resulting in $T_{\mathrm {therm}} = 32\pi M \ .$ This is a suggestive result, both for its simplicity and for its natural connection to BH `thermal issues'.", "Indeed, if we regard the “radial scale” in $\mathrm {Vol}_3^{\mathrm {eff}}$ as a “linear length”, rather than as a time, it is not natural to explain the presence of the “$\pi $ factor”, which appears naturally related to “angular lengths”.", "On the contrary, when looking at it as a `time', and moreover as a `thermalization time' (precisely because of the factor 2 in QNM Weyl's laws), the factor $\pi $ appears naturally as connected to the thermal aspects of BH physics.", "Indeed, the factor $\displaystyle \frac{\kappa }{8\pi }$ in the BH first law (with $\kappa =(4M)^{-1}$ the BH surface gravity), $\delta M = \left(\frac{\kappa }{8\pi }\right)\cdot \delta \mathrm {Area}^{\mathrm {Hor}} \ ,$ is precisely the inverse of the thermalization time $T_{\mathrm {therm}}$ (REF ), $T_{\mathrm {therm}} = \frac{8\pi }{\kappa } \ .$ This connection with BH thermodynamics further supports the interpretation of $T_{\mathrm {therm}}$ as a thermalization time (heuristic speculations on the possible connections between $T_{\mathrm {therm}}$ and the BH evaporation process and Hawking radiation
will be discussed elsewhere) and the change of perspective implicit in the passage from the “density of states” perspective in (REF ) to the “flux” one in (REF ).", "From a purely BH dynamical perspective, we can rewrite the first law (REF ) in a dynamical rather than a variational version, $\frac{d\mathrm {Area}^{\mathrm {Hor}}}{dt} = (32\pi M) \frac{dM}{dt} = T_{\mathrm {therm}} \frac{dM}{dt} \ ,$ which relates this purely QNM thermalization notion, $T_{\mathrm {therm}}$ , to the rate of change of the BH physical quantities.", "In this “thermal setting”, the QNM Weyl's law for the Schwarzschild BH adopts the following form, which highlights the relevant mechanisms underlying the description of the BH resonances: $N(\omega ) \sim \frac{1}{c^2}\left(\frac{1}{6\pi ^2}\right) T_{\mathrm {therm}} \cdot \mathrm {Area}^{\mathrm {LR}} \omega ^3 + o(\omega ^{2})\ ,$ when $\omega \rightarrow \infty $ .", "Remarkably, inspired by this thermal setting (but fully agnostic and independent of thermal issues), a proper intrinsic geometric expression can be formulated for the Schwarzschild QNM Weyl's law.", "Making use of the Smarr formula [52] (valid for general vacuum stationary BHs), we can rewrite the first law (REF ) as an integrated expression, namely $M = 2 \left(\frac{\kappa }{8\pi } \right)\cdot \mathrm {Area}^{\mathrm {Hor}} \ .$ The factor 2 is a consequence of the application of Euler's theorem for homogeneous functions (Kerr's expression for $M$ is homogeneous of degree $1/2$ in the $A$ variable).", "Then $T_{\mathrm {therm}} = 2 \cdot \frac{\mathrm {Area}^{\mathrm {Hor}}}{M} \ ,$ and we can write (REF ) as $N(\omega ) \sim \frac{2}{c^3}\left(\frac{1}{6\pi ^2}\right) \left(\frac{\mathrm {Area}^{\mathrm {Hor}}\cdot \mathrm {Area}^{\mathrm {LR}}}{M}\right)\omega ^3 + o(\omega ^{2})\ .$ This is a genuine geometric expression, depending solely on BH intrinsic quantities.", "In particular, it is oblivious to any thermal reasoning (though the latter has proved a key catalyst leading to expression (REF )).", "We note the fine intertwining between Euler's theorem and BH quantities, leading to the reintroduction of the factor 2 in the BH QNM Weyl's law when re-expressed in purely geometric terms.", "This leads in particular to the identification of the effective volume $\mathrm {Vol}_d^{\mathrm {eff}}$ in (REF ) as $\mathrm {Vol}_d^{\mathrm {eff}} = \left(\frac{\mathrm {Area}^{\mathrm {Hor}}\cdot \mathrm {Area}^{\mathrm {LR}}}{M}\right) \ ,$ a much more transparent expression than (REF )."
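For convenience, the numerical consistency of these identifications can be verified in a few lines of algebra (units $G=c=1$, with $\mathrm{Area}^{\mathrm{Hor}}=16\pi M^2$, $\mathrm{Area}^{\mathrm{LR}}=4\pi(3M)^2$ and $\kappa=(4M)^{-1}$):

```latex
% Consistency check of the thermal and geometric identifications (G = c = 1).
\begin{align*}
T_{\mathrm{therm}} &= \frac{8\pi}{\kappa} = 32\pi M
  = 2\,\frac{\mathrm{Area}^{\mathrm{Hor}}}{M} \,,\\
\mathrm{Vol}_3^{\mathrm{eff}} &= \frac{\mathrm{Area}^{\mathrm{Hor}}\cdot \mathrm{Area}^{\mathrm{LR}}}{M}
  = \frac{(16\pi M^2)(36\pi M^2)}{M} = 576\,\pi^2 M^3 = (3\pi)^2 (4M)^3 \,,\\
\frac{2}{c^3}\,C_3\,\mathrm{Vol}_3^{\mathrm{eff}} &= \frac{2}{6\pi^2}\,(3\pi)^2 (4M)^3 = 3\,(4M)^3 \,,
\end{align*}
```

which reproduces the multiplicative factor $3\,(4M)^3$ of the numerical fit with $\alpha=3$.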
], [ "Generic BHs: a heuristic (formal) proposal ", "Unfortunately, expressions (REF ) and (REF ) do not extend straightforwardly to the generic case BH, due to the generic absence of light-rings.", "Indeed, in the generic case, the region of trapping for light geodesics is no longer a surface but a volume.", "The intuition gained from (REF ) and (REF ) suggests that the effective volume $\\mathrm {Vol}_d^{\\mathrm {eff}}$ in Weyl's law (REF ) should know about an (effective) measure of the light trapping region.", "Formally, for a general ($d+1$ )-dimensional BH, one can write the following version of QNM Weyl's law $N(\\omega ) \\sim \\frac{1}{c^{d-1}}C_d\\left(T_{\\mathrm {therm}}\\cdot \\mathrm {Area}_{d-1}^{\\mathrm {eff-LT}}\\right)\\omega ^d + o(^{d-1}) \\ ,$ (for $\\omega \\rightarrow \\infty $ ) as an analogue for (REF ) and $N(\\omega ) \\sim \\frac{2}{c^{d}}C_d\\mathrm {Vol}_d^{\\mathrm {eff-LT}} \\omega ^d + o(^{d-1}) \\ \\ , \\ \\ (\\omega \\rightarrow \\infty )$ with $T_{\\mathrm {therm}}$ given by the well-defined BH geometric expression (REF ) and where $\\mathrm {Area}_{d-1}^{\\mathrm {eff-LT}}$ and $\\mathrm {Vol}_d^{\\mathrm {eff-LT}}$ are, respectively, effective area and volume associated with the light-trapped region, related as $2\\cdot \\mathrm {Vol}_d^{\\mathrm {eff-LT}} = c T_{\\mathrm {therm}}\\cdot \\mathrm {Area}_{d-1}^{\\mathrm {eff-LT}}$ .", "How such an effective $\\mathrm {Vol}_d^{\\mathrm {eff-LT}}$ is related to the actual volume of the light-trapping region is unclear, since an appropriate notion of “convex hull” for such a region (taking into account the presence of the BH horizon) could be needed." ], [ "Conclusions", "We have discussed the Weyl's law for BH QNMs introduced in ref.", "([8]), showing its consistency with the standard Weyl's law for bound states in self-adjoint problems.", "In particular, for $(d+1)$ -dimensional BHs it recovers the power-law dependence on the (space) dimension $d$ .", "Regarding the multiplicative factor in the Weyl's law, we have proposed that the effective volume playing the role of the finite volume associated with compact support potentials in the standard Weyl's law, is controlled by the size of BH light-trapping region.", "Such a BH QNM Weyl's law can be used in two directions.", "On the one hand, given the BH spacetime dimension and having a candidate for the BH light-trapping region, one can estimate the asymptotics of the number of QNMs and from there infer expressions for a notions of “QNM density states” or “QNM radiation flux”.", "Perhaps more interestingly, one could proceed in the reverse direction.", "Assuming a (hypothetical future) direct observational measurement of sufficiently many high-overtones in gravitational wave signals detected by interferometric gravitational antennae, an observational QNM counting function $N^{\\mathrm {obs}}(\\omega )$ could be constructed.", "Using the proposed BH QNM Weyl's law one could then infer: i) an observational estimation of spacetime dimension, $(d^{\\mathrm {obs}}+1)$ , and ii) an estimation of the relevant (length/time) scale in the BH QNM resonant phenomenon.", "These quantities thus provide observational probes into the geometry of BH spacetimes.", "Regarding future perspectives, we can comment on the following lines: i) Assessment of the QNM BH Weyl's law in the non-spherically symmetric case, namely in the Kerr BH.", "This is crucial, since part of our inferences (e.g.", "the dimension $d$ in QNM Weyl's law) heavily rely in results that could be specific of 
the spherically symmetric case.", "ii) Related to point i), assessment of the QNM Weyl's law expression (REF ) in the spherically symmetric Reissner-Nordström case.", "This is a non-trivial test of our discussion.", "iii) Systematic study of the impact of the ultraviolet BH QNM overtone instability on the BH QNM Weyl's law, in particular assessing the “phase transition” in [8], associated with the change of the Weyl's law (light-trapping) scale as the wavenumber of the perturbations increases.", "iv) Systematic assessment of the proposed BH QNM Weyl's law in more general scenarios: BHs in higher dimensions, different spacetime asymptotics (Anti-de Sitter, de Sitter...), dependence on the type of perturbation (scalar, electromagnetic, gravitational...), et cetera.", "Acknowledgments.", "We would like to thank Rémi M. Mokdad and Johannes Sjöstrand.", "This work was supported by the French “Investissements d'Avenir” program through project ISITE-BFC (ANR-15-IDEX-03), the ANR “Quantum Fields interacting with Geometry” (QFG) project (ANR-20-CE40-0018-02), the EIPHI Graduate School (ANR-17-EURE-0002), the Spanish FIS2017-86497-C2-1 project (with FEDER contribution), the European Research Council Grant ERC-2014-StG 639022-NewNGR “New frontiers in numerical general relativity\" and the European Commission Marie Sklodowska-Curie grant No 843152 (Horizon 2020 programme).", "The project used Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT, and CCuB computational resources (université de Bourgogne)." ] ]
2212.05570
[ [ "DISCO: Adversarial Defense with Local Implicit Functions" ], [ "Abstract The problem of adversarial defenses for image classification, where the goal is to robustify a classifier against adversarial examples, is considered.", "Inspired by the hypothesis that these examples lie beyond the natural image manifold, a novel aDversarIal defenSe with local impliCit functiOns (DISCO) is proposed to remove adversarial perturbations by localized manifold projections.", "DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at the location.", "It is implemented with an encoder and a local implicit module, where the former produces per-pixel deep features and the latter uses the features in the neighborhood of query pixel for predicting the clean RGB value.", "Extensive experiments demonstrate that both DISCO and its cascade version outperform prior defenses, regardless of whether the defense is known to the attacker.", "DISCO is also shown to be data and parameter efficient and to mount defenses that transfers across datasets, classifiers and attacks." ], [ "Introduction", "It has long been hypothesized that vision is only possible because the natural world contains substantial statistical regularities, which are exploited by the vision system to overcome the difficulty of scene understanding [12], [49], [119], [42], [146], [43], [58], [130], [137], [101], [13], [142].", "Under this hypothesis, natural images form a low-dimension manifold in image space, denoted as the image manifold, to which human vision is highly tuned.", "While deep neural networks (DNNs) [140], [52], [167], [141], [131] aim to classify natural images with human-like accuracy, they have been shown prone to adversarial attacks that, although imperceptible to humans, significantly decrease their performance.", "As shown in Fig.", "REF (a) and (b), these attacks typically consist of adding an imperceptible perturbation, which can be generated in various manners [46], [88], [17], [79], to the image.", "Over the past few years, the introduction of more sophisticated attacks has exposed a significant vulnerability of DNNs to this problem [138], [38], [37], [29], [29], [125].", "In fact, it has been shown that adversarial examples crafted with different classifiers and optimization procedures even transfer across networks [34], [117], [159], [61], [108].", "A potential justification for the success of adversarial attacks and their transferability is that they create images that lie just barely outside the image manifold [135], [124], [81], [45], [71], [11].", "We refer to these images as barely outliers.", "While humans have the ability to project these images into the manifold, probably due to an history of training under adverse conditions, such as environments with low-light or ridden with occlusions, this is not the case for current classifiers.", "A key factor to the success of the human projection is likely the accurate modeling of the image manifold.", "Hence, several defenses against adversarial attacks are based on models of natural image statistics.", "These are usually global image representations [92], [139], [124], [166] or conditional models of image pixel statistics [135], [11], [145], [122], [74].", "For example, PixelDefend [135] and HPD [11] project malicious images into the natural image manifold using the PixelCNN model [74], which predicts a pixel value conditioned on the remainder of the image.", "However, these strategies [135], [11], [92], [139], [124] can be easily 
defeated by existing attacks.", "We hypothesize that this is due to the difficulty of learning generative image models, which require global image modelling, a highly complex task.", "It is well known that the synthesis of realistic natural images requires very large model sizes and training datasets [69], [14], [70].", "Even with these, it is not clear that the manifold is modeled in enough detail to defend adversarial attacks.", "In this work, we argue that, unlike image synthesis, the manifold projection required for adversarial defense is a conditional operation: the synthesis of a natural image given the perturbed one.", "Assuming that the attack does not alter the global structure of the image (which would likely not be imperceptible to humans) it should suffice for this function to be a conditional model of local image (i.e.", "patch) statistics.", "We argue that this conditional modeling can be implemented with an implicit function [24], [134], [94], [121], [100], [73], [65], [25], where the network learns a conditional representation of the image appearance in the neighborhood of each pixel, given a feature extracted at that pixel.", "This strategy is denoted aDversarIal defenSe with local impliCit functiOns (DISCO).", "Local implicit models have recently been shown to provide impressive results for 3D modeling [134], [94], [121], [100], [160], [73], [65], [25], [93], [109], [44], [156] and image interpolation [24].", "We show that such models can be trained to project barely outliers into the patch manifold, with much smaller parameter and dataset sizes than generative models, while enabling much more precise control of the manifold projection operation.", "This is illustrated in Fig.", "REF , which presents an image, its adversarial attack, and the output of the DISCO defense.", "To train DISCO, a dataset of adversarial-clean pairs is first curated.", "During training, DISCO inputs an adversarial image and a query pixel location, for which it predicts a new RGB value.", "This is implemented with a feature encoder and a local implicit function.", "The former is composed by a set of residual blocks with stacked convolution layers and produces a deep feature per pixel.", "The latter consumes the query location and the features in a small neighborhood of the query location.", "The implicit function is learned to minimize the $L_1$ loss between the predicted RGB value and that of the clean image.", "The restriction of the manifold modeling to small image neighborhoods is a critical difference with respect to previous defenses based on the modeling of the natural image manifold.", "Note that, as shown in Fig.", "REF , DISCO does not project the entire image into the manifold, only each pixel neighborhood.", "This considerably simplifies the modeling and allows a much more robust defense in a parameter and data efficient manner.", "This is demonstrated by evaluating the performance of DISCO under both the oblivious and adaptive settings [163], [115], [161].", "Under the oblivious setting, the popular RobustBench [27] benchmark is considered, for both $L_{\\infty }$ and $L_{2}$ attacks with Autoattack [29].", "DISCO achieves SOTA robust accuracy (RA) performance, e.g.", "outperforming the prior art on Cifar10, Cifar100 and ImageNet by 17%, 19% and 19% on $L_{\\infty }$ Autoattack.", "A comparison to recent test-time defenses [164], [4], [120], [28], [90], [99] also shows that DISCO is a more effective defensive strategy across various datasets and attacks.", "Furthermore, a study of the 
defense transferability across datasets, classifiers and attacks shows that the DISCO defense maintains much of its robustness even when deployed in a setting that differs from that used for training by any combination of these three factors.", "Finally, the importance of the local manifold modeling is illustrated by experiments on ImageNet [35], where it is shown that, even when trained with only 0.5% of the dataset, DISCO outperforms all prior defenses.", "Under the adaptive setting, DISCO is evaluated using the BPDA [8] attack, known to circumvent most defenses based on image transformation [135], [124], [139], [161], [40].", "While DISCO is more vulnerable under this setting, where the defense is known to the attacker, it still outperforms existing approaches by 46.76%.", "More importantly, we show that the defense can be substantially strengthened by cascading DISCO stages, which magnifies the gains of DISCO to 57.77%.", "This again leverages the parameter efficiency of the local modeling of image statistics, which allows the implementation of DISCO cascades with low complexity.", "The ability to cascade DISCO stages also allows a new type of defense, where the number of DISCO stages is randomized on a per-image basis.", "This introduces some degree of uncertainty about the defense even under the adaptive setting and further improves robustness.", "Overall, this work makes four contributions.", "First, it proposes the use of defenses based purely on the conditional modeling of local image statistics.", "Second, it introduces a novel defense of this type, DISCO, based on local implicit functions.", "Third, it leverages the parameter efficiency of the local modeling to propose a cascaded version of DISCO that is shown to be robust even to adaptive attacks.", "Finally, DISCO is shown to outperform prior defenses on RobustBench [27] and 11 other attacks, as well as test-time defenses under various experimental settings.", "Extensive ablations demonstrate that DISCO has unmatched defense transferability in the literature, across datasets, attacks and classifiers."
], [ "Related Work", "Adversarial Attack and Defense.", "We briefly review adversarial attacks and defenses for classification and prior art related to our work.", "Please refer to [19], [103], [2] for more complete reviews.", "Adversarial Attacks aim to fool the classifier by generating an imperceptible perturbation (under $L_p$ norm constraint) that is applied to the clean image.", "Attack methods have evolved from simple addition of sign gradient, as in FGSM [46], to more sophisticated approaches [79], [17], [138], [144], [88], [29], [38], [37], [29].", "While most white-box attacks assume access to the classifier gradient, BPDA [8] proposed a gradient approximation attack that can circumvent defenses built on obfuscated gradients.", "In general, these attacks fall into two settings, oblivious or adaptive, depending on whether the attacker is aware of the defense strategy [163], [115], [161].", "DISCO is evaluated under both settings.", "Adversarial Defenses can be categorized into adversarially trained and transformation based.", "The former are trained against adversarial examples generated on-the-fly during training [114], [48], [47], [113], [126], [104], [126], allowing the resulting robust classifier to defend against the adversarial examples.", "While adversarially trained defenses dominate the literature, they are bound together with the classifier.", "Hence, re-training is required if the classifier changes and the cost of adversarial training increases for larger classifiers.", "Transformation based defenses [50], [161], [64], [135], [139] instead introduce an additional defense module, which can be applied to many classifiers.", "This module preprocesses the input image before passing it to the classifier.", "The proposed preprocessing steps include JPEG compression [33], [40], [87], bit reduction [50], [161], [64], pixel deflection [111] or applications of random transformations [158], [50] and median filters [102].", "Beyond pixel space defenses, malicious images can also be reconstructed to better match natural image statistics using autoencoders [92], [139], GANs [124], [7], [166], or other generative models, such as the PixelCNN [122].", "The latter is used to project the malicious image into the natural image manifold by methods like PixelDefend [135] or HPD [11].", "These methods can only produce images of fixed size [92], [124], [139], [166] and model pixel likelihoods over the entire image [135], [11].", "This is unlike DISCO, which models conditional local statistics and can produce outputs of various size.", "The idea of performing adversarial purification before feeding the image into the classifier is central to a recent wave of test-time defenses [164], [4], [90], [99].", "[164] addresses the impracticality of previous Monte-Carlo purification models by introducing a Denoising Score-Matching and a random noise injection mechanism.", "[4] prepends an anti-adversary layer to the classifier, with the goal of maximizing the classifier confidence of the predicted label.", "[90] reverses the adversarial examples using self-supervised contrastive loss.", "[99] proposed a diffusion model for adversarial removal.", "Unlike these prior works, DISCO purifies the adversarial image by modeling the local patch statistics.", "Such characteristics results in data and parameter efficiency, which have not been demonstrated for [164], [4], [90], [99].", "Furthermore, DISCO outperforms all prior works in terms of robust accuracy, under the various settings they proposed.", "Implicit 
Function.", "refers to the use of a neural network to model a continuous function [134].", "This has been widely used in applications involving audio [176], [51], [134], 2D images [24], [39] and 3D shapes [134], [94], [121], [100], [160], [73], [65], [25], [93], [82], [109], [44], [156].", "In the 3D literature, local implicit functions have become popular models of object shape [100], [121], [109] or complex 3D scenes [94].", "This also inspired 2D applications to super-resolution [24], image [68] and video generation [165].", "In the adversarial attack literature, implicit functions have recently been proposed to restore adversarial point clouds of 3D shape, through the IF-Defense [156].", "To the best of our knowledge, ours is the first paper to propose local implicit functions for 2D adversarial defense.", "Table: NO_CAPTION" ], [ "Method", "In this section, we introduce the architecture of DISCO and its training and testing procedure." ], [ "Motivation", "Under the hypothesis that natural images lie on a low-dimension image manifold, classification networks can be robustified by learning to project barely outliers (i.e.", "adversarial images) into the manifold, a process that can be seen as manifold thickening.", "Images in a shell around the manifold are projected into it, leaving a larger margin to images that should be recognized as outliers.", "While this idea has been studied [45], [71], [85], [63], its success hinges on the ability of classification models to capture the complexities of the image manifold.", "This is a very hard problem, as evidenced by the difficulty of model inversion algorithms that aim to synthesize images with a classifier [148], [89], [98], [162].", "These algorithms fail to produce images comparable to the state of the art in generative modeling, such as GANs [70], [69], [15].", "Recently, however, it has been shown that it is possible to synthesize realistic images and 3D shapes with implicit functions, through the use of deep networks [44], [24], [134], [94], [121], [100], [65], [25] that basically memorize images or objects as continuous functions.", "The assumption behind DISCO is that these implicit functions can capture the local statistics of images or 3D shapes, and can be trained for manifold thickening, that is to learn how to projecting barely outliers into the image manifold." 
], [ "Model Architecture and Training", "Data Preparation To train DISCO, a dataset $D=\\lbrace (x^i_{cln}, x^i_{adv})\\rbrace _{i=1}^N$ containing a set of paired clean $x^i_{cln}$ and adversarial $x^i_{adv}$ images is curated.", "For this, a classifier $P_{trn}$ and an attack procedure $A_{trn}$ are first selected.", "For each image $x^i_{cln}$ , the adversarial image $x^i_{adv}$ is generated by attacking the predictions $P_{trn}(x^i_{cln})$ using $A_{trn}$ , as shown in Fig.", "REF (a).", "Training As shown in Fig.", "REF (b), the DISCO defense is trained with pairs of random patches cropped at the same location of the images $x_{cln}$ and $x_{adv}$ .", "For example, random patches of size $48\\times 48$ are sampled from training pairs of the ImageNet dataset [35].", "The network is then trained to minimize the L1 loss between the RGB values of the clean $x_{cln}$ and defense output $x_{def}$ .", "Architecture DISCO performs manifold thickening by leveraging the LIIF [25] architecture to purify adversarial patches.", "It takes an adversarial image $x_{adv} \\in \\mathbb {R}^{H\\times W \\times 3}$ and a query pixel location $p=[i,j] \\in \\mathbb {R}^2$ as input and predicts a clean RGB value $v \\in \\mathbb {R}^3$ at the query pixel location, ideally identical to the pixel value of the clean image $x_{cln} \\in \\mathbb {R}^{H\\times W \\times 3}$ at that location.", "The defense output $x_{def} \\in \\mathbb {R}^{H^{\\prime }\\times W^{\\prime } \\times 3}$ can then be synthesized by predicting a RGB value for each pixel location in a grid of size ${H^{\\prime }\\times W^{\\prime } \\times 3}$ .", "Note that it is not a requirement that the size of $x_{def}$ be the same as that of $x_{cln}$ .", "In fact, the size of $x_{def}$ could be changed during inference.", "Figure: The DISCO architecture includes an encoder and a local implicit module.", "The network is trained to map Adversarial into Defense images, using an L 1 L_1 loss to Clean images.To implement this, DISCO is composed of two modules, illustrated in Fig.", "REF .", "The first is an encoder $E$ that extracts the per-pixel feature of an input image $x$ .", "The encoder architecture resembles the design of EDSR [84], originally proposed for super-resolution.", "It contains a sequence of 15 residual blocks, each composed of a convolution layer, a ReLu layer and a second convolution layer.", "The encoder output is a feature map $f=E(x) \\in \\mathbb {R}^{H\\times W \\times C}$ , with $C=64$ channels.", "The feature at location $p=[i,j]$ is denoted as $f_{ij}$ .", "The second module is the the local implicit module $L$ , which is implemented by a MLP.", "Given query pixel location $p$ , $L$ first finds the nearest pixel value $p^*=[i^*,j^*]$ in the input image $x$ and corresponding feature $f_{i^*j^*}$ .", "$L$ then takes the features in the neighborhood of $p^*$ into consideration to predict the clean RGB value $v$ .", "More specifically, let $\\hat{f}_{i^*j^*}$ denote a concatenation of the features whose location is within the kernel size $s$ of $p^*$ .", "The local implicit module $L$ takes the concatenated feature, the relative position $r=p-p^*$ between $p$ and $p^*$ , and the pixel shape as input, to predict a RGB value $v$ .", "By default, the kernel size $s$ is set to be 3.", "Since the network implements a continuous function, based only on neighboring pixels, the original grid size $H\\times W$ is not important.", "The image coordinates can be normalized so that $(i,j) \\in [-1,1]^2$ and the pixel shape is the height 
], [ "Inference", "For inference, DISCO takes either a clean or an adversarial image as input.", "Given a specified output size for $x_{def}$ , DISCO loops over all the output pixel locations, predicting an RGB value per location.", "Note that this is not computationally intensive because the encoder feature map $f=E(x)$ is computed once and used to predict the RGB values of all query pixel locations.", "Furthermore, while the training pairs are generated with classifier $P_{trn}$ and attack $A_{trn}$ , the inference-time classifier $P_{tst}$ and attack $A_{tst}$ could be different.", "In the experimental section we show that DISCO is quite flexible, performing well when (1) $P_{trn}$ and $P_{tst}$ consume images of different input size and (2) the attack, classifier and dataset used for inference are different from those used for training.", "In fact, DISCO is shown to be more robust to these configuration changes than previous methods." ], [ "DISCO Cascades", "DISCO is computationally very appealing because it disentangles the training of the defense from that of the classifier.", "This can be a big practical advantage, since classifier retraining is needed whenever training settings, such as architecture, hyper-parameters, or number of classes, change.", "Adversarially trained defenses require retraining on the entire dataset when this is the case, which is particularly expensive for large models (like SENet [55] or EfficientNet [141]) trained on large datasets (like ImageNet [35] and OpenImages [75]).", "Unsurprisingly, RobustBench [27], one of the largest adversarial learning benchmarks, reports more than 70 baselines for Cifar10, but fewer than 5 on ImageNet.", "DISCO does not have this defense complexity, since it is trained independently of the classifier.", "Furthermore, because DISCO is a model of local statistics, it is particularly parameter efficient.", "As shown in Fig. REF , DISCO has a lightweight design with only 1.6M parameters, which is significantly fewer than in most recent classifier [141], [55], [131], [52] and GAN [70], [14] models with good performance for ImageNet-like images.", "This also leads to a computationally efficient defense.", "Our experiments show that DISCO can be trained with only 50,000 training pairs.", "In fact, we show that it can beat the prior SOTA using less than 0.5% of ImageNet as training data (Table REF ).", "One major benefit of this efficiency is that it creates a large imbalance between the costs of defense and attack.", "Consider memory usage, which is dominated by the computation of gradients needed for either the attack or the backpropagation of training.", "Let $N_d$ and $N_c$ be the number of parameters of the DISCO network and classifier, respectively.", "The per-image memory cost of training the DISCO defense is $O(N_d)$ .", "On the other hand, the attack cost depends on the information available to the attacker.", "We consider two settings, commonly considered in the literature.", "In the oblivious setting, only the classifier is known to the attacker and the attack has cost $O(N_c)$ .", "In the adaptive setting, both the classifier and DISCO are exposed and backpropagation has memory cost $O(N_c + N_d)$ .", "In experiments, we show that DISCO is quite effective against oblivious attacks.", "Adaptive attacks are more challenging.", "However, as shown in Fig. REF , it is usually the case that $N_c >
N_d$ , making the complexity of the attack larger than that of the defense.", "This is unlike adversarial training, where attack and defense require backpropagation on the same model and thus have the same per-image cost.", "This asymmetry between the memory cost of the attack and defense under DISCO can be magnified by cascading DISCO networks.", "If $K$ identical stages of DISCO are cascaded, the defense complexity remains $O(N_d)$ but that of the attack raises to $O(N_c + K N_d)$ .", "Hence the ratio of attack-to-defense cost raises to $O(K + N_c/N_d)$ .", "Interestingly, our experiments (see Section REF ) show that when $K$ is increased the defense performance of the DISCO cascade increases as well.", "Hence, DISCO cascades combine high robust accuracy with a large ratio of attack-to-defense cost." ], [ "Experiments", "In this section, we discuss experiments performed to evaluate the robustness of DISCO.", "Results are discussed for both the oblivious and adaptive settings [163], [115], [161] and each result is averaged over 3 trials.", "$\\epsilon _{p}$ denotes the perturbation magnitude under the $L_p$ norm.", "All experiments are conducted on a single Nvidia Titan Xp GPU with Intel Xeon CPU E5-2630 using Pytorch [110].", "Please see appendix for more training details, quantitative results and visualizations.", "We adopt the code from LIIF [24] for implementation.", "Training Dataset: The following training configuration is used unless noted.", "Three datasets are considered: Cifar10 [76], Cifar100 [77] and Imagenet [35].", "For each, 50,000 adversarial-clean training pairs are curated.", "For Cifar10 and Cifar100, these are the images in the training set, while for ImageNet, 50 images are randomly sampled per class.", "Following RobustBench [27], the evaluation is then conducted on the test set of each dataset.", "To create training pairs, PGD [88] ($\\epsilon _{\\infty }=8/255$ with step size is 2/255 and the number of steps 100) is used to attack a ResNet18 and a WideResNet28 on Cifar10/ImageNet and Cifar100, respectively.", "Attack and Benchmark: DISCO is evaluated on RobustBench [27], which contains more than 100 baselines evaluated using Autoattack [29].", "This is an ensemble of four sequential attacks, including the PGD [88] attack with two different optimization losses, the FAB attack [26] and the black-box Square Attack [5].", "DISCO is compared to defense baselines under both $L_{\\infty }$ and $L_{2}$ norms.", "To study defense generalization, 11 additional attacks are considered, including FGSM [46], BIM [79], BPDA [8] and EotPgd [86].", "Note that DISCO is not trained specifically for these attacks.", "Metric: Standard (SA) and robust (RA) accuracy are considered.", "The former measures classification accuracy on clean examples, the latter on adversarial.", "The average of SA and RA is also used.", "Table: NO_CAPTIONFigure: Comparison of DISCO to No Defense, Adversarially Trained, and Transformation based baselines.", "(a) Cifar10, (b) Cifar100, and (c) ImageNet.", "Top-row: trade-off between SA and RA.", "Bottom row: average accuracy of each of theRobustBench baselines and DISCO." 
], [ "Oblivious Adversary", "SOTA on RobustBench: DISCO achieves SOTA performance on RobustBench.", "Table REF and REF compare DISCO to the RobustBench baselines on Cifar10 under $L_{\\infty }$ and $L_2$ Autoattack, respectively.", "Baselines are categorized into (1) no defense (first block), (2) adversarially trained (second block) and (3) transformation based (third block).", "The methods presented in each table are those of highest RA performance in each category.", "The full table is given in the supplemental, together with those of Cifar100 and ImageNet.", "Note that the RobustBench comparison slightly favours the adversarially trained methods, which use a larger classifier.", "A detailed comparison to all RobustBench baselines is given in Fig.", "REF , for the three datasets.", "The upper row visualizes the trade-off between SA and RA.", "The bottom row plots the averaged SA/RA across baselines.", "Blue, green, cyan and red indicate no defense, adversarially trained baselines, transformation based baselines and DISCO, respectively.", "These results show that, without a defense, the attack fools the classifier on nearly all examples.", "Adversarially trained baselines improve RA by training against the adversarial examples.", "Some of these [114], [48], [47], [67], [113], [57], [153], [136], [155] also leverage additional training data.", "Transformation based defenses require no modification of the pre-trained classifier and can generalize across attack strategies [124], [139].", "While early methods (like Jpeg Compression [40] and Bit Reduction [161]) are not competitive, recent defenses [139] outperform adversarially trained baselines on Cifar100 and ImageNet.", "DISCO is an even more powerful transformation-based defense, which clearly outperforms the prior SOTA RA by a large margin (17 % on Cifar10, 19 % on Cifar100 and 19 % on ImageNet).", "In the upper row of Fig.", "REF , it clearly beats the prior Pareto front for the SA vs. RA trade-off.", "Table REF and Table REF also show that previous transformation based methods tend to perform better for $L_2$ than $L_{\\infty }$ Autoattack.", "DISCO is more robust, achieving similar RA for $L_2$ and $L_{\\infty }$ Autoattacks.", "Table: Defense Transfer of L ∞ L_{\\infty } trained defenses to L 2 L_{2} attacks on Cifar10.", "Top block: adversarially trained, Bottom block: transformation based.Improving SOTA Methods: While DISCO outperforms the SOTA methods on RobustBench, it can also be combined with the latter.", "Table REF shows that adding DISCO improves the performance of top three ResNet50 robust baselines for ImageNet [123], [41], [152] by 16.77 (for RA) and 8.12 (for averaged SA/RA) on average.", "This demonstrates the plug-and-play ability of DISCO.", "Comparison to Test-Time Defenses First, we compare DISCO to four recent test-time defenses.", "Following the setup of [164], DISCO is evaluated on Cifar10 using a WRN28-10 network under the PGD40 attack ($\\epsilon =8/255$ ).", "While [164] reported an RA of 80.24 for the default setting, DISCO achieves 80.80, even though it is not optimized for this experiment and has much fewer parameters (1.6M vs 29.7M).", "Second, a comparison to [4], [90] under Autoattack, shows that [4] achieves RAs of 79.21/40.68 and [90] of 67.79/33.16 on the Cifar10/Cifar100 datasets.", "These numbers are much lower than those reported for DISCO (85.56/67.93) on Table 1 & Appendix Table C. 
Third, under the APgd [29] attack, [4] achieves 80.65/47.63 RA on Cifar10/Cifar100 dataset, while DISCO achieves 85.79/77.33 (Appendix Table E & Table 3).", "This shows that DISCO clearly outperforms [4] on two different attacks and datasets.", "Finally, like DISCO, [99] compares to defenses in RobustBench.", "For Cifar10 and a WRN28-10 classifier, [99] achieves 70.64/78.58 RA under $\\epsilon _\\infty =8/255$ and $\\epsilon _2=0.5$ respectively, while DISCO achieves 85.56/88.47 (Table 1 & Table 2).", "On ImageNet, [99] achieves 40.93/44.39 RA with ResNet50/WRN50, while DISCO achieves 68.2/69.5 (Appendix Table D).", "In summary, DISCO outperforms all these approaches in the various settings they considered, frequently achieving large gains in RA.", "Dataset Size: Table REF shows the SA and RA performance of DISCO when training pairs are sampled from a random subset of the ImageNet classes (100 and 500).", "Compared to the ImageNet SOTA [123] RA of 38.14% (See Appendix), DISCO outperforms the prior art by 21.7% (59.84 vs 38.14) when trained on about 0.5% of the ImageNet training data.", "Defense Transferability The transferability of the DISCO defense is investigated across attacks, classifiers and datasets.", "Transfer across Attacks.", "RobustBench evaluates the model on Autoattack, which includes the PGD attack used to train DISCO.", "Table REF summarizes the transfer performance of DISCO, trained with PGD attacks, when subject to ten different $L_{\\infty }$ attacks at inference.", "This is compared to the transfer peformance of the two top Cifar100 baselines on RobustBench.", "DISCO outperforms these baselines with an average RA gain greater than 24.49%.", "When compared to the baseline that uses the same classifier (WRN28-10), this gain increases to 29.5%.", "Fig.", "REF visualizes the gains of DISCO (red bar) on Cifar10 and ImageNet.", "Among 3 datasets and 10 attacks, DISCO outperforms the baselines on 24 results.", "The average gains are largest on Cifar100 and ImageNet, where the RA of the prior approaches is lower.", "Note that the defense is more challenging on ImageNet, due to the higher dimensionality of its images [128].", "The full table can be found in the supplemental.", "We next evaluate the transfer ability of DISCO trained with the $L_{\\infty }$ PGD attacks to four $L_2$ norm inference attacks: FGSM [46], BIM [79], CW [17] and DeepFool [95].", "Table REF compares the defense transferability of DISCO to both adversarially trained (top block) and transformation baselines (lower block).", "DISCO generalizes well to $L_2$ attacks.", "It can defend more attacks than adversarially trained baselines (top block) and is more robust than the prior SOTA transformation based defenses.", "Beyond different test attacks $A_{tst}$ on a PGD-trained DISCO, we also evaluated the effect of changing the training attack $A_{trn}$ used to generate the adversarial-clean pairs on which DISCO is trained.", "In rows 3-5 of Table REF , PGD [88], BIM [79] and FGSM [46] are used to generate training pairs, while Autoattack is used as testing attack.", "BIM and PGD have comparable results, which are stronger than those of FGSM.", "Nevertheless all methods outperform the SOTA RobustBench defense  [114] for Autoattack on Cifar10, shown in the first row.", "These results suggests that DISCO is robust to many combinations of training and inference attacks.", "Table: NO_CAPTIONTable: NO_CAPTIONTransfer across Classifiers.", "The first section of Table REF shows the results when the testing classifier is 
different from the training classifier.", "While the ResNet18 is always used to curate the training pairs of DISCO, the testing classifier varies between ResNet18, WideResNet28 and VGG16.", "The small impact of the classifier used for inference on the overall RA shows that DISCO is classifier agnostic and can be applied to multiple testing classifiers once it is trained.", "Transfer across Datasets.", "The evidence that adversarial attacks push images away from the natural image manifold [175], [83], [139], [40], [63], [85] and that attacks can be transferred across classifiers [34], [150], [154], [60], [149], suggest that it may be possible to transfer defenses across datasets.", "This, however, has not been studied in detail in the literature, partly because adversarially trained baselines entangle the defense and the classifier training.", "This is unlike DISCO and other transformation based baselines, which can be transferred across datasets.", "The bottom section of Table REF shows the test performance on Cifar10 of DISCO trained on Cifar100 and ImageNet.", "Since Cifar100 images are more similar to those of Cifar10 than ImageNet ones, the Cifar100 trained DISCO transfers better than that trained on ImageNet.", "However, even the RA of the latter is 7.72% higher than the best RA reported on RobustBench [114].", "Note that the DISCO trained on Cifar100 and ImageNet never see images from Cifar10 and the transfer is feasible because no limitation is imposed on the output size of DISCO." ], [ "Adaptive Adversary", "The adaptive adversary assumes both the classifier and defense strategy are exposed to the attacker.", "As noted by [8], [139], [143], this setting is more challenging, especially for transformation based defenses.", "We adopt the BPDA [8] attack, which is known as an effective attack for transformation based defenses, such as DISCO.", "Fig.", "REF compares the RA of DISCO trained with PGD attack to the results published for other methods in [139].", "For fair comparison, DISCO is combined with a VGG16 classifier.", "The figure confirms that both prior transformation defenses and a single stage DISCO ($K=1$ ) are vulnerable to an adaptive adversary.", "However, without training against BPDA, DISCO is 46.76% better than prior methods.", "More importantly, this gain increases substantially with $K$ , saturating at RA of 57.77% for $K=5$ stages, which significantly outperforms the SOTA by 57.35%." 
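To make the adaptive-adversary setting concrete, the sketch below shows how a BPDA-style attack is typically mounted against a purification defense such as DISCO: the defense is applied in the forward pass and its backward pass is approximated by the identity so gradients reach the input image. The `defense` and `classifier` modules are placeholders, and the attack hyper-parameters are illustrative rather than the exact configuration used in the paper.

```python
import torch

class BPDAIdentity(torch.autograd.Function):
    """Straight-through wrapper: forward applies the (possibly
    non-differentiable) purification defense, backward treats it
    as the identity so gradients flow back to the input image."""

    @staticmethod
    def forward(ctx, x, defense):
        with torch.no_grad():
            return defense(x)

    @staticmethod
    def backward(ctx, grad_output):
        # identity approximation of the defense's Jacobian
        return grad_output, None


def bpda_pgd(x, y, defense, classifier, eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD on classifier(defense(x)) using the BPDA gradient (sketch)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = classifier(BPDAIdentity.apply(x_adv, defense))
        loss = torch.nn.functional.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)   # L_inf projection
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```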
], [ "Cascade DISCO", "So far, we have considered the setting where the structure of the cascade DISCO is known to the attacker.", "DISCO supports a more sophisticated and practical setting, where the number $K$ of DISCO stages used by the defense is randomized on a per-image basis.", "In this case, even if the use of DISCO is exposed to the attacker, there is still uncertainty about how many stages to use in the attack.", "We investigated the consequences of this uncertainty by measuring the defense performance when different values of $K$ are use for attack and defense, denoted as $K_{adv}$ and $K_{def}$ , respectively.", "The oblivious setting has $K_{adv}=0$ and $K_{def}\\ge 1$ , while $K_{adv}=K_{def}$ in the adaptive setting.", "We now consider the case where $K_{adv} \\ne K_{def}$ .", "Fig.", "REF investigates the effectiveness of cascade DISCO trained with PGD attack when faced with the BPDA [8] attack in this setting, where RA($K_{adv}$ ,$K_{def}$ ) is the RA when $K_{adv}$ and $K_{def}$ are used, and $K_{adv} \\in \\lbrace i\\rbrace _{i=0}^5$ , $K_{def} \\in \\lbrace i\\rbrace _{i=1}^3$ .", "Under the setting of $K_{adv} \\ne K_{def}$ , the RA is higher than that of the adaptive setting.", "Take $K_{adv}=2$ for example.", "Both RA(2,1)=55.3 and RA(2,3)=59.8 outperform RA(2,2)=52.", "In addition, Fig.", "REF compares the time to generate a single adversarial example on Cifar10 and defend against it using DISCO.", "Clearly, the computational resources needed to generate an attack are significantly higher than those of the defense and the ratio of attack-to-defense cost raises with $K$ .", "Both this and the good defense performance for mismatched $K$ s give the defender a strong advantage.", "It appears that the defense is more vulnerable when the attacker knows $K$ (adaptive setting) and even there, as seen in the previous section, the defense can obtain the upper hand by casacading several DISCO stages.", "Table: NO_CAPTION" ], [ "Discussion, Societal Impact and Limitations", "In this work, we have proposed the use of local implicit functions for adversarial defense.", "Given an input adversarial image and a query location, the DISCO model is proposed to project the RGB value of each image pixel into the image manifold, conditional on deep features centered in the pixel neighborhood.", "By training this projection with adversarial and clean images, DISCO learns to remove the adversarial perturbation.", "Experiments demonstrate DISCO's computational efficiency, its outstanding defense performance and transfer ability across attacks, datasets and classifiers.", "The cascaded version of DISCO further strengthens the defense with minor additional cost.", "Limitations: While DISCO shows superior performance on the attacks studied in this work (mainly norm-bounded attacks), it remains to be tested whether it is robust to other type of attacks [138], [16], [56], [54], [80], such as one pixel attack [138], patch attacks [16], [56] or functional adversarial attack [80].", "In addition, more evaluation configurations across attacks, datasets and classifiers will be investigated in the future.", "Societal Impact: We hope the idea of using local implicit functions can inspire better defenses and prevent the nefarious effects of deep learning attacks.", "Obviously, better defenses can also be leveraged by bad actors to improve resistance to the efforts of law enforcement, for example." 
], [ "Acknowledgments", "This work was partially funded by NSF awards IIS1924937 and IIS-2041009, a gift from Amazon, a gift from Qualcomm, and NVIDIA GPU donations.", "We also acknowledge and thank the use of the Nautilus platform for some of the experiments discussed above.", "Appendix" ], [ "Compare to SOTA in RobustBench", "In this section, we list the quantitative result of the baselines in RobustBench [27] .", "Table REF , REF and REF correspond to Fig.6(a), (b) and (c) of the main paper, respectively.", "Table REF shows the baselines under Autoattack with $\\epsilon _{2}=0.5$ .", "The index displayed in each table corresponds to the index shown in Fig.6 in the main paper.", "The baselines of each table are grouped into No defense (first block), Adversarially trained defense in RobustBench (second block), Transformation based defense (third block) and DISCO (last block).", "The results of adversarially trained baselines are copied from RobustBench, while the results of transformation-based defenses are obtained with our implementation.", "For STL [139], models with different sparse constraints $\\lambda $ are used from the publicly available STL githubhttps://github.com/GitBoSun/AdvDefense_CSC.", "DISCO is also combined with various classifiers for evaluation.", "More discussion can be found in Sec.", "4.1 of the paper.", "Table: Cifar10 baselines and DISCO under Autoattack (ϵ ∞ =8/255\\epsilon _{\\infty }=8/255).", "This table corresponds to Fig.", "6(a) in the main paper.Table: Cifar10 baselines and DISCO under Autoattack (ϵ 2 =0.5\\epsilon _{2}=0.5).Table: Cifar100 baselines and DISCO under Autoattack (ϵ ∞ =8/255\\epsilon _{\\infty }=8/255).", "This table corresponds to Fig.", "6(b) in the main paper.Table: ImageNet baselines and DISCO under Autoattack (ϵ ∞ =4/255\\epsilon _{\\infty }=4/255).", "This table corresponds to Fig.", "6(c) in the main paper." ], [ "Defense Transfer", "In this section, we discuss the qualitative results of DISCO transferability across attacks.", "Table REF , REF and REF represents the results for Cifar10, Cifar100 and ImageNet, respectively.", "The corresponding plots are illustrated in Fig.", "REF , REF and REF .", "More discussion can be found in Sec.", "4.1 of the paper.", "Table: NO_CAPTIONTable: NO_CAPTIONTable: NO_CAPTION" ], [ "Improving Cifar10 and Cifar100 SOTA on RobustBench", "Sec.", "4.1 in the main paper shows that DISCO can improve the prior SOTA defenses on the ImageNet dataset.", "In Table REF , we further investigate the gain of applying DISCO on SOTA Cifar10 and Cifar100 defenses.", "The first and second block of Table REF show the gains of applying DISCO on [114], which is the prior SOTA defense against $L_2$ and $L_{\\infty }$ Autoattack on Cifar10.", "DISCO also improves the prior SOTA defense [47] on Cifar100 by 2.89%.", "These results indicate that, beyond being a robust defense by itself, DISCO can also be applied to existing defenses to improve their robustness." 
], [ "Kernel Size s", "In this section, we ablate the kernel size used to train DISCO on ImageNet.", "The kernel size $s$ controls the feature neighborhood forwarded to the local implicit module.", "Table REF shows that $s=3$ achieves the best performance, which degrades for $s=5$ by a significant margin (3.26%).", "This shows that while tasks like classification require large and global receptive fields, the projection of adversarial images into the natural image manifold can be done on small neighborhoods.", "Given that the complexity of modeling the manifold increases with the neighborhood size, it is not surprising that larger $s$ lead to weaker performance.", "This is consistent with the well known complexity of synthesizing images with global models, such as GANs.", "What is somewhat surprising is that even $s=1$ is sufficient to enable a robust defense.", "By default, we use $s=3$ in all our experiments." ], [ "Computation Time for STL and DISCO", "Table REF compares the inference time of STL [139], DISCO and cascade DISCO (from $K=$ 2 to 5) on Cifar10 and ImageNet.", "For a single image Cifar10 of size 32x32, STL requires an Cifar10 5.9$\\times $ (0.65 vs 0.011) larger than that of DISCO ($K$ =1).", "When cascade DISCO is used, inference time increases approximately linearly with $K$ .", "For a single ImageNet image of size 224, STL requires 23.71 seconds while DISCO (K=1) only requires 0.027.", "The inference time difference increases to $878.15\\times $ (23.71 vs 0.027) on ImageNet , which is significantly larger than that of Cifar10 (5.9$\\times $ ).", "This shows that DISCO is a better defense in the sense that it can handle widely varying input image sizes with minor variations of computing cost." ], [ "Training Details", "On Cifar10 and Cifar100, we train the DISCO for 40 epochs.", "On ImageNet, DISCO is only trained for 3 epochs because ImageNet images are larger and produce more random crops.", "The learning rate is set to 0.0001 and the Adam optimizer is used in all experiments.", "All experiments are conducted using Pytorch [110].", "All time measurements, for both baselines and DISCO, are made on a single Nvidia Titan Xp GPU with Intel Xeon CPU E5-2630, with batch size 1 and averaged over 100 images." 
], [ "Adopted Code and Benchmark", "In this section, we list the url links that are used for training and evaluating DISCO.", "To create the adversarial-clean training pairs, we adopt the code from TorchAttackhttps://adversarial-attacks-pytorch.readthedocs.io/en/latest/ and Areshttps://github.com/thu-ml/ares, which support the multiple attack methods.", "These attack methods are then used to attack pretrained classifiers on Cifar10, Cifar100 and ImageNet.", "We use the ResNet18 classifiers from Ares for Cifar10, the WideResNet Cifar100 classifiers from this repository https://github.com/xternalz/WideResNet-pytorch and the ResNet18 ImageNet classifiers of Pytorch [110].", "To evaluate DISCO, we adopt Autoattack from RobustBench [27]https://github.com/RobustBench/robustbench and compare to the pretrained defenses on the RobustBench leaderboard.", "In addition to Autoattack, we use the AdverTorchhttps://github.com/BorealisAI/advertorch library to implement the BPDA attack [8] and the TorchAttackhttps://adversarial-attacks-pytorch.readthedocs.io/en/latest/ library for other attacks, like FGSM [46] and BIM [79].", "For the adversarially trained defense baselines, we adopt the pretrained weights from RobustBench [27]https://github.com/RobustBench/robustbench, while the codes for transformation based baselines are adopted from Ares, Cifar autoencoder https://github.com/chenjie/PyTorch-CIFAR-10-autoencoder and STL [139].", "To implement DISCO, we use code from LIIFhttps://github.com/yinboc/liif [24]." ], [ "Visualizations", "DISCO defense outputs against FGSM [46] and BIM [79] and PGD [88] attacks are visualized in Fig.", "REF , REF and REF , respectively.", "Take Fig.", "REF for example.", "The first and second rows show the clean and adversarial images, while rows 3-5 show the output of DISCO and cascade DISCO ($K=2$ and $K=3$ ).", "Clearly, both DISCO and its cascade version can effectively remove the adversarial perturbation.", "Note that these images are produced from the same DISCO model without retraining for any attack.", "Figure: Comparison of Clean image, Adversarial image and DISCO output from K=K= 1 to 3 under FGSM attack.Figure: Comparison of Clean image, Adversarial image and DISCO output from K=K= 1 to 3 under BIM attack.Figure: Comparison of Clean image, Adversarial image and DISCO output from K=K= 1 to 3 under PGD attack." ] ]
2212.05630
[ [ "Nearby SNR: a possible common origin to multi-messenger anomalies in\n spectra, ratios and anisotropy of cosmic rays" ], [ "Abstract The multi-messenger anomalies, including spectral hardening or excess for nuclei, leptons, ratios of $\\bar p/p$ and B/C, and anisotropic reversal, were observed in past years.", "AMS-02 experiment also revealed different spectral break for positron and electron at 284 GeV and beyond TeV respectively.", "It is natural to ask whether all those anomalies originate from one unified physical scenario.", "In this work, the spatially-dependent propagation (SDP) with a nearby SNR source is adopted to reproduce above mentioned anomalies.", "There possibly exists dense molecular cloud(DMC) around SNRs and the secondary particles can be produced by pp-collision or fragmentation between the accelerated primary cosmic rays and DMC.", "As a result, the spectral hardening for primary, secondary particles and ratios of $B/C$ and $\\bar p/p$ can be well reproduced.", "Due to the energy loss at source age of 330 kyrs, the characteristic spectral break-off for primary electron is at about 1 TeV hinted from the measurements.", "The secondary positron and electron from charged pion take up $5\\%$ energy from their mother particles, so the positron spectrum has a cut-off at $\\sim$250 GeV.", "Therefore, the different spectral break for positron and electron together with other anomalies can be fulfilled in this unified physical scenario.", "More interesting is that we also obtain the featured structures as spectral break-off at 5 TV for secondary particles of Li, Be, B, which can be served to verify our model.", "We hope that those tagged structures can be observed by the new generation of space-borne experiment HERD in future." ], [ "Introduction", "The origin of cosmic rays (CRs) has been a centurial mystery since its discovery.", "The scientists have been always devoted to resolve this problem.", "With new generation space-borne and ground-based experiments, CRs measurements are stepping into an era of high precision and a series of new phenomena in spectra, ratio of secondary-to-primary and anisotropy, as multi-messenger anomalies, are revealed now.", "It may be an effective way to pinpoint the origin problem by joint study of multi-messenger information.", "Firstly, the nuclei spectra have been measured with unprecedent precision and a fine structure of spectral hardening at 200 GV has been discovered by ATIC-2, CREAM and PAMELA experiments [85], [86], [35], [103], [14].", "Lately, AMS-02 experiment also confirmed it [18], [19] and further revealed that other heavy nuclei including secondary particles have similar anomaly [23], [22], [24], [27], [28], [29].", "More interesting is that the spectral break-off around $\\sim $ 14 TeV was observed by CREAM, NUCLEON and DAMPE experiments [103], [48], [49], [46].", "Furthermore, recent spectral measurement of Helium showed that the drop-off starts from $\\sim $ 34 TV, which supported the rigidity dependent cut-off [37].", "Several kinds of models have been proposed to explain the origin of spectral hardening, including the nearby source [94], [75], [76], [87], [101], the combined effects from different group sources and the spatially-dependent propagation(SDP)[69], [70], [77], [79].", "Considering the break-off at the rigidity of 14 TV in spectrum, it seems that the nearby source model becomes natural and accessible.", "However, other observational clues are required to support this point of view.", "Secondly, the spectra of 
positrons and electrons are another good choice to shed new light on this topic.", "The famous spectral excess of positrons above 20 GeV was discovered by the PAMELA experiment [13].", "Then the AMS-02 experiment confirmed this remarkable result [11], [25].", "Just recently, a sharp drop-off at 284 GeV was observed by the AMS-02 experiment with above 4$\sigma $ confidence level [25].", "As for electrons, the measurement by the AMS-02 experiment showed that the energy spectrum cannot be described by a single power-law form; the power index changes at about 42 GeV.", "Contrary to the positron flux, which has an exponential energy cutoff of about 810 GeV, at the 5$\sigma $ level the electron flux does not have an energy cutoff below 1.9 TeV [26].", "However, the drop-off around 1 TeV in the total spectrum of positrons and electrons was first reported by the HESS collaboration [31], [33] and validated by the MAGIC [54] and VERITAS [92] experiments.", "The DAMPE experiment also performed a direct measurement of this feature and announced that the break-off is at $\sim $ 0.9 TeV [59].", "It is obvious that the spectra of positrons and electrons have extra components at high energy, similar to the nuclei.", "The nearby source is also an alternative for interpreting the above features.", "One of the natural advantages of positrons and electrons is their fast cooling in the interstellar radiation field (ISRF), which can roughly determine the source distance and age.", "For example, the spectral cut-off at $\sim $ TeV requires that the age of the nearby source is about 330 kyrs [64], [72], [108], [78], which sets a much stricter constraint on its location.", "Where the nearby source is, and its rough direction, may point out a promising new road.", "Lastly, the anisotropy of CRs is one of the best choices to fulfill this role.", "Thanks to the unremitting efforts of ground-based experiments, the measurements of the large-scale anisotropy have made great progress from hundreds of GeV to several PeV [41], [44], [2], [1], [51].", "It is obvious that the phase reverses at 100 TeV and that the direction roughly points to the local magnetic field and the Galactic Center (GC) below and above 100 TeV respectively.", "Coincidentally, the amplitude has a dip structure at $\sim $ 100 TeV, starting from $\sim $ 10 TeV.", "The most important thing is that there exists a common transition energy scale between the structures of the energy spectra and the anisotropies.", "The local source possibly plays a very important role in resolving the joint problems of the spectra and anisotropies.", "Furthermore, the direction of the anisotropy can roughly outline the position of such a local source.", "In our recent work, we proposed a local source under the SDP model to reproduce the co-evolution of the spectra and anisotropies and found that the optimal candidate for the local source is possibly a SNR at Geminga’s birth place [76], [87].", "Based on the above discussions, a local source is necessary to understand the multi-messenger anomalies.", "However, the latest observations bring new challenges to this model, such as the different spectral break-offs of positrons and electrons and a series of results on nuclei spectra and ratios that were published by the AMS-02 experiment recently.", "Therefore, a systematic study is useful to understand those new observations.", "More importantly, a featured structure is required to verify this model.", "In this work, a unified physical scenario, the SDP with a nearby source, the Geminga SNR, is adopted to reproduce all the 
above anomalies.", "Simultaneously, we obtain a tagged structure that can be used to examine our model.", "The paper is organized as follows.", "Section 2 briefly describes the model and method, Section 3 presents all the calculated results and Section 4 gives the conclusion." ], [ "Model and Method Description", "The CRs in the solar system come from two parts: the global one from the galactic background sources (bkg) and the local one from nearby SNRs (loc SNR).", "For the background sources, it is viable to assume that the spatial distribution of CRs from them arrives at a steady state.", "Nevertheless, for the single nearby SNR, the time-dependent transport of CRs after injection is required.", "Furthermore, the dense molecular cloud (DMC) plays a key role in star formation, which means that there possibly exist DMCs around SNRs [32], [99], [107].", "In our model, we assume that a DMC or dense interstellar medium (DISM) exists around the nearby SNR.", "Therefore, the general physical picture can be sketched in three steps.", "Firstly, the nuclei and electrons can be accelerated to very high energy (VHE) together during the SNR explosion.", "Then the VHE CR nuclei and electrons undergo interactions with the DMC and the ISRF and produce secondary particles.", "Lastly, the primary and secondary particles travel through interstellar space and experience a long journey in the galaxy.", "Eventually, a small fraction will arrive at the Earth and be observed by the various experiments.", "This picture is illustrated by the cartoon in Figure REF .", "Here, the spectral break-off of primary nuclei and electrons, with an exponential form, is adopted to be 5 TV in order to reproduce the bump structure of the proton observed at $\sim $ 14 TV by the DAMPE satellite experiment [46].", "The primary electrons suffer energy-loss cooling by scattering off the ISRF over a time of around 330 kyrs [91], [80], [61], which leads to the sharp drop of the electron spectrum in the energy region of $\sim $ TeV as observed by DAMPE, HESS, MAGIC and VERITAS [31], [33], [54], [92], [59].", "Anti-protons and positrons are produced in the pp-collisions between the VHE protons and the DMC.", "The spectral break-off of the primary nuclei at the rigidity of 5 TV causes a cut-off of the secondary positrons at about $\sim $ 300 GeV.", "Simultaneously, the $\gamma $ -rays from $\pi ^0$ decay in the pp-collisions are produced and have a spectral cut-off around 500 GeV.", "In addition, the heavy nuclei undergo fragmentation on the DISM or DMC, and secondary nuclei such as $Li, Be, B$ are produced.", "The secondary nuclei from the fragmentation inherit the properties of their mother particles and have the same morphology of energy break-off around 5 TV, which can serve to differentiate this scenario from other models [109], [77].", "The detailed descriptions of the cosmic ray propagation, the Galactic background sources, and the nearby SNR are given in the following appendices A, B and C respectively.", "Table: The parameters of transport spectrum with SDP model.Table: Injection parameters of the background and local source.Figure: The cartoon illustration to describe the production mechanism of the multi-messenger anomalies: the spectral hardening and break-off at rigidity of 200 GV and 14 TV for both primary and secondary nuclei, the energy cut-off for positron and electron at around 300 GeV and 1 TeV respectively." 
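A back-of-the-envelope check of the "330 kyrs leads to a ~TeV electron break" argument, assuming Thomson-regime losses dE/dt = -bE^2 with a total radiation-plus-magnetic energy density of about 1 eV cm^-3 (a typical local value; the paper treats the energy losses numerically within the SDP framework):

```python
# Order-of-magnitude estimate of the electron cooling break, not the full SDP calculation.
b = 1.0e-16          # GeV^-1 s^-1, synchrotron + inverse-Compton loss rate for
                     # ~1 eV cm^-3 of field + ISRF energy density (assumed value)
t = 3.3e5 * 3.15e7   # source age: 330 kyr expressed in seconds

E_break = 1.0 / (b * t)   # energy above which electrons have cooled away, E ~ 1/(b t)
print(f"E_break ~ {E_break:.0f} GeV")   # ~ 1e3 GeV, i.e. around 1 TeV
```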
], [ "Results", "Based on above discussions, the spectra of primary, secondary and all particles are calculated to reproduce the measurements.", "Simultaneously, the ratios between primary to primary, secondary to primary and secondary to secondary are presented.", "For the sake of completeness of this work, we also give the anisotropy for CR nuclei and electrons.", "In the model calculations, the parameters of propagation and inject spectra both for local source and galactic ones are listed in Table REF and REF respectively." ], [ "Spectra", "The CR spectra are the most important effects to understand their propagation in the galaxy.", "Thanks to the new generations of spaced-borne experiments, the spectral measurements of CRs are stepping into precise era and revealed a series of new phenomena, such as the spectral hardening at 200 GV and break-off at $\\sim $ 14 TV, the famous excess of positron and extra-component of electron at high energy.", "In this section, the unified model calculations are described to reproduce and understand the measurements.", "Figure REF and REF show the measurements and model calculations for most of nuclei species individually.", "In the model calculations, the red solid line with shadow is the contribution from local SNR, the blue solid line is from galactic background sources and the black solid line is the sum of them.", "It is obvious that the spectral hardening can be reproduced well for all the species and its origin dominantly originates from the contribution of local source.", "To satisfy the energy break-off of proton at the rigidity of 14 TV observed by DAMPE experiment [46], the injection spectrum of local source is parameterized as a cutoff power-law form, $q_{\\rm inj}({\\cal R})=q_0{\\cal R}^{-\\nu ^{\\prime }}\\exp (-{\\cal R}/{\\cal R}^{\\prime }_{\\rm c})$ , where the normalization $q_0$ and spectral index $\\nu ^{\\prime }$ are determined through fitting to the CR energy spectra.", "The parameter ${\\cal R}^{\\prime }_{\\rm c}$ is adopted to be 15 TV, which leads to the bend of CR spectra starting around the rigidity of 5 TV.", "For detailed parameter information about the spectra, please refer to Table REF .", "Figure REF shows that the spectral break-off of proton is consistent well between model expection and data points from DAMPE and CREAM measurements [103], [46], but the Helium species has a little difference with DAMPE measurement and is roughly consistent with CREAM observations at several highest energy points.", "The reason is that the measured spectral break-off from DAMPE for Helium species is at the rigidity of $\\sim $ 34 TV, which is a little higher than proton under the Z-dependent energy cut-off frame [37].", "To keep the uniformity of all the nuclei, we choose the cut-off rigidity at 15 TV, which doesn't affect the results of other heavier primary nuclei and the conclusions.", "Under this physical scenario, our model expects that all the heavier primary nuclei have the same rigidity break-off at 5 TV as shown in Figure REF , which can be observed by the HERD experiment in future [73].", "Figure: The comparison between model calculations and experimental measurements for proton and Helium spectra as (a) and (b) panel.", "Here the red shadow shape is from the contribution of local SNR (e.g.", "Geminga), the blue solid line comes from the background components and the black solid line is the total contribution.", "The data points are measured by AMS-02, CREAM and DAMPE experiments, , , , .Figure: Similar to Figure , the 
individual species of C,O,Ne,Mg,SiC, O, Ne, Mg, Si and FeFe from panel (a) to (f).", "The data points are from the AMS-02 experiment, , ." ], [ "Secondary Particles", "Following the primary species, the secondary particles spectra also have two components as the local and the global sources.", "Figure REF gives the spectra of $\\bar{p}$ , $Li, Be$ and $B$ from panel (a) to (d).", "The model calculations are well consistent with the observations from AMS-02 [20], [23].", "The hardening of $\\bar{p}$ spectrum starts from tens of GeV owing to 200 GV hardening of its mother proton.", "For the species of $Li, Be$ and $B$ , they are produced through fragmentation of heavier nuclei, such as $C, O$ and so on.", "They keep the same behaviour as their mother particles with the hardening at 200 GV and break-off at around $\\sim $ 10 TV.", "The typical energy break-off for secondary particles is pivotal to validate the dominant interactions around source regions for the nearby source, which are probe to unveil the origin of positron and $\\bar{p}$ excess at high energy.", "We hope that the spectral bend around $\\sim $ TeV can be observed by HERD experiment in near future.", "Figure: Similar to Figure , the secondary particles of p ¯,Li,Be\\bar{p}, Li, Be and BB from panel (a) to panel (d).", "The data points are from the AMS-02 experiment., ." ], [ "Two components particles", "There are also some special particles, such as $N, Na, Al$ , including both the primary and secondary components.", "They are thought to be produced both in galactic background sources, and by the collisions of heavier nuclei with the interstellar medium (ISM) ($\\rm O\\rightarrow \\rm N+ \\rm X$ , $\\rm Mg\\rightarrow \\rm Na+ \\rm X$ , $\\rm Si\\rightarrow \\rm Na+ \\rm X$ , $\\rm Si\\rightarrow \\rm Al+ \\rm X$ )[67], [53], [93].", "As shown in Figure REF , the red solid line with shadow is the contribution from primary component accelerated directly by local SNR, the green solid line with shadow indicates the secondary component produced by the collisions of heavier nuclei with ISM, the blue solid line is from galactic background sources and the black solid line is the sum of them.", "It can be seen that the model calculations are agreement with the observations, and their spectra also have a break-off structure similar to that of primary particles at $\\sim 5$ TV.", "Figure: Similar to Figure , the individual species of N,Na,AlN, Na, Al, including both primary and secondary components, from panel (a) to (c).", "The data points are from the AMS-02 experiment, ." 
], [ "All particles", "The space-borne experiments have decisive advantages in the seperation ability for different species.", "However they have limited effective detector area, which lead to the lower statistical event numbers in high energy.", "On the contrary, the ground-based experiments have opposite properties.", "This makes that the spectral measurements for individual species step into dilemma above tens of TeV.", "The all-particle spectra can make up this shortcoming to constrain the high energy contributions.", "Figure REF shows the model calculations and measurements for all-particle spectrum.", "In the calculations, four groups as $H+He, C+N+O, Ne+Mg+Si$ and $Fe$ are demonstrated in blue, orange, green and red solid lines.", "It is clear that the knee structure of the all-particle spectrum can be properly reproduced by the background component assuming a Z-dependent cutoff with $R_c \\sim 7$ PV.", "In this case, the light species of protons and He nuclei dominantly composes the knee structure.", "This is because we try to fit the KASCADE spectra of proton and Helium, which was also favoured by the diffuse $\\gamma $ -ray measurement at ultra-high energy by AS$\\gamma $ experiment [45].", "Figure: The observed and calculated all-particle spectra.", "In the model calculations, the solid blue, orange, green, red and black line represent the P+He, C+N+O, Ne+Mg+Si, Fe species and all-particle spectra respectively.", "The data points are measured by TALE , Tibet-ASγ\\gamma , ICETOP , experiments and from weighted by Horandel ." ], [ "Positron and Electron", "It is a hot topic for the positron excess and electron spectral hardening at high energy.", "Similar to nuclei secondary particles, the positron is also composed of two parts as around Geminga SNR and global background component from pp-collision.", "Figure REF shows the spectra of positron, electron, their sum and the ratio of positron to the sum of positron and electron.", "The model calculations are good reproduction of experimental data.", "Particularly, the positron will take $5\\%$ energy from its mother species proton in pp-collision.", "Owing to the spectral break-off around 5 TeV of proton, the positron spectrum has cut-off around 250 GeV and successfully reproduce the AMS-02 measurements as shown in panel (a).", "As the discussion in section 2, the age of Geminga SNR is $3.3\\times 10^5$ yrs, which leads to the energy break of electron spectrum about TeV as shown in panel (b) [50].", "Then the difference of energy break-off between positron and electron can be naturally understood.", "For the ratio of positron to the sum of positron and electron, it increases with increasing energy above about TeV, which results from the cut-off of the total electron spectrum at $\\sim $ TeV.", "Figure: Similar to the Figure , the spectra of positron, electron and their sum from panel (a) to (c) and the ratio of positron to the sum of positron and electron in panel (d).", "The data points are adopted from AMS-02 experiments , ." ], [ "Ratios", "The ratios are important to understand the acceleration, propagation and interaction properties of CRs.", "Thanks to the unprecedented precise measurements from AMS-02, the ratios of primary to primary, secondary to secondary and secondary to secondary species can be well measured and clearly shown the difference [22], [24], [28], [27], [29].", "In this section, the corresponding model calculations are obtained to reproduce those observations." 
], [ "Primary to primary species", "The ratios of primary to primary species carry the acceleration information in the source region.", "The model of diffusive shock acceleration predicts that the individual species should have identical power law spectrum [90], [30].", "Figure REF shows the comparison between model calculations and observations for the ratios of primary to primary species for $He/O$ , $C/O$ ,$N/O$ , $Ne/O$ , $Mg/O$ , $Si/O$ , $Ne/Mg$ , $Si/Mg$ , $Na/Si$ , $Al/Si$ , $Fe/He$ , $Fe/O$ and $Fe/Si$ .", "The red solid lines with shadow indicate the contributions from the nearby SNR.", "The black solid lines represent the ratio results of model expectation, taking into account the contributions from the nearby source.", "In fact, the individual spectrum has well reproduced the observations as shown in Figure REF , REF and REF , so the ratios should be also consistent between data and model calculations.", "Figure REF lists the ratios of primary to primary species, which has good consistency with observations." ], [ "secondary to primary species", "It is believed that most of the secondary nuclei originate from collisions of CRs with ISM in propagation.", "Therefore the information about CR propagation can be extracted from comparison between the spectra of secondary particles and those of primary CRs [105], [104], [106].", "Figure REF shows the ratios of secondary to primary species for $\\bar{p}/p$ , $Li/C$ , $Be/C$ , $B/C$ , $Li/O$ , $Be/O$ and $B/O$ .", "The blue solid lines represent the ratios of background component of secondary particles to the total amount of primary particles, the red solid lines show the ratios of nearby SNR component of secondary particles to the total amount of primary particles and the black solid line is the ratio of total secondary to total primary.", "The model calculations work well to reproduce the observations.", "Here except $\\bar{p}/p$ with energy independent (constant) distribution above 10 GeV, all other heavier nuclei have hardening above 200 GeV for model calculations.", "Just recently, the similar hardening has also been discovered in $B/C$ and $B/O$ above 100 GeV/n by DAMPE experiment [58].", "It is obvious that our model calculations work well to reproduce this hardening.", "Similar to the spectra of secondary species, the ratios have a break-off around TeV for $\\bar{p}/p$ and 5 TeV for heavier nuclei, which can offer a crucial and definitive identification with other kinds of models.", "Figure: Comparison between model calculations and observations for the ratios of secondary to primary species.", "The data points are adopted from AMS-02 measurements , , ." 
], [ "Secondary to secondary species", "The ratios of secondary to secondary species take important information of interaction in CR propagation.", "If they originate from the same mother particle, the spectral behavior reflects the common pecularity considering the known interaction cross-section, the same ISM and the same interaction time.", "This property can be served to understand the CR origin puzzle, such as positron [13].", "Figure REF shows the ratio comparison between model calculations and measurements for $\\bar{p}/e^+$ , $Li/B$ and $Be/B$ .", "Our model calculations are consistent with the measurements.", "More interesting is that the energy independent distribution is clear shown from them above 10 GeV.", "The model calculation of $\\bar{p}/e^+$ rises up sharply above 300 GeV.", "This is because the energy cut-off of positron around 300 GeV.", "Figure: Similar to Figure .", "Comparison between model calculations and observations for the ratios of secondary to secondary species.", "The data points are adopted from AMS-02 measurements , .Similar to the results of spectra and ratios, the anisotropy is also the joint contributions from the background and local sources of CRs.", "Due to more abundant sources in the inner disk, the phase of anisotropy directs to the galactic center.", "However, the observations of phase roughly point to the direction of anti-galactic center from 100 GeV to 100 TeV.", "The local source, located at the outer of galactic disk, plays the dominant roles in this energy region.", "So the anisotropy demonstrates mutually repressive competition between local source and background.", "The dip structure at 100 TeV is the transition energy point for the two kinds of sources.", "Figure REF shows the amplitude and phase of anisotropy for CRs.", "The CR anistropy can be fitted well under this physical scenario.", "Figure REF presents amplitude and phase for CREs.", "The expection of amplitude for CREs is lower than the up-limit of Fermi-LAT experiment [9].", "Figure: Comparison between model calculations and observations of the amplitude and phase of anisotropy for CRs.", "The data points are taken from underground muon detectors:Norikura (),Ottawa (),London (),Bolivia (),Budapest (),Hobart (),London (),Misato (),Socorro (),Yakutsk (),Banksan (),Hong Kong (),Sakashita (),Utah (),Liapootah (),Matsushiro (),Poatina (),Kamiokande (),Marco (),SuperKamiokande ();and air shower array experiments:PeakMusala (),Baksan (),Norikura (),EAS-TOP (, , ),Baksan (),Milagro (),IceCube (, ),Ice-Top (),ARGO-YBJ (),Tibet (, , )K-Grande().Figure: Comparison between model calculations and up-limit of the amplitude and phase of anisotropy for CREs.", "The data points are adopted from" ], [ "Summary", "The new generation of space-borne and ground-based experiments took unprecedented precise measurements for CR spectra and anisotropy and revealed multi-messenger anomalies for them.", "In this work, we propose that the local source, Geminga SNR, is the common origin for all those anomalies.", "The Geminga SNR has three critical advantages: perfect age of 330 kyrs, suitable position with a distance of 330 pc and assumed DMC around it.", "The physical figure can be summarized as following.", "Firstly, the diffusive shock around Geminga SNR can accelerate the CR nuclei and electrons to very high energy together.", "Here we adopt the break-off rigidity to be $\\sim $ 5 TV for the sake of 14 TV energy cut-off observation by DAMPE experiment [46].", "The electron will suffer the energy loss for 
330 kyrs, which leads to the spectral break around a TeV observed by DAMPE and HESS [31], [33], [59].", "Due to the contributions of the local source, the spectral hardening and drop-off for all nuclei and for electrons, respectively, can be well reproduced.", "Secondly, pp-collisions and fragmentation with the DMC occur and produce secondary particles, such as $\bar{p}, e^+, \gamma , Li,Be,B$ and so on.", "Inheriting the properties of their mother particles, the $\bar{p}, e^+, \gamma $ produced in pp-collisions take $10\%, 5\%, 10\%$ of the proton energy, which results in corresponding energy break-offs at 500, 250 and 500 GeV respectively.", "So the hardening and drop-off of the secondary particles are also well consistent with the measurements.", "Simultaneously, the heavier secondary nuclei from the fragmentation process, such as $Li,Be,B$ , have the same energy break-off as their mothers around $\sim $ 5 TeV.", "This property is typical of this kind of model and can serve to test the model in the future.", "Lastly, thanks to the appropriate location of the Geminga SNR, the characteristic energy evolution of the anisotropy amplitude and phase roughly reproduces the observations.", "In summary, the multi-messenger anomalies in spectra, ratios and anisotropy can be well reproduced within one unified physical mechanism.", "In particular, the difference of the spectral break-off between positrons and electrons can also be explained in the same scenario.", "More interestingly, we predict a characteristic energy break-off feature for the secondary nuclei $Li,Be,B$ and for their ratios to primary nuclei.", "This feature will also play an important role in resolving the origin puzzle of the positron and $\bar{p}$ excesses.", "We hope that this feature can be observed by the HERD experiment in the future.", "This work is supported by the National Key Research and Development Program (No.", "2018YFA0404202), the National Natural Science Foundation of China (Nos.", "12275279, U2031110, 12175248)." 
], [ "The propagation of CRs", "It has been recognized in recent years that the propagation of CRs in the Milky Way should depend on the spatial locations, as inferred by the HAWC and LHAASO observations of extended $\\gamma $ -ray halos around pulsars [10], [34] and the spatial variations of the CR intensities and spectral indices from Fermi-LAT observations [102], [12].", "The spatially-dependent propagation (SDP) model was also proposed to explain the observed hardening of CRs [97], [98], [62], [69], [77], [70], and also the large-scale anisotropies by means of a nearby source[76], [87].", "In the SDP model, the diffusive halo is divided into two parts, the inner halo (disk) and the outer halo.", "In the inner halo, the diffusion coefficient is much smaller than that in the outer halo, as indicated by the HAWC observations.", "The propagation equation of CRs in the magnetic halo and the descriptions of specific iterms in it can be refered to [69].", "The spatial diffusion coefficient $D_{xx}$ can be parameterized as $D_{xx}(r,z, {\\cal R} )= D_{0}F(r,z)\\beta ^{\\eta } \\left(\\frac{\\cal R}{{\\cal R}_{0}} \\right)^{\\delta _{0}F(r,z)},$ where $r$ and $z$ are cylindrical coordinate, ${\\cal R}$ is the particle's rigidity, $\\beta $ is the particle's velocity in unit of light speed, $D_0$ and $\\delta _0$ are constants representing the diffusion coefficient and its high-energy rigidity dependence in the outer halo, $\\eta $ is a phenomenological constant in order to fit the low-energy data.", "The spatial dependent function $F(r, z)$ is given as $F(r,z) = \\left\\lbrace \\begin{array}{ll}{\\frac{N_m}{1+f(r,z)}+\\left[1-\\frac{N_{m}}{1+f(r,z)}\\right]}\\left(\\frac{{z}}{\\xi {z}_{\\rm h}} \\right)^{n}, & {{|z|} \\le \\xi {z}_{\\rm h}} \\\\\\\\1, & { {|z|} > \\xi {z}_{\\rm h}} \\\\\\end{array},\\right.$ where the total half-thickness of the propagation halo is $z_h$ , and the half-thickness of the inner halo is $\\xi z_{h}$ .", "The constant $n$ describes the smoothness of the parameters at the transition between the two halos.", "The expression $f(r, z)$ is the source density distribution.", "In this work, we adopt the diffusion re-acceleration model, with the diffusive re-acceleration coefficient $D_{pp}$ , which correlated with $D_{xx}$ via $D_{pp}D_{xx} = \\frac{4p^{2}v_{A}^{2}}{3\\delta (4-\\delta ^{2})(4-\\delta )}$ , where $v_A$ is the Alfvén velocity, $p$ is the momentum, and $\\delta $ is the rigidity dependence slope of the diffusion coefficient [89].", "The parameters of SDP model used here are listed in Table REF ." 
], [ "Background sources", "All of the SNRs other than the nearby one are labeled as background sources.", "The source density distribution is approximated as an axisymmetric form parametrized as $f(r, z) = \\left(\\frac{r}{r_\\odot } \\right)^\\alpha \\exp \\left[-\\frac{\\beta (r-r_\\odot )}{r_\\odot } \\right] \\exp \\left(-\\frac{|z|}{z_s} \\right) ~,$ where $r_\\odot \\equiv 8.5$ kpc represents the distance from the Galactic center to the solar system.", "Parameters $\\alpha $ and $\\beta $ are taken to be 1.69 and 3.33 [55].", "The density of the source distribution decreases exponentially along the vertical height from the Galactic plane, with $z_s$ = 200 pc.", "The parameters of SDP model used here are listed in Table REF .", "The injection spectrum of nuclei and primary electrons are assumed to be an exponentially cutoff broken power-law function of particle rigidity ${\\cal R}$ , i.e.", "$q({\\cal R}) = Q_{0} \\left\\lbrace \\begin{array}{lll}\\left(\\frac{{\\cal R}}{{\\cal R}_{\\rm br}} \\right)^{\\nu _{1}}, & {{\\cal R} \\le {\\cal R}_{\\rm br}} \\\\\\\\\\left(\\frac{{\\cal R}}{{\\cal R}_{\\rm br}} \\right)^{\\nu _{2}} \\exp \\left[-\\frac{\\cal R}{{\\cal R}_{\\rm c}} \\right], & { {\\cal R} > {\\cal R}_{\\rm br}} \\\\\\end{array}\\right.,$ where $Q_{0}$ is the normalization factor, ${\\cal R}_{\\rm br}$ is break rigidity, $\\nu _{1,2}$ are the spectral incides before and after the break rigidity, ${\\cal R}_{\\rm c}$ is the cutoff rigidity.", "Table REF shows the injection spectrum parameters of different CR nuclei species.", "The numerical package DRAGON is used to solve the propagation equation of CRs [60].", "For energies smaller than tens of GeV, the fluxes of CRs are suppressed by the solar modulation effect.", "We use the force-field approximation [65] to account for the solar modulation.", "The secondary CR nuclei, such as Li, Be, B, can be brought forth from the fragmentation of heavier parent nuclei throughout the transport.", "The production rate is expressed as follows $Q_j = \\sum _{i = \\rm C, N, O} (n_{\\rm H} \\sigma _{i+{\\rm H}\\rightarrow j} +n_{\\rm He} \\sigma _{i+{\\rm He} \\rightarrow j} ) v \\psi _i ~,$ where $n_{{\\rm H}/{\\rm He}}$ is the number density of hydrogen/helium in the ISM and $\\sigma _{i+{\\rm H/He}\\rightarrow j}$ is the total cross section of the corresponding hadronic interaction.", "Unlike above secondary CR nuclei, secondary $\\rm e^+$ and $\\rm \\bar{p}$ are produced through the pp collisions between the primary CR nuclei from background sources and ISM.", "Therefore the source term of both $\\rm e^+$ and $\\rm \\bar{p}$ is the convolution of the energy spectra of primary nuclei $\\Phi _i(E)$ and the relevant differential cross section $d \\sigma _{i + {\\rm H/He} \\rightarrow j}/d E_j$ , i.e.", "$\\nonumber Q_j &=& \\sum _{i = \\rm p, He} \\int dp_i v \\left\\lbrace n_{\\rm H} \\frac{ \\sigma _{i+{\\rm H}\\rightarrow j}(p_i, p_j)}{ dp_j} \\right.", "\\\\& & \\left.", "+n_{\\rm He} \\frac{\\sigma _{i+{\\rm He}\\rightarrow j}(p_i, p_j)}{dp_j} \\right\\rbrace \\psi _i(p_i) ~,$ Furthermore, antiprotons may still undergo non-annihilated inelastic scattering with ISM protons during propagation, which can generate the tertiary production." 
], [ "Nearby supernova remnant", "We assume that one supernova explosion in the vicinity of the solar neighborhood occurred in a giant MC about $10^5-10^6$ years ago.", "The CR charged particles were continually accelerated by passing back and forth across the shock front with the expansion of supernova ejecta.", "Under the SDP scenario of GCR propagation, [78] demonstrated that, among the observed local SNRs, only Geminga SNR is able to explain both the spectra and anisotropies observations of CRs simultaneously.", "Since the location of Monogem is similar to that of Geminga, their impacts on CR flux may degenerate with each other.", "Here, for simplicity, we take Geminga SNR as the major contributor of the local source in this work.", "The Geminga SNR locates at its birth place with the distance and age of r = 330 pc and $t_{inj} = 3.3\\times 10^5$ yrs.", "The direction in the galactic coordinate is $l=194^{\\circ }$ (galactic longitude) and $b=-13^{\\circ }$ (galactic latitude) [91].", "Its distance, age and direction jointly decide the important role as an optimal candidate of nearby source to CRs [76], [87], [110].", "The injection process of SNR is approximated as burst-like.", "So the primary and secondary particles have experienced 300 kyrs and a tiny part of them enter into solar system in the end.", "In this work, the propagated spectrum from Geminga SNR is thus a convolution of the Green's function and the time-dependent injection rate $Q_0(t)$[50], i.e.", "$\\varphi (\\vec{r}, {\\cal R}, t) = \\int _{t_i}^{t} G(\\vec{r}-\\vec{r}^\\prime , t-t^\\prime , {\\cal R}) Q_0(t^\\prime ) d t^\\prime .$ The normalization is determined through fitting Galactic cosmic rays energy spectra and the detailed parameters is listed in Table REF .", "Besides, the CR nuclei generated by the local SNR also collide with the molecular gas around them and give birth to prolific daughter particles, like B, $\\rm e^{\\pm }$ , $\\rm \\bar{p}$ , and so forth.", "The yields of B and $\\rm e^{\\pm }$ , $\\rm \\bar{p}$ inside the MC are respectively $Q_j = \\sum _{i = \\rm C, N, O} (n_{\\rm H} \\sigma _{i+{\\rm H}\\rightarrow j} +n_{\\rm He} \\sigma _{i+{\\rm He}\\rightarrow j} )v Q_i(E) t_{\\rm col}$ and $Q_j \\; = \\;\\sum _{i = \\rm p, He} {\\displaystyle \\int \\limits _{E_{\\rm th}}^{+ \\infty }} \\; d E_i \\;v \\; \\left\\lbrace n_{\\, \\rm H}{\\displaystyle \\frac{d \\sigma _{i + {\\rm H} \\rightarrow j}}{d E_j }} \\right.", "\\\\\\left.", "+n_{\\, \\rm He} {\\displaystyle \\frac{d \\sigma _{i + {\\rm He} \\rightarrow j}}{d E_j }} \\right\\rbrace Q_i(E_i) t_{\\rm col} ~,$ where $n_{\\rm H/He}$ is the number density of hydrogen/helium in MC.", "In this work, we assume that it is 1000 times greater than the mean value of ISM.", "$t_{\\rm col}$ is the duration of collision of 420 yrs.", "$Q_i(E)$ is the accelerated spectrum of primary nuclei inside local SNR.", "Summarily, the local source, Geminga SNR, is responsible for the spectral hardening at 200 GV and cut-off at 14 TV for all the nuclei species.", "The primary electron can be accelerated to $\\sim $ 5 TeV, similar to nuclei, but will undergo the energy loss in ISRF and have a spectral cut-off around TeV due to its age of $3.3\\times 10^5$ yrs.", "For the secondary particles, the spectral cut-off depend on their mother ones and the interaction modes.", "For positron, the energy break-off will located at $\\sim $ 300 GeV.", "The different energy break-off between positron and electron can be natural understood in this scenario.", "The secondary 
heavier nuclei will have the same spectral cut-off as their mothers at $\\sim $ 5 TV, which can be tested in future experiments like HERD [73]." ] ]
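For intuition about the nearby-source term, the sketch below evaluates the textbook Green's function of 3-D diffusion for a burst-like point source, assuming a constant diffusion coefficient and neglecting energy losses and halo boundaries; this is only an approximation of the convolution with the time-dependent injection rate performed in the paper, and the diffusion parameters are placeholders.

```python
import numpy as np

def burst_green(r_kpc, t_kyr, R, Q0=1.0, D0=4.9e28, R0=4.0, delta=0.6):
    """Relative flux at distance r from a burst-like point source of age t
    (simplified: constant diffusion, no energy losses or boundaries)."""
    kpc, kyr = 3.086e21, 3.156e10                 # cm per kpc, s per kyr
    D = D0 * (R / R0) ** delta                     # diffusion coefficient, cm^2/s
    r, t = r_kpc * kpc, t_kyr * kyr
    return Q0 / (4.0 * np.pi * D * t) ** 1.5 * np.exp(-r**2 / (4.0 * D * t))

# Geminga-like source: 0.33 kpc away and 330 kyr old, evaluated at 1 TV
print(burst_green(0.33, 330.0, R=1e3))
```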
2212.05641
[ [ "Generic Tagging for RISC-V Binaries" ], [ "Abstract With the widespread popularity of RISC-V -- an open-source ISA -- custom hardware security solutions targeting specific defense needs are gaining popularity.", "These solutions often require specialized compilers that can insert metadata (called tags) into the generated binaries, and/or extend the RISC-V ISA with new instructions.", "Developing such compilers can be a tedious and time-consuming process.", "In this paper, we present COGENT, a generic instruction tag generator for RISC-V architecture.", "COGENT is capable of associating a tag of configurable and varying widths (1 to 20 bits) to each instruction.", "It is also capable of emitting labels that are central to the implementation of control-flow integrity (CFI) solutions.", "COGENT encodes all tags and labels as nop instructions thereby providing full backward compatibility.", "We evaluate COGENT on a subset of programs from the SPEC CPU2017 benchmark suite and report the binary size increase to be 29.3% and 18.27% for the lowest and highest tag coverage levels respectively.", "Additionally, we executed tagged programs on COTS RISC-V unmodified hardware and found the execution time overhead (with respect to backward compatibility) to be 13.4% and 5.72% for the lowest and highest coverage levels respectively.", "Finally, using a case study, we present possible use case scenarios where COGENT can be applied." ], [ "Introduction", "Over the years, the advent of Internet of Things (IoT) along with an increasing number of mobile, desktop and server systems has led to a steady increase in cybersecurity attacks.", "Most of the attacks–even those targeted at the hardware (e.g.", "Spectre[1], and Meltdown[2]) are initiated through software.", "The defense community has focused efforts at different layers of deployments.", "For instance, the software community has explored software-only defenses such as StackGuard[3], and CFI via program instrumentation (e.g.,  [4], [5], [6]).", "Meanwhile, the hardware community has pursued defenses that leverage hardware features in order to detect and flag exploits (e.g., Intel CET[7], W⌃X) or prevent information leakage (e.g., Intel SGX[8], ARM TrustZone[9]).", "While software-only security measures are highly desirable and can be deployed with little effort, they typically suffer higher performance penalties compared to hardware solutions.", "Furthermore, the security mechanisms themselves may be a part of the attack surface, which brings about transparency issues.", "On the other hand, hardware-only security mechanisms must bridge the semantic gap problem and recover high-level security-relevant program semantics in order to enforce rich and expressive policy abstractions.", "Further, the tolerance for errors is low and “fixing\" design errors is very expensive (e.g.", "Intel MPX[10], [11] had problems that resulted in eventual deprecation).", "With the gaining popularity of RISC-V, an open-source ISA, hardware - software cohesive defenses have gained center stage and are desirable.", "Typically, defenders take advantage of (a) fine-grained control over the RISC-V hardware including (but not limited to) changes to ISA in combination with (b) curated compilers capable of enriching binaries with security metadata/stubs, deploying highly effective and fast performing defenses.", "For example, recent defenses such as Cheri [12], Zero [13], RetTag [14], Framer [15], etc., all rely on a dedicated compiler that can generate instrumented binaries for 
their modified hardware.", "The modified binaries contain metadata called “tags” that provides the hardware with specific information regarding the code or data unit in the program.", "Typically, there is a one-to-one mapping between a tag and corresponding code/data unit.", "These defenses strike a balance between security and performance requirements and are often tailor-made to deployment needs.", "They rely on performant compilers and/or hardware design and engineering, yet compiler development remains an expensive process often requiring several hundred developer-hours for development and testing.", "More importantly, availability of a suitable compiler would attract more hardware defenses that are currently impeded by the lack of a compiler.", "Solutions that shorten the development effort and allow quick incorporation of metadata into binaries are missing and highly sought after.", "In this paper, we address the above need and present COGENT: COmpiler for GENeric Tagging, a generic LLVM-based compiler that incorporates instruction tags (or code metadata) into RISC-V binaries.", "In essence, COGENT removes the burden of compiler development from RISC-V hardware defenses that rely on embedding instruction metadata into binaries.", "Motivated by the needs of Control-Flow Integrity (CFI) defenses that embed CFI labels into code, COGENT interleaves tag metadata into the instruction stream.", "COGENT has several key features that make it highly favorable for hardware-software cohesive defenses.", "First, it is highly flexible.", "It can accommodate various tag widths (1 to 20 bits per instruction).", "This flexibility is essential in order to achieve rich expressiveness in tags.", "Second, it is functionally correct.", "That is, it ensures that the inserted tags do not interfere with program logic.", "Functional correctness is non-trivial due to several challenges (see Section REF ).", "This is important since a tag system is not usable without such guarantees.", "Third, COGENT preserves backward compatibility.", "That is, tagged binaries will seamlessly run on generic RISC-V processors that are tag-unaware.", "We accomplish backward compatibility by encoding tag information into NOP instructions that are simply ignored by unsupported hardware.", "Finally, our changes are segregated into LLVM IR-level and target-specific changes.", "Other architectures (e.g., ARM) can be supported by reusing the IR-specific changes and rewriting ARM LLVM backend.", "We make the following contributions: We implement a flexible and defense-scheme agnostic implementation codenamed COGENT, a LLVM-based compiler for RISC-V target.", "COGENT provides configurable instruction-level tagging support to several different tag widths and coverage.", "We also make changes to the LLD linker to support COGENT.", "We demonstrate the efficacy of COGENT via a CFI implementation.", "We evaluate the example CFI implementation for correctness and backwards compatibility.", "We demonstrate moderate to low performance overhead and binary bloat.", "Through case studies, we demonstrate potential real-world applicability of COGENT.", "Summary of Results: We evaluated the correctness of COGENT by compiling and running a subset of the SPEC CPU2017 benchmark suite on Commercial Off The Shelf (COTS) hardware.", "Our subset has small function sizes, heavy memory overhead, and high loop counts, all of which stress instruction caches.", "For our example CFI implementation, we found that COGENT increased the size of the .text section of 
these programs by an average of 29.32% for the lowest tag coverage, and 18.27% for the highest.", "We found that backward-compatibility performance (i.e., overhead imposed by tagged binaries running on tag-unaware hardware) was impacted by an average of 13.40% for the low coverage and 5.72% for the highest.", "We also present several possible use cases for the system, including specific recent CVEs that could be addressed.", "The rest of the paper is organized as follows:  sec:background provides the technical background necessary to understand the remainder of the paper.", "In sec:overview and sec:technicaldetails, we present the overview and technical details of COGENT.", "We evaluate COGENT in sec:eval and provide case studies to highlight benefits of using COGENT.", "Finally, we present related work in sec:relatedwork and conclude in sec:conclusions.", "Tagging is a technique that very broadly associates some amount of metadata (the tag), with a memory location and registers.", "Specially designed architectures then consume that tag and enforce a security policy.", "Tags can be associated with a) memory objects (data tagging) allowing policies to track and protect data in memory, and/or on b) code (instruction tagging), allowing policies to change or extend the behavior of the tagged instructions.", "Of particular importance to this paper is instruction tagging.", "Instruction Tagging: allows a tag-aware architecture to execute additional code or checks on any instruction associated with a particular tag.", "Instruction tags are useful in implementing Control-Flow Integrity (CFI) policies.", "Identical tags are used at control flow sources (call/jump sites) and control flow targets in order to signify legal control flow.", "Tags can be placed on load/stores to protect sensitive data, forbidding access to a memory region if the tag is incorrect.", "These kinds of tags often require corresponding metadata placed with the data they are meant to access.", "The tag policies and enforcement are implementation specific and form the essence of a tag-based defense.", "The specific instruction tags are a function of the intended security policies, and therefore vary greatly between designs [16].", "Differences in tag requirements can even result in different designs which use the same hardware for similar purposes [17].", "This implementation-specific design requirement is inherent to the nature of tagging.", "Tags are simply a way of providing more information to the program at run time, but the information needed is dependent on the design of the system itself.", "Broadly speaking, there is a trade-off made in the design of tag systems between how much information it provides to the hardware, and how much overhead the tags introduce into the system.", "This overhead can include power consumption, hardware complexity, performance or any other costs paid by the design.", "Depending on how much information one's defense needs, tags can be arbitrarily large, or as small as a single bit.", "Designers often use techniques such as encoding the metadata information into the unused upper bits of a pointer, but this is often insufficient on its own [14], requiring additional tagging or ISA(instruction set architecture) changes.", "The choices of where to store metadata, how much metadata to store, and how to access it are defining characteristics of tagging systems." 
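To make the pointer-bit technique mentioned above concrete, the following C++ sketch packs a tag into the unused upper bits of a 64-bit user-space pointer. It is only an illustration of the general idea, not code from COGENT or any of the cited systems; the 16-bit tag width, the 48-bit canonical-address assumption, and the helper names are ours.

```cpp
#include <cstdint>
#include <cassert>

// On typical 64-bit systems only the low 48 bits of a user-space virtual
// address are significant, so the top 16 bits can carry a tag.
constexpr int      kTagShift = 48;
constexpr uint64_t kAddrMask = (1ULL << kTagShift) - 1;

// Pack a 16-bit tag into the unused upper bits of a pointer.
inline void* tag_pointer(void* p, uint16_t tag) {
    uint64_t raw = reinterpret_cast<uint64_t>(p) & kAddrMask;
    return reinterpret_cast<void*>(raw | (static_cast<uint64_t>(tag) << kTagShift));
}

// Recover the tag and the original pointer (assuming a canonical address with
// zeroed upper bits; kernel-space or sign-extended addresses need more care).
inline uint16_t pointer_tag(void* p) {
    return static_cast<uint16_t>(reinterpret_cast<uint64_t>(p) >> kTagShift);
}
inline void* strip_tag(void* p) {
    return reinterpret_cast<void*>(reinterpret_cast<uint64_t>(p) & kAddrMask);
}

int main() {
    int x = 42;
    void* tagged = tag_pointer(&x, 0xBEEF);
    assert(pointer_tag(tagged) == 0xBEEF);
    assert(strip_tag(tagged) == &x);
    return *static_cast<int*>(strip_tag(tagged)) == 42 ? 0 : 1;
}
```

As the text notes, such a scheme caps the metadata at the number of spare address bits, which is exactly why several of the cited designs fall back on additional tagging or ISA changes.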
], [ "LLVM Compiler Infrastructure", "The LLVM compiler infrastructure [18] is a fully featured compiler designed with modularity in mind.", "It is broken down into components, which operate on different parts of the compilation process.", "Important to COGENT is the idea of the language-specific front end, the target architecture-specific back end, and the language- and target-agnostic intermediate representation (IR).", "The LLVM design requires the language-specific front end to convert source code into LLVM IR, the optimizer schedules an analysis and transformation pass pipeline on that IR, and its output is IR for consumption by the back end.", "The back end might also perform additional transformations on the code via its own set of passes and built-in functionality.", "The final product of the back end is then output to an object file, which is fed into a linker to create the final binary.", "Modifying the LLVM compiler can thus take two main forms.", "The simplest scenario is to create and modify transformation and analysis passes within the existing structure of the pipeline.", "These transformation passes can be scheduled to run before or after any specific pass or combination of passes, allowing one to place new transformations and analysis at any point in the pipeline.", "The more complex scenario is to modify the internal components of LLVM, changing or extending the functionality of pre-existing code in the LLVM infrastructure." ], [ "Relocations and Symbols", "As part of the creation of an object file, the back end will emit target specific information called a relocation into object files.", "These relocations are designed to inform the linker of how to handle information that cannot be finalized before the linking phase of compilation.", "Examples of this for RISC-V include encoding addresses, which are often emitted as a pair of relocations (named HI and LOW), covering two instructions (i.e.", "auipc, jalr).", "These relocations tell the linker to place the final address of the target symbol across two instructions when it is required.", "Symbols themselves can be kept as offsets from an unknown starting address, allowing the linker to know the final address during the linking process, even though the absolute position cannot be determined by the compiler back end.", "Tagging schemes are complex and varied, and different schemes require different configurations.", "There are some that require more space (i.e., metadata) to be accessible for any given instruction, while some might require only a single bit for some defenses [13].", "Additionally tagging schemes must be able to provide the metadata to the hardware.", "A generic instruction tagging scheme must be able to meet this requirement under as many different configurations as possible.", "To that end we set out to create COGENT.", "The goal of COGENT is to enable embedding configurable metadata tags into instruction stream, and not implementation details for different defense schemes such as forward or backward edge CFI, heap or stack, stack protection, or any other particular defense scheme.", "We did not set out to create a specific defense with this paper, or provide analysis that can be used to create one.", "Instead we focused on three major tasks.", "Flexibility: COGENT must be able to generate binaries for RISC-V architecture that require varying tag widths, with minimal changes.", "Additionally, we aim to modularize our implementation in order to minimize the developer effort in modifying COGENT to support a 
new architecture.", "COGENT must also be conducive to supporting different types of defense schemes, including those that require large amounts of metadata.", "Functional Correctness: Insertion of tags and metadata must not interfere with program logic.", "That is, all necessary tags must be appropriately inserted, and inserted tags must not alter the control flow or corrupt data.", "Backwards Compatibility: The tool must allow the generated code to be backwards compatible.", "That is, tagged binaries must be able to seamlessly execute on tag-unaware hardware.", "While backwards compatibility is a desirable feature for many systems, we recognize that it may not be necessary in every case.", "If turning it off could benefit a scheme, COGENT should allow that as well.", "A tag or metadata lookup in the hardware could be expensive and inefficient.", "Therefore, it is best to reduce the number of abstractions required to retrieve the tag for a given instruction.", "Likewise, keeping tags in separate (but mapped) memory adds complexity to the process of bringing them into the hardware." ], [ "Tag Semantic Gap", "During the compilation process, programs pass through several stages of representation.", "While the tags are eventually assigned to the raw instruction stream, the actual assignment of tags may occur at a higher level of abstraction (i.e., source code or intermediate representation).", "Depending on the stage of compilation where tags are assigned, percolating them down to the instructions requires bridging the semantic gap between the higher and lower levels of program representation.", "Further, during compilation the compiler will often expand IR instructions/source statements into multiple assembly instructions.", "This manifests as a many-to-one relationship, where many tags map to a single IR instruction/source statement.", "A very common example in RISC-V is the lowering of the LLVM-IR pseudo instruction PSEUDOCALL from the MachineInstr level to the MCInst level.", "During this process the single PSEUDOCALL is expanded into an auipc/jalr pair.", "Likewise, there are one-to-many relationships, where multiple instructions at a higher level of abstraction are lowered into a single assembly instruction (e.g., bitcast instructions).", "These one-to-many and many-to-one relationships can occur at any point of the lowering process, and where possible, COGENT must provide ways to handle them correctly.", "Unfortunately, there are many cases without a generic solution, e.g., in the lowering from IR-level instructions to machine instructions." ], [ "Compiler-inserted Code", "During compilation the compiler may introduce instructions that did not exist at the previous (i.e., higher) level of representation.", "This is typically done to meet back end specifications or optimizations.", "Examples include inlining and function prologue/epilogue insertion.", "COGENT must provide a way to tag compiler-inserted code, and must ensure that it does not disrupt the tag schemes."
], [ "Pre-calculated Offsets, Symbols, and Relocations", "Features such as C++’s try/catch block implementation often pre-calculate the distance between two symbols instead of leaving it as a relocation wherever possible.", "COGENT must be sensitive to these pre-calculated values.", "More generally, wherever possible the compiler calculates the offset to a relocation or a symbol from the beginning of a compilation unit.", "COGENT must preserve the relative locations in the code section though any changes it makes." ], [ "ISA Changes", "Prior efforts have altered the ISA to enrich code with metadata.", "Any ISA changes would by their very nature break backwards compatibility.", "COGENT must be able to encode tags without any modifications to the ISA." ], [ "Our Approach", "When examining both the challenges and the goals, we converged on the following design choices for COGENT.", "These solutions address both the challenges and goals." ], [ "Flexibility", "To address flexibility and the many back-end specific challenges we encountered, we designed COGENT to be as modular as possible, with front end and back end components in order to allow for adaptability.", "Unfortunately many (but not all) of the back end changes are target specific in the best case, and design specific in the worst.", "To solve this we clearly separate the COGENT process into five major pieces.", "IR Level Tagging API.", "This component allows the creation of IR level passes that can take advantage of the front and back end agnostic nature of the IR to perform analysis and select tags for IR level instructions.", "These IR level tagging passes can be shared between any source and any target, and so are the most portable method of tag selection.", "This component consists of an analysis pass that provides propagation of assigned tags into any back-end passes where possible.", "Backend Level Tagging API.", "This component allows the creation of LLVM MachineInstr and MCinst level passes that can tag code after it has been lowered into a target specific back end.", "This allows tagging of code inserted after IR level optimizations or back end insertions.", "Backend and Defense Specific Changes.", "These changes are both back end and defense specific.", "Representing choices that must be made in how the target back-end and the scheme must interact.", "For the RISC-V prototype we developed (see sec:example), One example would be the insertion of a metadata label between an auipc jalr pair to include a CFI label.", "A second would be the specific behavior of tags in the many to-one and one-to-many lowering relationships.", "These changes cannot be handled by back end passes alone and must be hard coded into the back end itself on a case-by-case basis.", "Defense Scheme Agnostic Back End Changes.", "These are changes to the back end that must be done to allow for the insertion of the tag instructions as late as possible into the code stream.", "This includes turning off the pre-calculated offsets, ensuring correct tag placement, and adjusting symbol and relocation offsets before object file emissions.", "These do not interact with the logic of tag selection, only with the emission of the tags themselves into an object file.", "Configuration File.", "This file allows a user to specify simply what emission options they would like for the final tag instructions.", "Details of the provided options are presented in sec:select, but the configuration file allows developers to switch between tag widths for their own instruction tagging 
schemes.", "To address the flexibility requirement of providing additional metadata where needed, we introduce metadata labels as a tool for a developer to use.", "These labels can be repurposed for any use, and are encoded as nops by the compiler in order to maintain backwards compatibility.", "In the RISC-V back end we chose lui instruction which provides 20 bits of metadata per inserted label." ], [ "Functional Correctness", ".", "Addressing correct tag placement becomes challenging when inserting new tag instructions at specified alignments.", "Any changes made to the code after a tag has been inserted will displace the tags and must be adjusted in order to maintain consistent tag placement.", "To this end, all functions are forced to be aligned.", "Additionally, we assume that relaxation/compressed instruction generation is turned off (see sec:compressed for a discussion on compressed instructions).", "Additionally optimizations such as pre-calculating offsets based on the MCInst’s as described in challenges must be turned off.", "This adds additional one time overhead to the linking process, but no extra run time cost beyond the insertion of the tags themselves.", "Optimizations introduced by IR passes can be left in place however, by insuring tag insertion is completed at the last point possible in the pipeline.", "Finally tag instructions can be placed in locations that disrupt relocation pairs, meaning the linker must be made aware of this possibility." ], [ "Backward Compatibility", "To allow for backwards compatibility, at least one of the possible tag instruction configurations must be encoded as a nop instruction.", "For the RISC-V prototype, the instruction lui x0, <imm> was selected, as it is functionally a nop and gives the maximum number of bytes to encode tag values.", "Other target architectures would have to select different nop-equivalent instructions with differing amounts of bits to ensure backwards compatibility.", "Table: Tag instructions used by COGENT." ], [ "Details", "With the solutions outlined in sec:overview we created a prototype version of COGENT.", "This prototype comes in two parts, The underlying changes to LLVM itself, and an example implementation that provides metadata labels on computed jumps and computed callsites." 
], [ "LLVM Changes", "We make all of the changes described in this section to LLVM version 10.0.0.", "The first and the most important step in COGENT implementation is to provide a way for the developer to select how they wish to represent the tags in the instruction stream.", "This representation is referred to as the tag instruction.", "The selection of this tag instruction determines How many bits a user will have available to store tags How many instructions any single tag can cover Backwards compatibility fig:taginstructions shows the different choices we provide via a configuration file.", "We detail their bit layout, how many bits they provide, and the number of bits available for each instruction based on the coverage desired.", "If backwards compatibility is not required, users can select a custom opcode for the tag instruction.", "The tag enumeration itself is defined in a separate header file that must be included to the relevant passes.", "Figure: Demonstration of COGENT using the simple CFI described in sec:example, contains compiler inserted code (0x58-0x68), many to one relationships (0x94-0x9C), relocation and symbol adjustments (0x6c-0x74), a tag inserted between a relocation (0x70).", "Additionally it includes data that must be persisted from the IR (the function labels) to the backend inserted labels at (0x50) and (0x98)." ], [ "Tagging API", "Once the tag instruction has been selected, the next COGENT process is to assign tags to LLVM’s IR level instructions or machineInstr.", "In the backend, this process is done via a simple API we expose to the user, consisting of standard .set(<tag>) and .get(<tag>) extensions on top of LLVM’s base machineInst classes.", "These are employed by LLVM backend passes that developers insert into the compiler pipeline.", "In general tag instrumentation passes should be inserted as late as possible in the pipeline to prevent optimizations from interfering with tag lowering.", "We also provide label emission, via an emission of a single lui nop at any point in the compilation process.", "This lui must be tagged appropriately to indicate to the hardware to process this label as a particular type of metadata.", "For tagging in the IR level, we provide a pass that IR level passes can push a mapping of IRInstruction : tag into, and that backend passes can register with to access that tag mapping to assign those tags to machineInstr.", "The lowering of these IR tags is complicated by the needs of the target specific backend, as described in the next section." 
], [ "Lowering Behavior", "For IR instruction tagging, we provide a default behavior in a modified RISCV backend.", "However depending on the specific deployment needs, developers must investigate the lowering process and make changes where appropriate to their planned tagging behavior.", "By default one-to-many transformations apply the tag on the IR instruction on every machine instruction created by that instruction during the lowering process.", "For many-to-one, the default behavior is the first tag.", "Any instructions not directly created from another instruction (an optimization or specification specific to the back end) will be assigned the default tag value from the tag configuration file.", "This process may be insufficient for specific defense needs, but there is no one-size-fits-all approach to the lowering problem and necessitates developer effort.", "For machine instruction inserted tags, we modify the lowering process from machineInstr to MCInst allowing the tag field to be copied over.", "In cases where this is a one to one lowering, we simply copy the field from the machineInstr to the MCInst.", "For one to many relationships, such as PSEUDOCALL, we assign multiple tags to the machineInstr which are assigned to each of the MCInst’s during this expansion process.", "Specific tag values for a given expansion will depend on the tagging scheme itself.", "Similar to IR level lowering there is no one size fits all solution to expansion.", "Fortunately at the machineInstr level there are only a few of these cases to handle in the RISCV back-end." ], [ "Emitting the Tags", "The actual tag emission into the instruction stream occurs in the finalize function during code emission in the LLVM backend.", "At this point the byte code for instructions has been placed into containers called fragments, and their layout relative to each other is known.", "However they have not been emitted into the final binary.", "We modify LLVM such that as instructions are encoded into their fragment, their ordered list of tags are also added into a new “tag” array.", "With this tag array in place, and the knowledge of the fragments offset (provided by llvm) we create a new data array for the fragment, that has the tag instructions inserted into the instructions.", "We do this for every fragment that contains instructions.", "In this manner we can be sure that any MCinst that has received a tag, will have that tag emitted into the final binary Any non-zero fragment offset must be recalculated to account for tags that will have been inserted into previous fragments.", "However because this is a simple one tag instruction per-number of encountered instructions, it is always possible to calculate this correctly with just the original offset.", "(I.E.", "If the offset is 12 bytes, and the tag coverage is 3, there would be a single tag instruction inserted before that offset, and the new offset would then be 16) Like the fragment offsets, during tag emission symbols and relocation must be adjusted to account for the tag insertions.", "As they contain offsets that are byte distances from the beginning of a fragments, these values must be adjusted by the same one per n calculation as the fragment offsets.", "For both symbols and fragments we create a new array, with the updated values, and then overwrite the unmodified arrays using a ConstCast.", "There are some corner cases in symbol resolution that must be addressed.", "Specifically the compiler will attempt to calculate the distance between two symbols 
before the finalize-layout step if it is able to determine the distance early (i.e., if they are in the same fragment); this is a compiler optimization to increase symbol resolution speed in the linker.", "To deal with this, we disable this optimization and allow the linker to resolve the difference after tag insertion.", "Additionally, the calculation of whether a jump should be PC-relative or indirect is done before the final emission, based on the maximum distance allowed in a single PC-relative jump instruction.", "This calculation must be adjusted to assume the insertion of one additional tag instruction per covered group of instructions.", "An example of the final result of these changes, and of the final emission of tags, is presented in fig:cfiexample." ], [ "Linker Changes", "To support both the insertion of tags and possibly metadata labels, the linker must be made aware that there can be instructions inserted between a HI and LO relocation pair.", "Normally, HI-LO relocations must appear as a pair, directly next to each other.", "We modify the linker to allow for up to three instructions inserted between the pair, and adjust the final value calculations to account for the added distance.", "Three additional instructions are selected as a default to allow HI, tag, METADATA_LABEL, METADATA_LABEL, LO configurations; however, this number can be adjusted if more metadata is desired." ], [ "Example CFI Implementation", "We provide an example CFI implementation that demonstrates the common design challenges and changes of a simple COGENT implementation.", "CFI is a very well-studied problem [6][19][20] and is a common use case for tagging implementations [16][13][14].", "This example CFI implementation places metadata labels at the beginning of exported or address-taken functions, and before the function call at computed call sites.", "It consists of the following components:", "A tag configuration with 2 tags, using a lui nop to provide backwards compatibility.", "The tags are simply N (a normal instruction) or CFL (control-flow label); this allows any of the coverages supported by lui (fig:taginstructions) to be selected.", "An IR-level pass that assigns CFI labels to functions and call sites based on their function signature.", "Function-signature-based CFI policy is consistent with the CFI approach in LLVM.", "A call site and a destination must have matching CFL labels.", "A machine-instruction-level pass that inserts the CFI labels, tagged with CFL, at the beginning of every non-internal function.", "A back-end- and scheme-specific change to insert the CFL label between the auipc-jalr pair that RISC-V generates for call sites.", "An example of this CFI implementation, from source to object file, is provided in fig:cfiexample.", "This example also includes many of the common challenges faced by COGENT.", "CFI is such a well-studied problem that our example solution is based on previous work [18].", "As such, we do not claim this implementation as a novel contribution." ], [ "Tag Expansion", "Although we do not claim it as a contribution, there is no fundamental reason that placing two tag instructions next to each other would not work.", "Such a configuration could allow for higher coverage at the cost of higher space overhead.", "This may be useful when space is a non-issue, or for inserting complex metadata for testing purposes."
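The per-fragment offset, symbol, and relocation adjustment described in the tag-emission discussion above reduces to a "one extra tag instruction per N original instructions" correction. The helper below is a sketch of that arithmetic under the assumption of fixed 4-byte instructions (compression and relaxation disabled), not COGENT's actual implementation; the floor rounding is chosen to reproduce the worked 12-to-16 example from the text, and whether a tag precedes or follows the group it covers would change that rounding.

```cpp
#include <cstdint>
#include <cassert>

// Adjust a byte offset measured from the start of a fragment, assuming one
// 4-byte tag instruction is emitted per `coverage` original 4-byte
// instructions. Rounding down matches the "offset 12, coverage 3 -> 16"
// example given in the text.
inline uint64_t adjust_offset(uint64_t original_offset, unsigned coverage) {
    assert(coverage > 0 && original_offset % 4 == 0);
    uint64_t original_insns = original_offset / 4;
    uint64_t tags_before    = original_insns / coverage;
    return original_offset + tags_before * 4;
}

int main() {
    assert(adjust_offset(12, 3) == 16);  // the worked example from the text
    assert(adjust_offset(8, 3)  == 8);   // still inside the first covered group
    assert(adjust_offset(24, 3) == 32);  // two tag instructions inserted before byte 24
    return 0;
}
```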
], [ "Compressed Instruction Support", "Likewise there is no fundamental barrier to support compressed instructions.", "The instruction opcode in RISCV indicates if an instruction is compressed or non-compressed.", "With that knowledge and the current PC, it is possible to infer what bits in the tag instruction contain the corresponding tag.", "For compressed instruction support, care must be taken when designing the tags and the tag policy, as the tag bits must be small enough such that a single tag instruction can cover every instruction (compressed or not), even if they are all tagged.", "This can be done by using tags small enough to cover all possible instructions (i.e., 3 bit tags for an lui with an uncompressed coverage of 3) or a two-tiered tag system, where uncompressed instructions receive a larger share of the tag space.", "The compiler must additionally be modified to insert alignment NOPs in the case where a non-compressed instruction would cross a tag instruction, pushing it under the coverage of the next tag." ], [ "Dynamic Linking", "Currently COGENT only supports statically linked binaries.", "This is to ensure that the tags for any externally linked libraries are always located in the correct place for interpretation.", "We plan to support dynamic linking in future versions of COGENT." ], [ "Handwritten Assembly Support", "Handwritten Assembly sections are loaded into the compiler backend for parsing before being emitted to the final object file.", "This allows COGENT to insert the tag instructions in hand written assembly segments.", "However its ability to correctly determine tags on these sections is limited.", "The correct tagging of these sections would be left to the implementation of a specific tagging scheme, in much the same way as backend and scheme specific tagging would be and may only be possible with manual tagging." 
], [ "Additional Configurations", "Other tag layouts beyond the single one we chose are desirable in a generic tagging system.", "Possibilities include Placing all the tags for a function's instructions at the beginning of that function.", "This would consist of a series of tag instructions sufficient to cover every instruction in the function.", "On non tag aware systems this would simply be the equivalent of a series of NOPs that run before a function starts.", "Placing the tags for a basic block at the beginning of that basic block.", "Again, this would consist of a series of tag instructions large enough to cover every instruction in that basic block.", "Placing the tag a separate .text.tag section in the ELF executable file.", "It would be up to the hardware to determine how to use this section, but it can be desirable, and non tag aware hardware can simply ignore the section completely.", "While COGENT currently does not support these additional configurations, we have plans to extend support to them in the near future.", "Figure: Backward compatibility results for Lui NOP, coverage 3, 7, and 15, with and without labels in percentage overheadCOGENT is a generic system with many different possible use cases.", "As such we evaluate it Four primary ways.", "Overhead on COTS hardware (i.e.", "the overhead of backwards compatibility.", "), and binary size overhead, are presented to demonstrate the cost of cogent in its most common configurations and uses.", "The last two, are a comparison of related works, and case studies demonstrating how the system can easily be adapted to different use-cases.", "The off the shell performance tests were done on a SiFive U540 SoC, which contains four 64-bit RISC-V cores and 8GB of memory, running Gentoo Linux with the kernel version 5.2.9.", "For the backwards compatibility evaluation, we use the lui nop described in sec:select.", "This is chosen to allow the maximum amount of space per tag and still allow the binaries to run on unmodified hardware.", "Coverage 3 provides 6 tag bits per instructions covered, 7 provides 2, and 15 provides one.", "While the labels were inserted as described in the example CFI configuration (see sec:example), evaluating them requires special hardware capable of interpreting the tags.", "This is explored in a separate hardware paper [21].", "We chose a subset of the SPEC CPU2017 rate benchmarks for our tests.", "This subset has a wide range of characteristics, including heavy memory operations in mcf_r, and rapid arithmetic operations in deepsjeng_r [22].", "Testing against these characteristics reveals what kind of performance impacts can be expected on unmodified hardware.", "SPEC CPU2017 provides three different types of workloads.", "Test, training, and reference.", "The test workload is a simple test of the executable, the training is designed to give feedback for optimizations, and the reference workload is the fully timed data set.", "For these tests we used the full reference workload.", "The size overhead includes the overhead of label insertion on the coverage 3 configuration.", "This is done comparing the size of a binary compiled with and without labels for each benchmark program.", "A simple count of the number of inserted labels would be insufficient when including tags, as the position and number of labels can add to the number of tags inserted into the program." 
], [ "Backwards Compatible Results", "Overall the average backward-compatibility performance overhead of inserting a tagged lui nop every three instructions and labels was 13.40%.", "Every 7 instructions and labels was 6.98%, and every 15 instructions and labels was 5.72%.", "This is not a linear reduction due to two major factors.", "The inclusion of the labels, whose overhead is the same over every coverage, and varied properties of benchmarks themselves.", "Additionally, for small loops and functions in binaries the number of nop’s encountered in the code stream can be greater than simply 1 per 3 normal instructions.", "loop_start:     <TAG>     addi     addi     addi     <TAG>     j    loop_start Consider the simplified toy example in asm-ex.", "In this set of instructions, we encounter 2 tags every 4 normal instructions every time we iterate over the loop.", "Examples can be found for every coverage size, using larger loops or functions." ], [ "Binary Properties", "The properties of the benchmarks have a major impact on the results of our evaluation.", "For programs that are memory operation intensive, even in backwards compatibility mode, the overhead of additional nops added to the instruction stream is quickly overwhelmed by the cost of memory operations.", "mcf_r has this property, and has an overhead of 3.79% for coverage 3, and 1.09% for coverage 15 in backwards compatibility mode.", "Conversely, small looping in-register operations can cause the nops to have an oversized impact on program performance.", "Deepsjeng fits this profile, and the resulting overhead is 21.79% for C3, 12.45% for C7, and 8.39% for C15.", "Programs that do not exhibit such extremes in behavior tend to have overhead results close to the average results.", "Full results can be found in fig:overhead." 
], [ "Size Overhead", "Overall we expect the text section to increase in proportion to the number of tag instructions we insert.", "However complicating this number is the function alignment requirements.", "In the text section of the binary, the space between functions is filled in with zeros that extend from the end of one function to the beginning of the next.", "This effect is more pronounced in C++ programs with many small functions such as omnetpp_r and xlancbmk_r.", "Labels have a minimal impact on binary size, averaging 0.78%.", "Overall the text section size increase is on average 29.32% for coverage three with labels, 20.49% for coverage seven, and 18.27% for coverage 15.", "The full text section size results are available in tab:size.", "Table: Text section size in bytes for baseline and 4 different configurations.", "Luic3 is coverage 3 using the lui nop.", "luic7 is coverage 7 using the lui nop.", "luic15 is coverage 15 using the lui nop.Table: State of the art comparison.", "*PUMP's tags are sizeof(int *) per memory word, some applications will not use the full space.", "**PUMP allows arbitrary metadata by allowing a pointer to be encoded into the tag" ], [ "Feature Comparison Against State of the Art", "COGENT was created to enable flexibility in its deployment.", "To demonstrate this against the current state of the art, we compared COGENT against recent tag-based architectures using several criteria.", "How easy is it to employ the tool in a different manner then one of the creators designs?", "A high rating means the tool was designed for that purpose, while a low indicates it was created for one specific task.", "Does this tool have a Variable tag size?", "Can this tool provide an arbitrary amount of metadata to a user above and beyond the size of the tag, if that user desires it.", "Does this tool change the underlying ISA, removing backwards compatibility.", "Does this tool use the unused pointer bits?", "What kind of additional memory protection does this tool employ to protect its tags?", "The results of this investigation are available in tab:comp.", "In general we find that unless a tool was designed for reuse (i.e.", "PUMP), it would be difficult to re-target the hardware for other tasks.", "State of the art solutions use ISA changes to provide the compiler with the wrappers and tag aware instructions they need to manipulate their metadata.", "These ISA changes break backwards compatibility.", "The common use of unused pointer bits, or custom pointers, render solutions incompatible with each other.", "Table: Sample CVE's and tag-based solutions" ], [ "Case Studies and Use Cases", "COGENT is a generic tool designed to allow for the easy implementation of other tagging and labeling schemes.", "We present a case study, offering a detailed look at systems COGENT could be used to achieve.", "We selected CVE-2021-3330 to explore with detailed solutions presenting and describing Tag Aware Arithmetic sec:2021-3330. 
tab:CWECVE shows a selection of recent CVEs to which our findings could be applied, including the one we selected for our case study.", "Additionally, we present an alternate usage of COGENT for fuzzing and testing, where we use COGENT to keep track of the coverage and branching behavior of a program at runtime.", "Table: Overflow check tagging scheme.", "CVE-2021-3330 is a vulnerability that affects the Zephyr RTOS, and it stems from a series of errors that includes an improperly constructed cyclical list where a singly linked list is expected.", "Iterating over this now-cyclical list causes an unsigned value to be repeatedly subtracted from another unsigned value, leading to an integer underflow.", "This value is then fed as the length argument into memmove(), leading to a massive memory move that can overwrite kernel structures.", "This CVE consists of several points of failure: improper list construction, integer underflow, and the overwrite itself.", "In this case study, we focus on a potential solution for the integer underflow, which would prevent the memory overwrite by triggering an exception when the underflow occurs." ], [ "Type Aware Arithmetic", "To do this, we introduce type-aware arithmetic, requiring a tag with a width of a single bit and giving two possible states for every instruction, as shown in tab:overflowcheck.", "These two tags are N and UN_ARTH.", "The N tag is given to any instruction in the stream that will not be checked for integer overflows.", "This can be because the instruction does not perform any arithmetic on an unsigned integer, or because the programmer or the compiler has chosen to disable the check on that instruction.", "The UN_ARTH tag would be automatically placed on any instruction that performs arithmetic operations on unsigned types.", "To insert this tag, an LLVM compiler pass would be created that examines the left-hand side of any arithmetic expression, determining whether the type of the result is unsigned.", "If it is, the compiler will add the tag to any and all arithmetic instructions that result from the IR-level instruction, which the COGENT system would then propagate down into the final binary.", "There are two cases where the compiler would not place the UN_ARTH tag on an instruction that creates an unsigned value.", "The first is when the compiler can determine that the values being subtracted are hard-coded immediate values; in this case we would have to assume the programmer is intentionally exploiting the defined wrapping behavior of unsigned arithmetic (e.g., obtaining the maximum unsigned value as 0 - 1).", "The second would be a provided annotation that allows the programmer to opt out of the checks if they have a reason to.", "This is required because, in the C standard, unsigned integer overflow/underflow is defined (wrapping) behavior.", "Forcing the user to opt out of the behavior is done in an attempt to minimize developer errors when using this defense."
], [ "Required Hardware Support", "The required hardware support for this tag scheme is very simple, if an instruction is tagged with UN_ARTH, an exception should be thrown if the results from that operation over or underflow.", "Hardware would require this additional tag support as without metadata, it would be unable to differentiate between an operations on signed and unsigned integers.", "This can either be a hard stop to the program, or be call an exception handler if the policy calls for it.", "In this case, for CVE-2021-3330, we would halt the program execution, as the corruption that results from this bug can overwrite kernel data structures." ], [ "Code and Tag Layout", "For CVE 2021-3330, the original integer overflow occurs in the following two lines of code.", "memmove(frag->data, frag->data +     hdr_len, frag->len - hdr_len); frag->len -= hdr_len; These lines of code operate only on unsigned integers, and do not have the opt-out annotation.", "After compilation, this code produces the following relevant lines of assembly, loading the required values into registers, and then subtracting them.", "linerange=1-10, breaklines=true, frame=single, caption=CVE-2021-3330 underflow ASM ]listings/taggedunarth There are two places in this code that will need to be tagged, sub a2, a0, a2 and sub a1, a3, a1.", "This information can be determined by the compiler, and so they will receive the UN_ARTH tags.", "Resulting int he following final code layout.", "Note the tag instructions themselves are not included in this listing, as there exact position would depend on compiler optimizations levels, and overall layout of the code, however they would be inserted into the instruction stream at the appropriate alignments.", "linerange=13-22, breaklines=true, frame=single, tabsize=2, caption=CVE-2021-3330 underflow ASM ]listings/taggedunarth With all this in place, when this tagged code is run and the bug is triggered, operation would proceed normally until the subtraction that causes the arithmetic underflow.", "Once that happens, the hardware would trigger an exception based on the UN_ARTH tag, causing an extra check as part of the instruction processing phase.", "A final strength of this solution is that because of the simple nature of the two tags employed, it would be a trivial exercise for COGENT (although not necessarily the hardware) to layer this tag with another single bit tag that would not be placed on arithmetic instructions.", "An example could be the single bit return address protection provided by [16] and [13].", "These two protections could be layered without the need for extra tag sizes." 
], [ "Description", "Inline tagging could be used during testing and fuzzing to allow for quick tracking of program coverage, counts of reached instructions, and paths taken through the CFG.", "In this use case, run time security is not considered as part of the design.", "Instead COGENT inserts metadata informing testing hardware what information it needs to record, and where it needs to record it.", "Hardware changes are limited to what is required to facilitate these tasks.", "Table: Coverage tags summary" ], [ "Tag and Label Design", "This use case requires the coverage 7, 2 bit LUI labeling scheme.", "Giving 4 possible tags, each indicating a different behavior to the testing hardware.", "Additionally we introduce counting labels, with the initialized value of 0 as the primary fast coverage tracking method.", "An in depth description of each tag and the labels are as follows, with a summary provided in tab:coverage.", "CL (Counting label) - Indicates that this instruction is a Counting label.", "When this tag, and the associated label are encountered in the instruction stream.", "First the value contained in them is incremented by one, Then written back into the label.", "This requires that the system be allowed to write values back into the code section.", "These will be placed at the beginning of every basic block.", "CI (Counting Instruction) - Indicates that this instruction should be counted.", "When the instruction stream reaches this tag, it will check the current program counter against a mapping of PC : Int to see if this instruction has been encountered before.", "If it has, it will increment the associated Int by one.", "If not, it will create a new entry in the map, with a value of one.", "This counting instruction is explicitly slower than the counting label, due to the need for the mapping lookup, but could be used for specific purposes.", "BCF (Branching Control Flow) - Indicates a branching point in the instruction stream that needs to be tracked.", "This tells the hardware to check if the current PC is in the BCF mappings.", "If it is not, it creates a new entry in a {PC : {Next_PC : Count}} mapping.", "After it has either found, or created, the entry for the current PC, the Hardware then checks if the next program counter is in the {Next_PC: Count} mapping, adding it if it does not find it, or increment the counter if it does.", "In this way the BCF tag builds a map of each branching instruction, the targets it has reached, and how many times it has reached those targets.", "N (Normal) - This tells the hardware that no special hardware actions are needed to enforce the overall security policies within this instruction." ], [ "Compiler Support", "For coverage tracking, the compiler would be configured to place counting labels and their associated tags at the beginning of every basic block at the machine instruction level.", "Additionally, a pass would be added to enable two levels of branching control flow tracking depending on the desires of the tester.", "BCF labels could be placed on every control flow instruction or, just on computed control flow instructions." 
], [ "Hardware Support", "This coverage system requires two major changes to the supporting hardware beyond the ability to read tags.", "First is the ability to write the incremented counters back into the counting labels.", "Second is the ability to record the PC of instructions tagged with CI or BCF into mappings.", "To do this we would reserve a known memory for the CI and BCF mappings, and ensure that the size of each entry is known to the hardware.", "This means for BCF mappings there would be a configurable maximum number of targets in order to keep lookup times reasonable." ], [ "Example Code", "The RISC-V assembly code below is an example of how the coverage tagging would look like on a simple basic block, followed by a branching jump.", "linerange=1-19, breaklines=true, frame=single, tabsize=2, caption=Coverage Example ASM ]listings/coverage Assuming the above was entered three times, jumping once and falling through twice, the counting label at 0x4 would be |lui x0, 3| and at 0x1C it would be |lui x0, 2|.", "LBB0_2 is always reached at 0x34, making its value |lui x0, 3|.", "The branching table would contain 0x18 : 0x1C: 2, 0x30: 1, showing the exact behavior of the branch over three runs." ], [ "Related Work", "PUMP [16] provides a flexible system to associate metadata with memory locations, including on instructions.", "Its large fixed-width tag sizeof(*) provides unlimited metadata size per memory location via allowing the tag to simply be a pointer to additional memory.", "However schemes that actually use this amount of space are not common, and even then, the common case for any given memory location is a small amount of metadata.", "To help alleviate the large overhead of having so much unused space in there tags, they propose and implement tag compression schemes to speed up retrieval.", "Compared to PUMP, COGENT's instruction tagging allows simpler protection schemes to use smaller tag sizes when needed, while at the same time allowing that same arbitrarily sized metadata though the use of any number of inline labels.", "Importantly though, PUMP has an actual hardware design backing it, while COGENT does not have specialized hardware, so the only comparison can be between the location and storage size of metadata, not on performance, hardware complexity, or overall overhead.", "CHERI [12] is a well known capability based architectural enhancement for RISC instruction sets.", "Work on it is current and on-going [24] [25] [26].", "CHERI provides its security though architectural capabilities, and tagged memory.", "Additionally it is a full stack solution, and includes compiler, OS, and hardware support.", "Cheri itself requires that compiler to support on all levels.", "Language specific front end, IR level passes, and backend emissions.", "RetTag [14] uses compiler modifications to insert new instructions in the function prologue and epilogue to generate a new instruction (thus modifying the ISA and preventing backward compatibility) called pac.", "This instruction generates a unique PAC using the a unique function ID, the virtual address of the functions RA the SP, and a 128-bit RAA key and stores it in the 16 unused bits of a function pointer, and it is used to protect the return address from corruption.", "ZERØ [27], provides a set of new instructions via a modification of the ISA and a method of encoding metadata that keeps the overhead to eight bytes per 4kb page.", "This work discards backwards compatibility by including new ISA instructions for memory and metadata 
management.", "Additionally it stores ten bits of information in the unused bits of pointers to enforce its own version of CFI using unique-per-function signature labels.", "It modifies the compiler to insert its new instructions at the appropriate places.", "Timber-V [28] allows the creation of up to four protected enclaves on a RISC-V system.", "It likewise uses a new set of instructions in order to tell the hardware when it can check and manipulate tags.", "Like ZERØ this removes its ability to provide backwards compatible code.", "While Timber-V does not require compiler support, they specify that compiler support would allow for better integration.", "Several works including Framer [15], RetTag [14] and ZERØ [13] use the upper unused bits of a pointer as place to store there metadata.", "Doing this gives these works strong backwards compatibility and low overhead.", "However by their nature it is difficult to layer any works that use the unused pointer bits.", "Additional space is limited, leading framer to use a novel encoding scheme to provide access to the pointers metadata, relying as much as possible on relative offsets from known points (i.e.", "frames).", "Such works could be combined with COGENT as we do not manipulate the pointers themselves, leaving the unused parts of the pointer available for any other scheme, or to be used for a CFI label.", "FlexFit [29] is a generalized solution presented as a method to allow users to configure a set of instruction filters for use at run time.", "They implement this on a RISC-V FPGA board to demonstrate the practicality, feasibility, and usability of there generic filtering scheme.", "They store the protection information in four unused bits of the RISC-V PTE, giving up to 16 protected domains at a page granularity.", "Using the unused bits in the PTE, has been explored by other works as well [30] [31], which is why FlexFit limits its protection domains to 16, using only 4 of the available bits.", "While FlexFit does not use a compiler, the authors suggest a specialized compiler and loader for future work." ], [ "Conclusion", "This paper presents COGENT, a modified LLVM compiler capable of generating RISCV binaries with configurable width instruction tags.", "These tags can be used to design security solutions that convey rich semantic information to a tag-aware architecture.", "COGENT tackles several challenges including one-to-many and many-to-one instruction-level tag associations.", "We evaluate COGENT on a subset of SPEC 2017 benchmark programs for performance and binary bloat.", "Further, we present case studies that highlight potential use cases of COGENT." ] ]
2212.05614
[ [ "YoloCurvSeg: You Only Label One Noisy Skeleton for Vessel-style\n Curvilinear Structure Segmentation" ], [ "Abstract Weakly-supervised learning (WSL) has been proposed to alleviate the conflict between data annotation cost and model performance through employing sparsely-grained (i.e., point-, box-, scribble-wise) supervision and has shown promising performance, particularly in the image segmentation field.", "However, it is still a very challenging problem due to the limited supervision, especially when only a small number of labeled samples are available.", "Additionally, almost all existing WSL segmentation methods are designed for star-convex structures which are very different from curvilinear structures such as vessels and nerves.", "In this paper, we propose a novel sparsely annotated segmentation framework for curvilinear structures, named YoloCurvSeg, based on image synthesis.", "A background generator delivers image backgrounds that closely match real distributions through inpainting dilated skeletons.", "The extracted backgrounds are then combined with randomly emulated curves generated by a Space Colonization Algorithm-based foreground generator and through a multilayer patch-wise contrastive learning synthesizer.", "In this way, a synthetic dataset with both images and curve segmentation labels is obtained, at the cost of only one or a few noisy skeleton annotations.", "Finally, a segmenter is trained with the generated dataset and possibly an unlabeled dataset.", "The proposed YoloCurvSeg is evaluated on four publicly available datasets (OCTA500, CORN, DRIVE and CHASEDB1) and the results show that YoloCurvSeg outperforms state-of-the-art WSL segmentation methods by large margins.", "With only one noisy skeleton annotation (respectively 0.14%, 0.03%, 1.40%, and 0.65% of the full annotation), YoloCurvSeg achieves more than 97% of the fully-supervised performance on each dataset.", "Code and datasets will be released at https://github.com/llmir/YoloCurvSeg." 
], [ "Introduction", "Curvilinear structures are elongated, curved, multi-scale structures that often appear tree-like and are commonly found in natural images (e.g., cracks and aerial road maps) and biomedical images (e.g., vessels, nerves and cell membranes).", "Automatic and precise segmentation of these curvilinear structures plays a significant role in both computer vision and biomedical image analysis.", "For example, road mapping serves as a prerequisite in both autonomous driving and urban planning.", "In the biomedical field, studies [1], [2], [3], [4] have suggested that the morphology and topology of specific curvilinear anatomy (e.g., retinal vessels and corneal nerve fibers) are highly relevant to the presence or severity of various diseases such as hypertension, arteriolosclerosis, keratitis, age-related macular degeneration, diabetic retinopathy, and so on.", "Retinal vessels are observable in retinal fundus images and optical coherence tomography angiography (OCTA) images, while corneal nerve fibers are identifiable in confocal corneal microscopy (CCM) images.", "It has been suggested that early signs of many ophthalmic diseases are reflected by microvascular and capillary abnormalities [5], [6].", "Collectively, accurate segmentation of various curvilinear structures is of great importance for computer-aided diagnosis, quantitative analysis and early screening, especially in ophthalmology.", "In recent years, benefiting from the development of deep learning (DL), many DL-based segmentation algorithms for curvilinear structures have been proposed and have shown overwhelming performance compared to traditional (e.g., matched filter-based and morphological processing-based [7], [8]) methods.", "Most existing works are dedicated to designing sophisticated network architectures [9], [10], [11] or deploying strategies to preserve curvilinear structures' topology by employing generative adversarial networks (GANs) [2], [12] or topology-preserving loss functions [13], [14].", "These methods are typically fully-supervised, wherein large-scale well-annotated datasets are required.", "However, collecting and labeling a large-scale dataset with full annotation is very costly and time-consuming, particularly for medical images since their annotation requires expert knowledge and clinical experience.", "Furthermore, annotating curvilinear structures is even more challenging, given that curvilinear structures are slender, multi-scale, and complex in shape with fine details.", "More recently, many efforts have been made to reduce the annotation cost for DL model training.", "For example, semi-supervised learning (SSL) trains models by combining limited amounts of annotated data with massive unlabeled data [15], [16], [17].", "While effective, most state-of-the-art (SOTA) SSL methods still require about 5%-30% of the accurately and precisely labeled data to achieve about 85%-95% of the fully-supervised performance, which is still not sufficiently cost-effective and still time-consuming when it comes to labeling curvilinear structures.", "Weakly supervised learning (WSL) attempts to alleviate the annotation issue from another perspective by performing sparsely-grained (i.e., point-, scribble-, bounding box-wise) supervision and attains promising performance [18], [19], [20], [21], [22].", "Compared with either point or bounding box, scribble is a relatively more flexible and generalizable form of sparse annotation that can be used to annotate complex structures [23].", "Existing 
scribble-supervised segmentation methods mainly fall into two categories.", "The first line of research exploits structural or volumetric priors to expand scribbles into more accurate pseudo proposals; for example, grouping pixels with similar grayscale intensities or locations into the same class [18], [19], [24].", "However, the expansion process may introduce noisy proposals, which may induce error accumulation and deteriorate the performance of the segmentation model.", "Some work [25] also points out the inherent weakness of these methods, namely models retain their own predictions and thus resist updating.", "The second line learns adversarial shape priors utilizing extra unpaired but fully-annotated masks.", "Such approaches somewhat contradict the motivation of saving annotation costs, especially for complex curvilinear structures [26], [27], [28].", "Moreover, most WSL methods still require sparsely labeling the entire dataset (or a large portion), and they are mainly designed and validated on relatively simple structures (e.g., cardiac structures or abdominal organs) with assumptions and priors that may not apply to complex structures (e.g., curvilinear structures).", "To address these aforementioned challenges, we here present a novel WSL segmentation framework for vessel-style curvilinear structures, namely You Only Label One Noisy Skeleton for Curvilinear Structure Segmentation (YoloCurvSeg).", "For curvilinear structures, label noises/errors are inevitable, and a good segmentation approach should be noise tolerant.", "Therefore, instead of utilizing only the annotated pixels for supervision, YoloCurvSeg ingeniously converts the weakly-supervised problem into a fully- or semi-supervised one via image synthesis.", "It employs a trained inpainting network as a background generator, which takes one (or multiple depending on availability) noisy skeleton (as shown in Fig.", "REF ) and dilates it to serve as an inpainting mask to obtain a background that closely matches the real distribution.", "The extracted background is then augmented and combined with randomly emulated curves generated by a Space Colonization Algorithm-based foreground generator, from which a synthetic dataset is obtained through a multilayer patch-wise contrastive learning synthesizer.", "Finally, a segmenter performs coarse-to-fine two-stage training using the synthetic dataset and an unlabeled dataset (if available).", "Our main contributions are summarized as follows: We propose a novel weakly-supervised framework for one-shot skeleton/scribble-supervised curvilinear structure segmentation, namely YoloCurvSeg.", "To the best of our knowledge, YoloCurvSeg is the first sparsely-annotated and weakly-supervised segmentation method for curvilinear structures.", "YoloCurvSeg novelly converts a WSL problem into a fully supervised one through four steps: curve generation, image inpainting, image translation and coarse-to-fine segmentation.", "The proposed framework is noise-robust, sample-insensitive and easily extensible to various curvilinear structures.", "We evaluate YoloCurvSeg on four challenging curvilinear structure segmentation datasets, namely OCTA500 [29], CORN [30], DRIVE [31], CHASEDB1 [32].", "Experimental results show that YoloCurvSeg outperforms SOTA WSL and noisy label learning methods by large margins.", "Meanwhile, we demonstrate that $>$ 97% of the fully-supervised performance can be achieved with only one noisy skeleton label (approximately 0.1% or 1% of the full annotation), which shall also 
inspire subsequent works on WSL and curvilinear dataset construction.", "Existing automatic curvilinear structure segmentation algorithms can be roughly divided into two categories.", "The first category is traditional unsupervised methods, mainly including mathematical morphology methods and various filtering methods [10].", "For instance, Zana et al.", "[33] segment vascular-like patterns using a hybrid framework of morphological filtering and cross-curvature analysis.", "Passat et al.", "[34] present a preliminary approach to strengthen the segmentation of cerebral vessels by incorporating high-level anatomical knowledge into the segmentation process.", "Filtering methods include Hessian matrix-based filters [35], matched filters [8], [36], multi-oriented filters [37], symmetry filter [38], etc.", "The other category is supervised methods, wherein data with ground truth labels are used to train segmenters based on predefined or model-extracted features.", "Traditional machine-learning-based approaches are dedicated to pixel-level classification using handcrafted features [39], [40].", "Recently, DL-based approaches have made significant progress in various segmentation tasks.", "For example, Ronneberger et al.", "[41] propose U-Net, which has been widely used in numerous medical image segmentation tasks.", "Existing curvilinear structure segmentation works focus on well-designed network architectures by introducing multi-scale [11], [42], multi-task [6], [9], [43], or various attention mechanisms [10], [44] and well-playing morphological and topological properties by introducing GANs or morphology-/topology-preserving loss functions [13], [14].", "Still, data availability and annotation quality are the main limitations of these methods." ], [ "Weakly-supervised Segmentation", "Weakly-supervised segmentation aims to reduce the labeling costs by training segmentation models on data annotated with coarse granularity [18].", "Among various formats of sparse annotations, scribble is recognized as the most flexible and versatile one that can be used to annotate complex structures [23], [27].", "Existing scribble-supervised segmentation methods fall into two main categories.", "The first one exploits structural or volumetric priors to expand scribble annotations by assigning a same class to pixels with similar intensities or nearby locations [18], [19], [24].", "The main limitation of such approaches is that they heavily rely on pseudo proposals and often contain multiple stages, which can be time-consuming and prone to errors that may be propagated during model training.", "The second category learns adversarial shape priors utilizing extra unpaired but fully-annotated masks.", "Such approaches somewhat contradict the motivation of saving annotation costs, especially for complex curvilinear structures [26], [27], [28].", "Additionally, these methods still require sparsely labeling the entire dataset or a large portion, and they are mainly designed and validated on relatively simple structures like cardiac structures or abdominal organs with assumptions and priors that may not apply to complex structures (e.g., curvilinear ones).", "In this paper, we make use of noisy skeletons that differ from scribbles in two ways: (1) skeletons are more label demanding since all branches are supposed to be covered; (2) they are more likely to contain errors or noises, which are inevitable when quickly labeling slender structures.", "We convert sparse and noisy skeleton annotations to accurate ones via an 
image synthesis pipeline, thus requiring only one noisy skeleton label.", "This significantly reduces the annotation cost." ], [ "Medical Image Synthesis", "GAN [45] has become the mainstay of medical image synthesis, with common applications in intra-modality augmentation [46], cross-domain image-to-image translation [47], quality enhancement [48], missing modality generation [49], etc.", "Below we briefly review previous works on retinal image synthesis, the topic of which is relevant to our work.", "Costa et al.", "[50] employ a U-Net trained with paired fundus images and vessel masks.", "It employs a conditional GAN (Pix2pix [52]) to learn a mapping from vessel masks to the corresponding fundus images.", "To simplify the framework, they propose an adversarial autoencoder (AAE) for retinal vascularity synthesis and a GAN for generating retinal images [51].", "Similarly, Guibas et al.", "[53] present a two-stage approach that consists of a DCGAN for generating vasculature from noise and a cGAN (Pix2pix) to synthesize the corresponding fundus image.", "These methods require an extra set of vessel annotations to train AAE or DCGAN and may sometimes generate vessels with unrealistic morphology.", "The generated images also lack diversity.", "Zhao et al.", "[54] develop Tub-sGAN, which incorporates style transfer into the GAN framework to generate more diverse outputs.", "Note that cGAN requires paired images and vessel masks for training, which is a strict condition to some extent.", "In another work, SkrGAN [55] is proposed to introduce a sketch prior related constraint to guide the image generation process.", "Yet, the sketches utilized are extracted by the Sobel edge operator, and cannot be used as segmentation masks.", "In this paper, we employ a multilayer patch-wise contrastive foreground-background fusion GAN for several considerations.", "According to previous research, training a GAN to learn a direct mapping from a curvilinear structure mask to the corresponding image is difficult, especially under few-shot conditions [56].", "Therefore, we provide GAN with extracted real backgrounds, enabling implicit skip-connection that allows the GAN to focus more on mapping the foreground regions.", "Such a design not only enhances performance but also accelerates convergence.", "Multilayer patch-wise contrastive learning allows the provided mask and the foreground region of the generated image to be spatially aligned (via unpaired training), which further benefits the subsequent segmenter.", "YoloCurvSeg comprises four main components: (1) a Curve Generator that produces binary curve masks that well accommodate the corresponding image modality of interest; (2) an Inpainter for extracting backgrounds from labeled samples; (3) a Synthesizer that synthesizes images from the generated curve masks and the image backgrounds; and (4) a two-stage Segmenter trained with the synthetic dataset and an unlabeled dataset.", "The overall framework is shown in Fig.", "REF .", "Figure: Overview of our proposed YoloCurvSeg, which comprises four main components: a space colonization algorithm-based curve generator, a background inpainter, a multilayer patch-wise contrastive foreground-background fusion synthesizer, and a two-stage coarse-to-fine segmenter." 
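To make the four-component pipeline just listed concrete, here is a minimal orchestration sketch. All names and signatures (`build_synthetic_dataset`, `generate_curve`, `inpaint`, `synthesize`, `dilate`) are hypothetical placeholders rather than the authors' released code; the sketch only illustrates how one noisy skeleton and its image could be turned into an arbitrarily large, exactly labeled synthetic training set.

```python
# A minimal orchestration sketch of the YoloCurvSeg pipeline (hypothetical API;
# component names and signatures are illustrative, not the authors' code).
from typing import Callable, List, Tuple
import numpy as np

def build_synthetic_dataset(
    labeled_image: np.ndarray,   # the single annotated image, (H, W) or (H, W, 3)
    noisy_skeleton: np.ndarray,  # binary skeleton annotation, (H, W)
    generate_curve: Callable[[], np.ndarray],                 # Curve Generator: returns a binary mask
    inpaint: Callable[[np.ndarray, np.ndarray], np.ndarray],  # Inpainter: (image, mask) -> background
    synthesize: Callable[[np.ndarray, np.ndarray], np.ndarray],  # Synthesizer: (background, curve) -> image
    n_samples: int = 100,
    dilate: Callable[[np.ndarray], np.ndarray] = lambda m: m,    # e.g. morphological dilation
) -> List[Tuple[np.ndarray, np.ndarray]]:
    """Turn one noisy skeleton label into a fully labeled synthetic dataset."""
    background = inpaint(labeled_image, dilate(noisy_skeleton))  # foreground removed
    dataset = []
    for _ in range(n_samples):
        curve = generate_curve()               # random but realistic curvilinear mask
        image = synthesize(background, curve)  # curve-aligned synthetic image
        dataset.append((image, curve))         # (input, exact ground truth) pair
    return dataset
```

The synthetic pairs produced this way are what the two-stage Segmenter is trained on, as detailed in the following sections.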
], [ "Curvilinear Structure Generation", "Space colonization is a procedural modeling algorithm in computer graphics that simulates the growth of branching networks or tree-like structures [57], [58], including vasculature, leaf venations, root systems, etc.", "It is employed in YoloCurvSeg for modeling the iterative growth of curvilinear structures with two fundamental elements: attractors and nodes.", "Its core steps are described in the top left panel of Fig.", "REF , wherein blue dots denote attractors and black ones denote nodes: a) place a set of attractors randomly or following a predefined pattern, and then associate nodes with nearby attractors (if their distance is within an attraction distance $D_a$ ); b) for each node, calculate its average direction from all attractors affecting it; c) calculate the position of new nodes via normalizing the average direction to a unit vector and scaling it by a pre-defined segment length $L_s$ ; d) place nodes at the calculated positions and check if any nodes are within an attractor's kill zone; e) prune an attractor if there are nodes staying within its kill distance $D_k$ ; f) repeat steps b)-e) until the maximum number of nodes is reached.", "Through observing the pattern of the foreground/curve in a single image or a few images that are accessible, including the curves' starting point, boundary, and degree of curvature, etc., it is relatively straightforward to set the corresponding hyperparameters, such as the root node coordinates $C_r$ (e.g., the starting point of the vessels in fundus lies in the optic disc region), as well as the bounds and obstacles.", "For $D_a$ , $D_k$ and $L_s$ , the commonly used values of 5, 30 and 5 can be re-tuned as needed.", "Table REF summarizes the parameters and post-processing operations we employ for generating the four types of curves and representative examples are demonstrated in the top panel of Fig.", "REF .", "Please note that our adopted settings and post-processing operations only represent our empirical choices and are not necessarily the best-performing ones; users can make further adjustments based on their observations and experiences.", "In addition to the curvilinear shape, we also need to simulate the thickness of each branch $R^n = R_1^n + R_2^n,$ where $R$ , $R_1$ and $R_2$ respectively denote the radii of a father branch and its two child branches.", "$n$ is set to be 3 according to Murray's law [59].", "The calculation is performed recursively from the branch tips (whose radii are set to be 1) towards the tree base.", "Several intuitive demos can be accessed at linkhttps://jasonwebb.github.io/2d-space-colonization-experiments/.", "By setting random grid attractors and root nodes via predefined parameters, we construct four curve banks for our four datasets of interest, which are then utilized for training the synthesizers and the segmenters.", "Table: parameters for generating different types of curves.", "D a D_a, D k D_k, L s L_s and C r C_r respectively denote the attraction distance, kill distance, segment length and root node coordinates.", "rr and ll denote radius and length.", "⨁\\bigoplus and ⨀\\bigodot represent the union and element-wise multiplication operations.", "there are two rows for OCTA500 since two components are needed." 
], [ "Inpainting for Background Extraction", "Inpainting is the task of reconstructing missing or masked regions in an image.", "Similar to removing watermarks or extraneous pedestrians from images, we employ an inpainting model here to remove foregrounds (e.g., vessels and nerve fibers) from the images of interest, under the hypothesis that the dilated noisy skeletons can fully cover the foregrounds.", "In inpainting, common concerns are the network's ability to grasp local and global context and to generalize to a different (especially higher) resolution." ], [ "Architecture", "Inspired by [60], we adopt an inpainting network based on the recently proposed fast Fourier convolutions (FFCs) [61] with image-wide receptive fields, strong generalizability and relatively few parameters.", "Given a masked image $I \\odot (1-m)$ , where $I$ and $m$ respectively denote the original image and the binary mask of inpainting regions, the feed-forward inpainting network $f_\\theta (\\cdot )$ aims to output an inpainted image $\\hat{I}$ = $f_\\theta (I^{\\prime })$ taking a four-channel input $I^{\\prime }=\\operatorname{concat}(I\\odot (1-m),m)$ .", "FFC builds its basis on channel-wise fast Fourier transform (FFT) and has a receptive field covering the whole image.", "It splits channels into two parallel branches: a local branch uses conventional convolutions and a global branch uses real FFT to capture global context, as shown in Fig.", "REF .", "real FFT is only applicable to real-valued signals, and inverse real FFT ensures the output is real-valued.", "Compared to FFT, real FFT uses only half of the spectrum.", "In FFC, real FFT is first applied to the input tensor and a ComplexToReal operation is performed by concatenating the real and imaginary parts.", "Then, it applies convolutions in the frequency domain.", "Inverse real FFT is performed to transform features from the frequency domain to the spatial domain through the RealToComplex operation.", "Finally, the local and global branches are fused.", "For the upsampling and downsampling of the Inpainter and the discriminator architecture in adversarial training, we follow the ResNet settings respectively employed in [62] and [60].", "The training is performed on [image, randomly synthesized mask] pairs.", "We adopt the mask generation strategy in [60], containing multiple rectangles with arbitrary aspect ratios and wide polygonal chains." 
], [ "Objective", "Compared with naive supervised losses which may result in blurry predictions, perceptual loss [63] evaluates the distance between feature maps of the inpainted image and the original image via a pre-trained network $\\phi (\\cdot )$ .", "It does not require exact reconstruction and allows for variations in the reconstructed image.", "Given that inpainting focuses on understanding the global structure, we introduce a perceptual loss of a large receptive field $\\mathcal {L}_{HRP}$ through a pre-trained ResNet50 $\\phi _{HRF}(\\cdot )$ with dilated convolutions provided in [60] $\\mathcal {L}_{HRP}(I, \\hat{I})=\\mathcal {M}([\\phi _{HRF}(I)-\\phi _{HRF}(\\hat{I})]^2),$ where $\\mathcal {M}$ is a sequential two-stage mean operation, i.e., obtaining the inter-layer mean of intra-layer means.", "Additionally, an adversarial loss $\\mathcal {L}_{adv}$ is utilized to encourage the inpainted image to be realistic.", "Specifically, we use a PatchGAN [52] discriminator $\\mathcal {D}_{\\xi }(\\cdot )$ and label patches that overlap with the mask as fake and the others as real.", "The non-saturating adversarial loss is defined as ${\\begin{array}{c}\\mathcal {L}_D=-E_I[\\log D_{\\xi }(I)]-E_{I, m}[\\log D_{\\xi }(\\hat{I}) \\odot (1-m)] \\\\-E_{I, m}[\\log (1-D_{\\xi }(\\hat{I})) \\odot m],\\end{array}}$ $\\mathcal {L}_G=-E_{I, m}[\\log D_{\\xi }(\\hat{I})],$ $\\mathcal {L}_{adv}=\\operatorname{sg}_\\theta (\\mathcal {L}_D)+\\operatorname{sg}_{\\xi }(\\mathcal {L}_G) \\rightarrow \\min _{\\theta , \\xi },\\vspace{-2.84544pt}$ where $\\hat{I} = f_\\theta (I^\\prime )$ is the output of the inpainting network and $\\operatorname{sg}_{var}$ represents stop gradient w.r.t.", "$var$ .", "To further stabilize the training, we use a gradient penalty $\\mathcal {L}_{GP} = E_I\\Vert \\nabla D_{\\xi }(I)\\Vert _2^2$ [64] and a perceptual loss defined on features of the discriminator $\\mathcal {L}_{DP}$ [65].", "The final objective of the Inpainter is $\\mathcal {L}_{inpaint}=\\mathcal {L}_{HRP}+\\lambda _{adv}\\mathcal {L}_{adv}+\\lambda _{DP}\\mathcal {L}_{DP}+\\lambda _{GP}\\mathcal {L}_{GP},$ where $\\lambda _{adv}$ , $\\lambda _{DP}$ and $\\lambda _{GP}$ are hyper-parameters balancing the contributions of different losses.", "$\\mathcal {L}_{HRP}$ is responsible for supervised signals and global structure consistency while $\\mathcal {L}_{adv}$ and $\\mathcal {L}_{DP}$ are responsible for local details and realism.", "Figure: The architecture of the Inpainter.", "The input is a four-channel image with the first three channels being the original image and the last channel being the binary mask of the inpainting regions.", "The output is the inpainted image.", "The dimensional change of the feature map in FFC is shown in the lower right panel." 
], [ "Training", "Given that the training of the Inpainter does not require annotation and it learns a general ability to recover missing regions through contextual understanding, we initialize the model with pre-trained parameters from the Places-Challenge dataset [66] and fine-tune it on images accessible within each training set.", "The validation set consists of both accessible training set images and validation set images (paired with predefined masks obtained using a specific generation strategy [60]).", "The training is conducted with a batch size of 8 and an Adam optimizer is adopted with a learning rate of $10^{-3}$ for 50 epochs.", "The training data are augmented through random flipping, rotation and color jittering.", "We empirically set $\\lambda _{adv}=3$ , $\\lambda _{DP}=10$ and $\\lambda _{GP}=10^{-4}$ .", "Once trained, the Inpainter is used to remove the foregrounds from the skeleton-labeled samples taking the dilated noisy annotations as the masks.", "Then we construct a background bank for each dataset by augmenting the extracted backgrounds through random flipping, rotation, etc.", "Now we have a curve (foreground) bank $B_{curv} = \\lbrace c^{1}, \\cdots , c^{N}\\rbrace $ and a background bank $B_{bg} = \\lbrace b^{1}, \\cdots , b^{N}\\rbrace $ respectively from the Curve Generator and the Inpainter, for each given dataset.", "We construct an intermediate dataset $X_{inter} = \\lbrace x^{1}, \\cdots , x^{N}\\rbrace $ through randomly sampling a curve $c^{i}$ from $B_{curv}$ and a background $b^{i}$ from $B_{bg}$ , and then concatenating them to form a temporary sample $x^{i} = \\operatorname{concat}(b^{i},c^{i})$ .", "The problem now turns into an unpaired image-to-image translation task, i.e., designing a synthesizer to learn a mapping from $X_{inter}$ to the corresponding real dataset $Y$ .", "It is desirable that the local context especially the foreground of the synthetic image $\\hat{y}^{i}$ is spatially aligned with that of the corresponding intermediate image $x^{i}$ (especially $c^{i}$ ) as much as possible.", "Previously, for unpaired image translation, most existing methods apply GANs with a cycle structure, relying on cycle-consistency to ensure high-level correspondence [67].", "While effective, the underlying bijective assumption behind cycle-consistency is sometimes too restrictive, which may reduce the diversity of the generated samples.", "More importantly, the cycle-consistency is not suitable for our task since it does not guarantee any explicit or implicit spatial constraints.", "In such context, we introduce a multilayer patch-wise contrastive learning based synthesizer to learn a mapping from $X_{inter}$ to $Y$ inspired by [68], [69], which is illustrated in the lower middle panel of Fig.", "REF .", "It is trained in a generative adversarial manner with an internal contrastive learning pretext task.", "The generator (i.e., synthesizer) $G$ is a U-shape network, which firstly down-samples the input image into high-level features via an encoder $E$ with three residual blocks equipped with instance normalization and ReLU activation.", "As such, each pixel in the high-level feature map represents the embedding feature vector of a patch in the original image.", "Several layers of interest $E_{l \\in L}(x)$ in $E$ are selected to extract multi-scale features of patches and each passes through a two-layer multilayer perceptron (MLP) $H_l$ ($l$ indexes a layer), obtaining a feature stack $\\lbrace v_{l \\in L}=H_{l \\in L}[E_{l \\in L}(x)]\\rbrace $ .", 
"Given patchwise features $v_{l}$ and the corresponding pair $\\lbrace H_l(E_l(x))^{s_1}, H_l(E_l(G(x)))^{s_2}\\rbrace $ with $s_1$ and $s_2$ denoting the spatial locations of the patches of interest, we set $v^+$ to represent a patch at the same location as $v$ and $v^-_n$ to denote the $n^{th}$ among $N$ patches at different locations.", "The objective of the contrastive learning task is to maintain the local information at the same spatial location.", "Similar to the noise contrastive estimation loss [70], our objective function can be written as $\\hspace{-5.406pt}\\mathcal {L}_{c}=-\\sum _{l \\in L} \\log \\frac{\\exp (v_l \\cdot v_l^{+} / \\tau )}{\\exp (v_l \\cdot v_l^{+} / \\tau )+\\sum _{n=1}^N \\exp (v_l \\cdot v_{l n}^{-} / \\tau )},$ where $\\tau $ is a temperature hyper-parameter.", "Besides, we employ the identity loss, which was first proposed in [67] for regularizing the generator $G$ .", "We pass each real sample $y \\in Y$ through the encoder $E$ and obtain the patchwise features $v^*$ , the negative samples $v^{*-}$ and the positive samples $v_n^{*+}$ .", "The identity loss is formulated as $\\hspace{-2.84526pt}\\mathcal {L}_{id}=-\\sum _{l \\in L} \\log \\frac{\\exp (v_l^* \\cdot v_l^{*+} / \\tau )}{\\exp (v_l^* \\cdot v_l^{*+} / \\tau )+\\sum _{n=1}^N \\exp (v_l^* \\cdot v_{l n}^{*-} / \\tau )}.$ We use the LSGAN loss as our adversarial loss $\\mathcal {L}_{adv}$ [71] to make the synthetic images as realistic as possible.", "Therefore, with trade-off parameters $\\lambda _{adv}$ , $\\lambda _{c}$ and $\\lambda _{id}$ , the overall loss of the synthesizer is defined as $\\mathcal {L}_{syn} = \\lambda _{adv}\\mathcal {L}_{adv} + \\lambda _{c}\\mathcal {L}_{c} + \\lambda _{id}\\mathcal {L}_{id}.$ The training of the synthesizer is conducted employing an Adam optimizer with a learning rate of $10^{-4}$ and a cosine decay strategy, together with a batch size of 1.", "We utilize images that are accessible within each training set as the corresponding real dataset $Y$ for training the synthesizer.", "The hyperparameters and model weights are selected based on the Fréchet Inception Distances (FIDs) [76] between the synthesized images and $Y$ .", "We set $\\lambda _{adv}$ as 1, $\\lambda _{c}$ as 1, $\\lambda _{id}$ as 0.5 and $\\tau $ as 0.07.", "The training process lasted for 300 epochs." 
], [ "Two-stage Coarse-to-Fine Segmentation", "A synthetic dataset $\\mathcal {D}_{syn} = \\lbrace (\\hat{y}^{1},c^1),\\cdots ,(\\hat{y}^{N},c^N)\\rbrace $ , with $\\hat{y}^{i}$ being a synthetic image and $c^i$ being the corresponding curve ground truth, is created by the Synthesizer.", "The weakly-supervised task is then transformed into a fully- or semi- one when making use of solely the synthetic dataset or a combination of an unlabeled dataset $\\mathcal {D}_{ori}$ and the synthetic dataset $\\mathcal {D}_{syn}$ .", "In this section, we introduce a two-stage coarse-to-fine segmentation pipeline to tackle the task.", "A specific segmentation network is first trained on $\\mathcal {D}_{syn}$ to obtain a coarse model $S_{coarse}$ with a segmentation loss $\\mathcal {L}_{seg}$ $\\hspace{-5.406pt}\\mathcal {L}_{seg} = 0.5\\times \\mathcal {L}_{ce} + 0.5\\times \\mathcal {L}_{dice},$ where $\\mathcal {L}_{ce}$ and $\\mathcal {L}_{dice}$ respectively denote the cross-entropy loss and the Dice loss.", "We observe and conclude that the performance of $S_{coarse}$ is mainly limited by two issues; one is that the curve generated by the Curve Generator still has a certain morphological gap with the foreground of the real image, and the other one is that there is also a slight but inevitable intensity gap between the Synthesizer-generated image and the real image.", "We target at relieving the latter issue by making use of $\\mathcal {D}_{ori}$ to further boost the segmentation performance.", "We employ predictions on $\\mathcal {D}_{ori}$ from $S_{coarse}$ as pseudo-labels and train a fine model $S_{fine}$ (initialized on $S_{coarse}$ ) on the combination of $\\mathcal {D}_{ori}$ and $\\mathcal {D}_{syn}$ via a final loss $\\mathcal {L}_{final}$ which is formulated as $\\mathcal {L}_{final} = \\mathcal {L}_{seg} + \\lambda _{psd}\\mathcal {L}_{psd},$ where $\\mathcal {L}_{psd}$ denotes the loss on $\\mathcal {D}_{ori}$ sharing the same formulation as $\\mathcal {L}_{seg}$ in Eq.", "(REF ) and $\\lambda _{psd}$ is a trade-off parameter.", "We employ vanilla U-Net with feature channels of 16, 32, 64, 128 and 256 as our $S_{coarse}$ 's and $S_{fine}$ 's architecture.", "We use an SGD optimizer (weight decay = $10^{-4}$ , momentum = 0.9) for training both $S_{coarse}$ and $S_{fine}$ with a batch size of 12 and an initial learning rate of $10^{-2}$ .", "The total iterations and $\\lambda _{psd}$ are respectively set to be 30$k$ and 1.", "In this section, we extensively evaluate the effectiveness of our YoloCurvSeg framework on four representative curvilinear structure segmentation datasets." 
], [ "Datasets and Preprocessing", "We comprehensively evaluate YoloCurvSeg on four ophthalmic datasets: OCTA500, CORN, DRIVE and CHASEDB1.", "OCTA500 is used for retinal microvascular segmentation, and only the subset that contains 300 samples with a $6\\times 6$ $mm^2$ field of view (FOV) and a $400\\times 400$ resolution is utilized.", "We only make use of the $en$ -$face$ images generated by maximum projection between the internal limiting membrane layer and the outer plexiform layer.", "CORN consists of 1578 CCM images for nerve fiber segmentation.", "It also provides two subsets respectively consisting of 340 low-quality and 288 high-quality images.", "All CCM images have a resolution of $384\\times 384$ and an FOV of $400\\times 400$ $\\mu m^2$ .", "Instead of following the dataset's original division, we use 1532 images (samples that overlap with the test set are removed and the validation split ratio is 0.2) for training and validation, and test on 60 relatively accurately labeled samples provided in its subset.", "DRIVE and CHASEDB1 are used for retinal vessel segmentation and respectively have resolutions of $565\\times 584$ and $999\\times 960$ .", "These two fundus datasets are cropped via the provided FOV masks and respectively resized to $576\\times 576$ and $960\\times 960$ .", "For DRIVE, we utilize the original division of 20 training samples and 20 testing samples.", "For CHASEDB1, we follow the division in [2], [11], with the first 20 images serving as the training set and the remaining 8 used for testing.", "For OCTA500, we respectively utilize 200, 10 and 90 samples as the training, validation and testing sets.", "Images are first normalized and data augmentation consists of random rotation, flipping and Bézier Curve transformation [72]." 
], [ "Implementation Details", "We implement YoloCurvSeg and other compared methods by PyTorch on a workstation equipped with 8 RTX 3090Ti GPUs.", "In the Synthesizer, the indices of the layers selected to calculate $\\mathcal {L}_c$ include $\\lbrace 0,4,8,12,16\\rbrace $ .", "For training the Segmenters $S_{coarse}$ and $S_{fine}$ , the polynomial policy with $power=0.9$ is used to adjust the learning rate online [73].", "Other hyperparameters, training details and model architecture are already provided in previous sections.", "It is worth noting that manually-delineated vessel segmentation labels are provided for OCTA500, DRIVE and CHASEDB1.", "To generate noisy skeleton annotations for those three datasets, we perform the skeletonize operation in scikit-image [74] to obtain the skeletons of the original ground truth masks and then employ elastic transformation to simulate jitter noises that may be introduced during fast manual labeling.", "For CORN, only noisy skeleton labels are provided, and thus are directly used in all our experiments.", "For this dataset, we dilate each skeleton to a 3-pixel width to serve as the full mask in its fully-supervised learning setting, and the same operation is also applied to the testing set annotations.", "For sparse labels used in other comparative WSL methods, skeletons of the backgrounds are also generated via skeletonization.", "The synthesis process in YoloCurvSeg can be online or offline.", "For better reproducibility and fair comparison, we use the offline version in our experiments, i.e., we first generate the synthetic dataset and then train the Segmenters.", "We generate 1276, 5005, 1240, and 1604 synthetic samples respectively for OCTA500, CORN, DRIVE and CHASEDB1 if all training samples are labeled.", "If only one sample is labeled, we respectively generate 100, 100, 60, and 80 synthetic samples." 
], [ "Synthesis Performance", "Before comparing with SOTA WSL methods, we first qualitatively and quantitatively evaluate the synthesis performance of YoloCurvSeg.", "We visualize representative examples, in terms of the noisy skeleton labels, dilated masks for inpainting, extracted backgrounds, generated curves and synthesized images, in Fig.", "REF .", "It can be observed from the last column that the generated curves well match the synthetic images.", "Figure: Visualization of synthetic data from YoloCurvSeg.", "From left to right are examples of the noisy skeleton label, the inflated inpainting mask, the extracted background, the generated foreground, the synthesized image and the generated foreground superimposed on the synthesized image.", "Zoom in for details.We also compare the intensity distributions of the synthetic datasets with the real ones in Fig.", "REF , exhibiting high intensity similarities between the synthetic and real images in terms of both background and foreground.", "From the t-SNE [75] visualization in Fig.", "REF , the synthetic datasets are generally in line and well mixed with the real ones.", "In most cases, the synthetic data are even more uniformly and widely distributed, having a similar effect as data augmentation.", "The FIDs between the synthetic components and the corresponding real ones are tabulated in Table REF to measure the difficulty of learning a mapping between the two distributions.", "Column A shows the morphological gap between the synthetic and real curvilinear structures.", "Columns B and C illustrate there is great difficulty in translating directly to the image distribution from foreground masks, while the background image with the foreground removed is less distant from the original distribution.", "Since YoloCurvSeg incorporates background information, which implicitly acts as skip connections, it reduces the difficulty of image translation and thus can synthesize images close to the real ones even under few-shot settings.", "Our method achieves competitive FID scores on all four datasets, two of which are even smaller than those between the real training and test sets, as shown in column E of Table REF .", "Figure: Histograms of the four datasets in terms of the real data (top) and the corresponding synthetic data (bottom).Table: FID scores between various synthetic components and the real ones.", "A: synthetic mask vs. real mask, B: synthetic mask vs. real image, C: synthetic background vs. real image, D: synthetic image vs. real image, E: real training image vs. real test image." ], [ "Comparisons with SOTA", "Since noisy skeletons can be considered as sparse annotations with a certain degree of noise or directly utilized as noisy labels, we compare YoloCurvSeg with two categories of methods: 1) WSL methods and 2) noisy label learning (NLL) methods.", "The Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD) are used as the evaluation metrics." 
], [ "Comparison with WSL methods", "Firstly, we compare YoloCurvSeg with 11 scribble-supervised segmentation methods employing the same skeleton set that we generate: pCE (baseline); random walker pseudo labeling (RW); uncertainty aware self-ensembling and transformation-consistent model (USTM); Scribble2Label (S2L); Mumford–shah Loss (MLoss); entropy minimization (EM); dense CRF loss and gated CRF loss; active contour loss (AC); dual-branch network with dynamically mixed pseudo labels supervision (DBDM) and tree energy loss, the results of which are shown in Table REF .", "For fair comparisons, YoloCurvSeg does not go through the fine stage and is denoted as Ours (coarse) (i.e., the performance of $S_{coarse}$ ).", "The upper part of the table indicates that all training data are sparsely labeled, while the lower part indicates that only one representative sample is labeled and all other data are unlabeled and not utilized.", "YoloCurvSeg achieves the best performance on all datasets under both settings, outperforming other WSL methods by large margins.", "Comparing \"All\" versus \"One\", apparently, YoloCurvSeg is not sensitive to the sample size of the labeled data and achieves 96.1%, 106.3%, 95.3% and 96.2% fully-supervised performance on the four datasets with only 0.14%, 0.03%, 1.40% and 0.65% labeled pixels in terms of DSC.", "Representative visualization results are shown in Fig.", "REF .", "Figure: t-SNE visualization of the four real and synthetic datasets.", "CORN good and CORN poor respectively denote the high-quality and low-quality subsets in CORN." ], [ "Comparison with NLL methods", "In Table REF , we also compare YoloCurvSeg with several NLL methods, including generalized cross-entropy loss (GCE), co-teaching (COT), TriNet, confident learning with spatial label smoothing (CLS LS) and divergence-aware selective training (DAST) on OCTA500 and DRIVE.", "Despite utilizing one full mask and multiple or even all skeleton samples, these methods are inferior to YoloCurvSeg which employs only one skeleton sample.", "We also find that all NLL methods perform worse than the fully-supervised model (FS in Table REF ) trained solely with the same single fully labeled sample, illustrating that additional noisily labeled samples are not beneficial to model performance under such noisy conditions.", "Table: Comparison with NLL methods on OCTA500 and DRIVE.", "M and S respectively indicate full mask and noisy skeleton.", "FS denotes fully-supervised learning." 
], [ "Robustness Analysis and Ablation Study", "To verify the robustness of YoloCurvSeg for the selected one-shot sparsely labeled sample, we randomly select 10 samples from each dataset and compare the performance with the fully-supervised model trained on the same sample.", "As demonstrated in Fig.", "REF , YoloCurvSeg exceeds full supervision in almost all cases and delivers highly stable performance decoupled from image/annotation quality which nevertheless induces great fluctuations in the performance of the fully-supervised models.", "In addition to robustness, the predictions from YoloCurvSeg also have smaller variances.", "Both aspects indicate that YoloCurvSeg is sample-insensitive and can reduce the risk of selecting a wrong sample to label.", "As for the ablation study, we first remove the background bank extracted by the Inpainter and perform direct curve-to-image translation.", "As shown in panel (a) of Fig.", "REF , the synthetic images present unrealistic background texture due to the large gap between pre- and post-translation distributions (which is also shown in column B of Table REF ).", "For high-resolution image datasets, the foregrounds of the synthetic images are distorted and fail to spatially align with the corresponding curve masks, which also occurs when we remove $\\mathcal {L}_c$ of the Synthesizer (and use CycleGAN [67] for substitution), as shown in Fig.", "REF (b).", "This indicates that the contrastive Synthesizer (especially $\\mathcal {L}_c$ ) is crucial for maintaining the corresponding local context at the same spatial location.", "Figure: Performance of YoloCurvSeg given different labeled samples.Figure: Qualitative visualization of representative results from our S coarse S_{coarse} and other SOTA WSL methods under one-shot setting.It is worth pointing out that the results we have reported in the previous section represent the performance of $S_{coarse}$ .", "We also explore the performance of $S_{fine}$ with and without utilizing $\\mathcal {D}_{syn}$ .", "It can be observed in Fig.", "REF that the performance of $S_{fine}$ is further improved by utilizing $\\mathcal {D}_{syn}$ , which may be attributed to the fact that the synthetic curves have high degrees of continuity and can reduce the model's outlier predictions.", "We also conduct a performance comparison of the fully-supervised model with and without pretraining on $\\mathcal {D}_{syn}$ .", "Results show that the synthetic images from YoloCurvSeg also have great potential in serving as pretraining images; pretraining on $\\mathcal {D}_{syn}$ increases DSC by 0.55, 0.63, 0.81 and decreases ASSD by 0.061, 0.041, 0.148, respectively on OCTA500, DRIVE and CHASEDB1.", "Ultimately, via further utilizing an additional unlabeled dataset $\\mathcal {D}_{ori}$ , YoloCurvSeg achieves 97.00%, 97.49% and 97.63% of the fully-supervised performance (with the full masks of all samples employed) with only one noisy skeleton annotation on those three datasets.", "Figure: Visualization of representative synthetic results from the ablation study.", "Arrows mark the unrealistic background regions, structures, or misalignments between the masks and the regions of interest.Figure: Performance of YoloCurvSeg under different training paradigms.", "FS denotes fully-supervised learning.", "This paper presents a novel sparsely annotated segmentation framework for curvilinear structures, named YoloCurvSeg.", "Extensive experiments are conducted on four publicly accessible datasets, with superiority of our proposed 
framework successfully established.", "Potential future directions include extending YoloCurvSeg to 3D scenarios and developing a better pipeline to further reduce the domain gap between synthetic and real images." ] ]
2212.05566
[ [ "New examples of twisted Brill-Noether loci I" ], [ "Abstract Our purpose in this paper is to construct new examples of twisted Brill-Noether loci on curves of genus $g\\ge2$.", "Many of these examples have negative expected dimension.", "We deduce also the existence of a new region in the Brill-Noether map, whose points support non-empty standard Brill-Noether loci." ], [ "Introduction", "Our object in this paper is to construct twisted Brill-Noether loci different from those constructed in [12].", "In particular, the relevance of many of these examples is that they have negative Brill-Noether numbers, whereas the examples constructed in [12] all have positive Brill-Noether numbers.", "We deduce also the existence of a new region in the Brill-Noether map (see below for the definition), the points of which support non-empty Brill-Noether loci, whose existence was not previously known.", "This implies that our new examples also differ from those constructed in [12] (see Remark REF (iii)).", "In what follows, we shall usually abbreviate Brill-Noether to BN.", "Let $C$ be a smooth irreducible projective curve of genus $g\\ge 2$ defined over the complex numbers.", "For $i=1,2$ , let $M_i:=M(n_i,d_i)$ denote the moduli space of stable bundles of rank $n_i$ and degree $d_i$ .", "Similarly, we write $\\widetilde{M}_i$ for the corresponding moduli space of S-equivalence classes of semistable bundles.", "Let $E_2$ be any vector bundle on $C$ of rank $n_2$ and degree $d_2$ .", "For any integers $n_1$ , $d_1$ , $k$ with $n_1\\ge 1$ , one can define the twisted BN locus $B(n_1,d_1,k)(E_2):=\\lbrace E_1\\in M_1\\,|\\,h^0(E_1\\otimes E_2)\\ge k\\rbrace .$ The locus $B(n_1,d_1,k)(E_2)$ is a closed subscheme of $M_1$ .", "In fact, it is a determinantal locus; it follows that, if $\\beta (n_1,d_1,k)(E_2)\\le n_1^2(g-1)+1$ , then every irreducible component of $B(n_1,d_1,k)(E_2)$ has dimension greater than or equal to the BN number $\\beta (n_1,d_1,k)(E_2):=n_1^2(g-1)+1-k(k-\\chi ),$ where $\\chi :=n_2d_1+n_1d_2-n_1n_2(g-1).$ For this reason, the BN number is often referred to as the expected dimension.", "We define also $\\widetilde{B}(n_1,d_1,k)(E_2):=\\lbrace [E_1]\\in \\widetilde{M}_1\\,|\\,h^0(\\operatorname{gr}E_1 \\otimes E_2)\\ge k\\rbrace ,$ where $[E_1]$ denotes the S-equivalence class of $E_1$ and $\\text{gr}E_1$ is the corresponding graded bundle.", "This is a closed subscheme of $\\widetilde{M}_1$ , but can have components of dimension less than the BN number.", "In the special case $E_2=\\mathcal {O}_C$ , we obtain the standard (untwisted) higher rank BN locus, denoted by $B(n_1,d_1,k)$ , with expected dimension $\\beta (n_1,d_1,k):=n_1^2(g-1)+1-k(k-d_1+n_1(g-1)).$ In particular, $B(1,d_1,k)$ is the classical BN locus $W^{k-1}_{d_1}$ .", "In the case $n_1=1$ , twisted BN loci have been studied for some time and include the special case of maximal line subbundles.", "A systematic study for arbitrary $n_1$ was started in [12], where, in particular, examples of twisted BN loci were constructed [12] using the technique of [18].", "As we shall see in Theorem REF , these have the important feature that the BN number (REF ) is positive.", "We can now introduce the loci which we are going to study in this paper.", "This is a special case of a more general construction introduced in [12], where it was used primarily to obtain results for $B(n_1,d_1,k)(E_2)$ .", "If $\\gcd (n_i,d_i)=1$ for $i=1,2$ , then there exist universal bundles $\\mathcal {U}_i$ on $M_i\\times C$ and, for any integer $k$ , we 
define the universal twisted BN locus as follows: $B^k(\\mathcal {U}_1,\\mathcal {U}_2):=\\lbrace (E_1,E_2)\\in M_1\\times M_2\\, |\\,h^0(E_1\\otimes E_2)\\ge k\\rbrace .$ In the non-coprime case, we need to lift to étale coverings of $M_1$ and $M_2$ in order to construct “universal” bundles [20].", "However (REF ) still makes sense and we shall use the same notation.", "The locus $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ is a closed subscheme of $M_1\\times M_2$ ; if $k\\le 0$ , then $B^k(\\mathcal {U}_1,\\mathcal {U}_2)=M_1\\times M_2$ .", "The structure of $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ as a determinantal locus allows us to define a BN number (see [12] $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2):=\\dim M_1+\\dim M_2-k(k-\\chi ),$ which may again be referred to as the expected dimension.", "Note that $\\chi =\\chi (E_1\\otimes E_2)$ for all $(E_1,E_2)\\in M_1\\times M_2$ .", "If $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)\\le \\dim M_1+\\dim M_2$ , then every irreducible component of $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ has dimension greater than or equal to $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)$ .", "We define similarly the corresponding semistable locus $\\widetilde{B}(\\mathcal {U}_1,\\mathcal {U}_2):=\\lbrace ([E_1],[E_2])\\in \\widetilde{M}_1\\times \\widetilde{M}_2\\,|\\,h^0(\\text{gr}E_1\\otimes \\text{gr}E_2)\\ge k\\rbrace .$ Note that the definition is symmetric and $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\cong B^k(\\mathcal {U}_2,\\mathcal {U}_1)$ .", "Moreover, by Serre duality, $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\cong B^{k_1}(\\mathcal {U}_1^*,\\mathcal {U}_2^*\\otimes p_C^*K_C)$ , where $k_1=k-\\chi $ and $K_C$ denotes the canonical line bundle on $C$ .", "Note also that $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\cong B^k(\\mathcal {U}_1\\otimes p_C^*L^*,\\mathcal {U}_2\\otimes p_C^*L)$ for any line bundle $L$ .", "We take account of these possibilities in Theorem REF (ii) and Remark REF (iii) (see also Remark REF (i)).", "Remark 1.1 The following conditions are equivalent: $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ ; $B(n_1,d_1,k)(E_2)\\ne \\emptyset $ for some $E_2\\in M_2$ ; $B(n_2,d_2,k)(E_1)\\ne \\emptyset $ for some $E_1\\in M_1$ .", "Many of our results are valid for any smooth curve.", "For some, however, we require the concept of a Petri curve.", "The curve $C$ is said to be Petri if the classical Petri map $H^0(L)\\otimes H^0(K_C\\otimes L^*)\\longrightarrow H^0(K_C)$ is injective for every line bundle $L$ on $C$ .", "The general curve is Petri in the sense that Petri curves of genus $g$ form a non-empty open subset of the moduli space of curves of genus $g$ [11], [16].", "By classical BN theory, if $C$ is Petri, the BN locus $B(1,d_1,k)$ has dimension precisely $\\beta (1,d_1,k)$ whenever $0\\le \\beta (1,d_1,k)\\le g$ and is empty when $\\beta (1,d_1,k)<0$ .", "Sometimes we will not be able to obtain results for arbitrary Petri curves but only for curves which are general in the sense that they live in an unspecified open subset of the moduli space of curves of genus $g$ .", "The BN loci $B(1,d_1,k)(E_2)$ , with $E_2$ general, behave in a similar way to the classical BN loci [12].", "By Remark REF , this implies non-emptiness results for $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ when $n_1=1$ or $n_2=1$ .", "Moreover, when $n_2=1$ , we have an isomorphism $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\longrightarrow B(n_1,d_1+n_1d_2,k)\\times \\mbox{Pic}^{d_2}(C):\\ (E_1,L)\\mapsto (E_1\\otimes L,L),$ giving a description of $B^k(\\mathcal 
{U}_1,\\mathcal {U}_2)$ in terms of an untwisted BN locus.", "We know many cases in which such loci are non-empty; see, in particular, Section .", "In this paper, we shall assume that $n_i\\ge 2$ for $i=1,2$ .", "We can now state our first main theorem.", "Theorem 1.2 Suppose that $C$ is a smooth curve of genus $g\\ge 2$ and that $\\mu _i$ , $\\lambda _i$ $(i=1,2)$ are positive rational numbers.", "Let $B(n_i,d_i,k_i)$ be non-empty BN loci with $n_i\\ge 2$ , $d_i/n_i=\\mu _i$ and $k_i/n_i=\\lambda _i$ and suppose that $k=k_1k_2$ .", "If $\\mu _1<2$ and $\\mu _2\\le 2g$ , then $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ .", "If, in addition, $\\mu _1+\\mu _2<\\lambda _1\\lambda _2+g-1$ , then, for sufficiently large $n_1$ , $n_2$ , $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ .", "The major part of this theorem will be stated and proved in a more detailed version as Theorem REF , the last part being covered by Lemma REF .", "Using the results of Section , it is easy to find many cases where the hypotheses of the theorem are satisfied, thus obtaining the examples we are seeking (see Example REF ).", "There exist examples even when $n_1=n_2=2$ (see Example REF ).", "In Theorem REF , we show that this construction gives a new region in the $(\\mu ,\\lambda )$ -plane, whose points support non-empty (untwisted) BN loci.", "Our second main theorem is of a rather different nature, in that $d_2$ is negative and so $h^0(E_2)=0$ for all $E_2\\in M_2$ .", "Theorem 1.3 Suppose that $C$ is a smooth curve of genus $g\\ge 2$ , $n_1\\ge 2$ , $k_1>n_1$ , $n$ a positive integer and either $d>2ng$ or $d=2ng$ and $C$ is non-hyperelliptic.", "Suppose further that $B(n_1,d_1,k_1)\\ne \\emptyset $ and that $k\\le (d-n(g-1))(k_1-n_1)-nd_1.$ Let $(n_2,d_2)=(d-ng,-d)$ .", "Then $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ .", "If, in addition, $k=d(k_1-n_1)-e$ , where $e\\ge n(g-1)(k_1-n_1)+nd_1$ , and $d_1<k_1+n_1(g-1)-\\frac{g-1}{k_1-n_1},$ then, for any fixed values of $n_1$ , $d_1$ , $k_1$ , $n$ and $e$ , and $d\\gg 0$ , $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ .", "A more detailed version of this theorem will be stated and proved as Theorem REF .", "In Corollary REF and the related Example REF , we show that the requirement that $B(n_1,d_1,k_1)\\ne \\emptyset $ is compatible with the condition $d_1<k_1+n_1(g-1)-\\frac{g-1}{k_1-n_1}$ .", "Hence Theorem REF does lead to examples of twisted BN loci with negative expected dimension.", "Section is concerned with the non-emptiness of $B(n_1,d_1,k_1)$ .", "In subsection REF , we introduce the BN map and describe some sufficient conditions for non-emptiness.", "Subsection REF ) contains a description of kernel bundles and the dual span construction.", "We include also a description of BN loci for bundles of slope $\\le 2$ (subsection REF ).", "In section , we obtain an extended version of the main existence theorem of [12].", "Sections and contain our main results.", "The dual span construction works also when we consider pairs $(E,V)$ where $V$ is a subspace of $H^0(E)$ which generates $E$ .", "In the case where $E$ is a line bundle, there are then further stability results, which have consequences for the non-emptiness of $B(n_1,d_1,n_1+1)$ .", "We shall investigate applications of this to twisted BN loci in a future paper [7].", "Throughout the paper, $C$ will denote a smooth irreducible projective curve of genus $g\\ge 2$ defined over ${\\mathbb {C}}$ .", "For a vector bundle $E$ on $C$ of rank $n$ and degree $d$ , we denote by $\\mu 
(E):=\\frac{d}{n}$ the slope of $E$ .", "We write also $h^0(E)$ for the dimension of the space of sections $H^0(E)$ .", "The canonical line bundle on $C$ is denoted by $K_C$ .", "For any real number $a$ , we write $\\lfloor a\\rfloor $ and $\\lceil a\\rceil $ for the largest integer $\\le a$ and the smallest integer $\\ge a$ , respectively.", "We thank Lilia Alanis Lopez for help with the figures." ], [ "Non-emptiness of BN loci", "In this section, we will recall and extend some of the principal known results on the non-emptiness of the BN loci $B(n_1,d_1,k_1)$ .", "The answer is completely known for $k_1\\le n_1$ , but the problem is much more complicated (and still not completely solved) for $k_1>n_1$ (except for $g\\le 3$ and (almost) for hyperelliptic curves) (see [6]).", "For Petri curves, the naïve conjecture is that $B(n_1,d_1,k_1)$ is non-empty if and only if $\\beta (n_1,d_1,k_1)\\ge 0$ .", "From (REF ), this is equivalent to $d_1\\ge k_1+n_1(g-1)-\\frac{n_1(g-1)}{k_1/n_1}-\\frac{1}{k_1}.$ In fact, numerous examples show that (REF ) is neither necessary nor sufficient for non-emptiness of $B(n_1,d_1,k_1)$ (see, for example,[4], [2], [17], [15], [1])." ], [ "The BN map", "Following [6], the non-emptiness of $B(n_1,d_1,k_1)$ can be understood in terms of the BN map.", "For this, we map any non-empty BN locus $B(n_1,d_1,k_1)$ (or $\\widetilde{B}(n_1,d_1,k_1)$ ) with $0\\le d_1\\le n_1(2g-2)$ to the point in the plane $(\\mu ,\\lambda )=(d_1/n_1,k_1/n_1).$ We shall say that $(\\mu ,\\lambda )$ supports $B(n_1,d_1,k_1)$ (resp., $\\widetilde{B}(n_1,d_1,k_1)$ ) if (REF ) holds and $B(n_1,d_1,k_1)\\ne \\emptyset $ (resp., $\\widetilde{B}(n_1,d_1,k_1)\\ne \\emptyset $ ).", "We are interested here in regions in the $(\\mu ,\\lambda )$ -plane for which it is known that all points with rational coordinates support infinitely many BN loci.", "The first examples of such regions were determined by Teixidor i Bigas [21] and Mercat [18].", "To describe these in terms of the BN map, we follow [6].", "For $s\\ge 1$ , write $\\widehat{\\eta }^{\\prime }(s):=s+g-2-\\left\\lfloor \\frac{g-1}{s}\\right\\rfloor $ ; note that $d\\ge \\widehat{\\eta }^{\\prime }(s)$ if and only if $\\beta (1,d+1,s)\\ge 1$ , and that $\\widehat{\\eta }^{\\prime }(1)=0$ , $\\widehat{\\eta }^{\\prime }(g)=2g-2$ .", "Now define a function $t_g$ for $0\\le \\mu \\le 2g-2$ as follows: $t_g(\\mu ):={\\left\\lbrace \\begin{array}{ll}0,&\\mu =0;\\\\\\mu -\\lceil \\mu \\rceil +s,&\\mu \\in ]\\widehat{\\eta }^{\\prime }(s),\\widehat{\\eta }^{\\prime }(s)+1];\\\\s,&\\mu \\in ]\\widehat{\\eta }^{\\prime }(s)+1,\\widehat{\\eta }^{\\prime }(s+1)].\\end{array}\\right.", "}$ We define T to be the region in the $(\\mu ,\\lambda )$ -plane given by $\\mu ,\\lambda \\in {\\mathbb {Q}},\\ 0\\le \\mu \\le 2g-2,\\ 0<\\lambda \\le t_g(\\mu ).$ The region T is shown in [6] in the case $g=7$ .", "Note that Clifford's Theorem can be stated in the form $\\lambda \\le \\frac{\\mu }{2}+1$ and the boundary line for this inequality (the Clifford line) is shown in the figure.", "Moreover, the condition $\\beta (n_1,d_1,k_1)=1$ becomes $\\lambda (\\lambda -\\mu +g-1)=g-1$ (see (REF )), which defines a branch of a hyperbola with asymptotes $\\lambda =0$ and $\\lambda =\\mu -g+1$ (the BN curve), and is also shown in the figure.", "The following lemma summarises the results of Teixidor [21] and Mercat [18].", "Teixidor's proof is for general $C$ ; for $\\widetilde{B}(n_1,d_1,k_1)$ , this is sufficient to prove it for any smooth curve.", "Mercat's result [18] 
is valid for any smooth curve and implies our lemma.", "For the version stated below, see [6].", "Lemma 2.1 Let $C$ be a smooth curve of genus $g\\ge 2$ and suppose that $n_1\\ge 2$ .", "If $(\\mu ,\\lambda )\\in \\operatorname{T}$ , then $(\\mu ,\\lambda )$ supports $\\widetilde{B}(n_1,d_1,k_1)$ for all values of $n_1$ for which $n_1\\mu $ and $n_1\\lambda $ are both integers.", "The same is true for $B(n_1,d_1,k_1)$ , provided that the points $(\\mu ,\\lambda )=(\\widehat{\\eta }^{\\prime }(s)+1,\\lambda ),\\ s-1<\\lambda \\le s,\\ \\widehat{\\eta }^{\\prime }(s)+1\\ne \\widehat{\\eta }^{\\prime }(s+1).$ are excluded.", "Remark 2.2 Lemma REF is equivalent to the statement that $\\widetilde{B}(n_1,d_1,k_1)\\ne \\emptyset $ if $d_1\\ge k_1+n_1(g-1)-n_1\\left\\lfloor \\frac{g-1}{\\lceil k_1/n_1\\rceil }\\right\\rfloor .$ This can also be stated as $\\mu \\ge \\lambda +g-1-\\left\\lfloor \\frac{g-1}{\\left\\lceil \\lambda \\right\\rceil }\\right\\rfloor .$ For $B(n_1,d_1,k_1)$ , the case $d_1= n_1\\left(\\left\\lceil \\frac{k_1}{n_1}\\right\\rceil +g-1- \\left\\lfloor \\frac{g-1}{\\lceil k_1/n_1\\rceil }\\right\\rfloor \\right)$ must be excluded when $\\beta (1,d_1/n_1+1,\\left\\lceil k_1/n_1\\right\\rceil +1)\\le 0$ .", "(REF ) and (REF ) are essentially a restatement of Teixidor's conditions [21] and relate closely to Mercat's construction [18].", "Another method of constructing non-empty BN loci on a smooth curve $C$ is described in [6].", "This starts from the results of [4], [17], [19], which give precise conditions for non-emptiness when $d_1\\le 2n_1$ , and proceeds by tensoring by line bundles $L$ with $h^0(L)=s\\ge 1$ .", "Serre duality extends the region further and we denote the result by BMNO.", "To describe BMNO in the BN map, we write $\\hat{\\eta }(s):=s+g-1-\\left\\lfloor \\frac{g}{s}\\right\\rfloor $ for $s\\ge 1$ .", "The condition $d\\ge \\hat{\\eta }(s)$ is then equivalent to $\\beta (1,d,s)\\ge 0$ .", "On a Petri curve, this is equivalent by classical BN theory to the assertion that $B(1,d,s)\\ne \\emptyset $ .", "Note that $\\hat{\\eta }(1)=0$ and, on a Petri curve, $\\hat{\\eta }(2)$ is the gonality of $C$ .", "We now define a function $f_g(\\mu )$ for $0\\le \\mu \\le g-1$ as follows: $f_g(\\mu ):={\\left\\lbrace \\begin{array}{ll}\\frac{s}{g}(\\mu -\\left\\lceil \\mu \\right\\rceil )+s,&\\mu \\in [\\hat{\\eta }(s),\\hat{\\eta }(s)+1];\\\\\\frac{s}{g}(\\mu -\\left\\lceil \\mu \\right\\rceil +1)+s,&\\mu \\in ]\\hat{\\eta }(s)+1,\\hat{\\eta }(s+1)-1];\\\\\\frac{\\hat{\\eta }(s+1)-s}{g}(\\mu -\\left\\lceil \\mu \\right\\rceil +1)+s,&\\mu \\in ]\\hat{\\eta }(s+1)-1,\\hat{\\eta }(s+1)[.\\end{array}\\right.", "}$ We extend the definition of $f_g$ , which is modified from that in [6] to allow $\\mu =0$ , to the whole interval $[0,2g-2]$ by Serre duality.", "The region BMNO in the $(\\mu ,\\lambda )$ -plane is then defined by the conditions $0\\le \\mu \\le 2g-2,\\ 0\\le \\lambda \\le f_g(\\mu ).$ For $g=10$ , this region is illustrated in [6], which also includes the top boundary of T, the Clifford line and the BN curve.", "In Figure 1, we recall [6], including also an indication of the excluded points from Lemmas REF and REF and Remark REF (ii).", "The following lemma summarises the principal result of [6], taking account of the results of [19] (see [6]).", "Lemma 2.3 Let $C$ be a smooth curve of genus $g\\ge 2$ and $n_1\\ge 2$ .", "If $(\\mu ,\\lambda )\\in \\operatorname{BMNO}$ , then $(\\mu ,\\lambda )$ supports $\\widetilde{B}(n_1,d_1,k_1)$ for infinitely many values of 
$n_1$ .", "If $C$ is non-hyperelliptic, the same holds for $B(n_1,d_1,k_1)$ , provided that points of the form $(\\mu ,\\lambda )$ such that $\\mu =\\hat{\\eta }(s)$ with $\\mu \\le g-1$ and $\\lambda >\\frac{s-1}{g}+s-1$ , together with their Serre duals and the points $((\\widehat{\\eta }(s)+1,s)$ , are excluded.", "Remark 2.4 (i) [6] provide information about which values of $n_1$ are included in the above result for any given $(\\mu ,\\lambda )$ .", "In particular, there is a dense set of points in BMNO for which all values of $n_1$ with $n_1\\mu , n_1\\lambda \\in {\\mathbb {Z}}$ are allowed.", "(ii) Some points excluded in Lemma REF are included in Lemma REF .", "In fact, if $g$ is a multiple of $s$ , then $\\widehat{\\eta }^{\\prime }(s)=\\widehat{\\eta }(s)$ , and the points $(\\widehat{\\eta }(s)+1,\\lambda )$ for $s-1<\\lambda <s$ are excluded by (REF ), but included by Lemma REF .", "(iii) For $g\\ge 4$ , there are many non-empty $B(n_1,d_1,k_1)$ corresponding to points outside the union of T and BMNO; see, for example, the figures in [13], [14], [15].", "We shall construct later a new region of the BN map, giving new points for all $g\\ge 5$ (see Theorem REF ).", "(iv)[6] also contains comprehensive results for hyperelliptic curves.", "Figure: NO_CAPTION" ], [ "Kernel and dual span bundles", "Let $E$ be a generated bundle of rank $n$ and degree $d$ .", "Consider the exact sequence $0\\longrightarrow D_E^*\\longrightarrow H^0(E)\\otimes \\mathcal {O}\\longrightarrow E\\longrightarrow 0.$ The bundle $D_E^*$ may be referred to as a kernel bundle (or syzygy bundle) and has rank $d-ng$ and degree $-d$ .", "Dualising (REF ), we have $0\\longrightarrow E^*\\longrightarrow H^0(E)^*\\otimes \\mathcal {O}\\longrightarrow D_E\\longrightarrow 0.$ Following [10], we call $D_E$ a dual span bundle.", "The following theorem addresses the stability of $D_E$ and summarises results of Butler and Mercat (see also [5]).", "We write $S:=\\lbrace E\\in M(n,d)|D_E\\text{ is stable}\\rbrace .$ Theorem 2.5 Let $C$ be a smooth curve of genus $g\\ge 2$ .", "Suppose that either $d>2ng$ and $E\\in M(n,d)$ or $d=2ng$ , $C$ is non-hyperelliptic and $E$ is a general element of $M(n,d)$ .", "Then $E$ is generated and $D_E$ is stable of slope $\\le 2$ .", "In particular, $S$ is a non-empty open subset of $M(n,d)$ and $S=M(n,d)$ when $d>2ng$ .", "Moreover, the morphism $S\\longrightarrow B(d-ng,d,d-n(g-1)): E\\mapsto D_E$ is an isomorphism.", "Any stable bundle of slope $>2g-1$ is generated.", "Moreover, under the given hypotheses, $D_E$ is stable by [9].", "The fact that (REF ) is an isomorphism follows from [17] and [19]." 
], [ "Small slope", "When $d_1<2n_1$ , Lemma REF includes a complete answer to the question of the non-emptiness of $B(n_1,d_1,k_1)$ .", "The references [4], [17] (see also [3]) contain much further information about these BN loci, in particular that there exists $E_1\\in B(n_1,d_1,k_1)$ , taking one of a number of special forms.", "Theorem 2.6 Let $C$ be a curve of genus $g\\ge 2$ .", "If $d_1<2n_1$ and $B(n_1,d_1,k_1)\\ne \\emptyset $ , then there exists $E_1\\in B(n_1,d_1,k_1)$ taking one of the following forms.", "I $d_1<n_1+g$ .", "$E_1$ fits into an exact sequence $0\\longrightarrow \\mathcal {O}^{k_1}\\longrightarrow E_1\\longrightarrow F\\longrightarrow 0$ for some sheaf $F$ .", "II $d_1=n_1+g\\ell <2n_1$ , $\\ell \\ge 1$ .", "$E_1$ is the dual span of a stable bundle $E$ of slope $>2g$ and is therefore given by an exact sequence $0\\longrightarrow E^*\\longrightarrow H^0(E)^*\\otimes \\mathcal {O}\\longrightarrow E_1\\longrightarrow 0.$ Moreover, $h^0(E)=h^0(E_1)= n_1+\\ell $ .", "III $d_1=n_1+g\\ell +\\ell ^{\\prime }<2n_1$ with $\\ell >0$ and $0<\\ell ^{\\prime }<g$ .", "There exist exact sequences $0\\longrightarrow D_{E^{\\prime }}^*\\longrightarrow H^0(E^{\\prime })\\otimes \\mathcal {O}\\longrightarrow E^{\\prime }\\longrightarrow 0$ with $E^{\\prime }$ stable of rank $n_1+\\ell ^{\\prime }$ , degree $d_1$ and $h^0(E^{\\prime })=n_1+\\ell +\\ell ^{\\prime }$ , and $0\\longrightarrow \\mathcal {O}^{\\ell ^{\\prime }}\\longrightarrow E^{\\prime }\\longrightarrow E_1\\longrightarrow 0.$ I.", "By [4] (for $d_1\\le n_1$ ) and [17] (for $n_1<d_1<n_1+g$ ), the sections of $E_1$ are all carried by a subsheaf $\\mathcal {O}^{k_1}$ .", "This gives (REF ) II.", "We know that $B(n_1,d_1,k_1)\\ne \\emptyset $ if and only if $d_1\\ge n_1+g(k_1-n_1)$ .", "We can therefore choose $E_1\\in B(n_1,d_1,n_1+\\ell )$ and then $h^0(E_1)=n_1+\\ell $ .", "By [17], $E_1=D_E$ for some stable bundle $E$ of rank $\\ell $ and degree $d_1$ , hence of slope $>2g$ .", "So $h^0(E)=n_1+\\ell $ and we have the exact sequence (REF ).", "III.", "By [17] (see Case II), there exists a stable bundle $E^{\\prime }$ of rank $n_1+\\ell ^{\\prime }$ and degree $d_1$ with $h^0(E^{\\prime })=n_1+\\ell +\\ell ^{\\prime }$ , fitting in an exact sequence REF .", "Now, as in the proof of [17], we have an exact sequence (REF ), with $E_1\\in B(n_1,d_1,n_1+\\ell )$ .", "Since $B(n_1,d_1,k_1)\\ne \\emptyset $ , we have (see [17]) $n_1+g\\ell +\\ell ^{\\prime }=d_1\\ge n_1+g(k_1-n_1),$ hence $\\ell \\ge k_1-n_1$ and $E_1\\in B(n_1,d_1,k_1)$ .", "Remark 2.7 Theorem REF can be extended to the case $d_1=2n_1$ (see [19]).", "We have two cases.", "IV $C$ non-hyperelliptic.", "By the results of [19], we are in one of Cases I-III, except that in Case II, we now have $\\deg E=2n_1g$ .", "There is one further case, namely when $E_1=D_{K_C}\\in B(g-1,2g-2,g)$ .", "V $C$ hyperelliptic.", "In this case $B(n_1,2n_1,k_1)\\ne \\emptyset $ if and only if $k_1\\le n_1$ (see [19]).", "When this holds, there exists $E_1\\in B(n_1,2n_1,k_1)$ which admits $\\mathcal {O}^{k_1}$ as a subsheaf." 
], [ "A non-emptiness result for twisted BN loci", "In this section we give an extended version of [12], which we need for comparison purposes.", "Theorem 3.1 Let $C$ be a smooth curve of genus $g\\ge 2$ and $E_2$ any vector bundle of rank $n_2$ and degree $d_2$ on $C$ .", "Let $d_0$ and $k_0$ be integers satisfying $\\beta (1,d_0,k_0)(E_2)\\ge 1$ , and suppose that $n_1\\ge 2$ .", "Then, (i) if $k\\le n_1k_0$ and $d_1\\ge n_1d_0+1\\ (\\text{resp.", "}, d_1\\ge n_1d_0)$ , the twisted BN locus $B(n_1,d_1,k)(E_2) \\ (\\text{resp.", "}, \\widetilde{B}(n_1,d_1,k)(E_2))$ is non-empty and $\\beta (n_1,d_1,k)(E_2)>(\\text{resp.", "}, \\ge )\\,1$ ; (ii) if $k_1:=k-n_2d_1+n_1d_2-n_1n_2(g-1)\\le n_1k_0$ and $-d_1\\ge n_1d_0+1\\ (\\text{resp.", "}, -d_1\\ge n_1d_0)$ , the twisted BN locus $B(n_1,d_1,k)(K_C\\otimes E_2^*)\\ (\\text{resp.", "}, \\widetilde{B}(n_1,d_1,k)(K_C\\otimes E_2^*))$ is non-empty and $\\beta (n_1,d_1,k)(K_C\\otimes E_2^*)>(\\text{resp.", "}, \\ge )\\,1.$ (i) The non-emptiness of $B(n_1,d_1,k)(E_2)$ (resp., $\\widetilde{B}(n_1,d_1,k)(E_2)$ ) is [12].", "The inequality $\\beta (n_1,d_1,k)(E_2)>(\\ge )1$ follows from the observation that $\\beta (n_1,n_1d_0,n_1k_0)(E_2)\\ge 1 \\Longleftrightarrow \\beta (1,d_0,k_0)(E_2)\\ge 1$ and the fact that $\\beta (n_1,d_1,k)(E_2)$ is a strictly increasing function of $d_1$ and a decreasing function of $k$ .", "(ii) Under the stated hypotheses, in the case $-d_1\\ge n_1d_0+1$ , (i) implies that $B(n_1,-d_1,k_1)(E_2)\\ne \\emptyset $ .", "By Serre duality, $B(n_1,d_1,k)(K_C\\otimes E_2^*)\\cong B(n_1,-d_1,k_1)(E_2).$ So $B(n_1,d_1,k)(K_C\\otimes E_2^*)\\ne \\emptyset $ .", "Moreover, as in the proof of (i), $\\beta (n_1,-d_1,k_1)(E_2)>1$ and $\\beta (n_1,d_1,k)(K_C\\otimes E_2^*)=n_1^2(g-1)+1-kk_1=\\beta (n_1,-d_1,k_1)(E_2).$ The same proof applies in the case $-d_1\\ge n_1d_0$ if we replace $>$ in (REF ) by $\\ge $ and $B$ with $\\widetilde{B}$ throughout.", "Corollary 3.2 Under the hypotheses of Theorem REF , the twisted BN locus $B^k(\\mathcal {U}_!,\\mathcal {U}_2)\\ (\\text{resp.", "}, \\widetilde{B}(\\mathcal {U}_1,\\mathcal {U}_2))$ is non-empty.", "Moreover $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)>n_2^2(g-1)+2\\ >0\\ (\\mbox{resp.,}\\ge n_2^2(g-1)+2).$ In the theorem, we can take $E_2$ to be stable; the required non-emptiness then follows (see Remark REF ).", "The inequality for $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)$ follows from the fact that $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)= \\dim M_2+\\beta (n_1,d_1,k)(E_2)$ (see (REF ) and (REF )).", "Remark 3.3 (i) As indicated in the introduction, the fact that the BN numbers are positive will be crucial for us in establishing that many of the examples constructed in this paper are new.", "(ii) (REF ) makes Theorem REF (i) the precise analogue of the theorem of Teixidor i Bigas [21] for twisted BN loci $B(n_1,d_1,k)(E_2)$ , at least on a general curve; the proof in [12] depends on Mercat's construction [18] and holds for any curve.", "(iii) If we replace $E_2$ by $E_2\\otimes L$ , where $L$ is a line bundle of degree $\\ell $ , and simultaneously replace $d_0$ by $d_0-\\ell $ and $d_1$ by $d_1-n_1\\ell $ in Theorem REF (i), the BN numbers and the isomorphism classes of the BN loci are unchanged.", "This applies also to Theorem REF (ii) except that now $d_1$ must be replaced by $d_1+n_1\\ell $ ." 
], [ "Examples with $k=k_1k_2$", "In this section, we consider the following situation.", "Suppose that $B(n_i,d_i,k_i)\\ne \\emptyset $ for $i=1,2$ and that there exist $E_i\\in B(n_i,d_i,k_i)$ such that $h^0(E_1\\otimes E_2)\\ge k_1k_2$ .", "When $n_2=1$ , this is exactly the situation considered in [6].", "Here, however, we assume $n_i\\ge 2$ .", "One scenario to which this applies is when $E_1=D_E$ ." ], [ "The construction", "Proposition 4.1 Let $C$ be a smooth curve of genus $g$ .", "Suppose that $E$ is a generated stable (resp., semistable) bundle of rank $n$ and degree $d_1$ and that $h^0(E)=k_1=n+n_1$ .", "Suppose further that $D_E$ is stable (resp., semistable) and that $E_2\\in B(n_2,d_2,k_2)\\ (\\text{resp.", "}, [E_2]\\in \\widetilde{B}(n_2,d_2,k_2))$ with $\\mu (E_2)<\\mu (E)$ .", "Then $h^0(D_E\\otimes E_2)\\ge k_1k_2\\ (\\text{resp.", "}, h^0(D_E\\otimes \\text{gr}E_2)\\ge k_1k_2)$ .", "In particular, $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset \\ (\\text{resp.", "}, \\widetilde{B}^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset )$ .", "Consider the sequence (REF ) and suppose first that we are in the stable case.", "Note that $D_E\\in B(n_1,d_1,k_1)$ , since $D_E$ is stable by hypothesis and $h^0(E^*)=0$ .", "Note also that $h^0(E^*\\otimes E_2)=0$ since $E^*\\otimes E_2$ is semistable of negative slope.", "Tensoring (REF ) by $E_2$ and taking global sections, we obtain $h^0(D_E\\otimes E_2)\\ge h^0(E)\\cdot h^0(E_2)\\ge k_1k_2.$ The semistable case proceeds similarly.", "Note that, by (REF ), we have $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)=(n_1^2+n_2^2)(g-1)+2-k(k-\\chi ).$ We have also $\\beta (n_1n_2,n_2d_1+n_1d_2,k)=n_1^2n_2^2(g-1)+1-k(k-\\chi ).$ Lemma 4.2 Let $\\mu _i$ , $\\lambda _i$ be positive rational numbers for $i=1,2$ and suppose that $d_i=n_i\\mu _i$ , $k_i=n_i\\lambda _i$ , $k=k_1k_2$ .", "If $\\mu _1+\\mu _2<\\lambda _1\\lambda _2+g-1,$ then, for fixed $\\mu _i$ , $\\lambda _i$ and sufficiently large $n_1$ , $n_2$ , $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ .", "Since $k=k_1k_2$ , (REF ) is equivalent to $\\frac{\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)}{n_1^2n_2^2}=\\left(\\frac{1}{n_2^2}+\\frac{1}{n_1^2}\\right)(g-1)+\\frac{2}{n_1^2n_2^2}-\\lambda _1\\lambda _2(\\lambda _1\\lambda _2-(\\mu _1+\\mu _2)+g-1).$ The result follows.", "Theorem 4.3 Suppose that $C$ is a smooth curve of genus $g\\ge 2$ and that $\\mu _i$ , $\\lambda _i$ ($i=1,2$ ) are positive rational numbers, $\\mu _1<2$ and $\\mu _2\\le 2g$ .", "(i) If $B(n_i,d_i,k_i)$ are non-empty BN loci with $n_i\\ge 2$ supported by $(\\mu _i,\\lambda _i)$ and $k=k_1k_2$ , then $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ .", "(ii) If $\\widetilde{B}(n_i,d_i,k_i)$ are non-empty BN loci with $n_i\\ge 2$ supported by $(\\mu _i,\\lambda _i)$ and $k=k_1k_2$ , then $\\widetilde{B}^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ and $\\widetilde{B}(n_1n_2,n_2d_1+n_1d_2,k)\\ne \\emptyset .$ (i) Let $E_2\\in B(n_2,d_2,k_2)$ with $d_2\\le 2n_2g$ .", "We prove that there exists $E_1\\in B(n_1,d_1,k_1)$ such that $h^0(E_1\\otimes E_2)\\ge k_1k_2$ .", "The proof is the same as in [6] with $L$ replaced by $E_2$ .", "We need to consider cases I-III of Theorem REF .", "I.", "$\\mathcal {O}^{k_1}\\otimes E_2$ is a subsheaf of $E_1\\otimes E_2$ .", "It follows at once that $k_1k_2\\le h^0(E_1\\otimes E_2$ ).", "II.", "in (REF ), $E_1\\cong D_E$ and $\\mu (E_2)\\le 2g<\\mu (E)$ .", "The result follows from Proposition REF .", "III.", "Tensoring (REF ) and (REF ) by $E_2$ and noting that 
$D_{E^{\\prime }}^*\\otimes E_2$ is semistable of negative slope, we obtain $h^0(E^{\\prime }\\otimes E_2)\\ge h^0(E^{\\prime }) h^0(E_2)$ and hence $h^0(E_1\\otimes E_2)\\ge ( h^0(E^{\\prime })-\\ell ^{\\prime })h^0(E_2)=(n_1+\\ell )h^0(E_2)\\ge k_1k_2.$ (ii) This follows in the same way as (i), together with the fact that the tensor product of semistable bundles is semistable.", "Remark 4.4 If $C$ is non-hyperelliptic, the theorem holds under the hypotheses $\\mu _1\\le 2$ , $\\mu _2<2g$ (see Case IV in Remark REF )." ], [ "Examples", "Using Lemmas REF and REF , it is easy to find many cases as in Theorem REF where (REF ) is also satisfied, thus yielding examples of non-empty BN loci $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ with negative BN number.", "These cannot be obtained from Theorem REF .", "Example 4.5 Suppose that $\\mu _i\\le \\frac{1}{2}$ for $i=1,2$ .", "From [4] (see Lemma REF and Figure 1), we can take $\\lambda _i$ to be any positive rational numbers with $\\lambda _i\\le 1-\\frac{1}{g}(1-\\mu _i)$ .", "Then, Lemma REF and Theorem REF apply, giving $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ and $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ for sufficiently large $n_1$ , $n_2$ , even when $g=2$ .", "More generally, we can take $\\mu _1\\le 1$ and $\\mu _2\\le g-2$ for $g\\ge 3$ , or $\\mu _1<2$ and $\\mu _2\\le g-3$ for $g\\ge 4$ , with $\\lambda _i$ any positive rational numbers for which there exist non-empty BN loci $B(n_i,d_i,k_i)$ supported by $(\\mu _i,\\lambda _i)$ .", "It is even possible to find examples with $n_1=n_2=2$ .", "Example 4.6 Suppose that $n_1=n_2=2$ .", "Then, under the hypotheses of Theorem REF , $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ if and only if $\\left(\\frac{1}{2}-\\lambda _1\\lambda _2\\right)(g-1)+\\frac{1}{8}-\\lambda _1\\lambda _2(\\lambda _1\\lambda _2-(\\mu _1+\\mu _2))<0$ (see (REF )).", "We can take $B(n_i,d_i,k_i)=B(2,3,2)$ for $i=1,2$ ; this is non-empty by either Lemma REF or Lemma REF .", "(REF ) now becomes $-\\frac{1}{2}(g-1)+\\frac{1}{8}+2<0,$ which is true for $g\\ge 6$ .", "Here, $B(2,3,2)$ is the only candidate for $B(2,d_1,k_1)$ for applying Theorem REF , but, for $i=2$ , we could take any non-empty $B(2,d_2,k_2)$ instead of $B(2,3,2)$ ; once we have fixed $d_2$ with $d_2\\le 4g$ and $k_2\\ge 2$ , (REF ) will hold for sufficiently large $g$ ." 
], [ "A new region in the BN map", "Substituting $k=k_1k_2$ in (REF ) and dividing by $n_1^2n_2^2$ , we have $\\beta :=\\frac{\\beta (n_1n_2,n_2d_1+n_1d_2,k)}{n_1^2n_2^2}=\\\\=g-1+\\frac{1}{n_1^2n_2^2}-\\lambda _1\\lambda _2(\\lambda _1\\lambda _2-(\\mu _1+\\mu _2)+g-1).$ It is an interesting question as to whether $\\beta $ can ever be negative under the hypotheses of Theorem REF and the assumption that $\\beta (n_2,d_2,k_2)\\ge 1$ .", "One can see easily that $\\beta >0$ if $\\lambda _1\\le 1$ , but other cases are not so straightforward.", "Independently of this, one can ask whether the theorem can give rise to new examples of non-empty BN loci in $M(n_1n_2,n_2d_1+n_1d_2,k_1k_2)$ .", "Our next result shows that this is indeed the case.", "We define a new region of the BN map as follows.", "Let $R:=\\left\\lbrace (\\mu _1,\\mu _2,\\lambda _1,\\lambda _2)\\in {\\mathbb {Q}}^4\\,\\left|\\begin{array}{c}\\,0<\\mu _1<2,0<\\mu _2\\le 2g-2-\\mu _1,\\\\\\lambda _i\\le \\max \\lbrace f_g(\\mu _i),t_g(\\mu _i)\\rbrace \\end{array}\\right.\\right\\rbrace $ and let $h:R\\rightarrow {\\mathbb {Q}}^2$ be defined by $h(\\mu _1,\\mu _2,\\lambda _1,\\lambda _2)=(\\mu _1+\\mu _2,\\lambda _1\\lambda _2).$ Definition 4.7 The region BPN in the BN map is defined to be the union of $\\operatorname{Im}h$ and its Serre dual.", "Theorem 4.8 Let $C$ be a smooth curve of genus $g\\ge 2$ and $n_i\\ge 2$ .", "(i) If $(\\mu ,\\lambda )=h(\\mu _1,\\mu _2,\\lambda _1,\\lambda _2)\\in \\operatorname{BPN}$ , then $(\\mu _1+\\mu _2,\\lambda _1\\lambda _2)$ supports a BN locus $\\widetilde{B}(n_1n_2,n_2d_1+n_1d_2,k)$ with $k=k_1k_2$ for infinitely many values of $n_1$ , $n_2$ ; (ii) If $g\\ge 5$ , $\\operatorname{BPN} $ is not contained in $\\operatorname{T\\cup BMNO}$ .", "(i) By Lemmas REF and REF , $(\\mu _i,\\lambda _i)$ supports the BN locus $\\widetilde{B}(n_i,d_i,k_i)$ ($i=1,2$ ) for infinitely many values of $n_i$ .", "The result follows from Theorem REF (ii).", "(ii) Suppose that $2<\\mu <4$ .", "We can write $\\mu =\\mu _1+\\mu _2$ with $1<\\mu _i<2$ for $i=1,2$ .", "Now, if $\\lambda _i=1+\\frac{1}{g}(\\mu _i-1)$ , then, by [17] (see also Lemma REF ), $(\\mu _i,\\lambda _i$ ) is in the region BMNO and $(\\mu _i,\\lambda _i)$ supports $B(n_i,d_i,k_i)$ for any allowable $n_i$ .", "So, by Theorem REF (ii), $\\widetilde{B}(n_1n_2,n_2d_1+n_1d_2,k)\\ne \\emptyset $ when $k=k_1k_2$ .", "On the other hand, $\\lambda _1\\lambda _2>1+\\frac{1}{g}(\\mu _1+\\mu _2-2)\\ge 1+\\frac{1}{g}(\\mu _1+\\mu _2-\\left\\lceil \\mu _1+\\mu _2\\right\\rceil +1).$ It follows from Lemma REF and REF that $(\\mu _1+\\mu _2,\\lambda _1\\lambda _2)$ is not in the region BMNO if $\\widehat{\\eta }(2)\\ge 5$ , in other words, if $g\\ge 7$ .", "It is clear also that $(\\mu _1+\\mu _2,\\lambda _1\\lambda _2)$ is not in the region T, whose upper boundary is given by $\\lambda =1$ in this range.", "In fact, our construction also gives new points for $g=5$ (compare [14]) and $g=6$ (compare [15]).", "(When $g=6$ , there is a missing grey area in [15]; the upper boundary for $\\operatorname{BMNO}$ should be given by $\\lambda =1+\\frac{1}{2}(\\mu -3)$ for $3<\\mu <4$ .", "This does not affect our assertion.)", "Remark 4.9 The construction in the proof of Theorem REF shows that, for $g\\ge 7$ , the top boundary of BPN in the range $2<\\mu <4$ is given by $\\lambda =1+\\frac{1}{g}(\\mu -2)+\\frac{1}{g^2}\\left(\\frac{\\mu -2}{2}\\right)^2$ , whereas the top boundary of BMNO is given by $\\lambda =1+\\frac{1}{g}(\\mu -2)$ for $2<\\mu <3$ and by $\\lambda 
=1+\\frac{1}{g}(\\mu -3)$ for $3<\\mu <4$ .", "At the point $(2,1)$ , the line $\\lambda =1+\\frac{1}{g}(\\mu -2)$ is the tangent to the parabola $\\lambda =1+\\frac{1}{g}(\\mu -2)+\\frac{1}{g^2}\\left(\\frac{\\mu -2}{2}\\right)^2$ .", "For $g=10$ , BPN is illustrated in Figure 2, which extends to the range $4<\\mu <5$ .", "Figure 2 also shows an expanded version of the BN map close to $\\mu =3$ .", "Figure: NO_CAPTION Remark 4.10 (i) BPN is a new region in the BN map (see Figure 2), whereas the papers [14], [15] give examples for isolated values of $\\mu $ , some of which extend to higher values of $g$ .", "When $g=4$ , on the other hand, there is an additional grey area of the BN map (see [13]), which contains the new part of the region BPN in the range $2<\\mu <3$ .", "(ii) At least for Petri curves, some of the non-empty BN loci $\\widetilde{B}(n_1n_2,n_2d_1+n_1d_2,k)\\subset \\widetilde{M}(n_1n_2,n_2d_1+n_1d_2)$ constructed in Theorem REF are new.", "It is possible to choose the ranks and degrees so that $\\gcd (n_1n_2,n_2d_1+n_1d_2)=1$ ; for example, if $g$ is odd, one can take $(n_1,d_1,k_1)=(g+1,2g+1,g+2),\\ (n_2,d_2,k_2)=(g+2,2g+2,g+3).$ Then $B(n_1n_2,n_2d_1+n_1d_2,k)\\ne \\emptyset $ .", "(iii) One can check that, in the framework of Theorem REF , the examples of twisted BN loci with negative expected dimension constructed in [12] give rise to points of T. It follows that our examples are different." ], [ "The construction", "The essential ingredient in proving Theorem REF is Mercat's construction [18], while Theorem REF depends on the constructions of [4], [17].", "Another way of constructing non-empty twisted BN loci is described in [8] and uses kernel bundles.", "The following proposition is contained in [8] and plays an essential rôle in the proof of [8].", "We include a proof for the convenience of the reader.", "Proposition 5.1 Suppose that $E_1$ has rank $n_1$ and degree $d_1$ with $h^0(E_1)\\ge k_1$ and that $E$ is as in (REF ).", "If $h^1(E_1\\otimes E)=0$ and $k\\le (d-n(g-1))(k_1-n_1)-nd_1,$ then $h^0(E_1\\otimes D_E^*)\\ge k$ .", "We have $h^0(E_1\\otimes E)=nd_1+n_1d-nn_1(g-1)$ and $h^0(E)\\ge d-n(g-1)$ .", "So, tensoring (REF ) by $E_1$ and taking global sections, $h^0(E_1\\otimes D_E^*)\\ge (d-n(g-1))k_1-nd_1-n_1d+nn_1(g-1).$ Simplifying and comparing with (REF ), this gives the result.", "We have the following immediate corollary (see Remark REF ).", "Corollary 5.2 Suppose that the hypotheses of Proposition REF are satisfied.", "(i) If $E_1$ is stable $(\\text{resp., semistable})$ , then $B(n_1,d_1,k)(D_E^*)\\ne \\emptyset $ $(\\text{resp.", "}, \\widetilde{B}(n_1,d_1,k)(D_E^*)\\ne \\emptyset )$ .", "(ii) If $D_E^*$ is stable $(\\text{resp., semistable})$ , then $B(h^0(E)-n,-d,k)(E_1)\\ne \\emptyset \\ (\\text{resp.", "},\\widetilde{B}(h^0(E)-n,-d,k)(E_1)\\ne \\emptyset )$ .", "(iii) If both $E_1$ and $D_E^*$ are stable $(\\text{resp., semistable})$ and $(n_2,d_2)=(h^0(E)-n,-d)$ , then $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset \\ (\\text{resp.", "}, \\widetilde{B}^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset )$ .", "Our object here is to use Corollary REF (iii) to obtain new non-empty examples of twisted BN loci.", "We assume that $n_1\\ge 2$ .", "In order for (REF ) to yield positive values of $k$ when $d>n(g-1)$ , which will always be the case, we clearly require $k_1>n_1$ .", "The key point we require in order to apply Corollary REF (iii) is that $D_E^*$ and $E_1$ are stable.", "We have already discussed the existence of stable $E_1$ in 
subsection REF and the stability of $D_E$ in subsection REF .", "We can now state and prove a general theorem, which includes Theorem REF .", "Theorem 5.3 Suppose that $C$ is a smooth curve of genus $g\\ge 2$ , $n_1\\ge 2$ , $k_1>n_1$ and either $d>2ng$ or $d=2ng$ and $C$ is non-hyperelliptic.", "Suppose further that $B(n_1,d_1,k_1)\\ne \\emptyset $ and that (REF ) holds.", "Let $(n_2,d_2)=(d-ng,-d)$ and suppose that $S$ is defined as in (REF ).", "Then $S\\ne \\emptyset $ and the morphism $B(n_1,d_1,k_1)\\times S\\longrightarrow M_1\\times M_2:(E_1,E)\\mapsto (E_1,D_E^*)$ is injective and has image contained in $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ ; in particular, $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset .$ If, in addition, $k=d(k_1-n_1)-e$ , where $e\\ge n(g-1)(k_1-n_1)+nd_1,$ and $d_1<k_1+n_1(g-1)-\\frac{g-1}{k_1-n_1},$ then, for any fixed values of $n_1$ , $d_1$ , $k_1$ , $n$ and $e$ satisfying (REF ), and $d\\gg 0$ , $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0.$ The first part follows from Theorem REF , Proposition REF and Corollary REF (iii).", "We need to note that $h^1(E_1\\otimes E)=0$ since $E_1\\otimes E$ is semistable of slope greater than $2g$ .", "For the second part, we have $\\nonumber \\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)&=&n_1^2(g-1)+1+(d-ng)^2(g-1)+1\\\\&&-(d(k_1-n_1)-e)(d(k_1-n_1)-e-\\chi ),$ where, by (REF ), $\\nonumber \\chi &=&(d-ng)d_1-n_1d-n_1(d-ng)(g-1)\\\\&=&(d-ng)(d_1-n_1g)-nn_1g.$ The right-hand side of (REF ) is a quadratic in $d$ , with leading coefficient $g-1-(k_1-n_1)(k_1-n_1-d_1+n_1g).$ So $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ for $d\\gg 0$ provided that $d_1<k_1+n_1(g-1)-\\frac{g-1}{k_1-n_1}.$ Remark 5.4 (i) Under the hypotheses of Theorem REF , including the inequalities (REF ) and REF , we deduce that the non-emptiness of $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ cannot be obtained from Theorem REF .", "In fact, there are several choices we can make in attempting to apply Theorem REF to this situation.", "We can take $E_2$ to be any of $E_1\\otimes L$ , $D_E^*\\otimes L$ , $K_C\\otimes E_1^*\\otimes L$ and $K_C\\otimes D_E\\otimes L$ , where $L$ is any line bundle, with the appropriate choice of $(n_1,d_1)$ in each case.", "In all cases, we obtain $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ge 1$ .", "In the absence of (REF ), we would have to work through all these cases separately.", "(ii) If we assume only that $\\widetilde{B}(n_1,d_1,k_1)\\ne \\emptyset $ and $d\\ge 2ng$ with $C$ any smooth curve of genus $g\\ge 2$ , we obtain $\\widetilde{B}(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ .", "Also, if (REF ) holds and $d\\gg 0$ , then the non-emptiness of $\\widetilde{B}(\\mathcal {U}_1,\\mathcal {U}_2)$ cannot be obtained from Theorem REF ." 
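For completeness (this step is implicit in the proof of Theorem 5.3 above), the passage from the sign of the leading coefficient to the bound on $d_1$ is a one-line rearrangement, using $k_1>n_1$ so that dividing by $k_1-n_1$ preserves the inequality:

```latex
\begin{aligned}
g-1-(k_1-n_1)\bigl(k_1-n_1-d_1+n_1 g\bigr)<0
 &\;\Longleftrightarrow\; k_1-n_1-d_1+n_1 g>\frac{g-1}{k_1-n_1}\\
 &\;\Longleftrightarrow\; d_1<k_1+n_1(g-1)-\frac{g-1}{k_1-n_1}.
\end{aligned}
```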
], [ "Examples", "To show that Theorem REF provides the examples we are seeking, we need to show that the conditions $B(n_1,d_1,k_1)\\ne \\emptyset $ and (REF ) are compatible.", "For this, we need more precise statements concerning the possible values of $d_1$ .", "Corollary 5.5 Suppose that $C$ is a smooth curve of genus $g\\ge 3$ , $n_1\\ge 2$ , $k_1>n_1$ and $(n_2,d_2)=(d-ng,-d)$ with either $d>2ng$ or $d=2ng$ and $C$ non-hyperelliptic.", "Suppose further that $k_1+n_1(g-1)-n_1\\left\\lfloor \\frac{g-1}{\\lceil k_1/n_1\\rceil }\\right\\rfloor \\le d_1<k_1+n_1(g-1)-\\frac{g-1}{k_1-n_1}$ and that $d_1$ is not divisible by $n_1$ .", "Then, $B(n_1,d_1,k_1)\\ne \\emptyset $ .", "Moreover, for any fixed value of $e$ satisfying (REF ) and $d\\gg 0$ , $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ , but $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ .", "In particular, $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ is of negative expected dimension and the non-emptiness of $B^k(\\mathcal {U}_1,\\mathcal {U}_2)$ cannot be obtained from Theorem REF .", "The fact that $B(n_1,d_1,k_1)\\ne \\emptyset $ follows from Remark REF .", "Theorem REF then implies that $B^k(\\mathcal {U}_1,\\mathcal {U}_2)\\ne \\emptyset $ and that $\\beta ^k(\\mathcal {U}_1,\\mathcal {U}_2)<0$ when (REF ) holds and $d\\gg 0$ .", "Hence, the non-emptiness cannot be obtained from Theorem REF (see Remark REF (i)).", "To obtain examples from this corollary, we need, in particular, to prove the existence of $d_1$ satisfying (REF ).", "This is equivalent to showing that $n_1\\left\\lfloor \\frac{g-1}{\\lceil k_1/n_1\\rceil }\\right\\rfloor >\\frac{g-1}{k_1-n_1}$ In fact (REF ) cannot hold if $g=2$ (which is why we have assumed $g\\ge 3$ in Corollary REF ) or, more generally, if $k_1\\ge n_1(g-1)+1$ , since, in that case, the left-hand side of (REF ) is 0.", "We therefore assume that $n_1<k_1\\le n_1(g-1)$ Example 5.6 Suppose that $g=3$ and (REF ) holds.", "Then (REF ) holds if and only $n_1(k_1-n_1)>2$ , in other words, either $n_1\\ge 3$ or $n_1=2$ and $k_1=4$ .", "In the latter case, the only possible value for $d_1$ is $d_1=6$ .", "This is not permitted by the hypotheses of Corollary REF and, in fact $B(2,6,4)=\\emptyset $ .", "For $n_1\\ge 3$ , there do exist solutions of (REF ) for which $d_1$ is not divisible by $n_1$ .", "In fact, if $k_1=n_1+2$ , (REF ) gives $2n_1+2\\le d_1<3n_1+1$ , giving $n_1-2$ values of $d_1$ not divisible by $n_1$ .", "If $n_1+3\\le k_1\\le 2n_1$ , there are $n_1-1$ permitted values of $d_1$ in the range $k_1+n_1\\le d_1<k_1+2n_1$ .", "A similar analysis can be carried out for $g\\ge 4$ ; in this case, in particular, $B(2,4g-5,2g-2)\\ne \\emptyset $ and (REF ) is satisfied, so examples exist for $n_1=2$ , as well as for larger $n_1$ .", "Remark 5.7 When $k_1=n_1+1$ , there are additional cases in which the non-emptiness of $B(n_1,d_1,k_1)$ is known.", "This is related to the extended dual span construction, which involves the stability of bundles $D_{L,V}$ , where $L$ is a line bundle and $V$ is a subspace of $H^0(L)$ which generates $L$ .", "Applications of this will be considered in a paper currently in preparation [7]." ] ]
2212.05573
[ [ "Multimodal and Explainable Internet Meme Classification" ], [ "Abstract Warning: this paper contains content that may be offensive or upsetting.", "In the current context where online platforms have been effectively weaponized in a variety of geo-political events and social issues, Internet memes make fair content moderation at scale even more difficult.", "Existing work on meme classification and tracking has focused on black-box methods that do not explicitly consider the semantics of the memes or the context of their creation.", "In this paper, we pursue a modular and explainable architecture for Internet meme understanding.", "We design and implement multimodal classification methods that perform example- and prototype-based reasoning over training cases, while leveraging both textual and visual SOTA models to represent the individual cases.", "We study the relevance of our modular and explainable models in detecting harmful memes on two existing tasks: Hate Speech Detection and Misogyny Classification.", "We compare the performance between example- and prototype-based methods, and between text, vision, and multimodal models, across different categories of harmfulness (e.g., stereotype and objectification).", "We devise a user-friendly interface that facilitates the comparative analysis of examples retrieved by all of our models for any given meme, informing the community about the strengths and limitations of these explainable methods." ], [ "Introduction", "The moderation of content on social media is becoming one of the main societal challenges as online platforms have been effectively weaponized in a variety of geo-political events and social issues [37], [36], [10], [38].", "While lowering barriers to information sharing can guarantee freedom of expression, research showed that it also facilitates the diffusion of harmful narratives, including violent content and misinformation [5], [6], [50].", "The detection of harmful content is challenging, given that content can be easily created in different modalities, ranging from text to multimedia content, and spread very quickly, sometimes amplified by coordinated accounts involved in influence operations (e.g., bots and trolls) [30], [53], [29], [45], and often across platforms with different degrees and strategies for moderation [48], [52].", "Meanwhile, determining toxicity, or inappropriateness broadly is non-obvious even for humans, as social media interactions are integrated into both the virtual and the real-world context.", "Content moderation policies, or the lack thereof, can have serious implications on individuals, groups, and society as a whole.", "On the one hand, content moderators may react late, inconsistently, or unfairly, thus angering users [19], as well as contributing to reinforcing and exacerbating conspiratorial narratives [10], [28].", "On the other hand, minimal content moderation may permit coordinated influence operations [16] or enable the spontaneous formation of toxic and dangerous communities, e.g., the study by [31] demonstrates how “the Manosphere”, a conglomerate of men-centered online communities, may serve as a gateway to far-right movements.", "A recent study [14] revealed worrying patterns of online abuse, estimating 1.1 million toxic tweets sent to women over one year.", "Their study also reveals that black women were 84% more likely than white women to experience abuse on the platform.", "These studies collectively show that sexism and misogyny are still prevalent all over the globe [21], 
despite initiatives such as the UN Sustainable Development Goals [7] that emphasize the importance of gender equality, peace, and justice.", "The recent explosion of multimedia content, in the form of Internet memes (IMs), makes content moderation even more difficult, especially when the context is not taken into account.", "An Internet meme can be roughly defined as “a piece of culture, typically a joke, which gains influence through online transmission” [13].", "An Internet meme is based on a medium, typically an image representing a well-understood reference to a prototypical situation within a certain community.", "Given that IMs are potential vectors for misinformation, political propaganda, and hate speech, enabling their scalable analysis is essential.", "Nonetheless, the automated analysis of IMs is challenging because of their nature: IMs are multimodal, i.e., they combine visual and language information creatively.", "Notably, IMs are not just funny; they are relatable and, thus, they are community-dependent.", "Therefore, their correct interpretation passes from the identification of the right virtual context.", "Moreover, IMs are succinct, i.e., they spread complex messages with a minimal information unit that connects the virtual circumstances to the real ones.", "Finally, IMs are fluid, i.e., they are subject to variations and alterations.", "In one study by Meta [1], 121,605 different variants of one particular meme were posted across 1.14 million status updates.", "The inaccurate classification of memes can lead to inadequate moderation interventions (removal, flagging, demotion, etc.)", "that, combined with the lack of tracing mechanisms across platforms, has the potential to further decrease public trust in social media platforms, and related moderation policies.", "Existing work on meme tracking and classification has focused on their temporal spread over time (i.e., virality) [32], [49], [25] and high-level categorization tasks like hate speech detection that focus on perceptual features [23], [17].", "Little work has focused on the aspects of semantics and pragmatics of a meme, which require precise feature extraction from images and from the text.", "Moreover, memes assume rich background knowledge about the spatio-temporal and cultural context in which they came into existence.", "Combining text, vision, and extra knowledge is an AI-complete problem.", "In this paper, we explore explainable multimodal methods for IMs classification.", "We rely on the general idea of case-based reasoning, where a method prediction can be traced back to similar memes that the method has observed at training time.", "Considering the complex nature of Internet memes, we opt for case-based reasoning because it can provide transparent insights into the model reasoning, while still leveraging the representation learning ability of state-of-the-art (SOTA) models.", "Our contributions are: We build on prior work to employ explainable reasoning methods for meme understanding.", "These methods perform example- and prototype-based reasoning over training cases, while leveraging both textual and visual SOTA models to extract features for the individual cases.", "We study the relevance of our modular and explainable models in detecting harmful memes on two tasks: Hate Speech Detection and Misogyny Classification.", "We compare the performance between example- and prototype-based methods and between text, vision, and multimodal models, across different categories of harmfulness (e.g., stereotype 
and objectification).", "We devise a user-friendly interface that facilitates the comparative analysis of examples retrieved by all of our models for any given meme.", "We leverage the user interface to understand the ability of different explainable models to retrieve useful instances for case-based reasoning and inform future work about these methods' strengths and limitations.", "We make our code available to facilitate future research on explainable IM classification for social good.https://github.com/usc-isi-i2/meme-understanding Figure: Classification and feature extraction model within the Example-based explanation model." ], [ "Method", "A meme classification model that detects offensive or inappropriate memes can be easily trained.", "However, the black-box nature of ML models makes it difficult to interpret why a meme is flagged [2], especially when flagged wrongly.", "We adopt example-based and prototype-based approaches to make explainable predictions for internet meme classification tasks.", "Both approaches utilize a frozen pre-trained model to extract features from a meme in a transfer learning setup with a separate downstream classification model, which leverages the features to make a final decision.", "The modularity of the approach enables an easy comparison over the combination of the pre-trained model and the explanation method used.", "We further develop a web-based visualization tool to study these explanation methodologies.", "Figure: Example-based explanation based on similarity-based meme search.", "The Train meme-features database contains pre-computed features using the Feature Extractor module." ], [ "Example-based Meme Classification", "We adopt an example-based method [41] to make predictions and explain them by displaying similar memes to end-users.", "Example-based explanation works by showing training examples that have a similar representation to the test example from the model's point of view to act as a proxy to understand the model's behavior.", "We use example-based classification because it helps users to understand how the classification model represents a meme compared to the training dataset supporting the model prediction.", "Although Internet memes involve text, image, and often need commonsense or cultural-specific knowledge to be well understood, and that might limit the efficacy of example-based explanation, still even as a heuristic, similar examples can help the end-users understand the reasoning that is done by the model [42].", "This approach further helps analyze misclassifications and detect latent biases in the dataset [47].", "Figure REF shows the meme classification model, which applies a classification head (L1, L2, L3 and Predictions layer) over the frozen pre-trained model for prediction (see the last Subsection about Pretrained models).", "The last hidden state (output of L3) of the trained classifier is used as the extracted features for calculating the similarity between memes using cosine similarity.", "Then, for an unlabelled meme, we predict the labels using this classification model.", "The features can be fed into a query engine to select similar images (Figure REF ) from a database that stores pre-computed features corresponding to the training memes.", "To display the retrieved similar memes in a user-friendly way, we develop a visualization tool to display the model-wise predictions and similar memes from the training dataset, thus supporting the predictions with example-based explanations.", "Figure: Our architecture 
for prototype-based explainable classification called Explainable Deep Neural Networks (xDNN).", "Figure reused from ." ], [ "Prototype-based Meme Classification", "Unlike an example-based explanation, a prototype-based explanation is not a post-hoc approach.", "Instead, prototype-based classification relies on learning label-wise prototypes from the training dataset followed by a rule-based decision algorithm for the classification, which makes these models inherently interpretable.", "Prototype-based explanation is based on prototype theory [43], which is a theory of categorization in psychology and cognitive linguistics, in which there is a graded degree of belonging to a conceptual category, and some members are more central than others.", "In prototype theory, any given concept in any given language has a real-world example that best represents this concept, i.e., its prototype.", "Like example-based explanation, prototype theory is also an instance of case-based reasoning, and there has been some controversy over the superiority of one over the other.", "There are both claims about the superiority of prototypical examples over normal examples [20], as well as their counterparts [33] who state that a context theory of classification, which derives concepts purely from exemplars works better than a class of theories that included prototype theory.", "We reuse the implementation of Explainable Deep Neural Networks (xDNN) [3].", "It uses the training data features extracted using the pre-trained model to create class-wise prototypes (local peaks for class distribution).", "As shown in Figure REF , the new unlabelled sample can be evaluated against these prototypes and then classified using rule-based local and global decision-making stages." ], [ "Pretrained Models for Feature Extraction", "We chose the following pre-trained models for feature extraction to analyze the information captured by models trained over different modalities and pretraining strategies.", "Textual Models: We use the BERT$_{base}$ model  [15], trained on BooksCorpus (800M words) and English Wikipedia (2,500M words) using two unsupervised tasks of Masked LM and Next Sentence Prediction.", "We expect that BERT would help analyze explainability for general-purpose formal language.", "Also, we used the BERTweet model  [35] having the same architecture as BERT$_{base}$ and trained using the RoBERTa  [26] pretraining procedure over 80 GB corpus of 850M English tweets.", "BERTweet is supposed to be more contextually related to a meme text as tweets have short text lengths and generally contain informal grammar with irregular vocabulary, similar to IMs.", "Vision Models: To capture visual information, we used the CLIP (Contrastive Language-Image Pre-training) model  [39].", "It is trained with Natural Language Supervision over 400 million (image, text) pairs collected from the Internet with the contrastive objective of creating similar features for an image and text pair.", "Because of the variety of training data and unrestricted text supervision, CLIP reaches SOTA-comparable zero-shot performance over various tasks like fine-grained object classification and geo-localization, action recognition in videos, and OCR.", "CLIP is robust toward distribution shift between various datasets and shows better domain generalization over various datasets.", "Mixed Models: To capture both graphical and textual information simultaneously, we concatenate features from both BERTweet and CLIP together and use them for downstream 
predictions." ], [ "Experimental Setup", "This section discusses the setup for evaluating our approaches over the explainable meme classification tasks." ], [ "Tasks and Datasets", "We experimented with meme classification tasks over two existing datasets: MAMI and Hateful Memes.", "SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification (MAMI) dataset  [17] consists of two sub-tasks of misogyny detection and its type classification.", "Sub-task A: Misogyny Detection Task focuses on detecting whether a meme is misogynous.", "The inter-annotator Fleiss-k Agreement for sub-task A is $0.5767$ .", "Sub-task B: Misogyny Type Classification Task is a multi-class task that categorizes a meme into one or more misogyny types, namely, shaming, stereotype, objectification, and violence.", "A more formal description of these categories can be found in  [17].", "The inter-annotator Fleiss-k Agreement for sub-task B is $0.3373$ .", "Data statistics for both subtasks of the MAMI dataset are presented in Table REF .", "The inter-annotator Fleiss-k Agreement clearly shows that sub-task B is comparatively more difficult than sub-task A. Hateful Memes dataset  [22] consists of a single task of meme hate detection.", "The dataset consists of 10K memes equally divided into hateful and not-hateful classes; the dev and test set consist of $5\\%$ and $10\\%$ of the dataset, respectively.", "The human accuracy for the classification was $84.70\\%$ , ranging from $83\\%$ to $87\\%$ ." ], [ "Evaluation", "We keep our evaluation of classification performance consistent with the original paper about the MAMI dataset.", "Sub-task A is evaluated using macro-average F1 measure for each class label (misogynous and not misogynous).", "Likewise, Sub-task B is evaluated using weighted-average F1 measure, weighted by the true label count for each label.", "For the Hateful Meme dataset, we compare the models based on the macro-average F1-score between the hateful and not-hateful classes.", "Table REF shows performance statistics for all the participants in the MAMI task within the SemEval-2022 competition [17].", "Table: Classification Head parameters for the Example-based method.In addition, we manually evaluate the example-based explanation approach using the visualization tool by analyzing the prediction and similar memes from the training dataset.", "We evaluate the prototype-based explanation method (xDNN) by its classification performance and manually investigating the prototypes identified from the training dataset.", "Table: MAMI dataset characteristics.Table: Basic statistics of the results for the participating systems in Sub-task A and Sub-task B, expressed in terms of macro-averaged and weighted-average F1-score respectively.Table: Classification results for MAMI dataset and Hateful memes dataset." 
], [ "Model Training Details", "The classification model (Figure REF ) used in the example-based explanation setup applies a trainable neural head over frozen pre-trained models, which is trained with the Binary Cross Entropy Loss using the Adam optimizer with a learning rate of $10^{-4}$ .", "Table  REF describes each layer of the classification head, and the hidden state of L3 is used for feature extraction for similar example searches over the training dataset.", "xDNN  [3] is a generative model, i.e., it learns prototypes and respective distributions automatically from the training data with no user/problem specific parameters.", "We reuse the publicly available xDNN implementationhttps://github.com/Plamen-Eduardo/xDNN---Python and experiment with different pre-trained models described in the subsection on Pretrained Models.", "Figure: Explanatory interface for our Example-based classification method." ], [ "Analysis", "In this section, we analyze and compare different methods and feature extraction models for meme classification.", "As our methods focus on both accuracy and explainability, we present an analysis on both of them in turn." ], [ "Results", "Table  REF shows the performance of individual models on the MAMI and Hateful memes datasets.", "The table compares the explainability strategies' performance over different pre-trained models' choices.", "MAMI v/s Hateful Meme: Between the subtask A (misogyny detection) of the MAMI dataset and Hate detection over the Hateful Meme dataset, all models have better performance for the misogyny detection task.", "This can be because of the fact that the presence of misogyny directly relates to the mention of women (or related terminology); in contrast, hate is a more open-ended problem.", "For both tasks and methods, we also observe that the BERTweet model, which is trained on Twitter data, performs better than the BERT-based models for the MAMI dataset, though the difference between the two models is within one point difference.", "This shows that exposure to social media (Twitter) data has a positive, yet limited, impact on models for meme content classification.", "However, for the Hateful meme task, BERT performs better than the BERTweet model suggesting the shift in distribution between the two datasets.", "Prototype-based (xDNN) v/s Example-based Explanation (Deep learning): For both datasets and different pre-training models, the Example-based method, which uses a neural classification head, performs better than the Prototype-based (xDNN) on the same pre-trained model.", "This indicates that the prototype-based models rely entirely on the pre-trained features and might lose performance on learning complex patterns, which the deep learning model can learn.", "However, in terms of training time, xDNN is much faster than training the neural classifier head, as it needs just a single pass over the training dataset.", "Modality performance analysis (Text v/s Image v/s Mixed): For each meme dataset and method combination, the CLIP-based image model performs comparatively better than BERT-based text models.", "The combined model using CLIP and BERTweet features outperforms all models, including those using CLIP alone.", "However, the improvement of the joint model over CLIP is relatively low (0.5-1.5 absolute points) compared to the improvement over BERT models (10-10 absolute points) for both explanation strategies.", "This means that either visual information is more important than text or that the CLIP model can also capture the 
textual information in the meme or a combination of both reasons.", "For the task of misogyny classification, the combination of CLIP and BERTweet also performs the best, with the CLIP-only model performing very closely as a second best." ], [ "Explainability Analysis", "Example-based Classification Figure REF shows the visualization tool for the Example-based classification prediction over one test image.", "The tool displays the model-wise predictions for BERTweet, CLIP, and their combination, together with similar memes from the training dataset for explainability.", "The test image is misogynous, portraying shaming and objectification of women.", "The predictions from each model are correct with high confidence about the misogyny detection and shaming type classification, which is easily explainable from the similar examples from the training dataset.", "Looking at the most similar images per model, we observe that the combined model retrieves images that also depict misogyny and either shaming, objectification, or both.", "This shows that the examples retrieved by this model are most reliable, which also correlates with its best performance.", "The examples retrieved by CLIP and BERTweet are partially useful, with CLIP retrieving more relevant images than BERTweet in most cases.", "Prototype-based Classification The xDNN model is inherently explainable as it predicts the label for a meme based on the closest prototype, as shown in REF .", "To our surprise, for both the MAMI and the Hateful Meme datasets, xDNN creates a number of prototypes equal to the number of training examples for each class.", "This may be because memes, even when they belong to the same category, can have very different textual/visual information content and representation.", "Nevertheless, we are further investigating this behavior in depth to find the exact reason." 
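To make the example-based pipeline concrete, below is a minimal sketch of the retrieval step described in the Method section: the feature of a test meme (the trained classifier's last hidden state) is compared against pre-computed training features with cosine similarity, and the top-k most similar training memes are returned. The feature dimensionality, helper names, and random features are illustrative assumptions, not taken from the released code.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, bank: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query feature (D,) and a feature bank (N, D)."""
    query = query / np.linalg.norm(query)
    bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return bank @ query

def retrieve_similar_memes(query_feat: np.ndarray,
                           train_feats: np.ndarray,
                           train_ids: list,
                           k: int = 5) -> list:
    """Return ids and scores of the k training memes most similar to the query meme."""
    sims = cosine_similarity(query_feat, train_feats)
    top = np.argsort(-sims)[:k]
    return [(train_ids[i], float(sims[i])) for i in top]

# Illustrative usage with 128-d features (the true dimensionality comes from layer L3).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 128))   # pre-computed features of training memes
query_feat = rng.normal(size=128)            # feature of the test meme from the same head
print(retrieve_similar_memes(query_feat, train_feats, [f"meme_{i}" for i in range(1000)]))
```

The returned training memes, together with their labels, are what the visualization tool displays next to the model's prediction.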
], [ "Discussion", "Our experiments revealed that methods for meme classification can balance the goals of explainability and good accuracy.", "While the example-based method is simpler, it achieved higher performance and its explanatory power was more intuitive, as demonstrated through our tool-supported analysis.", "Among the feature extractors, we observed that vision models were more effective than language models, and the combination of the two achieved the best performance.", "We next elevate these analyses to high-level lessons and discussion points that may benefit future work on explainable meme classification for tasks like hate speech detection and misogyny classification.", "Lack of virtual and real-world context During our analysis, we found that memes often rely on real-world, day-to-day context to be understood.", "As the pre-trained models lack this context, they may wrongly classify a meme that is very contextual to real-world common sense, especially in social media and misogyny.", "Our models have no access to this additional context.", "For instance, the usage of the meme can be occurring in an exchange between parties on social media / messaging platforms, which is not provided as a context in the task.", "A hollistic method for meme classification and explaination needs to account for the cultural and folkloric nature of memes [4]: memes often start from a seed of a concept or an idea (this real-world common sense or common cultural background / reference), which then gets derived to achieve an intent.", "The way this idea is adapted or degenerated depends on the intent and can happen by using figures of speech such as an oxymoron, in which case the image and the meme's text will be antonymous.", "Integration of background knowledge and figures of speech The example in Figure REF includes a test image where the canvas image depicts Nicki Minaj.", "The shaming, objectification, and misogyny in this case are probably linked to assumed background knowledge, such as Nicky's public image, a referenced song with a music video, and the lyrics that belong to the song in that music video.", "While the meme caption includes slang terms like “ass” and “std”, the cadence and the formulation of the meme sentence rely on the aforementioned song.", "As for the other images on the right-hand side where some context can be detected, we have the center image, which is from Victoria's Secret fashion show (which has been criticized for objectifying women).", "While some of these cues might be implicitly captured by CLIP, such examples demonstrate the need for background commonsense and factual knowledge, as well as Internet folklore, to build robust and explainable meme classification methods in the future.", "This example also motivates the need for the integration of an analysis of figures of speech like the center image being somewhat paradoxical.", "Subjectivity of Ground Truth Labels Another observation that came out of our analysis is that the problem statement of misogyny detection and type classification is inherently subjective, and the labeler's background and familiarity with social norms affect it.", "This argument is also clear from the inter-annotator agreement score for the MAMI dataset.", "Hence, for subjective problem statements like these, the ground truth labels are always questionable and are biased by the labelers.", "This observation relates to recent work that highlights issues with crowdsourced labeling for hate speech detection or sentiment analysis on social 
media; as examples, [34], [51], and [12] address the bias in labeling hate or abuse datasets.", "It also connects to discussions of evaluating toxicity [9].", "Concerns about the consistency of labeling aspects such as sexism are also prominent in computational social sciences like psychology [44]." ], [ "Related Work", "Most prior works on Internet memes in AI have focused on understanding their virality and spread on social media over time [32], [49], [25].", "Another popular direction has been detecting forms of hate speech in memes.", "The Hateful Memes Challenge and Dataset [23] is a competition and open source dataset with over 10 thousand examples, where the goal is to leverage vision and language understanding to identify memes that employ hate speech.", "Kirk et al. [24] compare memes in this challenge to memes in the `wild', observing that extraction of captions is an open challenge, and that open-world memes are more diverse than traditional memes.", "The Multimedia Automatic Misogyny Identification (MAMI) [17] challenge asks systems to identify misogynous memes, based on both text and images in the input memes.", "Methods for these challenges typically employ Transformer-based models that incorporate vision and language, like ViLBERT [27], UNITER [11], and CLIP [40].", "The work by Sheratt [46] aims to organize memes into a genealogy, with the goal of building a comprehensive knowledge base going forward.", "The combination of efforts to explain IMs with explicit knowledge and the generalization power of large visual, textual, and multimodal models holds promise to advance the SOTA of meme understanding and classification.", "However, to our knowledge, no prior work has focused on such multi-faceted and explainable methods for understanding IMs.", "To bridge this gap, we design a modular architecture that integrates visual and textual models with prototype- and example-based reasoning methods.", "Our framework thus balances the goals of obtaining SOTA performance and providing transparent access to the model reasoning.", "There has been a surge in using example-based explanations to enhance people's comprehension of black-box deep learning models' behavior and acquired knowledge.", "[8] propose and evaluate two kinds of example-based explanations in the visual domain.", "The extracted similar training data points help the end-users understand and recognize the capabilities of the model better.", "Although [18] reach the same conclusion as [8], confirming that examples boost end-users' comprehension of the model, they find no evidence of a similar effect on end-users' trust when presented with example-based explanations.", "Similarly, methods for prototype-based classification have been developed for visual tasks in the past, such as xDNN [3].", "However, to our knowledge, ours is the first work to employ example-based and prototype-based methods for downstream tasks of IM classification." 
], [ "Conclusions", "In this work, we implemented and analyzed example- and prototype-based approaches for explainable Misogyny Identification and Hate Speech Detection in IMs.", "Our experiments revealed that methods for IMs classification can balance the goals of explainability and good accuracy.", "While the example-based method is simpler, it achieved higher performance and its explanatory power was more intuitive, as demonstrated through our tool-supported analysis.", "Among the feature extractors, we observed that vision models were more effective than language models, and the combination of the two achieved the best performance.", "We connected these findings to thorny challenges about including background knowledge and real-world context in complex multimodal tasks, as well as concerns about the subjective nature of tasks that revolve around these tasks.", "We make our code available in hope that subsequent research can help us in pursuing these challenges together." ], [ "Acknowledgements", "The first two authors have been supported by armasuisse Science and Technology, Switzerland under contract No.", "8003532866.", "The experiments were run on a cluster provided by armasuisse Science and Technology." ] ]
2212.05612
[ [ "Using Multiple Instance Learning to Build Multimodal Representations" ], [ "Abstract Image-text multimodal representation learning aligns data across modalities and enables important medical applications, e.g., image classification, visual grounding, and cross-modal retrieval.", "In this work, we establish a connection between multimodal representation learning and multiple instance learning.", "Based on this connection, we propose a generic framework for constructing permutation-invariant score functions with many existing multimodal representation learning approaches as special cases.", "Furthermore, we use the framework to derive a novel contrastive learning approach and demonstrate that our method achieves state-of-the-art results on a number of downstream tasks." ], [ "Introduction", "In this paper, we propose a framework for designing multimodal representation learning methods that encompasses previous approaches as special cases and implies a new algorithm for multimodal learning that advances the state of the art.", "Specifically, we establish a connection between self-supervised representation learning based on contrastive learning and multiple instance learning [3] and show that they share similar assumptions and goals.", "We bring insights from multiple instance learning to offer a fresh perspective on self-supervised representation learning and ideas for performance improvements.", "With this connection in mind, we derive a novel algorithm for learning image-text representations that capture the structure shared between the two modalities and generalize well in a variety of downstream tasks.", "We aim to establish alignment between images and associated text to improve clinical workflow.", "For example, an image model that mimics the radiologists' interpretation could retroactively label images to select relevant patients for a clinical trial.", "Further, local alignment between image regions and text fragments (e.g., sentences) promises to benefit many downstream tasks.", "For example, cross-modal retrieval can provide description of an image region for automated documentation or enable comparisons with similar previously imaged patients for better interpretation based on local anatomy or pathology.", "Similarly, radiologists documenting findings can verify the accuracy of the report by noting if the referred location (i.e., visual grounding of the text) is consistent with their impression of the image.", "Self-supervised representation learning is a useful tool for reducing annotation burden for machine learning models in medical imaging.", "Despite the need and opportunities for automation, development of robust machine learning methods is held back by the lack of annotations that serve as the supervision signal for learning.", "Self-supervised representation learning on paired image-text data offers two advantages: (i) learning requires no further annotations and (ii) treating text as “labels” enables us to use natural language to reference visual concepts and vice versa [27].", "Thus, we focus on learning image-text multimodal representations but the proposed framework is broadly applicable to representation learning on other multimodal data.", "Learning joint representations involves training image and text encoders to perform self-supervised tasks on paired image-text data [5], [23] and evaluating on relevant downstream tasks.", "We focus on contrastive learning, i.e., classifying image-text pairs as matched (i.e., corresponding to the same imaging event), or 
mismatched.", "Contrastive learning has been applied to medical domain, demonstrating impressive transfer capabilities on a diverse set of tasks [2], [4], [14], [33].", "The biggest improvements come from addressing challenges unique to this domain, e.g., the use of cross attention to deal with the lack of effective pathology detectors [14] and adaptation of language models to address linguistic challenges in clinical notes [2].", "Training the models has involved increasingly complex contrastive loss functions that treat image and text symmetrically [2], [4], [14], [33] and on multiple scales [2], [14].", "In contrast to previous work that relies on many loss terms, our proposed contrastive loss is simple to implement and yields superior performance.", "Borrowing ideas from multiple instance learning, we treat local image region features as “data” and sentence features as (complex) “labels”.", "Multiple instance learning is a type of weakly supervised learning that is effective for problems that lack fine-grain annotations [3].", "For example, it can localize tumor cells in whole slide images with just image-level labels [22].", "Central to multiple instance learning is the construction of permutation-invariant score functions [15], and the choice of how the instance scores or features are aggregated to be evaluated against an image-level label.", "Effective instance aggregators take advantage of domain knowledge [8], e.g., the Noisy-OR aggregator for drug activity prediction [24], the Noisy-AND aggregator for cellular phenotype classification [20].", "In our work, we extend multiple instance classification to contrastive learning by constructing permutation-invariant image-text score functions.", "Building on insights from multiple instance classification with correlated instances [22], our proposed instance aggregator exploits correlation across instances to build representations that perform well in downstream tasks.", "Many prior multiple instance learning methods focused on one particular task of interest, e.g., detection [32], region classification [7], or retrieval [18].", "Some investigated the choices of instance aggregators for more than one downstream task [11], [25] but are limited in generality (i.e., not meant to be applicable to other applications) and scope (i.e., explored a few simple instance aggregators).", "In contrast, the proposed framework for constructing permutation-invariant score functions can be readily applied to other applications.", "We systematically investigate instance aggregators and their effect on representation learning, which yields a novel approach to learning joint representations.", "We evaluate the resulting image-text representations on a diverse set of downstream tasks and demonstrate state-of-the-art performance across all tasks in the context of a large set of chest X-ray images and associated radiological reports." ], [ "Method", "We first introduce notation and discuss the local and global approaches for constructing permutation-invariant image-document score functions at the core of the learning procedure.", "We then instantiate the framework for a specific choice of aggregators for contrastive learning." 
], [ "Problem Setup", "A local $D$ -dimensional representation of an image with $N$ proposed regions is a collection of $N$ features vectors $x_n\\in \\mathcal {X}\\subset \\mathbb {R}^D$ , $n\\in \\left\\lbrace 1,\\cdots ,N \\right\\rbrace $ .", "In our experiments, we use regular tiling to generate image regions and leave more sophisticated proposal methods (e.g., [28]) for future work.", "A local representation of a $M$ -sentence document (e.g., a radiology report) is a collection of sentence feature vectors $y_m \\in \\mathcal {Y}\\subset \\mathbb {R}^D$ , $m\\in \\left\\lbrace 1,\\cdots ,M \\right\\rbrace $ .", "Function $h: \\mathcal {X}\\times \\mathcal {Y}\\rightarrow \\mathbb {R}$ measures the similarity between representations, e.g., $h(x_n,y_m)$ is the similarity between a region and a sentence.", "In our experiments, we use cosine similarity $h(x, y) = \\langle x, y \\rangle / \\left( \\left\\Vert x\\right\\Vert \\left\\Vert y\\right\\Vert \\right)$ , though the formulation accepts any differentiable similarity function.", "For any vector space $\\mathcal {U}$ , aggregator function $\\pi : \\mathcal {P}(\\mathcal {U}) \\rightarrow \\mathcal {U}$ aggregates elements in the input set into a “representative”.", "$\\mathcal {P}(\\mathcal {U})$ is the set of all finite subsets of $\\mathcal {U}$ .", "For example, $\\pi (\\left\\lbrace x_n \\right\\rbrace ) = \\frac{1}{N}\\sum _n x_n$ aggregates $N$ region features $x_n\\in \\mathcal {X}$ by averaging them, while $\\pi (\\left\\lbrace h_n \\right\\rbrace ) = \\max _n h_n$ aggregates $N$ similarity scores into a single score by computing the maximum score.", "We restrict our attention to aggregators that are permutation-invariant, i.e., they treat their input as an unordered set rather than an ordered vector.", "Permutation-invariant image-document score function $S: \\mathcal {P}(\\mathcal {X}) \\times \\mathcal {P}(\\mathcal {Y}) \\rightarrow \\mathbb {R}$ measures the similarity between an image and a document based on region features $\\left\\lbrace x_n \\right\\rbrace $ and sentence features $\\left\\lbrace y_m \\right\\rbrace $ .", "Figure: Local (top) and global (bottom) image-document score functions." 
], [ "Local & Global Permutation-Invariant Score Functions", "Contrastive representation learning can be seen as maximizing the likelihood of correctly classifying image-text pairs as matched or mismatched.", "Since supervision is provided at the image-document level, we define a framework to build permutation-invariant image-document score functions.", "The local approach aggregates region-sentence scores into an image-sentence score.", "The image-sentence score $g_m$ for sentence $m$ in the document is obtained by applying a local aggregator function $\\pi _l$ to region-sentence scores, i.e., $g_m = \\pi _l ( \\left\\lbrace h(x_n, y_m) \\right\\rbrace _n ) \\triangleq \\pi _l ( \\left\\lbrace h(x_1, y_m), \\cdots , h(x_N, y_m) \\right\\rbrace ) $ .", "The global approach first aggregates local region features $\\left\\lbrace x_n \\right\\rbrace $ into a single image feature vector $\\pi _g(\\left\\lbrace x_n \\right\\rbrace )$ using a global aggregator function $\\pi _g$ .", "The image-sentence score $g_m$ is computed using the similarity function $h$ on the image feature vector $\\pi _g(\\left\\lbrace x_n \\right\\rbrace )$ and sentence feature vector $y_m$ , i.e., $g_m = h(\\pi _g(\\left\\lbrace x_n \\right\\rbrace ), y_m)$ .", "In both approaches, the image-document score $S$ is obtained by aggregating image-sentence scores with another aggregator function $\\pi _s$ , i.e., $S(\\left\\lbrace x_n \\right\\rbrace , \\left\\lbrace y_m \\right\\rbrace ) = \\pi _s(\\left\\lbrace g_m \\right\\rbrace )$ .", "Figure REF illustrates the framework for constructing $S$ .", "To summarize, the local and global image-document scores $S_l$ and $S_g$ are computed as follows: $S_l(\\left\\lbrace x_n \\right\\rbrace , \\left\\lbrace y_m \\right\\rbrace )&= \\pi _s(\\left\\lbrace \\pi _l(\\left\\lbrace h(x_n, y_m) \\right\\rbrace _n) \\right\\rbrace _m), \\\\S_g(\\left\\lbrace x_n \\right\\rbrace , \\left\\lbrace y_m \\right\\rbrace )&= \\pi _s(\\left\\lbrace h( \\pi _g(\\left\\lbrace x_n \\right\\rbrace ), y_m ) \\right\\rbrace _m).$ As the aggregator functions are permutation-invariant, the image-document score function $S$ is naturally permutation-invariant as well.", "We emphasize that $S$ treats image features and text features differently, and that the order of application of similarity evaluation $h(\\cdot )$ and aggregators $\\pi (\\cdot )$ is empirically relevant.", "This design decision is motivated by the fact that each sentence in a radiology report represent a concept and its location in the image, i.e., it is akin to a label for some region in the image.", "The converse is not necessarily true as some parts of the image are not described in the report." 
], [ "Representation Learning with LSE$+$ NL Aggregators", "In this section, we introduce our method LSE$+$ NL for learning multimodal representations that relies on a combination of local and global image-document score functions and an asymmetric text-to-image contrastive loss.", "Inspired by [22], we use a soft maximum function to identify the most relevant region for a sentence, i.e., the critical region, and attend more to regions that are similar to the critical region.", "Specifically, the local aggregator $\\pi _l$ is the log-sum-exp (LSE) function $\\pi _l(\\left\\lbrace h_n \\right\\rbrace )= \\frac{1}{\\gamma _l} \\log \\sum _{n=1}^N \\exp (\\gamma _l\\,h_n),$ where $\\gamma _l$ is a scale parameter that controls how well the LSE function approximates the max function.", "The global aggregator $\\pi _g$ linearly combine the region features using the distance to the critical region as weights, i.e., $\\pi _g(\\left\\lbrace x_n \\right\\rbrace )= \\sum _{n=1}^N \\dfrac{ \\exp ( \\gamma _g\\, \\langle Ax_n, Ax_k \\rangle ) }{ \\sum _{n^{\\prime }=1}^N \\exp (\\gamma _g\\, \\langle Ax_{n^{\\prime }}, Ax_k \\rangle ) } \\, x_n,$ where $k$ is the index of the critical region, i.e., $k=\\operatornamewithlimits{\\arg \\,\\max }_n h(x_n,y_m)$ , $A$ is a learned weight matrix, and $\\gamma _g$ is the scale parameter for the softmax function.", "We can interpret $\\pi _g$ as a form of attention where regions that are more similar to the critical region are given a higher attention weight.", "In effect, $\\pi _g$ exploits the correlation between each region and the critical region using attention.", "In addition, $\\pi _g$ can be seen as a form of non-local (NL) network [31].", "Both $\\pi _l$ and $\\pi _g$ are permutation-invariant functions.", "We choose $\\pi _s$ to be the average function.", "We use the local and global image-document scores in (REF ) and () computed with our choice of $\\pi _l$ and $\\pi _g$ for contrastive learning.", "Given a document, we form an image-document score vector $s \\triangleq (s^+,s^-_1,\\cdots ,s^-_K)$ where $s^+\\in \\mathbb {R}$ is the image-document score with its matched image and $s^-_k\\in \\mathbb {R}$ for $k=1,\\cdots ,K$ is the image-document score with $K$ mismatched images.", "We use $s_l$ and $s_g$ to denote $(K+1)$ -length score vectors defined above computed using the local and the global score functions respectively.", "The image and text encoders are trained to minimize $\\mathcal {L}(s_l)+\\mathcal {L}(s_g)$ over documents in the training set where $\\mathcal {L}$ is the text-to-image contrastive loss [26], [33] $\\mathcal {L}(s)\\triangleq -\\log \\frac{ \\exp (\\gamma \\,s^+) }{ \\exp (\\gamma \\,s^+) + \\sum _{k=1}^K \\exp (\\gamma \\,s^-_k) }$ with scale parameter $\\gamma $ .", "In the equation above, $s$ is either vector $s_l$ computed using (REF ) with $\\pi _l$ defined in (REF ) or vector $s_g$ computed using () with $\\pi _g$ defined in (REF ).", "The image-to-text contrastive loss where the negative scores are computed for an image with $K$ different mismatched documents is often used alongside $\\mathcal {L}$ in prior work [2], [4], [14], [33].", "We choose to treat images and text asymmetrically and show that the simple text-to-image contrastive loss is sufficient to induce representations that generalize well.", "In multiple instance learning [3], a set that contains many instances $\\left\\lbrace x_1,\\cdots ,x_N \\right\\rbrace $ is referred to as a bag.", "The training set consists of bags and their associated bag labels 
$y$ while the instance labels are not provided.", "For binary bag labels, a positive bag is guaranteed to include at least one positive instance, while a negative bag includes no positive instances.", "The bag-level labels are used to train a classifier to assign instance-level and bag-level labels in new, unseen bags.", "Existing image-text representation learning algorithms that are either predictive [6] or contrastive [27] can be seen as a form of multiple instance learning.", "Specifically, we can view an image as a bag of region features and the corresponding sentence that describes the image as the bag label.", "Instead of taking on binary values, the bag labels can represent arbitrary categories via natural language.", "Although the exact region that corresponds to the sentence is unknown, the matched image contains at least one region that corresponds to the text while a randomly sampled image most likely does not.", "Similar to multiple instance learning, self-supervised representation learning methods use these assumptions for learning.", "More generally, we consider the text label as a bag of sentences.", "For example, sentences describing findings within a chest X-ray image most likely can be permuted without changing the overall meaning.", "Therefore, representation learning can be interpreted as predicting the label bag $\\left\\lbrace y_m \\right\\rbrace $ given the input bag $\\left\\lbrace x_n \\right\\rbrace $ .", "This setup corresponds to multi-instance multi-label learning [34].", "Moreover, multiple instance learning and multimodal representation learning share comparable goals.", "Multiple instance learning aims to align instances and bags with labels such that the pre-trained model performs well in classification tasks.", "Multimodal representation learning aims to align images and their subregions with text such that the pre-trained model performs well on tasks that rely on such alignment, e.g., image classification relies on image-sentence alignment, visual grounding and cross-modal retrieval rely on region-sentence alignment.", "There are two main multiple instance learning approaches: instance-level and embedding-level approaches [1].", "The instance-level approach computes the bag score by aggregating the instance scores.", "The embedding-level approach computes the bag score based on a bag feature that is aggregated from the instance features.", "The local and global approaches in Section REF are extensions of the instance and embedding approaches to contrastive learning.", "This parallel enables us to analyze prior methods as instances of the framework defined in Section REF that is inspired by multiple instance learning (Table REF ).", "We make one generalization to the formulation in Section REF to accommodate cross attention [21]: the local aggregator function $\\pi _l$ can potentially rely on label features $y_m$ to multiplex its behavior, i.e., $\\pi _l: \\mathcal {P}(\\mathcal {X})\\times \\mathcal {Y}\\rightarrow \\mathcal {X}$ .", "In summary, a diverse set of aggregators $\\pi _l,\\pi _g,\\pi _s$ has been demonstrated on multimodal representation learning at varying scales, implying there may not be a single set of aggregators that works well for every problem.", "More realistically, the best aggregator functions are the ones that fit application-specific assumptions well." 
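As a concrete sketch of the LSE and NL aggregators and the text-to-image contrastive loss introduced in this section (an illustration under the stated definitions, not the authors' released code; the projection matrix A and the scale parameters are placeholders):

    import torch
    import torch.nn.functional as F

    def lse_aggregator(scores, gamma_l):
        # log-sum-exp over region-sentence scores: (N,) -> scalar; approximates max for large gamma_l
        return torch.logsumexp(gamma_l * scores, dim=0) / gamma_l

    def nl_aggregator(x, scores, A, gamma_g):
        # x: (N, D) region features; scores: (N,) region-sentence scores for one sentence
        k = torch.argmax(scores)                   # index of the critical region
        proj = x @ A.t()                           # projected features A x_n
        sim = proj @ proj[k]                       # (N,) similarity to the critical region
        weights = F.softmax(gamma_g * sim, dim=0)  # attention weights over regions
        return weights @ x                         # (D,) weighted combination of region features

    def text_to_image_loss(s, gamma):
        # s: (1 + K,) image-document scores; s[0] is the matched image, s[1:] the K mismatched images
        return -F.log_softmax(gamma * s, dim=0)[0]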
], [ "Experiments", "We illustrate the proposed approach by building a representation of frontal chest X-ray images and associated radiology reports and using it in downstream tasks.", "In all the experiments, data used for representation learning is disjoint from the test sets used to evaluate the downstream tasks.", "We normalize the images and resize them to 512x512 resolution.", "We apply random image augmentations, i.e., 480x480 random crops, brightness and contrast variations, and random affine transforms (only for image model fine-tuning during evaluation).", "We use PySBD [29] for sentence tokenization.", "We employ ResNet-50 [13] as the image region encoder and CXR-BERT [2] as the sentence encoder.", "Each encoder is followed by a linear projection to a 128 dimension embedding space.", "In particular, the projected ResNet-50 conv-5 activations act as the region features $\\left\\lbrace x_n \\right\\rbrace $ and the projected mean-pooled contextualized word embeddings acts as the sentence features $\\left\\lbrace y_m \\right\\rbrace $ ." ], [ "Representation learning", "We use a subset of 234,073 chest X-ray images and report from MIMIC-CXR [16] for representation learning.", "We randomly initialize the image encoder and use the CXR-BERT model [2] pretrained on a biomedical corpus (i.e., the stage II model) as the sentence encoder.", "We use the AdamW optimizer and decay the initial learning rate of 5e-5 using a cosine schedule with 2k warmup steps.", "we initialize $\\gamma =14$ and optimize this hyperparameter alongside the encoder parameters.", "We set other scale parameters as follows: $\\gamma _l=0.1, \\gamma _g=e$ .", "We use a batch size of 64.", "For every image in the batch, we sample 5 sentences, with replacement if needed, to make up the label bag.", "Here, $N=225$ and $M=5$ ." 
], [ "Downstream Tasks", "Image Classification To evaluate zero-shot (ZS) and fine-tuned (FT) classification performance, we use the same split of RSNA Pneumonia (RSNA) [30] as in [14], specifically, 18,678/4,003/4,003 for training/validation/testing.", "To evaluate in-distribution fine-tuned classification performance in the ablation study, we use 5 CheXpert labels (Atelectasis, Cardiomegaly, Edema, Pleural Effusion, Pneumothorax) on the MIMIC-CXR data set [16] that we denote MIMIC-CheXpert (CheX).", "There are roughly 1k images in the test set associated with each CheXpert label.", "To evaluate the data efficiency of representation learning approaches, we use different amounts of training data (1% and 100%).", "For zero-shot image classification, we first tokenize and encode the class-specific text prompts (e.g., “Findings suggesting pneumonia.” and “No evidence of pneumonia.”).", "For each image, we assign a binary label that corresponds to the prompt with the higher image-sentence score.", "We find it important to normalize the scores to $[0,1]$ for each class before applying the softmax.", "For fine-tuned image classification, we use the Adam optimizer [19] with a learning rate of 3e-3 to optimize the randomly initialized weights and a bias over the mean-pooled region features while keeping the encoder weights fixed.", "For RSNA Pneumonia, we report accuracy and AUC.", "For MIMIC-CheXpert, we report the average AUC over five binary classification tasks.", "Visual Grounding We evaluate visual grounding performance using the MS-CXR region-sentence annotations [2].", "This data set consists of 1,448 bounding boxes over 1,162 images, where each bounding box is associated with a sentence that describes its dominant radiological feature.", "We compute region-sentence scores to quantify how well the sentence is localized in the image.", "We report a measure of discrepancy between region-sentence scores inside and outside the bounding box, i.e., contrast-to-noise ratio (CNR) [2], and how well the thresholded region-sentence scores overlap with the bounding box on average, i.e., mean intersection over union (mIoU).", "In contrast to [2], we pick thresholds that span $[-1,1]$ in $0.05$ increments to compute the mIoU for a fair comparison.", "Cross-Modal Retrieval We evaluate cross-modal retrieval performance using the MS-CXR data set as well.", "We compute the bounding box features from the region features using RoIAlign [12].", "We compute box-sentence scores for ranking.", "Specifically, we retrieve items in one modality given a query from the other modality by sorting based on the box-sentence scores.", "The correctly retrieved item is the one that is paired with the query item.", "We report the fraction of times the correct item was found in the top K results (R@K) and the median rank of the correct item in the ranked list (MedR)." 
], [ "Results", "Comparison with State-of-the-art Methods We compare the proposed approach LSE$+$ NL with the state-of-the-art methods GLoRIA [14] and BioViL [2].", "GLoRIA is a representation learning method that learns based on image-sentence and region-word pairs.", "BioViL improves upon GLoRIA by using a better text encoder, relying on a symmetric contrastive loss and masked language modeling for representation learning.", "We show that our simple model provides consistently better performance than these state-of-the-art algorithms.", "We omit reporting GLoRIA's classification and visual grounding performance for GLoRIA as [2] showed that BioViL is consistently better than GLoRIA on these tasks.", "Table REF reports image classification accuracy based on the learned representations for different amounts of data used to fine-tune the representation for the downstream task (zero-shot, 1%, and 100%).", "Our method is competitive or better than the baseline, especially in the zero-shot setup, underscoring its promise for limited annotation scenarios.", "Table REF and Table REF report the methods' performance on visual grounding and cross-modal retrieval respectively.", "Our method is significantly better than the baseline.", "Figure REF illustrates examples of visual grounding.", "Unlike [2], we do not smooth the region-sentence scores produced by our model.", "Our method yield qualitatively better region-sentence scores than BioViL on a few challenging failure cases discussed in [2].", "In particular, our pre-trained model captures location specifications more effectively, e.g., recognizing “at both lung bases” in the first image and “right” in the third image.", "Both our method and BioViL are prone to false positives, i.e., regions outside the ground-truth bounding box with high region-sentence scores, which highlights the need for further improvements.", "Figure: Example visual grounding results for several challenging cases for BioVil (top row) and our method (bottom row).", "Text queries and the corresponding ground truth bounding boxes are shown for each image.", "Colormap overlay visualizes region-sentence scores (blue corresponds to low scores, red highlights regions with high scores).", "Our method provides maps that align better with the ground truth bounding boxes.Table: Ablation study results.", "For each variant of the method, performance statistics are reported for each downstream task consistently with Tables , , and .", "RSNA is RSNA Pneumonia.", "CheX is MIMIC-CheXpert.", "FT is fine-tuned classification using 100% of the labels.", "ZS is zero-shot classification.", "We report AUC for image classification.", "Local representations perform well for image classification, while visual grounding and cross-modal retrieval benefit from integration of local and global representations.Figure: Effects of aggregator choice on the performance.", "Performance of models trained with local aggregators (shades of blue), global aggregators (shades of orange) and combinations of local and global aggregators (shades of green) is shown for image classification (AUC), visual grounding (CNR) and cross-modality retrieval (MedR averaged for both directions).", "The metrics are normalized to unit interval for easier comparisons across tasks.", "The choice of aggregators effects image classification performance much less than that of visual grounding and cross-modality retrieval.", "There is high performance variations within each group.", "Combination approaches do well on all tasks.Ablation In 
the ablation study (Table REF ), we compare our method LSE$+$ NL with using either the local LSE or the global NL approach only, as well as replacing the NL with average as the region aggregator, i.e., LSE$+$ Average.", "To enable extensive experimentation, we use ResNet-18 as the image encoder.", "LSE$+$ NL provides a good trade-off between region-sentence and image-sentence alignment.", "LSE$+$ NL has comparable performance to LSE for image classification tasks while significantly outperforming all alternatives in visual grounding and cross-modal retrieval.", "Using a larger image encoder (ResNet-50) provides only a modest improvement in visual grounding.", "Aggregator Choices Figure REF compares a few instance aggregators' performance on downstream tasks.", "We compare the local approach (e.g., LSE, NOR [24], NAND [20]), the global approach (e.g., Max, Average, Att [15]), and a combination of local and global approaches (e.g., LSE$+$ Att, LSE$+$ NL).", "Aggregators within each approach exhibit high performance variations.", "The best local aggregator is superior to the best global aggregators we explored on all downstream tasks, while a combination of local and global approaches yields the best-performing method." ], [ "Conclusions", "In this paper, we propose a framework to construct permutation-invariant image-document score functions for multimodal contrastive learning.", "Taking inspiration from multiple instance learning, we introduce LSE$+$ NL for learning multimodal representations that rely on both local and global score functions and exploit correlation between image regions.", "Our method outperforms the state-of-the-art approaches on image classification, visual grounding, and cross-modal retrieval.", "In addition, we show that contrastive representation learning is a form of multiple instance learning, providing us with valuable insights from a related field for solving shared challenges to learn representations that generalize well." ] ]
2212.05561
[ [ "ResFed: Communication Efficient Federated Learning by Transmitting Deep\n Compressed Residuals" ], [ "Abstract Federated learning enables cooperative training among massively distributed clients by sharing their learned local model parameters.", "However, with increasing model size, deploying federated learning requires a large communication bandwidth, which limits its deployment in wireless networks.", "To address this bottleneck, we introduce a residual-based federated learning framework (ResFed), where residuals rather than model parameters are transmitted in communication networks for training.", "In particular, we integrate two pairs of shared predictors for the model prediction in both server-to-client and client-to-server communication.", "By employing a common prediction rule, both locally and globally updated models are always fully recoverable in clients and the server.", "We highlight that the residuals only indicate the quasi-update of a model in a single inter-round, and hence contain more dense information and have a lower entropy than the model, comparing to model weights and gradients.", "Based on this property, we further conduct lossy compression of the residuals by sparsification and quantization and encode them for efficient communication.", "The experimental evaluation shows that our ResFed needs remarkably less communication costs and achieves better accuracy by leveraging less sensitive residuals, compared to standard federated learning.", "For instance, to train a 4.08 MB CNN model on CIFAR-10 with 10 clients under non-independent and identically distributed (Non-IID) setting, our approach achieves a compression ratio over 700X in each communication round with minimum impact on the accuracy.", "To reach an accuracy of 70%, it saves around 99% of the total communication volume from 587.61 Mb to 6.79 Mb in up-streaming and to 4.61 Mb in down-streaming on average for all clients." 
], [ "Introduction", "Federated learning has become an emerged machine learning paradigm, which enables distributed training on broad data sources without disclosing their original data [18].", "Instead of transmitting raw data, only parameters (mostly model weights or gradients) in federated learning are iteratively shared between clients and a server via heterogeneous networks.", "Federated learning has been successfully applied to various applications [17], such as mobile keyboard prediction, speech recognition, image object detection, etc, However, with the increasing size of machine learning models, the existing mobile communication infrastructure cannot always meet the requirement in terms of bandwidth and latency in federated learning, which constraints the wide deployment of federated learning.", "For instance, to train a transformer model with billions of parameters usually 32-bit float parameters), the size of a message in a single federated learning round can be several 10 or 100 Gigabytes, e.g.", "a CTRL model ([10]) with 1.6 billions parameters or a T5 model ([21]) with up to 11 billions parameters.", "That can cause an enormous and extremely costly data traffic, even in 5G NR networks, where the throughput can be from 5 Gbps to 18 Gbps.", "Another application scenario is to improve machine learning models for road traffic object recognition and detection in V2X (Vehicle-to-Everything) communication networks, where the bandwidth for V2X is also occupied for other traffic services at the same time, e.g.", "collective perception service, and obviously the safety-related services should have higher priority.", "Therefore, communication efficiency is a pivotal component for deploying federated learning, especially in wireless networks.", "In an attempt to tackle the communication bottleneck, the parameter compression is considered as one of the most effective approaches, which allows for updating the models by transmitting much smaller size of messages in networks, and thereby reduces the required time per communication round in federated learning.", "The approaches proposed by [28], [22], [7] can effectively reduce the communication volume in each round by various quantization techniques, however they only consider the communication efficiency for uploading (client-to-server) but not for downloading (server-to-client).", "[16] compress the gradients instead of model parameters for distributed learning, which can not well fit federated learning, where clients can train multiple epochs in each round.", "[26] use knowledge distillation ([6]) to learn and transmit a smaller model, where the original model structure is affected.", "Furthermore, all of those works attempt to compress the model parameters or gradients based on the model in a specific round, without consideration of inter-round model update similarity, which contains additional redundancy sequentially.", "Inspired by residuals in video compression protocols from [13], we introduce a residual-based federated learning framework, termed as ResFed.", "It allows the server and clients to share and update models by sharing model parameter residuals rather than model parameters or gradients.", "Particularly, by observing training trajectory in each local client and the aggregation trajectory in the server, we believe model updates in both clients and the server can be predictable.", "Those predictive models – in analogy with predictive frames in video transmission – can foresee model updates in the federated learning.", "After 
each communication round, we use the deviations between the predicted and the actually updated model parameters, which we call the model residuals, for communication over the network.", "Note that the actually updated models are always recoverable from the residuals, as the predictors of the senders are shared with the receivers in ResFed.", "More details are provided in Sec.", "and .", "Unlike transmitting model weights, ResFed can wring out the potential redundancy by removing the predictive information from history updates and keeping only the residuals for communication.", "Compared to transmitting model gradients after each training epoch, ResFed allows the models to be trained locally for multiple epochs.", "Compared to transmitting the residual accumulation over multiple epochs, ResFed further minimizes the information by predicting the model updates from history.", "As shown in Fig. REF , the values of the residuals are overall smaller than those of weights and gradients during the entire training process.", "To further shrink the size of messages for communication, we then compress only the residuals using sparsification and quantization, and encode the messages for information sharing in the client-to-server and server-to-client directions.", "Our main contributions are summarized as follows: We introduce and formulate the model residuals for communication efficiency in federated learning and show that the residuals contain denser information than model weights and gradients.", "We propose a novel federated learning framework (ResFed) based on deep residual compression, which consists of the following steps: predictor sharing, model prediction, residual generation, residual compression, residual communication, model recovery, and model trajectory synchronization.", "We provide an experimental evaluation of our framework with various communication cost budgets in both up- and down-streaming, which gives insight into deploying it in resource-constrained communication environments.", "The open source implementation of ResFed will be publicly available." ], [ "System Setup", "We first introduce the related concepts and techniques that will be used in our framework.", "Given $N$ clients and a server in a federated learning system, we only focus on the information sharing between a single client $k$ and the server from communication round $t$ to $t+1$ , as shown in Fig. REF .", "The information sharing for other clients is the same." 
], [ "Model Update", "Client.", "Given a client $k$ with a local dataset $\\mathcal {D}_k$ , the initial local model in a new round $t$ is $w_k^{t-1}$ .", "Before the local training starts, the client initially receives the global model $w^t$ from the server and updates the local model to $\\hat{w}_{k}^t$ .", "After that, local model $\\hat{w}_{k}^t$ is trained on $\\mathcal {D}_k$ and transited to $w_{k}^t$ .", "We mark the first update as $w_{k}^{t-1} \\rightarrow \\hat{w}_{k}^t$ and the second one as $\\hat{w}_{k}^t \\rightarrow w_{k}^t$ .", "Note that the first updated model is equal or similar to the global updated model $w^t$ , i.e.", "$\\hat{w}_{k}^t \\simeq w^t$ .", "If lossy compression is used for communication and the loss due to compression can not be repaired, then $\\hat{w}_{k}^t \\sim w^t$ .", "Server.", "Similarly, the global model $w_{t-1}$ in the server is also updated twice after one round of communication $t$ .", "The first update happens when it receives models from the clients, i.e.", "$w^{t-1} \\rightarrow \\lbrace \\hat{w}_i^t|i=1,2,...,N\\rbrace $ .", "Then the aggregation leads to the second update, $\\lbrace \\hat{w}_i^t|i=1,2,...,N\\rbrace \\rightarrow w^{t}$ ." ], [ "Model Trajectory", "Client.", "Given a client $k$ at time point $t$ , we cache the updated models with a sliding time window $[t-T, t]$ in two different queues, that distinguish by two model updates.", "We refer the time sequence of local model updates $\\mathcal {L}_{k}^t=\\lbrace w_{k}^{t-T}, ...,w_{k}^t\\rbrace $ from $w_{k}^t \\rightarrow \\hat{w}_{k}^t$ as a local model trajectory, and $\\mathcal {\\hat{G}}_{k}^t=\\lbrace \\hat{w}_{k}^{t-T}, ...,\\hat{w}_{k}^t\\rbrace $ from $\\hat{w}_{k}^t \\rightarrow w_{k}^{t+1}$ as a global model trajectory.", "Server.", "Correspondingly, we cache the local and global model updates in the local and global model trajectories for all client at the server, i.e.", "$\\lbrace \\mathcal {\\hat{L}}_i|i=1,2,...,N\\rbrace $ and $\\lbrace \\mathcal {G}_i|i=1,2,...,N\\rbrace $ .", "Note that if the server can always send the lossless global model update to all clients, the global trajectories at time $t$ are the same for all clients." 
], [ "Model Prediction", "Client.", "Given a client $k$ at time point $t$ , we predict $\\hat{w}_{k}^t \\rightarrow w_{k}^t$ from the local and global training trajectories, $\\mathcal {L}_{k}^{t-1}$ and $\\mathcal {\\hat{G}}_{k}^{t-1}$ as follows: $\\tilde{w}_{k}^{t} = f_{predict,k}(\\mathcal {L}_{k}^{t-1}, \\mathcal {\\hat{G}}_{k}^{t-1}, \\hat{w}_{k}^{t}) = \\operatornamewithlimits{arg\\,max}_{w_{k}^t} p(w_{k}^t|\\underbrace{w_{k}^{t-T}, ...,w_{k}^{t-1}}_{\\text{$\\mathcal {L}_{k}^{t-1}$}},\\underbrace{\\hat{w}_{k}^{t-T}, ...,\\hat{w}_{k}^{t-1}}_{\\text{$\\mathcal {\\hat{G}}_{k}^{t-1}$}}, \\hat{w}_{k}^{t})$ where $f_{predict,k}$ is the used predictor for model prediction in the client $k$ .", "Server.", "For the server, we predict model updates $\\hat{w}_i^t\\rightarrow w^t$ for each client $i$ from local and global trajectories $\\mathcal {\\hat{L}}_{i}^{t-1}$ and $\\mathcal {G}_{i}^{t-1}$ as follows: $\\tilde{w}_{i}^{t} = h_{predict,i}(\\mathcal {\\hat{L}}_{i}^{t-1}, \\mathcal {G}_{i}^{t-1}, \\hat{w}_{i}^{t}) = \\operatornamewithlimits{arg\\,max}_{w_{i}^t} p(w_{i}^t|\\underbrace{w_{i}^{t-T}, ...,w_{i}^{t-1}}_{\\text{$\\mathcal {\\hat{L}}_{k}^{t-1}$}},\\underbrace{\\hat{w}_{i}^{t-T}, ...,\\hat{w}_{i}^{t-1}}_{\\text{$\\mathcal {G}_{k}^{t-1}$}}, \\hat{w}_{i}^{t}), \\forall i\\in {1,...,N}$ where $h_{predict,i}$ is the used predictor for model prediction in the server for each client $i$ ." ], [ "Model Residual", "Given a model update $\\hat{w}_{i}^t \\rightarrow w_{i}^t$ for the client $i$ at time $t$ in the server or in the corresponding client, if we can compute the model prediction $\\tilde{w}_{i}^{t}$ based on Eq.", "REF , we define the model residual as follows: $r_{i}^t = w_{i}^t - \\tilde{w}_{i}^t$ Note that $r_{i, ul}^t$ and $r_{i, dl}^t$ are the residuals of clients for uploading and the residuals of the server for downloading, respectively.More understanding of residuals is provided in Sec.", "." 
], [ "Deep Compression", "To reduce the model size for more efficient communication, we shrink the model size before sending it out.", "We define a compressed model in the client k: $\\bar{w}_{k}^t = f_{compress}(w_{k}^t)$ and in the server: $\\bar{w}_{i}^t = h_{compress}(w_{i}^t), \\forall i\\in \\lbrace 1,...,N\\rbrace $ where $f_{compress}$ and $h_{compress}$ is the used compressor for model compression in clients and server respectively.", "In our system, we consider to compress and communicate model residuals instead of model itself in the client k: $\\bar{r}_{k,ul}^t = f_{compress}(r_{k,ul}^t) = f_{compress}( w_{k}^t - f_{predict, i}(w_{k}^{t-T}, ...,w_{i}^{t-1},\\hat{w}_{k}^{t-T}, ...,\\hat{w}_{k}^{t-1}, \\hat{w}_{k}^{t}))$ and in the server: $\\bar{r}_{i,dl}^t = h_{compress}(r_{i,dl}^t) = h_{compress}( w_{i}^t - f_{predict, i}(w_{i}^{t-T}, ...,w_{i}^{t-1},\\hat{w}_{i}^{t-T}, ...,\\hat{w}_{i}^{t-1}, \\hat{w}_{i}^{t})), \\forall i\\in \\lbrace 1,...,N\\rbrace $" ], [ "ResFed: Residual-based Federated Learning Framework", "[t]: ResFed: Residual-based federated learning framework [1] Server runs: initialize the global model $w$ initialize the empty local model trajectories $\\mathcal {\\hat{L}}_{1},...,\\mathcal {\\hat{L}}_{N}$ initialize the global model trajectories $\\mathcal {G}_{1},...,\\mathcal {G}_{N}$ initialize the predictor $h_{predict}$ $i \\in \\lbrace 1,2,...,N\\rbrace $ initialize an empty local model trajectories $\\mathcal {L}_{i}$ @client $i$ initialize an empty global model trajectories $\\mathcal {\\hat{G}}_{i}$ @client $i$ $h^{\\prime }_{predict,i} = h_{predict}$ sharing predictors to client $i$ $f^{\\prime }_{predict,i} = f_{predict, i}$ get the shared predictors from client $i$ $t \\in \\lbrace 1,2,...,M\\rbrace $ $i \\in \\lbrace 1,2,...,N\\rbrace $ in parallel $t < T$ $\\mathcal {\\hat{G}}_i \\leftarrow cache(w)$ cache the global model in $\\mathcal {\\hat{G}}_i $ @client $i$ Server communicates $w$ to the client $i$ $\\hat{w}_i \\leftarrow \\textbf {LocalTrain}(w)$ @client $i$ Client $i$ communicates $\\hat{w}_i$ to the server $\\mathcal {L}_i \\leftarrow cache(\\hat{w}_i)$ cache the local model in $\\mathcal {L}_i$ @client $i$ Server communicates $\\bar{r}_{i,dl}$ to the client $i$ $\\bar{r}_{i,ul} \\leftarrow $ ResFedClientUpdate $(i, \\bar{r}_{i,dl})$ @client $i$ Client $i$ communicates $\\bar{r}_{i,ul}$ to the server $\\tilde{w}_i \\leftarrow f^{\\prime }_{predict,i}(\\mathcal {G}_i, \\mathcal {\\hat{L}}_i, \\hat{w})$ predict updated model $\\hat{w}_i \\leftarrow \\tilde{w}_i + \\bar{r}_{i,ul}$ recover models $\\mathcal {\\hat{L}}_i \\leftarrow cache(\\hat{w}_i)$ update local trajectory $w \\leftarrow \\textbf {Aggregate}(\\hat{w}_1,...,\\hat{w}_N)$ $i \\in \\lbrace 1,2,...,N\\rbrace $ $\\tilde{w}_i \\leftarrow h_{predict}(\\mathcal {G}_i, \\mathcal {\\hat{L}}_i, w)$ predict updated model $r_{i,dl} \\leftarrow w - \\tilde{w}_i$ compute model residuals $\\bar{r}_{i,dl} \\leftarrow h_{compress}(r_{i,dl})$ compress model residuals $\\mathcal {G}_i \\leftarrow cache(\\tilde{w}_i + \\bar{r}_{i,dl})$ synchronize global trajectory return $w$ ResFedClientUpdate $(k, \\bar{r})$ $\\tilde{w} \\leftarrow h^{\\prime }_{predict, i}(\\mathcal {\\hat{G}}_k, \\mathcal {L}_k, w_k)$ predict updated model $\\hat{w} \\leftarrow \\tilde{w} + \\bar{r}$ recover models $\\mathcal {\\hat{G}}_k \\leftarrow cache(\\hat{w})$ update global trajectory $w_k \\leftarrow \\textbf {LocalTrain}(\\hat{w})$ $\\tilde{w}_k \\leftarrow f_{predict,i}(\\mathcal {\\hat{G}}_k, \\mathcal {L}_k, 
\\hat{w})$ predict updated model $r_k \\leftarrow w_k - \\tilde{w}_k$ compute model residuals $\\bar{r}_k \\leftarrow f_{compress}(r_k)$ compress model residuals $\\mathcal {L}_k \\leftarrow cache(\\tilde{w}_k + \\bar{r}_k)$ synchronize local trajectory return $\\bar{r}_k$ The overview of ResFed is shown in Fig.", "REF and the detailed steps with lossy compression in one communication round are given in Fig.", "REF .", "In particular, we introduce (a) predictor sharing, (b) model prediction, and (c) residual generation in Sec. REF .", "In Sec.", "REF , (d) residual compression is formulated in Eq.", "REF and Eq.", "REF .", "Then, after (e) communicating the residual bits, we provide details on (f) model recovery in Sec.", "REF .", "Finally, in Sec.", "REF , we describe (g) trajectory synchronization." ], [ "Predictor Sharing", "We deploy a pair of predictors in both the clients and the server, which can execute model predictions based on the local and global model trajectories in the time series.", "In our framework, the server caches the local model trajectories of all clients once the local model updates are received.", "Given a client $k$ , if the received models in the server are exactly the same as the local models, and the models in $\\mathcal {\\hat{G}}_k$ are also the same, we say that both trajectories of client $k$ are fully observable in the server.", "Given a predictor $f_{predict, k}$ in the client $k$ , we share it so that the predictor in the server satisfies $f^{\\prime }_{predict, k}=f_{predict, k}$ in ResFed.", "If the trajectories of the client are fully observable in the server, we can get the same model predictions in both the server and the client from Eq.", "REF .", "Then, by communicating the model residuals, the new model update at time $t+1$ can be recovered from the model residual and the model at the last time point.", "Also, we share the set of predictors in the server $\\lbrace h_{predict, i}|i=1,...,N\\rbrace $ with the corresponding clients, i.e.", "$h^{\\prime }_{predict, i}=h_{predict, i}, \\forall i\\in \\lbrace 1,...,N\\rbrace $ .", "Then we can get the same model predictions based on Eq.", "REF .", "For the predictor, various design choices exist.", "In this work, we employ a predictor based on the model transition dynamics in a sliding history time window.", "The predictor is formulated as follows: $\\tilde{w}_{k}^{t} = f_{predict}(\\mathcal {L}_{k}^{t-1}, \\mathcal {\\hat{G}}_{k}^{t-1}, \\hat{w}_{k}^t) ={\\left\\lbrace \\begin{array}{ll}\\hat{w}_{k}^t, & \\text{$T=0$}.\\\\\\hat{w}_{k}^t + \\sum _{\\tau =1}^T (-1)^{T-\\tau }(T-\\tau +1)(w_{k}^{t-\\tau }-\\hat{w}_{k}^{t-\\tau }), & \\text{$T>0$}.\\end{array}\\right.", "}$ To reduce the memory used for caching trajectories in the client, we apply a short time window $[t-T, t]$ in the prediction process.", "We term the predictor (i) a stationary predictor when $T=0$ and (ii) a linear predictor when $T=1$ .", "Note that we consider the model updates in [1], [19], [8] as special residuals, which are calculated by the stationary predictor.", "Specifically, the stationary predictor uses the current model for the prediction of the next model, $\\tilde{w}_{k}^{t} = \\hat{w}_{k}^t$ , where the model residual is always $r_{k}^{t} = w_{k}^{t} - \\hat{w}_{k}^{t}$ .", "Note that when the number of local training epochs is fixed to 1, the stationary residuals are proportional to the gradients, i.e.", "$r = \\eta g$ , where $\\eta $ is the learning rate and $g$ represents the gradients.", "In the linear predictor, $\\tilde{w}_{k}^{t} = \\hat{w}_{k}^{t} + 
w_{k}^{t-1} - \\hat{w}_{k}^{t-1}$ , the model transition in the last local training step is always considered, where $r_{k}^{t} = w_{k}^{t} - \\hat{w}_{k}^{t} - w_{k}^{t-1} + \\hat{w}_{k}^{t-1}$ .", "The predictor for the client $k$ in the server, $h_{predict,k}$ , is similar to $f_{predict,k}$ ." ], [ "Model Recovery", "We cache the trajectories in both the server and the clients.", "Each client has two model trajectories for the local and global model updates in the history.", "The server caches the global trajectory and the local model trajectories of all connected clients in the history.", "In this case, the two trajectories in each client are fully observable in the server.", "Through sharing predictors, given a client $k$ at round $t$ , the server is able to get the same model prediction $\\tilde{w}_{k}^t$ as the client $k$ .", "If uncompressed model residuals ($\\hat{w}_{k}^t = w^t$ ) are received from client $k$ , the model update after local training $w_{k}^t$ can be recovered in the server as follows: $w_{k}^t = r_{k}^{t} + \\hat{w}_{k}^t = r_{k}^{t} + w^t$ where $\\hat{w}_{k}^t$ is the global model in the last round.", "Similarly, if we predict the global model update in the client, through sharing predictors and uncompressed residuals, the aggregated model can also be recovered in the client." ], [ "Model Trajectory Synchronization", "Since lossy compression is applied to the model residuals, i.e.", "$ \\bar{r} \\ne r$ , the updated model of a sender $w$ can be recovered in the receivers as $\\hat{w} = \\bar{r} + \\tilde{w} $ .", "Therefore, we say that $w$ cannot be fully recovered in the receivers, as $\\hat{w} \\ne w $ .", "If we cache the original models $w$ in the sender and $\\hat{w}$ in the receivers, the trajectories in the sender and receiver are different, which leads to drift in the results of the shared predictors.", "To avoid the drift effect, we synchronize the model trajectories in the sender by simulating the recovery process: the originally updated models are not cached in the trajectories; instead, we recover the model from the compressed model residuals locally, so that the trajectories in the sender evolve in the same way as the trajectories in the receiver.", "The ResFed pseudocode is given in Algorithm ." ], [ "Experimental Settings", "Guided by the previous work by [2], [14], we process and distribute the datasets MNIST ([12]), Fashion-MNIST ([27]), SVHN ([20]), CIFAR-10 ([11]) and CIFAR-100 ([20]) over a set of clients, and train LeNet-5, a CNN (consisting of 5 convolutional and 3 fully connected layers), and ResNet-18 on those federated datasets in a distributed manner, as shown in Tab.", "REF .", "We provide details on the experimental settings in Sec.", "." 
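Returning to the predictors, the recovery rule, and the trajectory synchronization defined in the preceding sections, a compact sketch is given below; local_traj[-tau] is assumed to hold the cached w from tau rounds ago and global_traj[-tau] the corresponding received model (hypothetical indexing, shown for illustration only):

    def predict(local_traj, global_traj, w_hat, T):
        # w_hat: the model received (and recovered) in the current round
        if T == 0:
            return w_hat                                    # stationary predictor
        pred = w_hat.clone()
        for tau in range(1, T + 1):
            sign = (-1.0) ** (T - tau)
            coeff = T - tau + 1
            pred = pred + sign * coeff * (local_traj[-tau] - global_traj[-tau])
        # for T = 1 this reduces to the linear predictor: w_hat + (w^{t-1} - w_hat^{t-1})
        return pred

    def recover_and_synchronize(compressed_residual, local_traj, global_traj, w_hat, T):
        # receiver side: same predictor and trajectories yield the same prediction,
        # so adding the received (compressed) residual recovers the sender's update
        w_recovered = predict(local_traj, global_traj, w_hat, T) + compressed_residual
        local_traj.cache(w_recovered)    # cache the recovered model to avoid trajectory drift
        return w_recovered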
], [ "Residuals vs Gradients and Weights", "On top of the basic evaluation in Fig.", "REF , we believe deep residual compression can save more communication volume in federated learning with minimum impact on the accuracy.", "Thus, we demonstrate the federated learning integrating compressing weights, gradients and two different residuals, i.e.", "stationary and linear residuals in Eq.", "REF .", "As shown in Fig.", "REF , the testing accuracy on both IID and Non-IID datasets from deep residual compression always outperforms weight and gradient compression.", "Also, the linear residuals can achieve a higher accuracy and faster convergence than stationary residuals.", "The results indicate communicating residuals in federated learning can enable larger compression ratio per communication round, compared to communicating other parameters.", "Figure: Comparison of compressing weights, gradients, residuals with stationary (Res-0) and linear (Res-1) predictors.", "The sparsities are 0.2 and 0.01 for training on MNIST and CIFAR-100 distributed in 10 clients, respectively.", "We quantize the non-zero parameters and use 1 bit to represent each of them.", "For Non-IID setting, each client owns only the data with half classes." ], [ "Communication Efficiency Improvement", "Next, we evaluate the required communication volume for training three sizes of models on different datasets in both IID and Non-IID settings.", "Tab.", "REF shows that to reach a promising target accuracy, ResFed with lossy compression (compression ratio is set on $350\\times -375\\times $ ) can save on average around 99% of the total communication volume for all clients in only up- or down-streaming.", "Furthermore, the bitsaving ratios of ResFed on IID and Non-IID settings are similar, which indicates the compression performance of ResFed is robust to data heterogeneity in federated learning.", "We show testing accuracy and training loss change with increasing required communication volume in ResFed in Fig.", "REF .", "The results indicate communicating residuals in federated learning can remarkably save overall communication volume." ], [ "Scalability for Resource-constrained Communication Environments", "Finally, we explore the scalability of ResFed by tuning compression ratios for client-to-server and server-to-client, as in real application scenarios.", "The available network resources for up- and down-streaming can be heterogeneous.", "Fig REF shows the test accuracy effected for different values of sparsity, which leads to various compression ratios for each communication round for up- and down-streaming.", "From Fig.", "REF , we can observe the testing accuracy reduces with higher compression ratio per communication round, when the number of communication rounds is always the same, i.e.", "set to 300.", "However, when we consider the dedicated budget for communication costs in up- or down-streaming, a large compression ratio in ResFed can achieve better accuracy, as shown in Fig.", "REF .", "By adapting the compression ratio in resource-constrained communication environments, ResFed can effectively enhance the federated learning using deep residual compression." 
], [ "Related Work", "Deep compression.", "Deep compression was originally proposed by [5] and aims at compressing deep learning models by a pipeline including sparsification (pruning), quantization and encoding for more efficient deployment.", "Based on deep compression, [16] have proposed deep gradient compression to reduce the communication costs in distributed learning by compressing gradients rather than model weights, which can also be used in federated learning.", "However, models are usually trained more than one epoch locally in federated learning ([18], [15], [25], [9]), which results in gradient accumulation instead of gradients in other distributed learning scenarios ([23]).", "Our conducted experiments also show compressing residuals can achieve better communication efficiency comparing to compress gradients due to the additional prediction step.", "In ResFed, we especially consider residuals, which eliminate the model similarity in a single inter-round of federated learning communication and achieve a better compression performance by leveraging the deep residual compression.", "Federated learning and communication efficiency.", "Communication efficiency is the key for deploying federated learning in real application scenarios, especially to train a large model.", "Previous research by [29], [9], [4] attempted to reduce the number of needed communication rounds for a better communication efficiency.", "Meanwhile, the proposed approaches by [28], [22], [7] are built upon deep compression and focus on improving communication efficiency by decreasing the communication volume.", "However, unlike compressing residuals in ResFed, they compressed model weights without consideration of any potential redundancy in sequential updating of federated learning.", "The recent work by [30] has also mentioned the predictive model update in federated learning, which is concurrent to our work, but the information in the history of model updating is not considered for reducing the parameter redundancy there.", "Additionally, all those algorithms above can only be used to improve the communication efficiency for up-streaming, while ResFed can handle with up- and/or down-streaming for heterogeneous resource-constrained environments.", "Residuals in video encoding.", "The residuals have been widely and successfully utilized in video encoding since H.261 ([3], [13]).", "By considering inter-frame correlations, the pixel values in the current frame are predicted from history frames and then only residuals, i.e.", "the deviations between predicted and the actual pixel values in the current frame, are encoded and streamed to the receivers.", "Inspired by the residuals in video encoding, we integrate the model residuals into federated learning in ResFed, where the inter-round similarity of a model update is analogous to inter-frame correlation in video encoding." 
], [ "Conclusion", "In this work, we introduce a residual-based federated learning framework, which allows clients and the server to share residuals instead of weights or gradients.", "It achieves more efficient communication for both up- and down-streaming in federated learning by leveraging deep residual compression, and hence can be flexibly deployed in heterogeneous network environments.", "Our conducted experimental evaluation shows that the framework remarkably reduces overall communication volume to reach the same prediction accuracy in standard federated learning.", "Compared to compressing model weights or gradients, ResFed achieves higher accuracy and faster convergence speed.", "Limitations.", "We cache the recovered models as local and global trajectories for continual model prediction and residual computing in all clients and server.", "Assuming that we perform ResFed with $N$ clients for training a model with $V$ 32-bit float parameters and we set the trajectory length on $T$ , each client should use $2*32*V*T$ bits memory to cache the 2 trajectories.", "Thus, the additional required memory size in each client is proportional to $T*V$ .", "For the server, it needs $2*32*V*T*N$ bits memory to cache the local and global trajectories for all clients.", "In order to reduce the required memory, a potential solution is to cache the compressed models in the trajectories for both sender and receiver symmetrically, after model recovery.", "However, the accuracy of model prediction based on compressed trajectories is reduced, then the memory-accuracy trade-off needs to be investigated in future work." ], [ "Reproducibility statement", "We provide the source-code for the implementation and evaluation of our proposed framework ResFed in the code appendix.", "The user guidelines for installation and execution is given in the file README.md.", "The details on our conducted experiments are provided in Sec. 
.", "The source-code will be made publicly available after double-blind review.", "Tables of Notations We provide an overview of the most relevant notations in Tab.", "REF .", "Table: Summary of mainly used notations in this paper.", "Experimental Details and Further Results In this section, we provide the details on our conducted experiment in .", "We run on a computer cluster with 4$\\times $  NVIDIA-A100-PCIE-40GB GPUs and 4$\\times $  32-Core-AMD-EPYC-7513 CPUs.", "The environment is a Linux system with Pytorch 1.8.1 and Cuda 11.1.", "We demonstrate the learning task on 5 different datasets: MNIST [12]: 60000 data points in the training set and 10000 data points in the test set.", "Each data point is a 28x28 gray-scale digit image, associated with a label from 10 classes.", "CIFAR-10 [11]: 50000 data points in the training set and 10000 data points in the test set.", "Each data point is a 32x32 RGB image, associated with a label from 10 classes.", "Fashion-MNIST [27]: 60000 data points in the training set and 10000 data points in the test set.", "Each data point is a 28x28 gray-scale image, associated with a label from 10 classes.", "SVHN [20]: 73257 data points in the training set and 26032 data points in the test set.", "Each data point is a 32x32 RGB digit image, associated with a label from 10 classes.", "CIFAR-100 [20]: 50000 data points in the training set and 10000 data points in the test set.", "Each data point is a 32x32 RGB image, associated with a label from 100 classes.", "The models trained on those dataset are shown in Tab.", "REF .", "Table: Dataset and models in experimentsExperiments for Sec.", "REF We divide the dataset, i.e.", "MNIST and CIFAR-100, into 10 clients and run the FedAvg with local optimizer of stochastic gradient descent (SGD) (momentum is 0.9) to train LeNet-5 and CNN (with 5 convolutional and 3 fully connected layers), respectively.", "The learning rate is fixed on 0.01 and the batch size is 64 for all tests.", "In Non-IID data setting, each client owns only 2 out of 10 classes in MNIST and 50 out of 100 classes in CIFAR-100.", "In clients, we consider 5 different approaches as follows: No Compression: The standard federated learning without any compression methods is used as the baseline, which provides the best results among all of the approaches, when number of communication rounds is the same.", "Compress Weights: Before communication in standard federated learning, the model weights are first compressed.", "Compress Gradients: The gradients in each epoch are compressed and communicated to the server.", "Compress Res-0: The residuals are computed by stationary predictor, i.e.", "Eq.", "REF with $T=0$ .", "Compress Res-1: The residuals are computed by linear predictor, i.e.", "Eq.", "REF with $T=1$ .", "For each approach on each dataset, we run 10 tests with different seeds and show the mean value and standard variance in Fig.", "REF .", "We compress the those model parameters using deep compression pipeline ([5]) only for client-to-server.", "In particular, we set sparsity on 80% and 99% for residuals in LetNet and CNN, respectively.", "We use SGD optimizer momentum of 0.9.", "The number of local epoch is 1.", "Those sparsified parameters are zero-parameters and the number of the continually appearing zero-parameters are encoded as a 16-bit float parameters ([16]).", "After that, we quantize the non-zero parameters in 1 bit with median value of positive and negative parameters ([28]).", "Finally, Huffman encoding ([24]) is used.", "Experiments for 
Sec.", "REF We train LeNet-5, CNN and ResNet18 (size from small to large) on Fashion-MNIST, CIFAR-10 and SVHN with 10 clients in both IID and Non-IID settings.", "We demonstrate the ResFed and lossy compress the residuals either only for uploading (UL) or downloading (DL) to study the effects on each direction separately.", "The learning rate is fixed on 0.01 and the batch size is 64 for all tests.", "In Non-IID data setting, each client owns 50% classes (5 out of 10).", "We use mean values from 5 tests in each experiment for the evaluation in Tab.", "REF and show the results with standard variance in Fig.", "REF , which indicate the overall communication volume can be remarkably reduced in ResFed.", "Figure: Overall communication efficiency enhanced by ResFed with Lossy Compression (LC) for only Uploading (UL) or Downloading (DL)For all experiments, we set the sparsity on 99% and use 1 bit for each non-zero parameters.", "Consequently, the compression ratio per communication round for LetNet-5 is about $350\\times $ , CNN is about $375\\times $ and ResNet-18 is about $356\\times $ .", "Experiments for Sec.", "REF To show the correlation between deep residual compression in up- und down-streaming, we train the CNN model on IID CIFAR-10 with 10 clients and tune the sparsity for realizing different compression ratios per communication round in ResFed.", "The learning rate is fixed on 0.01 and the batch size is 64.", "We use SGD optimizer momentum of 0.9.", "The number of local epoch is 1.", "Specifically, the value of sparsity is $\\lbrace 0\\%, 90\\%, 95\\%, 99\\%, 99.5\\%\\rbrace $ for both up- and down-streaming and then set 1 bit for all non-zero parameters in quantization.", "Understanding Residuals We provide an illustration of model transitions during federated learning in Fig.", "REF .", "Given a sender and a receiver (both can be a client or a server), the communication and operation result in model transition.", "Note that for a client, the operation is local training; for a server, the operation is aggregation.", "We consider the model transition caused by an operation as a internal model transition, and by communication as a external model transition.", "Then, as explained in Sec.", "REF , the model is updated twice between two communication rounds, which is shown in Fig.", "REF as dual model transition.", "Consequently, we can have an internal and an external model transition trajectory in both sender and receiver.", "Note that for a client, the internal model transition trajectory is a local model trajectory; for a server, the internal model transition trajectory is a global model trajectory, as described in Sec.", "REF .", "Figure: Residual generation in ResFed during model transitions.", "For a better overview, we simplify the system by disregarding the trajectory synchronization step in Sec.", ".Figure: Value comparison of model parameters, gradients and residuals in federated learning.", "We train a LeNet with 61706 weights of 32-bits float on MNIST distributed among 10 clients, with fixed learning rate 0.0010.001 and batch size 64.", "For fairly comparing gradients and residuals, the number of local epoch in each client is set as 1.", "We set 6 checkpoints when the number of communication rounds is {1,4,8,16,32,128}\\lbrace 1, 4, 8, 16, 32, 128\\rbrace .", "The results show that most values of residuals are smaller than weights and gradients during the training.", "It indicates that lossly compressing residuals naturally lose less information and have a smaller affect on 
the accuracy.Finally, ResFed allows the sender to predict the model for the next internal model transition, which is shown in orange.", "Meanwhile the sender does the operation to execute the internal model transition and residuals (in purple) are deduced from the difference of both model transition results.", "We believe the predicted model can be closer to the updated model than the previous model, which leads to smaller values of residuals.", "To evaluate it, we set 6 checkpoints when the number of communication rounds is $\\lbrace 1, 4, 8, 16, 32, 128\\rbrace $ , and show the values of model weights, gradients and residuals in Fig.", "REF .", "Based on this, the residuals can be compressed smaller than weights and gradients." ], [ "Tables of Notations", "We provide an overview of the most relevant notations in Tab.", "REF .", "Table: Summary of mainly used notations in this paper." ], [ "Experimental Details and Further Results", "In this section, we provide the details on our conducted experiment in .", "We run on a computer cluster with 4$\\times $  NVIDIA-A100-PCIE-40GB GPUs and 4$\\times $  32-Core-AMD-EPYC-7513 CPUs.", "The environment is a Linux system with Pytorch 1.8.1 and Cuda 11.1.", "We demonstrate the learning task on 5 different datasets: MNIST [12]: 60000 data points in the training set and 10000 data points in the test set.", "Each data point is a 28x28 gray-scale digit image, associated with a label from 10 classes.", "CIFAR-10 [11]: 50000 data points in the training set and 10000 data points in the test set.", "Each data point is a 32x32 RGB image, associated with a label from 10 classes.", "Fashion-MNIST [27]: 60000 data points in the training set and 10000 data points in the test set.", "Each data point is a 28x28 gray-scale image, associated with a label from 10 classes.", "SVHN [20]: 73257 data points in the training set and 26032 data points in the test set.", "Each data point is a 32x32 RGB digit image, associated with a label from 10 classes.", "CIFAR-100 [20]: 50000 data points in the training set and 10000 data points in the test set.", "Each data point is a 32x32 RGB image, associated with a label from 100 classes.", "The models trained on those dataset are shown in Tab.", "REF .", "Table: Dataset and models in experiments" ], [ "Experiments for Sec. 
", "We divide the dataset, i.e.", "MNIST and CIFAR-100, into 10 clients and run the FedAvg with local optimizer of stochastic gradient descent (SGD) (momentum is 0.9) to train LeNet-5 and CNN (with 5 convolutional and 3 fully connected layers), respectively.", "The learning rate is fixed on 0.01 and the batch size is 64 for all tests.", "In Non-IID data setting, each client owns only 2 out of 10 classes in MNIST and 50 out of 100 classes in CIFAR-100.", "In clients, we consider 5 different approaches as follows: No Compression: The standard federated learning without any compression methods is used as the baseline, which provides the best results among all of the approaches, when number of communication rounds is the same.", "Compress Weights: Before communication in standard federated learning, the model weights are first compressed.", "Compress Gradients: The gradients in each epoch are compressed and communicated to the server.", "Compress Res-0: The residuals are computed by stationary predictor, i.e.", "Eq.", "REF with $T=0$ .", "Compress Res-1: The residuals are computed by linear predictor, i.e.", "Eq.", "REF with $T=1$ .", "For each approach on each dataset, we run 10 tests with different seeds and show the mean value and standard variance in Fig.", "REF .", "We compress the those model parameters using deep compression pipeline ([5]) only for client-to-server.", "In particular, we set sparsity on 80% and 99% for residuals in LetNet and CNN, respectively.", "We use SGD optimizer momentum of 0.9.", "The number of local epoch is 1.", "Those sparsified parameters are zero-parameters and the number of the continually appearing zero-parameters are encoded as a 16-bit float parameters ([16]).", "After that, we quantize the non-zero parameters in 1 bit with median value of positive and negative parameters ([28]).", "Finally, Huffman encoding ([24]) is used." ], [ "Experiments for Sec. ", "We train LeNet-5, CNN and ResNet18 (size from small to large) on Fashion-MNIST, CIFAR-10 and SVHN with 10 clients in both IID and Non-IID settings.", "We demonstrate the ResFed and lossy compress the residuals either only for uploading (UL) or downloading (DL) to study the effects on each direction separately.", "The learning rate is fixed on 0.01 and the batch size is 64 for all tests.", "In Non-IID data setting, each client owns 50% classes (5 out of 10).", "We use mean values from 5 tests in each experiment for the evaluation in Tab.", "REF and show the results with standard variance in Fig.", "REF , which indicate the overall communication volume can be remarkably reduced in ResFed.", "Figure: Overall communication efficiency enhanced by ResFed with Lossy Compression (LC) for only Uploading (UL) or Downloading (DL)For all experiments, we set the sparsity on 99% and use 1 bit for each non-zero parameters.", "Consequently, the compression ratio per communication round for LetNet-5 is about $350\\times $ , CNN is about $375\\times $ and ResNet-18 is about $356\\times $ ." ], [ "Experiments for Sec. 
", "To show the correlation between deep residual compression in up- und down-streaming, we train the CNN model on IID CIFAR-10 with 10 clients and tune the sparsity for realizing different compression ratios per communication round in ResFed.", "The learning rate is fixed on 0.01 and the batch size is 64.", "We use SGD optimizer momentum of 0.9.", "The number of local epoch is 1.", "Specifically, the value of sparsity is $\\lbrace 0\\%, 90\\%, 95\\%, 99\\%, 99.5\\%\\rbrace $ for both up- and down-streaming and then set 1 bit for all non-zero parameters in quantization." ], [ "Understanding Residuals", "We provide an illustration of model transitions during federated learning in Fig.", "REF .", "Given a sender and a receiver (both can be a client or a server), the communication and operation result in model transition.", "Note that for a client, the operation is local training; for a server, the operation is aggregation.", "We consider the model transition caused by an operation as a internal model transition, and by communication as a external model transition.", "Then, as explained in Sec.", "REF , the model is updated twice between two communication rounds, which is shown in Fig.", "REF as dual model transition.", "Consequently, we can have an internal and an external model transition trajectory in both sender and receiver.", "Note that for a client, the internal model transition trajectory is a local model trajectory; for a server, the internal model transition trajectory is a global model trajectory, as described in Sec.", "REF .", "Figure: Residual generation in ResFed during model transitions.", "For a better overview, we simplify the system by disregarding the trajectory synchronization step in Sec.", ".Figure: Value comparison of model parameters, gradients and residuals in federated learning.", "We train a LeNet with 61706 weights of 32-bits float on MNIST distributed among 10 clients, with fixed learning rate 0.0010.001 and batch size 64.", "For fairly comparing gradients and residuals, the number of local epoch in each client is set as 1.", "We set 6 checkpoints when the number of communication rounds is {1,4,8,16,32,128}\\lbrace 1, 4, 8, 16, 32, 128\\rbrace .", "The results show that most values of residuals are smaller than weights and gradients during the training.", "It indicates that lossly compressing residuals naturally lose less information and have a smaller affect on the accuracy.Finally, ResFed allows the sender to predict the model for the next internal model transition, which is shown in orange.", "Meanwhile the sender does the operation to execute the internal model transition and residuals (in purple) are deduced from the difference of both model transition results.", "We believe the predicted model can be closer to the updated model than the previous model, which leads to smaller values of residuals.", "To evaluate it, we set 6 checkpoints when the number of communication rounds is $\\lbrace 1, 4, 8, 16, 32, 128\\rbrace $ , and show the values of model weights, gradients and residuals in Fig.", "REF .", "Based on this, the residuals can be compressed smaller than weights and gradients." ] ]
2212.05602
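The client-side pipeline described in the ResFed record above (predict the next model, form the residual, sparsify it by magnitude, quantize the survivors to 1 bit using the medians of the positive and negative values, then run-length and Huffman code the result) can be sketched in a few lines. The following is a minimal NumPy illustration written under our own simplifying assumptions, not the authors' released implementation; all function names and defaults are hypothetical, and the run-length and Huffman coding stages are omitted.

import numpy as np

def predict_model(history, T=1):
    # Stationary predictor (T = 0): reuse the last communicated model.
    # Linear predictor (T = 1): extrapolate from the last two models.
    if T == 0 or len(history) < 2:
        return history[-1]
    return 2.0 * history[-1] - history[-2]

def compress_residual(updated, predicted, sparsity=0.99):
    residual = updated - predicted                        # what actually gets communicated
    keep = max(1, int(round((1.0 - sparsity) * residual.size)))
    threshold = np.partition(np.abs(residual).ravel(), -keep)[-keep]
    mask = np.abs(residual) >= threshold                  # magnitude-based sparsification
    pos = residual[mask & (residual > 0)]
    neg = residual[mask & (residual < 0)]
    pos_med = float(np.median(pos)) if pos.size else 0.0
    neg_med = float(np.median(neg)) if neg.size else 0.0
    signs = residual[mask] > 0                            # 1-bit code per kept entry
    return mask, signs, (pos_med, neg_med)                # entropy coding omitted here

def decompress_residual(mask, signs, medians):
    pos_med, neg_med = medians
    out = np.zeros(mask.shape, dtype=np.float32)
    out[mask] = np.where(signs, pos_med, neg_med)
    return out

# Toy round trip: the receiver adds the reconstructed residual to its own prediction.
rng = np.random.default_rng(0)
history = [rng.normal(size=61706).astype(np.float32) for _ in range(2)]
updated = history[-1] + 0.01 * rng.normal(size=61706).astype(np.float32)
predicted = predict_model(history, T=1)
mask, signs, medians = compress_residual(updated, predicted)
recovered = predicted + decompress_residual(mask, signs, medians)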
[ [ "Universality of the local limit of preferential attachment models" ], [ "Abstract We study preferential attachment models where vertices enter the network with i.i.d.", "random numbers of edges that we call the out-degree.", "We identify the local limit of such models, substantially extending the work of Berger et al.(2014).", "The degree distribution of this limiting random graph, which we call the random P\\'{o}lya point tree, has a surprising size-biasing phenomenon.", "Many of the existing preferential attachment models can be viewed as special cases of our preferential attachment model with i.i.d.", "out-degrees.", "Additionally, our models incorporate negative values of the preferential attachment fitness parameter, which allows us to consider preferential attachment models with infinite-variance degrees.", "Our proof of local convergence consists of two main steps: a P\\'olya urn description of our graphs, and an explicit identification of the neighbourhoods in them.", "We provide a novel and explicit proof to establish a coupling between the preferential attachment model and the P\\'{o}lya urn graph.", "Our result proves a density convergence result, for fixed ages of vertices in the local limit." ], [ "[enumerate]itemsep = -0.2em figurename=figure" ] ]
2212.05551
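For intuition about the model class described in the abstract above, the sketch below simulates a generic preferential attachment graph in which each entering vertex brings an i.i.d. number of edges and attaches them with probability proportional to degree plus an affine fitness parameter. This is our own illustrative variant, not necessarily the exact model analysed in the paper; the out-degree law, the seed graph, and the parameter delta are assumptions chosen only for the example (delta must keep all attachment weights positive).

import random

def preferential_attachment(n, out_degree_sampler, delta=0.0, seed=0):
    rng = random.Random(seed)
    degree = {1: 1}                  # seed vertex; initial conditions vary between variants
    edges = []
    for v in range(2, n + 1):
        m = out_degree_sampler(rng)  # i.i.d. out-degree of the entering vertex
        targets = list(degree)
        # attach each edge to an old vertex with probability proportional to degree + delta
        weights = [degree[u] + delta for u in targets]
        for u in rng.choices(targets, weights=weights, k=m):
            edges.append((v, u))
            degree[u] += 1
        degree[v] = m
    return edges

# Example: out-degrees drawn uniformly from {1, 2, 3}.
edges = preferential_attachment(1000, lambda rng: rng.randint(1, 3), delta=0.0)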
[ [ "Morphology of Shocked Lateral Outflows in Colliding Hydrodynamic Flows" ], [ "Abstract Supersonic interacting flows occurring in phenomena such as protostellar jets give rise to strong shocks, and have been demonstrated in several laboratory experiments.", "To study such colliding flows, we use the AstroBEAR AMR code to conduct hydrodynamic simulations in three dimensions.", "We introduce variations in the flow parameters of density, velocity, and cross sectional radius of the colliding flows %radius in order to study the propagation and conical shape of the bow shock formed by collisions between two, not necessarily symmetric, hypersonic flows.", "We find that the motion of the interaction region is driven by imbalances in ram pressure between the two flows, while the conical structure of the bow shock is a result of shocked lateral outflows (SLOs) being deflected from the horizontal when the flows are of differing cross-section." ], [ "Introduction", "Radiative shocks occur in a variety of settings, such as High Energy Density Plasma (HEDP), protostellar jets [1], supernova explosions [2], gamma ray bursts [3], and active galactic nuclei [4].", "Over the years several efforts have been made to study these phenomena in the laboratory.", "Despite large differences in physical scale, results obtained in the laboratory can be used to understand phenomena in the astrophysical setting, thanks to the use of dimensionless parameters [5], [6], [7].", "Experiments at Imperial College London have produced single flows using a pulsed-power generator to drive radial wire arrays [8] and radial foils [9], [10]; similar experiments have also been performed at Cornell University [11].", "Collimation of these flows was driven by toroidal magnetic fields in a manner consistent with the “Magnetic Tower\" model [12].", "Extending this work, [13] found that the presence of an ambient medium results in the formation of a shock, driven by ablation of halo plasma.", "While astronomical shocks are generally produced via time-variations in velocity, a similar effect can be obtained in a laboratory setting by creating a collision between two supersonic flows.", "[14] conducted such an experiment, images of which are shown in figure REF .", "Among the results of that study are the emergence of small scale structures behind a conical bow shock, which was observed to propagate in a downward direction.", "When two flows collide, an interaction region forms, with the jet shocks becoming the boundaries.", "This region usually consists of a cooling region, in which gas cools from post-shock temperature $T_s$ immediately behind the jet shocks to a lower temperature further behind the shock.", "After passing through the cooling region, gas reaches its final post-cooling temperature and is deposited onto the cold slab, where densities are highest (see figure REF ), some shocked material gets ejected laterally by the high pressures throughout the interaction region, producing shocked lateral outflows (SLOs) [15].", "Figure: (left) Images of a colliding flow experiment.", "Adapted from figure 2 of .", "(right) a diagram showing the structure of colliding flows; adapted from In our previous paper [16] we began a series of simulations in an effort to study colliding radiative flows like those of [14].", "In that paper we used hydrodynamic simulations with an analytic form of radiative cooling, focusing our attention on instabilities in an effort to explain the origin of small scale structures.", "While [14] had suggested 
that the [17] instability was a source of structure, we did not see any evidence of this mode present in the case of analytic cooling.", "Instead, long-term evolution was found to be dominated by the bending modes characteristic of the nonlinear thin shell instability (NTSI) [18], which could be triggered either by sufficiently short cooling lengths or by oscillations resulting from the radiative shock instability [19].", "In this paper, we continue to investigate the results of [14] by extending the results of [16].", "Here we shift our focus away from instabilities (which were discussed in significant depth in [16]) to a study of flow parameter variation, allowing us to better understand the conical shape and downward propagation of the bow shock and lateral outflows.", "We will continue to focus solely on hydrodynamic simulations, but the analytic cooling curve previously used will be replaced with a more realistic cooling function; this will be described in further detail in section .", "Our approach of continuing to remain in the radiative hydrodynamic case for now will eventually allow us to have a better understanding of the magnetic case by providing a reference for comparison.", "This paper is organized as follows: In Section we discuss the model system, the selected cooling function, and simulation parameters.", "In section , we present the results of the simulations.", "Section will include a discussion of motion of the interaction region, properties of the cold slab, and deflection of the SLOs." ], [ "Methods and Model", "The simulations in this study were conducted using AstroBEARhttps://astrobear.pas.rochester.edu/${ }^{,}$ [21], [22], which is a massively parallelized adaptive mesh refinement (AMR) code that includes a variety of multiphysics solvers, such as magnetic resistivity, radiative transport, self-gravity, heat conduction, and ionization dynamics.", "Our study uses only the hydro solvers with an energy source term associated with the radiative cooling.", "Thus our governing equations are; $\\frac{\\partial \\rho }{\\partial t} + {\\nabla } \\cdot \\rho {v} = 0$ $\\frac{\\partial \\rho {v}}{\\partial t} + {\\nabla } \\cdot \\left( \\rho {v} \\otimes {v} \\right)= - \\nabla p$ $\\frac{\\partial E}{\\partial t} + {\\nabla } \\cdot ((E + p) {v}) =-n^2\\Lambda (n,T)$ where $\\rho $ is the mass density, $n$ is the number density of nuclei, ${v}$ is the fluid velocity, $p$ is the thermal pressure, and $E = \\frac{1}{\\gamma - 1} p + \\frac{1}{2}\\rho v^2$ is the combined internal and kinetic energies.", "$n^2\\Lambda (n,T)$ is a cooling function, which gives the energy loss per unit volume.", "While in [16] we used an analytic cooling function of the form $\\Lambda (T) = \\alpha \\left(\\frac{T}{T_0}\\right)^\\beta $ , in this paper we use a more realistic cooling function.", "Such a cooling function introduces variation in the slope $\\frac{\\operatorname{d} \\log \\Lambda }{\\operatorname{d}\\log T}$ , which is important to both the radiative shock instability and to the Field instability.", "The specific function $\\Lambda (n,T)$ we used was chosen in attempt to provide correspondence with the experiments of [14] and is plotted in figure REF .", "This function was calculated using ABAKO/RAPCAL [23], [24].", "The required atomic data were obtained using FAC [25], while the resulting function was parameterized in terms of density and temperature using PARPRA [26].", "Cooling is strongest at high temperatures and lower densities; as a fluid cools, $\\Lambda (n,T)$ 
decreases until a local minimum is encountered at a temperature on the order of 10 eV.", "Cooling then increases again as the temperature continues to decrease, but this effect is reduced at higher densities.", "Figure: The cooling function plotted (top) as a function of temperature, for selected densities, and (bottom) as a colour-map in terms of density and temperature.", "Similarly to [16], we drove two cylindrical jets, one each from the top and bottom z-boundaries.", "For the first run, both jets had densities of $3.0\times 10^{18}$ particles per cm$^3$ , speeds of 70.0 km s$^{-1}$ , temperatures of 720 K, and cross-sectional radii of 1.5 mm.", "The ambient medium density was $1\times 10^{18}$ particles per cm$^3$ at a temperature of 4320 K.", "Extrapolated boundary conditions were used in all directions.", "Runs were conducted in a 6.4 mm cubic domain, and consisted of the reference run described above and nine other runs in which the jet parameters were varied as follows (summarised in table REF ).", "For the first two runs, we changed the density of the bottom jet to $2.0\times 10^{18}$ and $1.0\times 10^{18}$ particles per cm$^3$ .", "For the third and fourth runs, the radius of the bottom jet was varied to 2.0 and 1.0 mm, and the density was varied to $1.69\times 10^{18}$ and $6.75\times 10^{18}$ particles per cm$^3$ such that the mass flux of the jets remained unaltered.", "For the fifth and sixth runs, the radius of the bottom jet was varied to 2.0 and 1.0 mm, but this time the density of the jet was reverted to the original value.", "Finally, the last three runs varied the velocities of the top and bottom jets to 80 and 60 km s$^{-1}$ respectively; these values were chosen to match the experiments in [14].", "The seventh run retained the densities of the reference run, while the eighth run used densities of $2.63\times 10^{18}$ and $3.50\times 10^{18}$ so that the jets were of equal momentum density, and the ninth run used densities of $2.30\times 10^{18}$ and $4.08\times 10^{18}$ so that the jets were of equal momentum flux.", "Table: Densities, velocities, and radii of jets in the first set of runs.", "For each pair of numbers, the first describes the top jet while the second describes the bottom jet." 
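To make the role of the source term concrete, the sketch below applies a simple operator-split cooling update E -> E - n^2 Lambda(n,T) dt to a single cell. It is our own toy illustration only: the power-law Lambda is a made-up stand-in for the tabulated ABAKO/RAPCAL function, the constants are hypothetical, and this is not AstroBEAR's actual cooling solver.

import math

K_B = 1.380649e-16        # erg/K
M_H = 1.6605e-24          # g (1 amu per nucleus, as assumed in the text)
GAMMA = 5.0 / 3.0

def lambda_toy(n, T):
    # Hypothetical power-law stand-in for the tabulated cooling function (erg cm^3 / s).
    return 1.0e-26 * (T / 1.0e4) ** 0.5

def cool_cell(rho, v, E, dt, substeps=100):
    # Advance the total energy density E (erg/cm^3) through radiative losses only.
    n = rho / M_H                          # number density of nuclei
    kinetic = 0.5 * rho * v * v
    for _ in range(substeps):              # subcycle the potentially stiff source term
        p = (GAMMA - 1.0) * (E - kinetic)
        T = p / (n * K_B)                  # crude ideal-gas temperature estimate
        E = max(kinetic, E - n * n * lambda_toy(n, T) * dt / substeps)
    return E

# Example: jet-density gas at 70 km/s with the quoted thermal pressure.
rho0 = 3.0e18 * M_H
E0 = 2.98e5 / (GAMMA - 1.0) + 0.5 * rho0 * (7.0e6) ** 2
print(cool_cell(rho0, 7.0e6, E0, dt=1.0e-9))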
], [ "Results", "We will first examine the reference run (run 0 in table REF ), shown in figure REF .", "This run shows a collision between two identical jets, similar to those studied in [16] but now using the complex cooling curve described in section .", "As gas crosses the jet shock and enters the interaction region, the density increases from $3\\times 10^{18}$ cm$^{-3}$ to $1.2\\times 10^{19}$ cm$^{-3}$ .", "The density varies as the gas cools but remains at the same order of magnitude throughout the cooling region, but is seen to be as high as $2.5\\times 10^{21}$ cm$^{-3}$ in the cold slab.", "As gas exits the interaction region through the sides and into the shocked lateral outflows, densities drop to a value of order $1\\times 10^{18}$ cm$^{-3}$ and lower as gas begins to mix into the ambient medium.", "At later times clumps are observed to form in the cold slab, though the origin of these is uncertain given that the cold slab is not well resolved at just a few cells ($10^{-3}$ cm) in width.", "Since the jets are identical, the interaction region remains stationary and shows two identical jet shocks; while the shocks exhibit some curvature, no conical structure is observed.", "The shocked lateral outflows flow primarily horizontally even though some material leaks from the top and bottom of the SLOs.", "Figure: From left to right, runs 0, 1, and 2 from table .", "For these runs, the density of the top jet remains fixed while the density of the bottom jet decreases with each successive run.Next, we examine cases in which the density of the bottom jet is varied, while the density of the top jet is fixed at $1.0\\times 10^{18}$ particles per cm$^3$ .", "Figure REF shows jets with densities $3.0\\times 10^{18}, 2.0\\times 10^{18}$ , and $1.0\\times 10^{18}$ particles per cm$^3$ .", "As the density of the jets becomes imbalanced, the interaction region no longer remains in place, with the velocity of the interaction region in this case being proportional to the density imbalance.", "Lateral outflows are seen to be nearly horizontal.", "Clump formation in the cold slab begins much sooner in collisions with a density imbalance; while the details of smaller structures are beyond the scope of this paper, we conjecture that this is the result of asymmetry exacerbating the formation of perturbations within the cold slab.", "Increasing the imbalance further does not significantly change the onset of clump formation.", "These clumps merge with each other as time progresses, though the regions of cold slab between these clumps is once again not well resolved.", "Study of the the long term behaviour of these clumps is also limited by the interaction region hitting the wall after a relatively short amount of time.", "Figure: From left to right, runs 3, 0, and 4 from table .", "For these runs, the density and radius of the top jet remains fixed while the bottom jet both increases in density and decreases in radius with each successive run.Next, we examine cases in which the density of the bottom jet is again varied, this time varying the radius of the bottom jet as $r \\propto \\rho ^{-\\frac{1}{2}}$ so that the total mass flux through a horizontal slice remains fixed.", "Figure REF shows jets with densities $1.69\\times 10^{18}, 3.0\\times 10^{18}$ , and $6.75\\times 10^{18}$ particles per cm$^3$ and radii 2.0, 1.5, and 1.0 mm.", "The jet with the smaller radius, having a higher density, drives the motion of the interaction region.", "As with the previous set, an imbalance of density also 
results in formation of clumps within the cold slab at earlier times, with these clumps again merging into fewer but larger clumps as time progresses.", "Since the jets are of differing radii, the SLOs are no longer horizontal, instead bending away from the jet of larger radius.", "Figure: From left to right, runs 5, 0, and 6 from table .", "For these runs, the radius of the top jet remains fixed while the radius of the bottom jet decreases with each successive run.We next examine cases of equal speed jets for which the radius of the bottom jet is varied at fixed density.", "Figure REF shows jets with radii 2.0, 1.5, and 1.0 mm.", "For equal velocities and equal opposing jet densities, ram pressure is balanced and the interaction region remains at a fixed location.", "As with density imbalances, an imbalance of radius alone appears to be sufficient to promote earlier clump formation, though in the case with a 1.5 mm jet colliding with a 1.0 mm jet the merging of clumps appears to be inhibited.", "Since the speeds and densities are balanced and the interaction region never approaches a wall, these cases can be observed for longer times.", "After material ejected in the initial collision flies away, a lateral outflow remains.", "When the jets differ in size, the direction of the SLO occurs at an angle to the horizontal plane and points away from the larger jet.", "The exact angle increases with the ratio $\\frac{(r_+^2 - r_-^2)}{r_-}$ , where $r_+$ and $r_-$ are the radii of the larger and smaller jet respectively, and is explored in section REF .", "The interaction region also exhibits curvature, bending away from the larger flow.", "Figure: From left to right, runs 7, 8, 9 from table .", "While the previous runs use v=70v = 70 km s -1 ^{-1} for both jets, these runs use jet velocities of 80 and 60 km s -1 ^{-1} for the top and bottom jets respectively.", "The density of the top jet decreases (and the bottom jet increases) with each successive run.Figure: Run 9 from table , at intervals of 200 ns.", "Density is plotted in the left column, while the right column shows a plot of temperature.", "At later times the Nonlinear Thin Shell Instability can be observed.Lastly, we examine effects of changing the jet velocity.", "While previous scenarios used a velocity of 70 km s$^{-1}$ , the final set of runs uses velocities of 80 km s$^{-1}$ for the top jet and 60 km s$^{-1}$ for the bottom jet.", "Figure REF shows three simulations with these velocities.", "The first case shows two jets of equal density; here the imbalance of velocity results in the interaction region moving with a velocity of 0 km s$^{-1}$ after collision.", "The second case shows two jets of equal momentum density (and thus equal mass flux); the velocity of the interaction region is significantly reduced, but motion still occurs as a result of ram pressure imbalance.", "Both of these runs exhibit clumping behaviour similar to that of the runs with a density imbalance.", "The final case shows two jets of equal ram pressure.", "The interaction region now remains stationary.", "While clumping behaviour is initially similar to the other two cases, clump mergers in this case produce a full cold slab spanning the entire radius of the jet.", "At later times (figure REF ), this cold slab is broken apart by the nonlinear thin shell instability, similar to the cases examined in [16]." 
], [ "Motion of the Interaction Region", "As seen in figures REF , REF , and REF , the interaction region has a nonzero velocity unless the ram pressure $\\rho v^2$ is equal for both the top and bottom jet.", "Otherwise, it moves away from the jet with higher ram pressure with velocity $V$ .", "In a reference frame comoving with $V$ , the momentum flux entering the interaction region from one jet must balance the momentum flux entering from the other jet.", "This constraint can be expressed as $ \\rho _1 \\left( v_1 - V \\right)^2 + p_1= \\rho _2 \\left( v_2 - V \\right)^2 + p_2$ where positive velocities indicate the downward direction; $v_2$ is therefore negative.", "We shall also neglect the thermal pressure difference $p_1-p_2$ .", "The requirement that both jets flow into the interaction region produces the constraint $v_2 < V < v_1$ .", "Taking the square root of equation REF , subject to this constraint gives us $\\sqrt{\\rho _1}\\left( v_1 - V \\right) = \\sqrt{\\rho _2} \\left(V - v_2 \\right)$ which can be solved for $V$ : $ V = \\frac{ v_1 \\sqrt{\\rho _1} + v_2 \\sqrt{\\rho _2} }{\\sqrt{\\rho _1}+\\sqrt{\\rho _2}}$ We will now examine this result in the context of figures REF -REF .", "Velocities can be measured as follows: the location of the cold slab, lower jet shock, and upper jet shock are measured at $t=130$ ns and $t=230$ ns.", "The positions at each time step are averaged; taking the difference of these averages between the two time steps gives us the displacement (positive is down) over the 100 ns time interval.", "For runs 7-9, a similar procedure is used except that the measurements are taken at $t=140$ ns and $t=280$ ns.", "Results are given in table REF .", "Table: Densities and velocities of the jets in each run, along with the estimated (equation ) and observed value for the interaction region velocity VV.", "All velocities are given in km s -1 ^{-1} and densities are given in 10 18 10^{18} cm -3 ^{-3}We can connect this prediction to the experiments of [14], in which the jet velocities were measured to be $v_1 = 80\\pm 10$ km s${ }^{-1}$ and $v_2 = -60\\pm 10$ km s${ }^{-1}$ .", "Substantial uncertainties ($\\sim 60\\%$ ) were noted for measurements of density, but in any case the ratio $\\frac{\\rho _1}{\\rho _2}$ was within an order of magnitude of unity.", "If equation REF is instead solved for $\\frac{\\rho _1}{\\rho _2}$ we find $\\frac{\\rho _1}{\\rho _2} = \\left(\\frac{v_2-V}{v_1-V}\\right)^2$ The bow shock velocity is stated to be $40\\pm 10$ km s${ }^{-1}$ , so the density ratio evaluates to $\\frac{\\rho _1}{\\rho _2} = 6.25 \\pm 4.38$ ; this on the higher end of what is in agreement with the measurements presented in [14].", "Another useful quantity to compute for collisions between flows of unequal ram pressure is the Mach number in the centre-of-mass frame.", "In this paper we define Mach number to be $M^2 = \\frac{\\rho v^2}{\\gamma p}$ , and $M_1$ and $M_2$ to be the Mach numbers for top and bottom flows in the laboratory frame.", "When combined with equation REF this definition gives us $M^2 = \\frac{\\rho _1(v_1-V)^2}{\\gamma p_1} = \\frac{\\rho _1}{\\gamma p_1}\\left(v_1-\\frac{v_1\\sqrt{\\rho _1}+v_2\\sqrt{\\rho _2}}{\\sqrt{\\rho _1}+\\sqrt{\\rho _2}}\\right)^2$ for the centre-of-mass frame (note that Mach numbers must be equal when ram pressures balance).", "This simplifies to $M^2 = \\frac{\\rho _1\\rho _2}{\\gamma p_i}\\left(\\frac{v_1-v_2}{\\sqrt{\\rho _1}+\\sqrt{\\rho _2}}\\right)^2$ which, assuming $v_2 < 0 < v_1$ , can be further 
simplified to $ M = \frac{M_1\sqrt{\rho _2}+M_2\sqrt{\rho _1}}{\sqrt{\rho _2}+\sqrt{\rho _1}}$ Note that while $v_2$ is assumed to be negative, we always define the Mach number to be positive.", "Note also that while the interaction region velocity is an average weighted by the square roots of the densities, the Mach number is an average weighted by the square roots of the inverse densities." ], [ "Cold Slab in One Dimension", "We now begin our examination of shocked lateral outflows by considering a scenario with no outflow: the growth of the cold slab for a flow colliding with a wall in one dimension.", "In subsequent sections, these results will be useful in the development of a three-dimensional model.", "Define $\rho _i, v_i$ , and $T_i$ to be the initial pre-shock density, velocity, and temperature; also define $\rho _f, v_f,$ and $T_f$ to be the same values for material in the cold slab.", "Velocities are specified in a “shock\" frame (see figure REF ), in which the front (i.e.", "the side closer to the shock) of the cold slab remains stationary; here we assume that the radiative shock instability can be ignored, so that the distance between the shock and the front of the cold slab remains fixed.", "Any growth of the slab appears in this frame as the far side of the slab moving further away from the shock.", "Figure: Temperature (red) and Density (blue) for a one-dimensional shock in a) the shock frame and b) the lab frame.", "In the shock frame, the shock remains stationary while material in the cold slab flows away from the shock.", "In the lab frame, the shock moves to the left (dashed curve) while material already in the cold slab remains stationary.", "Let $h$ be the height of the cold slab and $m = \rho _f hA$ be the mass of the cold slab within area $A$ .", "Mass continuity tells us that $\frac{\operatorname{d} m}{\operatorname{d} t}$ is equal to the incoming mass flux $\rho _f v_f$ times the area $A$ , so $ \rho _f v_f A = \rho _f A \frac{\operatorname{d} h}{\operatorname{d} t} + Ah\frac{\operatorname{d} \rho _f}{\operatorname{d} t}$ which can be solved for $\frac{\operatorname{d} h}{\operatorname{d} t}$ : $ \frac{\operatorname{d} h}{\operatorname{d} t} = v_f - \frac{h}{\rho _f}\frac{\operatorname{d} \rho _f}{\operatorname{d} t}$ .", "Going forward we shall assume that $\rho _f$ reaches a constant value once $h$ reaches an appreciable length, allowing us to neglect the $\frac{h}{\rho _f}\frac{\operatorname{d} \rho _f}{\operatorname{d} t} $ term; we therefore conclude that the rate of growth of the slab is equal to the velocity of fluid flow within the slab.", "With no way for material to exit the slab, the slab will grow indefinitely.", "We now establish a relation between the preshock and cold slab states.", "Conservation of momentum requires that $\rho _i v_i^2 + p_i = \rho _f v_f^2 + p_f$ Using mass continuity $\rho _i v_i = \rho _f v_f$ gives $v_f\left(v_i^2 + \frac{p_i}{\rho _i}\right) = v_i\left(v_f^2 + \frac{p_f}{\rho _f}\right)$ Under the assumption of an isothermal shock ($\frac{p_f}{\rho _f} = \frac{p_i}{\rho _i}$ ), this has two solutions: $v_f = v_i$ (preshock), and $v_f = \frac{p_i}{\rho _i v_i}$ (cold slab).", "For the latter solution mass continuity then gives $\rho _f = \frac{\rho _i v_i}{v_f} = \frac{\rho _i^2 v_i^2}{p_i}.$ Since the Mach number in the shock frame is defined as $M^2 = \frac{\rho v_i^2}{\gamma p_i}$ , we can express the state of the cold slab as follows; note that the pressure in the cold slab exceeds the immediate 
postshock pressure by a factor of $\\frac{\\gamma +1}{2}$ .", "$ \\frac{\\rho _f}{\\rho _i} = \\frac{p_f}{p_i} = \\frac{v_i}{v_f} = \\gamma M^2.", "$ We can apply this result to estimate the density of the cold slab in our simulations; we will do this for run 0.", "The density of the incoming flow is $3\\times 10^{18}$ cm$^{-3}$ while the incoming pressure is $2.98\\times 10^{5}$ erg cm$^{-3}$ .", "Using a mass of 1 amu, we find a sound speed of 3.15 km s$^{-1}$ , which for a jet velocity of 70 km s$^{-1}$ gives us a Mach number of 22.", "This gives us a cold slab density of $2.4\\times 10^{21}$ cm$^{-3}$ , which is in good agreement with what is observed in our simulations.", "Before moving on to three dimensions, we briefly discuss this problem in the “lab\" frame, which is useful to consider for a flow running into a stationary wall.", "If we return to figure REF , we see that in this frame the far side of the cold slab remains stationary while the shock moves away from the wall.", "We therefore can define $v_i^{\\prime } = v_i - v_f$ and $M^{\\prime } = \\frac{v_i^{\\prime }}{v_i}M$ as the preshock velocity and Mach number in this frame.", "Using this, we find that equation REF becomes $ \\frac{\\rho _f}{\\rho _i} = \\frac{p_f}{p_i} = \\frac{v_i^{\\prime }}{v_f}+1 = \\gamma M^{\\prime 2}\\left(1+\\frac{v_f}{v_i^{\\prime }}\\right)^2$ the solution of which is given by $ \\frac{v_f}{v_i^{\\prime }}= \\sqrt{\\frac{1}{4}+\\frac{1}{\\gamma M^{\\prime 2}} }-\\frac{1}{2}, $ and $ \\frac{\\rho _f}{\\rho _i}= \\gamma M^{\\prime 2} \\left[\\frac{1}{2}+ \\sqrt{\\frac{1}{4}+\\frac{1}{\\gamma M^{\\prime 2}} }\\right] +1.", "$ These expressions can also be derived by considering conservation laws in the lab frame, noting that the mass of any region enclosing the shock increases over time." 
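The algebra above reduces to a few lines of arithmetic. The sketch below is our own illustration, using the parameter values quoted in the text: it evaluates the interaction-region velocity of equation REF, the centre-of-mass Mach number, and the isothermal compression factor gamma M^2 for the reference jet parameters (n = 3e18 cm^-3, p = 2.98e5 erg cm^-3, v = 70 km/s, particle mass 1 amu).

import math

GAMMA = 5.0 / 3.0
M_H = 1.6605e-24                                 # g, mean particle mass of 1 amu

def interaction_velocity(rho1, v1, rho2, v2):
    # V = (v1 sqrt(rho1) + v2 sqrt(rho2)) / (sqrt(rho1) + sqrt(rho2))
    s1, s2 = math.sqrt(rho1), math.sqrt(rho2)
    return (v1 * s1 + v2 * s2) / (s1 + s2)

def com_mach(M1, M2, rho1, rho2):
    # M = (M1 sqrt(rho2) + M2 sqrt(rho1)) / (sqrt(rho2) + sqrt(rho1))
    s1, s2 = math.sqrt(rho1), math.sqrt(rho2)
    return (M1 * s2 + M2 * s1) / (s2 + s1)

n_i, p_i, v_i = 3.0e18, 2.98e5, 70.0e5           # cgs units
c_s = math.sqrt(GAMMA * p_i / (n_i * M_H))       # ~3.15 km/s sound speed
M = v_i / c_s                                    # ~22
print("cold slab density ~", GAMMA * M**2 * n_i, "cm^-3")   # ~2.4e21 cm^-3

# Run 1 of the table: equal 70 km/s jets with densities 3e18 and 2e18 cm^-3.
print("V ~", interaction_velocity(3.0e18, 70.0, 2.0e18, -70.0), "km/s")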
], [ "Cold Slab for Cylindrical flows", "We now wish to extend the model developed in the previous section to a cylindrical model in which gas is permitted to escape laterally.", "Our primary aim in this section is to use mass conservation to estimate the size of the cold slab.", "Although some gas also escapes from the sides of the cooling regions without ever reaching the cold slab, the cooling region accounts for just 10% of the mass of the interaction despite being around 90% of the volume.", "We therefore make the approximation that the mass of the SLO originates entirely from the cold slab material.", "A more accurate calculation could be done by dividing the cooling region into layers and computing the SLO for each layer.", "Let $r_-$ be the radius of the incoming flow (the reason for labelling as minus will become apparent in the next section).", "Assume $h$ is the height of our cylinder, and that mass flows into the cylinder from both the top and bottom and out the sides.", "The outgoing mass flux is equal to $\\rho _h (2\\pi r_0 h)v_\\perp $ , so equations REF and REF become $ \\rho _f v_f (2\\pi r_-^2) -\\rho _hv_\\perp (2\\pi r_- h) = \\rho _f (\\pi r_-^2) \\frac{\\operatorname{d} h}{\\operatorname{d} t} + \\pi r_-^2h\\frac{\\operatorname{d} \\rho _f}{\\operatorname{d} t}$ $ \\frac{\\operatorname{d} h}{\\operatorname{d} t} =2v_f - \\frac{h}{\\rho _f}\\frac{\\operatorname{d} \\rho _f}{\\operatorname{d} t} - \\frac{\\rho _h (2\\pi r_- h)}{\\rho _f(\\pi r_-^2)} v_\\perp .$ In equilibrium this gives an expression for the SLO velocity $ v_\\perp =\\frac{\\rho _fv_f r_-}{\\rho _h h}.$ Figure: A plot of Mach number near the edge of the interaction region, for run 0 at time 660 ns.", "A black contour is drawn at M=1M=1.Let us begin by guessing that $\\rho _h = \\rho _f$ .", "We will also assume that $v_\\perp $ is equal to the sound speed (which must equal that of the incoming flows).", "This is justifiable both theoretically [15] and by observation of our simulations (see figure REF ).", "We therefore find $\\frac{v_i}{M} = v_f \\frac{r_-}{h}$ Since the slab has no net growth we can use equation REF for $\\frac{v_f}{v_i}$ .", "Solving for $\\frac{h}{r_-}$ therefore gives us $ \\frac{h}{r_-} = \\frac{1}{\\gamma M}$ The actual ratio may be slightly smaller as a result of aforementioned small amount of gas leaking laterally out of the cooling region.", "The jets in our simulations have $\\gamma = \\frac{5}{3}$ and $M\\sim 20$ so this factor should be approximately 0.03.", "For a jet of radius 1.5 mm the size of the cold slab would be just 0.045 mm, which is comparable to what we observe.", "We must note however that such a length corresponds to less than 4 cells on the grid at the highest level of refinement, so numerical errors are likely to be significant." 
], [ "Deflection Angle for Jets with Different Radii", "Finally, we shall examine the deflection angle for SLOs produced by jets with different radii.", "We will assume that the jets are of equal density and velocity.", "Under these assumptions, the deflection angle can be approximated as the collision between the SLO of a system of identical jets with radius $r_-$ (the radius of the smaller jet), and an incoming flow from the larger jet between $r_-$ and $r_+$ (the radius of the larger jet).", "Define angle $\\theta $ to be the angle of deflection, relative to the surface normal to the incoming flows of density $\\rho _i$ , speed, $v_i$ , and pressure $p_i$ (see figure REF ).", "Define area $A_\\text{SLO}\\operatorname{d}\\varphi $ as the area of some surface which is normal to the SLO and covers the extent of the SLO through angle $\\operatorname{d}\\varphi $ about the jet axis.", "Figure: A diagram discussing the geometry of the problem for determining properties of the SLO for multiple jets.Let the SLO be characterized by $\\rho _\\text{SLO}$ , $v_\\text{SLO}$ , and $p_\\text{SLO}$ ; note that the $\\text{SLO}$ subscript denotes material in the deflected SLO, while an $h$ subscript describes a horizontal SLO.", "The outgoing horizontal and vertical momentum fluxes must therefore be $F_x = (\\rho _\\text{SLO}v_\\text{SLO}^2+p_\\text{SLO})A_\\text{SLO}\\cos \\theta \\operatorname{d}\\varphi $ $F_y = (\\rho _\\text{SLO}v_\\text{SLO}^2+p_\\text{SLO})A_\\text{SLO}\\sin \\theta \\operatorname{d}\\varphi $ We now express the horizontal momentum flux leaving the interaction region in terns if the incoming quantities.", "The SLO from the collision within the inner radius can be characterized as having density $\\rho _h = \\rho _f = \\gamma M^2 \\rho _i$ and velocity $v_\\perp = \\sqrt{\\frac{\\gamma p_i}{\\rho _i}}$ (see equation REF and the discussion in section REF ).", "The pressure $p_h$ can be estimated using conservation of energy within the cold slab (which features no cooling).", "$\\left[\\frac{\\gamma p_f v_f}{\\gamma -1}+\\frac{\\rho _fv_f^3}{2}\\right](2\\pi r_-^2) = \\left[\\frac{\\gamma p_h v_\\perp }{\\gamma -1}+\\frac{\\rho _hv_\\perp ^3}{2}\\right](2\\pi r_-h)$ Note that we use $v_f$ (the state of material entering the cold slab) instead of $v_i$ as we are only considering energy flux in and out of the cold slab rather than the interaction.", "Expressing $\\rho _h$ and $v_\\perp $ in terms of $\\rho _f, p_f$ , and $M$ allows us to solve for $p_h$ $p_h = \\left[\\frac{3-\\gamma }{2} +\\frac{\\gamma -1}{2\\gamma ^2M^2}\\right] p_f$ Therefore in the limit of high Mach number, the horizontal momentum flux produced by the cold slab is equal to $ F_x^h = (\\rho _h v_\\perp ^2 + p_h) (r_- h \\operatorname{d}\\varphi ) \\approx Mp_i \\left(\\frac{3+\\gamma }{2}\\right) (r_-^2 \\operatorname{d}\\varphi )$ While neglecting the cooling region is sufficient when considering mass flux, the high pressures found in the cooling region also contribute significantly to the momentum flux.", "We can however continue to ignore the kinetic term $\\rho _\\text{cr} v_\\perp ^2$ as this term approaches a constant in the limit of high Mach number and thus accounts for less than one percent of the momentum flux for collisions with $M\\sim 20$ .", "We shall approxmiate the pressure within the cooling region to be equal to the immediate postshock pressure; this pressure within the cooling region should exceed this, but by a factor of less than $\\frac{\\gamma +1}{2}$ (assuming pressure in the cooling 
region never exceeds that of the cold slab).", "We therefore conclude that a lower bound for the contribution of the cooling region to the momentum flux is given by $ F_x^L = p_\\text{ps} (2r_- L \\operatorname{d}\\varphi ) = \\frac{4\\gamma M^2 \\xi }{\\gamma +1} p_i(r_-^2 \\operatorname{d}\\varphi )$ where $L$ is the (vertical) size of the cooling region and $\\xi = \\frac{L}{r_-}$ .", "In [16] we showed how to calculate $L$ for a power-law cooling function as an improvement of the approximation given in [29], and a similar approach can be used for a more complex function.", "While performing such a calculation is necessary if one wishes to predict the outflow angle without running a simulation, when comparing analytical predictions to simulation results we will instead take the approach of measuring $L$ from the simulation results.", "In the vertical direction, the incoming jet flux is $ F_y = (\\rho _iv_i^2+p_i)\\left(\\frac{r_+^2 - r_-^2}{2} \\operatorname{d}\\varphi \\right) = (\\gamma M^2+1)p_i\\left(\\frac{r_+^2 - r_-^2}{2}\\right) \\operatorname{d}\\varphi $ Using equations REF we can deduce $\\tan \\theta = \\frac{F_y}{F_x^h+F_x^L}$ ; combining this with equations REF - REF allows us to estimate the deflection angle: $\\tan \\theta = \\left[\\frac{(\\gamma +1)(\\gamma M^2 +1)}{8\\gamma \\xi M^2 + (\\gamma +1)(\\gamma +3)M}\\right] \\frac{r_+^2 - r_-^2}{r_-^2}$ We now compare equation REF to our simulations.", "Using $\\gamma =\\frac{5}{3}$ , $M=20$ , $L = 0.4$ mm, $r_- = 1.0$ mm and $r_+ = 1.5$ mm, we find $\\tan \\theta = 0.93$ or $\\theta = 43^\\circ $ ; this is confirmed by the 600 ns frame of figure REF which shows a deflection angle of $\\theta = 41^\\circ $ .", "Meanwhile if we use $r_- = 1.5$ mm and $r_+ = 2.0$ mm we find $\\tan \\theta = 0.73$ or $\\theta = 39^\\circ $ , with the measured angle being $36^\\circ $ .", "To further test equation REF , we ran an additional set of simulations.", "These simulations were similar to run 6 (see figure REF ), but the radius of the top jet varied between values of 1.0 mm, 1.5 mm, 2.0 mm, and 2.5 mm; these runs were also conducted at lower resolution than the other runs.", "Using the 1.0 mm case we estimate $\\xi = 0.3$ , and for each subsequent run we took four measurements of deflection angle, shown in magenta in figure REF .", "We find that equation REF was very accurate for the 1.5 mm case, but sorely overestimates for $\\frac{r_+}{r_-} \\gtrsim 2$ .", "An improved result may be obtained by considering the curvature in the region between $r_-$ and $r_+$ , as equation REF is best suited to the limit where that region is small.", "Figure: (top) A set of simulations used to further test equation .", "(bottom) Deflection angles vs the ratio of radii.Each red cross corresponds to a magenta line in the simulations, while the black curve is the estimated deflection." 
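The deflection-angle estimate above is straightforward to evaluate directly. The short sketch below is our own illustration; it reproduces the first worked example quoted in the text (gamma = 5/3, M = 20, L = 0.4 mm, r_- = 1.0 mm, r_+ = 1.5 mm, giving tan(theta) of about 0.93 and theta of about 43 degrees).

import math

def tan_deflection(gamma, M, L, r_minus, r_plus):
    # tan(theta) = [(gamma+1)(gamma M^2 + 1) / (8 gamma xi M^2 + (gamma+1)(gamma+3) M)]
    #              * (r_+^2 - r_-^2) / r_-^2, with xi = L / r_-
    xi = L / r_minus
    prefactor = ((gamma + 1.0) * (gamma * M**2 + 1.0)) / (
        8.0 * gamma * xi * M**2 + (gamma + 1.0) * (gamma + 3.0) * M)
    return prefactor * (r_plus**2 - r_minus**2) / r_minus**2

t = tan_deflection(5.0 / 3.0, 20.0, 0.4, 1.0, 1.5)
print(t, math.degrees(math.atan(t)))   # ~0.93 and ~43 degrees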
], [ "Conclusions", "In this paper we have continued our studies of colliding flows.", "While in [16] we used identical flows and an analytic cooling model to isolate the effects of instabilities, in this paper we used a more realistic cooling function and variation of flow parameters; this was done in part to bring our simulations closer to the experiments of [14] while still remaining the hydrodynamic regime.", "Our simulations suggest explanations for two important features of the bow shock.", "First, for flows of differing density or velocity, the interaction region (and thus a shock which bounds it) moves at a speed given by equation REF .", "Second, we find that the conical shape of the bow shock arises from a mismatch in the cross-sectional radii of the flows, with the shocked lateral outflows being deflected away from the jet of larger radius.", "All of the simulations in these first two papers have been conducted in the hydrodynamic limit.", "While this is useful for isolating various aspects of the underlying physics, magnetic fields are likely to be significant in both laboratory and astrophysical flows, and are thus a worthwhile inclusion in future papers on the subject.", "Effects relating to optical depth may also be worth exploring, particularly in connection to laboratory contexts.", "This work used the computational and visualization resources in the Center for Integrated Research Computing (CIRC) at the University of Rochester.", "Financial support for this project was provided in part by the Department of Energy grants GR523126, DE-SC0001063, and DE-SC0020432, the National Science Foundation grant GR506177, and the Space Telescope Science Institute grant GR528562.", "Additional funding for this research was provided by the Center for Matter at Atomic Pressures (CMAP), a National Science Foundation (NSF) Physics Frontiers Center, under Award PHY-2020249.", "Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation." ], [ "Data Availability Statement", "The data that support the findings of this study are available from the corresponding author upon reasonable request.", "The authors have no conflicts to disclose." ] ]
2212.05631